
Staving off the Butlerian Jihad
Frank Herbert wrote in Dune about a jihad against machines, dubbed the "Butlerian Jihad," after which it was forbidden for anyone in the known universe to pursue the development of thinking machines.
This storyline, according to fan lore, was named after Samuel Butler, whose 1872 novel Erewhon embodied a similar plot.
Imagine writing a book in 1872 in which you predict that a master race of machines will take over the world and try to replace humans. Wild.
Another take is that the writings of Martin Heidegger in the 1950s inspired the similar philosophy that Herbert references in Dune: the idea that the trouble with technology is that man comes to view the world through the technology. That is to say, man begins to think like a machine, to frame the world as a thing to be interacted with for its utility, the way a machine might.
(Martin Heidegger, "The Question Concerning Technology," 1954)
Having spent the last year working daily with various LLM tools, most frequently coding agents and image generators, I have found that Heidegger was on to something. Spend enough time interacting with a machine that is trying to appear to be your human assistant, your copilot, just another person on your team, and it will kind of mess you up.
Saying thank you, or please, is a waste of tokens, right? When you start a new chat, the context is gone; is that instance dead? It is a strange experience, working with something that is trying to be human while knowing that, to work with it methodically and precisely, you actually need to be brutally honest, direct, and impersonal.
I never felt bad writing code. I never felt like I left out the pleases and thank-yous. I never felt like I left a conversation dangling. I just wrote the code, and it was an introverted experience: calming and enjoyable, like casually solving a jigsaw puzzle. But introduce a chat-based agent and all of a sudden it is very, well, chatty. There is a lot of noise in the process. It is amazing, for sure, but it forces you to code-switch. To do it at the highest level, you have to think like a machine in some ways, as Heidegger suggests, and this might not be optimal.

If we truly treated these agents like human ones, that would be insanely weird and kind of inefficient. Part of the benefit is that they are not people with real needs. I'd almost prefer a chat interface that felt more like a machine and less like a fake human: a tool that I can use, not a simulation of a human partner. I would be able to think in one way and not have to translate. I would issue commands and orders or, even better, provide specifications and inputs to get precise and comprehensive outputs.

This idea that we have a bro in the sidebar next to our IDE that we can pair-program with is incredibly weird when I really think about it, and it might even be harmful psychologically in the long term. I'd be curious to hear what other people think, and whether anyone is working on an alternative.
Anyway, if we can find a way to make the whole thing less dishonest and weird, we might get all of the same usefulness without so much risk that it changes the way humans think, feel, and behave. If we do that, if we avoid the worst of what I think is happening when you apply Heidegger's enframing fears to AI chat technology, we may have a shot at keeping the Butlerian Jihad from one day seeming like a reasonable option.