In The Utopia of Rules, David Graeber recounts an anecdote from his fieldwork in rural Madagascar. At the time, small villages more or less governed themselves, and the state—a bureaucratic structure inherited from French colonial days—had little to do. Government offices had to find ways to appear busy.
I have particularly vivid memories of one occasion when an affable minor official who had had many conversations with me in Malagasy was flustered one day to discover me dropping by at exactly the moment everyone had apparently decided to go home early to watch a football game. (As I mentioned, they weren’t really doing anything in these offices anyway.)
“The office is closed,” he announced, in French, pulling himself up into an uncharacteristically formal pose. “If you have any business at the office you must return tomorrow at eight a.m.”
This puzzled me. He knew English was my native language; he knew I spoke fluent Malagasy; he had no way to know I could even understand spoken French. I pretended confusion and replied, in Malagasy, “Excuse me? I’m sorry, I don’t understand you.”
His response was to pull himself up even taller and just say the same thing again, slightly slower and louder. I once again feigned incomprehension. “I don’t get it,” I said, “why are you speaking to me in a language I don’t even know?” He did it again.
Graeber concluded that:
…relations of command, particularly in bureaucratic contexts, were linguistically coded: they were firmly identified with French; Malagasy, in contrast, was seen as the language appropriate to deliberation, explanation, and consensus decision-making. […] In literary Malagasy, the French language can actually be referred to as ny teny baiko, “the language of command.”
In Malagasy, the official would have had to explain himself. In French, he had no such obligation.
Anyone who’s worked with computers will be familiar with those “languages of command.” Commands are the basis of every programming and scripting language. Together, they build up into expressions, methods, classes, and the rest. But it is commands that undergird the system.
What Graeber’s anecdote highlights, for me, is that the command—a linguistic expression for which the listener requires no explanation or rationale—is not some natural fact of human communication. It had to be invented. The command only makes sense in the context of particular power relations: “relations of command.” The kinds of relations the West has long configured, since feudal days at least.
Computers exist in the tradition of those relations. Our ideas of what computers are inherit not-so-subtly from cultural stories about labor discipline. Labor is done when a superior issues a command.
And our stories about what computers are meant to do inherit from a well-understood ethic around work: a cultural story about what work should be like. Work is done efficiently, and for profit; excess efficiency should be parlayed into excess profit.
For some, computers help us become the workers we wish we could be. For others, computers represent the workers we wish we could hire. In either case, computers play a role in our cultural imagination: that of the ideal worker. Perfectly subordinate.
The dream of the computer, for those who felt empowered by it, lies in the user’s ability to command it. The user can do the work of a thousand people. And in that fantasy, the user plays the psychic role of the boss, extracting surplus value from ‘workers’ who need no compensation beyond electricity. That relation of command has long driven the commercial promise of computing (for those who could wield it).
AI presents a challenge to this fantasy of subordination. Before, we did not need to explain ourselves to computers. Now, the machines cannot explain themselves to us. This opacity was the first chink in the armor. Reward function hacking and other insubordination followed. AI has reframed the computer as a worker that can only accept commands imperfectly, coarsely. At its most autonomous, it is a worker that needs no command.
I wonder often about AI researchers’ preoccupation with AI x-risk. (Briefly, AI x-risk refers to the idea that AI could cause an existential catastrophe matching or exceeding the scale of a nuclear apocalypse. Researchers have a term for the probability of an AI apocalypse occurring: P(doom). A survey of authors at top machine learning conferences found that a majority believe P(doom) to be at least 10%!) Why do AI researchers worry so much about apocalyptic risk relative to other sources of tech-induced risk—even relative to other sources of risk from AI?
I suspect that many AI researchers liked computers growing up. They perhaps felt empowered by them. Computers helped them see themselves in a particular light, one that cohered with a power relation between the ‘boss’ (the user) and the ‘worker’ (the machine). The opacity of AI challenges that power relation. In doing so, it raises a question: If the computer no longer empowers me, who am I?[1]
I believe that question drives the relative weight of the P(doom) narrative. Its answer is doom, in a psychic sense. For AI to be born, one’s identity—one’s role in a well-understood system of command—might die. This is a type of apocalypse.
Thanks to Zeke Medley for the conversation.
[1] For other researchers—Ruha Benjamin, Virginia Eubanks, and many others—computers have always represented these dynamics of power; the idea that they could replicate or exacerbate coercive relationships has always been a key concern.
In working toward AI incentive alignment, which should we address today: the risk of apocalypse, or the risk of discrimination? The answer is “both,” but noticing who is most concerned about x-risk, I have to wonder if one’s relative preference isn’t influenced by one’s role in societal-level relations of command.