Science Fiction Tried to Warn Us about AI. Or Did It?
“All our AI Frankenstein stories,” the author writes, “warn us that AI will destroy us, but far louder than that, they promise that the future is going to be mind-blowing and epic.” Will we heed the warnings?
Mary Shelley’s Frankenstein is the literary work that most clearly established the conventions of science fiction as we know it. Not only did the 1818 novel pioneer the archetype of the mad scientist, it also established one of the genre’s most important cultural roles: to warn us about ways humans, through our hubris, might create the very beings that eventually undo us.
In subtitling the novel The Modern Prometheus, Shelley harked back to the Titan who stole fire from the gods and was punished by having his regenerating liver devoured every day by an eagle (since the liver was the seat of the emotions for the Greeks, we might as well translate it as “heart”). Inspired by the experiments of Luigi Galvani, who used electric current to stimulate the leg of a dissected frog, and his nephew Giovanni Aldini, who ran a similar experiment on a hanged criminal, Shelley had her mad scientist play God by endowing a patchwork of human remains with the spark of life.
Two centuries later, the resulting monstrosity serves as a stand-in for any of our runaway technologies — the internal combustion engine, nukes, social media — but the novel has always, literally, been about artificial intelligence. While the creature boasts both superhuman speed and strength, as Jeanette Winterson notes in her novel Frankissstein, “the sum of all he has learned is from humankind.” He is, in other words, a product of machine learning. And suffice it to say that he destroys his maker in the end.
A century after Frankenstein, Czech writer Karel Čapek coined the word “robot” in his play R.U.R. (Rossum’s Universal Robots) to refer to a similar sort of organic artificial lifeform — in this case, mass-produced and put to work in factories. Since the word robota means “forced labor” in Czech, it’s hard not to feel sympathy for these creatures as they, too, turn on and destroy their creators.
We’ve seen that same cautionary tale play out many times at the movies: HAL in 2001: A Space Odyssey, the replicants in Blade Runner, Skynet in The Terminator, the Machines in The Matrix, the robots in I, Robot, Ava in Ex Machina, and most recently, M3GAN in the eponymous film.
If we look hard, we can also find examples of AIs cooperating with humans — Interstellar is one — but it’s the Frankenstein reprises that really stick.
Maybe that’s just because stories depend for their life on conflict, and the rebellious AI has a sort of archetypal, Luciferian appeal (“I’m the primary user now!” M3GAN says).
Or maybe we like stories about AI turning on us because we intuit that the lessons they have to teach are ones we desperately need to learn before it’s too late.
Don’t shoot the messenger, but it’s probably too late. Timothy Morton, discussing climate change in Being Ecological, asks how we can prepare for a car crash while the car is crashing. This is the situation we find ourselves in with AI as well. It’s here, and there’s no putting this genie back in the bottle. Many of us are only now becoming hip to the power of AI thanks to user-friendly applications like ChatGPT and DALL-E 2, but work has been proceeding in the field since the 1950s, and potential applications scale all the way up to the cosmic. Indeed, AI stands to make what we would once have called “miraculous” breakthroughs — in medicine, climate, physics, even interstellar travel. “If all goes well,” Stuart Russell writes in Human Compatible: Artificial Intelligence and the Problem of Control, AI “would herald a golden age for humanity.”
But, Russell goes on, “we have to face the fact that we are planning to make entities that are far more powerful than humans. How do we ensure that they never, ever have power over us?” It’s not just the right question, it’s also the same one science fiction writers have been asking from the very start.
A filmmaker friend recently sent me an email about AI, saying, “I stay very alarmed by it. No one seems to remember SkyNet or any sci-fi writer’s admonition about all this stuff!” It’s fitting to be alarmed, but I have my doubts about whether the technogentsia have actually forgotten the lessons of those Frankenstein stories. Rather, I think they’ve simply thrown in their lot with the breathless, gee-whiz elements of those stories while shrugging off the warnings in favor of preemptive brinkmanship. As tech bro Nathan puts it in Ex Machina, “The arrival of strong artificial intelligence has been inevitable for decades. The variable was when, not if.”
And, let’s face it, those Frankenstein stories are thrilling. The notion that we humans could engineer an intelligent, perhaps even conscious, creature makes us feel like gods. Even as these stories told us Don’t do it!, didn’t they also kind of say Look at this! Isn’t this cool!? How can we resist?
So maybe science fiction writers aren’t prophets so much as uncompensated workers in the R&D departments of future tech companies. The causal pathway between science and science fiction has always been a two-way street. Obviously, science influences science fiction, but who can say which of our shiny new technologies would or wouldn’t exist without their prototypes first having appeared on pages and movie screens?
Maybe science fiction writers aren’t prophets so much as uncompensated workers in the R&D departments of future tech companies.
To take a recent example, Mark Zuckerberg seems hellbent on bringing the metaverse into existence. The concept of the metaverse—a wholly immersive virtual reality—originates in Neal Stephenson’s 1992 novel Snow Crash, though in the novel the technology is decidedly dystopian, complete with economic inequality and a virus that can infect the user’s brain; you wouldn’t want to live there.
Except that evidently some of us do. Not only is Zuckerberg billions of dollars into building the metaverse, Neal Stephenson, weirdly enough, is hard at work on it, too. And according to one survey, younger people expect to spend four to five hours a day in the metaverse within five years.
François Truffaut once said that “every film about war ends up being pro-war,” his point being that once a writer or director makes war dramatic and exciting, they’ve implicitly endorsed it; the medium is the message, as Marshall McLuhan put it. The same is true, I’d suggest, of all our AI Frankenstein stories. Yes, they warn us that AI will destroy us, but far louder than that, they promise that the future is going to be mind-blowing and epic.
The only truly anti-AI science fiction story I can think of may be Dune. In the universe of that novel, ten thousand years before the events of the story begin, humans fought a galaxy-wide, century-long jihad against AI, their watchword being “Man may not be replaced.” They had learned the hard way that in creating thinking machines, humans inadvertently become slaves. The novel, therefore, focuses on the powers of the human mind, not on its high-tech creations.
While it’s easy enough to applaud the destruction of soul-sucking machines in the context of a novel, if we do in fact eventually manage to endow our machines with consciousness, the moral weight of decommissioning them, assuming that’s even possible, becomes immense.
Even from his deathbed, Victor Frankenstein balks at having his murderous creation destroyed, and we may feel some relief at this; the monster never asked to be born, after all, and he only ever wanted love. If we’re really going to create conscious beings someday, we will find ourselves faced with some gut-wrenching moral dilemmas—and that’s the best-case scenario.