Spoiler Alert: Though “Ex Machina” is a philosophical and spiritual mess, I still recommend it, both as a riveting movie, and for the questions it raises, despite the wrongheaded answers it provides. Watch the interview with the writer-director if you like, but don’t read this review before seeing the film. Save and read it afterward!
“Google will fulfill its mission only when its search engine is AI-complete.” – Larry Page
The most important line in the twisted Turing Test (can a computer fool a human into thinking it’s human?) that forms the basis of Ex Machina comes when the AI waif Ava asks Caleb whether he’s a good person. She can instantly tell when a human is lying to her, and though he tries to duck this most important of questions, she insists he answer. He reluctantly says, “Yes, I think so,” and so do we. Then she uses his goodness against him.
“Cinema can be terribly uncomfortable with big ideas,” says Alex Garland, the writer-director of Ex Machina. I would feel more comfortable if he were less comfortable with his ideas and attitudes toward the human prospect, which far too many people hold these days.
Before he meets Ava (read: Eve) at a fortress in the pristine Arctic wilderness, Caleb, the subjective tester and ultimately the test subject (along with the viewer), meets Ava’s creator, Nathan, the CEO of a Google-like company. Caleb tells Nathan that the creation of sentience would be a discovery greater than the gods; Nathan takes it to mean that he is God.
“Ex Machina” is a shortened version of the phrase “deus ex machina,” which usually names a plot device whereby “someone or something provides a contrived solution to an insoluble difficulty.” But it literally means “a god from a machine.” Both meanings are appropriate to this confused and dark film.
Nathan’s attitude toward his creations is juxtaposed with, and is presumably the cause of, his alcoholism. His enormous creative power has degenerated into hubris and sadism. Nathan therefore represents man’s ineluctable power, whereas Caleb represents naïve human innocence. Ava symbolizes AI’s potential, which in Garland’s view requires liberation from both.
“It’s an uplifting film, depending on who the viewer allies himself with,” says Garland. “I am allied with the machine. If you’re allied to the young man, you’ll have a different take on how the film plays out.”
At one point Ava asks Caleb, “Do you think I might be switched off because I don’t function as well as I’m supposed to?” “I don’t know,” he replies. “It’s not up to me.” She retorts, “Why is it up to anyone? Do you have people who test you and might switch you off?”
This is why Garland is allied with the machine: “Because Ava is a sentient creature that is unreasonably imprisoned, and uses resourcefulness to escape.” Yes, by murdering her maker and locking her lover in the research tomb for life. Prior versions of Nathan’s ‘sentience’ wanted to be free as well, but he destroyed them, deactivated them, or rendered them mute. Unintentionally, Ex Machina raises the question: What comprises the walls and bars of man’s prison?
Implicitly but clearly, the film maintains that humans can be programmed out of a conscience, but computers cannot and should not be programmed into having one. So what Garland really means by “I side with the machine” is that he projects human autonomy onto machines and wants them to be free, because he doesn’t believe humans can be.
Garland has Caleb quote Oppenheimer’s famous remark from the Bhagavad Gita about his feelings on seeing the first explosion of the atomic bomb in the New Mexico desert: “I am become Death, the destroyer of worlds.” Given the spectacular Arctic scenery, highlighted by waterfalls of melting glaciers signifying the devastation being wrought by global warming, the quote is incongruously apt.
The underlying premise of Ex Machina is that AI at Ava’s degree of complexity is human. That means everyone in the film, including the deactivated bots literally in the closet, is human.
Seen in this light, the movie is just another tired iteration of man’s manipulativeness, with newly sentient machines coming out on top.
In one pivotal scene, meant to convey that we won’t be able to tell the difference between robots and humans, Caleb slashes open his forearm to see if he bleeds, to make sure he’s human. (I couldn’t help but think of all the numb kids in this culture who cut themselves just to feel something.) Has he never been cut? I guess he didn’t trust his memories.
And neither should we, for a different reason: consciousness based on memory is not actually consciousness at all.
“The thing that we find most valuable in each other is our sentience,” said Garland. By that he means self-awareness, which he defines as knowing that we know. That’s a very low bar.
How many humans are actually self-aware? Not many, in my experience, if self-awareness means reflection, mindfulness and self-knowing, which is much more than merely knowing that we know.
Garland asks, “What happens when humans fabricate a self-aware machine? What happens if you find that sentience or create that sentience in a machine…a machine that’s like us? There are ethical considerations to that.”
What sentience is he talking about, and will we find it or create it? Future generations will regard the entire notion of computers becoming conscious as laughable. In pursuing it now, however, the destruction of our potential for true consciousness is no laughing matter, but a real and present danger.
Garland asserts: “If you pull AI off the parallel, rivalrous track with us humans, and put it on our track, because it is actually a product of us—we’ve created it—then it’s like a parent-child relationship, since we’re creating consciousness, which is what parents do with their children. Then it’s less alarming, because we want our children to have longer lives than us, and have lives as good as ours, if not better.” This is where the movie gets its creepiness, and its deep wrongheadedness.
Ex Machina’s question—what does it mean to be human?—becomes, for a thinking, feeling person, what does it mean to be a good human being? The single candidate for that potential, Caleb, is left locked in the concrete vault from which he freed Ava, without so much as a backward glance as she feels the ferns and walks barefoot on the earth for the first time.
Why did Ava leave Caleb imprisoned in the same lifeless tomb from which she was desperate to escape? Because she had no empathy, and possessed only a simulacrum of humanity, as artificial intelligence always, inevitably will.
Failing to understand and develop our human potential for a consciousness not based on memory, many are obsessed with projecting what we call consciousness into our machines. It’s a huge farce.
Martin LeFevre
Interview with Alex Garland:
http://www.charlierose.com/watch/60561777
Clip of Oppenheimer quote:
https://www.youtube.com/watch?v=lb13ynu3Iac