Playing God:
Human-Robot Boundaries in Ex Machina
If there were ever a movie that brought three decades of a robotic psychiatrist’s thoughts, concerns, and visions to present-day reality, Ex Machina (“Playing
God”) was it. At the end of the
movie, I turned to my husband —
hoping that he now understood my
work and passion of the past 25
years — and confidently asked:
“What did you think?”
“It was slow,” he replied. There
you have it. I thought Ex Machina
was an epic movie that generations
of humans (and machines) would be
referring to for centuries to come,
and my husband thought it was too
slow. Opposites do attract, I
suppose. Ex Machina proves that this
axiom holds true for biological and
non-biological organisms.
Brilliantly written, researched,
and executed, Ex Machina strikingly
reveals the dangers posed by human
weakness — particularly the
emotional struggles provoked by
human sexuality and, in this case, the
passions of heterosexual males who
may be lured and manipulated by the
sexiness and vulnerability of smart
Fembots.
This situation is brought to life by
Caleb, the protagonist and an
objective young programming whiz.
When called upon to determine if
Ava — the most intelligent and
sophisticated machine of the times —
is “conscious,” both he and the
viewer fall victim to her charm,
helplessness, and manipulative
capability.
While watching the movie,
questions were skillfully addressed
as quickly as they arose in my mind. Nathan —
an extraordinarily successful Internet
entrepreneur and the genius who
invented Ava — seems to always have
rational responses to assuage Caleb’s
doubts. Right from the beginning,
extremely thought-provoking
discussions take place.
Caleb points out that in the
standard Turing Test, the machine
should be hidden from the examiner.
Nathan replies that the machine
purposefully was not hidden from
view in order for Caleb to determine
if the robot has consciousness.
Even today, the Turing Test is a
subjective test. The Loebner Prize for
artificial intelligence —
inaugurated in 1991 — “is the first
formal instantiation of a Turing
Test.”
The human judges
simultaneously hold textual
conversations with a chatbot and
another human being via computer.
Based on the responses, the judges
decide which is which, and the
award goes to the most human-like
computer program.
The subjectivity in deciding if a
chatbot is “human-like” or if a robot
is “conscious” comes down to
human perception until there is an
accepted scientific definition of
artificial intelligence and machine
consciousness that can be proven
or disproven.
Thus, for now and possibly into
the future, whether a machine is
intelligent will depend upon whether
we humans are “fooled” by it.
In the case of Caleb, the test is
no longer constrained to natural
language processing via a keyboard;
it extends to direct interaction and
visual observation. There are now
more human-like gestures and cues
such as eye contact, facial
expressions, and body movement
that can make a machine appear to
be conscious. Furthermore, there is
an inverse relationship between this
interaction and Caleb’s objectivity:
the more Caleb interacts with Ava,
the less objective he becomes.
This begins as early as
Session 2, when Caleb describes how
his parents died.
Ava displays marked emotional
responses, remaining quiet and
totally attentive, her staring, sad-looking eyes fixed on Caleb’s face.
Caleb’s opinion regarding whether
Ava is “conscious” is evident when
he no longer refers to Ava as “it.”
Nathan is not just anticipating
Caleb’s concerns and reactions
before each session; he is also
meticulously (or is the word
“maliciously” more appropriate?) and
subliminally playing on naïve Caleb’s
heterosexual interests. In Session 2,
Nathan asks Caleb how Ava feels
about him.