Mind / Iron
by Bryan Bergeron, Editor
Published Monthly By
T & L Publications, Inc.
430 Princeland Ct., Corona, CA 92879-1300
(951) 371-8497
FAX (951) 371-3052
Webstore Only 1-800-783-4624
www.servomagazine.com
Subscriptions
Toll Free 1-877-525-2539
Outside US 1-818-487-4545
P.O. Box 15277, N. Hollywood, CA 91615
PUBLISHER
Larry Lemieux
publisher@servomagazine.com
ASSOCIATE PUBLISHER/
ADVERTISING SALES
Robin Lemieux
robin@servomagazine.com
EDITOR
Bryan Bergeron
techedit-servo@yahoo.com
VP of OPERATIONS
Vern Graner
vern@servomagazine.com
CONTRIBUTING EDITORS
Tom Carroll
Kevin Berry
R. Steven Rainwater
Michael Simpson
Steve Koci
John Leeman
Jeff Eckert
Jenn Eckert
Steven Nelson
Holden Berry
Chris Savage
Mark Elam
Henry Aird
CIRCULATION DEPARTMENT
subscribe@servomagazine.com
WEBSTORE MARKETING
COVER GRAPHICS
Brian Kirkpatrick
sales@servomagazine.com
WEBSTORE MANAGER/
PRODUCTION
Sean Lemieux
sean@servomagazine.com
ADMINISTRATIVE STAFF
Re Gandara
Copyright 2016 by
T & L Publications, Inc.
All Rights Reserved
All advertising is subject to publisher’s approval.
We are not responsible for mistakes, misprints,
or typographical errors. SERVO Magazine assumes
no responsibility for the availability or condition of
advertised items or for the honesty of the
advertiser. The publisher makes no claims for the
legality of any item advertised in SERVO. This is the
sole responsibility of the advertiser. Advertisers and
their agencies agree to indemnify and protect the
publisher from any and all claims, action, or expense
arising from advertising placed in SERVO. Please send
all editorial correspondence, UPS, overnight mail,
and artwork to: 430 Princeland Court, Corona,
CA 92879.
SERVO FOR THE ROBOT INNOVATOR
6 SERVO 09.2016

The Serious Side of Robotics
Two unrelated events that occurred this
summer involving robotic systems point
to the serious side of robotics. The
first is the traffic fatality
associated with the semi-autonomous
self-driving feature of a Tesla. The
second event is the use of a bomb-carrying robot to kill a civilian
suspected of killing several policemen
in Dallas. These two events — both
the first of their kind — clearly fall
within the purview of what robotics is
designed to handle: the dull, dirty,
and dangerous.
At the time of this writing, the
blame for the driver fatality hasn’t
been placed on either the driver or
the Tesla, and that isn’t the point.
What matters is how the accident
foreshadows our increasing
dependence on and interaction with
automated systems. At some point in
the future, fatalities associated with
automated trains, planes, and cars
will no longer make front-page news.
While hopefully rare, these events will
simply be expected because
machines, programmers, and human
users aren’t perfect. Even if they
were, there’s no avoiding someone
intent on causing an incident. As far
as I know, a smart car capable of
avoiding a human driver intent on
causing a head-on collision has yet to
be developed, or even contemplated.
The use of bomb-carrying robots
is certainly nothing new in the
military. Guided missiles of all sorts
have been around for decades.
What’s new is their use against a
civilian on US soil, and that makes the event
more real — at least to me. I can’t
help thinking of Skynet in the
Terminator movies or of the RoboCop
series. I’m not saying that robotics
shouldn’t be used to save lives in a
situation like that in Texas, just that
the implications for future armed
robotic systems deployed domestically
are worth considering. For example,
I’m all for developing drones to
support our troops overseas, but I’m
not ready for armed drones to be
circling overhead in Boston.
Clearly, gone are the Three Laws
of Robotics proposed by science
fiction writer Isaac Asimov:
• A robot may not injure a
human being or, through inaction,
allow a human being to come to
harm.
• A robot must obey the orders
given it by human beings except
where such orders would conflict
with the First Law.
• A robot must protect its own
existence as long as such protection
does not conflict with the First or
Second Laws.
If I may offer a more pragmatic,
modern set of laws that follows
current use patterns, they would read
as follows:
• A robot may not injure a
second human being or, through
inaction, allow a second human being
to come to harm, unless directed to
do so by a first human being, either
through software instruction or direct
command.
• A robot must obey the orders
given it by the first human being,
with direct command taking priority
over software instructions.
• A robot must protect its own
existence as long as such protection
does not endanger the first human
being.
In my set of laws, there’s clearly
an “us versus them” backdrop, with
the second humans, who are deemed to
deserve harm, kept separate from the
first humans who control the robots.
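The revised laws above describe an order of precedence more than an ethic: a direct command outranks a software instruction, and self-preservation yields to operator safety. As a purely illustrative sketch (every name below is hypothetical, not from any real robotics API), that precedence could be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a toy model of the column's revised "laws" as an
# ordered policy check. All names here are hypothetical.

@dataclass
class Order:
    action: str  # e.g., "engage" or "stand_down"

def resolve_order(direct: Optional[Order],
                  programmed: Optional[Order]) -> Optional[Order]:
    # Revised second law: a direct command from the first human
    # takes priority over software instructions.
    return direct if direct is not None else programmed

def may_harm_second_human(order: Optional[Order]) -> bool:
    # Revised first law: harm to a second human is permitted only
    # when a first human has directed it, by command or by software.
    return order is not None and order.action == "engage"

def protect_self(endangers_first_human: bool) -> bool:
    # Revised third law: self-preservation yields whenever it
    # would endanger the first human.
    return not endangers_first_human
```

Note that, unlike Asimov's formulation, nothing in this policy forbids harm outright; the only check is who issued the order, which is exactly the "us versus them" backdrop described above.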
I’m sure we’ll work out the
technical issues in self-driving cars,
weaponized robots, and the like. I’m
much less certain that the humans
involved in the decision making of
when and how to rely on these
technologies will consistently make
the right choices. I’m much more
comfortable with Asimov’s proposed
laws than I am with my set of laws.
I’d love to hear your comments on
the serious side of robotics. SV