Taking the STAIRs to Advance Robot Development
By Jeremy Kerfs
Commercial robotics centers almost entirely on the machines of assembly lines. Their precisely controlled movements repeat the same motion every time, screwing in bolts or checking for proper alignment. The vast majority of these machines do not carry a single sensor to tell them anything about their environment. Instead, they follow preprogrammed instructions, on the assumption that they will always encounter exactly the same situation.
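The open-loop control the article describes can be sketched in a few lines. This is an illustrative toy, not any real robot's API: the `ArmController` class and its joint angles are hypothetical stand-ins, and the point is simply that nothing in the loop ever reads a sensor.

```python
# A minimal sketch of open-loop (sensorless) control: the same joint
# commands repeat every cycle regardless of what is actually in front
# of the arm. ArmController and its angles are hypothetical.

class ArmController:
    """Stand-in for an industrial arm's command interface."""

    def move_to(self, joint_angles):
        print(f"moving joints to {joint_angles}")

    def close_gripper(self):
        print("gripper closed")

    def open_gripper(self):
        print("gripper opened")


# A fixed sequence of waypoints, tuned once by hand for one workcell.
BOLT_SEQUENCE = [
    (90, 45, 10),   # reach over the bolt feeder
    (90, 30, 10),   # lower onto the bolt
    (0, 45, 90),    # swing over to the workpiece
    (0, 30, 90),    # drive the bolt home
]


def run_cycle(arm):
    # No feedback anywhere: if a part is misaligned or missing,
    # the arm executes the identical motion anyway.
    arm.move_to(BOLT_SEQUENCE[0])
    arm.move_to(BOLT_SEQUENCE[1])
    arm.close_gripper()
    arm.move_to(BOLT_SEQUENCE[2])
    arm.move_to(BOLT_SEQUENCE[3])
    arm.open_gripper()


if __name__ == "__main__":
    run_cycle(ArmController())
```

A robot like STAIR differs precisely in that its commands would depend on sensor input rather than a hard-coded waypoint list.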
38 SERVO 05.2010
Researchers at Stanford and Cornell are pioneering new vision algorithms and new methods of teaching robots to interact fluently with their surroundings, in the hope of creating machines that can respond to changes around them. Robots with comparable vision systems do exist, but they are usually confined to research or hobbyist projects. Mobile robots that could grasp and manipulate objects with dexterity close to that of humans could, for example, move plants, inspect pipes for leaks, or clean up spills without being teleoperated.
Most of this pioneering research uses the STAIR (STanford Artificial Intelligence Robot) as its base. The robot has a single arm that can be articulated at several joints in different directions; at its end is a claw that can grasp even breakable objects like glassware. The platform includes a computer, a power source, and a complete system of servos and wheels for maneuvering. What varies is the vision equipment: successive experiments have added cameras, lasers, and other sensors to detect motion, objects, and the environment.
The robot's most recent achievement was opening doors and operating elevators. This was no small feat, and it came about only after a novel approach to vision. Most robots run camera data through programs that detect faces, walls, rough terrain, or similar objects. The robot computes this with data from the color and lighting