by David Geer
Contact the author at geercom@windstream.net
Discuss this article in the SERVO Magazine forums at http://forum.servomagazine.com
Roboticist Achieves Human-Aided Robot Hand-Eye Coordination
Taylor Valtrop encountered some obstacles when he set out to program a Nao
to exhibit autonomous hand-eye coordination. “I wanted the robot to
autonomously interact with objects in its environment,” Valtrop commented.
Valtrop first tried home-built stereo cameras as a 3D sensor to retrieve the
environmental data he needed, capturing views that would supply the eye portion
of the hand-eye interaction. However, the data quality was unsatisfactory.
Noting that the 3D data from a Kinect system was of much higher quality,
Valtrop moved in that direction. “But the Kinect was too big to mount on my
robot and it couldn’t see short enough range to see the robot’s own hands, so
this failed to meet my needs as well,” he explained.
Valtrop ultimately settled on using the Kinect system to collect his own skeleton
position data, which he could then feed into the robot. In the end, he achieved
some remarkable human-aided, hand-eye coordination with his Nao robot.
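As an illustration only (not Valtrop's actual code), here is a minimal C++ sketch of the kind of call involved in feeding a skeleton-derived angle to the robot through Aldebaran's NAOqi SDK; the IP address, joint choice, and speed fraction are assumptions invented for the example.

// Minimal sketch: command one Nao joint from a skeleton-derived
// angle via the NAOqi C++ SDK. The IP address, joint, and the 20%
// speed fraction are assumed values, not Valtrop's settings.
#include <alproxies/almotionproxy.h>

int main()
{
    AL::ALMotionProxy motion("192.168.1.10", 9559); // robot's IP (assumed)
    motion.setStiffnesses("RArm", 1.0f);            // power the arm motors

    // Suppose the Kinect skeleton yielded a right-elbow bend of
    // 1.0 radian; command the matching Nao joint at 20% max speed.
    float elbowRoll = 1.0f;
    motion.setAngles("RElbowRoll", elbowRoll, 0.2f);
    return 0;
}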
A com-PILE-ation of all the hardware that Taylor Valtrop uses to control his Nao robot, to see what his Nao sees.
Teleoperation
There are many software libraries
available that make the Kinect's data
accessible. “I use OpenNI/NITE
which is easily discovered using Google,”
Valtrop said. He was able to make
the Kinect software, hardware, and the
eyewear work for his purposes without
any modifications. He simply used the
available libraries to access the data
from these systems and fed it into
his own system. The Nao is a
stock robot with no customizations for
the current research.
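To give a feel for what accessing that data looks like, the following is a rough C++ sketch of polling NITE's skeleton through OpenNI. It is not Valtrop's code, and the user-calibration callbacks that NITE requires before tracking starts are omitted for brevity.

// Rough sketch: poll the NITE skeleton through OpenNI and print
// the tracked user's right-hand position in millimeters.
// Calibration callbacks (required before IsTracking() can become
// true) are omitted to keep the listing short.
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    context.Init();

    xn::UserGenerator userGen;   // NITE middleware backs this node
    userGen.Create(context);
    userGen.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);

    context.StartGeneratingAll();
    for (int frame = 0; frame < 300; ++frame)  // ~10 s at 30 fps
    {
        context.WaitAndUpdateAll();

        XnUserID users[4];
        XnUInt16 nUsers = 4;
        userGen.GetUsers(users, nUsers);

        for (XnUInt16 i = 0; i < nUsers; ++i)
        {
            if (!userGen.GetSkeletonCap().IsTracking(users[i]))
                continue;

            XnSkeletonJointPosition hand;
            userGen.GetSkeletonCap().GetSkeletonJointPosition(
                users[i], XN_SKEL_RIGHT_HAND, hand);
            if (hand.fConfidence > 0.5f)
                printf("right hand: %.0f %.0f %.0f mm\n",
                       hand.position.X, hand.position.Y, hand.position.Z);
        }
    }
    context.Release();
    return 0;
}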
However, the data from the Kinect is
not immediately ready for use in
robot control. “The pure data from the
Kinect is a depth map, which is a 2D
image that gives the distance of each
point in the image to the Kinect
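To make the depth-map idea concrete, here is a minimal sketch of back-projecting a single depth pixel into a 3D point with the standard pinhole camera model; the focal length and optical center are rough assumed values for the Kinect's 640x480 depth camera, not figures from the article.

// Minimal sketch: turn one depth-map pixel (u, v) with depth z
// into a 3D point in the camera frame via the pinhole model.
#include <cstdio>

int main()
{
    const float fx = 575.8f, fy = 575.8f; // focal lengths in pixels (assumed)
    const float cx = 320.0f, cy = 240.0f; // optical center (assumed)

    int   u = 400, v = 180;   // example pixel coordinates
    float z = 1500.0f;        // example depth reading: 1.5 m, in mm

    float x = (u - cx) * z / fx;
    float y = (v - cy) * z / fy;
    printf("3D point: (%.0f, %.0f, %.0f) mm\n", x, y, z);
    return 0;
}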