people’s location relative to the robot, and no
information about color.
For robots, relying on legacy sensors instead of real-time machine vision is like asking people to walk around a room with their eyes closed, Professor Stone illustrates. People would likely use their hands as bump sensors to detect objects and feel their way around the room.
This would make moving around slow,
complex, and herky-jerky. Humans can do a lot
more if they are allowed to open their eyes and
process vision in real time. So can robots.
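As a rough sketch of how limited bump-only sensing is, the loop below drives until it hits something, backs up, and turns. The robot object and its helpers (drive_forward, bumper_pressed, and so on) are hypothetical stand-ins, not any particular robot's API.

    import random
    import time

    def bump_and_wander(robot, steps=100, step_time=0.2):
        """Crude navigation with only a bump sensor: drive until we hit
        something, back up, and turn a random amount before trying again."""
        for _ in range(steps):
            robot.drive_forward()
            time.sleep(step_time)
            if robot.bumper_pressed():  # our only "sense" of the world
                robot.stop()
                robot.drive_backward()
                time.sleep(step_time)
                robot.turn(random.uniform(-90, 90))  # guess a new heading

Every piece of knowledge about the room has to be earned one collision at a time, which is exactly the slow, herky-jerky motion described above.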
Robot machine vision has made a lot of progress, though not all of the problems surrounding it have been solved. At the same time, progress has been made on vision tasks using a new three-dimensional laser range finder that retrieves more than 1M range points per second.
This gives the robot 3D range pictures of the
world around it, according to Professor Stone.
“We are researching this here,” he comments.
Velodyne makes this range finder: the HDL-64E LIDAR. The 64-element sensor offers a 360-degree horizontal field of view (HFOV) and a 26.8-degree vertical field of view (VFOV), according to Velodyne. It reads 1.3M data points per second. This provides the distance and intensity data about the environment that are necessary to achieve a 3D view of the world.
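Each laser return is essentially a range measured along a known azimuth and elevation angle, and turning such a reading into a 3D point is ordinary spherical-to-Cartesian math. The sketch below assumes angles in degrees and a simplified reading layout; it is illustrative, not Velodyne's actual packet format.

    import math

    def return_to_xyz(range_m, azimuth_deg, elevation_deg):
        """Convert one laser return (range plus the beam's azimuth and
        elevation angles) into an (x, y, z) point in the sensor's frame."""
        az = math.radians(azimuth_deg)
        el = math.radians(elevation_deg)
        horiz = range_m * math.cos(el)  # projection onto the horizontal plane
        return (horiz * math.sin(az),   # x: to the sensor's right
                horiz * math.cos(az),   # y: straight ahead
                range_m * math.sin(el)) # z: up

    # One full 360-degree sweep of all 64 beams yields the 3D "range
    # picture" of the surroundings:
    # cloud = [return_to_xyz(r, az, el) for (r, az, el) in sweep]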
While the HDL-64E is too big for a small
humanoid robot, it might fit onto a full-sized
humanoid robot, according to Professor Stone.
“We have an HDL-64E on one of our autonomous
cars. We are able to get a much richer stream of
information than was ever possible before,” he says.
Learn the Language
For humanoid robots to become as intelligent
as people, they will have to be imbued with the
ability to understand natural human language. This
is called natural language processing. “Right now,
if you type a question into Google search, it won’t
simply answer your question. It doesn’t
understand language in the way that people do,”
explains Professor Stone.
With robots, the idea would be that they
could hear or read natural human language,
understand it, and respond by learning, communicating, or obeying a command.
With computers, people could stop using
keyboards and simply talk to their computers. The
computers would be able to listen, parse the
words and sentences, and understand and
respond by typing out words in a word processor
or initiating computer commands.
“You could just tell your computer, ‘open my browser and search for directions to the nearest restaurant,’ rather than having to go and do what we do in Google Maps,” says Professor Stone.
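A toy version of that pipeline, using simple keyword matching in place of real language understanding, might look like the following. Everything here is an illustrative assumption; a genuine natural language system would parse the sentence's structure rather than scan for fixed phrases.

    import webbrowser

    def handle_utterance(text):
        """Toy listen-parse-act loop: match keywords in the (already
        transcribed) utterance and fire the corresponding action."""
        words = text.lower()
        if "open my browser" in words and "directions" in words:
            # A real system would extract the destination from the
            # sentence; here we hard-code the example from the article.
            webbrowser.open("https://maps.google.com/?q=nearest+restaurant")
        elif "open my browser" in words:
            webbrowser.open("https://www.google.com")
        else:
            print("Sorry, I don't understand:", text)

    handle_utterance("Open my browser and search for directions "
                     "to the nearest restaurant")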
When robots learn by doing, they no longer
need to be instructed or controlled quite so much
by an outside intelligence (man’s). “With machine learning, which is one of my areas of expertise, the goal is to write and develop algorithms that allow robots to learn from their own experience.”
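One classic example of such an algorithm is tabular Q-learning, sketched below. This is the textbook method, offered only as an illustration of learning by doing, not as the specific approach used in Professor Stone's lab; the env object (with reset, step, and actions) is a hypothetical interface.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Tabular Q-learning: the robot improves its action-value
        estimates Q(s, a) purely from its own trial-and-error experience."""
        Q = defaultdict(float)
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                # Explore occasionally; otherwise take the best-known action.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                best_next = max(Q[(next_state, a)] for a in env.actions)
                # Nudge Q(s, a) toward the observed reward plus the
                # discounted value of the best follow-up action.
                Q[(state, action)] += alpha * (
                    reward + gamma * best_next - Q[(state, action)])
                state = next_state
        return Q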
News: Humanoids Beat Humans in 2050

Professor Peter Stone of the Department of Computer Sciences, the University of Texas at Austin, is dedicated to the goal of the RoboCup Federation, which is to build a humanoid robot team that can defeat the best human World Cup team on a real soccer field by 2050. According to Dan Burrus, founder of Burrus Research Associates, Inc., and a long-time roboticist, it is more likely that Professor Stone will reach his goal in the 2030 to 2040 time frame. Accomplishing Professor Stone’s goal will solve many important challenges and problems around AI for humanoid robotics. Examples include the robot agility problem (having robots run quickly and manipulate their motors in a very granular way so that they can kick a soccer ball), planning, teamwork, and higher-level cognitive functioning. “The robots need to be able to reason about their opponents and not fall for the same trick over and over,” Professor Stone says.