[Photo caption: Camera and sonar view of Pioneer 2.]
world to model everything the robot perceives and responds to in the real world. This way, the robot’s intelligence can reason about what happens in the real world using the stored “working memory elements” of the virtual world, according to a Pace Lab Obstacle Avoidance paper by Dr. D. Paul Benjamin, et al.
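For readers unfamiliar with SOAR’s terminology, working memory elements are conventionally just identifier-attribute-value triples. The snippet below is a minimal Python sketch of that idea; the object names and values are invented for illustration and are not taken from the Pace Lab code.

```python
# A minimal sketch of SOAR-style working memory elements:
# identifier-attribute-value triples. All values below are hypothetical.
from collections import namedtuple

WME = namedtuple("WME", ["identifier", "attribute", "value"])

working_memory = [
    WME("obstacle-3", "type", "chair"),      # hypothetical recognized object
    WME("obstacle-3", "x-position", 2.4),    # meters, in the virtual world
    WME("obstacle-3", "y-position", -1.1),
    WME("obstacle-3", "last-seen", 1843),    # decision cycle when last observed
]

# The agent "reasons" by matching rules against elements like these, for
# example finding every obstacle that lies near the robot's planned path.
```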
Machine Vision
The cognitive machine vision robot, Pioneer 2, sees through a pair of FireWire (IEEE 1394) Canon VC-C4 cameras and “framegrabbers.” The cameras are steerable via a pan/tilt hardware unit from Directed Perception. The robot uses an onboard Linux OS for command and control, which interfaces wirelessly with a single PC-based computer.
The software brains of the Pioneer 2 consist of two parts: one that processes the “bottom-up” or simple view of the world, and one that processes the “top-down” or closer-look view.
The bottom-up view is quick and dirty, giving the background and the general lay of the land. This image simply provides a view containing objects the robot may want to examine more closely. This software is built on Intel’s OpenCV computer vision library.
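As a rough illustration of what such a quick, bottom-up pass might look like, here is a short OpenCV sketch in Python. It is not the project’s actual code; the camera index, blur size, and minimum blob area are placeholder values.

```python
# A hedged sketch of a "bottom-up" pass: cheaply segment a frame into blobs
# that might be worth a closer look. Thresholds here are placeholders.
import cv2

def find_candidate_regions(frame, min_area=500):
    """Return bounding boxes of coarse blobs the robot may want to examine."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Coarse Otsu threshold: good enough for the "lay of the land"
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture(0)          # placeholder for one of the robot's cameras
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in find_candidate_regions(frame):
        print("candidate region:", x, y, w, h)
cap.release()
```

The point of this pass is only speed: anything it flags can then be handed to the slower, top-down recognizer described next.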
The top-down vision system is much more complex and elaborate. The robot activates this part of its vision when it wants to take a closer look at something in the environment that the first vision system has only vaguely mapped out. It is used for specific object recognition.
[Photo caption: Pioneer 2 with camera mount and cabling, top view. The P2 is the predecessor to the P3-AT from ActivMedia. This model has four wheels and motors with encoders. The AT model has optional sensing software to make use of the new sensing hardware and turn it into intelligible commands the robot can follow. The robot comes with upgrades including inertial correction that counters “skid steer dead reckoning errors.” The bot can be accessorized with laser range finders, both front and rear sonars, pan/tilt/zoom color cameras, stereo range finder cameras, and day/night vision cameras. Finally, GPS, color-tracking, compasses, and tilt position sensing hardware round out some of the most desirable add-ons. Internet operation is also available with this model.]

This “ERVision” software will look
at an object to recognize its distinct
aspects. It will then model the image in
the virtual world and store that so the
object won’t have to be recognized
again. This saves time and effort.
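To give a flavor of that recognize-and-remember step, the sketch below matches a candidate region against stored object models and caches new ones so they can be looked up cheaply later. ORB feature matching stands in here for whatever ERVision actually does internally; the function names and match threshold are invented.

```python
# A hedged sketch of "recognize once, then remember" using ORB features.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
known_objects = {}   # label -> stored ORB descriptors

def recognize(roi_gray, min_matches=15):
    """Return the label of a known object this region matches, or None."""
    _, descriptors = orb.detectAndCompute(roi_gray, None)
    if descriptors is None:
        return None
    for label, model in known_objects.items():
        if len(matcher.match(model, descriptors)) >= min_matches:
            return label          # already modeled; no need to re-recognize
    return None

def learn(label, roi_gray):
    """Model a newly examined object so future sightings are cheap."""
    _, descriptors = orb.detectAndCompute(roi_gray, None)
    if descriptors is not None:
        known_objects[label] = descriptors
```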
As the virtual world becomes more completely detailed, the robot’s memory and perception of its world become clearer and more accurate. In this way, the robot’s obstacle avoidance proficiency in its surroundings increases.
In case any of the obstacles are mobile objects (say, a house cat that might have been sleeping in one spot when it was originally recognized), the robot can use the virtual world model to determine whether any objects it has recognized before are still where they are supposed to be (in other words, whether they have moved).
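A few lines of Python can show the kind of bookkeeping this implies: remember where each recognized object was, and flag anything that turns up somewhere else. The class, threshold, and object names below are invented for illustration and are not part of SOAR or the Pace Lab software.

```python
# A minimal sketch of a virtual world model that flags moved objects.
import math

class VirtualWorld:
    def __init__(self, moved_threshold=0.25):    # meters, arbitrary
        self.objects = {}                        # name -> (x, y)
        self.moved_threshold = moved_threshold

    def remember(self, name, x, y):
        """Store (or update) an object's position after it is recognized."""
        self.objects[name] = (x, y)

    def has_moved(self, name, x, y):
        """Compare a fresh observation against the stored position."""
        if name not in self.objects:
            return False                         # never seen; nothing to compare
        ox, oy = self.objects[name]
        return math.hypot(x - ox, y - oy) > self.moved_threshold

world = VirtualWorld()
world.remember("house-cat", 1.0, 2.0)            # the sleeping cat from the text
print(world.has_moved("house-cat", 1.1, 2.0))    # False: within threshold
print(world.has_moved("house-cat", 3.0, 0.5))    # True: the cat got up and left
```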
SOAR
SOAR, an architecture for developing intelligent robot systems, has been in use for 25 years. It is an integral part of the project that has produced the Pioneer 2 robot (shown in the images) with active vision.

[Photo caption: Pioneer 2 with a view of the serial cable.]
The current version of the software
is SOAR v8.6, for those who would like
to investigate its use on their next
project. SOAR developers hope to bring
SOAR to the point where it can enable all
the tasks of the kinds of intelligent
robots the world envisions for the future.
The goal is for SOAR to solve open-
[Photo caption: Another angle view of the Pioneer 2.]