by David Geer
Contact the author at firstname.lastname@example.org
PACE ROBOTICS LAB
Activevision Robot Technology Captures Sights in 3D
Pace Robotics Lab has developed a real looker — a robot with
machine vision that remembers the world in 3D.
Pace University Labs produced the
“activevision” technology (per a
Pace University academic paper)
in conjunction with research into a
much larger robot cognition project.
With activevision, the robot models
itself and its environment in a 3D world
using graphics rendering engine
technology from Ogre3D, just like that
used in gaming software.
The robot sees the world around it,
then assembles it in 3D. It saves and
works within that reservoir of graphical
data in order to develop changing and
improving perceptions of its surroundings.
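The idea of a stored, gradually improving picture of the surroundings can be sketched in a few lines of Python. This is a hypothetical illustration, not the lab's actual code: repeated sightings of the same surface raise the model's confidence in it, so the robot's picture of the world sharpens over time.

```python
# Hypothetical sketch: accumulate 3D observations into a voxel map
# whose per-cell confidence improves with repeated sightings.

class WorldModel:
    def __init__(self, voxel_size=0.1):
        self.voxel_size = voxel_size
        self.confidence = {}          # (i, j, k) -> occupancy confidence in [0, 1]

    def _voxel(self, point):
        # Quantize a continuous (x, y, z) point to a voxel index.
        return tuple(int(c // self.voxel_size) for c in point)

    def observe(self, points):
        # Each sighting nudges the cell's confidence toward 1.0,
        # so perceptions "improve" as evidence accumulates.
        for p in points:
            v = self._voxel(p)
            old = self.confidence.get(v, 0.0)
            self.confidence[v] = old + 0.5 * (1.0 - old)

    def occupied(self, threshold=0.6):
        # The surfaces the model currently believes are really there.
        return {v for v, c in self.confidence.items() if c >= threshold}


model = WorldModel()
wall = [(1.0, 0.05 * i, 0.5) for i in range(4)]   # a small patch of wall
model.observe(wall)                                # first sighting: tentative
model.observe(wall)                                # second sighting: confident
```

The point of the sketch is the update rule: the model never simply overwrites what it saw before, it blends new evidence into what it already believes, which is the "changing and improving perceptions" the article describes.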
10 SERVO 03.2008
The robot vision project, called
Adaptive Dynamics and Active
Perception for Thought (ADAPT), falls
under the work of three University
research groups: one from Pace
University (computer science), one from
Brigham Young, and one from Fordham.
The research has produced several
robots, some of which are available
from ActivMedia today. Using robotic
cognition and activevision, these
robots are capable of a variety of
responses in largely unpredictable
environments.
There were obstacles to accomplishing this level of perception with earlier
robots. Developers had to pre-program
those robots to work in their environments. So, while the pre-programming
largely determined how they could
respond, it didn't help them learn from
the environment or form their own
perceptions before responding.
By developing robotic cognitive abilities, researchers hope to be able to give
robots the tools they need to learn from
their environments and adapt accordingly. Some of the pieces of that puzzle
include the robot's abilities to solve
problems and improve its navigation.

ActivMedia ActivRobot, Pioneer 2, rear side-angle view. This early experimental version is
the granddaddy of ActivRobots' Pioneer 3
models, including the P3-DX. The P3-DX
comes with a battery, two wheels, a caster,
motors, encoders, and a front ring of sonars.
The robot also has a microcontroller, sonar
board, and power board on the hardware
I/O bus; ARCOS microcontroller server
software; ARIA software with the ARIA
Robotics API for software developers
(to add to the robot's skills); and an
operator's manual.
Seeing is Predicting
The mobile robot in the ADAPT
research sees by first predicting what it
will see. It does this using a virtual,
multimedia model of the world around
it. This model exists in the SOAR
software and algorithms (SOAR is a
cognitive software architecture and
framework for developing intelligent
robotics systems), and in memory.
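The predict-then-verify idea can be sketched in a few lines of Python (a hypothetical illustration under simplified assumptions, not the ADAPT code): the robot renders an expected sensor reading from its internal model, compares it with the actual reading, and spends its attention, and its model updates, where the two disagree.

```python
# Hypothetical sketch of prediction-driven perception: the robot predicts
# sensor readings from its internal model, then uses the mismatch with the
# real readings to decide where to look and how to update the model.

def predict_ranges(world_model, headings):
    # Render expected sonar ranges from the internal model (stubbed here
    # as a lookup, with 5.0 m as the "nothing there" default reading).
    return [world_model.get(h, 5.0) for h in headings]

def perceive(world_model, headings, actual_ranges, tolerance=0.2):
    predicted = predict_ranges(world_model, headings)
    surprises = []
    for h, pred, actual in zip(headings, predicted, actual_ranges):
        if abs(pred - actual) > tolerance:
            surprises.append(h)              # model was wrong here: attend to it
            world_model[h] = actual          # fold the new evidence into the model
    return surprises

# The model believes heading 0 has a wall at 1.0 m and nothing at 90 degrees.
model = {0: 1.0}
# Sonar actually reports the wall has moved and a new obstacle sits at 90.
surprises = perceive(model, headings=[0, 90], actual_ranges=[2.0, 1.5])
```

Readings that match the prediction cost the robot nothing; only the surprises trigger work, which is what makes seeing-by-predicting cheaper than rebuilding the whole scene on every frame.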
The virtual, multimedia aspect of
the model exists in a 3D game world
built with Ogre3D, an open-source 3D
graphics rendering engine of the kind
used in commercial games. The system
pairs it with state-of-the-art game physics
(the physics make virtual objects in the
game respond to each other in the same
way that the same objects would
respond in the real world).
The robot uses its machine vision
and software tools to create a copy of its
environment with itself included. It stores
and interacts with 3D data in the virtual
game world, learning from the process.
This forces the robot to use its intelligence: it makes decisions based on its
perceptions of the world around it, not
on raw machine-vision feedback from
that world alone.
The robot learns to adapt and
respond to the world around it as a part
of a complex problem-solving process.
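The contrast between deciding from an internal model and reacting to raw feedback can be made concrete with a small Python sketch (hypothetical, with made-up numbers, not the project's control code):

```python
# Hypothetical sketch: decide from the internal model, not raw feedback.
# A purely reactive robot trusts whatever the sonar says this instant;
# a model-based robot consults its remembered map instead.

def reactive_turn(current_reading, threshold=0.5):
    # React only to the instantaneous sensor value.
    return "turn" if current_reading < threshold else "forward"

def model_based_turn(world_model, heading, threshold=0.5):
    # Consult the remembered range for this heading, even when the
    # live sonar reading is momentarily noisy or missing.
    remembered = world_model.get(heading, float("inf"))
    return "turn" if remembered < threshold else "forward"

model = {0: 0.3, 90: 4.0}    # remembered: obstacle dead ahead, clear to the right

# A single glitched sonar ping claims the way ahead is clear ...
glitch = 3.0
# ... the reactive policy believes it; the model-based one still turns away.
```

One glitched reading fools the reactive policy, while the model-based policy keeps behaving sensibly, which is the practical payoff of perceiving through a remembered world rather than through the sensor stream alone.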
The robot’s software uses the virtual