26 SERVO 09.2015
EVOLUTION OF A MOTHER ROBOT
When creating a brand new robot, it’s usually a good idea to design and test it in simulation first, to get a sense of
how well the design will work. However, even a successful
simulated robot provides only limited insight into how it will
actually perform once you build it.
This fundamental disconnect between simulation and
reality becomes especially problematic when you’re dealing
with an area of robotics where it’s impractical to build
physical versions of everything. Evolutionary robotics is a
good example: robot designs are iteratively evolved and
tested over hundreds (or thousands) of
generations. It works great in simulation (if you have a fast
computer), but is much harder to do with physical hardware.
In a paper recently published in PLOS ONE, Luzius
Brodbeck, Simon Hauser, and Fumiya Iida from the Institute
of Robotics and Intelligent Systems at ETH Zurich took
things one step further by teaching a “mother robot” to
autonomously build children robots out of component parts
to see how well they move, doing all of the hard work of
robot evolution without any simulation compromises.
The basic idea behind evolutionary robotics is to build a
whole bunch of simple robots, test them in some way, and
then take a few of the most promising robots and use them
in the design of the following generation.
A UR5 arm is the “mother robot” which constructs each
“child robot” out of a few standardized parts, including active
cubes with one rotating face and smaller passive cubes made
out of wood. The mother robot hot-glues active and passive
cubes together and then transports them to a testing area,
where they’re wirelessly activated and an overhead camera
watches them wiggle around.
Once the evaluation is complete, the child robots are
disassembled (manually, for now) by removing the hot glue,
and the components are returned to the queue to make a
new robot. Meanwhile, in software, the successful “elite”
designs (the ones that were able to move the farthest in the
least amount of time) are carried on to the next generation
unchanged. The system also mutates or crossbreeds the
elites to create the rest of the next generation.
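The generational scheme described above — elites carried over unchanged, with mutation and crossover filling out the rest of the population — can be sketched in a few lines of Python. The encoding here (a design as a list of gene dictionaries with amplitude and phase parameters) is purely illustrative, not the authors' actual representation:

```python
import random

def mutate(design, rate=0.2):
    """Randomly perturb some genes of a copied design.
    (Hypothetical gene fields; the paper's encoding differs.)"""
    child = [dict(g) for g in design]
    for gene in child:
        if random.random() < rate:
            gene["amplitude"] = random.uniform(0.0, 1.0)
            gene["phase"] = random.uniform(0.0, 6.28)
    return child

def crossover(a, b):
    """Splice two parent designs at a random cut point."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return [dict(g) for g in a[:cut] + b[cut:]]

def next_generation(scored, pop_size, n_elites=2):
    """scored: list of (fitness, design) pairs; higher fitness is better.
    Elites pass to the next generation unchanged; mutation and
    crossover of the elites produce the rest of the population."""
    scored = sorted(scored, key=lambda s: s[0], reverse=True)
    elites = [design for _, design in scored[:n_elites]]
    new_pop = [[dict(g) for g in e] for e in elites]  # unchanged copies
    while len(new_pop) < pop_size:
        if random.random() < 0.5:
            new_pop.append(mutate(random.choice(elites)))
        else:
            new_pop.append(crossover(*random.sample(elites, 2)))
    return new_pop
```

In the physical setup, of course, "evaluating" a design means the mother robot must actually build it and watch it move, which is what makes each generation so much more expensive than in simulation.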
Overall, “a fitness increase of more than 40 percent over
10 generations was observed in all experiments,” which is
pretty good, but the impressive part is that it’s all physical.
The robots have all been built and tested, so you know the
elite designs really are elite, and will behave well in whatever
application you come up with for a weird little robot made
out of some cubes.
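The fitness criterion — moving the farthest in the least amount of time — can be computed directly from the overhead camera's tracking data. A minimal sketch, assuming the camera yields timestamped centroid positions (the paper's exact formula may differ):

```python
import math

def locomotion_fitness(track):
    """track: list of (t_seconds, x, y) centroid samples from the
    overhead camera. Fitness is net displacement divided by elapsed
    time, so designs that travel farther in less time score higher."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    elapsed = t1 - t0
    if elapsed <= 0:
        return 0.0
    return math.hypot(x1 - x0, y1 - y0) / elapsed
```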
SLAM DUNK ON IMPROVEMENTS
John Leonard’s group in the MIT Department of Mechanical Engineering specializes in SLAM
(simultaneous localization and mapping), which is the
technique whereby mobile autonomous robots map their
environments and determine their locations.
At the recent Robotics: Science and Systems
conference, members of Leonard’s group presented a
new paper demonstrating how SLAM can be used to
improve object-recognition systems, which will be a vital
component of future robots that have to manipulate the
objects around them in arbitrary ways.
The system uses SLAM information to augment
existing object-recognition algorithms. Its performance
should thus continue to improve as computer-vision
researchers develop better recognition software, and roboticists develop better SLAM software.
Despite working with existing SLAM and object-recognition algorithms, however, and despite using only the output of an
ordinary video camera, the system’s performance is already comparable to that of special-purpose robotic object-recognition
systems that factor in depth measurements as well as visual information.
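One way SLAM can augment single-frame object recognition is by associating detections of the same map location across many viewpoints: an object seen consistently from multiple camera poses accumulates evidence, while a one-frame false positive does not. The following is an illustrative fusion rule, not the MIT group's actual algorithm — it assumes each detection's 3D point has already been placed in the SLAM map frame using the estimated camera pose:

```python
from collections import defaultdict

def fuse_detections(frames, cell=0.5):
    """frames: list of per-frame detection lists, each entry a
    (label, confidence, (x, y, z)) tuple in the SLAM map frame.
    Detections landing in the same map cell are treated as the same
    object, and their confidences are summed across viewpoints."""
    votes = defaultdict(float)
    for detections in frames:
        for label, conf, (x, y, z) in detections:
            key = (label, round(x / cell), round(y / cell), round(z / cell))
            votes[key] += conf
    # For each cell, keep the label with the highest accumulated score.
    best = {}
    for (label, *cell_idx), score in votes.items():
        idx = tuple(cell_idx)
        if idx not in best or score > best[idx][1]:
            best[idx] = (label, score)
    return best
```

The appeal of this kind of structure is exactly what the article notes: the per-frame recognizer and the SLAM pose estimator are swappable components, so the fused result improves as either one does.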