Perhaps the most spectacular aspect of self
similar optimization is its ability to continually
readapt to changing environments, something that
genetic algorithms can hardly do.
For example, if you use it to design a neural
net robot brain, the brain can learn to navigate
around a particular room. If small changes in the
room occur (such as a chair being moved), the self
similar system can readily adapt to those changes.
If the robot is suddenly put into a completely
different room, that is not a problem either. It will
start again — from scratch, if necessary — and readapt automatically to new situations without the
need to press a reset button. Genetic algorithms, in
contrast, converge to a particular solution and then
cannot track changes in the environment, whether
those changes are large or gradual.
In this respect, self similar optimization is
arguably superior to what happens in nature, which
relies on a process similar to genetic algorithms
and can therefore converge and lose the ability to
change. Many plants and animals have
gone extinct because they could not adapt to
radical changes in the environment, whereas if
they had been equipped with a self similar
optimizer they might still exist! There are some
very simple examples of adaptive and co-evolving
code to download and try at Reference 1.
Open Ended Evolution
Another critical advantage to self similar
optimization is its effectiveness for co-evolution. In
nature, you can see co-evolution at work, like with
plants and insects. An insect may adapt over the
generations to eat the leaves of a particular plant.
The plant may counter-adapt over the generations
by developing thicker and less palatable leaves.
This can lead to an arms-race between species,
fueling the evolution of ever more complex forms.
Co-evolution is particularly straightforward to set
up with self similar optimization. Each object in the
environment has its own self similar optimizer, and
continually appraises itself against the environment
and tries to improve itself.
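The per-object loop this paragraph describes might be sketched as below. This is a minimal illustration, not the downloadable code from Reference 1: the two agents, their single traits, and their fitness rules are invented here, and the scale-free mutation is one plausible reading of a "self similar" step.

```python
import random

def self_similar_step(value):
    # One scale-free mutation: step sizes span many orders of
    # magnitude, from large jumps down to microscopic nudges.
    return value + random.choice((-1.0, 1.0)) * 2.0 ** -random.uniform(0.0, 16.0)

def coevolve(steps=5000):
    """Each object carries its own optimizer: it appraises a mutated
    copy of itself against the current environment and keeps the copy
    if it is no worse. Here a hypothetical insect tracks the plant's
    leaf-toughness trait while the plant tries to escape it."""
    insect, plant = random.random(), random.random()
    for _ in range(steps):
        insect_child = self_similar_step(insect)
        if abs(insect_child - plant) <= abs(insect - plant):
            insect = insect_child   # closer match: better at eating the plant
        plant_child = self_similar_step(plant)
        if abs(plant_child - insect) >= abs(plant - insect):
            plant = plant_child     # further away: less palatable leaves
    return insect, plant
```

Because neither agent ever stops appraising itself, the chase can continue indefinitely, which is the arms-race behavior the text describes.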
Unending evolution with continuous
improvement is a nice idea. It is true that
sometimes genetic algorithms are run for days or
even weeks, but you could end up wasting your
time if they actually converge after only one day,
for example. Self similar optimization at least
allows for the theoretical possibility of open-ended
evolution and the continuous accumulation of
improvement. In reality, the rate of improvement
does start to slow down after a while.
Nevertheless, it is easy to create variations on the
basic code — such as evolving the precision
parameter along with the design parameters —
that offer interesting possibilities for open-ended
evolution.
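That variation might look like the sketch below, assuming a (1+1)-style hillclimber with scale-free mutation steps. The article does not print its algorithm, so the function names, the exponent range, and the clamping are all illustrative choices.

```python
import random

def mutate(value, max_exponent):
    """One self-similar step: the magnitude 2**-u with u drawn
    uniformly spans many scales, so both large and microscopic
    adjustments are possible at any time."""
    return value + random.choice((-1.0, 1.0)) * 2.0 ** -random.uniform(0.0, max_exponent)

def optimize(cost, genome, max_exponent=20.0, steps=10000):
    """(1+1) hillclimber: a mutated child replaces its parent only
    if it scores no worse. The precision parameter (max_exponent)
    is itself mutated along with the design parameters, as the
    text suggests; it is clamped here for stability."""
    best = cost(genome)
    for _ in range(steps):
        child_exp = min(32.0, max(4.0, mutate(max_exponent, 2.0)))
        child = [mutate(g, child_exp) for g in genome]
        score = cost(child)
        if score <= best:
            genome, best, max_exponent = child, score, child_exp
    return genome, best
```

Evolving the precision this way lets the search itself decide how finely to tune, one simple route toward the open-ended behavior discussed above.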
An initial paper about self similar optimization,
or continuous gray code optimization (as it is
sometimes known), appeared relatively recently
(see Reference 2). It offers the robotics
community a simple and useful tool. Even
hobbyists who would avoid the complexity of
coding a genetic algorithm can
apply this idea and find ingenious
uses for it, particularly for interactive
robotics. It is not a miracle cure for
every problem, but it is rich in
potential applications. SV
Figure 3. The route taken in minimizing
z = x*sin(4*x) + 1.1*y*sin(2*y), starting from a
random position. Each X indicates where a parent
is replaced by its child. An important point is that
the system is always able to make both large and
microscopic adjustments at any time, giving
greater flexibility.
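Figure 3's experiment can be approximated in a few lines, again assuming a simple parent/child hillclimber with scale-free steps; the step distribution and starting range here are guesses, not the article's actual code.

```python
import math
import random

def z(x, y):
    # The test surface from Figure 3.
    return x * math.sin(4.0 * x) + 1.1 * y * math.sin(2.0 * y)

def minimize(steps=20000):
    # Start from a random position, as in the figure.
    x, y = random.uniform(-10.0, 10.0), random.uniform(-10.0, 10.0)
    best = z(x, y)
    for _ in range(steps):
        # Scale-free mutations: large jumps and microscopic
        # adjustments are both possible at every step.
        nx = x + random.choice((-1.0, 1.0)) * 2.0 ** -random.uniform(0.0, 20.0)
        ny = y + random.choice((-1.0, 1.0)) * 2.0 ** -random.uniform(0.0, 20.0)
        if z(nx, ny) <= best:          # the child replaces its parent
            x, y, best = nx, ny, z(nx, ny)
    return x, y, best
```

Plotting the accepted (x, y) points along the way reproduces the kind of route the figure shows.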
54 SERVO 01.2010