particular motion pattern, and then,
when in that pattern again, tries the
previously successful response. If this
response is again successful, its
confidence level will be incremented
to a maximum of five (adjustable). If
not, its confidence level will be
decremented with a lower limit of
zero.
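This increment-on-success, decrement-on-failure bookkeeping can be sketched in a few lines. This is my own illustrative version, not the magazine's original code; the names and the clamping limits (five and zero, as described above) are the only things taken from the article:

```python
# Hypothetical sketch of the Beta confidence update described above.
# MAX_CONFIDENCE mirrors the article's adjustable maximum of five.
MAX_CONFIDENCE = 5
MIN_CONFIDENCE = 0

def update_confidence(confidence, success):
    """Increment on success, decrement on failure,
    clamped to the range [MIN_CONFIDENCE, MAX_CONFIDENCE]."""
    if success:
        return min(confidence + 1, MAX_CONFIDENCE)
    return max(confidence - 1, MIN_CONFIDENCE)
```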
The third level of machine
intelligence — Gamma — will try high
confidence level (at least three) Beta
responses from other motion patterns
when in a motion pattern for which it
has no memory of a Beta response,
before reverting to Alpha. So, with
each incremental level of intelligence
— Alpha, Beta, and Gamma — the
robot learns the motion patterns
that maximize displacement more
and more quickly.
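The three-level lookup can be sketched as a single selection routine. Here, memory mapping a motion pattern to a (response, confidence) pair is my own framing; only the threshold of three and the Beta/Gamma/Alpha ordering come from the article:

```python
import random

GAMMA_THRESHOLD = 3  # minimum Beta confidence worth borrowing

def choose_response(pattern, memory, all_responses):
    """Pick a response for the current motion pattern.
    Beta:  reuse this pattern's own remembered response, if any.
    Gamma: otherwise borrow a high-confidence (>= 3) response
           learned in some other pattern.
    Alpha: otherwise fall back to a random response."""
    if pattern in memory:
        return memory[pattern][0]          # Beta recall
    for response, confidence in memory.values():
        if confidence >= GAMMA_THRESHOLD:
            return response                # Gamma generalization
    return random.choice(all_responses)    # Alpha random trial
```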
When you come up with a good
solution, you remember it. When that
solution can be applied elsewhere, it’s
likely because of the experience you
had in the past that you’re going to
come up with that solution quicker.
How many projects start slowly, then
speed up once you get the hang of
them? You try random ideas in your
head, and you realize a lot of them
are foolish without ever acting on
them, but you had to run them
through your head first. That’s
learning!
When you learn something that
works or moves you towards your
goal, you keep doing it. That is the
strategy of little Junior. In Rodney’s
world, he looks at the mobility sensor
from a Roomba caster with an
emitter/detector (Figure 3). He waits
a millisecond or two, and then looks
again to see if he’s moved. If the
sensor has changed from black to
white (or vice versa), that means the
robot is moving forward or backward — our
definition of success!
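The sample-wait-sample test can be sketched as follows. The read_sensor callable is a hypothetical stand-in for reading the Roomba caster's emitter/detector; only the two-reading comparison and the millisecond-scale delay come from the article:

```python
import time

def has_moved(read_sensor, delay_s=0.002):
    """Sample the mobility sensor twice, a couple of milliseconds
    apart. A black-to-white (or white-to-black) transition means
    the caster wheel turned, i.e., the robot actually moved."""
    first = read_sensor()   # e.g., True = white, False = black
    time.sleep(delay_s)
    second = read_sensor()
    return first != second  # any change counts as success
```

A fake sensor (an iterator of readings) is an easy way to exercise this on the bench before wiring up real hardware.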
When the robot has success and
achieves his goal (in this case,
movement), he needs to recall the
previous motion code, so a facility has
been created to remember that
number. That is, the robot copies its
current motion pattern (the variable
MotionCode) into another variable,
PastMotion
(PastMotion = MotionCode). So, if the
newly selected random code is
successful, Junior can remember the
motion pattern from which it came
and increment the confidence level of
that response.
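Here is a sketch of that success bookkeeping. MotionCode and PastMotion are the article's variable names; the confidence table and the cap of five are described above, while the function itself is my own illustrative framing:

```python
def on_success(motion_code, confidence_table):
    """PastMotion = MotionCode: remember which pattern just
    succeeded, then bump its confidence level (capped at 5)."""
    past_motion = motion_code
    level = confidence_table.get(past_motion, 0)
    confidence_table[past_motion] = min(level + 1, 5)
    return past_motion
```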
The original Beta code (no pun
intended) can be downloaded at
www.servomagazine.com/index.php/magazine/article/february2015_Peavy.
Try something and observe the
effect. If the effect is positive (that is,
it moves you towards your goal), the
next time you are in that situation and
desire the same goal, you simply
remember what you did. This is the
way we work. So, why can’t robots
self-program the same way?
Granted the “goal” here is
contrived, but the process is genuine.
The trick is to find relevant sensor
data to build up a confidence level,
memorize successful responses, and
finally generalize this information
using high confidence Beta responses
before reverting to random actions
again.
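The whole try/observe/remember/generalize loop described above fits in one routine. This is a sketch under my own assumptions (try_and_sense stands in for firing the motors and checking the mobility sensor; the memory layout is illustrative), tying together the Alpha, Beta, and Gamma behaviors from the article:

```python
import random

def learn_step(pattern, memory, responses, try_and_sense):
    """One pass of the learning loop: pick a response (Beta recall,
    then Gamma borrowing, then Alpha random), try it, and update
    confidence in [0, 5] based on whether the robot moved."""
    if pattern in memory and memory[pattern][1] > 0:
        response = memory[pattern][0]      # Beta: remembered response
    else:
        borrowed = [r for r, c in memory.values() if c >= 3]
        response = borrowed[0] if borrowed else random.choice(responses)
    moved = try_and_sense(response)        # fire motors, check sensor
    old = memory.get(pattern, (response, 0))[1]
    new = min(old + 1, 5) if moved else max(old - 1, 0)
    memory[pattern] = (response, new)
    return response, moved
```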
Is this serious? Is it real
intelligence? Who knows? Who cares?
It’s a real autonomous cybernetic
creature. Have fun exploring the
questions, and as Heiserman says, “…
if someone asks you what your robot
does and they don’t buy your answer,
it’s their problem.”
SERVO 05.2015 49
Figure 2. The original 1979 Rodney was an
8085-microprocessor-controlled device with
an eight-bit data bus, 12-bit address bus,
and a load button with which you’d
program the robot one byte at a time in
binary.
Figure 3. The heart of the Rodney Jr. system is the Roomba caster or stall sensor, which I
prefer to call a mobility sensor.