Processing Head and Body Actions
WALL-E uses one chip to control his head and another to control his body.
His roboticists connected the two chips via a Serial Peripheral Interface (SPI)
bus so they can coordinate commands to WALL-E’s actuators.
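
As a rough sketch of what this two-chip link might look like in firmware, the C fragment below sends actuator commands from one chip to the other over SPI. The command codes, the spi_transfer() routine, and the assumption that the body chip is the SPI master are all illustrative; the article does not name the actual chips or protocol.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical command codes -- the article does not publish the real protocol. */
enum { CMD_HEAD_NOD = 0x01, CMD_HEAD_TURN = 0x02, CMD_EYES = 0x03 };

/* Stand-in for the body chip's SPI driver; on real hardware this would write
 * the byte to the SPI data register and wait for the transfer-complete flag. */
static uint8_t spi_transfer(uint8_t out)
{
    printf("SPI -> 0x%02X\n", out);
    return 0;  /* byte clocked back from the head chip */
}

/* One command byte plus one parameter byte per actuator request. */
static void send_head_command(uint8_t cmd, uint8_t param)
{
    spi_transfer(cmd);
    spi_transfer(param);
}

int main(void)
{
    send_head_command(CMD_HEAD_NOD, 45);  /* e.g., ask the head chip to nod */
    return 0;
}
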
“The head motion is triggered by audio and motion sensors and by pre-coded
digital IR signals. The roboticists pre-programmed the head to produce actions
for WALL-E’s emotive expressions. The head motion is also controlled by
secondary 16-bit chips on a board inside the robot,” says Albert Chan,
Thinkway Toys CEO and President.
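
A trigger-to-action lookup table is one plausible way such pre-programmed behavior could be organized. The sketch below only illustrates the idea; the trigger codes and expression names are invented, not taken from Thinkway’s firmware.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical trigger sources and expression names. */
typedef enum { TRIG_SOUND, TRIG_IR_CODE } trigger_t;

typedef struct {
    trigger_t   trigger;     /* which sensor fired */
    uint8_t     code;        /* which sound/IR code was decoded */
    const char *expression;  /* pre-programmed head action to play */
} action_entry_t;

static const action_entry_t action_table[] = {
    { TRIG_SOUND,   0x01, "look toward sound" },
    { TRIG_IR_CODE, 0x10, "curious head tilt" },
    { TRIG_IR_CODE, 0x11, "surprised eye pop" },
};

/* Play the pre-programmed expression matching the decoded trigger. */
static void dispatch(trigger_t trig, uint8_t code)
{
    for (size_t i = 0; i < sizeof action_table / sizeof action_table[0]; i++)
        if (action_table[i].trigger == trig && action_table[i].code == code)
            printf("play expression: %s\n", action_table[i].expression);
}

int main(void)
{
    dispatch(TRIG_IR_CODE, 0x10);  /* e.g., a remote sends IR code 0x10 */
    return 0;
}
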
“WALL-E uses five motors to direct the head alone. One motor is used to
make the head nod up and down; one is used for left to right head
motion/turns; still another actuates the head’s opening and closing to form the
expressions. The final two open and close each of WALL-E’s eyes,” explains Chan.
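
In firmware terms, those five motors might be addressed through an index like the one below; the identifiers are illustrative, but the roles match Chan’s description.

/* The five head motors Chan describes, under illustrative names. */
typedef enum {
    MOTOR_HEAD_NOD = 0,   /* nods the head up and down */
    MOTOR_HEAD_TURN,      /* turns the head left and right */
    MOTOR_HEAD_EXPR,      /* opens/closes the head to form expressions */
    MOTOR_EYE_LEFT,       /* opens and closes the left eye */
    MOTOR_EYE_RIGHT,      /* opens and closes the right eye */
    HEAD_MOTOR_COUNT      /* = 5 */
} head_motor_t;
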
WALL-E uses two motors to control the arms, wrists, and hands. “Each motor
controls an arm’s up-and-down motion, wrist turns, and hand opening and
closing (once the arms are active). The arms are time-sharing when commanded
by one of the main IC chips on board WALL-E,” continues Chan.
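
The “time sharing” Chan mentions suggests each arm motor serves one mechanism at a time, with the main chip sequencing the requests. The sketch below illustrates that reading only; the function names and timings are hypothetical.

#include <stdio.h>

/* One queued function at a time per arm motor. */
typedef enum { ARM_LIFT, WRIST_TURN, HAND_GRIP } arm_function_t;

static const char *function_name[] = { "arm lift", "wrist turn", "hand grip" };

/* Run one function on the given arm motor, then release it for the next request. */
static void run_arm_function(int motor, arm_function_t fn, int duration_ms)
{
    printf("motor %d: %s for %d ms\n", motor, function_name[fn], duration_ms);
    /* ... drive the motor, wait duration_ms, then stop ... */
}

int main(void)
{
    run_arm_function(0, ARM_LIFT, 500);   /* raise the left arm */
    run_arm_function(0, HAND_GRIP, 300);  /* then close the left hand */
    return 0;
}
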
For the body and track motions, WALL-E uses three motors. One motor
controls the right track while another controls the left. A third motor tilts
the body forward and backward. “The body and
track motions can be interactively triggered by the audio and motion sensors and
by pre-coded digital IR signals. The internally pre-programmed actions also generate
commands from the main 16-bit chip,” Chan concludes.
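
With one motor per track, turning comes from running the tracks at different speeds or in opposite directions. A minimal differential-drive mixer, using made-up speed units, might look like this:

#include <stdio.h>

/* Minimal differential-drive mixer: one motor per track, as described above.
 * The speed range (-100..100) and the mixing formula are illustrative only. */
static void drive(int forward, int turn)
{
    int left  = forward + turn;
    int right = forward - turn;
    printf("left track: %d  right track: %d\n", left, right);
}

int main(void)
{
    drive(50, 0);   /* straight ahead */
    drive(0, 40);   /* pivot right: tracks run in opposite directions */
    return 0;
}
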
Photo captions:

Three motion sensors in the front and one in the back (lower, center) trigger WALL-E’s programmed “curiosity.” “The CdS motion sensor is a special resistor that responds to changes in light in the surrounding environment, which WALL-E can interpret and then react to,” says Albert Chan, Thinkway Toys CEO and President.

WALL-E’s IR signals bounce off an object so WALL-E can determine where and how far away an object is.

WALL-E’s four audio sensors (microphones placed in the front, back, and on each side of the robot) enable him to hear voices and sounds. When WALL-E recognizes voice patterns, he can respond verbally using his TalkBack feature. WALL-E does this by isolating the origins of varying sounds; he then uses a Digital Signal Processing (DSP) unit to calculate the abstracted sound’s location of origin in order to respond to it. “WALL-E uses his DSP and motion sensors to locate and follow people by the sounds that they make. WALL-E recognizes a single, short, discrete sound such as a hand clap and determines the sound’s location, whether to the right, left, or directly in front of him,” continues Chan.

Children with Ultimate WALL-E, showing the robot’s actual size.
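
As a rough illustration of the sound-localization behavior Chan describes, the sketch below picks a direction by comparing the four microphone channels. WALL-E’s actual DSP algorithm is not published; the channel layout and the simple loudest-channel comparison here are assumptions.

#include <stdio.h>

/* Four microphone channels: front, back, left, right. */
enum { MIC_FRONT, MIC_BACK, MIC_LEFT, MIC_RIGHT, MIC_COUNT };

/* Pick a direction by assuming the mic nearest a short, discrete sound
 * (such as a hand clap) reports the strongest level. */
static const char *locate_clap(const int level[MIC_COUNT])
{
    int strongest = MIC_FRONT;
    for (int m = 1; m < MIC_COUNT; m++)
        if (level[m] > level[strongest])
            strongest = m;

    switch (strongest) {
    case MIC_LEFT:  return "sound to the left";
    case MIC_RIGHT: return "sound to the right";
    default:        return "sound in front (or behind)";
    }
}

int main(void)
{
    int clap_levels[MIC_COUNT] = { 40, 10, 85, 20 };  /* loudest on the left mic */
    printf("%s\n", locate_clap(clap_levels));
    return 0;
}
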