Twin Tweaks ...
TESTING THE MODULE WITH A PICTURE.
The Robot With a Face
After becoming thoroughly acquainted with the smart
display modules, it was time to figure out how to
implement them with a robot. A lot of the more intuitive
ways to implement the displays would be as just that —
passive displays. That, however, didn’t seem to take full
advantage of the smart nature of these devices.
We thought the perfect way to incorporate the smart
displays into a robot would be to have them respond
whimsically to sensor input. Simple displays and LEDs are
often used for such a purpose as a debugging tool:
when a button gets pressed or when an IR sensor reads
high or low, an LED can come on to indicate that the
sensor is working. Something like that might be one useful
implementation of the smart displays. Instead of adding an
LED for every sensor your robot uses, you can simply add
more indicators to the display. Dragging
and dropping additional indicators would
be a lot easier than wiring up and
powering another LED for every sensor
you want to add.
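As a rough sketch of that idea (in plain C++; setIndicator() is a hypothetical stand-in for however the display actually updates one of its on-screen indicator objects), the logic amounts to mirroring each sensor reading onto its indicator:

```cpp
#include <array>
#include <cassert>

// Each sensor gets one on-screen indicator instead of its own LED.
constexpr int kNumSensors = 4;

// Stand-in for the display's indicator widgets.
std::array<bool, kNumSensors> indicatorState{};

// Hypothetical call; a real module would be told to redraw the widget.
void setIndicator(int index, bool lit) {
    indicatorState[index] = lit;
}

// Mirror every sensor reading onto its indicator in one pass.
void updateIndicators(const std::array<bool, kNumSensors>& sensors) {
    for (int i = 0; i < kNumSensors; ++i) {
        setIndicator(i, sensors[i]);
    }
}
```

With the indicators living on the display, adding a fifth sensor means bumping kNumSensors rather than soldering in another LED.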
We thought a more whimsical take
on this classic implementation would be
to put a face onto the display that would
react to sensor input. Our initial vision
was a face that used both screens — the
smaller OLED module could show a
cycloptic eye, and the widescreen LCD
screen would show an expressive mouth.
Our goal was to have the bot change
expressions in response to sensor input.
When a touch sensor was pressed,
the eye could glance over to the sensor
and the mouth could express a grimace
(or joy, if you prefer for your robot to
appreciate tactile interaction). We also
thought that an infrared sensor could
provide some amusing possibilities —
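Whatever the sensors end up being, the expression logic itself is just state selection. A minimal sketch, with the face states and the chooseFace() helper entirely our own invention:

```cpp
#include <cassert>

// Face states chosen from sensor input; the display would render
// whichever eye and mouth images correspond to each value.
enum class Eye   { Forward, GlanceLeft, GlanceRight };
enum class Mouth { Neutral, Grimace, Joy };

struct Face { Eye eye; Mouth mouth; };

// touchLeft/touchRight: touch sensors on either side of the bot.
// Glance toward the touch, and grimace (or smile, if the robot
// appreciates tactile interaction).
Face chooseFace(bool touchLeft, bool touchRight, bool likesTouch) {
    Face f{Eye::Forward, Mouth::Neutral};
    if (touchLeft || touchRight) {
        f.eye   = touchLeft ? Eye::GlanceLeft : Eye::GlanceRight;
        f.mouth = likesTouch ? Mouth::Joy : Mouth::Grimace;
    }
    return f;
}
```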
Implementing the screens on a mobile robot, however,
would take a little bit of problem solving. The difficulty
had its origin in power requirements. Some elements, like
sensors, are not overly power hungry; they usually take
about five volts or less. In fact, the OLED and LCD screens
themselves operate off of five volts. Both modules also
come equipped with a host of digital ins/outs just begging
for some sensors.
It looked like things were shaping up to be a five volt
party, but something was missing. Sensors are a critical part
of a robot, but in our minds an ideal robot should be
mobile. The OLED and LCD screens, however, do not have
the capability to source the power necessary to drive
motors (nor do they have motor drivers).
At first blush, this didn’t seem like much of an issue.
Since the smart displays don’t have motor drivers anyway,
chances are that you’ll want to use some sort of brain
meant for a robot. That brain will likely be accompanied
by a battery with the juice to quench the thirst of
power-hungry motors. The modules are smart, but two brains are
better than one, right?
Two brains are fine and dandy, but then the project
would become something more like two robots
Frankensteined together. What we wanted to achieve
was a harmonious and symbiotic whole — like Eddie Brock
and the Venom Symbiote (well, more harmonious than
that). And just like a good symbiote, we wanted both
organisms to use the same power source and communicate
with each other.
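One plausible shape for that communication (the one-byte message format here is our own invention, not anything either module defines) is the robot's brain packing its sensor readings into a byte and shipping it to the display over a serial line:

```cpp
#include <cassert>
#include <cstdint>

// Pack up to eight sensor bits into one byte for a serial link
// between the robot brain and the display.
uint8_t packSensors(const bool sensors[8]) {
    uint8_t msg = 0;
    for (int i = 0; i < 8; ++i) {
        if (sensors[i]) {
            msg |= static_cast<uint8_t>(1u << i);
        }
    }
    return msg;
}

// The display side unpacks one bit per sensor.
bool sensorBit(uint8_t msg, int index) {
    return (msg >> index) & 1u;
}
```

A single byte keeps the link trivial: the display just unpacks the bits and updates the eye and mouth accordingly.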
70 SERVO 03.2013