food container, and a green Christmas tree cookie cutter as
shown in Figure 2. When one of the objects is detected,
the puppet will track the object with its eye and head
movements. Once the object has been centered in the
puppet’s field of view, the puppet will verbally announce
the object that it sees (for example: “I see a red veggie
…”).
In order to give the puppet more personality and to
provide feedback for the human operator, the puppet will
use its eyebrows to indicate the current visual status. If
no object is seen, the eyebrows will lower, giving the
impression that the puppet is agitated. When an object
is spotted, the eyebrows will be raised to imitate an
expression of interest. When the object has been tracked
and centered in the field of vision, the eyebrows will
assume a neutral position.
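In pseudocode form, this eyebrow feedback is a simple mapping from visual status to expression. The sketch below is in Python rather than RobotBASIC, and the returned strings merely stand in for the actual servo moves:

```python
# Hypothetical sketch of the puppet's eyebrow feedback. The real puppet
# drives eyebrow servos; here we just return the position as a string.
def eyebrow_position(object_seen, object_centered):
    if not object_seen:
        return "lowered"   # nothing in view: the puppet looks agitated
    if object_centered:
        return "neutral"   # object tracked and centered
    return "raised"        # object spotted: an expression of interest
```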
The actions described above are best appreciated when
seen in real time, so we have created a video demonstrating
the puppet responding to various objects. The video is
posted on YouTube at the following link: www.youtube.
In order to perform the desired operations, the robot’s
control program needs to locate a specific color within the
images captured by the camera on its head. Let’s assume
that we have a routine that divides an image into a 5x5
grid and gives us the x,y coordinates (each of which ranges
from 0 to 4) of the sector within the image that contains
the specified color. Once we have the values for x and y,
we can easily determine which way the robot should turn
so as to track a desired object (color) using a statement
such as
if x<2 then gosub LookLeft
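To picture what such a color-locating routine might do internally, here is a hypothetical Python sketch (not the RobotBASIC implementation): the image is treated as a list of rows of (r, g, b) tuples, and the routine returns the grid sector containing the most pixels of the target color, or None if the color is absent:

```python
# Hypothetical sketch of the 5x5 grid routine described in the text.
# Returns the (x, y) sector (each 0..4) that holds the most pixels of
# the target color, or None if the color is not found at all.
def find_color_sector(image, color, grid=5):
    height, width = len(image), len(image[0])
    counts = {}
    for row_idx, row in enumerate(image):
        for col_idx, pixel in enumerate(row):
            if pixel == color:
                # Map the pixel position to its grid sector.
                sector = (col_idx * grid // width, row_idx * grid // height)
                counts[sector] = counts.get(sector, 0) + 1
    if not counts:
        return None
    return max(counts, key=counts.get)
```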
The LookLeft subroutine can perform any actions you
want. In our case, it raises the puppet’s eyebrows and turns
the eyes left first, then the head, creating a lifelike
movement. Our puppet’s movements are controlled by
servomotors as described in the previous article.
RobotBASIC also provides commands for accessing serial,
parallel, USB, and Bluetooth ports, so you can
move your robot using any motor or actuator you wish.
If we add a LookRight subroutine that is executed
whenever x>2, we have the basis for a tracking system.
Placing these two if statements within a loop will make the
puppet continue to move its head (and the camera) to the
left or right based on where the color is seen.
Similarly, if we add LookUp and LookDown
subroutines that are executed based on the value of y, our
robot can track an object (color) both horizontally and
vertically. Another if statement can determine when the
object has been centered in the field of view (when both
x and y are equal to 2) and initiate the desired actions.
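Putting the horizontal and vertical checks together, one pass through the tracking loop can be sketched as follows. This is a Python stand-in for the LookLeft/LookRight/LookUp/LookDown subroutines (whether y &lt; 2 means up or down depends on your camera's coordinate convention), with the chosen action returned as a string:

```python
# Sketch of one iteration of the tracking loop described in the text.
# sector is the (x, y) pair from the color-locating routine, or None.
def track_step(sector):
    if sector is None:
        return "lower_eyebrows"   # nothing seen: look agitated
    x, y = sector
    if x < 2:
        return "look_left"
    if x > 2:
        return "look_right"
    if y < 2:
        return "look_up"          # assumed convention: y < 2 is up
    if y > 2:
        return "look_down"
    return "announce"             # x == y == 2: object is centered
```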
In our application, we have the robot continuously look
for all three of the previously mentioned colors and track
the one it finds. When the color is centered in the field of
view, the puppet verbally announces the object that it sees
(based on the color that it has been tracking). The puppet
can recognize any number of objects, as long as they have
distinct colors. If, for example, the specified color is flesh
tone, the puppet would be able to track a face.
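That continuous multi-color search can be sketched as a loop over a small table of colors. The colors and phrases below are hypothetical stand-ins (the article's actual objects and announcements differ), and find_sector represents whatever routine locates a color in the current image:

```python
# Hypothetical table mapping search colors to spoken phrases.
OBJECTS = {
    (255, 0, 0): "a red object",
    (0, 255, 0): "a green object",
    (0, 0, 255): "a blue object",
}

# Scan for each color in turn and return the first one found,
# along with the phrase the puppet would announce once it is
# centered; return None if no listed color is in view.
def find_first_object(find_sector, objects=OBJECTS):
    for color, phrase in objects.items():
        if find_sector(color) is not None:
            return color, phrase
    return None
```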
RobotBASIC makes all of the above easy to implement
by providing a function to capture pictures called
CaptureImage(), and a command that can track colors
called BmpFindClr. The parameters for BmpFindClr
provide tremendous flexibility when specifying how the
search color should be located (all of RobotBASIC’s image
processing commands provide a similar flexibility). Let’s
examine the command and its parameters. The parameters
in italic are optional.
BmpFindClr FileName, Color, Var1, Var2, ClrTol, GridTol,
• FileName: This is the name (and path) of the BMP image
file. If the filename is an empty string, then the file is
assumed to be on the Windows clipboard. Use
RobotBASIC’s Capture functions to acquire the image.
• Color: This is the RGB color to be searched for. If you
prefer, RobotBASIC has functions that allow you to
specify colors using levels of their red, green, and blue
components.
• Var1: This variable will be set by the command to the
FIGURE 2. The puppet can recognize and track
these items by their color.