FIGURE 3
This code is written for a camera set to 160x120 resolution, a pan/tilt
servo turret, and a Lynxmotion SSC-32 servo controller.

' Initialize starting servo values
' Make sure these variables are set to their respective channels in the
' SSC-32 module
pan = GetVariable("PAN_SERVO")
tilt = GetVariable("TILT_SERVO")

' get the size (width or height) of the current bounding box
size = GetVariable("COG_BOX_SIZE")

' if it equals 0, no object was detected
if size <> 0 then
  ' get the horizontal center of gravity
  cogX = GetVariable("COG_X")
  ' pan left (the image center is 80 pixels; 70-90 is a deadband)
  if cogX < 70 then
    pan = pan - 20
  end if
  ' pan right
  if cogX > 90 then
    pan = pan + 20
  end if
  ' get the vertical center of gravity
  cogY = GetVariable("COG_Y")
  ' tilt down (the vertical center is 60 pixels; 50-70 is a deadband)
  if cogY < 50 then
    tilt = tilt - 10
  end if
  ' tilt up
  if cogY > 70 then
    tilt = tilt + 10
  end if
  ' re-center a servo if its pulse width leaves the valid
  ' 500-2500 microsecond range
  if pan > 2500 then pan = 1500
  if pan < 500 then pan = 1500
  if tilt > 2500 then tilt = 1500
  if tilt < 500 then tilt = 1500
  ' write the new positions back for the SSC-32 module to send
  SetVariable "PAN_SERVO", pan
  SetVariable "TILT_SERVO", tilt
end if
See Figure 4 for an example of
a dance sequence I made (when
I was very, very bored).
SEQ allows for control of my
robot’s Sabertooth 2x5 motor
controller, as it responds to PWM
signals just as a servo does. I
created various steps for moving
the robot forward, back, left, and
right at varying speeds – and
then mapped those to my arrow
keys. I was also able to control
the pan/tilt movement of the
head to look around the room.
Once this was accomplished, I
could easily drive the robot
around my house through an
RDP connection with a streaming
video feed from its webcams — overall, a very easy
implementation of telepresence.
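Because the Sabertooth reads servo-style pulses in its R/C
mode, driving it from a script looks just like commanding a
servo. The sketch below illustrates the idea; LEFT_MOTOR and
RIGHT_MOTOR are hypothetical stand-ins for whatever channels
are mapped in the SSC-32 module, and the pulse widths assume
the Sabertooth's usual R/C convention of roughly 1,500
microseconds for stop.

' sketch only: drive commands expressed as servo pulse widths
' LEFT_MOTOR and RIGHT_MOTOR are hypothetical variable names
' full speed forward on both channels
SetVariable "LEFT_MOTOR", 2000
SetVariable "RIGHT_MOTOR", 2000
' spin in place: one side forward, the other in reverse
SetVariable "LEFT_MOTOR", 2000
SetVariable "RIGHT_MOTOR", 1000
' all stop: center both channels
SetVariable "LEFT_MOTOR", 1500
SetVariable "RIGHT_MOTOR", 1500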
Adding a voice to the robot was also a simple task,
thanks to its onboard PC. The default text-to-speech engine
that comes with Windows is fairly primitive, to the point of
being nearly unintelligible. The company Cepstral offers high-
quality text-to-speech engines that work as drop-in upgrades
for the stock Microsoft engine. I chose a voice called “David”
as it was clear and easy to understand. The Cepstral voice
engines also come with a simple text-to-speech app that
reads aloud whatever is typed into the text box. In the first
article, an LED VU meter was fashioned into a mouth, giving
the robot’s voice a visual display as spoken syllables are
synchronized with the LEDs. This setup mimics the original
technology used for the Hollywood Johnny 5’s mouth.
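Because Cepstral voices register as standard Windows SAPI
voices, a few lines of VBScript are enough to select one and
speak through it. This is only a rough sketch of the approach;
the exact voice description string will vary by installation.

' rough sketch: speak through SAPI from VBScript; assumes a
' voice containing "David" in its description is installed
Set voice = CreateObject("SAPI.SpVoice")
' search the installed SAPI voices for one matching "David"
For Each v In voice.GetVoices
  If InStr(v.GetDescription, "David") > 0 Then
    Set voice.Voice = v
  End If
Next
' speak a test phrase through the selected voice
voice.Speak "Number 5 is alive!"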
Autonomy and Machine Vision
As I stated before, I am not a programmer. Full-blown
autonomy is a technology still being explored by top
researchers around the world; however, a small piece of
that pie would satisfy me. Having built small walkers and
rovers in the past that used basic range sensors to avoid
obstacles, I wanted to explore new territory with this
project. Given that my robot was already capable of vision,
I chose to explore machine vision for the first time.
Roborealm is a free machine vision software package
that can be interfaced to a wide variety of cameras and
electronic hardware. It handles a lot of the heavy coding
work required for vision processing and allows novice
programmers to focus on the concept and implementation.
In Roborealm, a camera’s video feed is brought into
an interface in which various filters can be applied to a
processing pipeline. These image filters can accomplish a
variety of different tasks, such as filtering out specific colors,
sharpening or softening an image, reducing image noise,
and drawing outlines around objects, to name just a few.
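As a concrete illustration, a script module placed at the end
of such a pipeline can react to what the filters found. This
sketch assumes a Center of Gravity module runs earlier in the
pipeline (as in Figure 3); OBJECT_SIDE is an invented variable
name used purely for demonstration.

' sketch of a script module at the end of a Roborealm pipeline
' assumes a Center of Gravity module has already published
' COG_X and COG_BOX_SIZE for the filtered image
size = GetVariable("COG_BOX_SIZE")
if size <> 0 then
  ' an object survived the filters; record which half of the
  ' 160-pixel-wide frame it occupies so a later module (or an
  ' external program) can read it
  cogX = GetVariable("COG_X")
  if cogX < 80 then
    SetVariable "OBJECT_SIDE", "left"
  else
    SetVariable "OBJECT_SIDE", "right"
  end if
else
  SetVariable "OBJECT_SIDE", "none"
end if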
Refer to Figure 3 for a diagram of the Roborealm GUI and a
preview of what an object looks like using a filter to remove