asking the user to touch the object. This forced our camera
to auto-adjust its light levels – something we found
necessary when the tablet was used outdoors.
At this point, we had planned for the program to
record the current GPS coordinates to serve as a destination
point. Unfortunately, we found that coordinates obtained
this way were often still in transition. For that reason, we
used the program in Figure 2 to determine the coordinates
of our destination and then hard-coded them into the program.
Once a destination object and its location have been
established, the program directs the user to move the robot
to a remote location. If your robot is capable of
maneuvering in an outdoor environment, then any open
field can be used to test the program. Our RB-9 robot is an
indoor vehicle, so we did our final testing at a quiet street
intersection (see Figure 9).
After the robot has been relocated, touching the screen
tells it to start searching for the object (color) it was
previously shown. Let’s turn our attention to the portion of
Figure 8 that begins that search (see comment //use GPS
and compass to move to previous location).
The basic algorithm for this section is that the robot
will use the current and destination GPS coordinates to
calculate a compass heading that will cause the robot to
face the destination. Although longitude and latitude are
angular measurements, they can be thought of as simple
x,y coordinates. The differences between the destination
and current coordinates can be used along with the arc
tangent function to calculate the heading.
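The article's program is written in RobotBASIC, but the heading math itself is easy to restate. Treating latitude as y and longitude as x (the flat-plane simplification described above), the arc tangent of the coordinate differences gives a compass bearing, with the arguments swapped so that 0 degrees points north rather than east. A minimal Python sketch:

```python
import math

def heading_to(cur_lat, cur_lon, dest_lat, dest_lon):
    """Compass heading in degrees (0 = north, 90 = east) from the
    current position to the destination, treating latitude and
    longitude as a simple flat x,y plane as the article does."""
    d_lat = dest_lat - cur_lat   # the "y" difference
    d_lon = dest_lon - cur_lon   # the "x" difference
    # atan2(x, y) instead of the usual atan2(y, x) rotates the
    # result so 0 degrees is north; % 360 keeps it in 0..360
    return math.degrees(math.atan2(d_lon, d_lat)) % 360
```

For example, a destination due east of the robot yields a heading of 90 degrees, and one due south yields 180.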
After the robot turns to the calculated heading, it
should move forward while using the compass to ensure
that it stays on course as described in last month’s article.
During this movement, the robot will monitor the GPS to
see if the current coordinates change. Depending on the
accuracy of your GPS, your robot might have to move 20
or 30 feet for this to happen.
When the coordinates change, the robot will calculate
a new heading and resume its movement
toward the destination. This constant
recalculation allows the robot to recover
from an occasional faulty GPS reading.
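The structure of that loop can be sketched in Python, with the actual motion commands omitted and the GPS fix stream replayed from a list so the logic can be followed on its own. The key points from the text are all here: readings are ignored until the fix actually changes, the heading is recomputed from each new fix, and the loop ends when the destination is within an arrival threshold (the 0.0001-degree value below is an invented example):

```python
import math

def heading_between(cur, dest):
    # same flat-plane arc-tangent heading described in the article
    return math.degrees(math.atan2(dest[1] - cur[1], dest[0] - cur[0])) % 360

def drive_to(dest, gps_fixes, close_enough=0.0001):
    """Replay a sequence of (lat, lon) GPS fixes and record the
    heading the robot would command after each *changed* fix.  A
    real robot would issue turn/forward commands instead of
    collecting headings in a list."""
    headings = []
    last_fix = None
    for fix in gps_fixes:
        if fix == last_fix:
            continue                 # GPS hasn't updated yet; keep driving
        last_fix = fix
        if math.hypot(dest[0] - fix[0], dest[1] - fix[1]) < close_enough:
            break                    # close enough: destination reached
        headings.append(heading_between(fix, dest))
    return headings
```

Because a bad fix only steers the robot wrong until the next fix arrives, this constant recalculation gives the self-correcting behavior the article describes.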
The subroutine Angle is responsible
for calculating the new compass heading
the robot should use. It also checks to see
if the destination has been reached. The
calculations should work anywhere in the world.
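The Angle subroutine itself is in the article's RobotBASIC listing; its second job, the arrival test, amounts to a distance check against a small radius. A rough Python restatement (the 0.00005-degree radius is an invented example, roughly 5 meters of latitude):

```python
import math

def reached(cur_lat, cur_lon, dest_lat, dest_lon, radius=0.00005):
    """True once the straight-line distance to the destination,
    measured in raw coordinate units on the same flat x,y plane
    used for the heading, drops below the arrival radius."""
    return math.hypot(dest_lat - cur_lat, dest_lon - cur_lon) < radius
```

The radius should be chosen to match the accuracy of your GPS; too small a value and the robot may circle a destination it can never quite "reach."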
After the robot arrives at the
destination coordinates, the robot must
search the area for the object in question.
This action starts at the comment //use
camera to find object and is based on our
previous camera example.
If the object is not immediately seen
with the camera, the robot tries to find it by going through
a series of maneuvers that move it forward while turning to
both the left and right. You can help your robot see the
object by making it reasonably large. We used two toy
buckets to create an easy-to-find, 20-inch-high bright orange
object as shown in Figure 9. If the object is seen during
these maneuvers, the robot uses the camera to move toward it.
A simple state machine manages both of these actions
in the same loop so that the robot can reacquire the object
if it is lost due to unexpected turns from a bumpy ride.
Utilizing simple principles like this can greatly improve the
reliability of an outdoor robot.
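The state machine in the article lives inside the RobotBASIC loop of Figure 8; this Python sketch of a single loop iteration (the state names and action strings are invented placeholders for real motion commands) shows the reacquisition idea. Whenever the camera loses the object, the robot simply drops back into the search state instead of blindly continuing its approach:

```python
SEARCH, APPROACH = "search", "approach"

def step(state, object_visible):
    """One pass through the combined search/approach loop.
    Returns (next_state, action).  Because the decision depends
    only on whether the camera currently sees the object, a bump
    that turns the robot away is handled automatically."""
    if object_visible:
        return APPROACH, "steer toward object"
    # object not in view: (re)start the forward-plus-sweep maneuvers
    return SEARCH, "sweep left and right while moving forward"
```

Running the same check every iteration is what makes the recovery free; there is no special "lost the object" code path to get wrong.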
The algorithm for using the camera to approach the
object is similar to how we used the compass to maintain a
given heading. If the object is directly in front of the robot,
the object will be centered horizontally in the image, giving
an x coordinate of four in our 9x9 matrix. If the coordinate
is less than four, the robot uses an rCommand to gently
turn to the left. Sector coordinates greater than four will
turn the robot to the right. If desired, the sharpness of the
robot’s turn can be adjusted based on how far the tracked
object is from the center of the screen.
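That steering rule reduces to one line. In this Python sketch, negative results mean a left turn and positive a right turn; the gain of 2 is an invented example value, and raising it gives the sharper turns the article mentions:

```python
def camera_turn_rate(x_sector, center=4, gain=2):
    """Turn command from the tracked object's horizontal sector in
    the 9-sector-wide camera grid (0..8, center = 4).  The error
    from center sets both the direction and, scaled by gain, the
    sharpness of the turn."""
    return (x_sector - center) * gain
```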
Our robot assumes it has found the object if the
camera sees the object dead ahead and the robot’s ranging
sensor indicates a close object directly in front of the robot.
If this does not happen within five maneuvers, the program
indicates that it cannot find the object. Depending on your
equipment and environment, you may need to enhance the
search algorithm.
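The found/give-up logic can be sketched in Python with the sensor readings faked as booleans, one (camera sees the object dead ahead, ranging sensor reports something close) pair per maneuver. As in the article's program, success requires both sensors to agree, and the search is abandoned after five maneuvers:

```python
def search_for_object(observations):
    """observations: one (object_dead_ahead, range_close) pair per
    maneuver.  Returns True only if both conditions are met within
    the first five maneuvers; otherwise the robot gives up."""
    for object_dead_ahead, range_close in observations[:5]:
        if object_dead_ahead and range_close:
            return True
    return False
```

Requiring the ranging sensor to confirm the camera guards against the robot declaring victory on a distant patch of similar color.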
As you can see, the computational and sensory
capabilities of a Windows 8 tablet can make even this
project easy to understand and implement. You can obtain
the programs discussed in this article from the In the News
tab at www.RobotBASIC.org.
Next month, we will show how voice recognition, text-to-speech, and facial graphics can add a little personality to
the interface between you and your tablet-based robot. SV
62 SERVO 02.2014