guidance node. Figure 4 shows how
we connected the VEXPro to ROS.
Rosserial_python connects via
serial or Wi-Fi to the VEXPro robot
controller. You have source-level debugging under the Integrated Development Environment (IDE) on Windows for quick debugging and single-stepping through robot code running on the VEXPro. By far, the largest and
most challenging part of writing the
software was robot control on the
VEXPro — controlling the numerous
motors and encoders. You can find
the source in the jrover/vexpro
directory of the DPRG ROS source code repository; however,
your robot is almost certainly different. If you have software
to run your robot, you need only subscribe to Twist messages, via rosserial or otherwise. Use the Twist
messages to control your robot drive motors if you want to
duplicate just the lane-following part of this design.
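To make the Twist hookup concrete, here is a minimal sketch in Python (rospy) of a node that subscribes to Twist messages and mixes them into left and right drive speeds for a differential-drive base. The topic name, track width, and the send_motor_speeds() stub are illustrative assumptions, not code from the jrover repository.

#!/usr/bin/env python
# Minimal sketch: subscribe to Twist and mix it into left/right drive speeds.
# Topic name, track width, and send_motor_speeds() are assumptions, not the
# jRover source.
import rospy
from geometry_msgs.msg import Twist

TRACK_WIDTH = 0.4  # meters between left and right wheels (assumed)

def send_motor_speeds(left, right):
    # Placeholder: a real robot would command its motor controller here
    # (for jRover, that path runs over rosserial to the VEXPro).
    rospy.loginfo("left=%.2f m/s  right=%.2f m/s", left, right)

def on_twist(msg):
    # Differential-drive mixing: linear.x is forward speed,
    # angular.z is the turn rate about the vertical axis.
    left = msg.linear.x - msg.angular.z * TRACK_WIDTH / 2.0
    right = msg.linear.x + msg.angular.z * TRACK_WIDTH / 2.0
    send_motor_speeds(left, right)

if __name__ == "__main__":
    rospy.init_node("twist_to_motors")
    rospy.Subscriber("cmd_vel", Twist, on_twist)
    rospy.spin()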
FIGURE 6. Contest day in the robot pit,
on the shores of Lake Lewisville
near Dallas, TX.
System Testing
We set up a test course in a nearby park, and set
jRover on its way down the track. On sunny days, we had
good success getting CMVision to see the flags. Then, the
weather turned overcast, and the inexpensive webcams we
were using completely failed — every model we tried whited
out. There was insufficient contrast in anything we pointed
them at. Finally, in desperation, we tried a Sony PS3 Eye
camera and it worked beautifully. The better camera
provided clear images at dusk, as well as in full sunlight
that saturated the less expensive cameras. We learned to
use a good camera.
We struggled to get the robot to stay in the center of
the course. We adjusted the gain of the control loop to no
avail — the robot would veer to the same side of the
course, and eventually run into one of the course-edge
markers or turn completely out of the course. After much
experimentation, we discovered the importance of boresighting the camera — that is, aligning the direction the camera points with the direction the robot runs when it’s told to drive straight. We hadn’t understood how
the system would work when the camera center was not
aligned with robot “straight.” Figure 5 shows what
happened in this situation.
When the robot was in pose A, the camera saw two
flags and the guidance node commanded the robot to go
straight. The robot headed toward the side of the course because the camera was misaligned. If the robot happened to straighten up, the camera would turn with it and look across the course; the guidance node would then command the robot to go left, driving it further
toward the edge of the course. The error grew without
bound, and the closed-loop control wouldn’t correct it.
In pose B, the camera still saw
two flags, but it was now looking at
an angle across the course. The
guidance software again said go
straight, but the robot was very near
the edge of the course and would
easily wander out of bounds as the
right flag slipped out of its field of
view and the presumed center of the
course moved to the left. After we
aligned the camera with robot “straight,” the robot drove down the center of the course.
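To see why the misalignment defeated the closed loop, it helps to write out a simple proportional steering law on the flag positions in the image. The sketch below is not the jRover guidance node; the gain, resolution, and offset values are assumed, and serve only to show how an un-boresighted camera adds a constant bias that no amount of gain tuning can remove.

# Sketch only (assumed gain, resolution, and names); not the jRover
# guidance node. Proportional steering on the midpoint of the two flag blobs.
IMAGE_WIDTH = 640    # pixels; assumed camera resolution
KP = 0.005           # rad/s of turn per pixel of error; assumed gain

def steering_command(left_flag_x, right_flag_x, boresight_offset_px=0.0):
    # Pixel column the camera believes is the lane center.
    lane_center = (left_flag_x + right_flag_x) / 2.0
    # If the camera is not boresighted, the lane center appears shifted by a
    # constant number of pixels even when the robot is perfectly centered.
    # Compensating for that shift is what physically aligning the camera
    # with robot "straight" accomplishes.
    error = lane_center - (IMAGE_WIDTH / 2.0 + boresight_offset_px)
    return KP * error   # used as angular.z in the outgoing Twist

# A camera skewed by the equivalent of 40 pixels: the robot is centered in
# the lane, yet an uncompensated controller still commands a turn.
print(steering_command(280, 440))                          # nonzero: veers
print(steering_command(280, 440, boresight_offset_px=40))  # zero: corrected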
Contest day was upon us by the
time we made the lane-following logic
work on the straight-away. Next, we needed to make it turn
the first corner. We marked the corner by wrapping it in red tape and distinguished it by blob size. This enabled
jRover to make it around the first corner, but the vision
system was then fooled by the large area of flags at the
hairpin bend that followed.
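As a rough sketch of the blob-size trick, the snippet below sorts CMVision blobs into flags and corner candidates by area. The topic name and the area threshold are assumptions, not the contest code.

#!/usr/bin/env python
# Sketch: distinguish the red-taped corner from the course flags by blob area.
# The topic name and area threshold are assumptions, not the contest code.
import rospy
from cmvision.msg import Blobs

CORNER_AREA_MIN = 4000   # pixels^2; assumed cutoff between flag and corner

def on_blobs(msg):
    # CMVision reports one entry per tracked color blob; large blobs are
    # treated as the taped corner, smaller ones as ordinary flags.
    corners = [b for b in msg.blobs if b.area >= CORNER_AREA_MIN]
    flags = [b for b in msg.blobs if b.area < CORNER_AREA_MIN]
    rospy.loginfo("flags: %d  corner candidates: %d", len(flags), len(corners))

if __name__ == "__main__":
    rospy.init_node("corner_detector")
    rospy.Subscriber("blobs", Blobs, on_blobs)
    rospy.spin()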
In the final analysis, our attempt to mash up a Mars rover with a self-driving car was partly successful, but showed us plenty of opportunities to improve. Our mechanical design needs to be made stronger and more accurate, and our vision system design is too simplistic.
However, we learned a lot and laid a strong foundation on
which to build improvements for the next contest. SV
You can watch video of the robot driving down the Roborama
course at www.youtube.com/watch?v=xKXq-ZeOCnc.