FIGURE 4. rosserial_embeddedlinux block diagram.
wheel run at a different speed — depending on how far it
was from the vehicle’s center of rotation — in order to
maintain traction between the wheel and the ground. We
used the drive motor encoders for closed-loop speed control
on each wheel individually, calculating each wheel’s
required speed based on turn radius.
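As a rough illustration of that calculation (a minimal sketch with made-up wheel positions, not the actual jRover code), each wheel's speed scales with its distance from the center of rotation:

    import math

    # Hypothetical wheel offsets from the vehicle center (x forward,
    # y left), in meters; illustrative values, not jRover's geometry.
    WHEELS = {
        'front_left':  (0.20,  0.15),
        'front_right': (0.20, -0.15),
        'rear_left':   (-0.20,  0.15),
        'rear_right':  (-0.20, -0.15),
    }

    def wheel_speeds(v_center, turn_radius):
        """Scale each wheel's speed by its distance from the center
        of rotation so all wheels roll without scrubbing. turn_radius
        is measured to the vehicle center; straight-line driving
        (infinite radius) would need a special case."""
        speeds = {}
        for name, (x, y) in WHEELS.items():
            r = math.hypot(x, turn_radius - y)  # this wheel's turn radius
            speeds[name] = v_center * r / abs(turn_radius)
        return speeds

    # 0.5 m/s forward on a 1 m left turn: left wheels run slower.
    print(wheel_speeds(0.5, 1.0))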
In addition to the PS3 Eye camera and the motors with
integrated encoders, we installed sensors intended for
future use: an Android phone providing access to GPS and
a gyro, and an Xtion Pro depth-sensing camera for indoor
VSLAM use.
Software Design
Figure 3 shows the overall organization of the
software components. The vision software used CMVision,
Carnegie Mellon's blob-tracking package, which integrates
with ROS to publish a list of blobs along with their sizes
and locations. Documentation is at www.ros.org/wiki/cmvision.
CMVision can be installed with apt-get, like any
other ROS package.
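At the time, that would have been a one-line install; the exact package name depends on your ROS distribution, and ros-groovy-cmvision here is an assumption:

    sudo apt-get install ros-groovy-cmvision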
We mounted a camera on the robot and used CMVision to
detect the course boundary flags. CMVision comes with a
great configuration GUI for selecting the range of colors
that defines a “blob of interest.”
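If you want to try it, the GUI is pointed at your camera's image topic, along these lines (the invocation and topic name are from memory and may differ for your setup):

    rosrun cmvision colorgui image:=/camera/image_raw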
This was very useful because the camera reported
different colors for the flags depending on lighting
conditions: indoor, outdoor cloudy, sunny, dusk, etc.
After configuration, CMVision started publishing blob data
corresponding to the boundary flags at 10 Hz. Blob tracking
with no work!

FIGURE 5. Effect of camera/robot misalignment.

We wrote a simple ROS Python guidance
node which exerted closed-loop control over robot
direction. It subscribed to the blobs topic published by
CMVision, and looked for the furthest left and right blobs.
The guidance node steered the robot toward the midpoint
between those two outer blobs, assuming they would be the
nearest track markers seen by the camera.
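The listing below is a minimal sketch of that idea, not the actual jRoverRos code; the topic names, image width, and gains are assumptions, and it presumes cmvision's Blobs message with per-blob x coordinates:

    #!/usr/bin/env python
    # Guidance sketch: steer toward the midpoint of the outermost
    # CMVision blobs. Constants below are assumptions, not jRover's.
    import rospy
    from cmvision.msg import Blobs
    from geometry_msgs.msg import Twist

    IMAGE_WIDTH = 640      # assumed camera resolution
    TURN_GAIN = 0.005      # rad/s per pixel of error (needs tuning)
    FORWARD_SPEED = 0.3    # constant forward speed, m/s

    def on_blobs(msg, pub):
        if not msg.blobs:
            return                             # no flags in view
        left = min(b.x for b in msg.blobs)     # leftmost blob center
        right = max(b.x for b in msg.blobs)    # rightmost blob center
        error = IMAGE_WIDTH / 2.0 - (left + right) / 2.0
        cmd = Twist()
        cmd.linear.x = FORWARD_SPEED
        cmd.angular.z = TURN_GAIN * error      # turn toward midpoint
        pub.publish(cmd)

    if __name__ == '__main__':
        rospy.init_node('guidance')
        pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('blobs', Blobs, on_blobs, callback_args=pub)
        rospy.spin()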
The guidance node drove the robot by publishing Twist
messages (a standard ROS message type) to which the
robot control app running on the VEXPro controller
subscribed. In under 400 lines of Python code, we had
closed-loop control based on vision input! You can
download the code from the DPRG ROS source code
repository at https://code.google.com/p/dprg-ros-pkg.
The jRoverRos directory contains the CMVision
configuration files, and jRoverRos/bin contains the guidance
node code.
The VEXPro had connectors for electrical interfacing to
VEX motors and sensors, and it brought the power and
flexibility of a Linux software environment. It had good
networked debugging support and could run multiple
processes. However, it did not have enough horsepower to
run the full ROS environment. We ported the
rosserial_arduino package to Linux (the resulting
rosserial_embeddedlinux design is diagrammed in Figure 4),
which enabled the VEXPro software to interact with ROS
nodes running on the
laptop. The laptop ran ROS, read camera data, did the
vision processing, and ran the guidance node, while the
VEXPro ran the low-level motor and encoder software.
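On the laptop side, rosserial provides a bridge node that forwards the serial or network link into regular ROS topics. With the stock rosserial_python package, the invocation looks something like this (the device path is illustrative, and rosserial also offers a TCP transport suited to embedded Linux clients):

    rosrun rosserial_python serial_node.py /dev/ttyUSB0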
You can find links to documentation and tutorials for
rosserial for Arduino or embedded Linux at
http://ros.org/wiki/rosserial. If your robot controller uses
Arduino or embedded Linux, it is straightforward to connect
it to ROS and to have it receive speed and direction
messages from the guidance node.

A video with more details of the mechanics and system
components, and of the robot in early operational testing, can
be seen at www.youtube.com/watch?v=QT-CTaU7O8E.