camera,” explains Valtrop. He uses the NITE middleware
(software) with OpenNI to retrieve the user’s skeleton data
from the depth image. “This gives me positions and rotations of
a user’s joints. The problem is that this data is relative to the
Kinect camera, so it does not directly translate to the joint
angles that the robot should perform,” Valtrop continues.
“So, for example, I might have my right upper arm
bent at 20 degrees to the right of my torso, but my torso is
rotated 10 degrees to the right relative to the Kinect camera.
So, the Kinect tells me that my right upper arm is rotated 30
degrees to the right. Furthermore, I can ask the Kinect for the
position of my right shoulder, elbow, and hand. From that
data, I can use trigonometry to deduce the pure angle of
the elbow joint to send to the robot,” Valtrop comments.
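Valtrop doesn’t show his code, but the elbow calculation he describes can be sketched from the three joint positions alone: the angle between the upper-arm and forearm vectors falls out of the dot product. The function name and NumPy usage here are illustrative, and the coordinates stand in for the 3D joint positions returned by OpenNI/NITE.

```python
import numpy as np

def elbow_angle(shoulder, elbow, hand):
    """Bend angle at the elbow, in radians, from three 3D joint
    positions. Uses the dot product between the upper-arm and
    forearm vectors (the trigonometry Valtrop describes)."""
    upper = np.asarray(shoulder, float) - np.asarray(elbow, float)
    fore = np.asarray(hand, float) - np.asarray(elbow, float)
    cos_a = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    # Clamp to guard against floating-point values just outside [-1, 1]
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# A straight arm (collinear joints) gives about 180 degrees;
# a right-angle bend gives about 90 degrees.
print(np.degrees(elbow_angle([0, 0, 0], [0.3, 0, 0], [0.6, 0, 0])))
print(np.degrees(elbow_angle([0, 0, 0], [0.3, 0, 0], [0.3, 0.3, 0])))
```

This joint-frame angle can then be sent directly to the corresponding servo on the Nao.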
Controlling the robot’s walking is much simpler. “When
I stand up (if my torso is above 0.5 meters), the robot executes
a static stand up animation. When my body is rotated, the
robot rotates. When my body is translated forward or
backward, the robot walks in the corresponding direction.
When I step to the left or right, the robot side steps also. If
I side step a little, the robot goes slowly. If I side step a lot,
the robot moves more quickly,” illustrates Valtrop.
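The walking rules Valtrop lists map naturally onto a small dispatch function. This is a hypothetical sketch: the 0.5-meter standing threshold comes from his description, but the other thresholds and the command names are invented for illustration.

```python
def locomotion_command(torso_height, torso_yaw, torso_xz_offset):
    """Map the user's torso pose (from the Kinect skeleton) to a simple
    locomotion command for the robot, following the rules in the text.

    torso_height    -- torso joint height above the floor, in meters
    torso_yaw       -- body rotation from a calibrated neutral pose, radians
    torso_xz_offset -- (forward, sideways) displacement from the calibrated
                       "home" position, in meters

    The 0.5 m stand threshold is from the article; all other numbers
    are illustrative guesses.
    """
    forward, sideways = torso_xz_offset
    if torso_height < 0.5:
        return ("crouch", 0.0)           # below 0.5 m: user is not standing
    if abs(torso_yaw) > 0.2:
        return ("rotate", torso_yaw)     # body rotated -> robot rotates
    if abs(forward) > 0.1:
        return ("walk", forward)         # step forward/back -> robot walks
    if abs(sideways) > 0.1:
        return ("sidestep", sideways)    # magnitude sets sidestep speed
    return ("stand", 0.0)
```

Returning the raw offset alongside the command lets the caller scale speed with step size, matching the “a little means slowly, a lot means quickly” behavior.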
Valtrop uses a treadmill in his command and control
arsenal to keep himself standing in the same place. “I can
keep walking forward and the treadmill tries to bring me
back to the center position,” says Valtrop. Unfortunately,
the treadmill is simple and unidirectional, so Valtrop must
reset his own position after a rotation or side step.
The HMD unit controls only the robot’s head position.
The Wii remotes open and close the robot’s hands and
rotate its wrists. “I use this because I cannot use detailed
hand recognition with the Kinect while I am using the
complete skeleton detection,” explains Valtrop.
Taylor Valtrop with his
Nao and his cat, Lotus.
Valtrop uses a couple of different approaches to control
the Nao robot’s joints using the Kinect hardware and
software. “In both examples, linear algebra (matrix
multiplications) is vital for calculating relative rotations
between reference points to rectify the issues
mentioned previously about all of the data coming in
relative to the Kinect camera,” says Valtrop.
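As a sketch of that idea: if the tracker reports rotation matrices for the torso and a limb, both relative to the camera, the limb’s rotation relative to the torso is one matrix multiplication, because the transpose of a rotation matrix is its inverse. The helper names here are illustrative, not from Valtrop’s code; the demo reuses his 30-degree arm, 10-degree torso example from earlier in the article.

```python
import numpy as np

def yaw_matrix(theta):
    """Rotation of theta radians about the vertical (y) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def relative_rotation(r_torso, r_limb):
    """Express a limb's rotation in the torso frame instead of the
    camera frame: R_rel = R_torso^T @ R_limb."""
    return np.asarray(r_torso).T @ np.asarray(r_limb)

# Upper arm at 30 degrees relative to the camera, torso at 10 degrees:
# relative to the torso, the arm is at 20 degrees.
torso = yaw_matrix(np.radians(10.0))
arm = yaw_matrix(np.radians(30.0))
rel = relative_rotation(torso, arm)
print(np.degrees(np.arctan2(rel[0, 2], rel[0, 0])))  # about 20.0
```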
In the first joint control approach, Valtrop looks at
each joint’s data as it comes from the Kinect and uses
trigonometry to figure out what angle to send to each
joint on the Nao robot. “It is a bit brute force, and it is
not completely accurate, but it is fast and works in near
real time. I can even dance in sync with the robot,” says Valtrop.
The reason this approach is not entirely accurate is
that even if the Nao mimics the angles of Valtrop’s
joints exactly, the robot’s hand can end up in a different
position than his, due to the difference in
proportions between the Nao’s body and
his. “So, it is great for dancing but not for manipulating
tools,” Valtrop reports.
A close-up of the laptop and cabling used in Valtrop's configuration.
SERVO 04.2012 11