This drone keeps track of what it doesn't know to quickly plan aggressive maneuvers
It’s not too hard to make a drone that
can fly very fast, and it’s not too hard to make
a drone that can avoid obstacles. Making a
drone that can do both at once is much more
difficult, however, and it's necessary in order
for drones to be useful out in the real world.
At MIT CSAIL, Pete Florence (in Russ
Tedrake’s lab) has developed a new motion
planning framework called NanoMap, which uses a
sequence of 3D snapshots to allow fast-moving (10 m/s)
drones to safely navigate around obstacles even if
they’re not entirely sure where they are.
As the drone moves forward, it takes a continuous
sequence of depth sensor snapshots (at something like
30 Hz, depending on the sensor) which are represented
by gray triangles. See the curvy blue line in the figure
below? Let’s say that’s the trajectory that you want the
drone to fly along next. To get to the first point in that
trajectory (the red dot in the second diagram from
left), the drone has a good enough view of what’s going
on from right where it is. To plan farther ahead,
however, the drone needs information about areas
outside of the current field of view of its depth sensor.
NanoMap starts looking backwards through its
collection of snapshots until it finds one that shows the
area it needs to plan into. If it can't find a good
snapshot, it has to slow down and look around. If it
does find one, it has the information it needs to move
much more aggressively, since it already knows where
obstacles might be.
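The history search described above can be sketched in a few lines. This is an illustrative reconstruction, not NanoMap's actual code: the `Snapshot` fields, the 2D field-of-view test, and the newest-to-oldest search order are all assumptions made for the sake of a runnable example.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of the snapshot-history search; names and the
# frustum test are illustrative assumptions, not NanoMap's real API.

@dataclass
class Snapshot:
    position: tuple   # (x, y) pose of the drone when the frame was taken
    heading: float    # yaw in radians at capture time
    fov: float        # sensor field of view in radians
    max_range: float  # maximum depth the sensor can see (meters)

def contains(snap, point):
    """Return True if `point` fell inside this snapshot's view frustum."""
    dx = point[0] - snap.position[0]
    dy = point[1] - snap.position[1]
    if math.hypot(dx, dy) > snap.max_range:
        return False
    bearing = math.atan2(dy, dx) - snap.heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return abs(bearing) <= snap.fov / 2

def find_view(history, sample_point):
    """Search newest-to-oldest for a snapshot containing the sample point.

    Returns (index, snapshot), or None -- the case where the planner
    must slow down and gather fresh sensor data.
    """
    for i, snap in enumerate(reversed(history)):
        if contains(snap, sample_point):
            return len(history) - 1 - i, snap
    return None
```

Searching newest-first matters: the most recent snapshot that covers a sample point also carries the least accumulated pose drift relative to the drone's current position.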
“If NanoMap wasn’t modeling uncertainty and the drone
drifted just five percent away from where it was expected to
be, the drone would crash more than once every four flights.
Meanwhile, when it accounted for uncertainty, the crash rate
reduced to two percent.”
The problem with this technique is that the farther back
the drone has to look to find the right snapshot, the less
certain it’s going to be about where exactly it was (relative
to where it is now) when that snapshot was taken, and
consequently where it needs to move to avoid any obstacles
that the snapshot contains. This stems from the fact that
autonomous robots tend to be lousy at independently
keeping track of their own position. Because IMUs aren’t
perfect, and cameras and LIDAR aren't perfect, a robot
estimating its position relative to where it's been will
grow less and less accurate as those errors accumulate.
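That accumulation is why a planner should demand more clearance the further back in the history it has to reach. The sketch below is an assumed simplification, not NanoMap's actual uncertainty model: it treats each frame-to-frame pose estimate as carrying independent Gaussian noise, so variances add and the one-sigma error grows with the square root of the lookback depth.

```python
import math

# Assumed per-step translation noise (meters, 1-sigma); illustrative only.
SIGMA_STEP = 0.05

def position_sigma(frames_back):
    """1-sigma uncertainty of the drone's pose relative to a past snapshot.

    Independent per-frame errors add in variance, so sigma grows as
    sqrt(number of frames chained together).
    """
    return SIGMA_STEP * math.sqrt(frames_back)

def safety_margin(frames_back, base_margin=0.3, k=3.0):
    """Inflate the obstacle clearance margin by k standard deviations."""
    return base_margin + k * position_sigma(frames_back)
```

With these assumed numbers, a plan checked against the current frame needs 0.3 m of clearance, while one checked against a snapshot nine frames old needs 0.75 m, which is the quantitative version of "older snapshots force more conservative flight."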
Catch that drone! Photo courtesy of Jonathan How/MIT CSAIL.
This figure shows how NanoMap evaluates motion plans (blue line), given a series of depth sensor
measurements over time (gray triangles). For each sample point (red dot), the history of
measurements is searched until a view is found (orange triangle) that contains the sample point.