WEIGHT A MINUTE
Intuitive Automata's Autom robotic
weight loss coach is now available for pre-order
on a dedicated "MyAutom" website. If you haven't
been following the saga of Autom, it was first an
MIT Media Lab robot with a significantly different
look. Autom's developer at MIT, Cory Kidd, co-founded Intuitive Automata to help commercialize
Autom based on the original MIT project, and it's
starting to look like everything will be coming
together within the next year.
Autom is designed to be exceptionally
interactive, crunching data on your health, diet, and
exercise regimen, and giving back friendly and constructive criticism. Studies have shown that people who use Autom stick
with their diet and exercise routines for twice as long as people using more traditional weight loss methods.
If this sounds good to you, you can be one of the first people to have this friendly little robot helping you out every
day with a deposit of $195. This is not the final price, however. It's just the pre-order deposit. The final price is the $195
deposit plus a balance of $670 when the robot ships, for a total of $865.
HAVE YOU SEEN MY KEYBOARD?
Humans have the ability to look at a scene and immediately pick
out important elements while ignoring everything else, mainly because
we have brains that are awesome. Robots, in general, don't really work
that way. They have to examine each and every pixel of what they're
looking at and decide whether or not it's something they're interested
in. So, for example, if you ask a robot to go find a computer keyboard,
it's got to enter a room and methodically search every pixel-equivalent
area until it finds what it's looking for. While the robot is likely to find
its target in this manner, it'll take basically forever to do so.
However, a Cornell research group has been teaching robots to be able to rapidly break down a scene
into general categories, and then recognize how some categories are related to others. In a general sense, this
is the same thing we humans do. For example, if you enter a room and want to locate a computer keyboard,
you first (subconsciously) identify the places that you don't need to pay attention to — like the walls, ceiling,
and probably the floor. Next, if you don't immediately see the keyboard, you might take a closer look at the
places where keyboards like to hang out, namely in close proximity to a computer.
The key to getting this to work goes back to the machine learning and adaptability system that Cornell
has been working on. First, they showed a robot a series of about 50 different scenes, with most objects in the
scene labeled. The robot read the labels and then remembered characteristics of the associated objects (like
color, texture, and what other objects are nearby), so that it was then able to use those characteristics to
categorize entirely new objects it had never experienced.
Armed with this knowledge and way of thinking, the robot was then able to enter a room it had never
been in before in search of a keyboard, and "know" that since it spotted a monitor — and keyboards are often
found near monitors — it should search that area first.
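The learn-then-search loop described above can be sketched in a few lines of Python. This is only a toy illustration with made-up scene labels, not the Cornell group's actual system, which learns much richer characteristics like color and texture; here, "what other objects are nearby" is reduced to simple co-occurrence counts over labeled scenes.

```python
from collections import defaultdict
from itertools import permutations

def learn_cooccurrence(labeled_scenes):
    """Count how often each pair of labels appears in the same scene;
    higher counts mean the objects are often found together."""
    counts = defaultdict(int)
    for scene in labeled_scenes:
        for a, b in permutations(set(scene), 2):
            counts[(a, b)] += 1
    return counts

def search_order(target, detected, counts):
    """Rank the objects already spotted in a new room by how strongly
    they co-occur with the target, so the robot checks the area around
    the most promising landmark first."""
    return sorted(detected, key=lambda obj: counts[(target, obj)], reverse=True)

# Hypothetical labeled training scenes (lists of object labels).
scenes = [
    ["monitor", "keyboard", "mouse", "desk"],
    ["monitor", "keyboard", "desk"],
    ["sofa", "tv", "remote"],
    ["monitor", "mouse", "desk"],
    ["monitor", "keyboard"],
]
counts = learn_cooccurrence(scenes)

# The robot enters a never-before-seen room and detects these objects:
detected = ["sofa", "desk", "monitor"]
print(search_order("keyboard", detected, counts))  # monitor ranked first
```

With these toy scenes, "keyboard" co-occurs with "monitor" three times and "desk" twice, so the robot heads for the monitor before rummaging around the desk or the sofa.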
This ability to follow chains of reasoning to identify, categorize, and locate objects is still being developed,
but giving robots the ability to understand context and use it to adapt to new things brings them that much
closer to being those machines that take over all those household chores so we humans can do more of the things we'd rather be doing.
Cool tidbits herein provided by Evan Ackerman at www.botjunkie.com, www.robotsnob.com, www.plasticpals.com, and other places.
SERVO 11.2011 27