READ MY MIND
Google’s artificial intelligence technology may sometimes seem
like it’s reading our minds, but neuroscientists at
Canada’s University of Toronto Scarborough are literally
using AI for that very purpose. They are reconstructing
images based on brain perception using data gathered by
electroencephalography (EEG).
In a test, subjects were hooked up to EEG brainwave-reading equipment and shown images of faces. While this
happened, their brain activity was recorded and then
analyzed using machine learning algorithms.
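At a high level, pipelines like this pair each EEG recording with the image that was shown, then fit a model that maps brain responses to image features. The sketch below is purely illustrative: it uses simulated data and a simple ridge regression in place of the researchers' actual algorithms, and every array size and parameter here is an assumption.

```python
import numpy as np

# Illustrative sketch only, not the Toronto team's pipeline: learn a
# linear mapping from EEG feature vectors to face-image pixel vectors,
# then reconstruct held-out faces from their EEG responses alone.
rng = np.random.default_rng(1)

n_trials, n_eeg, n_pix = 200, 64, 32 * 32
W_true = rng.normal(size=(n_eeg, n_pix))           # hidden brain-to-image map
eeg = rng.normal(size=(n_trials, n_eeg))           # simulated EEG features
faces = eeg @ W_true + rng.normal(0, 0.5, (n_trials, n_pix))  # noisy "images"

# Ridge regression fit on the first 180 trials (lam is a made-up setting).
lam = 1.0
A = eeg[:180]
W = np.linalg.solve(A.T @ A + lam * np.eye(n_eeg), A.T @ faces[:180])

# Reconstruct the last 20 faces from EEG alone and score the match.
recon = eeg[180:] @ W
corr = np.corrcoef(recon.ravel(), faces[180:].ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

With real EEG the mapping is far noisier and nonlinear, which is why the study leaned on machine learning rather than a single linear fit.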
Impressively, the researchers were able to use this
information to digitally re-create the face image stored in the
person’s mind. Unlike basic shapes, faces involve
fine-grained visual detail, so re-creating them showcases
a high level of sophistication for the technology.
While this isn’t the first time that AI has been used to
read people’s minds, it’s the first time it’s been achieved using
EEG data. Previous studies involved fMRI technology, which
measures brain activity by detecting changes in its blood
flow. One of the most exciting differences between the two
techniques is that EEG is far more portable and inexpensive,
and it can deliver detailed readings in mere milliseconds.
The technology could potentially be used by law
enforcement for creating more accurate eyewitness reports
about a potential suspect’s likeness. Currently, this
information is relayed to a sketch artist through a verbal
description, which reduces its accuracy.
It might also serve as a way of helping people who lack
the ability to communicate verbally. The EEG technology
could be employed to produce a neural-based reconstruction
of what a person is perceiving at any given time, allowing
them to better express themselves.
In the future, the team hopes to build on this work by
looking at how effectively they can reconstruct images with
EEG data based on a person’s memory of an event. They also
want to move beyond faces to explore whether they can
recreate accurate images of other objects.
We live in an age of personalized everything, from Amazon
recommendations to Facebook news feeds and Google search
results. So, why shouldn’t we have
personalized robot exosuits as well?
That’s the thinking behind a new project from the Wyss
Institute for Biologically Inspired Engineering and Harvard
University’s John A. Paulson School of Engineering and
Applied Sciences. Researchers there have been working out
how to personalize assistive soft robotic wearables so they
move in sync with the people they are intended to help.
“Previous fixed control strategies or parameter settings
of many other wearable robotic devices can lead to high
response variance between wearers,” postdoctoral
researcher Ye Ding told Digital Trends recently. “Thus we
developed a smart algorithm that can directly use wearer’s
responses [in terms of] measured energy expenditure to
quickly optimize for different individuals to improve walking…”
The idea of having a personalized robot exosuit may
sound excessive. However, for people who require this
technology, it could turn out to be a game-changer.
Most of us take for granted the ability to easily walk
around without having to think about the effort involved.
For people with physical impairments who need assistive
technology to help them on a daily basis, an optimized
exosuit can make a world of difference.
The technology could also be useful in other scenarios,
such as exosuits designed to help physically able people
increase their strength or walking ability.
“[Our] method aims to quickly establish the mapping
between the wearer’s energy expenditure with respect to
the control parameters of a soft exosuit by using Bayesian
optimization, which is an algorithm well suited to optimizing
noisy performance signals with very limited data,” Ding
continued. “In our study, we used this algorithm to
configure peak and offset timings of the hip extension
profile of our soft exosuit and achieved significant…”
How significant? Enough that the technology reduced the
energy wearers consumed while walking by 17.4 percent,
compared to the same person walking without an assistive
wearable.
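The approach Ding describes, tuning a pair of timing parameters against a noisy energy-expenditure measurement, can be sketched as a toy Bayesian optimization loop. Everything below is illustrative: the cost function, parameter ranges, kernel length scale, and the lower-confidence-bound selection rule are assumptions for the sketch, not the team's actual controller.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for measured metabolic cost (not real exosuit
# data): a noisy bowl with its optimum at peak = 50% and offset = 10%
# of the gait cycle.
def energy_cost(peak, offset):
    true_cost = (peak - 50.0) ** 2 / 100 + (offset - 10.0) ** 2 / 25
    return true_cost + rng.normal(0, 0.1)

def rbf(A, B, ls=8.0):
    # Squared-exponential kernel between two sets of parameter points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=0.1):
    # Gaussian-process mean and std. dev. at candidate points Xs.
    K = rbf(X, X) + noise ** 2 * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.clip(var, 1e-9, None))

# Candidate grid of (peak timing, offset timing), in % of gait cycle.
peaks = np.linspace(30, 70, 21)
offsets = np.linspace(0, 20, 11)
cand = np.array([[p, o] for p in peaks for o in offsets])

# Start from a few random settings, then repeatedly try the setting
# whose lower confidence bound (predicted cost minus uncertainty) is
# smallest; this balances promising against unexplored settings.
idx = rng.choice(len(cand), 3, replace=False)
X = cand[idx]
y = np.array([energy_cost(p, o) for p, o in X])

for _ in range(15):
    mu, sd = gp_posterior(X, y, cand)
    nxt = cand[np.argmin(mu - 2.0 * sd)]
    X = np.vstack([X, nxt])
    y = np.append(y, energy_cost(*nxt))

best = X[np.argmin(y)]
print(f"best peak/offset timing: {best[0]:.0f}% / {best[1]:.0f}%")
```

The appeal of this family of methods, as Ding notes, is exactly what the sketch shows: each measurement is noisy and expensive, and the optimizer still homes in on good settings after only a handful of trials.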
In the future, the team wants to further optimize the
technology so that it assists multiple joints such as hips and
ankles at the same time.
SERVO 05/06.2018 51
Ye Ding/Harvard SEAS.