Alisa Brownlee, ATP, CAPS blog offers recent articles and web information on ALS, assistive technology, including augmentative and alternative communication (AAC), computer access, and other electronic devices that can impact and improve the quality of life for people with ALS.
Any views or opinions presented on this blog are solely those of the author and do not necessarily represent those of the ALS Association.
Other methods of interfacing with the brain via electrodes include those placed on the scalp for electroencephalography (EEG) and those placed under the skull on the brain’s surface, known as electrocorticography (ECoG). The advantage of intracortical implants is that they can pick out activity from single cells, whereas the other methods capture the average activity of thousands of neurons. “This performance is 10 times better than anything you would get from EEG or ECoG, [which don’t] contain enough information to do this kind of task at this level,” says neurobiologist Andrew Schwartz of the University of Pittsburgh, who was not involved in the study. Movement and scarring reduce signal quality over roughly the first two years after implantation, but what remains is still useful: “much better than you get with any other technique,” he says.
The biggest drawback, currently, is having wires that come out of people's heads and attach to cables, which is cumbersome and carries risks. “The future is making these devices wireless,” Pandarinath says. “We're not there yet with people but we’re probably closer to five than 10 years away, and that’s a critical step [toward] a device that you could send somebody home with and be less worried about potential risks like infection.” The devices would need wireless power, but several groups are already working on this. “Most of the technology is basically there,” Schwartz says. “You can do that inductively using coils—like wirelessly charging your cell phone in a cradle with coils on either side.”
The team attributes the improvements to better systems engineering and decoding algorithms. “Performing repeated computations rapidly is critical in a real-time control system,” Pandarinath says. In a study published last year and led by Stanford bioengineer Paul Nuyujukian, the researchers trained two macaque monkeys to perform a task similar to the grid exercise used in the current study. The animals typed sentences by selecting characters on a screen as they changed color (although they wouldn’t have understood what the words meant). When the team added a separate algorithm to detect the monkeys’ intention to stop, their best speed increased by two words per minute.
This “discrete click decoder” was also used in the current study. “We've basically created a ‘point and click’ interface here, like a mouse. That’s a good interface for things like modern smartphones or tablets,” Pandarinath says, “which would open a whole new realm of function beyond communication: surfing the Web, playing music, all sorts of things able-bodied people take for granted.”
The Stanford team is already investigating wireless technology, and has ambitious long-term goals for the project. “The vision we hope to achieve someday would be to be able to plug a wireless receiver into any computer and use it using your brain,” Henderson says. “One of our main goals is to allow 24 hours a day, seven days a week, 365 days a year control of a standard computer interface using only brain signals.”
It can be difficult to communicate when you can only move your eyes, as is often the case for people with ALS (also known as motor neurone disease). Microsoft researchers have developed an app to make talking with your eyes easier, called GazeSpeak.
GazeSpeak runs on a smartphone and uses artificial intelligence to convert eye movements into speech, so a conversation partner can understand what is being said in real time.
The app runs on the listener’s device. They point their smartphone at the speaker as if they are taking a photo. A sticker on the back of the phone, visible to the speaker, shows a grid with letters grouped into four boxes corresponding to looking left, right, up and down. As the speaker gives different eye signals, GazeSpeak registers them as letters.
“For example, to say the word ‘task’ they first look down to select the group containing ‘t’, then up to select the group containing ‘a’, and so on,” says Xiaoyi Zhang, who developed GazeSpeak whilst he was an intern at Microsoft.
GazeSpeak selects the appropriate letter from each group by predicting the word the speaker wants to say based on the most common English words, similar to predictive text messaging. The speaker indicates they have finished a word by winking or looking straight ahead for two seconds. The system also takes into account added lists of words, like names or places that the speaker is likely to use. The top four word predictions are shown onscreen, and the top one is read aloud.
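To make the mechanics concrete, here is a minimal sketch of how this kind of group-then-predict decoding can work, in the spirit of predictive text messaging. The letter-to-direction layout, the word list, and the function names below are illustrative assumptions for the sketch, not details of Microsoft's implementation.

```python
# Sketch of GazeSpeak-style decoding: each gaze direction selects a group of
# letters, and a frequency-ordered word list disambiguates the sequence of
# groups into a word. Layout and word list are illustrative assumptions only.
from collections import defaultdict

# Hypothetical assignment of letters to the four gaze directions (chosen so
# that, as in the article's example, 't' is selected by looking down and
# 'a' by looking up).
GROUPS = {
    "up":    "abcdef",
    "left":  "ghijklm",
    "right": "nopqrs",
    "down":  "tuvwxyz",
}

# Invert the table: letter -> gaze direction that selects its group.
LETTER_TO_DIRECTION = {
    letter: direction
    for direction, letters in GROUPS.items()
    for letter in letters
}

# Stand-in dictionary, ordered from most to least common word.
WORD_LIST = ["the", "and", "task", "tea", "talk", "ask"]

def signature(word):
    """The sequence of gaze directions that spells this word."""
    return tuple(LETTER_TO_DIRECTION[ch] for ch in word)

# Index the dictionary by direction signature, preserving frequency order.
SIGNATURE_INDEX = defaultdict(list)
for w in WORD_LIST:
    SIGNATURE_INDEX[signature(w)].append(w)

def predict(directions, top_n=4):
    """Top candidate words for a completed sequence of gaze directions."""
    return SIGNATURE_INDEX.get(tuple(directions), [])[:top_n]

# Spelling "task": t -> down, a -> up, s -> right, k -> left.
print(predict(["down", "up", "right", "left"]))  # ['task']
```

In the app itself, as described above, the top four predictions appear onscreen and the best one is read aloud once the speaker winks or looks straight ahead for two seconds, and user-supplied lists of names and places can be folded into the ranking.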
“We’re using computer vision to recognise the eye gestures, and AI to do the word prediction,” says Meredith Morris at Microsoft Research in Redmond, Washington.
The app is designed for people with motor disabilities like ALS, because eye movement can become the only way for people with these conditions to communicate. ALS progressively damages nerve cells, affecting a person’s ability to speak, swallow and eventually breathe. The eye muscles are often some of the last to be affected.
“People can become really frustrated when trying to communicate, so if this app can make things easier that’s a really good thing,” says Matthew Hollis from the Motor Neurone Disease Association.
There are currently limited options for people with ALS to communicate. The most common is to use boards displaying letters in different groups, with a person tracking the speaker’s eye movements as they select letters. But it can take a long time for someone to learn how to interpret these eye movements effectively.
GazeSpeak proved much faster to use in an experiment with 20 people trying both the app and the low-tech boards. Completing a sentence with GazeSpeak took 78 seconds on average, compared with 123 seconds using the boards. The people in the tests did not have ALS, but the team also got feedback on the technology from some people with ALS and their interpreters. One person who tried the device typed a test sentence in just 62 seconds and said he thought it would be even quicker in a real-life situation, as his interpreter can more easily predict what he is likely to say.
“I love the phone technology; I just think that would be so slick,” said one of the interpreters.
Other systems currently use software to track eye movements with infrared cameras. But these are often expensive and bulky, and infrared cameras don’t work very well in sunlight. The GazeSpeak app is portable and comparatively cheap, as it only requires an iOS device, like an iPhone or iPad, with the app installed.
Microsoft will present the app at the Conference on Human Factors in Computing Systems in Colorado in May. The researchers say it will be available on the Apple App Store before the conference, and the source code will be made freely available so that other people can help to improve it.