Without doubt, Google Glass has the potential to form the basis for a new generation of portable brain-computer interfaces. Now Neurogadget has the honour of introducing one of the first Google Glass Explorers who has been using Glass in brain-computer interface research.
Adriane Randolph, executive director of Kennesaw State University’s BrainLab, together with her team, has developed a working prototype that takes input from an evoked brain response to trigger the four basic interface commands for Google Glass: swipe left, swipe right, swipe down, and tap to select.
While this isn’t the first time we’ve heard about Google Glass being used for BCI purposes, there are significant differences between BrainLab’s work and similar projects, most notably This Place’s MindRDR application, which uses NeuroSky’s MindWave to let users take photos and share them on Facebook just by thinking.


According to Adriane Randolph, “both the MindRDR app and our system currently use a separate bioamplification system to capture and read brainwaves and transmit feedback to an application on Google Glass. Where the MindRDR appears to be using a continuous brainwave such as alpha according to the placement of the sensor and description, we are using an evoked response called the P300. With this ‘aha’ response, we are instead able to overlay several different commands to control Glass. Thus, a user will be able to control more than taking a picture but instead access all of the functionality of Glass.”
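To make the contrast concrete, the continuous-brainwave approach Randolph attributes to MindRDR can, in its simplest form, be reduced to thresholding band power (for example alpha, 8–12 Hz) in a sliding window, which effectively yields a single on/off command. The sketch below is purely illustrative and not drawn from either project’s code; the sampling rate, single-channel setup, and threshold value are assumptions.

```python
# Illustrative sketch: continuous band-power control, the style of input the
# article contrasts with the P300 approach. One EEG window in, one boolean out.

import numpy as np

FS = 250  # assumed sampling rate in Hz


def alpha_power(window: np.ndarray) -> float:
    """Mean spectral power in the 8-12 Hz alpha band of one EEG window."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].mean())


def trigger(window: np.ndarray, threshold: float = 50.0) -> bool:
    """Fire the single available command (e.g. 'take a picture') when
    alpha activity in the current window crosses an arbitrary threshold."""
    return alpha_power(window) > threshold
```

Because such a signal is essentially one-dimensional, it maps naturally to one action; the evoked-response approach described next is what lets several commands be overlaid on the same interface.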
P300
The user is presented with a set of characters and attends to the one he or she wants to select. The characters flash in a randomized pattern. When the desired character flashes, the user’s brain produces a neural response approximately 300 milliseconds later, called a P300. The computer detects this response and makes the selection.
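A minimal sketch of that selection loop is shown below, with the four Glass commands standing in for the flashed items. It uses a simulated EEG epoch in place of a real bioamplifier, and the sampling rate, epoch length, and scoring window are assumptions for illustration rather than details of BrainLab’s system.

```python
# Minimal sketch of a P300-style selection loop with simulated EEG.
# Only flashes of the attended item carry a synthetic P300-like bump
# around 300 ms; averaging per item and scoring a post-flash window
# recovers which command the user intended.

import random
import numpy as np

FS = 250                                         # sampling rate in Hz (assumed)
EPOCH = int(0.6 * FS)                            # analyse 600 ms after each flash
P300_WINDOW = (int(0.25 * FS), int(0.45 * FS))   # ~250-450 ms post-flash

ITEMS = ["swipe_left", "swipe_right", "swipe_down", "tap"]


def simulated_epoch(flashed: str, attended: str) -> np.ndarray:
    """Return one post-flash EEG epoch; attended flashes get a P300-like bump."""
    signal = np.random.normal(0.0, 1.0, EPOCH)          # background noise
    if flashed == attended:
        t = np.arange(EPOCH) / FS
        signal += 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return signal


def run_selection(attended: str, repetitions: int = 10) -> str:
    """Flash each item repeatedly in random order, average the epochs per item,
    and pick the item with the largest mean amplitude in the P300 window."""
    sums = {item: np.zeros(EPOCH) for item in ITEMS}
    for _ in range(repetitions):
        order = ITEMS[:]
        random.shuffle(order)                            # randomized flash order
        for flashed in order:
            sums[flashed] += simulated_epoch(flashed, attended)
    lo, hi = P300_WINDOW
    scores = {item: (sums[item] / repetitions)[lo:hi].mean() for item in ITEMS}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    print("Decoded command:", run_selection(attended="tap"))
```

In a real system the decoded item would then be forwarded to an app on Glass as the corresponding interface command, which is the overlay of commands Randolph describes.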
In other words, “while the MindRDR allows the user to take pictures while thinking, the BrainLab has been developing with Glass to completely control the user experience of Glass with the user’s brain. That would be the main difference besides the BrainLab’s project being a long-term research-based project”, adds Josh Pate, BrainLab associate.

Desktop to Mobile

Last summer, Randolph was selected to pilot the wearable technology device, Google Glass. She had big plans for her new accessory beyond its everyday use for checking email, taking photos and surfing the web. She intended to expand her BCI research to a mobile platform.
Within a few months, another key member of her research team was outfitted with Google Glass and their study took a new turn. The wireless platform opened new possibilities in working with those with limited physical capabilities.


Instead of nodding, swiping or talking to give commands to Google Glass, the research team developed a method for controlling a mobile device using only brain waves.
“We believe this is the first working prototype designed for the Google Glass platform. We know that selection-type commands exist using neural input, but we had to figure out how to use that in Google Glass in a way that benefits our research,” Randolph said. “We chose evoked responses which are like an ‘aha’ response that we record as surface EEGs as input signals.”


Randolph also told Neurogadget that “BrainLab shares This Place’s excitement that Google Glass holds tremendous possibilities for people living locked-in to their bodies, but who are otherwise cognitively intact. We also recognize Google’s technically accurate statement that “Google Glass cannot read your mind” from the perspective that Glass is not doing the actual EEG-recording and filtering needed to interpret brainwaves. However, as a small computer, Glass is taking the results of this separate processing and using it as input to control embedded apps. The real distinction is in how seamlessly the brainwave processing and feedback to an interface can be implemented. Certainly, in a similar vein of deflecting from Glass’ capabilities of facial recognition, it may not wish to stir up another hornet’s nest by extolling mind-reading capabilities.”


Besides being an enthusiastic Google Glass Explorer, Adriane Randolph has been researching brain-computer interfaces for twelve years and received her PhD in Computer Information Systems from Georgia State University. She has directed the KSU BrainLab since its founding in 2007, with the hope of improving the quality of life for people with severe motor disabilities.
More info: http://coles.kennesaw.edu/brainlab