Wednesday, February 1, 2012

Voicegrams transform brain activity into words


Nature | News

http://www.nature.com/news/voicegrams-transform-brain-activity-into-words-1.9945


Computational models decode and reconstruct neural responses to speech

The brain’s electrical activity can be decoded to reconstruct which words a person is hearing, researchers report today in PLoS Biology [1].

Brian Pasley, a neuroscientist at the University of California, Berkeley, and his colleagues recorded the brain activity of 15 people who were undergoing evaluation before unrelated neurosurgical procedures. The researchers placed electrodes on the surface of the superior temporal gyrus (STG), part of the brain's auditory system, to record the subjects’ neuronal activity in response to pre-recorded words and sentences.
The STG is thought to participate in the intermediate stages of speech processing, such as the transformation of sounds into phonemes (the basic units of speech), yet little is known about which specific acoustic features, such as syllable rate or volume fluctuations, it represents.

“A major goal is to figure out how the human brain allows us to understand speech despite all the variability, such as a male or female voice, or fast or slow talkers,” says Pasley. “We build computational models that test hypotheses about how the brain accomplishes this feat, and then see if these models match the brain recordings.”

To analyse the data from the electrode recordings, the researchers used an algorithm designed to extract key features of spoken words, such as the timing of syllables and the volume changes between them.
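
As a rough illustration of this kind of feature extraction, the sketch below computes a log-power spectrogram, a time-frequency representation of the sort a 'voicegram' captures, from an audio waveform. The sampling rate, window sizes, and the synthetic tone standing in for a recorded word are assumptions for the example, not parameters from the study.

```python
# A minimal sketch of spectrogram-style feature extraction, assuming a
# mono waveform sampled at 16 kHz. Window and hop sizes are illustrative.
import numpy as np
from scipy.signal import spectrogram

fs = 16000                            # assumed sampling rate (Hz)
t = np.arange(fs) / fs                # one second of audio
audio = np.sin(2 * np.pi * 220 * t)   # synthetic stand-in for a spoken word

# Short-time Fourier analysis: how energy at each frequency changes over
# time, capturing features such as syllable timing and volume fluctuations.
freqs, times, power = spectrogram(audio, fs=fs,
                                  nperseg=400,   # 25 ms analysis window
                                  noverlap=240)  # 10 ms hop between frames
log_power = np.log(power + 1e-10)     # compress the dynamic range
print(log_power.shape)                # (frequency bins, time frames)
```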

Mind reading

[Audio: a recording of some of the words heard by the subjects, followed by the voicegrams as reconstructed by two different computer models.]
They then entered these data into a computational model to reconstruct 'voicegrams' showing how these features change over time for each word. They found that these voicegrams could reproduce the sounds the patients heard accurately enough for individual words to be recognized.
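
The reconstruction step can be pictured as a regression from neural activity to the stimulus spectrogram. The sketch below fits a regularized linear stimulus-reconstruction model on synthetic data; it is a minimal illustration of the general approach, not the authors' actual model, and the dimensions, noise level, and ridge penalty are all assumptions for the example.

```python
# A minimal sketch of linear stimulus reconstruction: predict each
# time-frequency bin of a spectrogram from simultaneous electrode activity.
# All data here are synthetic; dimensions and the penalty are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_freqs = 2000, 64, 32

X = rng.standard_normal((n_frames, n_electrodes))      # neural features per frame
W_true = rng.standard_normal((n_electrodes, n_freqs))  # hidden ground-truth mapping
S = X @ W_true + 0.1 * rng.standard_normal((n_frames, n_freqs))  # 'voicegram'

# Ridge regression in closed form: W = (X'X + lam*I)^-1 X'S
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ S)

S_hat = X @ W  # reconstructed spectrogram frames
corr = np.corrcoef(S_hat.ravel(), S.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

In practice, decoding models of this kind typically regress each spectrogram frame on a window of time-lagged neural activity rather than a single instant, so that the fitted weights form a spectrotemporal reconstruction filter.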
During speech perception, the brain encodes and interprets complex acoustic signals composed of multiple frequencies that change over timescales as small as ten-thousandths of a second. The latest findings are a step towards understanding the processes by which the human brain converts sounds into meanings, and could have a number of important clinical applications.

“If we can better understand how each brain area participates in this process,” says Pasley, “we can start to understand how these neural mechanisms malfunction during communication disorders such as aphasia.”
Pasley and his team are interested in the similarities between perceived and imagined speech. “There is some evidence that perception and imagery may be pretty similar in the brain,” he says.

These similarities could eventually lead to the development of brain–computer interfaces that decode brain activity associated with the imagined speech of people who are unable to communicate, such as stroke patients or those with motor neurone disease or locked-in syndrome.

Sophie Scott, a neuroscientist at University College London, who studies speech perception and production, says that she has some reservations about the accuracy of the voicegrams. She would also like to see the pattern of responses for non-speech stimuli, such as music or unintelligible sounds, for comparison. But the authors “did an amazing job of transforming recordings of the neural responses to speech and relating these to the original sounds,” she says. “This approach may enable them to start determining the kinds of transformations and representations underlying normal speech perception.”
