Scientists may one day be able to read our thoughts, having created a device that decodes brain activity.
As WA Today explains, because the brain is believed to process thought in a similar way to sound, the breakthrough could allow neuroscientists to “hear” the thoughts of locked-in patients, such as the one depicted in the film The Diving Bell and the Butterfly, and of sufferers of Lou Gehrig’s disease, such as Professor Stephen Hawking, who have functioning minds but are unable to speak.
Researchers from the University of California, Berkeley, have had considerable success in decoding electrical activity in a region of the human auditory system known as the superior temporal gyrus.
Professor Robert Knight of the University of California, co-author of a paper in the journal PLoS Biology, said: “This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s disease and can’t speak.”
“If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit,” he added.
According to the UK’s Mirror, the research builds on previous studies in which words were recorded in the brains of ferrets, even though the animals could not understand them.
In the latest study, 15 epileptic patients underwent surgery aimed at locating the area of their brains where seizures originated so that it could be removed.
Neurosurgeons cut a hole in the skull and then placed 256 electrodes on the brain’s surface, covering the temporal lobe. This allowed them to record activity and pinpoint the area of the seizures.
Co-author Dr Brian Pasley then visited the patients to record five to 10 minutes of conversation. Using computers, he managed to match the recorded brain activity to sounds.
Dr Pasley reproduced the sounds closely enough to the originals that the word could be correctly guessed, according to the UK’s Telegraph.
“We think we could be more accurate with an hour of listening and recording and repeating the word many times,” Dr Pasley said. “There is some evidence that hearing the sound and imagining the sound activate similar areas of the brain.”
He added: “If you can understand the relationship well enough between the brain recordings and sound, you could either synthesize the actual sound a person is thinking, or just write out the words with a type of interface device.
The research is telling us a lot about how the brain in normal people represents and processes speech sounds.”
Professor Knight agreed with his colleague, saying: “This research is a major step toward understanding what features of speech are represented in the human brain.
Brian’s analysis can reproduce the sound the patient heard, and you can actually recognize the word, although not at a perfect level.”
Jan Schnupp, Professor of Neuroscience at Oxford University, described the study as “remarkable”.
He said: “Neuroscientists have long believed that the brain essentially works by translating aspects of the external world, such as spoken words, into patterns of electrical activity.”
“But proving that this is true by showing that it is possible to translate these activity patterns back into the original sound (or at least a fair approximation of it) is nevertheless a great step forward, and it paves the way to rapid progress toward biomedical applications,” he added.