Neuroscientists recreate Pink Floyd song from recorded brain waves

By David Mouriquand

Neuroscientists were able to recreate 'Another Brick in the Wall, Part 1' using AI to decipher the brain’s electrical activity. The reconstructed Pink Floyd song represents a breakthrough that could restore the musicality of natural speech to patients with disabling neurological conditions.


Scientists have reconstructed a classic Pink Floyd song from the recorded brain waves of patients who were undergoing epilepsy surgery while listening to the track.

Researchers at the University of California, Berkeley used artificial intelligence techniques to decode the brain signals, recreating the 1979 hit 'Another Brick in the Wall, Part 1'.

The team said this is the first time scientists have reconstructed a song from the recordings of brain activity.

This sort of algorithmic translation has previously been used to recreate speech from brain recordings, but never music.

For the study, the researchers analysed brain activity recordings of 29 patients at Albany Medical Center in New York State, collected between 2009 and 2015. As part of their epilepsy treatment, the patients had a net of electrodes implanted in their brains, which created a rare opportunity for the neuroscientists to record their brain activity while they listened to music. A total of 2,668 electrodes recorded brain activity, and 347 of them were specifically related to the music.

The scientists said the famous phrase “All in all it’s just another brick in the wall” is recognisable in the reconstructed song and the rhythms remain intact.

“It sounds a bit like they’re speaking underwater, but it’s our first shot at this,” said Robert Knight, a neurologist and UC Berkeley professor of psychology at the Helen Wills Neuroscience Institute who conducted the study with postdoctoral fellow Ludovic Bellier.

According to the team, the findings, reported in the journal PLOS Biology, show that brain signals can be translated to capture the musical elements of speech (prosody) – patterns of rhythm, sound, stress and intonation – which convey meaning that words alone cannot express.

The scientists believe their work can further the understanding of how the brain responds to music and could pave the way for new prosthetic devices that improve the perception of the rhythm and melody of speech. The study also represents a breakthrough for neuroscientists and neurotechnologists who want to help people with severe neurological damage, such as amyotrophic lateral sclerosis (ALS), the neurodegenerative disease that Stephen Hawking was diagnosed with. It could help people who have suffered a stroke or paralysis, or who have other verbal communication issues, to communicate through brain-computer interfaces in a way that sounds more natural.

“It’s a wonderful result,” said Knight.

"One of the things for me about music is it has prosody and emotional content. As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who's got ALS or some other disabling neurological or developmental disorder compromising speech output. It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect. I think that's what we've really begun to crack the code on."

Additional sources • University of California, Berkeley - PLOS Biology

