
Using AI to decode language from the brain and advance our understanding of human communication

February 7, 2025
7-minute read

Over the last decade, the Meta Fundamental Artificial Intelligence Research (FAIR) lab in Paris has been at the forefront of scientific research, contributing to breakthroughs in medicine, climate science, and conservation while keeping our commitment to open and reproducible science. As we look to the next decade, our focus is on achieving advanced machine intelligence (AMI) and using it to power products and innovation for the benefit of everyone.

Today, in collaboration with the Basque Center on Cognition, Brain and Language (BCBL), a leading interdisciplinary research center in San Sebastian, Spain, we’re excited to share two breakthroughs that show how AI can help advance our understanding of human intelligence and bring us closer to AMI. Building on our previous work on decoding the perception of images and speech from brain activity, we’re sharing research that successfully decodes the production of sentences from non-invasive brain recordings, correctly recovering up to 80% of typed characters and thus often reconstructing full sentences from brain signals alone. In a second study, we detail how AI can also help us interpret these brain signals, clarifying how the brain transforms thoughts into a sequence of words.

This important research wouldn’t be possible without the close collaboration we’ve fostered in the neuroscience community. Today, Meta is announcing a $2.2 million donation to the Rothschild Foundation Hospital in support of this work. This continues our track record of working closely with some of the leading research institutions in Europe, including NeuroSpin (CEA), Inria, ENS-PSL, and CNRS. These partnerships will continue to be important to us as we work together to explore how these breakthroughs can make a difference in the world and ultimately improve people’s lives.

Using AI to decode language from non-invasive recordings of the brain

Every year, millions of people suffer from brain lesions that can prevent them from communicating. Current approaches show that communication can be restored with a neuroprosthesis feeding command signals to an AI decoder. However, invasive brain-recording techniques like stereotactic electroencephalography and electrocorticography require neurosurgical intervention and are difficult to scale. Until now, non-invasive approaches have typically been limited by the noisiness of the signals they record.

For our first study, we use both MEG and EEG—non-invasive devices that measure the magnetic and electric fields elicited by neuronal activity—to record 35 healthy volunteers at BCBL while they type sentences. We then train a new AI model to reconstruct each sentence solely from the brain signals. On new sentences, our AI model decodes up to 80% of the characters typed by the participants recorded with MEG, at least twice as accurate as what can be obtained with a classic EEG system.
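To make the decoding pipeline concrete, below is a minimal sketch of one way such a brain-to-text model can be structured in PyTorch: a convolutional front end that mixes MEG sensors and downsamples time, a transformer over the resulting sequence, and a linear head that emits character logits. All dimensions and names here are illustrative assumptions, not the architecture from the study.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: typical MEG systems have ~300 sensors at ~1 kHz.
N_SENSORS = 306   # assumption: sensor count of a common MEG system
WINDOW = 500      # assumption: time samples per decoding window
N_CHARS = 30      # assumption: letters, space, and a few punctuation marks

class BrainToCharDecoder(nn.Module):
    """Minimal sketch: convolutional encoder over MEG channels/time,
    followed by a transformer that emits per-step character logits."""
    def __init__(self, d_model=256):
        super().__init__()
        # Spatio-temporal convolutions: mix sensors, downsample time 4x.
        self.encoder = nn.Sequential(
            nn.Conv1d(N_SENSORS, d_model, kernel_size=9, stride=2, padding=4),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=9, stride=2, padding=4),
            nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, N_CHARS)

    def forward(self, meg):                 # meg: (batch, sensors, time)
        x = self.encoder(meg)               # (batch, d_model, time / 4)
        x = x.transpose(1, 2)               # (batch, time / 4, d_model)
        x = self.transformer(x)
        return self.head(x)                 # per-step character logits

# Toy forward pass on random data standing in for a recorded MEG window.
model = BrainToCharDecoder()
logits = model(torch.randn(2, N_SENSORS, WINDOW))
print(logits.shape)                         # torch.Size([2, 125, 30])
```

In a setup like this, the per-step logits would be trained against the typed character sequence, and a language model could then rescore candidate decodings into full sentences.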

This research could open a new avenue for non-invasive brain-computer interfaces that help restore communication to people who have lost the ability to speak, but several important challenges remain before the approach can be applied in clinical settings. The first concerns performance: decoding is still imperfect. The second is practical: MEG requires subjects to remain still inside a magnetically shielded room. Finally, because this research was done with healthy volunteers, future work will need to explore how it could benefit people suffering from brain injuries.

Using AI to understand how the brain forms language

We’re also sharing a breakthrough toward understanding the neural mechanisms that coordinate language production in the human brain. Studying the brain during speech has always proved extremely challenging for neuroscience, in part because of a simple technical problem: moving the mouth and tongue heavily corrupts neuroimaging signals.

To explore how the brain transforms thoughts into intricate sequences of motor actions, we used AI to help interpret the MEG signals while participants typed sentences. By taking 1,000 snapshots of the brain every second, we can pinpoint the precise moment when thoughts are turned into words, syllables, and even individual letters. Our study shows that the brain generates a sequence of representations, starting at the most abstract level—the meaning of a sentence—and progressively transforming it into a myriad of actions, such as the finger movements on the keyboard.
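One common way to obtain this kind of time-resolved readout is to train an independent classifier at every time point of the epoched recordings; where the accuracy curve rises above chance tells you when a given representation becomes readable from the signal. The sketch below illustrates the idea on synthetic data—the shapes, the 100 Hz resolution, and the binary keystroke label are stand-in assumptions, not the analysis from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for epoched MEG data: (trials, sensors, time points).
# Assumption: signals downsampled to 100 Hz, 600 ms around each keystroke.
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)  # hypothetical label, e.g. vowel vs. consonant

# Fit a separate classifier at each time point; the accuracy curve shows
# *when* this representation emerges in the brain relative to the keystroke.
scores = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(f"peak decoding accuracy: {scores.max():.2f}")  # ~0.5 on pure noise
```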

Importantly, the study also reveals how the brain coherently and simultaneously represents successive words and actions. Our results show that the brain uses a ‘dynamic neural code’—a special neural mechanism that chains successive representations while maintaining each of them over long time periods.

Cracking the neural code of language remains one of the major challenges of AI and neuroscience. The capacity for language, which is specific to humans, has endowed our species with an ability to reason, learn, and accumulate knowledge like no other animal on the planet. Understanding its neural architecture and its computational principles is thus an important path to developing AMI.

Enabling health breakthroughs with open source AI

At Meta, we’re in a unique position to help solve some of the world’s biggest challenges using AI. Our commitment to open source has enabled the AI community to build on our models to achieve their own breakthroughs. Last month, we shared how BrightHeart, a company based in France, is using DINOv2 as part of its AI software to help clinicians identify or rule out signs suggestive of congenital heart defects in fetal heart ultrasounds. Last year, BrightHeart achieved FDA 510(k) clearance for its software, which they attribute in part to Meta’s open source contributions. We also shared how Virgo, a company based in the United States, is using DINOv2 to analyze endoscopy video, achieving state-of-the-art performance in a wide range of AI benchmarks for endoscopy, such as anatomical landmark classification, disease severity scoring for ulcerative colitis, and polyp segmentation.
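To give a sense of how builders typically start from these open models, the sketch below loads a publicly released DINOv2 backbone via torch.hub and uses it as a frozen feature extractor with a small trainable head on top. This is a generic illustration under stated assumptions—the head, the label set, and the frame preprocessing are hypothetical, not BrightHeart’s or Virgo’s actual pipelines.

```python
import torch

# Load a pretrained DINOv2 backbone from the official repo via torch.hub
# (model names per https://github.com/facebookresearch/dinov2).
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

# DINOv2 ViT-S/14 expects image sides that are multiples of 14.
frame = torch.randn(1, 3, 224, 224)   # stand-in for a normalized video frame

with torch.no_grad():
    features = backbone(frame)        # (1, 384) global embedding for ViT-S/14

# Downstream, a lightweight task head (hypothetical, for illustration only)
# can be trained on the frozen features for a clinical classification task.
head = torch.nn.Linear(384, 2)        # e.g. landmark present / absent
logits = head(features)
print(logits.shape)                   # torch.Size([1, 2])
```

Keeping the backbone frozen and training only a small head is a common way to adapt a general-purpose vision model to a specialized domain with limited labeled data.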

As we look toward the next 10 years, it’s exciting to think about how the breakthroughs we shared today could benefit the greater good. We look forward to continuing the important conversations we’re having with the community as we move forward—together—to tackle some of society’s greatest challenges.

Read the paper: From Thought to Action
