May 14, 2025
A few million words suffice for children to acquire language. Yet, the brain mechanisms underlying this unique ability remain poorly understood. To address this issue, we investigate neural activity recorded from over 7,400 electrodes implanted in the brains of 46 children, teenagers, and adults for epilepsy monitoring, as they listened to an audiobook version of “The Little Prince”. We then train neural encoding and decoding models using representations, derived either from linguistic theory or from large language models, to map the location, dynamics and development of the language hierarchy in the brain. We find that a broad range of linguistic features is robustly represented across the cortex, even in 2–5-year-olds. Crucially, these representations evolve with age: while fast phonetic features are already present in the superior temporal gyrus of the youngest individuals, slower word-level representations only emerge in the associative cortices of older individuals. Remarkably, this neuro-developmental trajectory is spontaneously captured by large language models: with training, these AI models learned representations that can only be identified in the adult human brain. Together, these findings reveal the maturation of language representations in the developing brain and show that modern AI systems provide a promising tool to model the neural bases of language acquisition.
Written by
Linnea Evanson
Christine Bulteau
Mathilde Chipaux
Georg Dorfmüller
Sarah Ferrand-Sorbets
Emmanuel Raffo
Sarah Rosenberg
Pierre Bourdillon
Jean Remi King
Publisher
arXiv