The human language system in the mind and brain
Evelina Fedorenko
McGovern Institute for Brain Research and Department of Brain & Cognitive Sciences, MIT
The goal of my research program is to understand the computations and representations that enable us to share complex thoughts with one another via language, and their neural implementation. A decade ago, I developed a robust new approach to the study of language in the brain based on identifying language-responsive cortex functionally in individual participants. Originally developed for fMRI, this approach has since been extended to other modalities, such as MEG and electrocorticography. Using this functional-localization approach, I identified and characterized a set of frontal and temporal brain areas that i) support language comprehension and production (spoken and written); ii) are robustly separable from lower-level perceptual (e.g., speech processing) and motor (e.g., articulation) brain areas; iii) are spatially and functionally similar across diverse languages (>40 languages from 11 language families); and iv) form a functionally integrated system with substantial redundancy across its components.

I will highlight three key findings from this work. First, I will show that the language brain regions are highly selective for language over diverse non-linguistic processes—from math and music to executive processes, non-verbal semantic cognition, and even computer code—while also showing a deep and intriguing link with the system that supports social cognition. Second, I will show that, contra many leading accounts, the language regions support both the understanding of word meanings and sentence-structure building, with no part of the language network being selective for syntactic processing. Further, the ‘temporal integration window’ of the language system is only a few words long—in line with the fact that most dependencies among words are local across the world’s languages—and appears to be relatively insensitive to word order.
Finally, I will present recent evidence of predictive coding in the language network during naturalistic comprehension, and show that state-of-the-art artificial neural network language models—optimized for predictive processing—accurately capture neural responses during language comprehension. This latter line of work is a critical first step toward developing mechanistic accounts of language comprehension.