Publication:

Robust and Multimodal Signals for Language in the Brain

Date

2025-04-01

Citation

Misra, Pranav. 2025. Robust and Multimodal Signals for Language in the Brain. Doctoral Dissertation, Harvard University Graduate School of Arts and Sciences.

Abstract

An impressive aspect of human language is its ability to maintain consistent meaning and grammatical correctness across different sensory modalities. Elucidating the modality-invariant internal representation of language in the brain has major implications for cognitive science, brain disorders, and artificial intelligence. A pillar of linguistic studies is the notion that words have defined functions, often referred to as parts of speech, such as nouns and adjectives. This dissertation addresses two questions: How can we find a multimodal and generalizable representation for language processing? And is there a representation for parts of speech in the brain? To address these questions, I recorded neural responses from 1,801 electrodes in 20 participants with epilepsy while they were presented with two-word minimal phrases, each consisting of an adjective and a noun, in both auditory and visual modalities. I observed neural signals that distinguished between these two parts of speech (POS), localized within a small region of the left lateral orbitofrontal cortex. The representation of POS was invariant across visual and auditory presentation modalities and robust to word properties such as length, order, frequency, and semantics; it also generalized across two different languages in a bilingual participant. These selective signals provide key elements for the compositional processes of language, highlighting a localized and invariant representation of POS. Furthermore, I extended these ideas by evaluating how parts of speech are processed within full sentences. Recording activity from 1,563 electrodes in 17 participants, I found neural signals that separated nouns from verbs in sentences, extending this selective, invariant, and localized POS representation from phrases to sentences. Going beyond POS, I also found multimodal representations for grammar and syntax processing in sentences, a first for neurophysiological signals. In total, I collected and analyzed a dataset of over 60 hours of stimuli, 40,000 two-word phrases, and 15,000 sentences, recording neural responses to audiovisual stimuli from 3,394 stereo-electroencephalography electrode contacts across 37 participants. My findings describe neural representations of language that are both robust and adaptable, contributing to a deeper understanding of core linguistic processes in the human brain. This work lays a foundation for more nuanced studies of language processing. To the best of my knowledge, this is the first study to rigorously and systematically evaluate the invariant representation of parts of speech, and multimodal representations for the semantic and grammatical processing of sentences, at the invasive neurophysiological level in the human brain.
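To make the cross-modal invariance claim concrete, the following is a minimal decoding sketch in the spirit of the analysis described above. It is not the dissertation's actual pipeline: the synthetic data, the time-averaged per-electrode features, and the scikit-learn classifier are all illustrative assumptions. The signature of a modality-invariant POS code is that a classifier trained to separate adjectives from nouns on auditory trials transfers, above chance, to visual trials.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical shapes: trials x electrodes (e.g., time-averaged band power).
# Real analyses would use recorded sEEG responses; here the data are synthetic.
rng = np.random.default_rng(0)
n_trials, n_electrodes = 200, 50
X_auditory = rng.normal(size=(n_trials, n_electrodes))
X_visual = rng.normal(size=(n_trials, n_electrodes))
y_auditory = rng.integers(0, 2, n_trials)  # 0 = adjective, 1 = noun
y_visual = rng.integers(0, 2, n_trials)

# Within-modality decoding: can POS be read out from auditory trials alone?
clf = LogisticRegression(max_iter=1000)
within = cross_val_score(clf, X_auditory, y_auditory, cv=5).mean()

# Cross-modality generalization: train on auditory trials, test on visual.
# Above-chance accuracy here is the signature of a modality-invariant code.
clf.fit(X_auditory, y_auditory)
across = clf.score(X_visual, y_visual)

print(f"within-modality accuracy: {within:.2f}")
print(f"cross-modality accuracy:  {across:.2f}")
```

With the random data above, both scores hover around chance (0.5); on real recordings, above-chance cross-modality accuracy at selective electrodes is what would support the invariance claim in the abstract.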

Keywords

Brain Science, epilepsy patients, iEEG, Language, Nouns, Parts-of-speech, Neurosciences, Artificial intelligence, Linguistics

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
