New research suggests the human brain processes spoken language through a step-by-step process that remarkably aligns with how advanced artificial intelligence models operate. This pivotal discovery, detailed in a study published in Nature Communications and led by The Hebrew University of Jerusalem, challenges long-held assumptions about language comprehension.

The findings, initially reported by ScienceDaily on January 21, 2026, indicate that meaning unfolds gradually through context rather than being derived solely from fixed linguistic rules. This shift in perspective opens new avenues for understanding one of our most complex cognitive functions.

Dr. Ariel Goldstein of the Hebrew University, together with collaborators Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University, spearheaded this groundbreaking work. The team uncovered an unexpected similarity between how humans interpret speech and how modern AI systems process text.

Unpacking the brain’s layered language comprehension

Using electrocorticography (ECoG) recordings, the scientists tracked brain activity in participants as they listened to a thirty-minute podcast. They observed that the brain follows a structured processing sequence that mirrors the layered design of large language models such as GPT-2 and Llama 2.
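To make the parallel concrete, here is a minimal sketch of how such layer-by-layer representations can be extracted from GPT-2 with the Hugging Face transformers library. This is an illustration, not the study's actual pipeline, and the short transcript string is a stand-in for the podcast text.

```python
# Minimal sketch (not the study's pipeline): pull layer-wise hidden
# states out of GPT-2 for each token of a transcript, so they can
# later be compared against neural responses.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

transcript = "the story unfolds one word at a time"  # stand-in text

inputs = tokenizer(transcript, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple of (num_layers + 1) tensors, each shaped
# (batch, tokens, hidden_dim): the embedding layer plus 12 blocks.
hidden_states = outputs.hidden_states
print(f"{len(hidden_states)} representations, "
      f"{hidden_states[0].shape[1]} tokens, "
      f"{hidden_states[-1].shape[2]} dims each")
```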

As we process speech, each word moves through a series of neural steps. Early neural signals correspond to basic word features, much like a model's initial processing layers. Deeper brain responses, particularly in higher-level language regions such as Broca's area, align with the models' deeper layers, which integrate context and broader meaning.
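A common way to test this kind of correspondence is a layer-wise encoding analysis: fit a regularized regression from each layer's embeddings to the activity of a given electrode and see which layer predicts it best. The sketch below uses hypothetical placeholder arrays in place of real embeddings and ECoG data; it shows the shape of the analysis, not the authors' implementation.

```python
# Hedged sketch of a layer-wise encoding analysis. The arrays are
# random placeholders standing in for per-word layer embeddings and
# per-word ECoG responses from one electrode.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, n_dims, n_layers = 500, 768, 13
layer_embeddings = rng.standard_normal((n_layers, n_words, n_dims))
electrode_response = rng.standard_normal(n_words)

# Fit one ridge encoding model per layer: the layer whose embeddings
# best predict the electrode's activity suggests which processing
# stage that recording site most resembles.
scores = []
for layer in range(n_layers):
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7))
    scores.append(cross_val_score(ridge, layer_embeddings[layer],
                                  electrode_response, cv=5).mean())

best = int(np.argmax(scores))
print(f"best-predicting layer: {best} (mean CV R^2 = {scores[best]:.3f})")
```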

Dr. Goldstein noted, “What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding.” The parallel points to a shared computational principle underlying both systems.

Rethinking traditional linguistic theories

For decades, language comprehension was explained primarily through fixed symbols and rigid grammatical hierarchies. This research challenges that view, suggesting a more flexible, statistical process in which meaning emerges through dynamic contextual integration rather than the assembly of discrete linguistic building blocks.

The study also tested traditional linguistic features such as phonemes and morphemes. These classic features did not explain real-time brain activity as well as the contextual representations produced by the AI models, supporting the idea that the brain relies more on continuously integrated context than on discrete linguistic units.
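The comparison itself can be framed as two encoding models scored on the same held-out data: one built from symbolic features, one from contextual embeddings. The sketch below uses random placeholder matrices for both feature sets and the neural response; it illustrates the form of the comparison, not the study's data or code.

```python
# Illustrative comparison, not the study's analysis: score the same
# ridge encoding model on classic symbolic features versus contextual
# embeddings. All matrices are random placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_words = 500
symbolic_features = rng.standard_normal((n_words, 50))     # e.g. phoneme/morpheme codes
contextual_features = rng.standard_normal((n_words, 768))  # e.g. one LLM layer
neural_response = rng.standard_normal(n_words)             # one electrode, per word

def encoding_score(features, response):
    """Mean cross-validated R^2 of a ridge encoding model."""
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7))
    return cross_val_score(ridge, features, response, cv=5).mean()

# On real data the reported result is that the contextual score comes
# out higher; with random placeholders both hover near chance.
print("symbolic:  ", encoding_score(symbolic_features, neural_response))
print("contextual:", encoding_score(contextual_features, neural_response))
```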

To advance the field, the team has made the complete set of neural recordings and language features publicly available. This open dataset empowers researchers worldwide to compare language understanding theories and develop computational models that more accurately reflect the human mind’s workings.

The convergence between human brain function and advanced AI models offers profound insights into how we comprehend language. This shift from rule-based to context-driven understanding not only reshapes cognitive science but also underscores AI’s potential as a powerful tool for unraveling the mysteries of the mind.