A groundbreaking study by Google Research in collaboration with leading universities has revealed remarkable similarities between how Large Language Models (LLMs) and the human brain process natural language during everyday conversations.
The research, published in Nature Human Behaviour, demonstrates that neural activity in the human brain shows linear alignment with the internal contextual embeddings used by LLMs when processing speech and language.
"We discovered that the word-level internal embeddings generated by deep language models match neural activity patterns in brain regions responsible for speech comprehension and production," explains Dr. Mariano Schain, lead researcher at Google Research.
The study analyzed neural activity recorded via intracranial electrodes during natural conversations. Researchers compared these patterns with embeddings from the Whisper speech-to-text model, examining both speech comprehension and production.
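The analysis described above is a form of linear encoding model: per-word embeddings are used to linearly predict per-word neural activity, and the fit is evaluated on held-out words. The following sketch illustrates that idea with synthetic data; the array names, ridge penalty, and train/test split are illustrative assumptions, not the paper's actual pipeline, which used real Whisper embeddings and intracranial recordings.

```python
import numpy as np

# Sketch of a linear encoding model: predict per-word neural activity
# from per-word embeddings. Data is synthetic; real inputs would be
# Whisper embeddings and electrode recordings aligned to word onsets.
rng = np.random.default_rng(0)

n_words, emb_dim, n_electrodes = 500, 64, 10
embeddings = rng.standard_normal((n_words, emb_dim))      # X: one row per word
true_weights = rng.standard_normal((emb_dim, n_electrodes))
neural = embeddings @ true_weights + 0.1 * rng.standard_normal((n_words, n_electrodes))

# Hold out the last 100 words for evaluation
train, test = slice(0, 400), slice(400, 500)

# Ridge regression, closed form: W = (X'X + aI)^-1 X'Y
alpha = 1.0
X, Y = embeddings[train], neural[train]
W = np.linalg.solve(X.T @ X + alpha * np.eye(emb_dim), X.T @ Y)

# Score: correlation between predicted and actual activity, per electrode
pred = embeddings[test] @ W
actual = neural[test]
corrs = [np.corrcoef(pred[:, e], actual[:, e])[0, 1] for e in range(n_electrodes)]
print(f"mean held-out correlation: {np.mean(corrs):.2f}")
```

A significant held-out correlation is what "linear alignment" between embeddings and neural activity means in practice: a single linear map suffices to translate one representation into the other.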
During speech comprehension, the brain follows a clear sequence: As a person hears words, neural activity first appears in speech areas along the superior temporal gyrus, followed by meaning processing in Broca's area hundreds of milliseconds later.
The pattern reverses during speech production: Activity begins in Broca's area as the brain plans what to say, moves to the motor cortex to coordinate articulation, and finally registers in auditory areas as the speaker monitors their own voice.
While the study revealed shared computational principles between LLMs and human brains, key differences exist. Unlike LLMs, which can attend to hundreds of words of context in parallel, the brain analyzes language serially, one word at a time.
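That architectural contrast can be sketched as a toy comparison, assuming a self-attention step as the "parallel" model and a simple leaky-integrator state as the "serial" one; neither is the study's actual model, and the numbers here are random illustrative data.

```python
import numpy as np

rng = np.random.default_rng(1)
context = rng.standard_normal((8, 4))  # 8 "words", 4 features each

# Parallel (LLM-like): every word attends to the whole context at once.
scores = context @ context.T                         # all pairwise similarities in one step
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
parallel_out = weights @ context                     # one matrix multiply covers all words

# Serial (brain-like): strictly one word per step, state carries context forward.
state = np.zeros(4)
serial_steps = []
for word in context:
    state = 0.9 * state + 0.1 * word                 # leaky integration of the input stream
    serial_steps.append(state.copy())
serial_out = np.array(serial_steps)

print(parallel_out.shape, serial_out.shape)          # same outputs, very different dataflow
```

The point of the sketch is the dataflow, not the arithmetic: the parallel path touches every word simultaneously, while the serial path can only see the past through its accumulated state.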
"These findings open new pathways for developing biologically inspired artificial neural networks with enhanced real-world capabilities," notes Dr. Ariel Goldstein, visiting researcher at Google Research.
The research represents a collaboration between Google Research, Princeton University, New York University, and the Hebrew University of Jerusalem. Their work provides valuable insights into both human cognition and artificial intelligence, potentially leading to more advanced language processing systems.
This breakthrough study suggests that despite their different architectures, both artificial and biological systems may utilize similar fundamental principles for understanding and generating language.