AI gives clues to how the brain processes language


Predicting the next word someone might say – as AI algorithms now do when you search the internet or text a friend – may be a key part of the human brain’s ability to process language, new research suggests.

Why it matters: How the brain makes sense of language is a long-standing question in neuroscience. The new study demonstrates how AI algorithms that aren’t designed to mimic the brain can nonetheless help scientists understand it.

  • “No one has been able to do the full pipeline from word input to neural mechanism through behavioral output,” says Martin Schrimpf, an MIT PhD student and author of the new paper published this week in PNAS.

What they did: The researchers compared 43 machine-learning language models, including OpenAI’s GPT-2, which is optimized to predict the next word in a text, to brain-scan data on how neurons respond when someone reads or hears language.

  • They gave each model words and measured the response of nodes in artificial neural networks that, like neurons in the brain, transmit information.
  • These responses were then compared to the activity of neurons – measured by functional magnetic resonance imaging (fMRI) or electrocorticography – while people performed different language tasks.
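The comparison described above can be sketched in miniature. This is an illustrative toy, not the authors’ pipeline: the activations, the simulated neural responses, and the single-feature linear fit are invented stand-ins for the study’s actual regression between high-dimensional model activations and fMRI/electrocorticography recordings.

```python
# Toy "brain score": fit a linear map from a model's activations to
# measured neural responses, then correlate predictions with held-out data.
# All numbers are simulated for demonstration.
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def brain_score(activations, neural_responses):
    """Fit y = a*x + b on the first half of stimuli, score on the rest."""
    n = len(activations) // 2
    train_x, train_y = activations[:n], neural_responses[:n]
    # Ordinary least squares for a single feature
    mx, my = statistics.mean(train_x), statistics.mean(train_y)
    a = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
        sum((x - mx) ** 2 for x in train_x)
    b = my - a * mx
    preds = [a * x + b for x in activations[n:]]
    return pearson(preds, neural_responses[n:])

random.seed(0)
# Simulated model activations for 20 word stimuli, and neural responses
# that partly track them (signal plus noise)
acts = [random.gauss(0, 1) for _ in range(20)]
neural = [0.8 * a + random.gauss(0, 0.5) for a in acts]
print(round(brain_score(acts, neural), 2))
```

In the study itself, this kind of fit is done per voxel or electrode and cross-validated; the single scalar here only conveys the shape of the idea.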

What they found: The activity of the nodes in the AI models that are best at next-word prediction was most similar to the activity of neurons in the human brain.

  • These models were also better at predicting how long it took someone to read a text – a behavioral response.
  • Models that excelled at other language tasks, such as filling in a blank in a sentence, did not predict brain responses as well.
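The across-model finding can be made concrete with a rank correlation. The model names and scores below are invented, not taken from the paper; they only show how one would quantify the reported relationship between next-word-prediction skill and brain predictivity.

```python
# Hypothetical (model, next-word score, brain predictivity) tuples -- made up
model_scores = [
    ("untrained_small", 0.05, 0.30),
    ("fill_in_blank",   0.20, 0.35),
    ("lstm_lm",         0.45, 0.55),
    ("gpt2_small",      0.70, 0.75),
    ("gpt2_xl",         0.85, 0.90),
]

def ranks(values):
    """Rank positions (1 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation (no-ties formula)."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

nwp = [s for _, s, _ in model_scores]
brain = [b for _, _, b in model_scores]
print(spearman(nwp, brain))  # → 1.0 (identical rankings in this toy data)
```

A rank correlation near 1 would mean the models that are best at next-word prediction are also the best predictors of brain activity, which is the pattern the study reports.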

Yes, but: This is not direct evidence of how the brain processes language, says Evelina Fedorenko, a professor of cognitive neuroscience at MIT and an author of the study. “But it is a very suggestive and much more powerful source of evidence than anything we have had.”

  • The discovery may not be enough to explain how humans extract meaning from language, Stanford psychologist Noah Goodman told Scientific American, though he agreed with Fedorenko that the method is a big step forward for the field.

The intrigue: There was one solid but puzzling finding, says Schrimpf.

  • AI models can be trained on massive amounts of text, or they can be left untrained.
  • Schrimpf said he expected untrained models to give poor predictions of brain responses, but the team found that they still predicted them reasonably well.
  • There might be an inherent structure that pushes untrained models in the right direction, he says. Humans may be similar – our untrained brains are “a good starting point” that real-world experience then optimizes.

The bottom line: A common criticism of research comparing AI and neuroscience is that both are black boxes, Fedorenko says. “This is obsolete. There are new tools for probing.”
