Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115420
Title: Increasing alignment of large language models with language processing in the human brain
Authors: Gao, C
Ma, Z
Chen, J
Li, P 
Huang, S
Li, J
Issue Date: 2025
Source: Nature Computational Science, published 16 September 2025, https://doi.org/10.1038/s43588-025-00863-0
Abstract: Transformer-based large language models (LLMs) have considerably advanced our understanding of how meaning is represented in the human brain; however, the validity of increasingly large LLMs is being questioned due to their extensive training data and their ability to access context thousands of words long. In this study we investigated whether instruction tuning—another core technique in recent LLMs that goes beyond mere scaling—can enhance models’ ability to capture linguistic information in the human brain. We compared base and instruction-tuned LLMs of varying sizes against human behavioral and brain activity measured with eye-tracking and functional magnetic resonance imaging during naturalistic reading. We show that simply making LLMs larger leads to a closer match with the human brain than fine-tuning them with instructions. These findings have substantial implications for understanding the cognitive plausibility of LLMs and their role in studying naturalistic language comprehension.
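Note on method: model-brain alignment of the kind the abstract describes is commonly quantified with an encoding model, in which hidden states extracted from an LLM are regressed onto fMRI voxel responses and alignment is scored as the cross-validated prediction correlation. The Python sketch below illustrates that general approach on synthetic data; the variable names, data shapes, and ridge penalty are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch of a ridge-regression encoding analysis for LLM-brain
# alignment (illustrative only; the paper's exact pipeline may differ).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins: in practice these come from an LLM's hidden states
# (one vector per word) and from fMRI recordings during naturalistic reading.
n_words, n_dims, n_voxels = 1000, 256, 50
llm_states = rng.standard_normal((n_words, n_dims))
brain = (llm_states @ rng.standard_normal((n_dims, n_voxels)) * 0.1
         + rng.standard_normal((n_words, n_voxels)))  # signal + noise

def brain_alignment(X, Y, alpha=100.0, n_splits=5):
    """Cross-validated per-voxel Pearson r between predicted and measured activity."""
    scores = np.zeros(Y.shape[1])
    for train, test in KFold(n_splits=n_splits).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        pred = model.predict(X[test])
        for v in range(Y.shape[1]):
            scores[v] += np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
    return scores / n_splits

print("mean alignment r =", brain_alignment(llm_states, brain).mean())

Under this scheme, one model "aligns better" with the brain than another when its features yield higher held-out correlations with measured activity, which is how comparisons across model sizes and tuning regimes are typically framed.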
Publisher: Nature Publishing Group
Journal: Nature Computational Science
EISSN: 2662-8457
DOI: 10.1038/s43588-025-00863-0
Rights: © The Author(s) 2025
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
The following publication Gao, C., Ma, Z., Chen, J. et al. Increasing alignment of large language models with language processing in the human brain. Nat Comput Sci (2025) is available at https://doi.org/10.1038/s43588-025-00863-0.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: s43588-025-00863-0.pdf
Size: 9.37 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Version of Record