Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106697
DC Field / Value / Language
dc.contributor: Department of Chinese and Bilingual Studies (en_US)
dc.creator: Rambelli, G (en_US)
dc.creator: Chersoni, E (en_US)
dc.creator: Testa, D (en_US)
dc.creator: Blache, P (en_US)
dc.creator: Lenci, A (en_US)
dc.date.accessioned: 2024-06-03T02:11:35Z
dc.date.available: 2024-06-03T02:11:35Z
dc.identifier.issn: 1756-8757 (en_US)
dc.identifier.uri: http://hdl.handle.net/10397/106697
dc.language.iso: en (en_US)
dc.publisher: Wiley-Blackwell Publishing, Inc. (en_US)
dc.rights: © 2024 The Authors. Topics in Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (en_US)
dc.rights: This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made. (en_US)
dc.rights: The following publication Rambelli, G., Chersoni, E., Testa, D., Blache, P. and Lenci, A. (2024), Neural Generative Models and the Parallel Architecture of Language: A Critical Review and Outlook. Top. Cogn. Sci. is available at https://doi.org/10.1111/tops.12733. (en_US)
dc.subject: Enriched composition (en_US)
dc.subject: GPT-3 prompting (en_US)
dc.subject: Neural large language models (en_US)
dc.subject: Parallel architecture (en_US)
dc.subject: Semantic composition (en_US)
dc.subject: Statistical learning (en_US)
dc.subject: Syntax-semantics interface (en_US)
dc.title: Neural generative models and the parallel architecture of language: a critical review and outlook (en_US)
dc.type: Journal/Magazine Article (en_US)
dc.identifier.doi: 10.1111/tops.12733 (en_US)
dcterms.abstract: According to the parallel architecture, syntactic and semantic information processing are two separate streams that interact selectively during language comprehension. While considerable effort is put into psycho- and neurolinguistics to understand the interchange of processing mechanisms in human comprehension, the nature of this interaction in recent neural Large Language Models remains elusive. In this article, we revisit influential linguistic and behavioral experiments and evaluate the ability of a large language model, GPT-3, to perform these tasks. The model can solve semantic tasks autonomously from syntactic realization in a manner that resembles human behavior. However, the outcomes present a complex and variegated picture, leaving open the question of how Language Models could learn structured conceptual representations. (en_US)
dcterms.accessRights: open access (en_US)
dcterms.bibliographicCitation: Topics in cognitive science, First published: 18 April 2024, Early View, https://doi.org/10.1111/tops.12733 (en_US)
dcterms.isPartOf: Topics in cognitive science (en_US)
dcterms.issued: 2024
dc.identifier.scopus: 2-s2.0-85191004644
dc.identifier.eissn: 1756-8765 (en_US)
dc.description.validate: 202405 bcch (en_US)
dc.description.oa: Version of Record (en_US)
dc.identifier.FolderNumber: a2727b
dc.identifier.SubFormID: 48142
dc.description.fundingSource: RGC (en_US)
dc.description.pubStatus: Published (en_US)
dc.description.oaCategory: CC (en_US)
dc.relation.rdata: https://osf.io/c7f4u/?view_only=201206d3d8574e9b84429419be4587e7 (en_US)
Appears in Collections: Journal/Magazine Article
Files in This Item:
File: Rambelli_Neural_Generative_Models.pdf    Size: 459.28 kB    Format: Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.