Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/100016
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | -
dc.creator | Cong, Y | -
dc.creator | Chersoni, E | -
dc.creator | Hsu, Y Y | -
dc.creator | Lenci, A | -
dc.date.accessioned | 2023-07-28T03:36:44Z | -
dc.date.available | 2023-07-28T03:36:44Z | -
dc.identifier.isbn | 978-1-959429-76-0 | -
dc.identifier.uri | http://hdl.handle.net/10397/100016 | -
dc.description | The 12th Joint Conference on Lexical and Computational Semantics, July 13th-14th 2023, co-located with ACL 2023, Toronto, Ontario, Canada. | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics | en_US
dc.rights | ©2023 Association for Computational Linguistics | en_US
dc.rights | ACL materials are Copyright © 1963–2023 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License (https://creativecommons.org/licenses/by-nc-sa/3.0/). Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Yan Cong, Emmanuele Chersoni, Yu-yin Hsu, and Alessandro Lenci. 2023. Are Language Models Sensitive to Semantic Attraction? A Study on Surprisal. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 141–148, Stroudsburg, PA: Association for Computational Linguistics is available at https://aclanthology.org/2023.starsem-1.13. | en_US
dc.title | Are language models sensitive to semantic attraction? A study on surprisal | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 141 | -
dc.identifier.epage | 148 | -
dcterms.abstract | In psycholinguistics, semantic attraction is a sentence processing phenomenon in which a given argument violates the selectional requirements of a verb, but this violation is not perceived by comprehenders due to its attraction to another noun in the same sentence, which is syntactically unrelated but semantically sound. In our study, we use autoregressive language models to compute the sentence-level and the target phrase-level Surprisal scores of a psycholinguistic dataset on semantic attraction. Our results show that the models are sensitive to semantic attraction, leading to reduced Surprisal scores, although none of them perfectly matches the human behavioral patterns. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In The 12th Joint Conference on Lexical and Computational Semantics, Proceedings of the Conference (*SEM 2023), 2023, p. 141-148. Stroudsburg, PA: Association for Computational Linguistics, 2023 | -
dcterms.issued | 2023 | -
dc.relation.ispartofbook | The 12th Joint Conference on Lexical and Computational Semantics: Proceedings of the Conference (*SEM 2023) | -
dc.relation.conference | SEM 2023 | -
dc.publisher.place | Stroudsburg, PA | en_US
dc.description.validate | 202307 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a2338 | en_US
dc.identifier.SubFormID | 47536 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
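
The abstract above describes computing sentence-level and target phrase-level Surprisal scores with autoregressive language models. As an informal illustration only (not the authors' code; the specific models, stimuli, and log base are not stated in this record and are assumed here), the Python sketch below computes token-level Surprisal with GPT-2 through the Hugging Face Transformers library and sums it into a sentence-level score; restricting the sum to the tokens of the target phrase would give the phrase-level score.

    # Minimal sketch, assuming GPT-2 and the Hugging Face Transformers API.
    # Surprisal(w_i) = -log2 P(w_i | w_1 ... w_{i-1}); summing over all tokens
    # gives a sentence-level score, summing over a target span a phrase-level one.
    import math

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def token_surprisals(sentence: str):
        """Return (token, surprisal in bits) for every token after the first."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits                              # [1, seq_len, vocab]
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)       # P(next token | left context)
        targets = ids[0, 1:]                                        # tokens being predicted
        nats = -log_probs[torch.arange(targets.size(0)), targets]
        bits = (nats / math.log(2)).tolist()
        return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), bits))

    # Illustrative sentence only; the actual stimuli come from the
    # psycholinguistic dataset on semantic attraction used in the paper.
    pairs = token_surprisals("The journalist was interviewed by the editor.")
    print(f"sentence-level Surprisal: {sum(s for _, s in pairs):.2f} bits")
    for tok, s in pairs:
        print(f"{tok:>12}  {s:6.2f}")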
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Cong_Language_Models_Sensitive.pdf | - | 785.41 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.