Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/90538
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.creator | Rambelli, G | en_US
dc.creator | Chersoni, E | en_US
dc.creator | Lenci, A | en_US
dc.creator | Blache, P | en_US
dc.creator | Huang, CR | en_US
dc.date.accessioned | 2021-07-21T06:06:27Z | -
dc.date.available | 2021-07-21T06:06:27Z | -
dc.identifier.uri | http://hdl.handle.net/10397/90538 | -
dc.description | 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Suzhou, China, December 4-7, 2020 | en_US
dc.language.iso | en | en_US
dc.rights | ©2020 Association for Computational Linguistics | en_US
dc.rights | Posted with permission of the publisher and author. | en_US
dc.title | Comparing probabilistic, distributional and transformer-based models on logical metonymy interpretation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 224 | en_US
dc.identifier.epage | 234 | en_US
dcterms.abstract | In linguistics and cognitive science, logical metonymies are defined as type clashes between an event-selecting verb and an entity-denoting noun (e.g. "The editor finished the article"), which are typically interpreted by inferring a hidden event (e.g. reading) on the basis of contextual cues. This paper tackles the problem of logical metonymy interpretation, that is, the retrieval of the covert event via computational methods. We compare different types of models, including the probabilistic and distributional ones previously introduced in the literature on the topic. For the first time, we also test on this task some recent Transformer-based models, such as BERT, RoBERTa, XLNet, and GPT-2. Our results show a complex scenario, in which the best Transformer-based models and some traditional distributional models perform very similarly. However, the low performance on some of the testing datasets suggests that logical metonymy is still a challenging phenomenon for computational modeling. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, p. 224-234 | en_US
dcterms.issued | 2020-12 | -
dc.description.validate | 202107 bcwh | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a0670-n25 | -
dc.description.pubStatus | Published | en_US
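The abstract above describes retrieving the covert event of a logical metonymy (e.g. recovering "reading" in "The editor finished the article"). A minimal toy sketch of a probabilistic approach of the kind the paper compares is shown below; the corpus counts and the scoring factorization are illustrative assumptions, not the paper's actual models or data.

```python
from collections import Counter

# Hypothetical (verb, event, noun) co-occurrence observations, as might
# be extracted from a parsed corpus (entirely made up for illustration).
observations = [
    ("finish", "read", "article"), ("finish", "read", "article"),
    ("finish", "write", "article"),
    ("finish", "read", "book"), ("finish", "read", "book"),
    ("finish", "write", "book"),
    ("finish", "drink", "beer"), ("finish", "drink", "beer"),
]

pair_ve = Counter((v, e) for v, e, n in observations)  # f(verb, event)
pair_en = Counter((e, n) for v, e, n in observations)  # f(event, noun)
events = Counter(e for v, e, n in observations)        # f(event)
total = sum(events.values())

def score(verb, event, noun):
    """Score a candidate covert event as P(e) * P(v|e) * P(n|e)."""
    f_e = events[event]
    if f_e == 0:
        return 0.0
    return (f_e / total) * (pair_ve[(verb, event)] / f_e) * (pair_en[(event, noun)] / f_e)

def interpret(verb, noun):
    """Rank candidate covert events for a verb-noun metonymy, best first."""
    ranked = {e: score(verb, e, noun) for e in events}
    return sorted(ranked.items(), key=lambda kv: -kv[1])

# "The editor finished the article" -> top-ranked covert event
print(interpret("finish", "article")[0][0])  # prints "read"
```

Transformer-based models such as those the paper tests would instead score candidate paraphrases (e.g. "finished reading the article" vs. "finished writing the article") with a language model, replacing the corpus counts above with contextual probabilities.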
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2020.aacl-main.26.pdf | | 377.81 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.