Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114022
Title: Can large language models interpret noun-noun compounds? A linguistically-motivated study on lexicalized and novel compounds
Authors: Rambelli, G
Chersoni, E 
Collacciani, C
Bolognesi, M
Issue Date: 2024
Source: In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (v. 1: Long Papers), p. 11823–11835. Bangkok, Thailand: Association for Computational Linguistics, 2024
Abstract: Noun-noun compound interpretation is the task in which a model is given such a construction and asked to provide a paraphrase that makes the semantic relation between the nouns explicit, as in carrot cake is “a cake made of carrots.” The task requires the ability to understand the implicit structured representation of the compound meaning. In this paper, we test to what extent recent Large Language Models can interpret the semantic relation between the constituents of lexicalized English compounds, and whether they can abstract from such semantic knowledge to predict the semantic relation between the constituents of similar but novel compounds by relying on analogical comparisons (e.g., carrot dessert). We test both Surprisal metrics and prompt-based methods to see whether (i) they can correctly predict the relation between constituents, and (ii) the semantic representation of the relation is robust to paraphrasing. Using a dataset of lexicalized and annotated noun-noun compounds, we find that LLMs can infer some semantic relations better than others (with a preference for compounds involving concrete concepts). When challenged to perform abstractions and transfer their interpretations to semantically similar but novel compounds, LLMs show serious limitations.
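As a rough illustration of the Surprisal-based setup mentioned in the abstract, the sketch below scores candidate paraphrases of a compound by their total negative log-likelihood under a causal language model and prefers the lowest-surprisal relation. This is a minimal sketch under stated assumptions, not the authors' implementation: the model (gpt2), the paraphrase templates, and the candidate relations are illustrative choices.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal(text: str) -> float:
    # Total surprisal (negative log-likelihood, in nats) of a sentence.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels, the model returns the mean cross-entropy over predicted tokens.
        loss = model(ids, labels=ids).loss
    return loss.item() * (ids.size(1) - 1)

# Hypothetical paraphrases making the semantic relation of "carrot cake" explicit.
candidates = [
    "A carrot cake is a cake made of carrots.",
    "A carrot cake is a cake used by carrots.",
    "A carrot cake is a cake located in carrots.",
]

# Lower surprisal = the paraphrase (and hence the relation) is more plausible to the LM.
scores = {sentence: surprisal(sentence) for sentence in candidates}
for sentence, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{score:8.2f}  {sentence}")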
Publisher: Association for Computational Linguistics
ISBN: 979-8-89176-094-3
DOI: 10.18653/v1/2024.acl-long.637
Description: 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 11-16 August 2024
Rights: ©2024 Association for Computational Linguistics
ACL materials are Copyright © 1963–2025 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.
The following publication Giulia Rambelli, Emmanuele Chersoni, Claudia Collacciani, and Marianna Bolognesi. 2024. Can Large Language Models Interpret Noun-Noun Compounds? A Linguistically-Motivated Study on Lexicalized and Novel Compounds. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11823–11835, Bangkok, Thailand. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2024.acl-long.637.
Appears in Collections: Conference Paper

Files in This Item:
File: 2024.acl-long.637.pdf (341.37 kB, Adobe PDF)
Open Access Information:
Status: open access
File Version: Version of Record
