Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95825
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.creator | Peng, B | en_US
dc.creator | Chersoni, E | en_US
dc.creator | Hsu, YY | en_US
dc.creator | Huang, CR | en_US
dc.date.accessioned | 2022-10-17T07:47:12Z | -
dc.date.available | 2022-10-17T07:47:12Z | -
dc.identifier.isbn | 979-10-95546-74-0 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/95825 | -
dc.description | 4th Financial Narrative Processing Workshop (FNP 2022), Language Resources and Evaluation Conference, 24 June 2022, Marseille, France | en_US
dc.language.iso | en | en_US
dc.publisher | European Language Resources Association (ELRA) | en_US
dc.rights | © European Language Resources Association (ELRA) | en_US
dc.rights | These workshop proceedings are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/) | en_US
dc.rights | The following publication Peng, B., Chersoni, E., Hsu, Y. Y., & Huang, C. R. (2022, June). Discovering Financial Hypernyms by Prompting Masked Language Models. In M. El-Haj, P. Rayson & N. Zmandar (Eds.), Proceedings of the 4th Financial Narrative Processing Workshop (FNP 2022) (pp. 10-16). Paris: European Language Resources Association (ELRA) is available at http://www.lrec-conf.org/proceedings/lrec2022/workshops/FNP/index.html | en_US
dc.subject | Transformers | en_US
dc.subject | Semantic relations | en_US
dc.subject | Language modeling | en_US
dc.subject | Financial natural language processing | en_US
dc.title | Discovering financial hypernyms by prompting masked language models | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 10 | en_US
dc.identifier.epage | 16 | en_US
dcterms.abstract | With the rising popularity of Transformer-based language models, several studies have tried to exploit their masked language modeling capabilities to automatically extract relational linguistic knowledge, although this line of research has rarely investigated semantic relations in specialized domains. The present study tests a general-domain and a domain-adapted Transformer model on two datasets of financial term-hypernym pairs using the prompt methodology. Our results show that the choice of prompt has a critical impact on model performance, and that domain adaptation to financial texts generally improves the models' capacity to associate the target terms with the right hypernyms, although the most successful models are those that retain a general-domain vocabulary. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In M. El-Haj, P. Rayson & N. Zmandar (Eds.), Proceedings of the 4th Financial Narrative Processing Workshop (FNP 2022), pp. 10-16. Paris: European Language Resources Association (ELRA). | en_US
dcterms.issued | 2022-06 | -
dc.relation.conference | Financial Narrative Processing Workshop [FNP] | en_US
dc.publisher.place | Paris, France | en_US
dc.description.validate | 202210 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a1795 | -
dc.identifier.SubFormID | 45959 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Projects ZVYU and ZG9X | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Peng_Financial_Hypernyms_Language.pdf | | 366.24 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.