Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/97879
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.creator | Raval, S | en_US
dc.creator | Sedghamiz, H | en_US
dc.creator | Santus, E | en_US
dc.creator | Alhanai, T | en_US
dc.creator | Ghassemi, M | en_US
dc.creator | Chersoni, E | en_US
dc.date.accessioned | 2023-03-24T07:39:46Z | -
dc.date.available | 2023-03-24T07:39:46Z | -
dc.identifier.isbn | 978-1-955917-10-0 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/97879 | -
dc.description | Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic, November 7–11, 2021 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics | en_US
dc.rights | ©2021 Association for Computational Linguistics | en_US
dc.rights | Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License. (https://creativecommons.org/licenses/by/4.0/) | en_US
dc.rights | The following publication Shivam Raval, Hooman Sedghamiz, Enrico Santus, Tuka Alhanai, Mohammad Ghassemi, and Emmanuele Chersoni. 2021. Exploring a Unified Sequence-To-Sequence Transformer for Medical Product Safety Monitoring in Social Media. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3534–3546, Punta Cana, Dominican Republic. Association for Computational Linguistics is available at https://aclanthology.org/2021.findings-emnlp.300/. | en_US
dc.title | Exploring a unified sequence-to-sequence transformer for medical product safety monitoring in social media | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3534 | en_US
dc.identifier.epage | 3546 | en_US
dc.identifier.doi | 10.18653/v1/2021.findings-emnlp.300 | en_US
dcterms.abstract | Adverse Events (AE) are harmful events resulting from the use of medical products. Although social media may be crucial for early AE detection, the sheer scale of this data makes it logistically intractable to analyze using human agents, with NLP representing the only low-cost and scalable alternative. In this paper, we frame AE Detection and Extraction as a sequence-to-sequence problem using the T5 model architecture and achieve strong performance improvements over the baselines on several English benchmarks (F1 = 0.71, 12.7% relative improvement for AE Detection; Strict F1 = 0.713, 12.4% relative improvement for AE Extraction). Motivated by the strong commonalities between AE tasks, the class imbalance in AE benchmarks, and the linguistic and structural variety typical of social media texts, we propose a new strategy for multi-task training that accounts, at the same time, for task and dataset characteristics. Our approach increases model robustness, leading to further performance gains. Finally, our framework shows some language transfer capabilities, obtaining higher performance than Multilingual BERT in zero-shot learning on French data. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In MF Moens, X Huang, L Specia & SW Yih (Eds.), Findings of the Association for Computational Linguistics: EMNLP 2021, p. 3534–3546, Punta Cana, Dominican Republic. Association for Computational Linguistics, 2021 | en_US
dcterms.issued | 2021-11 | -
dc.relation.conference | Findings of the Association for Computational Linguistics [Findings] | en_US
dc.description.validate | 202303 bcww | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | CBS-0065 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 55712671 | -
dc.description.oaCategory | CC | en_US
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Chersoni_Exploring_Unified_Sequence-To-Sequence.pdf | | 446.43 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 98 (as of May 11, 2025)

Downloads: 33 (as of May 11, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.