Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/90395
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | en_US
dc.creator | Portelli, B | en_US
dc.creator | Lenzi, E | en_US
dc.creator | Chersoni, E | en_US
dc.creator | Serra, G | en_US
dc.creator | Santus, E | en_US
dc.date.accessioned | 2021-06-28T07:25:51Z | -
dc.date.available | 2021-06-28T07:25:51Z | -
dc.identifier.uri | http://hdl.handle.net/10397/90395 | -
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics | en_US
dc.rights | © 1963–2021 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/ | en_US
dc.rights | The following publication Portelli, B., Lenzi, E., Chersoni, E., Serra, G., & Santus, E. (2021, April). BERT Prescriptions to Avoid Unwanted Headaches: A Comparison of Transformer Architectures for Adverse Drug Event Detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (pp. 1740-1747) is available at https://www.aclweb.org/anthology/2021.eacl-main.149/ | en_US
dc.title | BERT prescriptions to avoid unwanted headaches : a comparison of transformer architectures for adverse drug event detection | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1740 | en_US
dc.identifier.epage | 1747 | en_US
dcterms.abstract | Pretrained transformer-based models, such as BERT and its variants, have become a common choice to obtain state-of-the-art performances in NLP tasks. In the identification of Adverse Drug Events (ADE) from social media texts, for example, BERT architectures rank first in the leaderboard. However, a systematic comparison between these models has not yet been done. In this paper, we aim at shedding light on the differences between their performance by analyzing the results of 12 models, tested on two standard benchmarks. | en_US
dcterms.abstract | SpanBERT and PubMedBERT emerged as the best models in our evaluation: this result clearly shows that span-based pretraining gives a decisive advantage in the precise recognition of ADEs, and that in-domain language pretraining is particularly useful when the transformer model is trained just on biomedical text from scratch. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, EACL, April 2021, main volume, p. 1740-1747 | en_US
dcterms.issued | 2021-04 | -
dc.relation.ispartofbook | Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume | en_US
dc.relation.conference | Conference of the European Chapter of the Association for Computational Linguistics | en_US
dc.description.validate | 202106 bcvc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a0670-n32 | -
dc.description.pubStatus | Published | en_US
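The abstract above frames ADE detection as a sequence-labelling task handled by pretrained transformer encoders. Below is a minimal sketch of that setup, assuming the Hugging Face transformers library; the checkpoint name and the two-label tag set are illustrative assumptions rather than details taken from the paper, and a real system would first be fine-tuned on an ADE benchmark such as those used in the comparison.

# Minimal sketch: ADE detection as token classification with a pretrained
# transformer encoder. Checkpoint and label set are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = "bert-base-uncased"  # swap in a SpanBERT or PubMedBERT checkpoint to compare encoders
labels = ["O", "ADE"]             # hypothetical tag set: outside vs. adverse-drug-event token

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(labels))
# Note: the classification head is randomly initialised here; fine-tune on ADE data before use.

text = "The new medication gave me a pounding headache all week."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, num_labels)
predictions = logits.argmax(dim=-1)[0].tolist()  # one predicted label index per subword token

for token, label_id in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), predictions):
    print(f"{token}\t{labels[label_id]}")

Swapping the checkpoint string while keeping the rest of the pipeline fixed is, in essence, the kind of substitution a comparison of transformer architectures relies on.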
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2021.eacl-main.149.pdf | | 236.3 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.