Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/106689
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Chinese and Bilingual Studies | en_US |
dc.creator | Scaboro, S | en_US |
dc.creator | Portelli, B | en_US |
dc.creator | Chersoni, E | en_US |
dc.creator | Santus, E | en_US |
dc.creator | Serra, G | en_US |
dc.date.accessioned | 2024-06-03T02:11:31Z | - |
dc.date.available | 2024-06-03T02:11:31Z | - |
dc.identifier.issn | 0950-7051 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/106689 | - |
dc.language.iso | en | en_US |
dc.publisher | Elsevier BV | en_US |
dc.rights | © 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Scaboro, S., Portelli, B., Chersoni, E., Santus, E., & Serra, G. (2023). Extensive evaluation of transformer-based architectures for adverse drug events extraction. Knowledge-Based Systems, 275, 110675 is available at https://doi.org/10.1016/j.knosys.2023.110675. | en_US |
dc.subject | Adverse drug events | en_US |
dc.subject | Extraction | en_US |
dc.subject | Side effects | en_US |
dc.subject | Transformers | en_US |
dc.title | Extensive evaluation of transformer-based architectures for adverse drug events extraction | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.volume | 275 | en_US |
dc.identifier.doi | 10.1016/j.knosys.2023.110675 | en_US |
dcterms.abstract | Adverse Drug Event (ADE) extraction is one of the core tasks in digital pharmacovigilance, especially when applied to informal texts. This task has been addressed by the Natural Language Processing community using large pre-trained language models, such as BERT. Despite the great number of Transformer-based architectures used in the literature, it is unclear which of them performs best and why. Therefore, in this paper we perform an extensive evaluation and analysis of 19 Transformer-based models for ADE extraction on informal texts. We compare the performance of all the considered models on two datasets with increasing levels of informality (forum posts and tweets). We also combine the purely Transformer-based models with two commonly used additional processing layers (CRF and LSTM), and analyze their effect on the models' performance. Furthermore, we use a well-established feature importance technique (SHAP) to correlate the performance of the models with a set of features that describe them: model category (AutoEncoding, AutoRegressive, Text-to-Text), pre-training domain, training from scratch, and model size in number of parameters. At the end of our analyses, we identify a list of take-home messages that can be derived from the experimental data. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Knowledge-based systems, 5 Sept 2023, v. 275, 110675 | en_US |
dcterms.isPartOf | Knowledge-based systems | en_US |
dcterms.issued | 2023-09 | - |
dc.identifier.scopus | 2-s2.0-85162167418 | - |
dc.identifier.eissn | 1872-7409 | en_US |
dc.identifier.artn | 110675 | en_US |
dc.description.validate | 202405 bcch | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | a2727a | - |
dc.identifier.SubFormID | 48134 | - |
dc.description.fundingSource | Self-funded | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
dc.relation.rdata | https://github.com/AilabUdineGit/ade-detection-survey | en_US |
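The abstract above frames ADE extraction as sequence labeling over informal text with a pre-trained Transformer encoder, optionally topped with a CRF or LSTM layer. Below is a minimal, hypothetical sketch of that setup using the Hugging Face transformers library; the checkpoint name, the BIO label scheme, and the example sentence are illustrative assumptions, not details taken from the paper (the authors' actual code is in the repository linked under dc.relation.rdata above).

```python
# Illustrative sketch only (not the authors' code): ADE extraction as
# BIO token classification with a pre-trained Transformer encoder.
# Checkpoint and label set are assumptions for demonstration purposes.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-uncased"   # any of the 19 evaluated checkpoints could be swapped in
LABELS = ["O", "B-ADE", "I-ADE"]   # assumed BIO scheme for ADE spans

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

text = "this new med gives me a pounding headache"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Greedy per-token decoding. The paper also evaluates CRF and LSTM heads
# on top of the encoder; a CRF head would replace this argmax step with
# Viterbi decoding over the label sequence.
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, pid in zip(tokens, pred_ids):
    print(f"{tok}\t{LABELS[pid]}")
```

Note that the token-classification head here is randomly initialized, so the printed labels are meaningless until the model is fine-tuned on ADE-annotated data such as the forum-post and tweet datasets the paper evaluates on.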
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
1-s2.0-S0950705123004252-main.pdf | | 1.89 MB | Adobe PDF | View/Open |
Page views: 2 (as of Jun 30, 2024)
Downloads: 3 (as of Jun 30, 2024)
SCOPUS™ citations: 4 (as of Jun 21, 2024)
Web of Science™ citations: 3 (as of Jun 27, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.