Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107878
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Zhang, Y | en_US
dc.creator | Li, J | en_US
dc.creator | Li, W | en_US
dc.date.accessioned | 2024-07-15T07:55:29Z | -
dc.date.available | 2024-07-15T07:55:29Z | -
dc.identifier.isbn | 979-8-89176-060-8 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107878 | -
dc.description | The 2023 Conference on Empirical Methods in Natural Language Processing, December 6-10, 2023, Singapore | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2023 Association for Computational Linguistics | en_US
dc.rights | Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Yuji Zhang, Jing Li, and Wenjie Li. 2023. VIBE: Topic-Driven Temporal Adaptation for Twitter Classification. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3340–3354, Singapore. Association for Computational Linguistics is available at https://aclanthology.org/2023.emnlp-main.203/. | en_US
dc.title | VIBE: Topic-driven temporal adaptation for Twitter classification | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3340 | en_US
dc.identifier.epage | 3354 | en_US
dcterms.abstract | Language features evolve in real-world social media, causing text classification performance to deteriorate over time. To address this challenge, we study temporal adaptation, where models trained on past data are tested in the future. Most prior work focused on continued pretraining or knowledge updating, which can compromise performance on noisy social media data. To tackle this issue, we capture feature change by modeling latent topic evolution and propose a novel model, VIBE: Variational Information Bottleneck for Evolutions. Concretely, we first employ two Information Bottleneck (IB) regularizers to distinguish past and future topics. The distinguished topics then serve as adaptive features via multi-task training with timestamp and class label prediction (see the illustrative sketch after this metadata table). In adaptive learning, VIBE exploits unlabeled data retrieved from online streams created after the training data period. Extensive Twitter experiments on three classification tasks show that our model, using only 3% of the data, significantly outperforms previous state-of-the-art continued-pretraining methods. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, p. 3340-3354, Singapore. Association for Computational Linguistics, 2023 | en_US
dcterms.issued | 2023 | -
dc.relation.conference | Conference on Empirical Methods in Natural Language Processing [EMNLP] | en_US
dc.description.validate | 202407 bcwh | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a3033 | -
dc.identifier.SubFormID | 49242 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
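For readers unfamiliar with the setup the abstract describes, here is a minimal sketch of a variational Information Bottleneck regularizer combined with multi-task label and timestamp prediction. It is not the authors' released implementation: the class name VIBEncoder, the dimensions, and the weight beta are all illustrative assumptions made for this sketch.

```python
# Illustrative sketch only: a generic variational Information Bottleneck (IB)
# regularizer with multi-task heads, NOT the authors' VIBE code. All names
# and hyperparameter values below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBEncoder(nn.Module):
    """Maps an input feature vector to a Gaussian latent variable z."""
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(in_dim, z_dim)  # log-variance of q(z|x)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)    # reparameterization trick
        # KL(q(z|x) || N(0, I)): the IB compression term
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl.mean()

# Multi-task heads mirroring the abstract: class label prediction plus
# timestamp (past vs. future) prediction on the same latent features.
encoder = VIBEncoder(in_dim=768, z_dim=64)
label_head = nn.Linear(64, 2)   # downstream classification task
time_head = nn.Linear(64, 2)    # timestamp prediction: past vs. future

x = torch.randn(8, 768)              # stand-in for text encoder outputs
y_label = torch.randint(0, 2, (8,))
y_time = torch.randint(0, 2, (8,))

z, kl = encoder(x)
beta = 1e-3                          # IB trade-off weight (assumed value)
loss = (F.cross_entropy(label_head(z), y_label)
        + F.cross_entropy(time_head(z), y_time)
        + beta * kl)
loss.backward()
```

The KL term compresses the latent representation while the two supervised heads retain information predictive of both class and time; that tension is the multi-task IB balance the abstract points to.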
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2023.emnlp-main.203.pdf | - | 2.11 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks

Page views

38
Citations as of Sep 1, 2024

Downloads

12
Citations as of Sep 1, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.