Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107873
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Yu, E | en_US
dc.creator | Li, J | en_US
dc.creator | Xu, X | en_US
dc.date.accessioned | 2024-07-15T07:55:27Z | -
dc.date.available | 2024-07-15T07:55:27Z | -
dc.identifier.isbn | 978-2-493814-10-4 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107873 | -
dc.description | The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 20-25 May 2024, Torino, Italy | en_US
dc.language.iso | en | en_US
dc.publisher | ELRA Language Resources Association and the International Committee on Computational Linguistics | en_US
dc.rights | © 2024 ELRA Language Resources Association: CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/deed.en) | en_US
dc.rights | The following publication, Erxin Yu, Jing Li, and Chunpu Xu. 2024. PopALM: Popularity-Aligned Language Models for Social Media Trendy Response Prediction. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12867–12878, Torino, Italia. ELRA and ICCL, is available at https://aclanthology.org/2024.lrec-main.1127/. | en_US
dc.subject | Popularity-aligned language models | en_US
dc.subject | Trendy response prediction | en_US
dc.subject | Curriculum learning | en_US
dc.title | PopALM: popularity-aligned language models for social media trendy response prediction | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 12867 | en_US
dc.identifier.epage | 12878 | en_US
dcterms.abstract | Social media platforms exhibit millions of events daily. To preliminarily predict the mainstream public reaction to these events, we study trendy response prediction: automatically generating top-liked user replies to social media events. While previous work focuses on generating responses without factoring in popularity, we propose Popularity-Aligned Language Models (PopALM), which use reinforcement learning to distinguish responses liked by a larger audience. Recognizing that user "likes" yield noisy labels, we tailor curriculum learning to proximal policy optimization (PPO) to help models capture the essential samples through easy-to-hard training. In experiments, we build a large-scale Weibo dataset for trendy response prediction, and results on it show that PopALM boosts the performance of advanced language models. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), p. 12867–12878, Torino, Italia. ELRA and ICCL. | en_US
dcterms.issued | 2024 | -
dc.relation.ispartofbook | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | en_US
dc.description.validate | 202407 bcwh | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a3031 | -
dc.identifier.SubFormID | 49236 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
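The abstract above notes that PopALM tailors curriculum learning to PPO so training proceeds from easy to hard samples. As an illustration only, the easy-to-hard scheduling idea can be sketched as follows; the function names, the like-count difficulty heuristic, and the two-stage split are assumptions for this sketch, not the paper's implementation:

```python
# Hedged sketch of easy-to-hard curriculum scheduling (not PopALM's actual code).

def curriculum_order(samples, difficulty):
    """Sort training samples from easy to hard by a difficulty score."""
    return sorted(samples, key=difficulty)

def curriculum_batches(samples, difficulty, n_stages):
    """Yield progressively larger slices: stage i trains on the
    easiest (i / n_stages) fraction of the ordered data."""
    ordered = curriculum_order(samples, difficulty)
    for stage in range(1, n_stages + 1):
        cutoff = round(len(ordered) * stage / n_stages)
        yield ordered[:cutoff]

# Toy data: (response, like_count). Here we treat low-like responses as
# "harder" because their popularity labels are noisier (an assumption).
data = [("a", 500), ("b", 3), ("c", 120), ("d", 15)]
stages = list(curriculum_batches(data, difficulty=lambda s: -s[1], n_stages=2))
print([r for r, _ in stages[0]])  # → ['a', 'c'] (easiest half first)
```

In an RL setting, each stage's slice would be fed to the PPO update loop before harder samples are introduced.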
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2024.lrec-main.1127.pdf | | 627.03 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 36 (as of Sep 1, 2024)
Downloads: 10 (as of Sep 1, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.