Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107874
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Lin, L | en_US
dc.creator | Wang, L | en_US
dc.creator | Zhao, X | en_US
dc.creator | Li, J | en_US
dc.creator | Wong, KF | en_US
dc.date.accessioned | 2024-07-15T07:55:27Z | -
dc.date.available | 2024-07-15T07:55:27Z | -
dc.identifier.isbn | 979-8-89176-093-6 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107874 | -
dc.description | The 18th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2024, March 17-22, 2024, Malta | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2024 Association for Computational Linguistics | en_US
dc.rights | Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Luyang Lin, Lingzhi Wang, Xiaoyan Zhao, Jing Li, and Kam-Fai Wong. 2024. IndiVec: An Exploration of Leveraging Large Language Models for Media Bias Detection with Fine-Grained Bias Indicators. In Findings of the Association for Computational Linguistics: EACL 2024, pages 1038–1050, St. Julian’s, Malta. Association for Computational Linguistics is available at https://aclanthology.org/2024.findings-eacl.70/. | en_US
dc.title | IndiVec: an exploration of leveraging large language models for media bias detection with fine-grained bias indicators | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1038 | en_US
dc.identifier.epage | 1050 | en_US
dcterms.abstract | This study focuses on media bias detection, crucial in today’s era of influential social media platforms shaping individual attitudes and opinions. In contrast to prior work that primarily relies on training specific models tailored to particular datasets, resulting in limited adaptability and subpar performance on out-of-domain data, we introduce a general bias detection framework, IndiVec, built upon large language models. IndiVec begins by constructing a fine-grained media bias database, leveraging the robust instruction-following capabilities of large language models and vector database techniques. When confronted with new input for bias detection, our framework automatically selects the most relevant indicator from the vector database and employs majority voting to determine the input’s bias label. IndiVec excels compared to previous methods due to its adaptability (demonstrating consistent performance across diverse datasets from various sources) and explainability (providing explicit top-k indicators to interpret bias predictions). Experimental results on four political bias datasets highlight IndiVec’s significant superiority over baselines. Furthermore, additional experiments and analysis provide profound insights into the framework’s effectiveness. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Findings of the Association for Computational Linguistics: EACL 2024, p. 1038–1050, St. Julian’s, Malta. Association for Computational Linguistics | en_US
dcterms.issued | 2024 | -
dc.relation.conference | Conference of the European Chapter of the Association for Computational Linguistics [EACL] | en_US
dc.description.validate | 202407 bcwh | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | a3031 | -
dc.identifier.SubFormID | 49237 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
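The dcterms.abstract above describes a two-stage framework: an LLM-constructed database of fine-grained bias indicators, and a retrieve-then-vote step that labels new input by majority vote over its most similar indicators. Below is a minimal, self-contained Python sketch of the retrieve-then-vote step only. The embed_text function, the INDICATOR_DB examples, the predict_bias helper, and the choice of k are illustrative assumptions, not the authors' actual models, data, or vector-database implementation.

```python
from collections import Counter
import math


def embed_text(text: str, dim: int = 64) -> list[float]:
    """Toy hashed bag-of-words embedding; a stand-in for a real embedding model."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# Hypothetical fine-grained bias indicators paired with coarse bias labels.
# In the paper these are generated by an LLM from bias-labeled corpora and
# stored in a vector database; here they are a hard-coded in-memory list.
INDICATOR_DB = [
    ("frames the policy as government overreach and wasteful spending", "right"),
    ("frames the policy as protecting workers and expanding public services", "left"),
    ("quotes only think tanks aligned with one side of the debate", "right"),
    ("emphasizes grassroots activists and social-justice framing", "left"),
]
INDICATOR_VECS = [(embed_text(text), label) for text, label in INDICATOR_DB]


def predict_bias(article: str, k: int = 3) -> str:
    """Retrieve the top-k most similar indicators and majority-vote their labels."""
    query = embed_text(article)
    ranked = sorted(INDICATOR_VECS, key=lambda item: cosine(query, item[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    print(predict_bias("The new bill is another case of government overreach."))
```

A real deployment would replace embed_text with the embedding model used to index the indicator database and query an actual vector store for nearest neighbors; the retrieved top-k indicators also serve as the explicit explanation for the prediction that the abstract refers to.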
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2024.findings-eacl.70.pdf |  | 365.43 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.