Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112805
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Lai, W | en_US
dc.creator | Xie, H | en_US
dc.creator | Xu, G | en_US
dc.creator | Li, Q | en_US
dc.date.accessioned | 2025-05-09T00:55:04Z | -
dc.date.available | 2025-05-09T00:55:04Z | -
dc.identifier.uri | http://hdl.handle.net/10397/112805 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US
dc.rights | The following publication W. Lai, H. Xie, G. Xu and Q. Li, "RVISA: Reasoning and Verification for Implicit Sentiment Analysis," in IEEE Transactions on Affective Computing, vol. 16, no. 3, pp. 1760-1771, July-Sept. 2025 is available at https://doi.org/10.1109/TAFFC.2025.3537799. | en_US
dc.subject | Chain-of-thought | en_US
dc.subject | Implicit sentiment analysis | en_US
dc.subject | Large language models | en_US
dc.subject | Multi-task learning | en_US
dc.title | RVISA : reasoning and verification for implicit sentiment analysis | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1760 | en_US
dc.identifier.epage | 1771 | en_US
dc.identifier.volume | 16 | en_US
dc.identifier.issue | 3 | en_US
dc.identifier.doi | 10.1109/TAFFC.2025.3537799 | en_US
dcterms.abstract | Given the increasing societal demand for fine-grained sentiment analysis (SA), implicit sentiment analysis (ISA) poses a significant challenge owing to the absence of salient cue words in expressions. Reliable reasoning is therefore required to understand how a sentiment is evoked before it can be identified. In the era of large language models (LLMs), encoder-decoder (ED) LLMs have emerged as popular backbone models for SA applications, given their impressive text comprehension and reasoning capabilities across diverse tasks. In comparison, decoder-only (DO) LLMs exhibit superior natural language generation and in-context learning capabilities; however, their responses may contain misleading or inaccurate information. To identify implicit sentiments accurately and with reliable reasoning, this study introduces a two-stage reasoning framework named Reasoning and Verification for Implicit Sentiment Analysis (RVISA), which leverages the generation ability of DO LLMs and the reasoning ability of ED LLMs to train an enhanced reasoner. The framework uses three-hop reasoning prompting to explicitly furnish sentiment elements as cues; the generated rationales are then used to fine-tune an ED LLM into a skilled reasoner. Additionally, we develop a straightforward yet effective answer-based verification mechanism to ensure the reliability of reasoning learning. Evaluation on two benchmark datasets demonstrates that the proposed method achieves state-of-the-art performance in ISA. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on affective computing, July-Sept 2025, v. 16, no. 3, p. 1760-1771 | en_US
dcterms.isPartOf | IEEE transactions on affective computing | en_US
dcterms.issued | 2025-07 | -
dc.identifier.scopus | 2-s2.0-85217495515 | -
dc.identifier.eissn | 1949-3045 | en_US
dc.description.validate | 202505 bcch | en_US
dc.description.oaVersion | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The Faculty Research Grants (DB24A4 and DB24C5) at Lingnan University, Hong Kong | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
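
The abstract describes a concrete two-stage pipeline: a decoder-only LLM is prompted with three-hop reasoning to generate rationales, the rationales are screened by an answer-based verification step, and the surviving rationales are used to fine-tune an encoder-decoder reasoner. The sketch below illustrates only the rationale-generation and verification stage, under stated assumptions: the THOR-style prompt wording, the {positive, neutral, negative} label set, the reading that verification keeps a rationale only when its final answer matches the gold label, and all function names (three_hop_prompt, parse_answer, collect_verified_rationales, do_llm_generate) are illustrative, not taken from the paper or its code.

# Illustrative sketch of the RVISA rationale-generation and
# verification stage described in the abstract. Names, prompt
# wording, and the verification rule are assumptions made for
# illustration; they are not taken from the paper's code.

from dataclasses import dataclass

# Assumed polarity label set for the ISA benchmarks.
LABELS = ("positive", "neutral", "negative")


@dataclass
class Example:
    sentence: str
    aspect: str
    gold_label: str  # ground-truth polarity


def three_hop_prompt(ex: Example) -> str:
    """Build a three-hop chain-of-thought prompt that surfaces
    sentiment elements (aspect -> opinion -> polarity) as explicit cues.
    The wording is a THOR-style approximation, not the paper's prompt."""
    return (
        f"Sentence: {ex.sentence}\n"
        f"Hop 1: The sentence discusses the aspect '{ex.aspect}'.\n"
        f"Hop 2: What is the implicit opinion about this aspect?\n"
        f"Hop 3: Given that opinion, what is the sentiment polarity\n"
        f"(positive, neutral, or negative)? Reason step by step and\n"
        f"finish with the line 'Answer: <polarity>'."
    )


def parse_answer(rationale: str) -> str | None:
    """Extract the final polarity from a generated rationale, if any."""
    if "Answer:" not in rationale:
        return None
    tail = rationale.rsplit("Answer:", 1)[1].strip().lower()
    for label in LABELS:
        if tail.startswith(label):
            return label
    return None


def collect_verified_rationales(dataset, do_llm_generate):
    """Stage 1: a decoder-only LLM generates a rationale per example;
    the answer-based check keeps a rationale only when its final answer
    matches the gold label, so the encoder-decoder reasoner is later
    fine-tuned only on reasoning chains that reach the right conclusion."""
    verified = []
    for ex in dataset:
        rationale = do_llm_generate(three_hop_prompt(ex))
        if parse_answer(rationale) == ex.gold_label:
            verified.append((ex, rationale))
    return verified


if __name__ == "__main__":
    # Stub standing in for a decoder-only LLM call.
    def fake_generate(prompt: str) -> str:
        return "The tone implies satisfaction. Answer: positive"

    data = [
        Example("The waiter never came back.", "service", "negative"),
        Example("I'd go again just for the view.", "view", "positive"),
    ]
    kept = collect_verified_rationales(data, fake_generate)
    print(f"kept {len(kept)} of {len(data)} rationales")  # kept 1 of 2

As we read the abstract, the design point the sketch captures is that verification is answer-based: no second model judges the rationale text; agreement between the parsed final answer and the gold label is the filter, which keeps the mechanism straightforward while discarding rationales that reason toward a wrong conclusion.
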
Appears in Collections: Journal/Magazine Article

Files in This Item:
File | Description | Size | Format
Lai_RVISA_Reasoning_Verification.pdf |  | 3.53 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Scopus citations: 1 (as of Oct 3, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.