Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105542
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Wen, Z | -
dc.creator | Cao, J | -
dc.creator | Yang, R | -
dc.creator | Wang, S | -
dc.date.accessioned | 2024-04-15T07:34:56Z | -
dc.date.available | 2024-04-15T07:34:56Z | -
dc.identifier.isbn | 979-10-95546-34-4 | -
dc.identifier.uri | http://hdl.handle.net/10397/105542 | -
dc.description | 12th International Conference on Language Resources and Evaluation (LREC 2020), May 11-16, 2020, Marseille, France | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © European Language Resources Association (ELRA), licensed under CC-BY-NC | en_US
dc.rights | The following publication Zhiyuan Wen, Jiannong Cao, Ruosong Yang, and Senzhang Wang. 2020. Decode with Template: Content Preserving Sentiment Transfer. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4671–4679, Marseille, France. European Language Resources Association is available at https://aclanthology.org/2020.lrec-1.575. | en_US
dc.title | Decode with template: content preserving sentiment transfer | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 4671 | -
dc.identifier.epage | 4679 | -
dcterms.abstract | Sentiment transfer aims to change the underlying sentiment of input sentences. The two major challenges in existing work lie in (1) effectively disentangling the original sentiment from input sentences; and (2) preserving the semantic content while transferring the sentiment. We find that identifying the sentiment-irrelevant content of input sentences to guide the generation of output sentences addresses both challenges, and we therefore propose the Decode with Template model. We first mask the explicit sentiment words in input sentences and use the remaining parts as templates, eliminating the original sentiment. Then, we feed the templates and the target sentiments into our bidirectionally guided variational auto-encoder (VAE) to generate output. In our method, the template preserves most of the semantics of the input sentence, and the bidirectionally guided decoding captures both forward and backward contextual information; both contribute to better content preservation. We evaluate our method on two review datasets, Amazon and Yelp, with automatic evaluation metrics and human ratings. The results show that our method significantly outperforms state-of-the-art models, especially in content preservation. (Toy sketches of the masking and bidirectional-guidance steps follow this record.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Twelfth International Conference on Language Resources and Evaluation: May 11-16, 2020, Palais du Pharo, Marseille, France, Conference proceedings, p. 4671-4679. Paris, France: ELRA – European Language Resources Association, 2020 | -
dcterms.issued | 2020 | -
dc.relation.ispartofbook | Twelfth International Conference on Language Resources and Evaluation: May 11-16, 2020, Palais du Pharo, Marseille, France, Conference proceedings | -
dc.relation.conference | International Conference on Language Resources and Evaluation [LREC] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-0349 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | PolyU Teaching Development | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 54443933 | en_US
dc.description.oaCategory | CC | en_US
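
The abstract above first builds a sentiment-free template by masking explicit sentiment words. Below is a minimal Python sketch of that masking step; the tiny lexicon and the <MASK> token are hypothetical stand-ins, since the record does not specify the paper's actual sentiment-word detector.

    # Toy sketch of the template-building step from the abstract:
    # explicit sentiment words are masked, and the remaining tokens
    # form a sentiment-free template. The lexicon below is a
    # hypothetical stand-in, not the paper's actual detector.
    SENTIMENT_LEXICON = {
        "great", "awful", "terrible", "delicious", "bland",
        "love", "hate", "amazing", "disappointing",
    }
    MASK = "<MASK>"

    def build_template(sentence):
        """Replace explicit sentiment words with a mask token."""
        tokens = sentence.lower().split()
        return [MASK if t.strip(".,!?") in SENTIMENT_LEXICON else t
                for t in tokens]

    print(build_template("The food was delicious and I love this place!"))
    # ['the', 'food', 'was', '<MASK>', 'and', 'i', '<MASK>', 'this', 'place!']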
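The templates are then decoded with bidirectional guidance, so that both left and right context constrain each generated word. The paper's model is a bidirectionally guided VAE; the sketch below replaces it with two made-up bigram tables purely to illustrate how forward and backward scores can jointly pick the word for a masked slot.

    # Toy illustration of bidirectional guidance: a masked slot is
    # filled by combining a forward score P(w | left neighbor) with a
    # backward score P(w | right neighbor). All probabilities are
    # invented for illustration; the real model is a guided VAE.
    FORWARD = {
        ("was", "great"): 0.4, ("was", "awful"): 0.3, ("was", "open"): 0.3,
    }
    BACKWARD = {
        ("and", "great"): 0.5, ("and", "awful"): 0.4, ("and", "open"): 0.1,
    }

    def fill_slot(left, right, candidates):
        """Pick the candidate best supported by both directions."""
        def score(w):
            return (FORWARD.get((left, w), 1e-6)
                    * BACKWARD.get((right, w), 1e-6))
        return max(candidates, key=score)

    # Fill <MASK> in "the food was <MASK> and cheap"; with a positive
    # target sentiment the candidate list would hold positive words.
    print(fill_slot("was", "and", ["great", "awful", "open"]))  # -> great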
Appears in Collections: Conference Paper
Files in This Item:
File | Size | Format
2020.lrec-1.575.pdf | 448.41 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 37 (as of Jun 22, 2025)
Downloads: 11 (as of Jun 22, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.