Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/89106
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Xu, J | en_US
dc.creator | Sun, X | en_US
dc.creator | Zeng, Q | en_US
dc.creator | Ren, X | en_US
dc.creator | Zhang, X | en_US
dc.creator | Wang, H | en_US
dc.creator | Li, W | en_US
dc.date.accessioned | 2021-02-04T02:39:23Z | -
dc.date.available | 2021-02-04T02:39:23Z | -
dc.identifier.uri | http://hdl.handle.net/10397/89106 | -
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2017 Association for Computational Linguistics | en_US
dc.rights | ACL materials are Copyright © 1963–2021 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Xu, J., Sun, X., Zeng, Q., Ren, X., Zhang, X., Wang, H., & Li, W. (2018). Unpaired sentiment-to-sentiment translation: A cycled reinforcement learning approach. Paper presented at the ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 1, 979-988 is available at https://dx.doi.org/10.18653/v1/p18-1090 | en_US
dc.title | Unpaired sentiment-to-sentiment translation : a cycled reinforcement learning approach | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 979 | en_US
dc.identifier.epage | 988 | en_US
dc.identifier.volume | 1 | en_US
dc.identifier.doi | 10.18653/v1/p18-1090 | en_US
dcterms.abstract | The goal of sentiment-to-sentiment “translation” is to change the underlying sentiment of a sentence while keeping its content. The main challenge is the lack of parallel data. To solve this problem, we propose a cycled reinforcement learning method that enables training on unpaired data through the collaboration of a neutralization module and an emotionalization module. We evaluate our approach on two review datasets, Yelp and Amazon. Experimental results show that our approach significantly outperforms the state-of-the-art systems. In particular, the proposed method substantially improves content preservation: the BLEU score is improved from 1.64 to 22.46 and from 0.56 to 14.06 on the two datasets, respectively. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, 15-20 July 2018, (Vol. 1: Long Papers), p. 979-988. Stroudsburg : Association for Computational Linguistics, 2018. | en_US
dcterms.issued | 2018 | -
dc.identifier.scopus | 2-s2.0-85055705783 | -
dc.relation.ispartofbook | Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, 15-20 July 2018 | en_US
dc.relation.conference | Association for Computational Linguistics. Annual Meeting [ACL] | -
dc.description.validate | 202101 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.pubStatus | Published | en_US
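To make the cycle described in the abstract concrete, the sketch below walks through one training cycle: a neutralization module strips sentiment words to isolate content, an emotionalization module re-inserts sentiment of a target polarity, and the quality of reconstructing the original sentence supplies the reward that, per the abstract, removes the need for parallel data. This is an illustrative toy, not the paper's implementation: the word lists, the neutralize/emotionalize functions, and the overlap-based reward are hypothetical stand-ins, and the policy-gradient updates that train both modules are omitted.

```python
# Toy sketch of the cycled loop described in the abstract. NOT the authors'
# code: the word lists, module functions, and reward are hypothetical
# stand-ins, and the reinforcement-learning updates are omitted.
import random

POSITIVE_WORDS = {"great", "delicious", "friendly"}
NEGATIVE_WORDS = {"terrible", "bland", "rude"}
SENTIMENT_WORDS = POSITIVE_WORDS | NEGATIVE_WORDS


def neutralize(words):
    """Neutralization module: drop sentiment words, keep the content."""
    return [w for w in words if w not in SENTIMENT_WORDS]


def emotionalize(content, polarity):
    """Emotionalization module: add a sentiment word of the target polarity."""
    lexicon = POSITIVE_WORDS if polarity == "positive" else NEGATIVE_WORDS
    return content + [random.choice(sorted(lexicon))]


def content_reward(reference, hypothesis):
    """Crude word-overlap stand-in for a BLEU-style content-preservation reward."""
    ref, hyp = set(reference), set(hypothesis)
    return len(ref & hyp) / max(len(ref), 1)


# One cycle on an unpaired sentence: neutralize, flip the sentiment, then
# reconstruct the original polarity. The original sentence itself serves as
# the reference, so no parallel corpus is needed to compute the reward.
original = "the food was delicious".split()
flipped = emotionalize(neutralize(original), "negative")
restored = emotionalize(neutralize(flipped), "positive")
print("flipped: ", " ".join(flipped))
print("restored:", " ".join(restored))
print("reward:  ", round(content_reward(original, restored), 2))
```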
Appears in Collections: Conference Paper

Files in This Item:
File | Description | Size | Format
P18-1090.pdf | - | 423.06 kB | Adobe PDF (View/Open)
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks SFX Query

Page views: 87 (last week: 1), as of Apr 14, 2024
Downloads: 25, as of Apr 14, 2024
Scopus™ citations: 127, as of Apr 19, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.