Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105574
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Ye, H | en_US
dc.creator | Li, W | en_US
dc.creator | Wang, L | en_US
dc.date.accessioned | 2024-04-15T07:35:08Z | -
dc.date.available | 2024-04-15T07:35:08Z | -
dc.identifier.isbn | 978-1-950737-48-2 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105574 | -
dc.description | 57th Annual Meeting of the Association for Computational Linguistics (ACL), July 28-August 2, 2019, Florence, Italy | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2019 Association for Computational Linguistics | en_US
dc.rights | This publication is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Hai Ye, Wenjie Li, and Lu Wang. 2019. Jointly Learning Semantic Parser and Natural Language Generator via Dual Information Maximization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2090–2101, Florence, Italy. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/P19-1201. | en_US
dc.title | Jointly learning semantic parser and natural language generator via dual information maximization | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 2090 | en_US
dc.identifier.epage | 2101 | en_US
dc.identifier.doi | 10.18653/v1/P19-1201 | en_US
dcterms.abstract | Semantic parsing aims to transform natural language (NL) utterances into formal meaning representations (MRs), whereas an NL generator achieves the reverse: producing an NL description for some given MRs. Despite this intrinsic connection, the two tasks are often studied separately in prior work. In this paper, we model the duality of these two tasks via a joint learning framework, and demonstrate its effectiveness in boosting the performance on both tasks. Concretely, we propose a novel method of dual information maximization (DIM) to regularize the learning process, where DIM empirically maximizes the variational lower bounds of expected joint distributions of NL and MRs. We further extend DIM to a semi-supervised setup (SemiDIM), which leverages unlabeled data of both tasks. Experiments on three datasets of dialogue management and code generation (and summarization) show that performance on both semantic parsing and NL generation can be consistently improved by DIM, in both supervised and semi-supervised setups. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In The 57th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, p. 2090-2101. Stroudsburg, PA, USA: Association for Computational Linguistics (ACL), 2019 | en_US
dcterms.issued | 2019 | -
dc.relation.ispartofbook | The 57th Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference | en_US
dc.relation.conference | Annual Meeting of the Association for Computational Linguistics [ACL] | en_US
dc.description.validate | 202402 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-0549 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 20003916 | -
dc.description.oaCategory | CC | en_US
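The abstract above describes training a semantic parser p(MR | NL) and an NL generator q(NL | MR) jointly, with a dual information maximization (DIM) regularizer over the joint distribution of NL and MRs. The sketch below is only an illustration of that joint-training idea, not the authors' released implementation or their exact variational objective: the model classes, the `log_prob(..., given=...)` interface, the priors, and the weight `lam` are all hypothetical placeholders.

```python
# Illustrative sketch only: one joint loss for a parser p(mr | nl) and a
# generator q(nl | mr), with a regularizer that encourages the two
# directions to agree on the joint log-probability of a (nl, mr) pair.
# All model interfaces here are assumptions, not the paper's API.
import torch

def dual_regularized_loss(parser, generator, nl_prior, mr_prior,
                          nl_batch, mr_batch, lam=0.1):
    # Supervised negative log-likelihoods for each direction.
    loss_parse = -parser.log_prob(mr_batch, given=nl_batch)   # -log p(mr | nl)
    loss_gen = -generator.log_prob(nl_batch, given=mr_batch)  # -log q(nl | mr)

    # Two estimates of the joint log-probability log P(nl, mr):
    #   parsing direction:    log p_nl(nl) + log p(mr | nl)
    #   generation direction: log p_mr(mr) + log q(nl | mr)
    joint_parse = nl_prior.log_prob(nl_batch) - loss_parse
    joint_gen = mr_prior.log_prob(mr_batch) - loss_gen

    # Duality regularizer: pull the two joint estimates together, standing in
    # for the variational lower-bound maximization described in the abstract.
    dual_reg = (joint_parse - joint_gen).pow(2).mean()

    return loss_parse.mean() + loss_gen.mean() + lam * dual_reg
```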
Appears in Collections: Conference Paper
Files in This Item:
File | Size | Format
P19-1201.pdf | 777.85 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.