Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105490
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Zheng, C | -
dc.creator | Cai, Y | -
dc.creator | Zhang, G | -
dc.creator | Li, Q | -
dc.date.accessioned | 2024-04-15T07:34:40Z | -
dc.date.available | 2024-04-15T07:34:40Z | -
dc.identifier.isbn | 978-1-952148-27-9 | -
dc.identifier.uri | http://hdl.handle.net/10397/105490 | -
dc.description | 28th International Conference on Computational Linguistics, December 8-13, 2020, Barcelona, Spain (Online) | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Changmeng Zheng, Yi Cai, Guanjie Zhang, and Qing Li. 2020. Controllable Abstractive Sentence Summarization with Guiding Entities. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5668–5678, Barcelona, Spain (Online). International Committee on Computational Linguistics is available at https://doi.org/10.18653/v1/2020.coling-main.497. | en_US
dc.title | Controllable abstractive sentence summarization with guiding entities | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 5668 | -
dc.identifier.epage | 5678 | -
dc.identifier.doi | 10.18653/v1/2020.coling-main.497 | -
dcterms.abstract | Entities make up a large proportion of text summaries and establish their topic. Although existing text summarization models can achieve promising results on automatic metrics such as ROUGE, they cannot guarantee that a given entity appears in the generated summary. In this paper, we propose a controllable abstractive sentence summarization model that generates summaries with guiding entities. Instead of generating a summary from left to right, we start from a selected entity, generate the part to its left first, and then the part to its right to complete the summary. Compared with previous entity-based text summarization models, our method ensures that entities appear in the final output summaries, rather than generating the complete sentence from implicit entity and article representations. Our model can also generate more novel entities, since they are incorporated into the output directly. To evaluate the informativeness of the proposed model, we develop fine-grained informativeness metrics covering the relevance, extraness, and omission perspectives. We conduct experiments on two widely used sentence summarization datasets, and the results show that our model outperforms state-of-the-art methods on both automatic evaluation scores and the informativeness metrics. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 28th International Conference on Computational Linguistics, p. 5668-5678. Barcelona, Spain : International Committee on Computational Linguistics, 2020 | -
dcterms.issued | 2020 | -
dc.relation.ispartofbook | Proceedings of the 28th International Conference on Computational Linguistics | -
dc.relation.conference | International Conference on Computational Linguistics [COLING] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-0155 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 49984897 | en_US
dc.description.oaCategory | CC | en_US
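The minimal Python sketch below illustrates the entity-anchored decoding order described in the abstract above: the guiding entity is placed in the output first, tokens are then generated to its left, and finally tokens to its right. It is not the authors' implementation; the score_left/score_right callables and the <s>/</s> boundary markers are hypothetical stand-ins for the paper's learned decoders.

# Illustrative sketch only (not the authors' released code): it mimics the decoding
# order from the abstract -- anchor on a guiding entity, grow the summary to the
# left, then to the right -- so the entity appears in the output by construction.
from typing import Callable, List

def generate_around_entity(
    entity: List[str],
    score_left: Callable[[List[str]], str],   # hypothetical: proposes the next token to prepend
    score_right: Callable[[List[str]], str],  # hypothetical: proposes the next token to append
    max_left: int = 20,
    max_right: int = 20,
) -> List[str]:
    """Return a summary token list that is guaranteed to contain `entity`."""
    summary = list(entity)  # the guiding entity seeds the output

    # 1) Expand leftwards until the (assumed) start-of-sentence marker appears.
    for _ in range(max_left):
        token = score_left(summary)
        if token == "<s>":
            break
        summary.insert(0, token)

    # 2) Expand rightwards until the (assumed) end-of-sentence marker appears.
    for _ in range(max_right):
        token = score_right(summary)
        if token == "</s>":
            break
        summary.append(token)

    return summary

if __name__ == "__main__":
    # Toy stand-ins for the left and right decoders, purely for demonstration.
    left_tokens = iter(["say", "police", "<s>"])
    right_tokens = iter(["wins", "the", "election", "</s>"])
    summary = generate_around_entity(
        entity=["John", "Smith"],                     # hypothetical guiding entity
        score_left=lambda prefix: next(left_tokens),
        score_right=lambda prefix: next(right_tokens),
    )
    print(" ".join(summary))  # -> police say John Smith wins the election

Because the entity is inserted before any decoding step, its presence in the final summary holds by construction, which is the controllability property the abstract emphasizes.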
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2020.coling-main.497.pdf | - | 509.96 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 103 (last week: 6), as of Nov 30, 2025
Downloads: 64, as of Nov 30, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.