Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105492
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Yuan, R | -
dc.creator | Wang, Z | -
dc.creator | Li, W | -
dc.date.accessioned | 2024-04-15T07:34:41Z | -
dc.date.available | 2024-04-15T07:34:41Z | -
dc.identifier.isbn | 978-1-952148-27-9 | -
dc.identifier.uri | http://hdl.handle.net/10397/105492 | -
dc.description | 28th International Conference on Computational Linguistics, December 8-13, 2020, Barcelona, Spain (Online) | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Ruifeng Yuan, Zili Wang, and Wenjie Li. 2020. Fact-level Extractive Summarization with Hierarchical Graph Mask on BERT. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5629–5639, Barcelona, Spain (Online). International Committee on Computational Linguistics is available at https://doi.org/10.18653/v1/2020.coling-main.493. | en_US
dc.title | Fact-level extractive summarization with hierarchical graph mask on BERT | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 5629 | -
dc.identifier.epage | 5639 | -
dc.identifier.doi | 10.18653/v1/2020.coling-main.493 | -
dcterms.abstract | Most current extractive summarization models generate summaries by selecting salient sentences. However, one problem with sentence-level extractive summarization is that there is a gap between the human-written gold summary and the oracle sentence labels. In this paper, we propose to extract fact-level semantic units for better extractive summarization. We also introduce a hierarchical structure, which incorporates multiple levels of granularity of the textual information into the model. In addition, we integrate our model with BERT using a hierarchical graph mask. This allows us to combine BERT's natural language understanding ability with the structural information without increasing the scale of the model. Experiments on the CNN/DailyMail dataset show that our model achieves state-of-the-art results. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 28th International Conference on Computational Linguistics, p. 5629-5639. Barcelona, Spain : International Committee on Computational Linguistics, 2020 | -
dcterms.issued | 2020 | -
dc.relation.ispartofbook | Proceedings of the 28th International Conference on Computational Linguistics | -
dc.relation.conference | International Conference on Computational Linguistics [COLING] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-0157 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 49977398 | en_US
dc.description.oaCategory | CC | en_US
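
The abstract above describes injecting structural information into BERT through a hierarchical graph mask rather than through additional encoder layers. As a rough illustration only (not the authors' implementation), the sketch below shows one way such a mask could be built: token, fact, and sentence nodes share a single sequence, and a boolean mask restricts which nodes may attend to which. The node ordering, connection rules, and all function names here are assumptions made for this example.

# Illustrative sketch (assumptions, not the paper's code): build a hierarchical
# attention mask over [tokens | fact nodes | sentence nodes] and apply it in a
# single masked self-attention step.
import torch
import torch.nn.functional as F

def build_hierarchical_mask(fact_spans, fact_to_sent):
    """fact_spans: list of (start, end) token indices per fact (end exclusive).
    fact_to_sent: list mapping each fact index to its sentence index.
    Returns an (N, N) boolean mask, True = attention allowed, with node order
    [tokens, fact nodes, sentence nodes]."""
    n_tok = max(end for _, end in fact_spans)
    n_fact = len(fact_spans)
    n_sent = max(fact_to_sent) + 1
    n = n_tok + n_fact + n_sent
    mask = torch.eye(n, dtype=torch.bool)                      # every node sees itself
    for f, (s, e) in enumerate(fact_spans):
        fact = n_tok + f
        sent = n_tok + n_fact + fact_to_sent[f]
        mask[s:e, s:e] = True                                  # tokens <-> tokens inside one fact
        mask[s:e, fact] = True; mask[fact, s:e] = True         # tokens <-> their fact node
        mask[fact, sent] = True; mask[sent, fact] = True       # fact node <-> its sentence node
    sent_ids = torch.arange(n_tok + n_fact, n)
    mask[sent_ids.unsqueeze(1), sent_ids.unsqueeze(0)] = True  # sentence nodes <-> sentence nodes
    return mask

def masked_self_attention(hidden, mask):
    """hidden: (N, d) node states; mask: (N, N) bool. Plain single-head attention."""
    d = hidden.size(-1)
    scores = hidden @ hidden.T / d ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ hidden

# Toy usage: 6 tokens split into 3 facts over 2 sentences.
mask = build_hierarchical_mask(fact_spans=[(0, 2), (2, 4), (4, 6)], fact_to_sent=[0, 0, 1])
out = masked_self_attention(torch.randn(6 + 3 + 2, 16), mask)
print(out.shape)  # torch.Size([11, 16])

In a BERT-based setup, a mask of this kind would typically be expanded over batch and attention heads and combined with the padding mask before the attention softmax; the toy call above only checks shapes on random states.
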
Appears in Collections: Conference Paper
Files in This Item:
File | Size | Format
2020.coling-main.493.pdf | 304.66 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.