Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105450
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Liu, S | en_US
dc.creator | Cao, J | en_US
dc.creator | Yang, R | en_US
dc.creator | Wen, Z | en_US
dc.date.accessioned | 2024-04-15T07:34:27Z | -
dc.date.available | 2024-04-15T07:34:27Z | -
dc.identifier.isbn | 978-1-954085-54-1 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105450 | -
dc.description | Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, August 1-6, 2021 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | ©2021 Association for Computational Linguistics | en_US
dc.rights | This publication is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Shuaiqi Liu, Jiannong Cao, Ruosong Yang, and Zhiyuan Wen. 2021. Highlight-Transformer: Leveraging Key Phrase Aware Attention to Improve Abstractive Multi-Document Summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5021–5027, Online. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2021.findings-acl.445. | en_US
dc.title | Highlight-transformer : leveraging key phrase aware attention to improve abstractive multi-document summarization | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 5021 | en_US
dc.identifier.epage | 5027 | en_US
dc.identifier.doi | 10.18653/v1/2021.findings-acl.445 | en_US
dcterms.abstract | Abstractive multi-document summarization aims to generate a comprehensive summary covering salient content from multiple input documents. Compared with previous RNN-based models, Transformer-based models employ the self-attention mechanism to capture the dependencies in input documents and can generate better summaries. However, existing works have not considered key phrases when determining self-attention weights. Consequently, some tokens within key phrases receive only small attention weights, which can prevent the model from fully encoding the key phrases that convey the salient ideas of the input documents. In this paper, we introduce the Highlight-Transformer, a model with a highlighting mechanism in the encoder that assigns greater attention weights to tokens within key phrases. We propose two structures of highlighting attention: highlighting attention for each head, and multi-head highlighting attention. Experimental results on the Multi-News dataset show that our proposed model significantly outperforms competitive baseline models. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, p. 5021-5027. Stroudsburg, PA, USA: Association for Computational Linguistics (ACL), 2021 | en_US
dcterms.issued | 2021 | -
dc.relation.ispartofbook | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 | en_US
dc.relation.conference | Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing [ACL-IJCNLP] | en_US
dc.description.validate | 202402 bcc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-0020 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Hong Kong Red Swastika Society Tai Po Secondary School | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 54445423 | -
dc.description.oaCategory | CC | en_US
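The abstract in this record describes a highlighting mechanism that biases encoder self-attention toward tokens inside key phrases. The sketch below is a minimal, hypothetical illustration of one way such an additive attention bias could be applied; the function name, the scalar `bias` hyperparameter, and the mask convention are assumptions for illustration only and do not reproduce the paper's two specific structures (per-head and multi-head highlighting attention).

```python
import torch
import torch.nn.functional as F

def highlighted_attention(q, k, v, key_phrase_mask, bias=1.0):
    """Scaled dot-product attention with an additive bias toward key-phrase tokens.

    q, k, v:          (batch, heads, seq_len, head_dim)
    key_phrase_mask:  (batch, seq_len), 1.0 for tokens inside key phrases, else 0.0
    bias:             scalar added to the attention logits of key-phrase positions
                      (hypothetical hyperparameter; the paper's scheme may differ)
    """
    d = q.size(-1)
    # Standard scaled dot-product attention logits: (batch, heads, seq, seq)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5
    # Raise the logits of key positions that fall inside key phrases,
    # so those tokens receive larger attention weights after softmax.
    scores = scores + bias * key_phrase_mask[:, None, None, :]
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```

For the exact formulation of the two highlighting structures and how the bias strength is set, consult the published paper via the DOI above.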
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2021.findings-acl.445.pdf | | 289.29 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 46 (as of May 11, 2025)
Downloads: 22 (as of May 11, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.