Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/105450
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Computing | en_US |
dc.creator | Liu, S | en_US |
dc.creator | Cao, J | en_US |
dc.creator | Yang, R | en_US |
dc.creator | Wen, Z | en_US |
dc.date.accessioned | 2024-04-15T07:34:27Z | - |
dc.date.available | 2024-04-15T07:34:27Z | - |
dc.identifier.isbn | 978-1-954085-54-1 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/105450 | - |
dc.description | Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, August 1-6, 2021 | en_US |
dc.language.iso | en | en_US |
dc.publisher | Association for Computational Linguistics (ACL) | en_US |
dc.rights | ©2021 Association for Computational Linguistics | en_US |
dc.rights | This publication is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Shuaiqi Liu, Jiannong Cao, Ruosong Yang, and Zhiyuan Wen. 2021. Highlight-Transformer: Leveraging Key Phrase Aware Attention to Improve Abstractive Multi-Document Summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5021–5027, Online. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2021.findings-acl.445. | en_US |
dc.title | Highlight-Transformer: leveraging key phrase aware attention to improve abstractive multi-document summarization | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.spage | 5021 | en_US |
dc.identifier.epage | 5027 | en_US |
dc.identifier.doi | 10.18653/v1/2021.findings-acl.445 | en_US |
dcterms.abstract | Abstractive multi-document summarization aims to generate a comprehensive summary covering the salient content of multiple input documents. Compared with earlier RNN-based models, Transformer-based models employ the self-attention mechanism to capture dependencies in the input documents and can generate better summaries. However, existing works do not consider key phrases when determining self-attention weights, so some tokens within key phrases receive only small attention weights, which can prevent the key phrases conveying the salient ideas of the input documents from being fully encoded. In this paper, we introduce the Highlight-Transformer, a model with a highlighting mechanism in the encoder that assigns greater attention weights to tokens within key phrases. We propose two structures of highlighting attention: highlighting attention for each head and multi-head highlighting attention. Experimental results on the Multi-News dataset show that our proposed model significantly outperforms competitive baseline models. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, p. 5021-5027. Stroudsburg, PA, USA: Association for Computational Linguistics (ACL), 2021 | en_US |
dcterms.issued | 2021 | - |
dc.relation.ispartofbook | Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 | en_US |
dc.relation.conference | Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing [ACL-IJCNLP] | en_US |
dc.description.validate | 202402 bcch | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | COMP-0020 | - |
dc.description.fundingSource | RGC | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | Hong Kong Red Swastika Society Tai Po Secondary School | en_US |
dc.description.pubStatus | Published | en_US |
dc.identifier.OPUS | 54445423 | - |
dc.description.oaCategory | CC | en_US |
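
The abstract above describes a highlighting mechanism that boosts self-attention weights for tokens inside key phrases. Below is a minimal sketch of one way such a per-head highlighting bias could be added to the attention logits; the function name `highlight_attention`, the additive-bias formulation, and the `bias` parameter are illustrative assumptions and not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def highlight_attention(q, k, v, key_phrase_mask, bias=1.0):
    """Scaled dot-product attention with an additive boost on key-phrase tokens.

    q, k, v: (batch, heads, seq_len, d_head) query/key/value projections.
    key_phrase_mask: (batch, seq_len) tensor, 1.0 where a token belongs to a key phrase.
    bias: scalar added to the logits of key-phrase positions (hypothetical knob,
          not a parameter from the paper).
    """
    d_head = q.size(-1)
    # Standard scaled dot-product attention logits: (batch, heads, seq_len, seq_len)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d_head ** 0.5
    # Raise the logits of attended-to tokens that lie inside key phrases,
    # so they receive larger attention weights after the softmax.
    scores = scores + bias * key_phrase_mask[:, None, None, :]
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```

A multi-head variant of this idea could, for instance, use a separate bias value per attention head instead of a single shared scalar.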
Appears in Collections: | Conference Paper |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2021.findings-acl.445.pdf | | 289.29 kB | Adobe PDF | View/Open |