Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105724
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Cao, Z | en_US
dc.creator | Li, W | en_US
dc.creator | Li, S | en_US
dc.creator | Wei, F | en_US
dc.creator | Li, Y | en_US
dc.date.accessioned | 2024-04-15T07:36:14Z | -
dc.date.available | 2024-04-15T07:36:14Z | -
dc.identifier.isbn | 978-4-87974-702-0 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105724 | -
dc.description | 26th International Conference on Computational Linguistics, December 11-16, 2016, Osaka, Japan | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | Copyright of each paper stays with the respective authors (or their employers). | en_US
dc.rights | Posted with permission of the author. | en_US
dc.rights | The following publication Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei, and Yanran Li. 2016. AttSum: Joint Learning of Focusing and Summarization with Neural Attention. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 547–556, Osaka, Japan. The COLING 2016 Organizing Committee is available at https://aclanthology.org/C16-1053/. | en_US
dc.title | AttSum: joint learning of focusing and summarization with neural attention | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 547 | en_US
dc.identifier.epage | 556 | en_US
dcterms.abstract | Query relevance ranking and sentence saliency ranking are the two main tasks in extractive query-focused summarization. Previous supervised summarization systems often perform the two tasks in isolation. However, since reference summaries are the trade-off between relevance and saliency, using them as supervision, neither of the two rankers could be trained well. This paper proposes a novel summarization system called AttSum, which tackles the two tasks jointly. It automatically learns distributed representations for sentences as well as the document cluster. Meanwhile, it applies the attention mechanism to simulate the attentive reading of human behavior when a query is given. Extensive experiments are conducted on DUC query-focused summarization benchmark datasets. Without using any hand-crafted features, AttSum achieves competitive performance. We also observe that the sentences recognized to focus on the query indeed meet the query need. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In The 26th International Conference on Computational Linguistics: Proceedings of COLING 2016: Technical Papers, p. 547-556 | en_US
dcterms.issued | 2016 | -
dc.relation.ispartofbook | The 26th International Conference on Computational Linguistics: Proceedings of COLING 2016: Technical Papers | en_US
dc.relation.conference | International Conference on Computational Linguistics [COLING] | en_US
dc.description.validate | 202402 bcc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-1599 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; The Hong Kong Polytechnic University | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 19996042 | -
dc.description.oaCategory | Copyright retained by author | en_US
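The abstract above describes query-aware attention: sentences are scored by relevance to the query, and those scores weight a pooled representation of the document cluster. The following is a minimal illustrative sketch of that idea only, not the paper's actual neural architecture; the function name and the use of plain dot-product scoring over fixed vectors are assumptions for demonstration.

```python
import numpy as np

def attention_pool(sentence_vecs, query_vec):
    """Illustrative query-aware attention pooling (hypothetical simplification).

    Scores each sentence embedding against the query embedding, normalizes
    the scores with a softmax, and returns both the attention weights (a
    proxy for query relevance ranking) and the weighted cluster vector.
    """
    scores = sentence_vecs @ query_vec            # dot-product relevance logits
    weights = np.exp(scores - scores.max())       # numerically stable softmax
    weights /= weights.sum()
    cluster_vec = weights @ sentence_vecs         # attention-weighted pooling
    return weights, cluster_vec

# Toy usage: the sentence most aligned with the query gets the largest weight.
sents = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([1.0, 0.0])
weights, cluster = attention_pool(sents, query)
```

In this toy setup the first sentence points in the same direction as the query, so it receives the highest attention weight; ranking sentences by these weights is the query-relevance side of the joint task the abstract describes.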
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
C16-1053.pdf | | 224.86 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Page views: 12 (as of May 12, 2024)
Downloads: 1 (as of May 12, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.