Title: Document-aware graph models for query-oriented multi-document summarization
Authors: Wei, F
Li, W 
He, Y
Issue Date: 2011
Publisher: Springer
Source: In W. Lin, D. Tao, J. Kacprzyk, Z. Li, E. Izquierdo & Wang (Eds.), Multimedia analysis, processing and communications (pp. 655-678). Berlin; Heidelberg: Springer, 2011.
Series/Report no.: Studies in computational intelligence ; v. 346
Abstract: Sentence ranking is the issue of most concern in document summarization. In recent years, graph-based summarization models and sentence ranking algorithms have drawn considerable attention from the extractive summarization community because they can recursively calculate sentence significance from the entire text graph that links sentences together, rather than relying on a single sentence alone. However, when dealing with multi-document summarization, existing sentence ranking algorithms often assemble a set of documents into one large file; the document dimension is ignored. In this work, we develop two alternative models that integrate the document dimension into existing sentence ranking algorithms: the one-layer (i.e. sentence-layer) document-sensitive model and the two-layer (i.e. document- and sentence-layer) mutual reinforcement model. While the former implicitly incorporates a document's influence in sentence ranking, the latter explicitly formulates the mutual reinforcement between documents and sentences during ranking. The effectiveness of the proposed models and algorithms is examined on the DUC query-oriented multi-document summarization data sets.
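The document-sensitive idea described in the abstract can be illustrated with a minimal sketch: a PageRank-style power iteration over a sentence similarity graph in which each edge is additionally scaled by the importance of the target sentence's document. This is a hedged toy illustration under assumed names (`rank_sentences`, `doc_of`, `doc_weight`) and toy similarity scores, not the chapter's actual formulation.

```python
# Hedged sketch of a one-layer document-sensitive sentence ranker.
# All function/variable names and the weighting scheme are hypothetical
# illustrations, not the model defined in the chapter.

def rank_sentences(sim, doc_of, doc_weight, d=0.85, iters=50):
    """Rank sentences by a document-biased PageRank-style iteration.

    sim[i][j]    -- similarity between sentences i and j
    doc_of[i]    -- id of the document containing sentence i
    doc_weight   -- importance of each document (e.g. query relevance)
    d            -- damping factor; iters -- power-iteration steps
    """
    n = len(sim)
    # Bias each edge by the importance of the target sentence's document,
    # so sentences in salient documents accumulate more score mass.
    w = [[sim[i][j] * doc_weight[doc_of[j]] if i != j else 0.0
          for j in range(n)] for i in range(n)]
    # Normalise each sentence's outgoing weight so scores stay a distribution.
    out = [sum(row) or 1.0 for row in w]
    score = [1.0 / n] * n
    for _ in range(iters):
        score = [(1 - d) / n
                 + d * sum(score[i] * w[i][j] / out[i] for i in range(n))
                 for j in range(n)]
    return score
```

For example, with three equally similar sentences where the third comes from a document assumed half as important, the third sentence is ranked below the other two, showing how the document dimension influences sentence scores without a second graph layer.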
ISBN: 9783642195518 (electronic bk.)
3642195512 (electronic bk.)
DOI: 10.1007/978-3-642-19551-8_24
Appears in Collections:Book Chapter
