Title: Improving news video annotation with semantic context
Authors: Qiu, Y
Guan, G
Wang, Z
Feng, D
Keywords: Video annotation
Semantic context
Semantic similarity
Issue Date: 2010
Source: Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA 2010), Sydney, Australia, 1-3 Dec. 2010, p. 214-219
Abstract: Automatic video annotation has been proposed to bridge the semantic gap inherent in content-based retrieval and thereby facilitate concept-based video retrieval. Recently, utilizing context information has emerged as an important direction in automatic visual information annotation. Most existing approaches aim to assign linguistic terms to visual information without considering its context. In this paper, we present a novel video annotation approach that utilizes the semantic context extracted from video subtitles. The semantic context of a video shot, formed by a set of key terms identified from the subtitles of the video, is used to refine the initial annotation results by exploiting the semantic similarity between those key terms and the candidate annotation concepts. Similarity measurements, including Google distance and WordNet distance, have been investigated for this refinement. In addition, visualness is utilized to further discriminate individual terms for better refinement granularity. Extensive experiments on the TRECVID 2005 dataset have been conducted to demonstrate the significant improvement achieved by the proposed annotation approach and to investigate the impact of various factors.
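The abstract mentions Google distance as one of the similarity measurements used for refinement. A minimal sketch of the standard Normalized Google Distance formula follows; the function name and the hit counts in the usage comment are illustrative placeholders, not values from the paper:

```python
import math

def normalized_google_distance(fx, fy, fxy, n):
    """Normalized Google Distance between two terms x and y.

    fx, fy : number of pages containing x (resp. y) alone
    fxy    : number of pages containing both x and y
    n      : total number of indexed pages
    Returns 0.0 when the terms always co-occur identically,
    larger values as the terms become less related.
    """
    if fxy == 0:
        return float("inf")  # terms never co-occur
    lfx, lfy, lfxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lfx, lfy) - lfxy) / (math.log(n) - min(lfx, lfy))

# Illustrative call with made-up hit counts:
# normalized_google_distance(1000, 2000, 500, 1e9) ~= 0.10
```

In the paper's setting, such a distance would be computed between each subtitle key term and each candidate annotation concept, with smaller distances indicating a stronger semantic match.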
ISBN: 978-1-4244-8816-2
978-0-7695-4271-3 (E-ISBN)
DOI: 10.1109/DICTA.2010.47
Appears in Collections:Conference Paper

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.