|Title:||Efficient learning algorithms for image annotation|
|Authors:||Hu, Jiwei|
|Keywords:||Image processing -- Digital techniques; Information retrieval -- Data processing; Hong Kong Polytechnic University -- Dissertations|
|Issue Date:||2014|
|Publisher:||The Hong Kong Polytechnic University|
|Abstract:||The most important challenges in developing successful computer-vision applications are to select efficient and effective features for object or scene representation, to model high-level knowledge effectively, and to incorporate the user's intentions into the computational algorithms. With the rapid growth of digital imagery, searching images efficiently by their content has become one of the biggest challenges, and content-based image retrieval (CBIR) is therefore an exciting and worthwhile research area. However, recent research has shown that there is a significant gap between low-level image features and the semantic concepts humans use to interpret images. In other words, images lack clearly defined semantic units of composition analogous to words in text documents. As an attempt to bridge this so-called semantic gap, automatic image annotation has been gaining increasing attention in recent years. This research explores a number of approaches to automatic image annotation and some related issues. The thesis begins with an introduction to techniques for image description, which form the foundation of research on image auto-annotation, and also reviews state-of-the-art automatic image-annotation techniques. We then present our in-depth research into learning the relationships among the image labels themselves. We introduce a simple label-filtering algorithm that removes most of the irrelevant labels for a query image while retaining the potentially relevant ones. With only a small population of candidate labels left, we then explore the relationship between the features to be used and each label class, so that specific and effective features can be selected for each class to form a label-specific classifier. In other words, our approach selects specific features for each individual label and formulates the annotation task as a discriminative classification problem. We also introduce a new hierarchical model for image annotation that mimics human thinking: labels are divided into several hierarchies, constructed using our proposed Associative Memory Sharing (AMS) method, for efficient and accurate labeling. We further investigate two challenges in image annotation: the semantic-gap problem and the presence of ambiguous words that may lead to wrong interpretations between low-level and high-level information, both of which degrade the performance of the image-to-word mapping. We therefore propose a discriminative model with a ranking function that optimizes the cost between a target word and the corresponding images, while simultaneously discovering the disambiguated senses of those words that are optimal for supervised tasks. For each algorithm proposed in this thesis, we conduct comprehensive experiments on different datasets and compare against a number of state-of-the-art algorithms. Experimental results show the superior performance of our proposed algorithms.|
|Description:||xiv, 91 p. : ill. ; 30 cm. PolyU Library Call No.: [THS] LG51 .H577P EIE 2014 Hu|
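The two-stage pipeline summarized in the abstract (first filter the label vocabulary down to a few candidates for a query image, then run a per-label classifier over label-specific features) could be sketched as follows. This is an illustrative reconstruction only, not the thesis's code; all function names, scores, and thresholds are assumptions.

```python
# Hypothetical sketch of label filtering followed by label-specific
# classification. Scores, feature maps, and classifiers are toy stand-ins.

def filter_labels(label_scores, threshold=0.2):
    """Stage 1: drop labels whose coarse relevance score falls below
    the threshold, leaving a small candidate set for the query image."""
    return [label for label, score in label_scores.items() if score >= threshold]

def label_specific_classify(features, candidates, feature_map, classifiers):
    """Stage 2: for each surviving label, evaluate its own classifier
    on only the feature subset selected for that label."""
    annotations = []
    for label in candidates:
        subset = {k: features[k] for k in feature_map[label] if k in features}
        if classifiers[label](subset):
            annotations.append(label)
    return annotations

# Toy usage: "car" is pruned by filtering; "sky" passes its classifier.
scores = {"sky": 0.9, "car": 0.05, "tree": 0.4}
feature_map = {"sky": ["blue_ratio"], "tree": ["green_ratio"]}
classifiers = {
    "sky": lambda f: f.get("blue_ratio", 0) > 0.5,
    "tree": lambda f: f.get("green_ratio", 0) > 0.5,
}
features = {"blue_ratio": 0.8, "green_ratio": 0.1}
candidates = filter_labels(scores)
result = label_specific_classify(features, candidates, feature_map, classifiers)
```

The point of the second stage is that each label judges the image with only the features found discriminative for that label, rather than one shared feature vector for all labels.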
|URI:||http://hdl.handle.net/10397/6791||Rights:||All rights reserved.|
|Appears in Collections:||Thesis|
Files in This Item:
|b26818218_link.htm||For PolyU Users||203 B||HTML|
|b26818218_ir.pdf||For All Users (Non-printable)||8.11 MB||Adobe PDF|
Citations as of Oct 14, 2018
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.