Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/3035
Title: An elastic graph dynamic link model for invariant vision object recognition
Authors: Lee, Shu-tak Raymond
Keywords: Pattern recognition systems
Hong Kong Polytechnic University -- Dissertations
Issue Date: 2000
Publisher: The Hong Kong Polytechnic University
Abstract: In recent decades, neural network research has expanded tremendously across areas ranging from models of simple synapses to sophisticated weather forecasting systems. Vision object recognition, owing to its vast range of applications (such as personal identification and scene analysis), has been one of the most popular topics in neural network research, but it also poses one of the greatest challenges for various reasons. Vision involves processing massive amounts of data and suffers from the geometric invariance problem, as well as problems of noise, occlusion and object variability. Moreover, vision object recognition needs to be fast, especially for robot vision and automatic surveillance. In addition, hard-wiring specific feature extractors into a neural network system, and the tremendous amount of time required for learning, create further difficulties for the neural network approach. Numerous models have been proposed to provide an efficient and effective solution. In this thesis, an Elastic Graph Dynamic Link Model (EGDLM) is proposed to cope with the inherent variability of visual patterns. The model combines "Elastic Graph Matching" with a "Neural Network Architecture" for invariant vision object recognition. From the application point of view, the model has been implemented on various complex vision object recognition problems, such as handwritten Chinese character recognition and human face recognition under various invariant conditions including dilation, contraction, rotation, occlusion, distortion and scene analysis. In addition, the author has applied the model to automatic satellite image interpretation for the identification of weather patterns such as tropical cyclones.
Due to the differing nature of these vision object recognition problems, various image pre-processing techniques have been adopted, including the "Active Contour Model" and the "Composite Neural Oscillatory Model" for vision object segmentation. The results are encouraging. From the academic point of view, the author has explored and simulated vision object encoding, storage and recognition in a brain model using an "Elastic Graph" and "Dynamic Memory Links" neural network model. The integration of these approaches is a new scientific contribution. The success of the project not only provides a feasible and effective vision object recognition model, but also opens a new avenue in neuroscience for memory encoding, recall and management schemes. Moreover, the author aims to provide a "generalized" vision object recognition model to tackle a variety of problems ranging from handwritten character recognition and scene analysis to human face recognition and satellite picture interpretation. The proposed methodologies solve these problems significantly better than classical recognition models alone. The experimental results have confirmed this potential and should stimulate further interest in these topics in the years to come.
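The core idea of Elastic Graph Matching mentioned in the abstract can be illustrated with a minimal sketch: a model graph of feature-labelled nodes is placed onto an image, and a cost combining feature dissimilarity with elastic edge deformation is minimized by local node moves. The toy image function, graph geometry, weighting, and greedy search below are illustrative assumptions for exposition only, not the thesis's actual EGDLM implementation.

```python
import math
import random

# Illustrative "image": a smooth scalar feature at each point (an assumption;
# real systems use richer local features such as Gabor jets).
def image_feature(x, y):
    return math.sin(0.3 * x) + math.cos(0.2 * y)

# Model graph: node positions, edges, and the features stored at each node.
model_nodes = [(2, 2), (6, 2), (4, 6)]
edges = [(0, 1), (1, 2), (0, 2)]
model_feats = [image_feature(x, y) for x, y in model_nodes]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(placement):
    # Feature term: squared difference between stored and observed features.
    feat = sum((image_feature(*p) - f) ** 2
               for p, f in zip(placement, model_feats))
    # Elastic term: penalize edge-length distortion relative to the model,
    # so the graph may deform, but only at a cost.
    elastic = sum((dist(placement[i], placement[j]) -
                   dist(model_nodes[i], model_nodes[j])) ** 2
                  for i, j in edges)
    return feat + 0.5 * elastic  # 0.5 is an arbitrary elasticity weight

def match(start, steps=200, seed=0):
    # Greedy stochastic local search: perturb one node at a time and keep
    # any move that lowers the total cost.
    rng = random.Random(seed)
    best, best_c = list(start), cost(start)
    for _ in range(steps):
        i = rng.randrange(len(best))
        dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        trial = list(best)
        trial[i] = (trial[i][0] + dx, trial[i][1] + dy)
        c = cost(trial)
        if c < best_c:
            best, best_c = trial, c
    return best, best_c

# Start from a translated copy of the model graph and relax it onto the image.
start = [(x + 3, y + 1) for x, y in model_nodes]
placement, final_cost = match(start)
```

The search can only improve on the starting placement, so the final cost never exceeds the initial one; the thesis's model replaces this toy search with dynamic-link neural dynamics.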
Description: xiv, 225, [60] leaves : ill. ; 30 cm.
PolyU Library Call No.: [THS] LG51 .H577P COMP 2000 Lee
URI: http://hdl.handle.net/10397/3035
Rights: All rights reserved.
Appears in Collections:Thesis

Files in This Item:
File | Description | Size | Format
b15403117_link.htm | For PolyU Users | 162 B | HTML
b15403117_ir.pdf | For All Users (Non-printable) | 10.16 MB | Adobe PDF


