|Title:||Segmentation and recognition of connected handwritten digits|
|Keywords:||Optical character recognition devices.|
Writing -- Data processing.
Optical pattern recognition.
Pattern recognition systems.
Hong Kong Polytechnic University -- Dissertations
|Publisher:||The Hong Kong Polytechnic University|
|Abstract:||Recognition of connected handwritten characters is a challenging task, due mainly to two problems: poor character segmentation and unreliable isolated-character recognition. Overlapping, touching, and ligatures between neighboring characters make character segmentation, and hence recognition, very difficult. This thesis presents our research results on connected handwritten character recognition using a segmentation-based approach. The three main problems to be solved are estimating the number of characters in a word, character segmentation, and reliable isolated-character recognition. We discuss in this thesis a neural-network-based length estimator, a background-thinning-based segmentation algorithm, a new template representation and optimization technique for building a template-based classifier, and dynamic programming techniques used in segmentation-based and segmentation-free approaches for recognizing connected characters. We used digit strings as examples to evaluate our new algorithms. Length estimation is very helpful for the successful segmentation and recognition of connected handwritten digits. The kernel of our algorithm is a neural network estimator that takes a set of statistical and structural features as input. To extract the features, several preprocessing steps, including noise reduction, normalization, and skeletonization, have to be carried out. The output of the neural network is a set of fuzzy membership grades reflecting the degrees to which an input digit string contains different numbers of digits. In our experiments, we consider strings of up to four digits (strings of five or more digits are rare in real applications). NIST Special Database 3 was used for training and testing the neural network length estimator. The database includes 20,852 isolated digit samples, 4,555 connected 2-digit samples, 355 connected 3-digit samples, and 48 connected 4-digit samples.
Because the database contains few 3-digit and 4-digit strings, we artificially generated additional samples by merging existing digits and digit strings. Experimental results on NIST Special Database 3 and the derived digit strings show that only 55 (0.6%) of the 9,910 digit strings are poorly estimated.|
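The fuzzy length-membership output described above can be illustrated with a minimal sketch. A softmax over linear scores stands in for the thesis's trained neural network, which is not reproduced here; the feature values, weights, and biases below are purely illustrative assumptions.

```python
import math

def length_membership(features, weights, biases):
    """Return fuzzy membership grades for string lengths 1..4.

    A softmax over linear scores stands in for the trained neural
    network estimator; `weights` (one row per length hypothesis)
    and `biases` are illustrative, not trained values.
    """
    scores = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    peak = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]         # grades sum to 1

# Hypothetical 3-feature input (e.g. aspect ratio, fork-point count,
# terminal-point count measured after skeletonization).
grades = length_membership(
    features=[1.8, 3.0, 4.0],
    weights=[[-2.0, 0.5, 0.5],   # score row for length 1
             [1.0, 0.8, 0.6],    # length 2
             [0.5, 1.0, 0.9],    # length 3
             [0.1, 1.2, 1.1]],   # length 4
    biases=[0.0, 0.0, -1.0, -2.5],
)
```

The grades form a fuzzy membership vector rather than a hard decision, so downstream segmentation can keep several length hypotheses alive.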
Correct segmentation is vital for character recognition using segmentation-based approaches. In our approach, segmentation is based on an analysis of the background skeleton (the skeleton of the regions excluding the characters) of a digit string image. In exploring connected digit samples, we found that the shape of the background regions is much simpler than that of the foreground. Making use of the background shape information simplifies the search for segmentation paths and accordingly reduces the number of segmentation candidates. In this algorithm, we extract segmentation paths by matching feature points on the background skeleton. Feature points include fork points, terminal points, and curve points. Fork points and terminal points are defined as in the length estimation algorithm; a curve point is a point on a segment where the direction of the line changes sharply. A three-step matching scheme is developed to find matched point pairs, and a segmentation path is constructed by connecting these feature points, with possible extensions to the top and/or bottom of the skeleton. We applied our segmentation algorithm to connected 2-digit strings. The degree to which a candidate is a good segmentation path is determined by fuzzy decision rules over the nine properties associated with a segmentation path. The separated digits are recognized by a nearest-neighbor classifier. Tested on NIST Special Database 3, our background-thinning-based approach to segmenting and recognizing handwritten digit strings shows better performance than some existing techniques. Moreover, our approach can deal with both single- and multi-touching problems.

We present a multi-module classifier to recognize isolated digits. Among its four modules, a template-based classifier built on a rational B-spline surface representation of the Pixel-to-Boundary Distance Map (PBDM) is adopted to improve the performance of the classifier, in particular in rejecting non-digit patterns.
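The feature-point extraction on a skeleton can be sketched by classifying each skeleton pixel by its number of 8-connected skeleton neighbours: one neighbour marks a terminal point, three or more a fork point. This is a minimal illustration on a binary grid; curve points, which require local direction estimates, are omitted, and the grid representation is an assumption of this sketch, not the thesis's data structure.

```python
def feature_points(skeleton):
    """Classify skeleton pixels by their number of 8-connected
    skeleton neighbours: 1 neighbour -> terminal point,
    3 or more -> fork point."""
    rows, cols = len(skeleton), len(skeleton[0])
    points = {"terminal": [], "fork": []}
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r][c]:
                continue
            # Count set pixels in the 3x3 window, excluding the centre.
            n = sum(skeleton[rr][cc]
                    for rr in range(max(r - 1, 0), min(r + 2, rows))
                    for cc in range(max(c - 1, 0), min(c + 2, cols))) - 1
            if n == 1:
                points["terminal"].append((r, c))
            elif n >= 3:
                points["fork"].append((r, c))
    return points

# A small T-shaped skeleton: three terminals and a fork at the junction.
skel = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
]
pts = feature_points(skel)
```

Matching such feature points across opposite sides of the background skeleton is what yields candidate segmentation paths in the approach described above.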
To extract optimized templates, we use a two-stage algorithm based on a neural network and an evolutionary algorithm. The resulting classifier can reliably distinguish non-digit patterns from digits, which is a desirable property for recognizing handwritten digit strings, and it can be applied together with either a segmentation-based or a segmentation-free recognition algorithm. Experimental results show that the designed multi-module classifier compares favorably with the other classification techniques tested. In this thesis, we also discuss segmentation-based and segmentation-free approaches for recognizing connected characters. Based on the designed multi-module classifier and the background-thinning-based segmentation algorithm, a segmentation-based recognition approach is presented, in which a dynamic programming algorithm is applied. Experimental results show that this approach achieves favorable classification performance. To deal with hard-to-segment handwritten digit strings, a segmentation-free recognition approach using a dynamic programming algorithm is also discussed.
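The role of dynamic programming in the segmentation-based approach can be sketched as follows: given candidate cut positions 0..n and a classifier confidence for each candidate segment between two cuts, the best interpretation maximizes the product of segment confidences over all partitions. The score table and its values here are hypothetical, not the thesis's actual scoring.

```python
def best_path(scores, n):
    """Dynamic-programming search over cut positions.

    scores[(i, j)] is a hypothetical classifier confidence in (0, 1]
    for the image slice between cuts i and j; missing pairs are
    infeasible segments. Returns the best product of confidences for
    the full string and the chosen cut sequence.
    """
    best = [0.0] * (n + 1)   # best[j]: best product for the prefix ending at cut j
    back = [0] * (n + 1)     # back[j]: previous cut on the best path to j
    best[0] = 1.0
    for j in range(1, n + 1):
        for i in range(j):
            if (i, j) in scores:
                cand = best[i] * scores[(i, j)]
                if cand > best[j]:
                    best[j] = cand
                    back[j] = i
    # Recover the cut sequence by walking the backpointers.
    cuts, j = [], n
    while j > 0:
        cuts.append(j)
        j = back[j]
    cuts.append(0)
    return best[n], cuts[::-1]

# Two cuts: splitting 0-1 / 1-2 (0.9 * 0.8) beats reading 0-2 whole (0.6).
score, path = best_path({(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.6}, 2)
```

The same recurrence serves the segmentation-free variant when the "cuts" are dense column positions rather than pre-extracted segmentation paths.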
|Description:||xviii, 108 leaves : ill. ; 30 cm.|
PolyU Library Call No.: [THS] LG51 .H577M EIE 1999 Lu
|Rights:||All rights reserved.|
|Appears in Collections:||Thesis|
Files in This Item:
|b14939502_link.htm||For PolyU Users||162 B||HTML|
|b14939502_ir.pdf||For All Users (Non-printable)||3.8 MB||Adobe PDF|
Checked on Jul 24, 2016