A Bayesian framework for deformable pattern recognition with application to handwritten character recognition

Abstract
Deformable models have recently been proposed for many pattern recognition applications because of their ability to handle large shape variations. These approaches represent patterns or shapes as deformable models, which deform themselves to match the input image; the extracted information is then fed into a classifier. The three components, namely modeling, matching, and classification, are often treated as independent tasks. In this paper, we study how to integrate deformable models into a Bayesian framework as a unified approach for modeling, matching, and classifying shapes. Handwritten character recognition serves as a testbed for evaluating the approach. With our system, recognition is invariant to affine transformations as well as other handwriting variations. Moreover, no preprocessing or manual setting of hyperparameters (e.g., the regularization parameter and character width) is required. Issues concerning the incorporation of constraints on model flexibility, the detection of subparts, and speed-up are also investigated. Using a model set with only 23 prototypes and no discriminative training, we achieve an accuracy of 94.7 percent with no rejection on a subset (11,791 images by 100 writers) of handwritten digits from the NIST SD-1 dataset.
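As a rough sketch of the kind of unified Bayesian formulation the abstract describes (the notation below, including the deformation parameters $\theta$, is an illustrative assumption rather than the paper's own), classification can be viewed as selecting the character model with the highest posterior probability, obtained by marginalizing over the deformations of each prototype:

```latex
% Illustrative Bayesian decomposition (assumed notation, not necessarily the paper's):
%   C_k    -- the k-th character prototype (model class)
%   I      -- the input image
%   \theta -- deformation parameters of the deformable model
\hat{C} \;=\; \arg\max_{C_k} \, P(C_k \mid I)
        \;=\; \arg\max_{C_k} \, P(C_k) \int P(I \mid \theta, C_k)\, P(\theta \mid C_k)\, d\theta
```

Here the prior $P(\theta \mid C_k)$ plays the role of a regularizer on deformations, which is consistent with the claim that no separate regularization parameter needs to be set by hand.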
