Categorization of faces using unsupervised feature extraction

Abstract
The proposal of G. Cottrell et al. (1987) that their image compression network might be used to extract image features automatically for pattern recognition is tested by training a neural network to compress 64 face images, spanning 11 subjects, and 13 nonface images. Features extracted in this manner (the outputs of the hidden units) are given as input to a one-layer network trained to distinguish faces from nonfaces and to attach a name and sex to the face images. The network successfully recognizes new images of familiar faces, categorizes novel images as to their 'faceness' and, to a great extent, gender, and exhibits continued accuracy over a considerable range of partial or shifted input.
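The two-stage scheme described above can be illustrated with a minimal sketch: an autoencoder (the "image compression network") is trained to reproduce its input, and its hidden-unit activations then serve as the feature vector for a single-layer recognition network. This is not the authors' implementation; the image size, number of hidden units, output coding, and learning rates below are illustrative assumptions only.

```python
# Sketch of unsupervised feature extraction via an autoencoder, followed by a
# one-layer classifier on the hidden-unit features. All dimensions and
# hyperparameters are assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- Stage 1: compression network (autoencoder) -----------------------------
n_pixels, n_hidden = 64 * 64, 40            # assumed image and hidden sizes
W_enc = rng.normal(0.0, 0.01, (n_pixels, n_hidden))
W_dec = rng.normal(0.0, 0.01, (n_hidden, n_pixels))

def train_autoencoder(images, epochs=50, lr=0.1):
    """Train the network to reproduce each flattened image at its output."""
    global W_enc, W_dec
    for _ in range(epochs):
        for x in images:                     # x: flattened image scaled to [0, 1]
            h = sigmoid(x @ W_enc)           # hidden-unit activations (the features)
            y = sigmoid(h @ W_dec)           # reconstruction of the input
            delta_out = (y - x) * y * (1 - y)            # squared-error backprop
            delta_hid = (W_dec @ delta_out) * h * (1 - h)
            W_dec -= lr * np.outer(h, delta_out)
            W_enc -= lr * np.outer(x, delta_hid)

def extract_features(x):
    """Hidden-unit outputs used as the feature vector for recognition."""
    return sigmoid(x @ W_enc)

# --- Stage 2: one-layer recognition network on the extracted features -------
n_outputs = 1 + 1 + 11                       # assumed coding: face/nonface, sex, one unit per name
W_cls = rng.normal(0.0, 0.01, (n_hidden, n_outputs))

def train_classifier(images, targets, epochs=100, lr=0.1):
    """Delta-rule training of the single-layer network on frozen features."""
    global W_cls
    for _ in range(epochs):
        for x, t in zip(images, targets):
            h = extract_features(x)
            o = sigmoid(h @ W_cls)
            W_cls -= lr * np.outer(h, (o - t) * o * (1 - o))

def classify(x):
    """Return the recognition network's outputs for a flattened image."""
    return sigmoid(extract_features(x) @ W_cls)
```

Because the classifier sees only the hidden-unit code rather than raw pixels, the same frozen features can support several recognition tasks (faceness, identity, sex) with nothing more than a single trainable layer, which is the point the abstract makes.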