An Explicit Nonlinear Mapping for Manifold Learning

Abstract
Manifold learning is an active research topic in computer science with many real-world applications. A major drawback of manifold learning methods, however, is that they provide no explicit mapping from the input data manifold to the output embedding, which prevents their use in many practical problems such as classification and target detection. To supply such mappings, previous work has constructed approximate explicit representation mappings under the assumption that a linear projection exists between the high-dimensional data samples and their low-dimensional embedding. This linearity assumption, however, may be too restrictive. In this paper, an explicit nonlinear mapping is proposed for manifold learning, based on the assumption that there exists a polynomial mapping between the high-dimensional data samples and their low-dimensional representations. To the best of our knowledge, this is the first explicit nonlinear mapping given for manifold learning. In particular, we apply the mapping to locally linear embedding and derive an explicit nonlinear manifold learning algorithm, named neighborhood preserving polynomial embedding (NPPE). Experimental results on both synthetic and real-world data show that the proposed mapping preserves the local neighborhood information and the nonlinear geometry of the high-dimensional data samples much more effectively than previous work.
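The following minimal sketch illustrates the general idea of an explicit polynomial mapping for manifold learning; it is not the paper's NPPE derivation. It computes a locally linear embedding of training samples and then fits a polynomial map (degree 2 chosen arbitrarily here) from the high-dimensional inputs to the embedding by least squares, so that new samples can be embedded without rerunning the manifold learner. The dataset, library calls, and degree are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Sample points from a Swiss-roll manifold embedded in R^3.
X, _ = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)
X_train, X_test = X[:1200], X[1200:]

# Step 1: compute a 2-D LLE embedding of the training samples.
# This gives coordinates Y_train but no explicit mapping x -> y.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y_train = lle.fit_transform(X_train)

# Step 2: assume an explicit polynomial mapping y = P(x) and fit its
# coefficients by least squares (illustrative stand-in for NPPE's objective).
poly_map = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_map.fit(X_train, Y_train)

# The explicit mapping can now embed unseen points directly.
Y_test = poly_map.predict(X_test)
print(Y_test.shape)  # (300, 2)

The design point this sketch highlights is the out-of-sample benefit of an explicit mapping: once the polynomial coefficients are fixed, embedding a new sample is a single function evaluation rather than a re-run of the embedding algorithm.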
