Logdet Divergence Based Sparse Non-Negative Matrix Factorization for Stable Representation

Abstract
Non-negative matrix factorization (NMF) decomposes a non-negative matrix into the product of two low-dimensional non-negative matrices. Since NMF learns an effective parts-based representation, it has been widely applied in computer vision and data mining. However, traditional NMF runs the risk of learning a rank-deficient basis on high-dimensional datasets with few examples, especially when some examples are heavily corrupted by outliers. In this paper, we propose a Logdet divergence based sparse NMF method (LDS-NMF) to deal with the rank-deficiency problem. In particular, LDS-NMF reduces the risk of rank deficiency by minimizing the Logdet divergence between the product of the basis matrix with its transpose and the identity matrix, while penalizing the density of the coefficients. Since the objective function of LDS-NMF is nonconvex, it is difficult to optimize. We develop a multiplicative update rule to optimize LDS-NMF in the framework of block coordinate descent and theoretically prove its convergence. Experimental results on popular datasets show that LDS-NMF learns more stable representations than representative NMF methods.
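To make the penalty concrete, the following is a minimal sketch of an objective of the kind the abstract describes; the notation (data matrix X, basis W, coefficients H, rank r, weights alpha and beta) and the exact form of the regularizers are assumptions for illustration, not taken from the paper:

\[
\min_{W \ge 0,\; H \ge 0} \; \|X - WH\|_F^2 \;+\; \alpha\, D_{\mathrm{ld}}\!\left(W^{\top}W,\, I_r\right) \;+\; \beta \|H\|_1,
\qquad
D_{\mathrm{ld}}(A, B) = \operatorname{tr}\!\left(A B^{-1}\right) - \log\det\!\left(A B^{-1}\right) - r .
\]

Since D_ld(W^T W, I_r) = tr(W^T W) - log det(W^T W) - r is minimized exactly when W^T W = I_r, driving this term down pushes the basis toward orthonormal, hence full-rank, columns, while the l1 term discourages dense coefficients.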
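The sketch below gives a minimal NumPy implementation of multiplicative updates for an objective of the assumed form above. It is an illustrative reconstruction, not the update rule derived in the paper; in particular, the positive/negative split of the Logdet gradient is a common heuristic for keeping multiplicative updates non-negative, and alpha, beta, and the function name lds_nmf are placeholders.

import numpy as np

def lds_nmf(X, r, alpha=0.1, beta=0.1, n_iter=200, eps=1e-10, seed=0):
    """Factorize non-negative X (m x n) as W (m x r) @ H (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # H step: Lee-Seung rule for ||X - WH||_F^2 plus an l1 penalty
        # on H, which (up to a constant rescaling of beta) adds beta to
        # the denominator.
        H *= (W.T @ X) / (W.T @ W @ H + beta + eps)
        # W step: the gradient of alpha * D_ld(W^T W, I) w.r.t. W is
        # 2 * alpha * (W - W @ inv(W^T W)). Splitting inv(W^T W) into
        # its positive and negative parts keeps both the numerator and
        # denominator of the multiplicative ratio non-negative.
        G = np.linalg.inv(W.T @ W + eps * np.eye(r))
        Gp, Gn = np.maximum(G, 0.0), np.maximum(-G, 0.0)
        num = X @ H.T + alpha * (W @ Gp)
        den = W @ (H @ H.T) + alpha * W + alpha * (W @ Gn) + eps
        W *= num / den
    return W, H

On a toy input such as X = np.abs(np.random.default_rng(1).standard_normal((100, 20))) with W, H = lds_nmf(X, r=5), one would expect the conditioning of W.T @ W to improve as alpha grows, reflecting the rank-deficiency penalty.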