On $\ell_q$ Optimization and Sparse Inverse Covariance Selection

Abstract
Graphical models are well established for providing meaningful conditional probability descriptions of complex multivariable interactions. In the Gaussian case, conditional independencies between variables correspond to zero entries in the precision (inverse covariance) matrix. Hence, there has been much recent interest in sparse precision matrix estimation in areas such as statistics, machine learning, computer vision, pattern recognition, and signal processing. A popular estimation method optimizes a penalized log-likelihood problem, where the penalty is responsible for inducing sparsity; a common choice is the convex $\ell_1$ norm. Even though the $\ell_0$ penalty is the natural choice guaranteeing maximum sparsity, it has been avoided due to its lack of convexity. In this paper we bridge the gap between these two penalties and propose the non-concave $\ell_q$-penalized log-likelihood problem for sparse precision matrix estimation, where $0 \le q < 1$. A novel algorithm is developed for the optimization, and we provide some of its theoretical properties that are also useful in sparse linear regression. We illustrate on synthetic and real data, showing reconstruction-quality comparisons of sparsity-inducing penalties: $\ell_0$, $\ell_q$ with $0 < q < 1$, $\ell_1$, and SCAD.
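For concreteness, the generic form of such an $\ell_q$-penalized Gaussian log-likelihood problem can be sketched as follows; here $S$ denotes the sample covariance, $\Theta \succ 0$ the (positive definite) precision matrix, and $\lambda > 0$ a regularization parameter. This is the standard sparse inverse covariance selection objective with the $\ell_1$ penalty replaced by an elementwise $\ell_q$ penalty; the paper's exact formulation may differ in details such as whether diagonal entries are penalized:

$$\hat{\Theta} = \arg\min_{\Theta \succ 0} \Big\{ -\log\det\Theta + \operatorname{tr}(S\Theta) + \lambda \sum_{i \neq j} |\Theta_{ij}|^{q} \Big\}, \qquad 0 \le q < 1,$$

where for $q = 0$ the term $|\Theta_{ij}|^{0}$ is taken, by the usual convention, to be $1$ if $\Theta_{ij} \neq 0$ and $0$ otherwise, recovering the $\ell_0$ (cardinality) penalty, while $q = 1$ would recover the graphical lasso objective.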
