Honest variable selection in linear and logistic regression models via ℓ1 and ℓ1+ℓ2 penalization
Open Access
- 1 January 2008
- journal article
- Published by Institute of Mathematical Statistics in Electronic Journal of Statistics
- Vol. 2
- https://doi.org/10.1214/08-ejs287
Abstract
This paper investigates correct variable selection in finite samples via $\ell_1$ and $\ell_1+\ell_2$ type penalization schemes. The asymptotic consistency of variable selection follows immediately from this analysis. We focus on logistic and linear regression models. The following questions are central to our paper: given a confidence level $1-\delta$, under which assumptions on the design matrix, for which signal strengths, and for what values of the tuning parameters can we identify the true model at that confidence level? Formally, if $\widehat{I}$ is an estimate of the true variable set $I^*$, we study conditions under which $\mathbb{P}(\widehat{I}=I^*)\geq 1-\delta$, for a given sample size $n$, number of parameters $M$ and confidence $1-\delta$. We show that in identifiable models, both methods can recover coefficients of size $\frac{1}{\sqrt{n}}$, up to small multiplicative constants and logarithmic factors in $M$ and $\frac{1}{\delta}$. For the models considered here, the advantage of the $\ell_1+\ell_2$ penalization over the $\ell_1$ is minor for the variable selection problem. Although the $\ell_1+\ell_2$ estimates are unique and become more stable for highly correlated data matrices as the tuning parameter of the $\ell_2$ part increases, too large an increase in this parameter may preclude variable selection.
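As a concrete illustration of the support-recovery question $\mathbb{P}(\widehat{I}=I^*)\geq 1-\delta$, the sketch below fits both penalization schemes by plain coordinate descent on a synthetic linear model and compares the estimated support $\widehat{I}$ with $I^*$. This is a minimal sketch, not the paper's procedure: the tuning values `lam1` and `lam2`, the signal strength, and the zero threshold are illustrative choices.

```python
import numpy as np

def elastic_net_cd(X, y, lam1, lam2, n_sweeps=200):
    """Coordinate descent for (1/(2n))||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||^2.

    lam2 = 0 gives the pure l1 (Lasso) penalty; lam2 > 0 gives the l1+l2 scheme.
    """
    n, M = X.shape
    beta = np.zeros(M)
    col_sq = (X ** 2).sum(axis=0) / n     # X_j^T X_j / n for each column
    r = y.copy()                          # residual y - X @ beta (beta = 0 initially)
    for _ in range(n_sweeps):
        for j in range(M):
            # partial correlation of column j with the residual, adding back
            # this coordinate's own contribution
            rho = X[:, j] @ r / n + col_sq[j] * beta[j]
            # soft-threshold at lam1, then shrink by the l2 term lam2
            new = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
            r += X[:, j] * (beta[j] - new)  # keep the residual in sync
            beta[j] = new
    return beta

rng = np.random.default_rng(0)
n, M = 400, 50                   # sample size n, number of parameters M
I_star = {0, 1, 2}               # true variable set I*
beta_true = np.zeros(M)
beta_true[list(I_star)] = 1.0    # signal well above the 1/sqrt(n) scale
X = rng.standard_normal((n, M))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

b_l1 = elastic_net_cd(X, y, lam1=0.1, lam2=0.0)    # l1 penalty only
b_l1l2 = elastic_net_cd(X, y, lam1=0.1, lam2=0.1)  # l1 + l2 penalty

I_l1 = set(np.flatnonzero(np.abs(b_l1) > 1e-8))
I_l1l2 = set(np.flatnonzero(np.abs(b_l1l2) > 1e-8))
print(I_l1 == I_star, I_l1l2 == I_star)
```

With this well-conditioned Gaussian design both schemes recover $I^*$ exactly; the abstract's caveat shows up if `lam2` is pushed much higher, as the extra shrinkage can drive true coefficients toward the threshold.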