Efficiency loss and the linearity condition in dimension reduction
- 24 February 2013
- Research article
- Published by Oxford University Press (OUP) in Biometrika
- Vol. 100 (2), pp. 371–383
- https://doi.org/10.1093/biomet/ass075
Abstract
Linearity, sometimes jointly with constant variance, is routinely assumed in the context of sufficient dimension reduction. It is well understood that, when these conditions do not hold, blindly using them may lead to inconsistent estimation of the central subspace and the central mean subspace. Surprisingly, we discover that even when these conditions do hold, using them incurs an efficiency loss. This paradoxical phenomenon is illustrated through sliced inverse regression and principal Hessian directions, and the efficiency loss also applies to other dimension reduction procedures. We explain this empirical discovery through a theoretical investigation.
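For readers unfamiliar with the estimators named in the abstract, the following is a minimal sketch of sliced inverse regression (SIR), one of the two procedures used to illustrate the phenomenon. This is a generic textbook-style implementation, not code from the paper; the function name, slicing scheme, and parameters are illustrative choices.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Estimate central-subspace directions via sliced inverse regression.

    A generic sketch: standardize the predictors, slice the response,
    average the standardized predictors within each slice, and take the
    leading eigenvectors of the covariance of those slice means.
    """
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice on the ordered response and average Z within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of the slice-mean covariance, mapped back
    # to the original predictor scale and normalized column-wise
    _, v = np.linalg.eigh(M)
    B = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)
```

Under a single-index model with elliptically distributed predictors (so the linearity condition holds), the leading estimated direction aligns with the true index direction; the paper's point is that exploiting the linearity condition in this way can still be less efficient than not exploiting it.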
This publication has 15 references indexed in Scilit:
- A Semiparametric Approach to Dimension Reduction. Journal of the American Statistical Association, 2012
- Dimension reduction for non-elliptically distributed predictors: second-order methods. Biometrika, 2010
- Dimension reduction for nonelliptically distributed predictors. The Annals of Statistics, 2009
- On Directional Regression for Dimension Reduction. Journal of the American Statistical Association, 2007
- A paradox concerning nuisance parameters and projected estimating functions. Biometrika, 2004
- Dimension reduction for conditional mean in regression. The Annals of Statistics, 2002
- On almost Linearity of Low Dimensional Projections from High Dimensional Data. The Annals of Statistics, 1993
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma. Journal of the American Statistical Association, 1992
- Sliced Inverse Regression for Dimension Reduction. Journal of the American Statistical Association, 1991
- Regression Analysis Under Link Violation. The Annals of Statistics, 1989