Abstract
Kernel smoothing is a common method of estimating the mean function in the nonparametric regression model y = f(x) + ε, where f(x) is a smooth deterministic mean function and ε is an error process with mean zero. In this article, the mean squared error of kernel estimators is computed for processes with correlated errors, and the estimators are shown to be consistent when the sequence of error processes converges to a mixing sequence. Standard techniques for bandwidth selection, such as cross-validation and generalized cross-validation, are shown to perform very badly when the errors are correlated: they favor undersmoothing when the correlations are predominantly positive and oversmoothing when they are predominantly negative. The selection criteria can, however, be adjusted to correct for the effect of correlation. In simulations, the standard selection criteria behave as predicted, and the corrected criteria prove very effective when the correlation function is known. Estimates of the correlation based on the data are shown, by simulation, to be sufficiently good for correcting the selection criteria, particularly when the signal-to-noise ratio is small.
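The undersmoothing effect described above can be illustrated with a small simulation. The sketch below is not from the article: it assumes a Gaussian-kernel Nadaraya-Watson estimator, AR(1) errors with positive correlation rho = 0.7, and an arbitrary bandwidth grid, all chosen purely for illustration. Ordinary leave-one-out cross-validation then tends to select a very small bandwidth, because nearby, positively correlated errors make each left-out point look predictable from its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
f = np.sin(2 * np.pi * x)  # illustrative smooth mean function

# AR(1) errors with positive correlation (rho and sigma are assumptions)
rho, sigma = 0.7, 0.3
eps = np.zeros(n)
eps[0] = rng.normal(scale=sigma)
for i in range(1, n):
    eps[i] = rho * eps[i - 1] + rng.normal(scale=sigma * np.sqrt(1 - rho**2))
y = f + eps

def loo_cv(x, y, h):
    """Leave-one-out CV score for a Nadaraya-Watson estimator, bandwidth h."""
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2)          # Gaussian kernel weights
    np.fill_diagonal(K, 0.0)         # drop point i when predicting y_i
    yhat = K @ y / K.sum(axis=1)     # leave-one-out fitted values
    return np.mean((y - yhat) ** 2)

bandwidths = np.array([0.01, 0.02, 0.05, 0.1, 0.2])
scores = [loo_cv(x, y, h) for h in bandwidths]
h_cv = bandwidths[int(np.argmin(scores))]
print(f"CV-selected bandwidth: {h_cv}")
```

With rho set to 0 (independent errors), the same criterion typically selects a larger bandwidth, consistent with the behavior the abstract describes for correlated errors.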