Abstract
This paper concerns the effects of random error in numerical measurements of risk factors (covariates) in relative risk regressions. When not dependent on outcome (nondifferential), such error usually attenuates relative risk estimates (shifts them toward one) and leads to spuriously narrow confidence intervals. The presence of measurement error also reduces the precision of estimates and the power of significance tests. However, significance levels obtained by using the approximate measurements are usually valid and as powerful as possible given the measurement error. The attenuation of the risk estimate depends not only on the size (variance) of the measurement error, but also on its distributional form, on whether it is dependent on the true level of the risk factor (whether it is of "Berkson" type), on the variance and distributional form of true levels of the risk factor, on the functional form of the regression (exponential or linear), and on the confounding variables included in the model. Error in measuring confounding variables leads to loss of control of confounding, leaving residual bias. Uncomplicated techniques for correcting the effects of measurement error in simple models in which distributions are assumed normal are available in the statistical literature. These corrections require information on the measurement error variance. Some approaches appropriate for more general models have been proposed, but these appear to be insufficiently developed for routine application.
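As an illustrative sketch (not taken from the paper), the attenuation described above, and its correction when the measurement error variance is known, can be demonstrated by simulation in the simplest normal linear case, where nondifferential classical error shrinks the estimated coefficient by the reliability ratio lambda = var(X) / (var(X) + var(error)):

```python
import numpy as np

# Illustrative simulation: classical nondifferential measurement error
# in a linear regression. All parameter values here are arbitrary choices
# for demonstration, not values from the paper.
rng = np.random.default_rng(0)
n = 200_000
beta = 1.0                      # true regression coefficient
sigma_x = 1.0                   # SD of the true covariate
sigma_e = 1.0                   # SD of the measurement error

x = rng.normal(0.0, sigma_x, n)           # true (unobserved) covariate
w = x + rng.normal(0.0, sigma_e, n)       # error-prone measurement of x
y = beta * x + rng.normal(0.0, 1.0, n)    # outcome depends on the TRUE x

# Naive OLS slope of y on the mismeasured w is attenuated toward zero
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Expected attenuation factor (reliability ratio)
lam = sigma_x**2 / (sigma_x**2 + sigma_e**2)

# Simple method-of-moments correction, which requires knowing the
# measurement error variance, as the abstract notes
beta_corrected = beta_naive / lam

print(f"naive: {beta_naive:.3f}  expected: {lam * beta:.3f}  "
      f"corrected: {beta_corrected:.3f}")
```

With these parameters lambda is 0.5, so the naive estimate sits near half the true coefficient, and dividing by the known reliability ratio recovers it; the correction is only this simple under the normality and classical-error assumptions the abstract mentions.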