Abstract
Predictions obtained from a multivariate calibration model are sensitive to variations in the spectra such as baseline shifts and multiplicative effects. Many spectral pretreatment methods have been developed to reduce these distortions, and the best method is usually taken to be the one that minimizes the prediction error for an independent test set. This paper shows how multivariate sensitivity can be used to interpret spectral pretreatment results. Understanding why a particular pretreatment method gives good or bad results is important for ruling out chance effects in the conventional trial-and-error selection process, and thus for gaining confidence in the finally selected model. The principles are exemplified by the transmission near-infrared spectroscopic prediction of oxygenates in ampules of standard reference material gasoline. The pretreatment methods compared are multiplicative signal correction and the first- and second-derivative methods. It is shown that for this application the first- and second-derivative methods successfully remove the background. However, differentiating the spectra substantially reduces the multivariate net analyte signal (in the worst case by a factor of 21). Consequently, a significantly smaller multivariate sensitivity is obtained, which leads to increased spectral error propagation and hence to a larger uncertainty in the regression vector estimate and larger prediction errors. Differentiating the spectra also increases the spectral noise (each time by a factor of 2^(1/2)), but this well-known effect is of minor importance for the current application.
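The factor of 2^(1/2) noise increase for a single differentiation follows directly from error propagation: if the spectral noise in each channel is independent with standard deviation sigma, the variance of a first difference is 2*sigma^2. A minimal simulation sketch (not from the paper; noise level and sample size are arbitrary assumptions) illustrates this:

```python
import numpy as np

# Illustrative sketch: if the noise e in each spectral channel is i.i.d.
# with standard deviation sigma, then the first-difference "derivative"
# has Var(e[i+1] - e[i]) = 2 * sigma**2, so its standard deviation is
# larger by a factor of 2**0.5.
rng = np.random.default_rng(1)
sigma = 0.01                            # assumed noise level (arbitrary)
noise = rng.normal(0.0, sigma, 10**6)   # simulated i.i.d. spectral noise

d1 = np.diff(noise)                     # first finite difference

ratio = d1.std() / noise.std()
print(round(ratio, 2))                  # close to 2**0.5 ~ 1.41
```

Note that this simple argument applies to a single differencing step applied to independent noise; the paper's point is that this noise amplification, while real, matters less here than the loss of net analyte signal.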