Discrete analysis of spatial-sensitivity models

Abstract
The visual representation of spatial patterns begins with a series of linear transformations: the stimulus is blurred by the optics, spatially sampled by the photoreceptor array, spatially pooled by the ganglion-cell receptive fields, and so forth. Models of human spatial-pattern vision commonly summarize these initial transformations by a single linear transformation that maps the stimulus into an array of sensor responses. Some components of the initial linear transformations (e.g., lens blurring, photoreceptor sampling) have been estimated empirically; others have not. A computable model must therefore include assumptions about the unknown components of the initial linear encoding. Even a modest sketch of the initial visual encoding requires specifying a large number of sensors, making the calculations required for performance predictions quite costly. We describe procedures that reduce the computational burden of current models of spatial vision while ensuring that the simplified calculations remain consistent with the predictions of the complete model. We also describe a method for using pattern-sensitivity measurements to estimate the initial linear transformation. The method is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. We show how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
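As a minimal Python sketch of the framework the abstract describes, the following code represents the initial linear encoding as a single matrix and applies the vector-length detection rule. The matrix A, the sensor and pixel counts, the function names (sensor_responses, detected, predicted_threshold), and the criterion value are all hypothetical placeholders for illustration, not the paper's actual encoding or estimation procedure.

import numpy as np

# Hypothetical initial linear encoding: each row of A is one sensor's
# spatial weighting function, with optical blur, photoreceptor sampling,
# and receptive-field pooling folded into a single linear transformation.
rng = np.random.default_rng(0)
n_sensors, n_pixels = 64, 128
A = rng.standard_normal((n_sensors, n_pixels))  # placeholder encoding matrix

def sensor_responses(A, stimulus):
    # Map a stimulus (vector of pixel contrasts) to the array of sensor responses.
    return A @ stimulus

def detected(A, stimulus, criterion=1.0):
    # Vector-length rule: detection performance is assumed monotonic in the
    # length ||A s|| of the response vector, so a fixed criterion on that
    # length defines the detection threshold.
    return np.linalg.norm(sensor_responses(A, stimulus)) >= criterion

def predicted_threshold(A, pattern, criterion=1.0):
    # Under the vector-length rule, the contrast threshold c for a pattern p
    # satisfies ||A (c p)|| = criterion, so c = criterion / ||A p||.
    return criterion / np.linalg.norm(A @ pattern)

# Example: predicted contrast threshold for one (randomly chosen) test pattern.
pattern = rng.standard_normal(n_pixels)
print(predicted_threshold(A, pattern))

Note that under this rule measured thresholds constrain only the vector lengths ||A p|| (equivalently, the matrix A'A), so threshold data determine the encoding matrix at most up to an orthogonal transformation; that identifiability limit follows from the monotonicity assumption rather than from anything specific to this sketch.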
