Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search.

Abstract
Many experiments have shown that the human visual system makes extensive use of contextual information to facilitate object search in natural scenes. However, the question of how to formally model contextual influences remains open. On the basis of a Bayesian framework, the authors present an original approach to attentional guidance by global scene context. The model comprises 2 parallel pathways: one pathway computes local features (saliency), and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.
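The combination stage described in the abstract, in which a local saliency map is modulated by a global scene-centered prior over likely target locations, can be illustrated with a minimal sketch. This is not the authors' implementation; the function and parameter names (including the tempering exponent `gamma`) are assumptions made for illustration.

```python
import numpy as np

def contextual_guidance_map(saliency, context_prior, gamma=0.5):
    """Hypothetical sketch of combining the two pathways.

    The local pathway's bottom-up saliency map is modulated by the
    global pathway's scene-centered prior over target locations.
    `gamma` (assumed name) tempers the saliency term so that neither
    factor alone can fully veto a region.
    """
    s = saliency / saliency.sum()            # normalize saliency to a distribution
    p = context_prior / context_prior.sum()  # normalize the context prior
    guidance = (s ** gamma) * p              # contextually modulated activation map
    return guidance / guidance.sum()         # renormalize for comparison across images

# Toy example: flat saliency, with scene context favoring the lower half
# of the image (e.g., searching for pedestrians in a street scene).
sal = np.ones((4, 4))
ctx = np.zeros((4, 4))
ctx[2:, :] = 1.0
g = contextual_guidance_map(sal, ctx)
```

In this toy case, the guidance map assigns zero activation to the upper half and spreads the remaining mass uniformly over the contextually plausible lower half, so predicted fixations concentrate where the scene prior places the target.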
Funding Information
  • National Geospatial-Intelligence Agency (NEGI-1582-04-0004)
  • National Institute of Mental Health (1R03MH068322-01)
  • NEC Corporation
  • Army Research Office (W911NF-04-1-0078)
  • National Science Foundation (BCS-0094433)
  • Michigan State University Foundation