Conjunctions Can Guide Attention Through Visual Search
Guided search is a mechanism that controls and optimizes the deployment of attention during visual search, allowing one to attend only to highly relevant items. For instance, when searching for a conjunction of two features, we are able to select a feature-marked subset (e.g., all items sharing the same color) before focusing attention on particular items. Standard models assume that only separate features can guide attention, because only features are available at the preattentive stage of visual analysis and no conjunction information is available at that stage. Here I show that search performance is affected both by the distribution of features across the visual field and by their conjunctions in particular items. It appears that people are unable to use "pure", unbound features to select relevant subsets. This major finding requires reconsidering the standard models of guided search. The concept of distributed attention, which represents multiple items as imperfectly bound objects, seems promising in explaining this finding.