Publication: Visual search for object categories is predicted by the representational architecture of high-level visual cortex
Date
2016
Journal Title
Journal of Neurophysiology
Publisher
American Physiological Society
Citation
Cohen, Michael A., George A. Alvarez, Ken Nakayama, and Talia Konkle. 2016. “Visual Search for Object Categories Is Predicted by the Representational Architecture of High-Level Visual Cortex.” Journal of Neurophysiology 117 (1) (November 2): 388–402. doi:10.1152/jn.00569.2016.
Abstract
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of the category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, at the level of both macro-scale sectors and smaller meso-scale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.
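The core logic of the analysis described above is representational similarity analysis (RSA): pairwise dissimilarities between neural response patterns for each category are compared against the behavioral search times for the corresponding category pairs. Below is a minimal Python sketch of that logic, not the authors' pipeline; the data structures (`patterns`, `search_rt`), the use of correlation distance, and the choice of Spearman rank correlation are illustrative assumptions.

# Minimal RSA sketch (illustrative, not the authors' code).
# Assumes `patterns` maps each object category to a voxel-response
# vector for a region of interest, and `search_rt` maps each unordered
# category pair to the mean time to find a target of one category
# among distractors of the other.
import itertools
import numpy as np
from scipy.stats import spearmanr

def neural_dissimilarity(patterns, cat_a, cat_b):
    # Correlation distance (1 - Pearson r) between two response
    # patterns, a common dissimilarity measure in RSA.
    r = np.corrcoef(patterns[cat_a], patterns[cat_b])[0, 1]
    return 1.0 - r

def brain_behavior_correlation(patterns, search_rt):
    # Rank-correlate neural dissimilarity with search time across all
    # category pairs (e.g., 8 categories yield the 28 pairs used here).
    pairs = list(itertools.combinations(sorted(patterns), 2))
    neural = [neural_dissimilarity(patterns, a, b) for a, b in pairs]
    behavior = [search_rt[frozenset(p)] for p in pairs]
    # If more-dissimilar neural responses predict faster search
    # (shorter RTs), the expected correlation is negative.
    rho, pval = spearmanr(neural, behavior)
    return rho, pval

A rank correlation is a natural choice here because it tests only for a monotonic brain/behavior relationship, without assuming that search times scale linearly with neural dissimilarity.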
Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles (OAP), as set forth in the repository's Terms of Service.