Visual search for object categories is predicted by the representational architecture of high-level visual cortex
Citation: Cohen, Michael A., George A. Alvarez, Ken Nakayama, and Talia Konkle. 2016. "Visual Search for Object Categories Is Predicted by the Representational Architecture of High-Level Visual Cortex." Journal of Neurophysiology 117 (1) (November 2): 388–402. doi:10.1152/jn.00569.2016.
Abstract: Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face amongst cars, body amongst hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macro-scale sectors as well as smaller meso-scale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.
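The brain/behavior logic of representational similarity analysis described in the abstract can be sketched in a few lines: build a neural representational dissimilarity matrix (RDM) from category response patterns, then rank-correlate its unique pairwise entries with per-pair search times. The sketch below uses simulated data only (voxel counts, noise levels, and the link between dissimilarity and search time are all assumptions for illustration, not the authors' data or code); note that 8 categories yield 8 choose 2 = 28 pairs, matching the 28 search conditions mentioned above.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_categories = 8   # 8 categories -> 28 unique pairs, as in the study design
n_voxels = 100     # hypothetical voxel count for a region of interest

# Simulated neural response patterns: one pattern vector per category.
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural RDM: 1 - Pearson correlation between category patterns.
neural_rdm = 1 - np.corrcoef(patterns)

# Unique category pairs (upper triangle, excluding the diagonal).
iu = np.triu_indices(n_categories, k=1)
neural_dissim = neural_rdm[iu]

# Simulated behavior: search time per target/distractor pair. Here we
# assume more dissimilar pairs are found faster (shorter times), so
# search time is the negated dissimilarity plus noise.
search_times = -neural_dissim + rng.normal(scale=0.05, size=neural_dissim.size)

# Brain/behavior correlation: rank correlation across the 28 pairs.
rho, p = spearmanr(neural_dissim, search_times)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```

Because the simulated search times are constructed to track neural dissimilarity, the sketch produces a strong negative rank correlation; with real data, the sign and strength of this correlation are exactly what the study measured across visual regions.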
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:33973830
Collection: FAS Scholarly Articles