Visual search for object categories is predicted by the representational architecture of high-level visual cortex

Title: Visual search for object categories is predicted by the representational architecture of high-level visual cortex
Author: Cohen, Michael Sharpe; Alvarez, George Angelo; Nakayama, Ken; Konkle, Talia A

Note: Order does not necessarily reflect citation order of authors.

Citation: Cohen, Michael A., George A. Alvarez, Ken Nakayama, and Talia Konkle. 2016. “Visual Search for Object Categories Is Predicted by the Representational Architecture of High-Level Visual Cortex.” Journal of Neurophysiology 117 (1) (November 2): 388–402. doi:10.1152/jn.00569.2016.
Abstract: Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face amongst cars, a body amongst hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macro-scale sectors and smaller meso-scale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.
Published Version: doi:10.1152/jn.00569.2016
Terms of Use: This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:33973830
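The abstract's analysis logic — correlating pairwise neural dissimilarity between object categories with behavioral search times for the corresponding category pairs — can be sketched as follows. This is an illustrative outline only: the data are random stand-ins, and the category count, voxel count, and dissimilarity measure (1 minus Pearson correlation) are assumptions, not the study's actual parameters. Note that 8 categories yield exactly 28 pairs, matching the 28 search conditions described in the abstract.

```python
# Hypothetical sketch of a representational similarity analysis (RSA):
# correlate neural dissimilarity of category pairs with search times.
# All data below are random stand-ins, not the study's measurements.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

n_categories = 8     # e.g., faces, bodies, cars, hammers, ... (assumed)
n_voxels = 100       # voxels in one region of interest (assumed)

# Stand-in neural response patterns: one pattern per category,
# measured with items presented in isolation.
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural dissimilarity for each category pair:
# 1 - Pearson correlation between the two response patterns.
pairs = list(combinations(range(n_categories), 2))   # 28 pairs
neural_rdm = np.array(
    [1 - np.corrcoef(patterns[i], patterns[j])[0, 1] for i, j in pairs]
)

# Stand-in behavioral data: mean search time (ms) per category pair
# (target from one category among distractors from the other).
search_times = rng.normal(loc=700, scale=100, size=len(pairs))

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# Brain/behavior correlation: if categories with more dissimilar neural
# responses are found faster, this correlation should be negative.
rho = spearman(neural_rdm, search_times)
print(f"brain/behavior Spearman rho: {rho:.3f}")
```

In practice this comparison would be repeated for each cortical sector or region of interest, and a rank correlation is preferred because it assumes only a monotonic relation between neural distance and search time.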