Action Representations in the Visual System
Author
Tarhan, Leyla
Citation
Tarhan, Leyla. 2021. Action Representations in the Visual System. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.

Abstract
We see other people’s actions every day. Yet most research investigating how we recognize and understand these actions focuses on abstract, conceptual representations outside of the visual system. This thesis shifts that focus to study how the visual system represents the actions that we see. I argue that this shift is necessary to understand how the brain derives meaning from the patterns of light that fall on our eyes when we witness an action.

In Chapter 1, I begin by introducing a tool that enhances our ability to study how action representations change over large swathes of the cortex: reliability-based voxel selection. In this chapter, I explain how to leverage this tool to identify regions of the brain with high-quality data, and I demonstrate that this method isolates different regions of cortex than alternative approaches to voxel selection do.
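To make the core idea concrete, here is a minimal sketch of split-half reliability-based voxel selection: correlate each voxel’s condition-wise response profile across two independent halves of the data, and keep the voxels whose profiles replicate. The function name, input format, and threshold value are illustrative assumptions, not the dissertation’s actual pipeline.

```python
import numpy as np

def reliability_based_voxel_selection(betas_half1, betas_half2, threshold=0.3):
    """Select voxels whose condition-wise responses replicate across
    two independent halves of the data (e.g., odd vs. even runs).

    betas_half1, betas_half2 : arrays of shape (n_conditions, n_voxels),
        e.g., GLM beta estimates for each condition in each half.
    threshold : minimum split-half correlation for a voxel to be kept
        (an illustrative value; in practice it would be chosen from the data).

    Returns a boolean mask over voxels and the per-voxel reliabilities.
    """
    n_voxels = betas_half1.shape[1]
    reliability = np.empty(n_voxels)
    for v in range(n_voxels):
        # Correlate this voxel's response profile across the two halves.
        reliability[v] = np.corrcoef(betas_half1[:, v], betas_half2[:, v])[0, 1]
    return reliability >= threshold, reliability
```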
In Chapter 2, I apply reliability-based voxel selection and other modern neuroimaging analyses to study how the visual cortex represents everyday actions, such as running and cooking. I show that this cortex naturally divides into five networks that are tuned to slightly different aspects of these actions. One network is tuned to actions’ sociality (whether they are directed at a person, as in ballroom dancing), while four networks are tuned to different interaction envelopes (the scale of space at which an action affects the world around it). These interaction envelopes range from small (as in knitting) to large (as in a soccer penalty shot). This work suggests that sociality and interaction envelope size are two of the major features organizing action responses in the visual cortex.
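The abstract does not specify how these networks were identified. As one plausible illustration, a data-driven grouping of reliable voxels by their response profiles might look like the sketch below; the use of k-means, the variable names, and the placeholder data are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: response profiles (n_voxels x n_action_conditions),
# restricted to voxels that passed the reliability threshold above.
rng = np.random.default_rng(0)
profiles = rng.standard_normal((5000, 60))  # placeholder data

# Group voxels by the similarity of their tuning across actions;
# k=5 mirrors the five networks described in the text.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(profiles)
network_labels = kmeans.labels_  # one network assignment per voxel
```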
In Chapter 3, I turn to a more cognitive level to ask which properties underlie our intuitive processing of actions. Specifically, I investigate the properties that underlie our intuitions about actions’ similarity; for example, why do running and walking seem similar to each other, but different from baking cookies? I show that inferences about actors’ goals predict these intuitions particularly well, followed by judgments about the actors’ movement kinematics. This work adds to an existing literature showing that humans naturally process others’ intentions when they see their actions. In addition, it demonstrates that more visual information, such as movement kinematics, also influences this intuitive level of action processing.
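As a hedged illustration of how such property models can be compared against intuitive similarity judgments, the sketch below correlates the pairwise dissimilarities predicted by one set of feature ratings (e.g., goal judgments, or kinematics judgments) with behavioral dissimilarity judgments, in the spirit of representational similarity analysis. The function name, the correlation-based distance, and the input formats are assumptions, not the dissertation’s reported analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def model_fit(feature_ratings, intuitive_dissimilarity):
    """Spearman correlation between a feature model's predicted pairwise
    distances and intuitive dissimilarity judgments.

    feature_ratings : (n_actions, n_features) ratings on one property
        (e.g., actors' goals, or movement kinematics).
    intuitive_dissimilarity : condensed vector of pairwise dissimilarities
        between the same actions, from behavioral judgments.
    """
    # Predicted dissimilarity: 1 - correlation between actions' feature profiles.
    predicted = pdist(feature_ratings, metric="correlation")
    rho, _ = spearmanr(predicted, intuitive_dissimilarity)
    return rho
```

Fitting separate models for goal ratings and kinematics ratings and comparing the resulting correlations would mirror the comparison described above, where goals predict intuitions best, followed by kinematics.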
Together, these findings demonstrate that observing everyday actions evokes rich representations throughout the visual system, as well as higher-order cognitive processing. This work also pushes action research forward by putting the focus on how the visual system extracts information from the actions that we see, rather than on the more conceptual-level processing that operates over this perceptual information. I end by outlining how we can continue building on this work to uncover how the brain transforms visual action representations into abstract conceptual representations, ultimately extracting meaning from a pattern of light.
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA

Citable link to this page
https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37368217
Collections
- FAS Theses and Dissertations [6848]