Reachable environments in the mind and brain: insights into the visual representation of near-scale spaces
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit").
Josephs, Emilie Louise
Citation: Josephs, Emilie Louise. 2021. Reachable environments in the mind and brain: insights into the visual representation of near-scale spaces. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract: Much of our visual experience consists of rich, close-scale environments: imagine the view of your desk as you type an email, or the kitchen counter as you prepare a meal. Vision science has identified mechanisms for processing views of individual objects and views of navigable-scale environments (i.e., "scenes"), but it is still unknown what mechanisms underlie our understanding of reachable-scale spaces (hereafter "reachspaces"). Across three papers, I explore how reachspaces are represented in the mind and brain, and provide evidence that they may require additional mechanisms distinct from object and scene processing.

In the first paper (Josephs & Konkle, 2019), I tested whether views of reachspaces differ systematically from views of scenes and objects in their visual statistics. Using computational measures, I found that reachspaces span a distinct set of visual features from scenes and objects. With behavioral experiments, I confirmed that human observers are sensitive to these differences, and that even across a wide variety of categories (e.g., office, bathroom, kitchen), reachspaces are more perceptually similar to each other than to views at other scales.

In the second paper, I explored the organization of our knowledge of the reachable world. I collected over 1 million similarity judgments on reachspace images, and modeled them to discover the latent dimensions that shape these judgments. Overall, I found that reachspace similarity is well predicted by the function or purpose of the space, similar to previous findings for both objects and scenes. I next examined clusters in the similarity structure of this image set, and found evidence for conceptual divisions between five kinds of reachspaces: those related to eating, electronics, storage, hobbies, and chores. Finally, I found that a wide variety of dimensions contribute to these judgments, which only partially overlap with dimensions previously identified for scenes and objects.
In the third paper (Josephs & Konkle, 2020), I explored how reachspaces activate visual cortex, and whether their activation profiles differ from those of scenes and objects. Using functional neuroimaging, I found that reachspaces elicit preferential activity in ventral occipital and dorsal parietal regions, distinct from the regions that prefer scenes and objects. Further experiments showed that these reachspace-preferring regions are strongly responsive to views of multiple objects. Altogether, this work has begun to identify the representations that support reachspace processing, and raises the possibility that they rely on mechanisms distinct from those supporting scene and single-object perception.
Citable link: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37368355
Collection: FAS Theses and Dissertations