Person: Konkle, Talia
Search Results
Now showing 1 - 10 of 12 results
Publication: Visual Long-Term Memory Has a Massive Storage Capacity for Object Details (Proceedings of the National Academy of Sciences, 2008-09-11)
Authors: Brady, Timothy F.; Konkle, Talia; Alvarez, George; Oliva, Aude
One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images. These results have implications for cognitive models, in which capacity limitations impose a primary computational constraint (e.g., models of object recognition), and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.

Publication: The Large-Scale Organization of Object-Responsive Cortex Is Reflected in Resting-State Network Architecture (Oxford University Press (OUP), 2016)
Authors: Konkle, Talia; Caramazza, Alfonso
Neural responses to visually presented objects have a large-scale spatial organization across the cortex, related to the dimensions of animacy and object size. Most proposals about the origins of this organization point to the influence of differential connectivity with other cortical regions as the key organizing force that drives distinctions in object-responsive cortex. To explore this possibility, we used resting-state functional connectivity to examine the relationship between stimulus-evoked organization of objects, and distinctions in functional network architecture. Using a data-driven analysis, we found evidence for three distinct whole-brain resting-state networks that route through object-responsive cortex, and these naturally manifest the tripartite structure of the stimulus-evoked organization. However, object-responsive regions were also highly correlated with each other at rest. Together, these results point to a nested network architecture, with a local interconnected network across object-responsive cortex and distinctive subnetworks that specifically route these key object distinctions to distinct long-range regions. Broadly, these results point to the viability that long-range connections are a driving force of the large-scale organization of object-responsive cortex.

Publication: Mid-level perceptual features distinguish objects of different real-world sizes (American Psychological Association (APA), 2016)
Authors: Long, Bria; Konkle, Talia; Cohen, Michael A.; Alvarez, George
Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects—real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms—unrecognizable textures that loosely preserve an object’s form. However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes.

Publication: Tripartite Organization of the Ventral Stream by Animacy and Object Size (Society for Neuroscience, 2013)
Authors: Konkle, Talia; Caramazza, Alfonso
Occipito-temporal cortex is known to house visual object representations, but the organization of the neural activation patterns along this cortex is still being discovered. Here we found a systematic, large-scale structure in the neural responses related to the interaction between two major cognitive dimensions of object representation: animacy and real-world size. Neural responses were measured with functional magnetic resonance imaging while human observers viewed images of big and small animals and big and small objects. We found that real-world size drives differential responses only in the object domain, not the animate domain, yielding a tripartite distinction in the space of object representation. Specifically, cortical zones with distinct response preferences for big objects, all animals, and small objects, are arranged in a spoked organization around the occipital pole, along a single ventromedial, to lateral, to dorsomedial axis. The preference zones are duplicated on the ventral and lateral surface of the brain. Such a duplication indicates that a yet unknown higher-order division of labor separates object processing into two substreams of the ventral visual pathway. Broadly, we suggest that these large-scale neural divisions reflect the major joints in the representational structure of objects and thus place informative constraints on the nature of the underlying cognitive architecture.

Publication: Conceptual distinctiveness supports detailed visual long-term memory for real-world objects (American Psychological Association, 2010)
Authors: Konkle, Talia; Brady, Timothy F.; Alvarez, George
Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers' capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness.

Publication: Scene Memory Is More Detailed Than You Think: The Role of Categories in Visual Long-Term Memory (SAGE Publications, 2010)
Authors: Konkle, Talia; Brady, T. F.; Alvarez, George; Oliva, A.
Observers can store thousands of object images in visual long-term memory with high fidelity, but the fidelity of scene representations in long-term memory is not known. Here, we probed scene-representation fidelity by varying the number of studied exemplars in different scene categories and testing memory using exemplar-level foils. Observers viewed thousands of scenes over 5.5 hr and then completed a series of forced-choice tests. Memory performance was high, even with up to 64 scenes from the same category in memory. Moreover, there was only a 2% decrease in accuracy for each doubling of the number of studied scene exemplars. Surprisingly, this degree of categorical interference was similar to the degree previously demonstrated for object memory. Thus, although scenes have often been defined as a superset of objects, our results suggest that scenes and objects may be entities at a similar level of abstraction in visual long-term memory.

Publication: Visual Awareness Is Limited by the Representational Architecture of the Visual System (MIT Press - Journals, 2015)
Authors: Cohen, Michael A.; Nakayama, Ken; Konkle, Talia; Stantić, Mirta; Alvarez, George
Visual perception and awareness have strict limitations. We suggest that one source of these limitations is the representational architecture of the visual system. Under this view, the extent to which items activate the same neural channels constrains the amount of information that can be processed by the visual system and ultimately reach awareness. Here, we measured how well stimuli from different categories (e.g., faces and cars) blocked one another from reaching awareness using two distinct paradigms that render stimuli invisible: visual masking and continuous flash suppression. Next, we used fMRI to measure the similarity of the neural responses elicited by these categories across the entire visual hierarchy. Overall, we found strong brain-behavior correlations within the ventral pathway, weaker correlations in the dorsal pathway, and no correlations in early visual cortex (V1-V3). These results suggest that the organization of higher level visual cortex constrains visual awareness and the overall processing capacity of visual cognition.

Publication: Visual Long-Term Memory Has the Same Limit on Fidelity as Visual Working Memory (SAGE Publications, 2013-05-20)
Authors: Brady, Timothy Francis; Konkle, Talia; Gill, Jonathan; Oliva, Aude; Alvarez, George
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object’s color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.

Publication: Real-World Objects Are Not Represented as Bound Units: Independent Forgetting of Different Object Details from Visual Memory (American Psychological Association, 2013-05-20)
Authors: Brady, Timothy Francis; Konkle, Talia; Alvarez, George; Oliva, Aude
Are real-world objects represented as bound units? Although a great deal of research has examined binding between the feature dimensions of simple shapes, little work has examined whether the featural properties of real-world objects are stored in a single unitary object representation. In a first experiment, we found that information about an object's color is forgotten more rapidly than the information about an object's state (e.g., open, closed), suggesting that observers do not forget objects as entirely bound units. In a second and third experiment, we examined whether state and exemplar information are forgotten separately or together. If these properties are forgotten separately, the probability of getting one feature correct should be independent of whether the other feature was correct. We found that after a short delay, observers frequently remember both state and exemplar information about the same objects, but after a longer delay, memory for the two properties becomes independent. This indicates that information about object state and exemplar are forgotten separately over time. We thus conclude that real-world objects are not represented in a single unitary representation in visual memory.

Publication: Visual search for object categories is predicted by the representational architecture of high-level visual cortex (American Physiological Society, 2016)
Authors: Cohen, Michael Sharpe; Alvarez, George; Nakayama, Ken; Konkle, Talia
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face amongst cars, body amongst hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macro-scale sectors as well as smaller meso-scale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.