Structure Learning and Uncertainty-Guided Exploration in the Human Brain
Citation
Tomov, Momchil. 2020. Structure Learning and Uncertainty-Guided Exploration in the Human Brain. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
Abstract
Over the past several decades, reinforcement learning has emerged as a unifying framework for reward-based learning and decision making in brains, minds, and machines. With a long history crisscrossing the fields of psychology, neuroscience, and artificial intelligence, reinforcement learning has made major contributions to explaining many human and animal behaviors and the neural circuits underlying them, and to allowing artificial agents to achieve human-level performance on tasks that were previously beyond the capabilities of computers. Characterizing the reinforcement learning computations performed by the brain can thus simultaneously advance our understanding of neurological disorders affecting decision making and guide theoretical research towards developing artificial agents capable of dealing with complex real-world domains. Yet despite broad agreement on the prominence of reinforcement learning and its mapping onto brain circuits, many open questions remain regarding the particular kinds of representations and algorithms employed by living organisms. In this body of work, I use a combination of computational modeling, behavioral experiments, and neuroimaging to study two such questions: how the brain tackles the exploration-exploitation dilemma to efficiently learn the values of different options, given that it knows the structure of the environment, and how it might learn and represent this structure in the first place.
I first sought to characterize the neural architecture of uncertainty-guided exploration. Using fMRI, I found that the relative uncertainty of the available options is reflected in rostrolateral prefrontal cortex and drives directed exploration, while the total uncertainty of the available options is reflected in dorsolateral prefrontal cortex.
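The distinction between directed and random exploration can be illustrated with a simple model. The sketch below is a hypothetical two-armed bandit example, not code from the dissertation: a Kalman filter tracks each option's posterior mean and variance, relative uncertainty (RU) contributes a directed exploration bonus, and total uncertainty (TU) scales choice stochasticity, in the spirit of standard hybrid UCB/Thompson-style formulations. All function names and parameter values are illustrative assumptions.

```python
# Hypothetical sketch (not the dissertation's code): a two-armed bandit in which
# relative uncertainty (RU) drives directed exploration and total uncertainty (TU)
# scales random exploration.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Posterior beliefs about each arm's mean reward, updated with a Kalman filter.
mu = np.zeros(2)              # posterior means
sigma2 = np.ones(2) * 100.0   # posterior variances (diffuse prior)
tau2 = 10.0                   # assumed reward noise variance

def choice_prob_arm0(mu, sigma2, w_value=1.0, w_directed=1.0):
    """P(choose arm 0) as a probit of value and uncertainty terms."""
    V = mu[0] - mu[1]                             # estimated value difference
    RU = np.sqrt(sigma2[0]) - np.sqrt(sigma2[1])  # relative uncertainty -> directed exploration
    TU = np.sqrt(sigma2[0] + sigma2[1])           # total uncertainty -> random exploration
    return norm.cdf(w_value * V / TU + w_directed * RU)

def update(arm, reward):
    """Kalman filter update of the chosen arm's posterior."""
    k = sigma2[arm] / (sigma2[arm] + tau2)        # Kalman gain
    mu[arm] += k * (reward - mu[arm])
    sigma2[arm] *= (1.0 - k)

# Simulate a few trials with (assumed) true arm means of 1 and -1.
true_means = np.array([1.0, -1.0])
for t in range(20):
    p0 = choice_prob_arm0(mu, sigma2)
    arm = 0 if rng.random() < p0 else 1
    reward = rng.normal(true_means[arm], np.sqrt(tau2))
    update(arm, reward)
    print(f"trial {t:2d}: chose arm {arm}, P(arm 0)={p0:.2f}, mu={mu.round(2)}")
```

In this sketch, an option that is more uncertain than its alternative receives a directed bonus via RU, while high overall uncertainty (large TU) flattens the choice probabilities, producing more random exploration early in learning.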
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page
https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37365522
Collections
- FAS Theses and Dissertations [6848]