Competition and Cooperation Between Multiple Reinforcement Learning Systems
Access Status
Full text of the requested work is not available in DASH at this time ("restricted access"). For more information on restricted deposits, see our FAQ.

Published Version
https://doi.org/10.1016/b978-0-12-812098-9.00007-3

Citation
Kool, Wouter, Fiery A. Cushman, and Samuel J. Gershman. 2018. "Competition and Cooperation Between Multiple Reinforcement Learning Systems." In Understanding Goal-Directed Decision Making: Computations and Circuits, edited by Richard Morris, Aaron Bornstein, and Amitai Shenhav, 153-178. Elsevier.

Abstract
Most psychological research on reinforcement learning has depicted two systems locked in battle for control of behavior: a flexible but computationally expensive "model-based" system and an inflexible but cheap "model-free" system. However, the complete picture is more complex, with the two systems cooperating in myriad ways. We focus on two issues at the frontier of this research program. First, how is the conflict between these systems adjudicated? Second, how can the systems be combined to harness the relative strengths of each? This chapter reviews recent work on competition and cooperation between the two systems, highlighting the computational principles that govern different forms of interaction.

Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:41319693
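The flexibility/cost trade-off described in the abstract can be illustrated with a minimal sketch (not the chapter's own code): a cached "model-free" TD update versus "model-based" value computation that unrolls a known transition model. The toy two-step task, parameter values, and function names below are assumptions for illustration only.

```python
import numpy as np

# Toy deterministic two-step task (hypothetical): from state 0, action 0 leads
# to state 1 (reward 1.0) and action 1 leads to state 2 (reward 0.0).
TRANSITIONS = {(0, 0): 1, (0, 1): 2}
REWARDS = {1: 1.0, 2: 0.0}

def model_free_q(episodes=50, alpha=0.1):
    """Cheap but inflexible: cache action values via incremental TD(0) updates."""
    q = np.zeros(2)
    for _ in range(episodes):
        for a in (0, 1):
            r = REWARDS[TRANSITIONS[(0, a)]]
            q[a] += alpha * (r - q[a])  # nudge cached value toward sampled reward
    return q

def model_based_q():
    """Expensive but flexible: compute values on the fly from the known model."""
    return np.array([REWARDS[TRANSITIONS[(0, a)]] for a in (0, 1)])

mf = model_free_q()
mb = model_based_q()
```

If `REWARDS` were suddenly revalued, `model_based_q` would reflect the change immediately, while the model-free cache `mf` would need many further updates to catch up — the core tension the chapter's competition/cooperation schemes address.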
Collections
- FAS Scholarly Articles