Competition and Cooperation Between Multiple Reinforcement Learning Systems
Access Status: Full text of the requested work is not available in DASH at this time ("restricted access").
Citation: Kool, Wouter, Fiery A. Cushman, and Samuel J. Gershman. 2018. Competition and Cooperation Between Multiple Reinforcement Learning Systems. In Understanding Goal-Directed Decision Making: Computations and Circuits, ed. Richard Morris, Aaron Bornstein, and Amitai Shenhav, 153-178. Elsevier.
Abstract: Most psychological research on reinforcement learning has depicted two systems locked in battle for control of behavior: a flexible but computationally expensive "model-based" system and an inflexible but cheap "model-free" system. However, the complete picture is more complex, with the two systems cooperating in myriad ways. We focus on two issues at the frontier of this research program. First, how is the conflict between these systems adjudicated? Second, how can the systems be combined to harness the relative strengths of each? This chapter reviews recent work on competition and cooperation between the two systems, highlighting the computational principles that govern different forms of interaction.
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:41319693
Collection: FAS Scholarly Articles