Person: Schulz, Eric

Last Name: Schulz
First Name: Eric

Search Results (showing 1 - 2 of 2)
  • Publication
    Generalization Guides Human Exploration in Vast Decision Spaces
    (Cold Spring Harbor Laboratory, 2017-08-01) Wu, Charley M.; Schulz, Eric; Speekenbrink, Maarten; Nelson, Jonathan D.; Meder, Björn
    From foraging for food to learning complex games, many aspects of human behaviour can be framed as a search problem with a vast space of possible actions. Under finite search horizons, optimal solutions are generally unobtainable. Yet how do humans navigate vast problem spaces, which require intelligent exploration of unobserved actions? Using a variety of bandit tasks with up to 121 arms, we study how humans search for rewards under limited search horizons, where the spatial correlation of rewards (in both generated and natural environments) provides traction for generalization. Across a variety of probabilistic and heuristic models, we find evidence that Gaussian Process function learning, combined with an optimistic Upper Confidence Bound sampling strategy, provides a robust account of how people use generalization to guide search. Our modelling results and parameter estimates are recoverable and can be used to simulate human-like performance, providing novel insights into human behaviour in complex environments.
    (A hedged code sketch of this GP-UCB strategy follows the list.)
  • Publication
    Multi-Task Reinforcement Learning in Humans
    (Nature Publishing Group, 2021-01-28) Tomov, Momchil; Schulz, Eric; Gershman, Samuel
    The ability to transfer knowledge across tasks and generalize to novel ones is an important hallmark of human intelligence. Yet not much is known about human multi-task reinforcement learning. We study participants’ behavior in a novel two-step decision-making task with multiple features and changing reward functions. We compare their behavior to two state-of-the-art algorithms for multi-task reinforcement learning, one that maps previous policies and encountered features to new reward functions and one that approximates value functions across tasks, as well as to standard model-based and model-free algorithms. Across three exploratory experiments and a large preregistered experiment, our results provide strong evidence for a strategy that maps previously learned policies to novel scenarios. These results enrich our understanding of human reinforcement learning in complex environments with changing task demands.
    (A hedged sketch of the policy-mapping idea follows the list.)
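
The first abstract attributes human search behaviour to Gaussian Process function learning combined with an optimistic Upper Confidence Bound (UCB) sampling strategy. Below is a minimal sketch of that GP-UCB idea on a 1-D grid of arms; the kernel length-scale, noise level, exploration weight `beta`, and the toy observations are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=2.0):
    """Squared-exponential kernel: nearby arms have correlated rewards."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_all, noise=0.1):
    """GP posterior mean and standard deviation at every arm."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_all, x_obs)
    K_inv = np.linalg.inv(K)
    mu = K_s @ K_inv @ y_obs
    # The RBF kernel's prior variance is 1; subtract the explained part.
    var = 1.0 - np.einsum("ij,jk,ik->i", K_s, K_inv, K_s)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def ucb_choice(mu, sigma, beta=0.5):
    """Optimistic rule: prefer high predicted mean or high uncertainty."""
    return int(np.argmax(mu + beta * sigma))

# Toy usage on a 121-arm grid (the largest task size in the paper).
arms = np.arange(121, dtype=float)
x_obs = np.array([10.0, 60.0, 110.0])  # arms sampled so far (assumed)
y_obs = np.array([0.2, 0.8, 0.4])      # rewards observed there (assumed)
mu, sigma = gp_posterior(x_obs, y_obs, arms)
print("next arm to sample:", ucb_choice(mu, sigma))
```

The uncertainty bonus is what makes the rule exploratory: arms near observed high rewards inherit high predicted means through the kernel, while distant unobserved arms keep large standard deviations and so still attract occasional sampling.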
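
The second abstract's best-supported strategy maps previously learned policies onto novel reward functions. A common formalization of that idea is successor features with generalized policy improvement (GPI); the sketch below shows its core computation under toy assumptions (random successor features, a hand-picked reward weight vector, tabular sizes), as an illustration of the technique rather than the authors' implementation.

```python
import numpy as np

n_states, n_actions, n_features = 4, 2, 3

# Successor features for two previously learned policies (toy values):
# psi[p, s, a, :] = expected discounted feature occupancies when taking
# action a in state s and then following old policy p.
rng = np.random.default_rng(0)
psi = rng.random((2, n_states, n_actions, n_features))

def gpi_action(psi, w, state):
    """Generalized policy improvement: under a new task with reward
    weights w, each old policy's value is Q_p(s, a) = psi_p(s, a) . w;
    act greedily on the best old policy's prediction."""
    q = psi[:, state] @ w              # shape: (n_policies, n_actions)
    return int(np.argmax(q.max(axis=0)))

# Toy usage: a novel task, i.e. a changed reward function expressed as
# new feature weights (assumed values).
w_new = np.array([1.0, -0.5, 0.2])
print("action chosen in state 0:", gpi_action(psi, w_new, 0))
```

Because a task is summarized by a single weight vector, evaluating every stored policy on a novel task reduces to one matrix product, which is what makes this kind of policy reuse cheap at transfer time.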