Reinforcement Learning to Enable Robust Robotic Model Predictive Control
Author
Grossman, Lev Jacob
Metadata
Citation
Grossman, Lev Jacob. 2020. Reinforcement Learning to Enable Robust Robotic Model Predictive Control. Bachelor's thesis, Harvard College.
Abstract
Traditional methods of robotic planning and trajectory optimization often break down when environmental conditions change, real-world noise is introduced, or when rewards become sparse. Consequently, much of the work involved in calculating trajectories is done not by algorithms, but by hand: tuning cost functions and engineering rewards. This thesis seeks to minimize this human effort by building on prior work combining both model-based and model-free methods within an actor-critic framework. This specific synergy allows for the automatic learning of cost functions expressive enough to enable robust robotic planning. This thesis proposes a novel algorithm, “Reference-Guided, Value-Based MPC,” which combines model predictive control (MPC) and reinforcement learning (RL) to compute feasible trajectories for a robotic arm. The algorithm does this while 1) achieving an almost 50% higher planning success rate than standard MPC, 2) solving in sparse environments considered unsolvable by current state-of-the-art algorithms, and 3) generalizing its solutions to different environment initializations.
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page
https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364714
Collections
- FAS Theses and Dissertations