Economic Hierarchical Q-learning
Published Version
http://portal.acm.org/citation.cfm?id=1620163.1620179
Citation
Schultink, Erik, Ruggiero Cavallo, and David C. Parkes. 2008. Economic hierarchical Q-learning. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence and the Twentieth Innovative Applications of Artificial Intelligence Conference: July 13-17, 2008, Chicago, Illinois, 689-695. Menlo Park, Calif.: AAAI Press.
Abstract
Hierarchical state decompositions address the curse of dimensionality in Q-learning methods for reinforcement learning (RL) but can suffer from suboptimality. To address this, we introduce the Economic Hierarchical Q-Learning (EHQ) algorithm for hierarchical RL. The EHQ algorithm uses subsidies to align interests such that agents that would otherwise converge to a recursively optimal policy are instead motivated to act hierarchically optimally. The essential idea is that a parent pays a child for the relative value to the rest of the system of "returning the world" in one state rather than another. The resulting learning framework is simple compared to other algorithms that obtain hierarchical optimality. Additionally, EHQ encapsulates the relevant information about value tradeoffs faced across the hierarchy at each node and requires minimal data exchange between nodes. We provide no theoretical proof of hierarchical optimality but demonstrate success with EHQ empirically.
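The subsidy mechanism described in the abstract can be illustrated with a small tabular sketch. The following Python code is not the paper's implementation; it is a minimal, hypothetical example in which a child subtask is an ordinary Q-learner and, on termination, the parent pays it a subsidy equal to the relative value of the exit state it leaves the system in. All names (ChildTask, exit_subsidy, parent_value) are illustrative assumptions.

import random
from collections import defaultdict

class ChildTask:
    """Tabular Q-learner for a single subtask in the hierarchy."""
    def __init__(self, actions, alpha=0.1, gamma=1.0, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection within the subtask.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next, terminal):
        # One-step Q-learning backup; terminal transitions do not bootstrap.
        target = r if terminal else r + self.gamma * max(
            self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

def exit_subsidy(parent_value, exit_state, exit_states):
    # Subsidy paid by the parent when the child terminates: the parent's
    # estimated value of the chosen exit state relative to the least valuable
    # exit.  This is what motivates the child to "return the world" in a state
    # that is good for the rest of the system, not just for its own subtask.
    baseline = min(parent_value(x) for x in exit_states)
    return parent_value(exit_state) - baseline

In such a scheme, the parent would add exit_subsidy(...) to the child's reward on its terminating transition before the final update, so the child's learned policy trades off its local reward against the value its exit state has to the rest of the hierarchy.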
Other Sources
http://www.eecs.harvard.edu/econcs/pubs/schultink08.pdf
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:4000334
Collections
- FAS Scholarly Articles [18292]