Publication: Economic Hierarchical Q-learning
Date
2008
Published Version
Publisher
Association for the Advancement of Artificial Intelligence
Citation
Schultink, Erik, Ruggiero Cavallo, and David C. Parkes. 2008. Economic hierarchical Q-learning. In Proceedings of the Twenty-third AAAI Conference on Artificial Intelligence and the Twentieth Innovative Applications of Artificial Intelligence Conference: July 13-17, 2008, Chicago, Illinois, ed. American Association for Artificial Intelligence, 689-695. Menlo Park, Calif.: AAAI Press.
Abstract
Hierarchical state decompositions address the curse of dimensionality in Q-learning methods for reinforcement learning (RL) but can suffer from suboptimality. To address this, we introduce the Economic Hierarchical Q-Learning (EHQ) algorithm for hierarchical RL. EHQ uses subsidies to align interests, so that agents that would otherwise converge to a recursively optimal policy are instead motivated to act hierarchically optimally. The essential idea is that a parent pays a child the relative value, to the rest of the system, of "returning the world" in one state rather than another. The resulting learning framework is simple compared to other algorithms that achieve hierarchical optimality. Additionally, EHQ encapsulates the relevant value tradeoffs faced across the hierarchy at each node and requires minimal data exchange between nodes. We provide no theoretical proof of hierarchical optimality, but we demonstrate success with EHQ in empirical results.
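To make the subsidy idea concrete, below is a minimal tabular sketch, not the authors' implementation: the Parent and Child classes, the subsidy construction, and the exit_states naming are all illustrative assumptions. The child runs ordinary Q-learning within its subtask, except that terminating in an exit state pays out the parent's subsidy for that state, so the child internalizes how valuable each exit is to the rest of the system.

    from collections import defaultdict

    ALPHA, GAMMA = 0.1, 0.95  # illustrative learning rate and discount

    class Child:
        """Subtask learner: maximizes its own reward plus the parent's subsidy."""
        def __init__(self, actions, exit_states):
            self.q = defaultdict(float)
            self.actions = actions
            self.exit_states = exit_states

        def update(self, s, a, r, s_next, subsidy):
            # If s_next terminates the subtask, the child "returns the world"
            # to the parent and collects the state-dependent subsidy.
            if s_next in self.exit_states:
                target = r + GAMMA * subsidy[s_next]
            else:
                target = r + GAMMA * max(self.q[(s_next, b)] for b in self.actions)
            self.q[(s, a)] += ALPHA * (target - self.q[(s, a)])

    class Parent:
        """Sets subsidies from its value estimates for each possible exit state."""
        def __init__(self, exit_states):
            self.v = defaultdict(float)  # parent's value estimate per exit state
            self.exit_states = exit_states

        def subsidy(self):
            # Pay the child the value of exiting in state s relative to the
            # worst exit, so subsidies reflect only the *relative* value.
            base = min(self.v[s] for s in self.exit_states)
            return {s: self.v[s] - base for s in self.exit_states}

    # Hypothetical usage: one child update on a single subtask transition.
    exits = {"at_depot", "at_customer"}
    parent = Parent(exits)
    child = Child(actions=["north", "south"], exit_states=exits)
    child.update("road", "north", -1.0, "at_depot", parent.subsidy())

Under these assumptions, the only data exchanged between nodes is the subsidy table itself, which matches the abstract's claim of minimal inter-node communication.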
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.