Optimal Coordinated Planning Amongst Self-Interested Agents with Private State
Access Status
Full text of the requested work is not available in DASH at this time ("restricted access"). For more information on restricted deposits, see our FAQ.

Published Version
http://www.informatik.uni-trier.de/~ley/db/conf/uai/index.html

Citation
Cavallo, Ruggiero, David C. Parkes, and Satinder Singh. 2006. Optimal coordinated planning amongst self-interested agents with private state. In Uncertainty in artificial intelligence: Proceedings of the Twenty-second Conference: July 13-16, 2006, Cambridge, MA, ed. R. Dechter, T. S. Richardson, F. Bacchus et al., 55-62. Corvallis, Oregon: AUAI Press.

Abstract
Consider a multi-agent system in a dynamic and uncertain environment. Each agent's local decision problem is modeled as a Markov decision process (MDP) and agents must coordinate on a joint action in each period, which provides a reward to each agent and causes local state transitions. A social planner knows the model of every agent's MDP and wants to implement the optimal joint policy, but agents are self-interested and have private local state. We provide an incentive-compatible mechanism for eliciting state information that achieves the optimal joint plan in a Markov perfect equilibrium of the induced stochastic game. In the special case in which local problems are Markov chains and agents compete to take a single action in each period, we leverage Gittins allocation indices to provide an efficient factored algorithm and distribute computation of the optimal policy among the agents. Distributed, optimal coordinated learning in a multi-agent variant of the multi-armed bandit problem is obtained as a special case.

Other Sources
http://www.eecs.harvard.edu/econcs/pubs/cps-uai06.pdf

Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:4686806
Collections
- FAS Scholarly Articles [18292]
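The coordination setting described in the abstract can be sketched in miniature: each agent's local problem is a Markov chain, the planner activates the single agent whose reported index is highest, and only the activated agent's state transitions. Everything below is invented for illustration; the "index" here is just the immediate reward in the current state, whereas the paper uses true Gittins allocation indices and adds incentive payments so that truthful reporting is an equilibrium, both of which this toy omits.

```python
class Agent:
    """An agent whose local problem is a small deterministic Markov chain."""

    def __init__(self, name, rewards, transitions):
        self.name = name
        self.state = 0
        self.rewards = rewards          # rewards[s]: reward when activated in state s
        self.transitions = transitions  # transitions[s]: next local state after activation

    def index(self):
        # Placeholder allocation index: immediate reward in the current state.
        # (A true Gittins index also accounts for future state transitions.)
        return self.rewards[self.state]

    def activate(self):
        reward = self.rewards[self.state]
        self.state = self.transitions[self.state]
        return reward


def plan(agents, periods):
    """Planner: each period, activate the single agent with the highest index."""
    total = 0.0
    for _ in range(periods):
        chosen = max(agents, key=lambda a: a.index())
        total += chosen.activate()
    return total


a = Agent("a", rewards=[5.0, 1.0], transitions=[1, 1])  # rewarding once, then poor
b = Agent("b", rewards=[3.0, 3.0], transitions=[0, 0])  # steadily moderate
print(plan([a, b], 3))  # prints 11.0: agent a is activated once, then b twice
```

Note that because only the activated agent's state changes, the joint problem factors across agents, which is what makes index-based policies attractive in the Markov-chain special case the abstract describes.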