Optimal Coordinated Planning Amongst Self-Interested Agents with Private State
Citation: Cavallo, Ruggiero, David C. Parkes, and Satinder Singh. 2006. Optimal coordinated planning amongst self-interested agents with private state. In Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Second Conference, July 13-16, 2006, Cambridge, MA, ed. R. Dechter, T. S. Richardson, F. Bacchus et al., 55-62. Corvallis, Oregon: AUAI Press.
Abstract: Consider a multi-agent system in a dynamic and uncertain environment. Each agent’s local decision problem is modeled as a Markov decision process (MDP) and agents must coordinate on a joint action in each period, which provides a reward to each agent and causes local state transitions. A social planner knows the model of every agent’s MDP and wants to implement the optimal joint policy, but agents are self-interested and have private local state. We provide an incentive-compatible mechanism for eliciting state information that achieves the optimal joint plan in a Markov perfect equilibrium of the induced stochastic game. In the special case in which local problems are Markov chains and agents compete to take a single action in each period, we leverage Gittins allocation indices to provide an efficient factored algorithm and distribute computation of the optimal policy among the agents. Distributed, optimal coordinated learning in a multi-agent variant of the multi-armed bandit problem is obtained as a special case.
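To make the Gittins-index special case concrete, here is a minimal sketch of computing an allocation index for one agent's Markov chain, using the standard retirement-option formulation (the index of a state is, up to a factor of 1 - beta, the smallest lump-sum retirement reward at which stopping there is optimal). The function name and the bisection-plus-value-iteration approach are illustrative assumptions of mine, not the paper's factored, distributed algorithm:

```python
def gittins_index(P, r, beta, s, tol=1e-6):
    """Gittins index of state s for a Markov chain with row-stochastic
    transition matrix P (list of rows), per-state rewards r, and
    discount factor beta in (0, 1).

    Retirement-option formulation: the index is (1 - beta) times the
    smallest retirement reward M for which retiring immediately in s
    is optimal; that threshold is located by bisection, with each
    candidate M evaluated via value iteration.
    """
    n = len(r)
    lo = min(r) / (1 - beta)   # retiring is never optimal below this
    hi = max(r) / (1 - beta)   # retiring is always optimal above this

    def retire_now_optimal(M):
        # Value iteration for V(i) = max(M, r[i] + beta * sum_j P[i][j] V(j)).
        V = [M] * n
        for _ in range(5000):
            V_new = [max(M, r[i] + beta * sum(P[i][j] * V[j] for j in range(n)))
                     for i in range(n)]
            if max(abs(a - b) for a, b in zip(V_new, V)) < tol * (1 - beta):
                break
            V = V_new
        return V[s] <= M + tol  # continuing adds nothing: retire at once

    for _ in range(60):  # bisect on the retirement reward M
        mid = 0.5 * (lo + hi)
        if retire_now_optimal(mid):
            hi = mid
        else:
            lo = mid
    return (1 - beta) * hi

# Sanity check: with two absorbing states paying 0.2 and 1.0 forever,
# the index of an absorbing state is its per-period reward.
P = [[1.0, 0.0], [0.0, 1.0]]
r = [0.2, 1.0]
print(gittins_index(P, r, 0.9, 1))  # ≈ 1.0
```

In the single-action-per-period setting the abstract describes, the planner would then activate, each period, the agent whose current local state has the largest index; only that agent's chain transitions, and the others stay frozen. The index-policy optimality for such bandit processes is the classical Gittins result the paper builds on.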
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:4686806
Collection: FAS Scholarly Articles