Learning and Solving Many-Player Games Through a Cluster-Based Representation
Citation: Ficici, Sevan, David C. Parkes, and Avi Pfeffer. 2008. Learning and solving many-player games through a cluster-based representation. In Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Fourth Conference, July 9-12, 2008, Helsinki, Finland, ed. D. McAllester and P. Myllymaki, 187-195. Corvallis, OR: AUAI Press for the Association for Uncertainty in Artificial Intelligence.
Abstract: To address the challenge of exponential scaling with the number of agents, we adopt a cluster-based representation to approximately solve asymmetric games with very many players. A cluster groups together agents with a similar "strategic view" of the game. We learn the clustered approximation from data consisting of strategy profiles and payoffs, which may be obtained from observations of play or access to a simulator. Using our clustering, we construct a reduced "twins" game in which each cluster is associated with two players of the reduced game. This allows our representation to be individually responsive, because we align the interests of every individual agent with the strategy of its cluster. Our approach provides agents with higher payoffs and lower regret on average than model-free methods as well as previous cluster-based methods, and requires only a few observations for learning to be successful. The "twins" approach is shown to be an important component of providing these low-regret approximations.
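The clustering step described in the abstract can be sketched as grouping agents whose observed payoffs vary similarly across strategy profiles (a shared "strategic view"). The k-means-style routine below, and all names in it, are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: cluster agents by the similarity of their payoff
# vectors across observed strategy profiles. This is an assumed stand-in
# for the paper's learning procedure, not its actual code.
import numpy as np

def cluster_agents(payoffs, k, iters=50):
    """payoffs: (n_agents, n_profiles) matrix of observed payoffs.
    Returns one cluster label per agent."""
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [payoffs[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(payoffs - c, axis=1) for c in centers],
                   axis=0)
        centers.append(payoffs[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each agent to its nearest cluster center.
        dists = np.linalg.norm(payoffs[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean payoff vector of its cluster.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = payoffs[labels == c].mean(axis=0)
    return labels

# Toy data: two groups of agents with clearly distinct payoff patterns.
payoffs = np.array([[1.0, 0.9, 1.1],
                    [1.1, 1.0, 0.9],
                    [5.0, 4.9, 5.1],
                    [5.1, 5.0, 4.9]])
labels = cluster_agents(payoffs, k=2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
# → True True True
```

In the paper's approach, each resulting cluster would then seed two players of the reduced "twins" game, so that an individual agent can deviate from its cluster's strategy while the rest of the cluster plays on.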
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:4000306