Publication: Multi-Agent Systems: Cooperative Helper Agents and Robustness to Adversarial Attacks
Date
2020-08-18
The Harvard community has made this article openly available.
Citation
Tylkin, Paul. 2020. Multi-Agent Systems: Cooperative Helper Agents and Robustness to Adversarial Attacks. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract
Artificially-intelligent (AI) agents are already indispensable participants in the world around us, and will only become more so. We want agents to make our lives better, easier, safer, more interesting, and more rewarding. But agents will not always behave in the ways we expect or would like. Sometimes this is because of misspecification or mistakes in their design, and sometimes it is because they have been built with malevolent objectives in mind. This dissertation demonstrates that by thinking carefully about agents' models of the world and their incentives, we can design multi-agent systems that are more performant and more robust to adversarial attacks.
One study considers the design of election systems in the presence of a vote buyer -- an entity seeking to pay voters to vote a certain way, thereby changing the election outcome. Assuming that the vote buyer is budget-constrained and seeks to buy ballots from the sub-population of voters that would potentially change the election outcome, this study shows how an Election Authority, which runs and administers the election, can protect the election's integrity by also distributing decoy ballots and using these to deplete the vote buyer's budget.
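The budget-depletion argument above can be illustrated with a toy simulation. This is a minimal sketch, not the dissertation's model: it assumes a vote buyer who pays a fixed price per ballot and cannot distinguish real swing ballots from decoys, so every decoy purchased wastes budget that could have bought a real vote.

```python
import random

def buy_ballots(real_swing, decoys, budget, price, seed=0):
    """Toy model (illustrative only): a budget-constrained vote buyer
    purchases ballots at `price` each from a shuffled pool of real swing
    ballots and Election Authority decoys. Returns the number of real
    ballots actually captured before the budget runs out."""
    rng = random.Random(seed)
    pool = ["real"] * real_swing + ["decoy"] * decoys
    rng.shuffle(pool)  # decoys are indistinguishable from real ballots
    bought_real = 0
    for ballot in pool:
        if budget < price:
            break
        budget -= price
        if ballot == "real":
            bought_real += 1
    return bought_real

# With no decoys, the buyer captures every swing ballot the budget allows;
# mixing in decoys depletes the budget on worthless purchases.
no_decoys = buy_ballots(real_swing=100, decoys=0, budget=50, price=1)
with_decoys = buy_ballots(real_swing=100, decoys=100, budget=50, price=1)
```

Even in this crude model, matching the swing population one-for-one with decoys roughly halves the number of real ballots the buyer can expect to capture for the same budget.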
Two new frameworks for multi-agent learning are introduced. The first consists of a set of modifications to the Arcade Learning Environment and OpenAI Gym, allowing for training and deploying multiple agents, as well as arbitrarily modifying the environments by writing to emulator RAM. The second is a new environment (the Javatari Learning Environment) for deploying artificial agents together with humans. Using these new frameworks, this work investigates the design of helper agents that can act cooperatively together with humans or lower-skilled agents in the same world.
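The multi-agent interface described above can be sketched in miniature. The class below is a hypothetical illustration, not the dissertation's code: the names, the two-player `step` signature, and the `poke` method for writing directly to a stand-in "emulator RAM" are all assumptions meant only to show the shape of such a framework.

```python
class TwoPlayerEnv:
    """Illustrative sketch of a two-agent episodic environment in the
    spirit of the multi-agent ALE/Gym modifications described above.
    All names and dynamics here are invented for the example."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.ram = bytearray(128)  # stand-in for emulator RAM
        self.t = 0

    def reset(self):
        self.ram = bytearray(128)
        self.t = 0
        return bytes(self.ram)

    def poke(self, address, value):
        """Arbitrarily modify the environment by writing to 'RAM'."""
        self.ram[address] = value & 0xFF

    def step(self, action_a, action_b):
        # Toy dynamics: each agent's action accumulates into one RAM cell.
        self.ram[0] = (self.ram[0] + action_a) & 0xFF
        self.ram[1] = (self.ram[1] + action_b) & 0xFF
        self.t += 1
        done = self.t >= self.horizon
        # Cooperative reward: both agents are rewarded when their cells agree.
        reward = float(self.ram[0] == self.ram[1])
        return bytes(self.ram), (reward, reward), done

env = TwoPlayerEnv()
obs = env.reset()
env.poke(7, 42)  # directly modify environment state, as via emulator RAM
total = 0.0
done = False
while not done:
    obs, (r_a, r_b), done = env.step(1, 1)
    total += r_a
```

The shared-reward structure in `step` mirrors the cooperative setting of the helper-agent work: both players do well only when their behavior stays coordinated.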
Keywords
AI, artificial intelligence, helper agents, human-AI cooperation, multi-agent systems, robustness, Artificial intelligence, Computer science
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service