Competing Mobile Network Game: Embracing antijamming and jamming strategies with reinforcement learning
Published Version
https://doi.org/10.1109/CNS.2013.6682689
Citation
Gwon, Youngjune, Siamak Dastangoo, Carl Fossa, and H. T. Kung. 2013. “Competing Mobile Network Game: Embracing Antijamming and Jamming Strategies with Reinforcement Learning.” In Proceedings of the 2013 IEEE Conference on Communications and Network Security (CNS), National Harbor, MD and Washington DC, 14-16 October 2013, 28-36. IEEE Press.
Abstract
We introduce the Competing Mobile Network Game (CMNG), a stochastic game played by cognitive radio networks that compete to dominate open spectrum access. Differentiated from existing approaches, we incorporate both communicator and jamming nodes to form a friendly coalition network, integrate antijamming and jamming subgames into a stochastic game framework, and apply Q-learning techniques to solve for an optimal channel access strategy. We empirically evaluate our Q-learning based strategies and find that Minimax-Q learning is more suitable for an aggressive environment than Nash-Q, while Friend-or-foe Q-learning can provide the best solution in distributed mobile ad hoc networking scenarios where centralized control is hardly available.
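To make the Minimax-Q approach mentioned in the abstract concrete, the sketch below shows a Littman-style Minimax-Q update for a two-player zero-sum channel-access game: the state value is the maximin over mixed policies, computed with a linear program, and Q(s, a, o) is updated toward r + gamma * V(s'). This is an illustrative sketch only, not code from the paper; the function names (maximin_value, minimax_q_update), the toy state/action sizes, the reward, and the state transition are all placeholder assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sketch of Minimax-Q learning for a zero-sum channel-access game.
# State/action spaces, reward, and transitions are placeholders, not from the paper.

def maximin_value(Q_s):
    """Solve max_pi min_o sum_a pi[a] * Q_s[a, o] via linear programming."""
    n_a, n_o = Q_s.shape
    # Decision variables: pi[0..n_a-1] and the game value v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0                                  # maximize v == minimize -v
    # For each opponent action o: v - sum_a pi[a] * Q_s[a, o] <= 0
    A_ub = np.hstack([-Q_s.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # Mixed-policy probabilities sum to one.
    A_eq = np.append(np.ones(n_a), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    pi = np.clip(res.x[:n_a], 0.0, None)          # guard against tiny negatives
    return pi, res.x[-1]                          # mixed policy, state value V(s)

def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.95):
    """One Minimax-Q step: Q(s,a,o) <- (1-alpha) Q(s,a,o) + alpha (r + gamma V(s'))."""
    _, v_next = maximin_value(Q[s_next])
    Q[s, a, o] = (1 - alpha) * Q[s, a, o] + alpha * (r + gamma * v_next)

# Toy usage: 4 channel states, 3 own actions, 3 opponent (jammer) actions.
rng = np.random.default_rng(0)
Q = np.zeros((4, 3, 3))
s = 0
for _ in range(100):
    pi, _ = maximin_value(Q[s])
    a = rng.choice(3, p=pi / pi.sum())            # sample own action from maximin policy
    o = rng.integers(3)                           # opponent action (assumed observable)
    r = 1.0 if a != o else -1.0                   # placeholder reward: collision-free transmit
    s_next = rng.integers(4)                      # placeholder channel-state transition
    minimax_q_update(Q, s, a, o, r, s_next)
    s = s_next
```

Under these assumptions, Nash-Q and Friend-or-foe Q-learning would differ mainly in how the value of the next state is computed from Q(s', ., .) (a Nash equilibrium value or a friend/foe maximin), with the same tabular update structure.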
Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:12561370
Collections
- FAS Scholarly Articles