Distributed Q-Learning for Aggregated Interference Control in Cognitive Radio Networks

Abstract
This paper deals with the problem of aggregated interference generated by multiple cognitive radios (CRs) at the receivers of primary (licensed) users. In particular, we consider a secondary CR system based on the IEEE 802.22 standard for wireless regional area networks (WRANs), and we model it as a multiagent system in which the agents are the secondary base stations in charge of controlling the secondary cells. We propose a form of real-time multiagent reinforcement learning, known as decentralized Q-learning, to manage the aggregated interference generated by multiple WRAN systems. We consider both complete and partial information about the environment. By directly interacting with the surrounding environment in a distributed fashion, the multiagent system learns, in the first case, an efficient policy to solve the problem and, in the second case, a reasonably good suboptimal policy. We also discuss computational and memory requirements, comparing two options for uploading and processing the learning information. Simulation results, presented for both the upstream and downstream cases, reveal that the proposed approach is able to fulfill the primary-user interference constraints without introducing signaling overhead in the system.
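To make the decentralized Q-learning idea concrete, the sketch below shows one secondary base station acting as an independent tabular Q-learner. The state and action spaces (e.g., transmit-power levels) and the reward signal are illustrative placeholders chosen for this example, not the paper's exact formulation; each agent simply applies the standard one-step Q-learning update using only its own local observations, so no inter-agent signaling is needed.

```python
import random
from collections import defaultdict


class DecentralizedQAgent:
    """One secondary base station acting as an independent Q-learner.

    NOTE: the state encoding, action set, and reward used with this class
    are illustrative assumptions, not the formulation from the paper.
    """

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_actions = n_actions
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # Q-table: maps a (hashable) local state to a list of action values.
        self.q = defaultdict(lambda: [0.0] * n_actions)

    def act(self, state):
        # Epsilon-greedy selection over, e.g., discrete transmit-power levels.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update, driven only by local feedback
        # (e.g., reward = +1 if the primary-user interference constraint is
        # met, -1 otherwise -- a hypothetical choice for this sketch).
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

In a multiagent deployment, each secondary cell would instantiate its own agent and run act/update in its control loop; the agents interact only through the shared radio environment, which is what keeps the scheme free of explicit signaling overhead.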