Journal article in Wireless Personal Communications, 2021

Distributed Competitive Decision Making Using Multi-Armed Bandit Algorithms

Abstract

This paper tackles the problem of Opportunistic Spectrum Access (OSA) in Cognitive Radio (CR). The main challenge for a Secondary User (SU) in OSA is to learn the availability of the existing channels in order to select and access the one with the highest vacancy probability. To reach this goal, we propose a novel Multi-Armed Bandit (MAB) algorithm, called ϵ-UCB, that enhances the spectrum learning of a SU and decreases the regret, i.e. the reward lost by selecting suboptimal channels. We corroborate through simulations that the regret of the proposed algorithm grows logarithmically. This means that, within a finite number of time slots, the SU can estimate the vacancy probabilities of the targeted channels and select the best one for transmission. We then extend ϵ-UCB to handle multiple priority users, where each SU selfishly estimates and accesses the channels according to its priority rank. The simulation results show the superiority of the proposed algorithms, in both the single-user and multi-user cases, over existing MAB algorithms.
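For intuition, the sketch below simulates a single SU learning channel vacancy probabilities with a generic UCB1-style index. It is not the paper's ϵ-UCB rule (whose index is not given in this abstract), and the vacancy probabilities are hypothetical values chosen for the demo; it only illustrates the learn-and-exploit loop and the regret measure described above.

```python
# Illustrative sketch only: a generic UCB1-style learner for OSA channel selection.
# The paper's epsilon-UCB index is not specified in this abstract, so UCB1 stands in;
# the channel vacancy probabilities below are hypothetical demo values.
import math
import random

def simulate_osa(vacancy_probs, horizon=10_000, seed=0):
    rng = random.Random(seed)
    k = len(vacancy_probs)
    counts = [0] * k          # times each channel has been sensed
    means = [0.0] * k         # empirical vacancy probability per channel
    regret = 0.0
    best = max(vacancy_probs)

    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # sense every channel once to initialize
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(range(k),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < vacancy_probs[arm] else 0.0  # 1 = channel free
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        regret += best - vacancy_probs[arm]   # expected loss vs. the best channel
    return means, regret

if __name__ == "__main__":
    est, reg = simulate_osa([0.3, 0.5, 0.8, 0.6])
    print("estimated vacancy probabilities:", [round(m, 3) for m in est])
    print("cumulative regret:", round(reg, 2))
```

With a UCB-type index the cumulative regret printed above grows roughly logarithmically in the horizon, which is the behavior the abstract reports for ϵ-UCB.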
Main file: PersWireless.pdf (834.06 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03151936, version 1 (15-03-2021)

Identifiers

Cite

Mahmoud Almasri, Ali Mansour, Christophe Moy, Ammar Assoum, Denis Le Jeune, et al. Distributed Competitive Decision Making Using Multi-Armed Bandit Algorithms. Wireless Personal Communications, 2021, 118 (2), pp.1165-1188. ⟨10.1007/s11277-020-08064-w⟩. ⟨hal-03151936⟩
