Sparse stochastic bandits - INRAE - Institut national de recherche pour l’agriculture, l’alimentation et l’environnement
Conference Papers Year : 2017

Sparse stochastic bandits

Joon Kwon, Vianney Perchet, Claire Vernade

Abstract

In the classical multi-armed bandit problem, d arms are available to the decision maker, who pulls them sequentially in order to maximize their cumulative reward. Guarantees can be obtained on a relative quantity called regret, which scales linearly with d (or with √d in the minimax sense). We consider here the sparse case of this classical problem, in the sense that only a small number of arms, namely s < d, have a positive expected reward. We leverage this additional assumption to provide an algorithm whose regret scales with s instead of d. Moreover, we prove that this algorithm is optimal by providing a matching lower bound — at least for a wide and pertinent range of parameters that we determine — and by evaluating its performance on simulated data.
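To make the setting concrete, here is a minimal simulation sketch of a sparse bandit instance: d Bernoulli arms of which only s have a positive mean, played by the classical UCB1 index policy. This is purely illustrative — it is not the paper's algorithm, and the instance parameters (`means`, `horizon`) are arbitrary choices for the example.

```python
import math
import random

def ucb1_pseudo_regret(means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms with the given means and return the
    cumulative pseudo-regret: sum over rounds of (best mean - pulled mean)."""
    rng = random.Random(seed)
    d = len(means)
    counts = [0] * d      # number of pulls per arm
    sums = [0.0] * d      # cumulative observed reward per arm
    best = max(means)
    pseudo_regret = 0.0
    for t in range(1, horizon + 1):
        if t <= d:
            arm = t - 1   # initialization: pull each arm once
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(range(d),
                      key=lambda i: sums[i] / counts[i]
                                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        pseudo_regret += best - means[arm]
    return pseudo_regret

# Example sparse instance: d = 10 arms, of which s = 2 are "good".
sparse_means = [0.6, 0.6] + [0.1] * 8
regret = ucb1_pseudo_regret(sparse_means, horizon=2000)
```

A sparsity-aware algorithm, as in the paper, aims for regret scaling with s = 2 rather than d = 10 on instances like this one.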

Dates and versions

hal-02734034, version 1 (02-06-2020)

Identifiers

  • HAL Id: hal-02734034, version 1
  • PRODINRA: 481343

Cite

Joon Kwon, Vianney Perchet, Claire Vernade. Sparse stochastic bandits. 2017 Conference on Learning Theory (COLT), Jul 2017, Amsterdam, Netherlands. ⟨hal-02734034⟩