Journal article — Journal of Machine Learning Research, 2016

Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case

Joon Kwon
Vianney Perchet
  • Role: Author

Abstract

We demonstrate that, in the classical non-stochastic regret minimization problem with d decisions, gains and losses to be respectively maximized or minimized are fundamentally different. Indeed, by considering the additional sparsity assumption (at each stage, at most s decisions incur a nonzero outcome), we derive optimal regret bounds of different orders. Specifically, with gains, we obtain an optimal regret guarantee after T stages of order √(T log s), so the classical dependency on the dimension is replaced by the sparsity size. With losses, we provide matching upper and lower bounds of order √(Ts log(d)/d), which is decreasing in d. Finally, we also study the bandit setting, and obtain an upper bound of order √(Ts log(d/s)) when outcomes are losses. This bound is proven to be optimal up to the logarithmic factor √(log(d/s)).
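For context, the full-information problem the abstract refers to is classically handled by the exponential-weights (Hedge) forecaster, whose regret is of order √(T log d) without any sparsity assumption; the paper's contribution is that sparsity improves this to √(T log s) for gains and √(Ts log(d)/d) for losses. Below is a minimal sketch of the classical Hedge baseline on losses in [0, 1] — the function name and the tuning η = √(8 log(d)/T) are illustrative choices, not taken from the paper:

```python
import numpy as np

def hedge_regret(losses, eta):
    """Run the exponential-weights (Hedge) forecaster on a T x d loss
    matrix and return its regret against the best fixed decision."""
    T, d = losses.shape
    log_w = np.zeros(d)              # log-weights of the d decisions
    alg_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                 # exponential-weights distribution
        alg_loss += p @ losses[t]    # expected loss at stage t
        log_w -= eta * losses[t]     # multiplicative update
    best_fixed = losses.sum(axis=0).min()
    return alg_loss - best_fixed

# Illustrative run on random losses in [0, 1].
rng = np.random.default_rng(0)
T, d = 1000, 10
losses = rng.random((T, d))
eta = np.sqrt(8 * np.log(d) / T)     # standard tuning for this horizon
regret = hedge_regret(losses, eta)
# Classical guarantee: regret <= sqrt(T * log(d) / 2) for losses in [0, 1].
```

The sparse algorithms in the paper refine this scheme so that the √(log d) factor is driven by the sparsity level s rather than the full dimension d.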
Main file
15-503.pdf (491.07 KB) — Download the file
Origin: Publisher files allowed on an open archive
Licence: CC BY — Attribution

Dates and versions

hal-04217305 , version 1 (25-09-2023)

Licence

CC BY — Attribution

Identifiers

  • HAL Id : hal-04217305 , version 1
  • ARXIV : 1511.08405
  • WOS : 000391916200001

Cite

Joon Kwon, Vianney Perchet. Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case. Journal of Machine Learning Research, 2016, 17 (229), pp.1-32. ⟨hal-04217305⟩