INRAE - Institut national de recherche pour l'agriculture, l'alimentation et l'environnement
Journal Articles - Journal of Machine Learning Research - Year: 2016

Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case

Joon Kwon, Vianney Perchet

Abstract

We demonstrate that, in the classical non-stochastic regret minimization problem with d decisions, gains and losses to be respectively maximized or minimized are fundamentally different. Indeed, by considering the additional sparsity assumption (at each stage, at most s decisions incur a nonzero outcome), we derive optimal regret bounds of different orders. Specifically, with gains, we obtain an optimal regret guarantee after T stages of order √(T log s), so the classical dependency on the dimension is replaced by the sparsity size. With losses, we provide matching upper and lower bounds of order √(Ts log(d)/d), which is decreasing in d. Finally, we also study the bandit setting and obtain an upper bound of order √(Ts log(d/s)) when outcomes are losses. This bound is proven to be optimal up to the logarithmic factor √(log(d/s)).
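For context, the classical full-information baseline that the paper's sparse bounds improve upon is the exponential-weights (Hedge) algorithm, whose regret is of order √(T log d). The sketch below is not the paper's algorithm: it is a minimal, assumed illustration of Hedge run on s-sparse losses (at each stage, at most s of the d decisions incur a nonzero loss), with an illustrative learning rate η = √(log(d)/T); all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d, s, T = 20, 3, 5000           # decisions, sparsity level, horizon (hypothetical values)
eta = np.sqrt(np.log(d) / T)    # classical Hedge learning rate

# s-sparse losses in [0, 1]: at each stage, at most s decisions incur a nonzero loss
losses = np.zeros((T, d))
for t in range(T):
    idx = rng.choice(d, size=s, replace=False)
    losses[t, idx] = rng.random(s)

log_weights = np.zeros(d)       # cumulative negated, scaled losses
cum_loss = 0.0
for t in range(T):
    p = np.exp(log_weights - log_weights.max())   # stabilized softmax
    p /= p.sum()
    cum_loss += p @ losses[t]                     # expected loss of the randomized play
    log_weights -= eta * losses[t]

best = losses.sum(axis=0).min()                   # loss of the best fixed decision
regret = cum_loss - best
print(regret, np.sqrt(T * np.log(d)))             # regret vs. the classical sqrt(T log d) scale
```

The paper's point is that sparsity changes this picture: with gains the log d factor improves to log s, and with losses the bound even decreases with the dimension d.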
Main file: 15-503.pdf (491.07 KB)
Origin: Publisher files allowed on an open archive
Licence: CC BY - Attribution

Dates and versions

hal-04217305 , version 1 (25-09-2023)


Identifiers

  • HAL Id : hal-04217305 , version 1
  • ARXIV : 1511.08405
  • WOS : 000391916200001

Cite

Joon Kwon, Vianney Perchet. Gains and Losses are Fundamentally Different in Regret Minimization: The Sparse Case. Journal of Machine Learning Research, 2016, 17 (229), pp.1-32. ⟨hal-04217305⟩