June 24, 2019
We propose an online algorithm for cumulative regret minimization in a stochastic multi-armed bandit. The algorithm adds O(t) i.i.d. pseudo-rewards to its history in round t and then pulls the arm with the highest average reward in its perturbed history. Therefore, we call it perturbed-history exploration (PHE). The pseudo-rewards are designed to offset the underestimated mean rewards of arms in round t with high probability. We analyze PHE in a K-armed bandit and derive both O(K \Delta^{-1} \log n) and O(\sqrt{K n \log n}) bounds on its n-round regret, where \Delta denotes the minimum gap between the mean rewards of the optimal and suboptimal arms. The key to our analysis is a novel argument showing that randomized Bernoulli rewards lead to optimism. We empirically compare PHE to several baselines and show that it is competitive with the best of them.
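To make the mechanism concrete, here is a minimal sketch of PHE on a Bernoulli bandit. It is illustrative only, not the paper's pseudocode: the function name phe_bernoulli, the use of NumPy, and the perturbation scale a (number of Bernoulli(1/2) pseudo-rewards per observed reward) are assumptions made for this example.

```python
import numpy as np


def phe_bernoulli(means, n, a=2.0, seed=0):
    """Illustrative sketch of perturbed-history exploration (PHE) on a
    Bernoulli bandit. In each round, roughly a * (pulls of an arm)
    i.i.d. Bernoulli(1/2) pseudo-rewards are added to that arm's history,
    and the arm with the highest perturbed average reward is pulled.
    Returns the cumulative regret over n rounds."""
    rng = np.random.default_rng(seed)
    K = len(means)
    pulls = np.zeros(K, dtype=int)   # number of times each arm was pulled
    rewards = np.zeros(K)            # cumulative observed reward per arm
    best = max(means)
    regret = 0.0
    for t in range(n):
        if t < K:
            arm = t                  # pull each arm once to initialize
        else:
            # Perturb each arm's history with ceil(a * pulls) Bernoulli(1/2) pseudo-rewards.
            pseudo_counts = np.ceil(a * pulls).astype(int)
            pseudo_rewards = rng.binomial(pseudo_counts, 0.5)
            perturbed_mean = (rewards + pseudo_rewards) / (pulls + pseudo_counts)
            arm = int(np.argmax(perturbed_mean))
        r = rng.binomial(1, means[arm])  # observe a Bernoulli reward
        pulls[arm] += 1
        rewards[arm] += r
        regret += best - means[arm]
    return regret


# Example: 3-armed bandit with a small gap, 10,000 rounds.
print(phe_bernoulli([0.5, 0.45, 0.4], n=10_000))
```

The perturbation scale a controls how aggressively the history is randomized; larger values yield more exploration, mirroring the trade-off analyzed in the paper.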
Written by
Mohammad Ghavamzadeh
Branislav Kveton
Craig Boutilier
Csaba Szepesvari
Publisher
IJCAI