Vianney Perchet

Tutorial on 15/6/15 and 16/6/15:

Online Learning, Individual Sequences and Game Theory: Beyond Regret Minimization

I will first briefly consider the framework of games with imperfect
information (à la Aumann-Maschler) in order to introduce and motivate
the main online-learning setup that will be developed in this tutorial,
namely Blackwell approachability. This generic tool is used in
game theory to derive optimal strategies for the uninformed player and
in machine learning as a black-box tool that is more general
than regret minimization.
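
For reference, Blackwell's characterization for convex target sets can be stated as follows (a standard statement recalled here for context, not quoted from the abstract; Delta(I) and Delta(J) denote the players' mixed actions and g the vector-valued payoff):

A closed convex set C \subseteq \mathbb{R}^d is approachable by player 1 in a repeated game with vector payoffs g(i,j) \in \mathbb{R}^d if and only if

    \forall q \in \Delta(J),\ \exists p \in \Delta(I):\quad \mathbb{E}_{i \sim p,\, j \sim q}\,[\, g(i,j) \,] \in C,

in which case player 1 has a strategy guaranteeing that the average payoff \frac{1}{T}\sum_{t \le T} g(i_t, j_t) converges to C almost surely, whatever player 2 does.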

I will then show how to use Blackwell approachability to
derive no-regret and calibrated algorithms (calibration being another
popular objective in online learning). Finally, we shall see how the
former can actually be used to find optimal strategies in zero-sum
games and the latter to compute (epsilon-)Nash equilibria in general
games.

The tutorial will mostly be based on the paper "Approachability,
Regret and Calibration: Implications and Equivalences", Journal of
Dynamics and Games, 1:181-254, 2014.
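
To make the approachability-to-regret reduction concrete, here is a minimal illustrative sketch (not taken from the paper; the function name and toy data are hypothetical): the vector of cumulative regrets is steered toward the negative orthant by playing each action with probability proportional to its positive cumulative regret, the classical "regret matching" strategy.

import numpy as np

def regret_matching(loss_matrix, seed=0):
    # Approachability-style no-regret play: keep the cumulative regret
    # vector close to the negative orthant by putting probability mass
    # on actions in proportion to their positive cumulative regret.
    T, K = loss_matrix.shape              # T rounds, K actions (toy data)
    cum_regret = np.zeros(K)
    total_loss = 0.0
    rng = np.random.default_rng(seed)
    for t in range(T):
        pos = np.maximum(cum_regret, 0.0)
        p = pos / pos.sum() if pos.sum() > 0 else np.full(K, 1.0 / K)
        a = rng.choice(K, p=p)
        losses = loss_matrix[t]           # losses chosen by the environment
        total_loss += losses[a]
        cum_regret += losses[a] - losses  # regret against each fixed action
    best_fixed = loss_matrix.sum(axis=0).min()
    return (total_loss - best_fixed) / T  # average external regret

# Toy usage: on random losses the average regret should vanish as T grows.
print(regret_matching(np.random.default_rng(1).uniform(size=(5000, 3))))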

Talk on 17/6/15:

From bandits to ethical clinical trials; optimal sample size for multi-stage problems

Motivated by practical applications, chiefly clinical trials, we study
the regret achievable for stochastic multi-armed bandits under the
constraint that the employed policy must split trials into a small
number of batches. Our results show that a very small number of
batches already gives close to minimax-optimal regret bounds, and we
also evaluate the number of trials in each batch. As a byproduct, we
derive optimal policies with low switching cost for stochastic
bandits.
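
As an illustration of the batching constraint (a generic sketch assuming Bernoulli rewards, not the specific policy analyzed in the talk): arms are pulled equally often within a batch and eliminated only at batch boundaries, so the policy adapts, and switches, only a handful of times.

import numpy as np

def batched_elimination(arm_means, T, num_batches, delta=0.05, seed=0):
    # Generic batched policy: within a batch, pull every surviving arm
    # equally often; between batches, drop arms whose upper confidence
    # bound falls below the best lower confidence bound.
    rng = np.random.default_rng(seed)
    K = len(arm_means)
    surviving = list(range(K))
    pulls = np.zeros(K)
    means = np.zeros(K)
    batch_ends = np.linspace(T / num_batches, T, num_batches).astype(int)
    t = 0
    for end in batch_ends:
        per_arm = max((end - t) // len(surviving), 1)
        for i in surviving:
            for _ in range(per_arm):
                r = rng.binomial(1, arm_means[i])  # Bernoulli reward (assumption)
                means[i] = (means[i] * pulls[i] + r) / (pulls[i] + 1)
                pulls[i] += 1
        t = end
        radius = np.sqrt(np.log(2 * K * num_batches / delta) / (2 * np.maximum(pulls, 1)))
        best_lcb = max(means[i] - radius[i] for i in surviving)
        surviving = [i for i in surviving if means[i] + radius[i] >= best_lcb]
    return surviving, pulls

# Toy usage: with 4 batches the suboptimal arm is typically discarded early.
print(batched_elimination([0.5, 0.6], T=10_000, num_batches=4))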

* Slides *

 
