Research Post
In this paper, we investigate Follow the Regularized Leader (FTRL) dynamics in sequential imperfect information games (IIGs). We generalize existing Poincaré recurrence results from normal-form games to zero-sum two-player IIGs and other sequential game settings. We then investigate how adapting the game's reward (by adding a regularization term) can give strong convergence guarantees in monotone games, and show how this reward-adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium. Finally, we show how these insights can be used directly to build state-of-the-art model-free algorithms for zero-sum two-player IIGs.
Feb 24th 2022
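The abstract above contrasts two behaviours of FTRL in zero-sum games: with the plain reward the dynamics recur (cycle) around the equilibrium, while adapting the reward with a regularization term makes them settle. A minimal sketch of that idea, assuming an entropy regularizer (multiplicative weights) on the Matching Pennies matrix game — this is an illustration of the general technique, not the paper's actual algorithm:

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): FTRL with an entropy
# regularizer, i.e. multiplicative weights, on the zero-sum matrix game
# Matching Pennies. With the plain reward the strategies cycle around
# the Nash equilibrium; adapting the reward with a regularization term
# (here, tau times the negative log of the current policy) makes the
# dynamics settle at the regularized equilibrium, which for this
# symmetric game is the uniform strategy (0.5, 0.5).

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])  # row player's payoff matrix

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def ftrl(steps, eta=0.1, tau=0.0):
    """Simultaneous FTRL; tau > 0 turns on reward regularization."""
    x = np.array([0.9, 0.1])  # row strategy, started off-equilibrium
    y = np.array([0.9, 0.1])  # column strategy
    gx = np.zeros(2)          # cumulative rewards per action (row)
    gy = np.zeros(2)          # cumulative rewards per action (column)
    for _ in range(steps):
        rx = A @ y            # row player's expected action rewards
        ry = -A.T @ x         # column player's rewards (zero-sum)
        if tau > 0:           # reward adaptation: entropy penalty
            rx = rx - tau * np.log(x)
            ry = ry - tau * np.log(y)
        gx += rx
        gy += ry
        x = softmax(eta * gx)
        y = softmax(eta * gy)
    return x, y

x_plain, _ = ftrl(5000, tau=0.0)  # cycles, does not settle down
x_reg, _ = ftrl(5000, tau=0.5)    # settles near the uniform strategy
```

The parameter values (`eta`, `tau`, the starting strategies) are arbitrary choices for the demo; the qualitative gap between the two runs is the point.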
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Neil Burch and Michael Bowling: Rethinking formal models of partially observable multiagent decision making
Dec 6th 2021
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Neil Burch and Michael Bowling: Player of Games