Research Post
Extensive-form games (EFGs) are a common model of multi-agent interactions with imperfect information. State-of-the-art algorithms for solving these games typically perform full walks of the game tree that can prove prohibitively slow in large games. Alternatively, sampling-based methods such as Monte Carlo Counterfactual Regret Minimization walk one or more trajectories through the tree, touching only a fraction of the nodes on each iteration, at the expense of requiring more iterations to converge due to the variance of sampled values. In this paper, we extend recent work that uses baseline estimates to reduce this variance. We introduce a framework of baseline-corrected values in EFGs that generalizes the previous work. Within our framework, we propose new baseline functions that result in significantly reduced variance compared to existing techniques. We show that one particular choice of such a function — predictive baseline — is provably optimal under certain sampling schemes. This allows for efficient computation of zero-variance value estimates even along sampled trajectories.
Feb 24th 2022
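The abstract above centres on baseline-corrected value estimates, which act as control variates for sampled returns. As a rough illustration only (a minimal sketch under assumed interfaces, not the paper's implementation, with every name hypothetical), the following Python fragment shows how a baseline can be folded into a single sampled-trajectory estimate:

```python
# Minimal sketch of a baseline-corrected (control-variate) value estimate at one
# decision point. All names here are hypothetical, not the paper's API.

def baseline_corrected_value(strategy, baseline, sampled_action, sample_prob,
                             sampled_child_value):
    """Unbiased estimate of a node's value from a single sampled action.

    strategy:            dict mapping each action to its policy probability
    baseline:            dict mapping each action to a baseline estimate b(h, a)
    sampled_action:      the one action actually walked on this trajectory
    sample_prob:         probability with which sampled_action was sampled
    sampled_child_value: noisy estimated value of the sampled child node
    """
    action_values = {}
    for action, b in baseline.items():
        if action == sampled_action:
            # Importance-corrected term: unbiased for any baseline, and
            # zero-variance when the baseline equals the true child value.
            action_values[action] = b + (sampled_child_value - b) / sample_prob
        else:
            # Unsampled actions contribute only their baseline value.
            action_values[action] = b
    # Node value is the strategy-weighted sum of corrected action values.
    return sum(p * action_values[a] for a, p in strategy.items())
```

The closer the baseline tracks the true child values, the smaller the variance of this estimate; with a perfect baseline the correction term vanishes, which is the zero-variance case the abstract refers to.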
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Neil Burch and Michael Bowling: Rethinking formal models of partially observable multiagent decision making
Dec 6th 2021
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Neil Burch and Michael Bowling: Player of Games