Alberta Machine Intelligence Institute

DeepStack: First AI to Outplay Human Poker Pros

Published

May 5, 2017

Categories

Featured, Updates

AI Application

AI in Gaming & Game Theory, Deep Learning (DL), Machine Learning (ML)

Amii researchers produce the first AI to outplay human pros at Heads-Up No-Limit Poker

Overview

DeepStack bridges the gap between AI techniques for games of perfect information – like checkers, chess and Go – and those for imperfect information games – like poker – allowing it to reason while it plays. It uses “intuition” honed through deep learning to reassess its strategy with each decision.

In a study completed in December 2016 and involving 44,000 hands of poker, DeepStack defeated 11 professional poker players; all but one of those individual results were statistically significant. Over all games played, DeepStack won at 49 big blinds per 100 hands (bb/100, a standard measure of poker win rate; for scale, a strategy of always folding loses only 75 bb/100), more than four standard deviations from zero, making it the first computer program to beat professional poker players in heads-up no-limit Texas hold'em poker. The DeepStack paper was published in the May 2017 issue of Science.
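To make the reported numbers concrete, here is a small sketch of how a bb/100 win rate and its distance from zero in standard deviations might be computed. The data and function names are purely illustrative – this is not DeepStack's raw match data or evaluation code:

```python
import math

def bb_per_100(total_won_bb, hands):
    """Win rate expressed in big blinds per 100 hands (bb/100)."""
    return 100.0 * total_won_bb / hands

def std_devs_from_zero(per_hand_results):
    """How many standard errors the mean per-hand result sits above zero."""
    n = len(per_hand_results)
    mean = sum(per_hand_results) / n
    var = sum((x - mean) ** 2 for x in per_hand_results) / (n - 1)
    return mean / math.sqrt(var / n)
```

A player who nets 490 big blinds over 1,000 hands is winning at 49 bb/100; the second function is the usual check that a mean win rate is genuine edge rather than variance.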

About the Algorithm

Imperfect information games (or games with hidden information) provide a general mathematical model that describes how decision-makers interact. AI research has a long history of using parlour games to study these models, but attention has been focused primarily on perfect information games, like checkers, chess or Go, where all information about the game is accessible to all players. Poker is the quintessential game of imperfect information: each player holds information – their private cards – that the other cannot see.

Until now, competitive AI approaches to imperfect information games have typically reasoned about the entire game, producing a complete strategy prior to play. However, to make this approach feasible in heads-up no-limit Texas hold’em—a game with vastly more unique situations than there are atoms in the universe—a simplified abstraction of the game is often needed.

A Fundamentally Different Approach

DeepStack is the first theoretically sound application of heuristic search methods – which have been famously successful in games like checkers, chess, and Go – to imperfect information games.

At the heart of DeepStack is continual re-solving, a sound local strategy computation that only considers situations as they arise during play. This lets DeepStack avoid computing a complete strategy in advance, skirting the need for explicit abstraction. Instead of maintaining a strategy for the full game, DeepStack computes one from the current state of the game for only the remainder of the hand, which leads to lower overall exploitability.

During re-solving, DeepStack doesn’t need to reason about the entire remainder of the game because it substitutes computation beyond a certain depth with a fast approximate estimate, also called DeepStack’s "intuition" – a gut feeling of the value of holding any possible private cards in any possible poker situation. Much like human intuition, DeepStack’s “intuition” needs to be trained. We train it with deep learning using examples generated from random poker situations (more games than have been played in the history of humankind). Finally, DeepStack relies on sparse look-ahead trees, in which it considers a reduced number of actions, allowing it to play at conventional human speeds. The system re-solves games in under five seconds using a simple gaming laptop with an Nvidia GPU.
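The depth-limit and sparse-action ideas can be sketched as follows. This is a deliberately simplified, perfect-information-style illustration: the real system runs counterfactual regret minimization over card ranges at every node rather than taking a max over actions, and every name here (ToyGame, SPARSE_ACTIONS, value_net) is hypothetical rather than taken from DeepStack's code:

```python
SPARSE_ACTIONS = ("fold", "call", "pot_bet")  # reduced action menu keeps the tree sparse

class ToyGame:
    """Stand-in game: a state is (pot, done); betting grows the pot."""
    def is_terminal(self, state):
        return state[1]
    def utility(self, state):
        return state[0]
    def apply(self, state, action):
        pot, _ = state
        if action == "fold":
            return (0, True)
        if action == "call":
            return (pot + 1, True)
        return (pot * 2, False)  # pot_bet: pot doubles, play continues

def lookahead_value(game, state, depth, value_net):
    """Search only a few actions deep, then substitute the learned estimate."""
    if game.is_terminal(state):
        return game.utility(state)
    if depth == 0:
        return value_net(state)  # "intuition" beyond the search horizon
    return max(
        lookahead_value(game, game.apply(state, a), depth - 1, value_net)
        for a in SPARSE_ACTIONS
    )
```

The structure to note is the two cut-offs: the tree is kept narrow by the fixed sparse action menu, and kept shallow by replacing everything past the depth limit with one call to the value estimate.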

Heuristic Search

At a conceptual level, DeepStack’s continual re-solving, “intuitive” local search and sparse lookahead trees describe heuristic search, which is responsible for many AI successes in perfect information games. Until DeepStack, no theoretically sound application of heuristic search was known in imperfect information games.

Despite using ideas from abstraction, DeepStack is fundamentally different from abstraction-based approaches, which compute and store a strategy prior to play. While DeepStack restricts the number of actions in its lookahead trees, it has no need for explicit abstraction as each re-solve starts from the actual public state, meaning DeepStack always perfectly understands the current situation.

DeepStack is theoretically sound, produces strategies substantially more difficult to exploit than abstraction-based techniques and defeats professional poker players at heads-up no-limit poker with statistical significance.

Testing & Evaluation

Professional Matches

We evaluated DeepStack by playing it against a pool of professional poker players recruited by the International Federation of Poker. In total, 44,852 games were played by 33 players from 17 countries. Eleven players completed the requested 3,000 games, with DeepStack beating all but one by a statistically significant margin. Over all games played, DeepStack's win rate was more than four standard deviations from zero.

Low-variance Evaluation

The performance of DeepStack and its opponents was evaluated using AIVAT, a provably unbiased low-variance technique based on carefully constructed control variates. Thanks to this technique, which gives an unbiased performance estimate with an 85% reduction in standard deviation, we can show statistical significance in matches with as few as 3,000 games.
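The control-variate idea behind AIVAT can be illustrated with a toy sketch. This is not the actual AIVAT estimator, which is considerably more involved; it only shows the core trick: subtract from each noisy outcome a correlated "luck" term whose expectation is known, which leaves the mean unchanged while shrinking the variance. All data here is synthetic:

```python
import random
import statistics

def corrected(outcomes, baselines, baseline_mean):
    """Unbiased estimates: subtract a correlated baseline re-centred on its
    known expectation, preserving the mean but reducing the variance."""
    return [x - (b - baseline_mean) for x, b in zip(outcomes, baselines)]

# Toy model: each hand's result is a small skill edge buried in card luck.
random.seed(0)
skill = 0.5
luck = [random.gauss(0.0, 10.0) for _ in range(1000)]  # luck has known mean 0
raw = [skill + l for l in luck]          # raw results: the edge is invisible
adjusted = corrected(raw, luck, 0.0)     # luck cancelled, edge exposed
```

In this idealized toy the baseline captures the luck exactly, so the adjusted results collapse to the skill edge; in practice AIVAT's control variates are imperfect but still deliver the roughly 85% reduction in standard deviation described above.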

Acknowledgements

DeepStack was jointly developed by an international team from Charles University, the Czech Technical University – both in Prague, Czech Republic – and the University of Alberta in Edmonton, Canada.

DeepStack was developed by Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling.

The researchers would like to thank the professional players who committed valuable time to play DeepStack as well as our many reviewers and our families & friends.

Our research is supported by the International Federation of Poker, IBM, the Alberta Machine Intelligence Institute, the Natural Sciences and Engineering Research Council of Canada and the Charles University Grant Agency.

DeepStack was possible thanks to computing resources provided by Compute Canada and Calcul Québec.


Authors

Spencer Murray
