Alberta Machine Intelligence Institute

Published Aug 14, 2020

Amii Fellows have made many incredible contributions to the Games & Game Theory research area, which focuses on understanding and optimizing strategic interactions between individuals within an environment. It covers both human interactions and game-playing AI.

Popular Science, the quarterly magazine that has been a leading source of science and technology news since its inception in 1872, recently shone a light on game-playing AI in their article How computers beat us at our own games. Published in the Summer 2020 issue, the article takes a look at some major contributions to the field by researchers at Amii and the University of Alberta:

  • Chinook solves Checkers: Named one of Science magazine’s top discoveries of 2007, a team led by Amii co-founder Jonathan Schaeffer produced Chinook, a Checkers-playing AI that cannot be beaten: against its perfect play, the best an opponent can achieve is a draw.

  • DeepStack achieves expert-level play at Heads-Up No-Limit Poker: In a study completed in December 2016 and published in Science in March 2017, a team led by Amii Fellow Michael Bowling developed DeepStack, the first AI to beat professional poker players at heads-up no-limit Texas hold’em poker.

In fact, Amii and the University of Alberta have had a hand in each of the advancements highlighted on the list. UAlberta alumnus Murray Campbell was part of the team at IBM that developed Deep Blue, and Amii researchers are responsible for advancements that have been foundational to other breakthroughs in games: 

  • The UCT algorithm: In 2006, Amii Fellow Csaba Szepesvári co-developed the UCT algorithm, a foundational machine learning algorithm at the heart of many recent advancements in games research, including AlphaGo.
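At its core, UCT (Upper Confidence Bounds applied to Trees) guides a tree search by balancing exploitation of moves that have scored well so far against exploration of moves tried less often. The sketch below, which is illustrative only (the function names and the exploration constant `c` are assumptions, not part of any published implementation), shows the selection rule at the heart of the algorithm:

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.41):
    """UCT score for one child node: average value plus an exploration
    bonus that shrinks as the child is visited more often."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    exploitation = total_value / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration

def select_child(children):
    """Pick the index of the child maximizing the UCT score.
    `children` is a list of (total_value, visits) pairs under one parent."""
    parent_visits = sum(v for _, v in children)
    scores = [uct_score(t, v, parent_visits) for t, v in children]
    return scores.index(max(scores))
```

For example, a rarely visited child with a modest average value can still outscore a frequently visited strong one, which is how the search keeps probing alternatives instead of committing too early to a single line of play.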

  • The Arcade Learning Environment (ALE): A team of researchers led by Michael Bowling issued a new challenge to the AI community by launching the Arcade Learning Environment in 2013, a software platform for evaluating the general competence of AI algorithms. It was instrumental in establishing the subfield of Deep Reinforcement Learning.

You can read the Popular Science article on their website or pick up a paper copy of their Summer 2020 issue: Play On.
