Research Post
We introduce the partially observable history process (POHP) formalism for reinforcement learning. POHP centers on the actions and observations of a single agent and abstracts away the presence of other players without reducing them to stochastic processes. Our formalism provides a streamlined interface for designing algorithms that defy categorization as exclusively single- or multi-agent, and for developing theory that applies across these domains. We show how the POHP formalism unifies traditional models, including the Markov decision process, the Markov game, the extensive-form game, and their partially observable extensions, without introducing burdensome technical machinery or violating the philosophical underpinnings of reinforcement learning. We illustrate the utility of our formalism by concisely exploring observable sequential rationality, examining some theoretical properties of general immediate regret minimization, and generalizing the extensive-form regret minimization (EFR) algorithm.
Feb 1st 2022
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Neil Burch and Michael Bowling: Rethinking formal models of partially observable multiagent decision making
Dec 6th 2021
Research Post
Read this research paper, co-authored by Amii Fellows and Canada CIFAR AI Chairs Neil Burch and Michael Bowling: Player of Games
Nov 13th 2021