Research Post
We study reinforcement learning (RL) with linear function approximation where the underlying transition probability kernel of the Markov decision process (MDP) is a linear mixture model (Jia et al., 2020; Ayoub et al., 2020; Zhou et al., 2020) and the learning agent has access to either an integration or a sampling oracle of the individual basis kernels. We propose a new Bernstein-type concentration inequality for self-normalized martingales for linear bandit problems with bounded noise. Based on the new inequality, we propose a new, computationally efficient algorithm with linear function approximation named UCRL-VTR+ for the aforementioned linear mixture MDPs in the episodic undiscounted setting. We show that UCRL-VTR+ attains an Õ(dH√T) regret, where d is the dimension of the feature mapping, H is the length of the episode and T is the number of interactions with the MDP. We also prove a matching lower bound Ω(dH√T) for this setting, which shows that UCRL-VTR+ is minimax optimal up to logarithmic factors. In addition, we propose the UCLK+ algorithm for the same family of MDPs under discounting and show that it attains an Õ(d√T/(1−γ)^1.5) regret, where γ ∈ [0,1) is the discount factor. Our upper bound matches the lower bound Ω(d√T/(1−γ)^1.5) proved by Zhou et al. (2020) up to logarithmic factors, suggesting that UCLK+ is nearly minimax optimal. To the best of our knowledge, these are the first computationally efficient, nearly minimax optimal algorithms for RL with linear function approximation.
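For readers unfamiliar with the setting, here is a minimal sketch of the linear mixture MDP assumption behind these bounds, written in notation that is standard for this line of work (the symbols φ, θ*, and P_i below follow common usage and are not quoted verbatim from the post):

\[
  \mathbb{P}(s' \mid s, a)
  = \sum_{i=1}^{d} \theta^{*}_{i}\, P_{i}(s' \mid s, a)
  = \big\langle \phi(s' \mid s, a),\, \theta^{*} \big\rangle,
  \qquad
  \phi(s' \mid s, a) := \big( P_{1}(s' \mid s, a), \dots, P_{d}(s' \mid s, a) \big),
\]

where the basis kernels \(P_{1}, \dots, P_{d}\) are known to the agent (through an integration or sampling oracle) and only the weight vector \(\theta^{*} \in \mathbb{R}^{d}\) is unknown. Under this assumption, the \(\tilde{O}(dH\sqrt{T})\) upper bound of UCRL-VTR+ matches the \(\Omega(dH\sqrt{T})\) lower bound up to logarithmic factors.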
Feb 1st 2023
Research Post
Read this research paper, co-authored by Fellow & Canada CIFAR AI Chair Russ Greiner: Towards artificial intelligence-based learning health system for population-level mortality prediction using electrocardiograms
Jan 31st 2023