Research Post
The construction by Du et al. (2019) implies that even if a learner is given linear features in ℝ^d that approximate the rewards in a bandit with a uniform error of ϵ, then searching for an action that is optimal up to O(ϵ) requires examining essentially all actions. We use the Kiefer-Wolfowitz theorem to prove a positive result: by checking only a few actions, a learner can always find an action that is suboptimal by at most O(ϵ√d). Thus, features are useful when the approximation error is small relative to the dimensionality of the features. The idea is applied to stochastic bandits and to reinforcement learning with a generative model, where the learner has access to d-dimensional linear features that approximate the action-value functions of all policies to an accuracy of ϵ. For linear bandits, we prove a regret bound of order √(dn log(k)) + ϵn√d log(n), with k the number of actions and n the horizon. For RL, we show that approximate policy iteration can learn a policy that is optimal up to an additive error of order ϵ√d/(1−γ)^2 using d/(ϵ^2(1−γ)^4) samples from a generative model. These bounds are independent of the finer details of the features. We also investigate how the structure of the feature set impacts the tradeoff between sample complexity and estimation error.
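The positive result hinges on a Kiefer-Wolfowitz-style argument: a near-G-optimal design over the feature vectors is supported on a small core set of actions, and a weighted least-squares fit on rewards observed only at those actions predicts every action's reward to within roughly ϵ√d, so the greedy action with respect to the fit is O(ϵ√d)-optimal. The Python sketch below illustrates that idea on a synthetic misspecified bandit; it is not the paper's code, and the Frank-Wolfe design routine, the thresholds, and the synthetic reward model are all illustrative assumptions.

```python
import numpy as np

def g_optimal_design(A, iters=300, tol=1e-3, rng=None):
    """Frank-Wolfe (Fedorov-Wynn) iterations for an approximate G-optimal
    design over the action feature vectors (rows of A)."""
    k, d = A.shape
    rng = np.random.default_rng() if rng is None else rng
    # Start from a small random spanning subset so the support stays small.
    support = rng.choice(k, size=2 * d, replace=False)
    pi = np.zeros(k)
    pi[support] = 1.0 / (2 * d)
    for _ in range(iters):
        V = A.T @ (pi[:, None] * A)                  # V(pi) = sum_a pi(a) a a^T
        g = np.einsum('ij,jk,ik->i', A, np.linalg.pinv(V), A)  # g(a) = a^T V^-1 a
        a_star = int(np.argmax(g))
        if g[a_star] <= d * (1 + tol):               # Kiefer-Wolfowitz optimality check
            break
        gamma = (g[a_star] / d - 1.0) / (g[a_star] - 1.0)       # line-search step
        pi *= 1.0 - gamma
        pi[a_star] += gamma
    return pi

# Hypothetical misspecified linear bandit: r(a) = <a, theta> + eta(a), |eta(a)| <= eps.
rng = np.random.default_rng(0)
k, d, eps = 1000, 10, 0.05
A = rng.normal(size=(k, d)) * np.linspace(0.2, 1.0, d)  # anisotropic features
theta = rng.normal(size=d)
rewards = A @ theta + eps * rng.uniform(-1.0, 1.0, size=k)

pi = g_optimal_design(A, rng=rng)
core = np.flatnonzero(pi > 1e-3)        # small "core set" of actions to check
w = pi[core] / pi[core].sum()           # renormalised design weights on the core set
print(f"actions checked: {core.size} of {k}")

# Weighted least squares using only rewards of the core-set actions; the
# Kiefer-Wolfowitz argument makes every prediction accurate to roughly eps*sqrt(d).
V = A[core].T @ (w[:, None] * A[core])
theta_hat = np.linalg.pinv(V) @ (A[core].T @ (w * rewards[core]))
chosen = int(np.argmax(A @ theta_hat))
print(f"suboptimality: {rewards.max() - rewards[chosen]:.4f}  "
      f"(eps*sqrt(d) = {eps * np.sqrt(d):.4f})")
```

Running this should report a core set of a few dozen actions out of k = 1000 and a suboptimality gap for the chosen action on the order of ϵ√d, in line with the bound described in the abstract.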
Feb 1st 2023
Research Post
Read this research paper, co-authored by Fellow & Canada CIFAR AI Chair Russ Greiner: "Towards artificial intelligence-based learning health system for population-level mortality prediction using electrocardiograms"
Jan 31st 2023