RL Theory Seminar: Corruption robust exploration in episodic reinforcement learning
Online
Amii is proud to support our province's growing AI community. The RL Theory Seminars are hosted independently by researchers: Gergely Neu, Ciara Pike-Burke, and Amii Fellow Csaba Szepesvári.
Speaker: Thodoris Lykouris (Microsoft Research)
Paper: https://arxiv.org/abs/1911.08689
Authors: Thodoris Lykouris, Max Simchowitz, Aleksandrs Slivkins, Wen Sun
Abstract: We initiate the study of multi-stage episodic reinforcement learning under adversarial corruptions in both the rewards and the transition probabilities of the underlying system, extending recent results for the special case of stochastic bandits. We provide a framework which modifies the aggressive exploration enjoyed by existing reinforcement learning approaches based on "optimism in the face of uncertainty", by complementing them with principles from "action elimination". Importantly, our framework circumvents the major challenges posed by naively applying action elimination in the RL setting, as formalized by a lower bound we demonstrate. Our framework yields efficient algorithms which (a) attain near-optimal regret in the absence of corruptions and (b) adapt to unknown levels of corruption, enjoying regret guarantees which degrade gracefully in the total corruption encountered. To showcase the generality of our approach, we derive results for both tabular settings (where states and actions are finite) as well as linear-function-approximation settings (where the dynamics and rewards admit a linear underlying representation). Notably, our work provides the first sublinear regret guarantee which accommodates any deviation from purely i.i.d. transitions in the bandit-feedback model for episodic reinforcement learning.
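To give a feel for the "action elimination" principle the abstract refers to, here is a minimal toy sketch of successive elimination in the simpler stochastic-bandit setting (not the paper's algorithm): each arm's confidence interval is widened by an assumed corruption budget, so that a bounded adversary cannot cause the best arm to be eliminated. All names and the specific widening rule here are illustrative assumptions, not taken from the paper.

```python
import math
import random


def successive_elimination(means, horizon, corruption_budget=0.0, seed=0):
    """Toy successive (action) elimination on a stochastic bandit.

    An arm is eliminated once its upper confidence bound falls below
    the best lower confidence bound among active arms. Confidence radii
    are widened by corruption_budget / pulls, an illustrative allowance
    for adversarially corrupted rewards (assumption, not the paper's rule).
    """
    rng = random.Random(seed)
    counts = [0] * len(means)
    sums = [0.0] * len(means)
    active = list(range(len(means)))

    for t in range(horizon):
        arm = active[t % len(active)]            # round-robin over active arms
        reward = means[arm] + rng.gauss(0, 0.1)  # stochastic reward sample
        counts[arm] += 1
        sums[arm] += reward

        def radius(a):
            # Hoeffding-style radius plus a per-pull corruption allowance
            return (math.sqrt(2 * math.log(max(t, 2)) / counts[a])
                    + corruption_budget / counts[a])

        if all(counts[a] > 0 for a in active):
            best_lcb = max(sums[a] / counts[a] - radius(a) for a in active)
            active = [a for a in active
                      if sums[a] / counts[a] + radius(a) >= best_lcb]

    return active
```

With a nonzero `corruption_budget` the intervals shrink more slowly, trading extra exploration for robustness; the paper's contribution is to make this kind of conservatism work in full episodic RL, where naive elimination fails (as their lower bound shows).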