Abstract: Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents a theoretical analysis of MBIE and a new variation called MBIE-EB, proving their efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to …
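As a sketch of the idea behind MBIE-EB (notation mine, not taken from the abstract above): the algorithm solves the empirical Bellman equations augmented with an exploration bonus that shrinks as a state-action pair is visited more often,

\tilde{Q}(s,a) \;=\; \hat{R}(s,a) \;+\; \gamma \sum_{s'} \hat{T}(s' \mid s,a)\, \max_{a'} \tilde{Q}(s',a') \;+\; \frac{\beta}{\sqrt{n(s,a)}},

where \hat{R} and \hat{T} are the maximum-likelihood estimates, n(s,a) is the visit count, and \beta is a constant. Acting greedily with respect to \tilde{Q} implements optimism in the face of uncertainty.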
“…Most approaches for exploration focus on the tabular case and generally learn models of the environment (e.g., Brafman & Tennenholtz, 2002; Kearns & Singh, 2002; Strehl & Littman, 2008). The community is just beginning to investigate exploration strategies in model-free settings when function approximation is required (e.g., Bellemare et al., 2016b; Machado, Bellemare, & Bowling, 2017; Martin et al., 2017; Osband, Blundell, Pritzel, & Roy, 2016; Ostrovski et al., 2017; Vezhnevets et al., 2017).…”
The Arcade Learning Environment (ALE) is an evaluation platform that poses the challenge of building AI agents with general competency across dozens of Atari 2600 games. It supports a variety of different problem settings and it has been receiving increasing attention from the scientific community, leading to some high-profile success stories such as the much publicized Deep Q-Networks (DQN). In this article we take a big picture look at how the ALE is being used by the research community. We show how diverse the evaluation methodologies in the ALE have become with time, and highlight some key concerns when evaluating agents in the ALE. We use this discussion to present some methodological best practices and provide new benchmark results using these best practices. To further the progress in the field, we introduce a new version of the ALE that supports multiple game modes and provides a form of stochasticity we call sticky actions. We conclude this big picture look by revisiting challenges posed when the ALE was introduced, summarizing the state-of-the-art in various problems and highlighting problems that remain open.
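A minimal sketch of the sticky-actions mechanism the article describes (the wrapper class and parameter name here are illustrative, not part of the ALE API): at each step, with probability ς the environment repeats the previously executed action instead of the agent's newly chosen one.

```python
import random

class StickyActionEnv:
    """Illustrative wrapper: with probability `stickiness` (the article
    suggests 0.25), the previously executed action is repeated instead
    of the agent's chosen action, injecting stochasticity."""

    def __init__(self, env, stickiness=0.25):
        self.env = env
        self.stickiness = stickiness
        self.prev_action = None

    def step(self, action):
        if self.prev_action is not None and random.random() < self.stickiness:
            action = self.prev_action  # sticky: repeat last executed action
        self.prev_action = action
        return self.env.step(action)

    def reset(self):
        self.prev_action = None
        return self.env.reset()
```

Because the repeated action depends on the agent's own history, this defeats agents that memorize fixed action sequences while leaving the underlying game dynamics unchanged.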
“…Another possibility, called the average loss (Strehl and Littman, 2008a), compares the loss in cumulative reward of an agent on the sequence of states the agent actually visits:…”
Section: Average Loss
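In rough form (a sketch; see Strehl and Littman, 2008a, for the precise definition): if the agent visits states s_1, …, s_T and receives rewards r_1, …, r_T, its average loss compares the optimal value at each visited state with the discounted return the agent actually collected from that point on,

\mathrm{loss} \;\approx\; \frac{1}{T} \sum_{t=1}^{T} \Big( V^{*}(s_t) \;-\; \sum_{i=t}^{T} \gamma^{\,i-t}\, r_i \Big),

so the agent is penalized only on the states it actually encounters, rather than on all states of the MDP.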
“…Furthermore, it can be shown that every PAC-MDP algorithm is probably approximately correct in the average loss criterion (Strehl and Littman, 2008a).…”
Section: Average Loss
“…For instance, the binary concept of knownness of a state-action may be replaced by the use of interval estimation that smoothly quantifies the prediction uncertainty in the maximum-likelihood estimates, yielding the MBIE algorithm (Strehl and Littman, 2008a). Furthermore, it is possible to replace the constant optimistic value V_max by a non-constant, optimistic value function to gain further improvement in the sample complexity bound (Strehl et al., 2009; Szita and Lőrincz, 2008).…”
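As a sketch of how such intervals can be formed (standard concentration bounds; the exact constants in Strehl and Littman, 2008a, differ in details): with n(s,a) samples of a state-action pair and rewards in [0, 1], one can bound, each with probability at least 1 − δ,

\big| \hat{R}(s,a) - R(s,a) \big| \;\le\; \sqrt{\frac{\ln(2/\delta)}{2\, n(s,a)}}, \qquad \big\lVert \hat{T}(\cdot \mid s,a) - T(\cdot \mid s,a) \big\rVert_{1} \;\le\; \sqrt{\frac{2\,\big[\ln(2^{|S|} - 2) - \ln \delta\big]}{n(s,a)}}.

Both radii shrink smoothly as 1/\sqrt{n(s,a)}, which is exactly the smooth replacement for binary knownness mentioned in the quotation.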
Efficient exploration is widely recognized as a fundamental challenge inherent in reinforcement learning. Algorithms that explore efficiently converge faster to near-optimal policies. While heuristic techniques are popular in practice, they lack formal guarantees and may not work well in general. This chapter studies algorithms with polynomial sample complexity of exploration, both model-based and model-free ones, in a unified manner. These so-called PAC-MDP algorithms behave near-optimally except in a "small" number of steps with high probability. A new learning model known as KWIK is used to unify most existing model-based PAC-MDP algorithms for various subclasses of Markov decision processes. We also compare the sample-complexity framework to alternatives for formalizing exploration efficiency such as regret minimization and Bayes optimal solutions.
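The standard formalization behind this statement (following Kakade, 2003, and Strehl, Li, and Littman, 2009): an algorithm A is PAC-MDP if, for any \epsilon > 0 and \delta > 0, with probability at least 1 − \delta the number of timesteps on which it acts more than \epsilon-suboptimally,

\Big| \big\{\, t \;:\; V^{A_t}(s_t) \;<\; V^{*}(s_t) - \epsilon \,\big\} \Big|,

is bounded by a polynomial in |S|, |A|, 1/\epsilon, 1/\delta, and 1/(1-\gamma); this count is the sample complexity of exploration referred to above.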
“…They later added a Bayesian model-based method that maintains a distribution over MDPs, determines value functions for sampled MDPs, and then uses those value functions to approximate the true value distribution (Dearden et al., 1999). In model-based interval estimation (MBIE) one tries to build confidence intervals for the transition probability and reward estimates and then optimistically selects the action maximising the value within those confidence intervals (Wiering & Schmidhuber, 1998; Strehl & Littman, 2008). Strehl & Littman (2008) proved that MBIE is able to find near-optimal policies in polynomial time.…”
Section: Efficient Exploration In Reinforcement Learning
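A minimal tabular sketch of the exploration-bonus variant (MBIE-EB) of this idea; the function name, array layout, toy data, and handling of unvisited pairs are illustrative choices, not taken from the original paper:

```python
import numpy as np

def mbie_eb_q(counts, reward_sums, trans_counts, gamma=0.95, beta=0.5, n_iter=200):
    """Optimistic Q-values via value iteration with an MBIE-EB-style
    exploration bonus of beta / sqrt(n(s, a)).

    counts[s, a]          -- visit counts n(s, a)
    reward_sums[s, a]     -- sum of observed rewards for (s, a)
    trans_counts[s, a, t] -- observed transition counts into state t
    """
    n = np.maximum(counts, 1)                # crude guard for unvisited pairs
    r_hat = reward_sums / n                  # empirical mean rewards
    t_hat = trans_counts / n[:, :, None]     # empirical transition model
    bonus = beta / np.sqrt(n)                # exploration bonus per (s, a)
    q = np.zeros_like(r_hat, dtype=float)
    for _ in range(n_iter):                  # optimistic value iteration
        v = q.max(axis=1)                    # greedy state values
        q = r_hat + bonus + gamma * (t_hat @ v)
    return q

# Toy usage: 2 states, 2 actions, a few fabricated observations.
counts = np.array([[4.0, 1.0], [2.0, 0.0]])
reward_sums = np.array([[2.0, 1.0], [0.0, 0.0]])
trans_counts = np.array([[[3.0, 1.0], [0.0, 1.0]],
                         [[2.0, 0.0], [0.0, 0.0]]])
print(mbie_eb_q(counts, reward_sums, trans_counts))
```

An agent would act greedily with respect to the returned optimistic Q-values; the bonus term steers it toward under-visited state-action pairs, while full MBIE instead maximizes the value explicitly over the confidence intervals described in the quotation above.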