We consider a Markov decision process (MDP) setting in which the reward function is allowed to change after each time step (possibly in an adversarial manner), yet the dynamics remain fixed. As in the experts setting, we ask how well an agent can do compared to the reward achieved under the best stationary policy over time. We provide efficient algorithms whose regret bounds have no dependence on the size of the state space; instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions.

1. Introduction. Finite state and action Markov decision processes (MDPs) are a popular and attractive way to formulate many stochastic optimization problems, ranging from robotics to finance (Puterman [17], Bertsekas and Tsitsiklis [2], Sutton and Barto [18]). Unfortunately, in many applications the Markovian assumption is only a relaxation of the real model. A popular framework that is not Markovian is the experts problem, in which during every round a learner chooses one of n decision-making experts and incurs the loss of the chosen expert. The setting is typically an adversarial one, where Nature provides the examples to the learner. The standard objective here is a myopic, backwards-looking one: in retrospect, we want our performance to be not much worse than had we followed any single expert on the sequence of examples provided by Nature. Expert algorithms have played an important role in computer science over the past decade, solving problems ranging from classification to online portfolios (see Littlestone and Warmuth [13], Blum and Kalai [3], Helmbold et al. [8]).

There is an inherent tension between the objectives in an expert setting and those in a reinforcement learning (RL) setting.
In contrast to the myopic nature of expert algorithms, an RL setting typically makes the much stronger assumption of a fixed environment, and the forward-looking objective is to maximize some measure of future reward with respect to this fixed environment. Therefore, in RL past actions have a major influence on the current reward, whereas in the regret setting they have none. In this paper, we relax the Markovian assumption of the MDP by letting the reward function be time dependent, and even chosen by an adversary as in the expert setting, while still keeping the underlying structure of an MDP.

The motivation of this work is to understand how to efficiently incorporate the benefits of existing expert algorithms into a more adversarial reinforcement learning setting, where certain aspects of the environment may change over time. A naive way to apply an expert algorithm is simply to associate an expert with each fixed policy. The running time of such algorithms is polynomial in the number of experts, and the regret (the difference from the optimal reward) is logarithmic in the number of experts. In our setting, however, the number of policies is huge: for an MDP with state space S and action space A there are |A|^|S| policies.
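To make the experts machinery concrete, here is a minimal sketch of the multiplicative-weights (Hedge) algorithm, the standard route to regret that grows only logarithmically in the number of experts. The function name, the learning rate, and the loss format are illustrative assumptions, not details taken from the paper.

```python
import math

def hedge(loss_rounds, n_experts, eta=0.5):
    """Multiplicative-weights (Hedge) over n_experts.

    loss_rounds: iterable of length-n_experts lists; loss_rounds[t][i]
    is the loss of expert i at round t, assumed to lie in [0, 1].
    Returns the learner's total expected loss.
    """
    weights = [1.0] * n_experts
    total_loss = 0.0
    for losses in loss_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]          # follow expert i with prob probs[i]
        total_loss += sum(p * l for p, l in zip(probs, losses))
        # exponentially down-weight experts that incurred loss this round
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss
```

For instance, if one of two experts incurs zero loss every round, Hedge quickly concentrates its weight there and its cumulative loss stays bounded while the other expert's loss grows linearly. The tension the paper describes is visible here: Hedge compares only against single experts on the given loss sequence, while the MDP setting must additionally account for how actions move the state.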
We study a network creation game recently proposed by Fabrikant, Luthra, Maneva, Papadimitriou and Shenker. In this game, each player (vertex) can create links (edges) to other players at a cost of α per edge. The goal of every player is to minimize the sum consisting of (a) the cost of the links he has created and (b) the sum of his distances to all other players.

Fabrikant et al. conjectured that there exists a constant A such that, for any α > A, all non-transient Nash equilibrium graphs are trees. They showed that if a Nash equilibrium is a tree, the price of anarchy is constant. In this paper we disprove the tree conjecture. More precisely, we show that for any positive integer n₀ there exists a graph built by n ≥ n₀ players which contains cycles and forms a non-transient Nash equilibrium, for any α with 1 < α ≤ n/2. Our construction makes use of some interesting results on finite affine planes. On the other hand, we show that for α ≥ 12n log n every Nash equilibrium forms a tree. Without relying on the tree conjecture, Fabrikant et al. also proved an upper bound on the price of anarchy. Additionally, we develop characterizations of Nash equilibria and extend our results to a weighted network creation game as well as to scenarios with cost sharing.
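The per-player objective above can be written down directly: player v pays α for every edge it buys, plus its total shortest-path distance to all other vertices. The sketch below computes this cost on an unweighted graph via BFS; the function and argument names are hypothetical conveniences, not notation from the paper.

```python
from collections import deque

def player_cost(n, bought_edges, all_edges, v, alpha):
    """Cost of player v in the network creation game:
    alpha * (#edges v created) + sum of shortest-path distances
    from v to every other vertex.

    bought_edges: (buyer, target) pairs, used only to bill the buyer.
    all_edges: every edge present in the graph, treated as undirected.
    """
    adj = [[] for _ in range(n)]
    for a, b in all_edges:            # once built, an edge serves both endpoints
        adj[a].append(b)
        adj[b].append(a)
    # BFS from v to get unweighted shortest-path distances
    dist = [None] * n
    dist[v] = 0
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] is None:
                dist[w] = dist[u] + 1
                q.append(w)
    build_cost = alpha * sum(1 for buyer, _ in bought_edges if buyer == v)
    return build_cost + sum(d for d in dist if d is not None)
```

On a star with center 0 buying all edges, the center pays α per spoke plus distance 1 to each leaf, while a leaf pays nothing for edges but sits at distance 2 from the other leaves; checking whether any single player can lower its cost by rewiring is exactly the Nash equilibrium condition the abstract studies.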
We study the number of steps required to reach a pure Nash equilibrium in a load balancing scenario where each job behaves selfishly and attempts to migrate to a machine which will minimize its cost. We consider a variety of load balancing models, including identical, restricted, related, and unrelated machines. Our results depend crucially on the weights assigned to jobs: we consider arbitrary weights, integer weights, K distinct weights, and identical (unit) weights. We look both at arbitrary schedules (where the only restriction is that a job migrates to a machine which lowers its cost) and at specific efficient schedulers (e.g., allowing the largest-weight job to move first). A by-product of our results is establishing a connection between various scheduling models and the game-theoretic notion of potential games. We show that load balancing on unrelated machines is a generalized ordinal potential game, load balancing on related machines is a weighted potential game, and load balancing on related machines with unit-weight jobs is an exact potential game.
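The selfish-migration dynamics this abstract counts can be sketched in a few lines for the simplest model: identical machines with a largest-weight-first scheduler, where a job's cost is the load of its machine and it moves whenever some machine offers a strictly lower resulting load. All names and the all-on-one-machine initial schedule are illustrative assumptions.

```python
def best_response_steps(weights, m):
    """Largest-weight-first best-response dynamics for load balancing
    on m identical machines. Returns (number of migrations, final loads).
    Terminates because each improving move strictly decreases the
    potential, as the abstract's potential-game results guarantee."""
    assign = [0] * len(weights)          # arbitrary start: all jobs on machine 0
    loads = [0.0] * m
    loads[0] = float(sum(weights))
    order = sorted(range(len(weights)), key=lambda j: -weights[j])
    steps = 0
    improved = True
    while improved:
        improved = False
        for j in order:                  # largest-weight job gets to move first
            cur = assign[j]
            # load job j would experience on each candidate machine
            best = min(range(m),
                       key=lambda i: loads[i] + (0 if i == cur else weights[j]))
            if best != cur and loads[best] + weights[j] < loads[cur]:
                loads[cur] -= weights[j]
                loads[best] += weights[j]
                assign[j] = best
                steps += 1
                improved = True
    return steps, loads
```

When the loop exits, no job can lower its cost by migrating, i.e. the schedule is a pure Nash equilibrium; the quantity of interest in the abstract is how large `steps` can get under the various machine and weight models.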
Abstract. We consider the spread maximization problem that was defined by Domingos and Richardson [7,22]. In this problem, we are given a social network represented as a graph and are required to find the set of the most "influential" individuals such that, by introducing a new technology to them, we maximize the expected number of individuals in the network that later adopt it. This problem has applications in viral marketing, where a company may wish to spread the rumor of a new product via the most influential individuals in popular social networks such as Myspace and Blogsphere. The spread maximization problem was recently studied in several models of social networks [14,15,20]. In this short paper we study this problem in the context of the well-studied probabilistic voter model. We provide very simple and efficient algorithms for solving it. An interesting special case of our result is that the most natural heuristic solution, which picks the nodes in the network with the highest degree, is indeed the optimal solution.
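Since the optimal solution in this special case is simply the highest-degree nodes, the whole algorithm reduces to a degree count and a sort. A minimal sketch (function name and edge-list input format are assumptions for illustration):

```python
def top_degree_nodes(edges, k):
    """Return the k highest-degree nodes of an undirected graph given
    as a list of (u, v) edges, breaking ties by node id. Under the
    voter model this degree heuristic is the optimal seed set."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return sorted(degree, key=lambda node: (-degree[node], node))[:k]
```

Note the contrast with the other diffusion models cited in the abstract, where seed selection is computationally hard and only approximation guarantees are known; here the greedy-by-degree rule is exact.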