Agent modelling involves considering how other agents will behave, in order to inform your own actions. In this paper, we explore the use of agent modelling in the hidden-information, collaborative card game Hanabi. We implement a number of rule-based agents, both from the literature and of our own devising, in addition to an Information Set Monte Carlo Tree Search (IS-MCTS) agent. We observe poor results from IS-MCTS, so we construct a new, predictor version that uses a model of the agents with which it is paired. This agent shows a significant improvement in game-playing strength over plain IS-MCTS, resulting from its consideration of what the other agents in a game would do. In addition, we create a flawed rule-based agent to highlight the predictor's capabilities when paired with such an agent.
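As a rough illustration of the predictor idea described above, the sketch below replaces the random moves normally used for the other players during a rollout with moves chosen by a model of the paired agent. This is a minimal sketch under stated assumptions, not the paper's code: the `state` interface (`current_player`, `legal_moves`, `apply`, `is_terminal`, `score`) and the `partner_model` callable are hypothetical names introduced purely for illustration.

```python
import random

def predictor_rollout(state, our_player, partner_model, rng=random):
    """Play a determinised state out to the end of the game.

    Our own moves are chosen uniformly at random, as in a plain MCTS rollout;
    the other players' moves are chosen by the supplied model of the paired
    agent, which is the predictor's key difference from vanilla IS-MCTS.
    """
    while not state.is_terminal():
        if state.current_player() == our_player:
            move = rng.choice(state.legal_moves())
        else:
            move = partner_model(state)   # what we predict the partner would do
        state = state.apply(move)
    return state.score()
```

In an IS-MCTS search, a rollout of this form would be run from each determinisation sampled at the leaves, so the value estimates reflect how the modelled partner is expected to respond rather than assuming it plays randomly.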
This paper introduces the revival of the popular Ms. Pac-Man Versus Ghost Team competition. We present an updated game engine with Partial Observability constraints, a new Multi-Agent Systems approach to developing Ghost agents, and several sample controllers to ease the development of entries. A restricted communication protocol is provided for the Ghosts, providing a more challenging environment than before. The competition will debut at the IEEE Computational Intelligence and Games Conference 2016. Some preliminary results showing the effects of Partial Observability and the benefits of simple communication are also presented.
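The restricted communication protocol is only mentioned at a high level above. Purely as an illustration of what "restricted" communication between Ghost agents might look like, the following sketch limits messages to a small fixed vocabulary, a single position payload, and a one-tick delivery delay. All of these names, fields, and the delay are assumptions made for this sketch; they are not the competition framework's actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Tuple

class MessageType(Enum):
    # A deliberately small vocabulary: ghosts cannot exchange arbitrary data.
    I_AM_HEADING_TO = auto()
    PACMAN_SEEN = auto()
    PACMAN_LOST = auto()

@dataclass(frozen=True)
class GhostMessage:
    sender: str                 # e.g. "BLINKY"
    tick: int                   # game tick at which the message was sent
    kind: MessageType
    position: Tuple[int, int]   # the single payload field allowed

class MessageBoard:
    """Messages become readable only after a fixed delay, mimicking a restricted channel."""
    def __init__(self, delay: int = 1) -> None:
        self.delay = delay
        self._queue: List[GhostMessage] = []

    def post(self, msg: GhostMessage) -> None:
        self._queue.append(msg)

    def readable(self, current_tick: int) -> List[GhostMessage]:
        return [m for m in self._queue if current_tick - m.tick >= self.delay]
```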
This paper outlines the Hanabi competition, first run at CIG 2018, and returning for COG 2019. Hanabi presents a useful domain for game agents which must function in a cooperative environment. The paper presents the results of the two tracks which formed the 2018 competition and introduces the learning track, a new track for 2019 which allows the agents to collect statistics across multiple games.
This paper highlights an experiment to see how standard Monte Carlo Tree Search handles simple co-operative problems with no prior or provided knowledge. These problems are formed from a simple grid world that has a set of goals, doors and buttons, as well as walls that cannot be walked through. Two agents have to reach every goal present on the map. For a door to be open, an agent must be present on at least one of the buttons that is linked to it. When laid out correctly, the world requires each agent to do certain things at certain times in order to achieve the goal. With no modification to allow communication between the two agents, Monte Carlo Tree Search performs well and very "purposefully" when given enough computational time.

I. INTRODUCTION

The research problem studied in this paper is how General Game Playing (GGP) agents perform when trying to solve a simple co-operative problem without co-operative abilities, with a focus on Monte Carlo Tree Search (MCTS). GGP is the field of writing Artificial Intelligence (AI) agents that can play a multitude of games without being written specifically for each one individually [1]. GGP in real-time video games has a popular competition [2] that is run frequently.

Games that feature co-operation of some form between human players and AI agents are commonplace. Most, however, feature very limited forms of co-operation that are typically scripted, as in most First-Person Shooter (FPS) games. Typically, FPS games give the mere impression of co-operation, though any player who looks carefully will see the tell-tale signs of scripting. Where FPS games typically excel at co-operation is in online modes that enable teams of humans to play against each other. Some games even provide squad structures and communication, allowing direct commands for the purpose of better co-ordination, as in Battlefield 2142 (EA Digital Illusions CE, 2006). Real-Time Strategy (RTS) games also often have a small number of features designed to enable communication in a bid to facilitate co-operation. Two games that stand out for co-operation are Rise of Nations (Big Huge Games, 2003) and Empire Earth II (Mad Doc Software, 2005). Rise of Nations allowed a human and an AI player to operate the same set of units and buildings, though no communication was possible at all. This allowed a form of co-operation, but the AI operated to its own agenda. Empire Earth II allowed humans and AI agents to co-operate by letting plans be drawn up between them that could then be followed by both the human and the AI agent.
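The door-and-button rule described in the abstract above is simple enough to sketch. The following is a minimal, hypothetical illustration: a door is open while at least one agent stands on a button linked to it, and the task is solved once every goal has been visited. The function names and coordinate representation are assumptions for this sketch, not the paper's actual implementation.

```python
from typing import Dict, List, Set, Tuple

Pos = Tuple[int, int]

def open_doors(agent_positions: List[Pos],
               door_to_buttons: Dict[Pos, Set[Pos]]) -> Set[Pos]:
    """A door is open while at least one agent occupies one of its linked buttons."""
    occupied = set(agent_positions)
    return {door for door, buttons in door_to_buttons.items() if buttons & occupied}

def all_goals_reached(visited: Set[Pos], goals: Set[Pos]) -> bool:
    """The episode is solved once every goal cell has been visited by some agent."""
    return goals <= visited

# Example: agent 0 stands on the button at (1, 1), which opens the door at (3, 2),
# letting agent 1 pass through towards a remaining goal.
if __name__ == "__main__":
    print(open_doors([(1, 1), (3, 1)], {(3, 2): {(1, 1)}}))   # {(3, 2)}
```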