2020 IEEE Conference on Games (CoG)
DOI: 10.1109/cog47356.2020.9231687

Action Space Shaping in Deep Reinforcement Learning

Cited by 71 publications (43 citation statements)
References 9 publications
“…The set of all valid actions in a given environment is called the action space, abbreviated as S [47]. Some environments, such as Atari and Go, have discrete action spaces, where only a finite number of actions is available to the agent [48]. Other environments have continuous action spaces, for example when an agent controls a robot in the physical world [49].…”
Section: A Basic Concept (mentioning)
confidence: 99%
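The discrete/continuous distinction drawn in this statement can be made concrete with the Gym `spaces` API. The sketch below is illustrative only: the action count and torque bounds are assumptions, not values taken from the cited works.

```python
import numpy as np
import gym

# Discrete action space: a finite set of actions, Atari-style
# (here 6 actions, e.g. NOOP, FIRE, UP, DOWN, LEFT, RIGHT).
discrete_actions = gym.spaces.Discrete(6)

# Continuous action space: real-valued torques for a hypothetical
# 3-joint robot arm, each bounded to [-1, 1].
continuous_actions = gym.spaces.Box(
    low=-1.0, high=1.0, shape=(3,), dtype=np.float32
)

action = discrete_actions.sample()     # an int in {0, ..., 5}
torques = continuous_actions.sample()  # a float32 array of shape (3,)
print(action, torques)
```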
“…The majority of applications found in the field of DRL are evaluated in game-like environments [32]. To the best of our knowledge, and based on a search using Google Scholar, very few applications of DRL to HFS scheduling problems could be found.…”
Section: Related Work (mentioning)
confidence: 99%
“…The goal is ultimately to present an encoding similar to a video game, in which an agent has access to either one or two controllers and can interact with the environment as in a game. This encoding is motivated by the fact that most applications in the field of deep reinforcement learning are conducted in game-like environments [32].…”
Section: Action Space, Observation Space and Reward Function (mentioning)
confidence: 99%
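One way such a controller-style encoding could be realized is with a Gym `MultiDiscrete` space, where each sub-space models one input on the pad. This is a sketch under that assumption; the direction and button counts are hypothetical, not taken from the citing paper.

```python
import gym

# A hypothetical game-controller action space: a 9-way directional pad
# (8 directions + neutral) and two buttons that are each pressed or
# released. MultiDiscrete encodes one choice per sub-space, so a single
# agent action is a triple such as [3, 1, 0].
controller = gym.spaces.MultiDiscrete([9, 2, 2])

# Two controllers simply concatenate the per-controller choices.
two_controllers = gym.spaces.MultiDiscrete([9, 2, 2, 9, 2, 2])

print(controller.sample(), two_controllers.sample())
```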
“…Supporting both types of action space makes the method adaptable to a variety of reinforcement learning algorithms. Because the filtering threshold requires only low precision on small and medium-scale datasets, discretizing the actions reduces the number of action explorations [47] while meeting the basic accuracy requirements, which ensures efficient access to high-performance regions. For datasets with large-scale neighborhoods, the large number of discrete actions required to meet high-precision requirements will degrade learning [17].…”
(mentioning)
confidence: 99%
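The trade-off this statement describes, where fewer discrete actions mean cheaper exploration but lower precision, can be illustrated with a simple binning scheme. The bin count and threshold range below are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical discretization of a continuous filtering threshold in
# [0, 1] into a small grid of candidate values. A discrete-action agent
# picks a bin index, which is mapped back to a continuous threshold.
NUM_BINS = 11  # coarse grid: precision of 0.1, assumed sufficient here
THRESHOLDS = np.linspace(0.0, 1.0, NUM_BINS)

def action_to_threshold(action_index: int) -> float:
    """Map a discrete action index to a continuous threshold value."""
    return float(THRESHOLDS[action_index])

# A coarse grid shrinks the exploration problem (11 actions instead of
# a continuum) at the cost of precision; a finer grid (e.g. 1001 bins)
# restores precision but enlarges the set of actions to explore.
print(action_to_threshold(7))  # -> 0.7
```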