2009 IEEE Symposium on Computational Intelligence and Games
DOI: 10.1109/cig.2009.5286456

Realtime execution of automated plans using evolutionary robotics

Abstract: Applying neural networks to generate robust agent controllers is now a seasoned practice, with time needed only to isolate the particulars of domain and execution. However, we are often constrained to local problems due to an agent's inability to reason in an abstract manner. While there are suitable approaches for abstract reasoning and search, issues often arise when using such offline processes in real-time situations. In this paper we explore the feasibility of creating a decentralised archit…

Cited by 3 publications (3 citation statements)
References 9 publications (11 reference statements)
“…LEBL is a learning technique devised to learn planning knowledge in two-player games, using incomplete explanations produced by considering only a subset of the set of possible actions or moves at each state during the expansion of the state space. Another example is the work in [283], which proposes an architecture combining the reasoning capabilities of classical planning (i.e., the ability to reason over long-term goals and devise plans to achieve them) with the reactive control capabilities of NNs (to execute those plans). Yet another example is the PEORL [225] framework, which proposes a methodology to address the problem of decision making in dynamic environments with uncertainty, based on the integration of symbolic planning, to guide the learning process, with hierarchical RL, to enrich symbolic knowledge and improve planning.…”
Section: Other Frameworks
confidence: 99%
“…However, the planning component is not connected to any real simulation. Thompson and Levine (2009) compared the performance of an agent employing a classical planner across several runs in static and dynamic versions of the same environment. The paper, however, focuses on the agent architecture, and the performance comparison is very brief.…”
Section: Related Work
confidence: 99%