The 2012 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2012.6252763

Self-organizing neural networks for learning air combat maneuvers

Cited by 34 publications (18 citation statements)
References 23 publications
“…On the other side of the spectrum, various approaches have emerged in the machine learning community to generate adaptive agent behavior automatically [5,20-22]. This field has been studied from two main perspectives [1,2]: learning from observation (LfO) (a.k.a. learning from demonstration, programming from demonstration) and learning from experience.…”
Section: Agent Behavior Modeling and Evolving Behavior Trees
confidence: 99%
“…Ontañón et al. [23] use learning from demonstration for real-time strategy games in the context of case-based planning. The latter perspective, learning from experience, leads a virtual agent to learn and optimize its behavior by interacting with its environment repeatedly (e.g., via reinforcement learning or evolutionary algorithms) [5,22]. The agent's performance is measured by how well it performs the task according to an expert's evaluation criteria, and such learning may sometimes find creative solutions not discovered by humans, termed computational creativity [4].…”
Section: Agent Behavior Modeling and Evolving Behavior Trees
confidence: 99%
“…However, due to the computational burden caused by the curse of dimensionality, traditional RL methods are not suitable for solving large-scale Markov decision processes (MDPs) such as air combat. In [12,13], an approximate learning method combining approximation techniques with reinforcement learning is used to approximate a value function or a state space with a function approximator. Avoiding exact solutions can alleviate the problems caused by the curse of dimensionality to a certain extent.…”
Section: Introduction
confidence: 99%
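
As an aside, the value-function-approximation idea described in that excerpt can be illustrated with a minimal sketch. This is my own toy example, not the cited papers' code: the feature map, state dimensions, action set, and environment dynamics below are illustrative placeholders, and a simple linear approximator stands in for whatever approximator the cited works actually use.

import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8      # assumed feature dimension for a toy problem
N_ACTIONS = 3       # assumed small discrete maneuver set
ALPHA = 0.01        # learning rate
GAMMA = 0.95        # discount factor
EPSILON = 0.1       # exploration rate

# One weight vector per action: Q(s, a) is approximated by w_a . phi(s),
# so there is no table over the (potentially huge) state space.
weights = np.zeros((N_ACTIONS, N_FEATURES))

def features(state):
    """Map a raw state vector to a feature vector phi(s) (toy featurization)."""
    return np.tanh(state)

def q_values(state):
    """Approximate Q(s, a) for all actions at once."""
    return weights @ features(state)

def epsilon_greedy(state):
    """Explore with probability EPSILON, otherwise act greedily."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def toy_env_step(state, action):
    """Stand-in environment: random dynamics and a shaped reward."""
    next_state = state + 0.1 * rng.standard_normal(N_FEATURES)
    reward = -np.linalg.norm(next_state) + 0.1 * action
    done = rng.random() < 0.05
    return next_state, reward, done

# Semi-gradient Q-learning: update only the weights of the taken action
# toward the bootstrapped target r + gamma * max_a' Q(s', a').
for episode in range(200):
    state = rng.standard_normal(N_FEATURES)
    done = False
    while not done:
        action = epsilon_greedy(state)
        next_state, reward, done = toy_env_step(state, action)
        target = reward + (0.0 if done else GAMMA * np.max(q_values(next_state)))
        td_error = target - q_values(state)[action]
        weights[action] += ALPHA * td_error * features(state)
        state = next_state

The linear approximator is chosen only to keep the sketch short; in practice the approximator is typically a neural network, and the paper indexed on this page uses self-organizing neural networks for the same role.
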
“…Recently, the modeling of NPCs (non-player characters) has been studied intensively worldwide, along with the development of computer technologies. NPC technologies are commonly applied in areas such as commercial games, virtual spaces for education or nursing purposes [63,165], and computer generated forces (CGF) in military command and control simulations [113,154,155]. In these areas, NPCs are required to exhibit human-like capabilities such as behavioral competence, affective expression, learning ability, and adaptability.…”
Section: Chapter 1 Introduction, 1.1 Background and Motivations
confidence: 99%