2010
DOI: 10.1007/978-3-642-14274-1_18

Goal-Driven Autonomy with Case-Based Reasoning

Abstract: The vast majority of research on AI planning has focused on automated plan generation, in which a planning agent is provided with a set of inputs that include an initial goal (or set of goals). In this context, the goal is presumed to be static; it never changes, and the agent is not provided with the ability to reason about whether it should change this goal. For some tasks in complex environments, this constraint is problematic; the agent will not be able to respond to opportunities or plan execut…

Cited by 19 publications (11 citation statements)
References 5 publications

“…Most GDA agents (e.g., CB-gda (Jaidee et al., 2010)) are given knowledge about state expectations, discrepancies, goals to achieve, and the means to achieve the goals (e.g., the plans or the policies). While some prior work has focused on learning some of these, GRL is the first GDA agent to learn all of them simultaneously.…”
Section: Related Work
confidence: 99%
“…This reduces the knowledge engineering tasks for system designers to either annotating expert gameplay traces or simply collecting them. For example, CB-gda uses observed discrepancies as the retrieval cue to select task goals in a team shooter game (Muñoz-Avila et al. 2010). EISBot performed competently against the built-in AI of Starcraft by selecting goal states using the current state as the retrieval cue from a library of game play traces (Weber, Mateas, and Jhala 2010).…”
Section: Related Approaches for Goal Reasoning
confidence: 99%
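
The discrepancy-cued retrieval described in the statement above can be pictured with a small sketch: a case library pairs previously observed discrepancies with the task goals that resolved them, and the current discrepancy is used as the retrieval cue. This is only an illustrative approximation, not the CB-gda implementation; the Case class, the similarity measure, and the feature names are all hypothetical.

    # Illustrative sketch only (not the CB-gda implementation): a case library
    # that maps observed discrepancies to task goals by nearest-neighbor
    # retrieval. The Case class, similarity measure, and feature names are
    # hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Case:
        discrepancy: dict[str, float]  # features describing the discrepancy
        goal: str                      # task goal stored with the case

    def similarity(a: dict[str, float], b: dict[str, float]) -> float:
        """Overlap-based similarity over the union of discrepancy features."""
        keys = set(a) | set(b)
        if not keys:
            return 0.0
        return sum(1.0 - abs(a.get(k, 0.0) - b.get(k, 0.0)) for k in keys) / len(keys)

    def retrieve_goal(case_base: list[Case], observed: dict[str, float]) -> str:
        """Use the observed discrepancy as the retrieval cue and return the
        goal of the most similar stored case."""
        best = max(case_base, key=lambda c: similarity(c.discrepancy, observed))
        return best.goal

    # Usage: an unexpected loss of territory cues a defensive goal.
    case_base = [
        Case({"territory_lost": 1.0, "units_lost": 0.2}, "defend-base"),
        Case({"enemy_spotted": 1.0}, "scout-area"),
    ]
    print(retrieve_goal(case_base, {"territory_lost": 0.8}))  # -> defend-base
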
“…GDA agents use a four-step strategy to respond competently to unexpected situations in their environment: (1) detect any discrepancy between the observed state and the expected state(s), (2) explain this discrepancy, (3) formulate a goal to resolve it (if needed), and (4) manage this new goal along with its pending goals (Molineaux et al., 2010; Muñoz-Avila et al., 2010). In step 3, these agents use a variety of models to formulate new goals.…”
Section: Related Work
confidence: 99%
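
The four-step cycle quoted above reads naturally as an agent loop. The sketch below is a minimal illustration under the assumption that the explanation, goal-formulation, and goal-management components are supplied as functions; none of the names come from the cited systems.

    # Minimal sketch of one GDA cycle: detect, explain, formulate, manage.
    # The explain / formulate_goal / manage_goals callables are hypothetical
    # placeholders for whatever models a concrete GDA agent uses.
    def gda_step(observed_state, expected_state, pending_goals,
                 explain, formulate_goal, manage_goals):
        # (1) Detect any discrepancy between the observed and expected state.
        discrepancy = {k: v for k, v in observed_state.items()
                       if expected_state.get(k) != v}
        if not discrepancy:
            return pending_goals  # nothing unexpected; keep current goals

        # (2) Explain the discrepancy (e.g., infer its likely cause).
        explanation = explain(discrepancy, observed_state)

        # (3) Formulate a new goal to resolve the discrepancy, if needed.
        new_goal = formulate_goal(discrepancy, explanation)

        # (4) Manage the new goal alongside the agent's pending goals.
        return manage_goals(pending_goals, new_goal) if new_goal else pending_goals

    # Example: a trivial agent that puts a newly formulated goal first.
    goals = gda_step(
        observed_state={"base_health": 40},
        expected_state={"base_health": 100},
        pending_goals=["expand-territory"],
        explain=lambda disc, state: "under-attack",
        formulate_goal=lambda disc, expl: "defend-base" if expl == "under-attack" else None,
        manage_goals=lambda pending, new: [new] + pending,
    )
    # goals == ["defend-base", "expand-territory"]
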