AIAA Guidance, Navigation, and Control Conference and Exhibit 2005
DOI: 10.2514/6.2005-6152
Deception in Autonomous Vehicle Decision Making in an Adversarial Environment

Cited by 13 publications (7 citation statements). References 10 publications.
“…While the foundation of this approach, risk-sensitive estimation, has been reported by some of these authors in [38]-[40], this work contributes a major extension that accounts for a broad range of information inputs, environmental complexity, and the near-real-time nature of the problem. Furthermore, this is the first work in which risk-sensitive estimation has been experimentally validated in rigorous and complex experiments that approximated a realistic problem domain.…”
Section: Discussion (mentioning; confidence: 99%)
“…See [7], [38]-[40] for further discussion of the value function in a similar game problem. We now illustrate risk-sensitive estimation using a simple example.…”
Section: Risk-Sensitive Controller Theory (mentioning; confidence: 99%)
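The excerpt above refers to an example the citing paper gives but which is not reproduced here. As a stand-in illustration only (not the cited paper's example; the exponential-of-cost weighting and all numbers below are assumptions), a minimal risk-sensitive update over discrete hypotheses can be sketched as a Bayes update scaled by the exponential of a user-chosen cost:

```python
import numpy as np

def risk_sensitive_update(weights, likelihoods, costs, theta):
    """One risk-sensitive measurement update over discrete hypotheses.

    A standard Bayesian update reweighted by exp(theta * cost):
    hypotheses that would be costly to mis-estimate are inflated,
    making the estimate conservative. theta = 0 recovers the
    ordinary Bayes update.
    """
    w = weights * likelihoods * np.exp(theta * costs)
    return w / w.sum()

# Hypothetical scenario: two hypotheses about an adversary's objective.
prior = np.array([0.5, 0.5])
like = np.array([0.7, 0.3])   # measurement slightly favors hypothesis 0
cost = np.array([0.0, 2.0])   # mis-estimating hypothesis 1 is expensive

neutral = risk_sensitive_update(prior, like, cost, theta=0.0)
averse = risk_sensitive_update(prior, like, cost, theta=1.0)
# The risk-averse posterior shifts probability toward the costly hypothesis.
```

With theta = 0 the posterior simply follows the likelihood; raising theta biases the estimate toward outcomes whose misclassification would be most damaging, which is the qualitative behavior risk-sensitive estimation is designed to produce.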
“…We note that, by (5)-(6), the optimal deception problem is a reward maximization problem on the MDP M = (S×B, A, P), where P is given by (6). With such a model, it is well-known that the optimal policy π* is memoryless, i.e., π*_t depends solely on s_t, B_t, and t, and Problem 7 is solvable by previously known and extensively discussed methods (see, e.g., [19] for a detailed study).…”
Section: A. Optimal Deceptive Policy (mentioning; confidence: 99%)
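The excerpt reduces optimal deception to reward maximization on an augmented-state MDP, solvable by standard methods with a memoryless optimal policy. As a generic sketch of one such standard method (value iteration; the tiny two-state example is hypothetical and not from the cited paper):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Value iteration on a finite MDP.

    P has shape (n_states, n_actions, n_states); R has shape
    (n_states, n_actions). The greedy policy returned is memoryless:
    a function of the current (augmented) state only.
    """
    n_states = P.shape[0]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical augmented-state example: 2 states, 2 actions.
P = np.zeros((2, 2, 2))
P[0, 0] = [1.0, 0.0]    # action 0 in state 0: stay
P[0, 1] = [0.0, 1.0]    # action 1 in state 0: move to state 1
P[1, :, 1] = 1.0        # state 1 is absorbing
R = np.array([[0.0, 0.0], [1.0, 1.0]])  # reward collected in state 1

V, policy = value_iteration(P, R)
# The memoryless policy moves from state 0 toward the rewarded state.
```

Here the state would be the pair (s_t, B_t) from the excerpt, so the "memoryless" policy still conditions on the adversary's belief component of the augmented state.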
“…The concept of deception is naturally present in a variety of contexts that have an adversarial element. Examples include cybersecurity [1]-[3], bio-inspired robotics [4], genetic algorithms [5], vehicle decision making [6], warfare strategy (a particularly voluminous study of the role of deception in war has been made in [7]), and interpersonal relationships [8]. A deceptive strategy employed by an agent in an adversarial setting rests on dual goals of the agent: 1) achieving its objective, and 2) modifying the adversary's beliefs about the nature of that objective, for instance its location, distance, or the reward attained at it.…”
Section: Introduction (mentioning; confidence: 99%)
“…The motivational application here is the military command and control (C²) problem for air operations, with unmanned/uninhabited air vehicles (UAVs). See [2], [5], [16], [21], [24], [25], [28], [31] for related information. This application has specific characteristics such that we will be able to construct a reasonable problem formulation which is particularly nice from the point of view of analysis and computation.…”
Section: Introduction (mentioning; confidence: 99%)