2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC)
DOI: 10.1109/dasc50938.2020.9256581
Run-Time Assurance for Learning-Based Aircraft Taxiing

Cited by 22 publications (13 citation statements)
References 7 publications
“…Verification Tightness: To compute tighter verification (i.e., a tighter reachable set computation) in ReachNN, we set the CutoffThreshold to 1e−10, the QueueSize to 1000, and the degree bound of the Bernstein polynomials to [2,2]. For less tight verification, we set the CutoffThreshold, QueueSize, and degree bound to 1e−7, 300, and [1,1], respectively. For Wasserstein distance on the oscillator system, the tighter reachable set computation takes around 40 steps on average, with about 115 seconds per step, to learn a neural network controller, compared to the less tight computation, which takes about 55 iterations with 86 seconds per step.…”
Section: Additional Discussion
confidence: 99%
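The two verification profiles in the excerpt can be summarized as configuration presets. This is an illustrative sketch only: the parameter names mirror the excerpt, not necessarily ReachNN's actual configuration API, and the runtime estimates simply combine the step counts and per-step times reported above.

```python
# Hypothetical presets mirroring the excerpt's ReachNN settings
# (names are illustrative, not ReachNN's actual API).
TIGHT = {"cutoff_threshold": 1e-10, "queue_size": 1000, "bernstein_degree": [2, 2]}
LOOSE = {"cutoff_threshold": 1e-7,  "queue_size": 300,  "bernstein_degree": [1, 1]}

def estimated_runtime(steps: int, seconds_per_step: float) -> float:
    """Total wall-clock time to learn a controller at a given tightness."""
    return steps * seconds_per_step

# Figures from the excerpt: tighter ~= 40 steps x 115 s, looser ~= 55 steps x 86 s.
tight_total = estimated_runtime(40, 115)  # 4600 seconds
loose_total = estimated_runtime(55, 86)   # 4730 seconds
```

Note that with these figures the "less tight" profile is not faster end to end: its cheaper steps are outweighed by the extra iterations.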
See 1 more Smart Citation
“…Verification Tightness: To compute tighter verification (i.e., tighter reachable set computation) in ReachNN, we set the CutoffThreshold as 1e − 10, QueueSize as 1000, and degree bound of Bernstein polynomials as [2,2]. For less tight verification, we set 1e − 7, 300, and [1,1] for CutoffThreshold, QueueSize, and degree bound, respectively. For Wasserstein distance on the oscillator system, the tighter reachable set computation in average takes around 40 steps with about 115 seconds for each step to learn a neural network controller, compared to the less tight computation that takes about 55 iterations with 86 seconds for each step.…”
Section: Additional Discussionmentioning
confidence: 99%
“…Safety-critical autonomous systems, such as avionics systems [1] and self-driving vehicles [2], often operate in highly dynamic environments with significant uncertainties and disturbances. It is critical yet challenging to formally ensure their safety, especially for the control and decision making modules.…”
Section: Introduction
confidence: 99%
“…A first family of approaches predicts misbehavior of an ML model by analyzing its inputs. For instance, reconstruction errors from autoencoders can be used to determine whether an input lies within the validity domain of the model [17], [18], [19]. Another kind of Neural Network (NN) monitor models the activation patterns of internal layers during training and uses them to detect out-of-distribution (OOD) data [20].…”
Section: B Runtime Monitoring of ML
confidence: 99%
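The autoencoder-based validity check described in the excerpt can be sketched as follows. This is a minimal illustration, assuming a generic `encode`/`decode` pair and a pre-calibrated error threshold; the toy "autoencoder" below simply projects onto the first two dimensions to mimic a learned low-dimensional manifold.

```python
import numpy as np

def reconstruction_error(x, encode, decode):
    """Mean squared error between an input and its autoencoder reconstruction."""
    x_hat = decode(encode(x))
    return float(np.mean((x - x_hat) ** 2))

def in_validity_domain(x, encode, decode, threshold):
    """Accept the input iff its reconstruction error stays below the threshold."""
    return reconstruction_error(x, encode, decode) <= threshold

# Toy "autoencoder": keeps only the first 2 of 4 dimensions, zeros the rest.
encode = lambda x: x[:2]
decode = lambda z: np.concatenate([z, np.zeros(2)])

in_dist = np.array([1.0, 2.0, 0.0, 0.0])  # lies on the modeled manifold
ood     = np.array([1.0, 2.0, 3.0, 4.0])  # large tail -> high reconstruction error
```

Here `in_dist` reconstructs exactly (error 0) and passes, while `ood` loses its last two components (error 6.25) and is flagged as outside the validity domain.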
“…For example, the highest softmax activation can be used as an anomaly score [21], and, when possible, softmax values can even be calibrated using OOD data [22]. We also note that some works have implemented MLRM mechanisms within the context of practical safety applications [19], [18], [23].…”
Section: B Runtime Monitoring of ML
confidence: 99%
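The max-softmax anomaly score mentioned in the excerpt can be sketched in a few lines. A minimal illustration, assuming raw classifier logits as input: a peaked softmax yields a score near 1 (confident, likely in-distribution), while a flat softmax yields a low score (possible OOD).

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def max_softmax_score(logits):
    """Higher score = more confident; a low score flags a possible OOD input."""
    return float(np.max(softmax(logits)))

confident = np.array([10.0, 0.0, 0.0])  # peaked distribution -> score near 1
uncertain = np.array([1.0, 1.0, 1.0])   # flat distribution -> score exactly 1/3
```

In a monitor, the score would be compared against a threshold calibrated on held-out (or, per [22], OOD) data; the threshold choice is application-specific and not fixed here.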
“…Traditional methods from control theory provide powerful tools for safety and performance analysis but lack the expressiveness to deal with rich sensing models such as vision or Lidar. On the other hand, learning-based methods have been used successfully on visual-feedback control tasks including autonomous driving [1] and aircraft taxiing [2], but ensuring the safety of these controllers remains an open question.…”
Section: Introduction
confidence: 99%