2021
DOI: 10.1007/978-3-030-90870-6_41

Formal Analysis of Neural Network-Based Systems in the Aircraft Domain

Abstract: Neural networks are increasingly being used for efficient decision making in the aircraft domain. Given the safety-critical nature of the applications involved, stringent safety requirements must be met by these networks. In this work we present a formal study of two neural network-based systems developed by Boeing. The Venus verifier is used to analyse the conditions under which these systems can operate safely, or to generate counterexamples that show when safety cannot be guaranteed. Our results confirm the ap…

Cited by 13 publications (4 citation statements) | References: 20 publications

Citation statements (ordered by relevance):
“…As witnessed by an extensive recent survey by Huang et al (2020) of more than 200 papers, the response from the scientific community to the problem of ensuring correct behavior of DNNs has been substantial. Verification (Bak et al, 2020, Demarchi et al, 2022, Eramo et al, 2022, Ferrari et al, 2022, Guidotti, 2022, Guidotti et al, 2019b, 2020, 2023c,d,e, Henriksen and Lomuscio, 2021, Katz et al, 2019, Kouvaros et al, 2021, Singh et al, 2019a), which aims to provide formal assurances regarding the behavior of neural networks, has emerged as a potential solution to the aforementioned robustness issues. In addition to the development of verification tools and techniques, a substantial amount of research is also directed towards modifying networks to align with specified criteria (Guidotti et al, 2019a,b, Henriksen et al, 2022, Kouvaros et al, 2021, Sotoudeh and Thakur, 2021), and exploring methods for training networks that adhere to specific constraints on their behavior (Cohen et al, 2019, Eaton-Rosen et al, 2018, Giunchiglia and Lukasiewicz, 2021, Giunchiglia et al, 2022, Hu et al, 2016).…”
Section: Introduction (mentioning; confidence: 99%)
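For reference, the verification task mentioned in the statement above is commonly formalized as an input-output property check; a minimal sketch in LaTeX, assuming a generic network f with a precondition set X and a postcondition set Y (symbols illustrative, not drawn from the cited papers):

% Generic neural network verification query:
% decide whether every input satisfying the precondition
% is mapped by the network into the postcondition.
\[
\forall x \in \mathcal{X} \subseteq \mathbb{R}^{n} :\quad f(x) \in \mathcal{Y} \subseteq \mathbb{R}^{m}
\]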
“…Providing formal guarantees on the performance of neural networks [6-20], known as verification, or making them compliant with such guarantees [21-27], known as repair, has proven to be similarly challenging, even when using models with limited complexity and size. Additionally, neural networks have recently been found to be prone to reliability issues known as adversarial perturbations [28], where seemingly insignificant variations in their inputs cause unforeseeable and undesirable changes in their behavior.…”
Section: Introduction (mentioning; confidence: 99%)
“…If a NN model is verified to be robust, no adversarial attack exists for the model, input and perturbation under analysis (Goodfellow, Shlens, and Szegedy 2014). NN verification has been used in many areas including safety-critical systems (Tran et al 2020; Julian and Kochenderfer 2021; Kouvaros et al 2021; Manzanas Lopez et al 2021).…”
Section: Introduction (mentioning; confidence: 99%)
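For reference, the local robustness property described in the statement above can be written as follows; a minimal sketch, assuming a classifier with score function f, a reference input x, and an l-infinity perturbation radius epsilon (symbols illustrative):

% Local adversarial robustness around x:
% no input within the epsilon-ball changes the predicted class.
\[
\forall x' :\; \|x' - x\|_{\infty} \le \epsilon \;\Longrightarrow\; \arg\max_{i} f_{i}(x') = \arg\max_{i} f_{i}(x)
\]

If this property is verified, no adversarial perturbation of magnitude at most epsilon exists for the given model and input, which is exactly the guarantee referenced in the statement above.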