2023
DOI: 10.3390/info14070397

Leveraging Satisfiability Modulo Theory Solvers for Verification of Neural Networks in Predictive Maintenance Applications

Abstract: Interest in machine learning and neural networks has increased significantly in recent years. However, their applications are limited in safety-critical domains due to the lack of formal guarantees on their reliability and behavior. This paper presents recent advances in satisfiability modulo theory (SMT) solvers used in the context of the verification of neural networks with piece-wise linear and transcendental activation functions. An experimental analysis is conducted using neural networks trained on a real-world pr…
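To make the verification setting concrete, here is a minimal sketch, assuming the Z3 SMT solver's Python API, of how a toy feed-forward network with a piece-wise linear (ReLU) activation can be encoded as constraints and checked against an output bound. The weights, input domain, and property below are illustrative assumptions and are not taken from the paper, whose experiments use networks trained on real-world data.

# Minimal sketch (not the paper's tool): encode a tiny ReLU network
# as SMT constraints with Z3 and check an output bound.
# All weights, bounds, and the property are illustrative assumptions.
from z3 import Real, Solver, If, And, sat

def relu(e):
    # Piece-wise linear activation encoded as an if-then-else term
    return If(e >= 0, e, 0)

x = Real('x')              # single network input
h1 = relu(2 * x + 1)       # hidden neuron 1 (assumed weights)
h2 = relu(-1 * x + 3)      # hidden neuron 2 (assumed weights)
y = h1 - h2                # linear output layer

s = Solver()
s.add(And(x >= 0, x <= 1)) # input domain (assumed)
s.add(y > 2)               # negation of the property y <= 2

if s.check() == sat:
    print("Property violated, counterexample:", s.model())
else:
    print("Property holds on the input domain")

Because the solver searches for a counterexample to the property, an unsat answer means the bound holds for every input in the domain; on this toy network the output simplifies to 3x - 2, which stays below 2 on [0, 1], so the check reports that the property holds.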

Cited by 6 publications (1 citation statement). References 45 publications.
“…As witnessed by an extensive recent survey by Huang et al. (2020) of more than 200 papers, the response from the scientific community to the problem of ensuring correct behavior of DNNs has been substantial. Verification (Bak et al., 2020, Demarchi et al., 2022, Eramo et al., 2022, Ferrari et al., 2022, Guidotti, 2022, Guidotti et al., 2019b, 2020, 2023c,d,e, Henriksen and Lomuscio, 2021, Katz et al., 2019, Kouvaros et al., 2021, Singh et al., 2019a), which aims to provide formal assurances regarding the behavior of neural networks, has emerged as a potential solution to the aforementioned robustness issues. In addition to the development of verification tools and techniques, a substantial amount of research is also directed towards modifying networks to align with specified criteria (Guidotti et al., 2019a,b, Henriksen et al., 2022, Kouvaros et al., 2021, Sotoudeh and Thakur, 2021), and exploring methods for training networks that adhere to specific constraints on their behavior (Cohen et al., 2019, Eaton-Rosen et al., 2018, Giunchiglia and Lukasiewicz, 2021, Giunchiglia et al., 2022, Hu et al., 2016).…”
Section: Introduction (citation classified as mentioning, confidence: 99%)