2021
DOI: 10.48550/arxiv.2112.00646
Preprint

Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance

Abstract: The increasing use of Machine Learning (ML) components embedded in autonomous systems (so-called Learning-Enabled Systems, LES) has resulted in the pressing need to assure their functional safety. As for traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising arguments and evidence generated from safety a…

Cited by 2 publications (1 citation statement)
References 58 publications
“…2. Moreover, we note that while verification is currently working with point-wise robustness, the evidence collected through robustness verification and testing techniques [18,19,47,48] can be utilised to construct safety cases for the certification of real-world autonomous systems [65][66][67].…”
Section: Safety/Formal Verification (mentioning)
Confidence: 99%
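As a concrete illustration of the point-wise robustness referred to in the citation statement above, the following is a minimal sketch (not taken from the paper) of a sampling-based robustness test: it estimates whether a classifier's prediction stays unchanged under random perturbations within an L-infinity ball of radius epsilon around a single input. The `predict` interface, the radius, the [0, 1] input range, and the toy linear model are assumptions chosen for illustration; verification tools of the kind cited establish such properties formally over the whole ball rather than by sampling.

```python
import numpy as np

def pointwise_robustness_test(predict, x, epsilon=0.03, n_samples=1000, seed=0):
    """Sampling-based check of point-wise robustness (illustrative only).

    `predict` is assumed to map a batch of inputs to class labels;
    `x` is a single input (e.g. a flattened, normalised image).
    Returns the fraction of sampled perturbations within the L-infinity
    ball of radius `epsilon` for which the predicted label is unchanged.
    """
    rng = np.random.default_rng(seed)
    base_label = predict(x[None, ...])[0]

    # Draw perturbations uniformly from the L-infinity ball and clip to
    # the assumed valid input range [0, 1].
    noise = rng.uniform(-epsilon, epsilon, size=(n_samples,) + x.shape)
    perturbed = np.clip(x[None, ...] + noise, 0.0, 1.0)

    labels = predict(perturbed)
    # 1.0 means no label change was observed among the sampled perturbations.
    return float(np.mean(labels == base_label))


if __name__ == "__main__":
    # Toy stand-in model: a fixed linear classifier over a 4-dimensional input.
    weights = np.array([[1.0, -1.0, 0.5, 0.0],
                        [-0.5, 1.0, 0.0, 1.0]])

    def predict(batch):
        return np.argmax(batch @ weights.T, axis=1)

    x = np.array([0.2, 0.8, 0.5, 0.1])
    print(f"empirical robustness estimate: {pointwise_robustness_test(predict, x):.3f}")
```

An empirical estimate of this kind is the sort of testing evidence the quoted statement suggests could feed into a safety case, whereas formal verification would prove the property for every point in the perturbation ball.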