Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems 2019
DOI: 10.1145/3302509.3311038

Towards safe machine learning for CPS

Abstract: Machine learning (ML) techniques are increasingly applied to decision-making and control problems in Cyber-Physical Systems, many of which are safety-critical, e.g., chemical plants, robotics, autonomous vehicles. Despite the significant benefits brought by ML techniques, they also raise additional safety issues because 1) the most expressive and powerful ML models are not transparent and behave as black boxes, and 2) the training data, which plays a crucial role in ML safety, is usually incomplete. An important tec…

Cited by 22 publications (2 citation statements)
References 21 publications
“…Critically, ML systems are increasingly trusted within cyber physical systems [15], such as power stations, factories, and oil and gas industries. In such complex physical environments, the potential damage that could be caused by a vulnerable system might even be life threatening [16].…”
Section: Introduction
confidence: 99%
“…ML models learn from a subset of possible scenarios represented on the training dataset, which determines the training space, defining the upper limit of the ML model performance [64]. As pointed out above, the feature space could miss data, be incomplete, and can cover only a very small part of the entire feature space, with no guarantee that the training data are even representative [23].…”
confidence: 99%