2022
DOI: 10.1007/978-3-030-95405-5_20
Know Your Limits: Machine Learning with Rejection for Vehicle Engineering

Abstract: New vehicle designs need to be tested in representative driving scenarios to evaluate their durability. Because these tests are costly, only a limited number of them can be performed. These have traditionally been selected using rules of thumb, which are not always applicable to modern vehicles. Hence, there is a need to ensure that vehicle tests are aligned with their real-world usage. One possibility for obtaining a broad real-world usage overview is to exploit the data collected by sensors embedded in produ…

Cited by 7 publications (9 citation statements)
References 17 publications
“…One potential improvement to our model would be to include abstention during the training phase. In the survey [83], three architectures for abstained classification are described: separated, dependent, or integrated rejection. We perform dependent rejection: abstention is applied to the output of the model, after the training step.…”
Section: Discussion (mentioning)
confidence: 99%
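The dependent rejection described in this statement can be illustrated with a minimal sketch: the classifier is trained as usual, and abstention is applied afterwards by thresholding the confidence of its output. The scikit-learn model, the synthetic data, and the 0.8 threshold are illustrative assumptions, not details taken from the cited works.

```python
# Minimal sketch of dependent rejection: training is unchanged, and abstention
# is applied post hoc to the model's predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # standard training step

proba = clf.predict_proba(X_test)      # rejection operates only on this output
confidence = proba.max(axis=1)
accept = confidence >= 0.8             # abstain whenever confidence is too low (illustrative threshold)

predictions = clf.predict(X_test)
print(f"coverage: {accept.mean():.2f}, "
      f"accuracy on accepted: {(predictions[accept] == y_test[accept]).mean():.2f}")
```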
“…For such examples, the model's predictions have an elevated risk of being incorrect, and hence may not be trustworthy. An example can be rejected due to ambiguity (i.e., how well the decision boundary is defined in a region) or novelty (i.e., how anomalous an example is with respect to the observed training data) [22]. The OC-score metric goes beyond measuring ambiguity in an ensemble (i.e., the model's confidence in a prediction).…”
Section: Related Work (mentioning)
confidence: 99%
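The two rejection reasons quoted above, ambiguity and novelty, can be sketched as two separate tests combined at prediction time. The random-forest classifier, the IsolationForest novelty detector, and both thresholds below are illustrative assumptions; this is not the OC-score construction discussed in the statement.

```python
# Minimal sketch: reject an example either because the classifier is unsure
# (ambiguity, near the decision boundary) or because the example looks
# anomalous relative to the training data (novelty).
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
novelty_detector = IsolationForest(random_state=0).fit(X_train)

confidence = clf.predict_proba(X_test).max(axis=1)           # ambiguity signal
novelty_score = novelty_detector.decision_function(X_test)   # negative means anomalous

ambiguous = confidence < 0.7   # poorly defined decision-boundary region (illustrative threshold)
novel = novelty_score < 0.0    # unlike anything seen during training
rejected = ambiguous | novel

print(f"rejected {rejected.mean():.1%} of test examples "
      f"({ambiguous.mean():.1%} ambiguous, {novel.mean():.1%} novel)")
```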
“…An underdeveloped research line consists of rejecting the output of an AI system in favor of escalating the decision to a human agent who could possibly take into account additional (qualitative) information. This is considered in the area of classification with a reject option (or selective classification) (Hendrickx et al. 2021). There is a trade-off here between the performance of an AI system on the accepted region, which should be maximized, and the probability of rejecting, which should be minimized (as human agents' effort is limited).…”
Section: Trusting Fair-AI (mentioning)
confidence: 99%
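The trade-off described in this statement, between performance on the accepted region and the probability of rejecting, can be made concrete by sweeping a confidence threshold and reporting coverage against accuracy on the accepted examples. The model, synthetic data, and threshold values are illustrative assumptions.

```python
# Minimal sketch of the coverage vs. accepted-region-accuracy trade-off in
# selective classification: a stricter threshold rejects more examples but
# tends to raise accuracy on the ones that remain.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
confidence = clf.predict_proba(X_test).max(axis=1)
correct = clf.predict(X_test) == y_test

for threshold in (0.5, 0.7, 0.9):                 # stricter threshold -> more rejections
    accepted = confidence >= threshold
    coverage = accepted.mean()                    # fraction handled by the AI system
    selective_acc = correct[accepted].mean()      # performance on the accepted region
    print(f"threshold {threshold:.1f}: coverage {coverage:.2f}, "
          f"accuracy on accepted {selective_acc:.2f}")
```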