2021
DOI: 10.1145/3450356

Testing Deep Learning-based Visual Perception for Automated Driving

Abstract: Due to the impressive performance of deep neural networks (DNNs) for visual perception, there is an increased demand for their use in automated systems. However, to use deep neural networks in practice, novel approaches are needed, e.g., for testing. In this work, we focus on the question of how to test deep learning-based visual perception functions for automated driving. Classical approaches for testing are not sufficient: A purely statistical approach based on a dataset split is not enough, as testing needs…

Cited by 14 publications (8 citation statements)
References 59 publications
“…This set of claims evaluates individual properties P that are required to minimize the safety-related performance insufficiencies in the model. The failure rate with respect to different properties P may be estimated using testing techniques or with formal verification (Huang et al., 2020; Abrecht et al., 2021). Formal verification can include an exhaustive exploration of a bounded hypersphere defining the vicinity of particular samples to demonstrate local robustness properties (Cheng et al., 2017; Huang et al., 2017), and several techniques have been put forward to apply constraint solving to this problem.…”
Section: Evaluation of Performance (mentioning)
Confidence: 99%
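To make the local-robustness property mentioned above concrete, the sketch below randomly samples an L-infinity neighbourhood of a single input and checks whether the predicted class ever changes. The toy linear model, the radius eps, and the sample budget are illustrative assumptions and not taken from the cited works; exhaustive exploration or constraint solving, as in Cheng et al. (2017) and Huang et al. (2017), would replace the sampling loop to obtain an actual guarantee.

```python
# Minimal sketch of a sampling-based local-robustness check, assuming an
# L-infinity neighbourhood of radius eps around a given input. The "model"
# is a toy linear classifier standing in for a trained DNN, so this is a
# falsification heuristic, not a formal proof of robustness.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained perception model: logits = W @ x + b.
W = rng.normal(size=(3, 8))
b = rng.normal(size=3)

def predict(x):
    """Return the class index predicted for input x."""
    return int(np.argmax(W @ x + b))

def is_locally_robust(x, eps, n_samples=10_000):
    """Sample perturbations from the L-inf ball of radius eps around x and
    report whether any sampled point changes the predicted class."""
    label = predict(x)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != label:
            return False  # counterexample: prediction flips inside the ball
    return True

x0 = rng.normal(size=8)
print("no counterexample found:", is_locally_robust(x0, eps=0.05))
```

Note that sampling can only falsify the property: a return value of True merely means no counterexample was found within the sampling budget.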
“…For white-box coverage criteria, neuron coverage [12] and extensions of it (e.g., SS-coverage [15]), motivated by MC/DC coverage in classical software, have been proposed. For black-box coverage criteria, multiple results utilize combinatorial testing [3,1] to argue about the relative completeness of the test data. Readers may refer to Section 5.1 of a recent survey paper [6] for an overview of existing results in coverage-driven testing.…”
Section: Related Work (mentioning)
Confidence: 99%
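As a minimal illustration of the white-box criterion named above, the sketch below computes neuron coverage in the usual sense: the fraction of neurons whose scaled activation exceeds a threshold on at least one test input. The two-layer ReLU network, the threshold of 0.25, and the random test inputs are assumptions made for the example only; they are not part of the cited criteria [12, 15].

```python
# Minimal sketch of neuron coverage, assuming the common definition:
# a neuron counts as covered if its per-layer min-max-scaled activation
# exceeds a threshold on at least one test input. Network weights and
# test inputs are random stand-ins for a real perception DNN and dataset.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-layer ReLU network standing in for a perception DNN.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)

def hidden_activations(x):
    """Return the activation vector of each hidden layer for input x."""
    h1 = np.maximum(0.0, W1 @ x + b1)
    h2 = np.maximum(0.0, W2 @ h1 + b2)
    return [h1, h2]

def neuron_coverage(test_inputs, threshold=0.25):
    """Fraction of neurons activated above `threshold` (after per-layer
    min-max scaling) by at least one input in `test_inputs`."""
    covered = [np.zeros(16, dtype=bool), np.zeros(4, dtype=bool)]
    for x in test_inputs:
        for layer, acts in enumerate(hidden_activations(x)):
            span = acts.max() - acts.min()
            scaled = (acts - acts.min()) / span if span > 0 else np.zeros_like(acts)
            covered[layer] |= scaled > threshold
    return sum(c.sum() for c in covered) / sum(c.size for c in covered)

tests = rng.normal(size=(100, 8))
print(f"neuron coverage: {neuron_coverage(tests):.2%}")
```

SS-coverage or combinatorial black-box criteria would replace the per-neuron predicate with conditions over neuron pairs or with coverage of discretized input-parameter combinations, respectively.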
“…Most notably, when testers have no insight into how the software executes, it may be the only practicable method [6]. Because ML changes the development paradigm, this property is also an essential advantage for evaluating ML systems [7]. Conventional software is built by explicitly writing down the rules as program code, and the system's behavior is governed by these known rules.…”
Section: Fig. 1, Representation of Testing Using Machine Learning (mentioning)
Confidence: 99%