2021
DOI: 10.48550/arxiv.2111.05534
Preprint
Verifying Controllers with Convolutional Neural Network-based Perception: A Case for Intelligible, Safe, and Precise Abstractions

Abstract: Convolutional Neural Networks (CNNs) for object detection, lane detection, and segmentation now sit at the head of most autonomy pipelines, yet their safety analysis remains an important challenge. Formal analysis of perception models is fundamentally difficult because their correctness is hard, if not impossible, to specify. We present a technique for inferring intelligible and safe abstractions for perception models from system-level safety requirements, data, and program analysis of the modules that are d…

Cited by 1 publication (1 citation statement)
References 33 publications
“…An additional challenge with perception inputs is that it is difficult even to define the observation space for verification: for example, not all 256 × 256 × 3 arrays make a valid real-world RGB image. To overcome these challenges, there have been attempts to abstract the observation space through GANs [4], piecewise affine abstractions [13], or a geometric sensor mapping [14]. However, the obtained failures are only as accurate as the abstraction itself.…”
Section: Related Work
confidence: 99%
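The point the citing paper makes can be illustrated with a small sketch (my own toy example, not code from either paper, assuming NumPy): an array can be perfectly valid by shape and dtype while being statistically nothing like a camera image, which is why verifying over the raw array space, or a crude abstraction of it, can surface failures no real sensor would ever produce.

```python
import numpy as np

# The raw observation space for a 256x256 RGB sensor is every uint8 array
# of shape (256, 256, 3) -- but almost none of these arrays are plausible
# camera images.
rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# "Valid" by shape and dtype alone...
assert noise.shape == (256, 256, 3) and noise.dtype == np.uint8

def neighbor_correlation(img):
    """Correlation between horizontally adjacent pixel intensities.

    Natural images have strongly correlated neighboring pixels; i.i.d.
    noise does not, so this statistic separates the two regimes.
    """
    a = img[:, :-1, :].astype(float).ravel()
    b = img[:, 1:, :].astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]

# Near zero for random noise, close to 1 for typical photographs: the set
# of realistic images is a thin subset of the full array space, and any
# abstraction of that subset is only as good as the abstraction itself.
print(f"adjacent-pixel correlation of random noise: "
      f"{neighbor_correlation(noise):.3f}")
```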