2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00224

Detecting the Unexpected via Image Resynthesis

Abstract: Classical semantic segmentation methods, including the recent deep learning ones, assume that all classes observed at test time have been seen during training. In this paper, we tackle the more realistic scenario where unexpected objects of unknown classes can appear at test time. The main trends in this area either leverage the notion of prediction uncertainty to flag the regions with low confidence as unknown, or rely on autoencoders and highlight poorly-decoded regions. Having observed that, in both cases, …

Cited by 133 publications (156 citation statements) · References 29 publications
“…scale poorly to the level of detail in urban driving, good results have been achieved with generative adversarial networks (Wang et al., 2018; Isola et al., 2017) that synthesize driving scenes from semantic segmentation. Lis et al. (2019) use such a method to find outliers by comparing the original and resynthesized images, where they train the comparison on flipped semantic labels in the ID data and therefore do not require outliers in training. While the original work (Lis et al., 2019) experimented with lower-resolution segmentation data, Di Biase et al. (2021) submitted an adapted, scaled-up model.…”
Section: Submitted Methods
Mentioning confidence: 99%
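
The pipeline this statement describes can be sketched end to end: segment the input, resynthesize an image from the predicted labels, and score each pixel by how much the resynthesis disagrees with the input. Below is a minimal illustrative sketch, not the authors' code; SegNet, SynthNet, and the plain L1 comparison are stand-in assumptions (the actual method uses a pix2pixHD-style generator and a learned discrepancy module).

```python
# Hypothetical sketch: segment -> resynthesize from labels -> compare.
# All module names and the 1x1-conv "networks" are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19  # e.g. Cityscapes

class SegNet(nn.Module):
    """Stand-in semantic segmentation network."""
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(3, NUM_CLASSES, kernel_size=1)

    def forward(self, x):
        return self.head(x)  # (B, C, H, W) class logits

class SynthNet(nn.Module):
    """Stand-in label-to-image generator (pix2pixHD plays this role)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Conv2d(NUM_CLASSES, 3, kernel_size=1)

    def forward(self, onehot):
        return self.body(onehot)  # (B, 3, H, W) resynthesized image

def anomaly_map(image, seg, synth):
    """Per-pixel input-vs-resynthesis discrepancy; a simple L1
    distance stands in for the learned discrepancy network."""
    labels = seg(image).argmax(dim=1)                        # (B, H, W)
    onehot = F.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
    recon = synth(onehot)
    return (image - recon).abs().mean(dim=1)                 # (B, H, W)

x = torch.rand(1, 3, 64, 128)
print(anomaly_map(x, SegNet(), SynthNet()).shape)  # torch.Size([1, 64, 128])
```

Pixels belonging to unknown objects get mapped to wrong labels, so their resynthesis differs visibly from the input, which is exactly what this score picks up.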
“…Instead of comparing each pixel of the input image directly with the corresponding pixel in the resynthesized image, Lis et al. (2019) propose to train a discrepancy network on artificially generated anomalies that directly outputs the regions where the reconstruction failed. Since their method requires pixel-precise semantic annotations of the training data, we do not consider this method for our benchmark.…”
Section: Generative Adversarial Networks (GANs)
Mentioning confidence: 99%
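
The "artificially generated anomalies" mentioned here are made by corrupting in-distribution label maps, so no real outlier images are needed at training time. A hedged sketch of that idea, assuming per-pixel label flipping for brevity (the original work flips the labels of whole object instances):

```python
# Illustrative only: corrupt ground-truth labels to create synthetic
# anomalies; the binary flip mask becomes the discrepancy network's
# training target.
import torch

def make_synthetic_anomalies(labels, num_classes, p=0.1):
    """Flip a random fraction p of pixels to a different class."""
    flip = torch.rand(labels.shape) < p
    # Offset in [1, num_classes - 1] guarantees the class changes.
    offset = torch.randint(1, num_classes, labels.shape)
    corrupted = torch.where(flip, (labels + offset) % num_classes, labels)
    return corrupted, flip.float()

labels = torch.randint(0, 19, (1, 64, 128))
corrupted, target = make_synthetic_anomalies(labels, num_classes=19)
print(target.mean())  # ~0.1: fraction of flipped pixels
```

The image resynthesized from the corrupted labels looks wrong exactly where labels were flipped, so the discrepancy network learns to localize such regions without ever seeing a real outlier.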
“…Anomaly detection by reconstruction. Anomalies can be detected by training an autoencoder [2], [11] or generative model [39], [57] on in-distribution data and using the quality of the reconstruction as a proxy OOD score, as the autoencoder is unlikely to accurately decode patterns not seen during training.…”
Section: Related Work
Mentioning confidence: 99%
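
As a rough illustration of this reconstruction-based line of work, the sketch below trains a tiny autoencoder on in-distribution images and treats per-pixel reconstruction error as the OOD score; the architecture and threshold are illustrative assumptions, not taken from the cited references.

```python
# Minimal sketch: autoencoder trained on in-distribution data only;
# high reconstruction error at test time flags unfamiliar patterns.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

ae = TinyAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

# One training step on an in-distribution batch (random stand-in data).
x = torch.rand(4, 3, 64, 64)
loss = F.mse_loss(ae(x), x)
opt.zero_grad(); loss.backward(); opt.step()

# Test time: per-pixel squared error as the OOD score.
with torch.no_grad():
    err = (ae(x) - x).pow(2).mean(dim=1)         # (B, H, W)
    ood_mask = err > err.mean() + 2 * err.std()  # illustrative threshold
```

Because the decoder has only ever been optimized to reproduce in-distribution content, unfamiliar patterns reconstruct poorly, which is the property the quoted statement relies on.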