2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020
DOI: 10.1109/cvprw50498.2020.00174
Robust Semantic Segmentation by Redundant Networks With a Layer-Specific Loss Contribution and Majority Vote

Cited by 19 publications (5 citation statements) | References 54 publications
“…There are several works that attempt to increase robustness for semantic segmentation. [1] proposes a specialized student-teacher architecture for robust semantic segmentation. [18] relies on increasing shape bias of networks to build the robust semantic segmentation system, inspired by the success of image classification using a similar bias approach [9].…”
Section: Related Work and Discussion
confidence: 99%
“…The other approach is to apply an existing robust data augmentation technique during transfer learning. While applying robustification techniques during fine-tuning for downstream tasks is an option, a naive application of these methods can decrease downstream task performance (see Table 4 in the Appendix as an example) and often requires further modifications tailored for downstream tasks to maintain good accuracy while achieving robustness [6], partly because object detection and semantic segmentation systems tend to be more complex than image classification. Therefore, rather than entirely resorting to data augmentation during fine-tuning, it is critical to better understand robustness transfer to achieve both robustness and good clean accuracy in downstream tasks.…”
Section: Introduction
confidence: 99%
“…Among online-capable metrics for the overall performance prediction, some only focus on malfunction detection and correction (again involving an ensemble of DNNs) [17], [64], or exploit temporal inconsistency between consecutive predictions [15], which has to be defined in a highly task-specific way. The closest prior work to ours is presumably from Löhdefink et al. [18], who propose to train an autoencoder on the same data a semantic segmentation DNN is trained on to reconstruct the input image, showing a correlation between both tasks' metrics.…”
Section: Performance Prediction Of Neural Network
confidence: 99%
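The autoencoder-based performance prediction cited above relies on an image-quality score of the reconstruction (e.g., PSNR) correlating with the segmentation network's accuracy on the same input. A minimal NumPy sketch of such a reconstruction score is shown below; the arrays and noise levels are illustrative, not the cited authors' setup:

```python
import numpy as np

def psnr(image, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio between an input image and its
    autoencoder reconstruction; a higher value suggests the input is
    closer to the training distribution of both networks."""
    mse = np.mean((image - reconstruction) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a near-perfect reconstruction scores higher than a noisy one.
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
good = img + 0.01 * rng.standard_normal(img.shape)
bad = img + 0.30 * rng.standard_normal(img.shape)
assert psnr(img, good) > psnr(img, bad)
```

In the cited scheme, this scalar would be computed online per frame and mapped (via the offline-measured correlation) to an estimate of the segmentation metric.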
“…one typically assumes that an offline-measured performance of a DNN is also valid in inference, this is actually not true due to the mentioned environment changes. Meanwhile, less frequently proposed online-capable algorithms are either task-specific [15], rely on ensembles of DNNs [16], [17], or only show the correlation of a proposed metric to the absolute performance metric without further outlining an online-capable predictive scheme [15], [18]. Naively using the confidence scores of the network itself [19] is not recommended, as DNNs often assign a probability close to one to a single class [20], and, even more importantly, the uncertainty of measurements does not bear predictive power to estimate the absolute DNN performance.…”
confidence: 99%
“…An alternative robust training procedure uses a redundant Teacher-Student framework consisting of three networks named the static teacher, the static student, and the adaptive student [21,22]. The two students apply model distillation of the teacher by learning to predict its output, while having a considerably simpler architecture.…”
Section: Empirically Justified
confidence: 99%
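The paper's title pairs the redundant teacher-student networks with a pixel-wise majority vote over their predictions. A minimal sketch of such a vote over class-index maps follows; the three toy prediction maps are invented for illustration, not from the paper:

```python
import numpy as np

def majority_vote(pred_maps):
    """Pixel-wise majority vote over class-index maps from redundant
    networks; pred_maps has shape (num_networks, H, W)."""
    pred_maps = np.asarray(pred_maps)
    n_classes = int(pred_maps.max()) + 1
    # Count the votes each class receives at every pixel, then pick the
    # class with the most votes (ties resolve to the lowest class index).
    votes = np.stack([(pred_maps == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three redundant networks predicting a 2x2 label map.
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
c = np.array([[0, 2], [1, 2]])
fused = majority_vote([a, b, c])
# fused == [[0, 1], [1, 2]]: unanimous pixels keep their class,
# and the disagreeing pixels go to the 2-of-3 majority.
```

The robustness intuition is that an adversarial or corrupted input is less likely to fool all redundant networks at the same pixel in the same way.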