2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022
DOI: 10.1109/cvpr52688.2022.00790
Towards Fewer Annotations: Active Learning via Region Impurity and Prediction Uncertainty for Domain Adaptive Semantic Segmentation

Cited by 45 publications (29 citation statements) · References 62 publications
“…And the improvement compared to MADA demonstrates that the new sample selection metric and ST-based semi-supervised domain adaptation can effectively address the weaknesses of the previous method. In addition, our method consistently shows better performance than RIPU [73], which demonstrates that selecting a few images to annotate is better than labeling a few regions of each sample, despite the greater annotation time the latter requires. The visualization of three example images is displayed in Fig.…”
Section: Image
confidence: 81%
“…For the UDA task, traditional adversarial-based [16], [39], [42]–[44] and prototype- and ST-based [47]–[49] UDA methods, as well as the ResNet-101 version of HRDA [51] (the current SOTA), are included for comparison (Transformer-based methods are not listed due to their different feature extraction ability and segmentation performance upper bound). For the active DA task, we compare with the sample-based [21] and region-based [73] approaches using the same amount of annotation. The results of MADA [22] are also listed.…”
Section: Results
confidence: 99%
“…However, the selected points ignore the pixel spatial continuity of the image. Recently, Xie et al. [70] greatly improved segmentation performance in the target domain by exploiting the spatial consistency of the image and selecting the most diverse and uncertain image regions. However, most active learning research ignores the massive unlabeled data available in the target domain, resulting in high labeling costs.…”
Section: Related Work
confidence: 99%
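The region-based selection described in the excerpt above scores image regions by combining two signals: how mixed the predicted labels are within a neighborhood (region impurity) and how uncertain the classifier is at each pixel (prediction uncertainty). A minimal NumPy sketch of this idea, assuming a simple k×k square neighborhood and the illustrative function name `acquisition_scores` (the paper's exact formulation may differ):

```python
import numpy as np

def acquisition_scores(probs: np.ndarray, k: int = 3) -> np.ndarray:
    """Per-pixel acquisition score combining region impurity and
    prediction uncertainty, in the spirit of RIPU (Xie et al., CVPR 2022).

    probs: (C, H, W) softmax probabilities over C classes.
    k: side length of the square region around each pixel.
    Returns an (H, W) score map; higher means more worth annotating.
    """
    C, H, W = probs.shape
    eps = 1e-12

    # Prediction uncertainty: Shannon entropy of the softmax at each pixel.
    uncertainty = -(probs * np.log(probs + eps)).sum(axis=0)

    # Region impurity: entropy of the predicted-label histogram in a
    # k x k neighborhood (borders handled with edge padding).
    pred = probs.argmax(axis=0)
    r = k // 2
    padded = np.pad(pred, r, mode="edge")
    impurity = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            hist = np.bincount(patch.ravel(), minlength=C) / (k * k)
            impurity[i, j] = -(hist * np.log(hist + eps)).sum()

    # Pixels in homogeneous, confident regions score near zero; pixels on
    # uncertain class boundaries score highest.
    return impurity * uncertainty
```

Under this scoring, the annotation budget is spent on class-boundary regions where the model is both unsure and locally inconsistent, rather than on large homogeneous areas.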
“…Evaluation metric. As a common practice [45,46,56,70], we report the mean Intersection-over-Union (mIoU) [17] on the Cityscapes validation set. Specifically, we report the mIoU on the shared 19 classes for GTAV → Cityscapes and report the results on 13 (mIoU*) and 16 (mIoU) common classes for SYNTHIA → Cityscapes.…”
Section: Implementation Details
confidence: 99%
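The mIoU metric cited above is conventionally computed from a confusion matrix, averaging per-class IoU over the evaluated classes (19 for GTAV → Cityscapes; 13 or 16 for SYNTHIA → Cityscapes). A minimal sketch; the helper name `mean_iou` and the `ignore_index` default of 255 (Cityscapes' ignore label) are illustrative:

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int,
             ignore_index: int = 255) -> float:
    """Mean Intersection-over-Union, the standard semantic-segmentation
    metric. Pixels labeled `ignore_index` in the ground truth are excluded,
    and classes absent from both prediction and ground truth are skipped.
    """
    valid = gt != ignore_index
    pred, gt = pred[valid], gt[valid]

    # Confusion matrix: rows = ground truth class, columns = predicted class.
    conf = np.bincount(gt * num_classes + pred,
                       minlength=num_classes ** 2).reshape(num_classes,
                                                           num_classes)
    intersection = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
    ious = intersection / np.maximum(union, 1)

    present = union > 0  # only average over classes that actually occur
    return float(ious[present].mean())
```

For the 13-class mIoU* protocol on SYNTHIA → Cityscapes, the same computation is simply restricted to the 13 shared classes.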