2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00046

SALAD: Source-free Active Label-Agnostic Domain Adaptation for Classification, Segmentation and Detection

Cited by 8 publications (7 citation statements)
References 38 publications
“…Self-attention is promising, especially for transformer-based SFDA methods. Although self-attention-based SFDA methods are rarely used in image classification [70], [124] and appear mostly in semantic segmentation [11], [112], [129], the transformer-based method TransDA [124] achieves the highest accuracy of 79.3% on the Office-Home dataset, 4.8% higher than the second-best method [139]. This suggests that encouraging models to turn their attention to the object region may be quite effective for reducing domain shift.…”
Section: Domain-Based Reconstruction Is Commonly Used in SFDA
Mentioning confidence: 99%
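To make the idea of attending to object regions concrete, here is a minimal sketch, not the TransDA implementation, of injecting a self-attention block over CNN feature-map tokens before classification; the layer sizes and mean pooling are illustrative assumptions.

```python
# Minimal sketch (assumed layout, not TransDA): self-attention over
# CNN feature-map tokens so the pooled feature emphasizes object regions.
import torch
import torch.nn as nn

class AttentionHead(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        # fmap: (B, C, H, W) backbone output -> (B, H*W, C) tokens
        b, c, h, w = fmap.shape
        tokens = fmap.flatten(2).transpose(1, 2)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)   # residual + layer norm
        return tokens.mean(dim=1)               # pooled feature for the classifier

feats = torch.randn(4, 512, 7, 7)               # dummy backbone features
pooled = AttentionHead()(feats)                 # (4, 512)
```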
“…Based on this, CADX [128] divides target-domain images into support images and query images under the source-free setting and improves the original patch-to-patch operation to image-to-image in order to capture holistic representations and reduce the computational burden. In addition, some approaches [11], [129] further process the features from both spatial-attention and channel-attention perspectives to enrich the contextual semantics of the representations. Currently, attention-based SFDA methods are still relatively rare, especially transformer-based ones.…”
Section: Self-Attention
Mentioning confidence: 99%
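As one hypothetical reading of "spatial-attention and channel-attention perspectives", the following CBAM-style sketch re-weights a feature map along both axes; the module layout is an assumption, not the implementation of [11] or [129].

```python
# Assumed CBAM-style module: channel attention from a pooled vector,
# then spatial attention from pooled channel statistics.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: re-weight channels via the global-average vector.
        w = self.channel_mlp(x.mean(dim=(2, 3)))            # (B, C)
        x = x * w[:, :, None, None]
        # Spatial attention: re-weight positions via mean/max channel stats.
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)  # (B, 2, H, W)
        return x * self.spatial_conv(stats)

x = torch.randn(2, 64, 32, 32)
y = ChannelSpatialAttention(64)(x)                           # same shape, re-weighted
```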
“…Bohdal et al. [28] partitioned target-domain images into support images and query images under the unsupervised setting and improved the original patch-to-patch operation to image-to-image to capture holistic representations and reduce the computational burden. Kothandaraman et al. [29] further processed features from both spatial and channel perspectives, conducted feature distillation from pre-trained networks to target networks, and supplemented target samples mined by transferability and uncertainty criteria to enrich contextual semantics. However, for some complex domains and tasks, self-attention mechanisms might not capture all critical features, as they primarily focus on local relationships within input sequences while neglecting global relationships.…”
Section: Related Work
Mentioning confidence: 99%
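A hedged sketch of the two ingredients this statement names, assuming an L2 feature-distillation loss and prediction entropy as the uncertainty criterion; the transferability criterion and the exact losses of [29] are not reproduced here.

```python
# Assumed components, not the implementation of [29]: L2 feature
# distillation from a frozen pre-trained network to the target network,
# plus entropy-based mining of uncertain target samples.
import torch
import torch.nn.functional as F

def distill_loss(teacher_feats: torch.Tensor, student_feats: torch.Tensor) -> torch.Tensor:
    """L2 distance between detached teacher features and student features."""
    return F.mse_loss(student_feats, teacher_feats.detach())

def mine_uncertain(logits: torch.Tensor, top_frac: float = 0.1) -> torch.Tensor:
    """Indices of the most uncertain target samples by prediction entropy."""
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    k = max(1, int(top_frac * len(entropy)))
    return entropy.topk(k).indices

logits = torch.randn(32, 10)        # dummy target-domain logits
picked = mine_uncertain(logits)     # samples to supplement training with
```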
“…In real-world scenarios, the source data is usually inaccessible due to data privacy, leading to the SFOD problem (Lee et al. 2022; Zong et al. 2022; Ding et al. 2022; Kothandaraman et al. 2022; Wang et al. 2022b). Owing to complex backgrounds and negative examples, SFOD is far more challenging than conventional source-free image classification (Agarwal et al. 2022; Ambekar et al. 2022; Bohdal et al. 2022; Xia, Zhao, and Ding 2021).…”
Section: Related Work
Mentioning confidence: 99%