2020 IEEE Winter Applications of Computer Vision Workshops (WACVW)
DOI: 10.1109/wacvw50321.2020.9096945
Impact of ImageNet Model Selection on Domain Adaptation

Cited by 34 publications (23 citation statements)
References 33 publications
“…It is obvious that the pre-trained features from EfficientNetB7 have the smallest distance between the two domains, which suggests that EfficientNetB7 is the best deep network compared with the other 16 networks. This observation is also consistent with [36]: a better ImageNet model produces better pre-trained features for UDA. Furthermore, we also list the feature-extraction time for each dataset.…”
Section: Results (supporting)
confidence: 87%
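The statement above ranks networks by the distance between source-domain and target-domain pre-trained features. The quoted paper does not specify the distance measure here, so as an illustrative assumption the sketch below uses one common choice, the (squared) linear maximum mean discrepancy: the distance between the mean feature vectors of the two domains.

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared linear MMD: distance between the mean feature
    vectors of the source and target feature matrices."""
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)

# Toy example: a 4-D feature space with a mean shift between domains.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 4))   # stand-in source features
tgt = rng.normal(0.5, 1.0, size=(100, 4))   # stand-in shifted target features
print(linear_mmd(src, tgt))        # cross-domain distance
print(linear_mmd(src, src[:50]))   # within-domain distance, for comparison
```

Under this measure, a backbone whose features give a smaller cross-domain distance is considered a better starting point for UDA.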
“…One disadvantage of feature extraction using pretrained networks, however, is that it yields lower performance than fine-tuning the same network, since feature extraction is a single pass over the images. Previous work [36] nevertheless suggests that a better ImageNet model produces better features for UDA. Therefore, we can extract features from a better ImageNet model to compensate for the performance lost by not fine-tuning.…”
Section: Pre-training Feature Representation (mentioning)
confidence: 95%
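The distinction drawn above, a single frozen forward pass versus fine-tuning, can be sketched in PyTorch. The tiny `nn.Sequential` backbone is a hypothetical stand-in (a real setup would load an ImageNet-pretrained model, e.g. from torchvision); the point is only the freezing and the gradient-free pass.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained backbone.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Feature extraction: freeze all weights and make one forward pass.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

with torch.no_grad():
    images = torch.randn(5, 8)   # stand-in batch of 5 inputs
    feats = backbone(images)     # single pass, no gradients tracked
print(feats.shape)               # torch.Size([5, 4])

# Fine-tuning would instead unfreeze the parameters and train them:
for p in backbone.parameters():
    p.requires_grad = True
```

Because the extraction pass computes no gradients and visits each image once, it is much cheaper than fine-tuning, which is the trade-off the quoted passage describes.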
“…We clearly observe that CUDA outperforms several state-of-the-art methods that also use a ResNet-50 [67] encoder, and surpasses them even further when CUDA uses a ResNet-152 [67] encoder. Our ablation study in Table 2 shows the effect of choosing an ImageNet pre-trained ResNet-50 versus ResNet-152 model on domain adaptation, with implications similar to those of [72].…”
Section: Office-31 Dataset Single-source Domain Adaptation Results (mentioning)
confidence: 53%
“…We implement our approach in PyTorch on an Nvidia GeForce 1080 Ti GPU and extract features for the three datasets from a fine-tuned ResNet50 network [14]. The 1,000 features are then extracted from the last fully connected layer [15]. The parameters in domain distribution alignment are η = 0.1, λ = 10, and ρ = 10, fixed based on previous research [10], and τ = 0.31 is taken from [16].…”
Section: B Implementation Details (mentioning)
confidence: 99%