2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.591
Deeper, Broader and Artier Domain Generalization

Abstract: The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example, recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on allevi…

Cited by 951 publications (876 citation statements) | References 20 publications
“…Datasets To evaluate the performance of JiGen when training over multiple sources we considered three domain generalization datasets. PACS [27] covers 7 object categories and 4 domains (Photo, Art Paintings, Cartoon and Sketches). We followed the experimental protocol in [27] and trained our model considering three domains as source datasets and the remaining one as target.…”
Section: Methods
confidence: 99%
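
The leave-one-domain-out protocol described in the statement above is easy to pin down in code. Below is a minimal sketch; `load_domain`, `train`, and `evaluate` are hypothetical placeholders standing in for whatever model pipeline is used, and only the domain split itself follows the protocol of [27].

```python
# Minimal sketch of the leave-one-domain-out protocol used with PACS.
# The four domain names come from the dataset; the three callables are
# hypothetical placeholders for an arbitrary training/evaluation pipeline.

DOMAINS = ["photo", "art_painting", "cartoon", "sketch"]

def leave_one_domain_out(load_domain, train, evaluate):
    """For each held-out target domain, train on the other three sources."""
    results = {}
    for target in DOMAINS:
        sources = [d for d in DOMAINS if d != target]
        train_data = [load_domain(d) for d in sources]
        model = train(train_data)          # fit on the 3 source domains
        results[target] = evaluate(model, load_domain(target))
    return results
```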
“…[28] imposed a Maximum Mean Discrepancy measure to align the distributions among different domains and trained the network with adversarial feature learning. [26] assigned a separate network copy to each training domain during training and used the shared parameters for inference. [27] improved generalization performance by using a meta-learning approach on the split training sets.…”
Section: Related Work
confidence: 99%
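
The Maximum Mean Discrepancy mentioned in the statement above has a simple empirical form. The sketch below computes the biased squared-MMD estimate with an RBF kernel in NumPy; it illustrates the measure itself, not the specific alignment network of [28], and the bandwidth `sigma` and toy feature shapes are arbitrary assumptions.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x, y (RBF kernel)."""
    def k(a, b):
        # pairwise squared Euclidean distances, then the Gaussian kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Toy usage: feature batches drawn from two shifted "domains".
rng = np.random.default_rng(0)
f_src = rng.normal(0.0, 1.0, size=(64, 16))
f_tgt = rng.normal(0.5, 1.0, size=(64, 16))
print(rbf_mmd2(f_src, f_tgt))  # larger values indicate larger domain gap
```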
“…Office-Caltech, we compare our method on the DG scenario with the state-of-the-art DG methods: learned-support vector machine (L-SVM) [47], kernel Fisher discriminant analysis (KDA) [49], domain-invariant component analysis (DICA) [36], multi-task auto-encoder (MTAE) [20], domain separation network (DSN) [48], deeper, broader and artier domain generalization (DBADG) [21], conditional invariant deep domain generalization (CIDDG) [38], undoing the damage of dataset bias (Undo-Bias) [19], unbiased metric learning (UML) [46], and deep domain generalization with structured low-rank constraint (DGLRC) [22].…”
Section: AlexNet
confidence: 99%
“…This approach significantly enhances the ability of prior adversarial adaptation approaches through our additional proposed domain discrepancy module. The proposed method was evaluated on five benchmark datasets (Office-31 [25], Office-Home [26], ImageCLEF-DA, Office-Caltech [27] and PACS [21]) in standard unsupervised DA and DG settings. Experiments show that our proposed model for unsupervised DA and DG yields state-of-the-art results.…”
Section: Introduction
confidence: 99%
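
Adversarial adaptation of the kind referenced in the statement above is commonly built on a gradient-reversal layer feeding a domain discriminator. The PyTorch sketch below is a generic illustration of that pattern, not the cited paper's actual discrepancy module; the feature extractor, discriminator shapes, and reversal strength are placeholder assumptions.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negated gradient flows to the feature extractor; no grad for lambd.
        return -ctx.lambd * grad_output, None

# Hypothetical feature extractor and binary domain discriminator.
features = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
domain_clf = nn.Linear(32, 2)  # source vs. target

x = torch.randn(8, 16)
domain_labels = torch.randint(0, 2, (8,))
f = features(x)
logits = domain_clf(GradReverse.apply(f, 1.0))
loss = nn.functional.cross_entropy(logits, domain_labels)
loss.backward()  # extractor is pushed toward domain-confusing features
```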