Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/187
Dynamic Domain Generalization

Abstract: Both social group detection and group emotion recognition in images are growing fields of interest, but never before have they been combined. In this work we aim to detect emotional subgroups in images, which can be of great importance for crowd surveillance or event analysis. To this end, human annotators are instructed to label a set of 171 images, and their recognition strategies are analysed. Three main strategies for labeling images are identified, with each strategy assigning either 1) more weight to emo…

Cited by 13 publications (14 citation statements). References: 1 publication.
“…However, the growing number of parameters increases the risk of overfitting. Several studies address this problem by squeezing the dynamic parameters into a low-dimensional space [44,47]. Sun et al. [44] decompose the network parameters into a dynamic component and a static component to reduce the dynamic parameter space and achieve dynamic domain generalization (DDG).…”
Section: Related Work
Confidence: 99%
“…Most existing methods aim to obtain a robust static model by projecting multiple source domains into a common distribution, namely a domain-invariant space [31,35,42,56], where domain-specific information is neglected, resulting in a failure to adapt to unknown target domains (see Figure 1(a)). Fortunately, this limitation of static models has been alleviated by Dynamic Domain Generalization (DDG) [44], which develops a dynamic network to achieve training-free adaptation on unknown target domains, as shown in Figure 1(b). By decoupling the network parameters into a static component and a dynamic component, DDG allows the static component to learn domain-invariant features shared across instances, while the dynamic component adapts to each instance on the fly, learning domain-specific features.…”
Section: Introduction
Confidence: 99%
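The static/dynamic parameter decomposition these citation statements describe can be sketched roughly as follows. This is a minimal PyTorch illustration, not the exact DDG architecture: the layer name, the controller design (global pooling plus a linear map), and all dimensions are assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Sketch of a conv layer whose kernel is decomposed into a
    static component (shared across instances, domain-invariant)
    and a dynamic component (generated per instance on the fly,
    domain-specific)."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Static component: ordinary learned weights shared by all inputs.
        self.static_weight = nn.Parameter(
            torch.randn(out_ch, in_ch, k, k) * 0.02
        )
        # Hypothetical controller: maps a global descriptor of each
        # instance to a kernel residual (the dynamic component).
        self.controller = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, out_ch * in_ch * k * k),
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = []
        for i in range(x.shape[0]):  # per-instance dynamic kernel
            dyn = self.controller(x[i : i + 1]).view_as(self.static_weight)
            w_i = self.static_weight + dyn  # static + dynamic decomposition
            outputs.append(F.conv2d(x[i : i + 1], w_i, padding=self.k // 2))
        return torch.cat(outputs, dim=0)
```

Squeezing the controller's output through a low-dimensional code (as [44,47] do) would shrink the dynamic parameter space further; the direct linear map here is kept only for brevity.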
See 2 more Smart Citations
“…However, the domain shift is usually agnostic in real-world scenarios since the target data is not available for training. This issue inspires the research area of domain generalization (DG) [1,22,27,28,30,34,41,43,45,47,51,52,54,74,75,78-80], which aims to make models trained on seen domains achieve accurate predictions on unseen domains, i.e., the conditional distribution P(Y|X) is robust under a shifted marginal distribution P(X). Canonical DG focuses on learning a domain-invariant feature distribution P(F(X)) across domains to keep the conditional distribution P(Y|F(X)) robust.…”
Section: Introduction
Confidence: 99%
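The invariance objective summarized in this statement can be written in standard DG notation. The symbols below (feature extractor F, classifier C, loss ℓ, regularizer Ω, weight λ) are conventions assumed for illustration, not notation taken from the cited works:

```latex
\min_{F,\,C} \;\; \sum_{d=1}^{D} \mathbb{E}_{(x,y)\sim P_d}\!\big[\ell\big(C(F(x)),\, y\big)\big]
\;+\; \lambda\,\Omega\big(\{P_d(F(X))\}_{d=1}^{D}\big)
```

Here the first term is the empirical risk summed over the D seen source domains, and Ω penalizes divergence among the per-domain feature distributions, so that P(F(X)), and hence P(Y|F(X)), stays stable on unseen domains.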