2022 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme52920.2022.9859706

Rethinking Hard-Parameter Sharing in Multi-Domain Learning

Cited by 9 publications (8 citation statements) | References 5 publications
“…NumDomain + 1 layers. We know from prior research that the end layer of a CNN has lower representation capacity compared to other layers in the architecture and is thus more sensitive to domain-specific information [43]. We, therefore, choose n to be the last fully connected layer of the CNN architecture shown in Fig.…”
Section: Federated Multi-Domain Learning (mentioning)
confidence: 99%
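The statement above describes keeping every layer of the CNN shared across domains except the final fully connected layer, which is replicated once per domain because the last layer is the most sensitive to domain-specific information. The following PyTorch sketch illustrates that design; the class name, layer sizes, and domain_id routing are illustrative assumptions, not the cited paper's actual code.

```python
import torch
import torch.nn as nn

class DomainBranchedCNN(nn.Module):
    """All layers shared across domains except the last fully connected layer."""

    def __init__(self, num_domains: int, num_classes: int):
        super().__init__()
        # Shared feature extractor: one set of parameters for every domain.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Domain-specific branch: one copy of the final FC layer per domain,
        # since that layer carries most of the domain-specific information.
        self.heads = nn.ModuleList(
            [nn.Linear(64, num_classes) for _ in range(num_domains)]
        )

    def forward(self, x: torch.Tensor, domain_id: int) -> torch.Tensor:
        features = self.shared(x)               # shared representation
        return self.heads[domain_id](features)  # domain-specific prediction

# Usage: route each batch through the head of the domain it came from.
model = DomainBranchedCNN(num_domains=3, num_classes=10)
logits = model(torch.randn(8, 3, 32, 32), domain_id=1)  # shape: (8, 10)
```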
“…Multi-task learning [21][22][23] is the joint training of multiple related tasks, aiming to improve the generalization and robustness of each task by transferring and integrating information across tasks. Multi-task learning is thus naturally closer to the human cognitive learning mechanism.…”
Section: Multi-Task Learning (mentioning)
confidence: 99%
“…Hard parameter sharing is the most commonly used approach, in which the hidden CNN layers are shared between all tasks while several task-specific output layers are kept (Zhang et al., 2022). Caruana (1997) presents examples of the uses of MTL.…”
Section: Background and Related Work (mentioning)
confidence: 99%
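Hard parameter sharing as described here (shared hidden layers, task-specific output layers, and a joint loss over all tasks) can be sketched in a few lines of PyTorch. The task names, layer sizes, and unweighted loss sum below are illustrative assumptions, not the setup of Zhang et al. (2022) or Caruana (1997).

```python
import torch
import torch.nn as nn

shared = nn.Sequential(               # hidden layers shared by every task
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
)
heads = nn.ModuleDict({               # one output layer per task
    "classification": nn.Linear(256, 10),
    "regression": nn.Linear(256, 1),
})
criteria = {
    "classification": nn.CrossEntropyLoss(),
    "regression": nn.MSELoss(),
}
optimizer = torch.optim.Adam(
    list(shared.parameters()) + list(heads.parameters()), lr=1e-3
)

def train_step(batches):
    """batches: dict mapping task name -> (inputs, targets)."""
    optimizer.zero_grad()
    total_loss = torch.zeros(())
    for task, (x, y) in batches.items():
        features = shared(x)                            # shared representation
        loss = criteria[task](heads[task](features), y)
        total_loss = total_loss + loss                  # unweighted sum of task losses
    total_loss.backward()   # gradients from all tasks flow into the shared layers
    optimizer.step()
    return total_loss.item()

# Usage with random data:
batches = {
    "classification": (torch.randn(16, 128), torch.randint(0, 10, (16,))),
    "regression": (torch.randn(16, 128), torch.randn(16, 1)),
}
train_step(batches)
```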