2019
DOI: 10.1109/access.2019.2943604
A Comparison of Loss Weighting Strategies for Multi-Task Learning in Deep Neural Networks

Abstract: With the success of deep learning in a wide variety of areas, many deep multi-task learning (MTL) models have been proposed claiming improvements in performance obtained by sharing the learned structure across several related tasks. However, the dynamics of multi-task learning in deep neural networks is still not well understood at either the theoretical or experimental level. In particular, the usefulness of different task pairs is not known a priori. Practically, this means that properly combining the losses…
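The abstract's point about properly combining per-task losses is the core of loss weighting in MTL. As a minimal illustrative sketch (not the paper's implementation), the snippet below combines two task losses with fixed weights in a hard-parameter-sharing PyTorch model; the architecture, the two task heads, and the weights w1 and w2 are assumptions for illustration only.

```python
# Hedged sketch: fixed-weight combination of per-task losses in a
# hard-parameter-sharing multi-task model. All names and weights are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class TwoTaskNet(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, 1)   # task A: regression
        self.head_b = nn.Linear(hidden, 3)   # task B: 3-class classification

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h).squeeze(-1), self.head_b(h)

model = TwoTaskNet()
x = torch.randn(8, 16)
y_a = torch.randn(8)
y_b = torch.randint(0, 3, (8,))

pred_a, pred_b = model(x)
loss_a = nn.functional.mse_loss(pred_a, y_a)
loss_b = nn.functional.cross_entropy(pred_b, y_b)

# Static loss weights; how to choose or adapt such weights is exactly
# the question the paper's comparison of weighting strategies addresses.
w1, w2 = 0.7, 0.3
total_loss = w1 * loss_a + w2 * loss_b
total_loss.backward()
```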

Cited by 90 publications (65 citation statements)
References 12 publications (22 reference statements)
“…47 However, these two methods achieved poor results owing to the fact that they considered only either loss intensity or task difficulty. 48 To overcome the heterogeneity (also called in-homogeneity) 49 of the hippocampus, T1-weighted MRIs with and without contrast were adopted in the training phase. Experimental results showed that using multimodality MRIs surpassed using single-modality MRIs by a significant margin in terms of Dice, HD, and AVD, which suggested that multimodality MRIs provided complementary contextual information.…”
Section: Discussion (mentioning)
confidence: 99%
“…This means that human knowledge of the application domain and of the various task-sharing methods is necessary to find the best method for a given application. It has been found that MTL is unlikely to improve performance unless the tasks and the weighting strategies are carefully chosen ( Gong et al., 2019 ). Continued research is needed on optimal strategies for choosing and balancing tasks.…”
Section: Methods To Integrate Human Knowledge (mentioning)
confidence: 99%
“…To achieve this, we designed the custom loss function to optimize the ratio between accuracy and under-estimations. The intuition behind the design of the custom loss function stems from multi-task learning [16]; we used the weighted average of the two loss functions used in previous work [25]. Our objective is to increase the accuracy of our predictions while maintaining a low level of under-estimations.…”
Section: Introduction (mentioning)
confidence: 99%
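The excerpt above describes a weighted average of two loss terms, one rewarding overall accuracy and one discouraging under-estimation. The sketch below illustrates that general idea only; the specific terms (MSE plus a one-sided penalty) and the weight alpha are assumptions for illustration, not the cited work's exact formulation.

```python
# Hedged sketch: weighted average of an accuracy term and an
# under-estimation penalty. Terms and alpha are illustrative assumptions.
import torch

def combined_loss(pred, target, alpha=0.5):
    accuracy_term = torch.mean((pred - target) ** 2)       # standard MSE
    under_est = torch.clamp(target - pred, min=0.0)        # nonzero only when pred < target
    under_estimation_term = torch.mean(under_est ** 2)     # extra cost for under-estimates
    return alpha * accuracy_term + (1.0 - alpha) * under_estimation_term

pred = torch.tensor([2.0, 3.5, 1.0], requires_grad=True)
target = torch.tensor([2.5, 3.0, 1.5])
loss = combined_loss(pred, target)
loss.backward()
```

Lowering alpha shifts the trade-off toward avoiding under-estimation at some cost in overall accuracy, which mirrors the ratio the excerpt says the custom loss is meant to optimize.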