2017
DOI: 10.48550/arxiv.1702.08303
Preprint

Identifying beneficial task relations for multi-task learning in deep neural networks

Cited by 25 publications (28 citation statements)
References 0 publications
“…Two early concurrent works of learning to group tasks are (Alonso and Plank, 2016) and (Bingel and Søgaard, 2017).…”
Section: Grouping Tasks · mentioning
confidence: 99%
“…Then, either the parameters of the models are encouraged to be similar by regularizing a distance [29,30], or the knowledge of the tasks is linearly combined to produce an output for each task [31]. Other works try to improve multi-task performance by addressing what to share [32,33,34], which tasks to train together [35,36], or by inferring task-specific model weights [37,30,38]. Recently, transference has been proposed to analyze information transfer in a general MTL framework [2].…”
Section: Related Work · mentioning
confidence: 99%
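The distance-regularization idea cited above [29,30] can be sketched minimally: each task keeps its own parameters, and a penalty on the distance between parameter vectors softly ties the models together. The function names and the plain L2 penalty below are illustrative assumptions, not the exact formulation of any cited work.

```python
import numpy as np

def soft_sharing_penalty(params_a, params_b, strength=0.1):
    """Squared-L2 penalty encouraging two task models' parameters
    to stay close (soft parameter sharing); `strength` is a
    hypothetical regularization weight."""
    return strength * float(np.sum((params_a - params_b) ** 2))

def joint_objective(loss_a, loss_b, params_a, params_b, strength=0.1):
    # Joint MTL objective: sum of per-task losses plus the
    # closeness penalty between the two parameter sets.
    return loss_a + loss_b + soft_sharing_penalty(
        params_a, params_b, strength
    )
```

With identical parameter vectors the penalty vanishes and the objective reduces to the sum of the per-task losses; increasing `strength` pushes the setup toward hard parameter sharing.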
“…On the problem of identifying task relatedness, Ben-David and Schuller (2003) provided a formal framework for task relatedness and derived generalization error bounds for learning multiple tasks. Bingel and Søgaard (2017) explored task relatedness by exhaustively experimenting with all possible two-task tuples in a non-automated multi-task setup. Other related works explored data selection, where the goal is to select or reorder the examples from one or more domains (usually in a single task) to either improve training efficiency or enable better transfer learning.…”
Section: Related Work · mentioning
confidence: 99%
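The exhaustive pairwise setup described in the last quote amounts to enumerating every unordered two-task tuple, training one multi-task model per pair, and comparing scores. A minimal sketch of that enumeration step, with an illustrative helper for picking each task's best partner (the score dictionary and function names are assumptions for demonstration):

```python
from itertools import combinations

def enumerate_task_pairs(tasks):
    """All unordered two-task tuples, as in an exhaustive
    pairwise multi-task study."""
    return list(combinations(tasks, 2))

def best_partners(pair_scores):
    """Given {(task_a, task_b): score}, return each task's
    highest-scoring partner as {task: (partner, score)}."""
    best = {}
    for (a, b), score in pair_scores.items():
        for main, aux in ((a, b), (b, a)):
            if main not in best or score > best[main][1]:
                best[main] = (aux, score)
    return best
```

For n tasks this yields n·(n−1)/2 pairs, which is why the non-automated exhaustive approach only scales to small task inventories.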