2019
DOI: 10.48550/arxiv.1904.04153
Preprint

AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning

Abstract: Multi-task learning (MTL) has achieved success over a wide range of problems, where the goal is to improve the performance of a primary task using a set of relevant auxiliary tasks. However, when the usefulness of the auxiliary tasks w.r.t. the primary task is not known a priori, the success of MTL models depends on the correct choice of these auxiliary tasks and also a balanced mixing ratio of these tasks during alternate training. These two problems could be resolved via manual intuition or hyper-parameter tuning…
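The paper's first stage frames auxiliary task selection as a Beta-Bernoulli multi-armed bandit solved with Thompson sampling: each arm is a candidate task, and the reward reflects whether training on it helps the primary task. Below is a minimal sketch of that selection loop; the `reward_fn` signature and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def thompson_task_selection(tasks, reward_fn, steps=1000, seed=0):
    """Beta-Bernoulli Thompson sampling over candidate auxiliary tasks.

    tasks:     list of task identifiers (one bandit arm per task)
    reward_fn: callable(task) -> 0 or 1; e.g. 1 if a training step on
               the task improved the primary task's validation metric
               (an assumed reward signal, for illustration only)
    Returns the posterior-mean utility estimate for each task.
    """
    rng = np.random.default_rng(seed)
    alpha = np.ones(len(tasks))  # Beta posterior: success counts + 1
    beta = np.ones(len(tasks))   # Beta posterior: failure counts + 1
    for _ in range(steps):
        # Draw one utility sample per arm from its Beta posterior,
        # then pick the arm whose sample is highest (Thompson sampling).
        theta = rng.beta(alpha, beta)
        i = int(np.argmax(theta))
        r = reward_fn(tasks[i])   # observe a Bernoulli reward
        alpha[i] += r
        beta[i] += 1 - r
    return {t: a / (a + b) for t, a, b in zip(tasks, alpha, beta)}
```

Tasks whose posterior mean stays low after many draws would be dropped; the paper's second stage then tunes the mixing ratio of the selected tasks (with a Gaussian-Process-based search in the original work).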

Cited by 5 publications (7 citation statements)
References 32 publications (55 reference statements)
“…Neural networks are prone to overfitting: they tend to memorize even random labels in the training set, or to focus mostly on obvious data features. Auxiliary tasks help models learn features that are overlooked but meaningful. The choice of which auxiliary tasks to use, and how to avert negative transfer, has been an active area of research in multi-task learning [5,19,20].…”
Section: Self-supervision Tasks and Auxiliary Task Learning (mentioning)
confidence: 99%
“…(3) Efficient intermediate task selection for pre-finetuning. Based on the transferability among different tasks, researchers have also explored how to efficiently choose the most appropriate combinations of intermediate tasks from an abundance of candidates, through embedding-based methods [535], manually defined features [532], task gradients [536], and Beta-Bernoulli multi-armed bandits [537]. (4) The power of scale for multi-task pre-finetuning.…”
Section: Multi-task Learning (mentioning)
confidence: 99%
“…Optimization techniques. Guo et al. (2019) use ideas from the multi-armed bandit literature to develop a method for weighting each task. Compared to their method, our SVD-based method is conceptually simpler and requires much less computation.…”
Section: Related Work (mentioning)
confidence: 99%
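On "weighting each task": however the weights are obtained (a bandit as in Guo et al. (2019), the citing paper's SVD-based method, or manual tuning), alternate MTL training commonly realizes them by sampling one task per update step in proportion to its weight. A minimal, hypothetical sketch follows, where `train_step` and the weight values are assumptions rather than either paper's code.

```python
import numpy as np

def alternate_training(weights, train_step, total_steps=10_000, seed=0):
    """Sample one task per update, proportionally to its mixing weight.

    weights:    dict mapping task -> nonnegative mixing weight
    train_step: callable(task) performing one gradient update on a
                mini-batch from that task (shared encoder, task-specific
                head); assumed to exist, for illustration
    """
    rng = np.random.default_rng(seed)
    tasks = list(weights)
    p = np.array([weights[t] for t in tasks], dtype=float)
    p /= p.sum()  # normalize the weights into a mixing ratio
    for _ in range(total_steps):
        # Pick one task per step according to the mixing ratio.
        task = tasks[rng.choice(len(tasks), p=p)]
        train_step(task)
```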