1997
DOI: 10.1023/a:1007379606734
Multitask Learning

Abstract: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new resu…
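To make the shared-representation idea from the abstract concrete, the sketch below builds a small backprop net with one hidden layer shared across tasks and a separate output head per task, so every task's training signal shapes the common representation. It is a minimal illustration assuming PyTorch; the layer sizes, task count, and names are invented for the example and are not Caruana's original setup.

```python
# Minimal multitask-learning sketch: a shared hidden layer with one
# output head per task. Sizes and data are illustrative assumptions.
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self, n_inputs=16, n_hidden=32, n_tasks=3):
        super().__init__()
        # Hidden layer shared by all tasks: the locus of inductive transfer.
        self.shared = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
        # One output head per task, trained in parallel.
        self.heads = nn.ModuleList([nn.Linear(n_hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.shared(x)
        return [head(h) for head in self.heads]

# Training minimizes the sum of per-task losses, so each task's error
# signal backpropagates into the shared representation.
net = MultitaskNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
x = torch.randn(8, 16)                            # toy input batch
targets = [torch.randn(8, 1) for _ in range(3)]   # one target per task
opt.zero_grad()
loss = sum(loss_fn(out, t) for out, t in zip(net(x), targets))
loss.backward()
opt.step()
```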

Cited by 4,340 publications (647 citation statements)
References 34 publications

Citation statements, ordered by relevance:
“…Our work is mainly related to lifelong learning and multi-task learning (Thrun, 1998; Caruana, 1997; Chen and Liu, 2014b; Silver et al., 2013). Existing lifelong learning approaches focused on exploiting invariances (Thrun, 1998) and other types of knowledge (Chen and Liu, 2014b; Chen and Liu, 2014a; Ruvolo and Eaton, 2013) across multiple tasks.…”
Section: Related Work
confidence: 99%
“…Although many machine learning topics and techniques are related to LL, e.g., lifelong learning (Thrun, 1998; Chen and Liu, 2014b; Silver et al., 2013), transfer learning (Jiang, 2008; Pan and Yang, 2010), multi-task learning (Caruana, 1997), never-ending learning (Carlson et al., 2010), self-taught learning (Raina et al., 2007), and online learning (Bottou, 1998), there is still no unified definition for LL.…”
Section: Introduction
confidence: 99%
“…This approach is proposed to be complemented as follows. The weights of the j-th hidden layer obtained after auto-encoder pretraining can be refined by training a network with one hidden layer on the original or a similar problem, using the transfer learning concept [7–9]. That is, we use the weights W, obtained in the second step of the j-th stage of the algorithm, to initialize a network with one hidden layer and an output layer of size m, and train that network on {x_i, y_i}, i = 1…”
Section: Algorithm Description
confidence: 99%
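The statement above describes a two-step recipe: pretrain a hidden layer as an auto-encoder, then copy those weights into a one-hidden-layer network and fine-tune it on the original (or a similar) problem. The sketch below illustrates that recipe under stated assumptions; the variable names, layer sizes, optimizer, and toy data are invented for the example and are not taken from the cited paper.

```python
# Sketch of refining auto-encoder-pretrained weights via transfer
# learning, as described in the snippet above; names/sizes are assumed.
import torch
import torch.nn as nn

n_in, n_hidden, m = 20, 10, 4   # m = output layer size from the snippet

# Step 1: pretrain one hidden layer as an auto-encoder (reconstruct input).
encoder = nn.Linear(n_in, n_hidden)
decoder = nn.Linear(n_hidden, n_in)
ae_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.randn(64, n_in)  # unlabeled toy data
for _ in range(100):
    ae_opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(torch.relu(encoder(x))), x)
    loss.backward()
    ae_opt.step()

# Step 2: initialize a one-hidden-layer network with the pretrained
# weights W and refine it on the original (or a similar) problem.
net = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(), nn.Linear(n_hidden, m))
net[0].load_state_dict(encoder.state_dict())  # transfer the pretrained W
y = torch.randn(64, m)  # labeled toy targets
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(net(x), y).backward()
    opt.step()
```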
“…There are many methods to define the correlation among multiple tasks [7–12]. One important way is to assume that each task shares common features.…”
Section: Introduction
confidence: 99%
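One common formalization of the shared-common-features assumption mentioned above is joint feature selection with an L2,1 penalty, which shrinks entire rows of the task weight matrix to zero so that all tasks select the same feature subset. The sketch below shows one illustrative instance, proximal gradient descent on multi-task least squares; the data, step size, and regularization strength are assumptions for the example, not taken from the cited works.

```python
# Sketch of the shared-feature assumption: multi-task linear regression
# with an L2,1 penalty that zeroes whole rows of W, so every task uses
# the same selected features. Data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 100, 30, 3               # samples, features, tasks
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, T))    # toy targets, one column per task

W = np.zeros((d, T))
step, lam = 1e-3, 1.0
for _ in range(500):
    G = X.T @ (X @ W - Y) / n      # gradient of the squared loss
    V = W - step * G
    # Proximal step for lam * sum_j ||W[j, :]||_2 (row-wise shrinkage):
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    W = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12)) * V

shared = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
print("features selected jointly across all tasks:", shared)
```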