2015
DOI: 10.1007/978-3-319-23525-7_8

Multi-Task Learning with Group-Specific Feature Space Sharing

Abstract: When faced with learning a set of inter-related tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance. Multi-Task Learning (MTL) exploits the latent relations between tasks and overcomes data scarcity limitations by co-learning all these tasks simultaneously to offer improved performance. We propose a novel Multi-Task Multiple Kernel Learning framework based on Support Vector Machines for binary classification tasks. By considering pair-wise ta…
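The abstract describes a Multi-Task Multiple Kernel Learning setup built on SVMs for binary classification tasks. As a rough illustration of the general idea only (the paper's actual pair-wise formulation is truncated above and is not reproduced here), the sketch below trains one SVM per task on a convex combination of base kernels shared by all tasks; the base kernels, the fixed mixing weights, and the toy data are all assumptions made for the example.

```python
# Illustrative sketch: per-task SVMs over a shared multiple-kernel combination.
# The base RBF kernels, the fixed mixing weights `mu`, and the synthetic tasks
# are assumptions for illustration, not the cited paper's formulation.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Three related binary tasks drawn from shifted versions of the same concept.
tasks = []
for shift in (0.0, 0.3, 0.6):
    X = rng.normal(size=(60, 5)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    tasks.append((X, y))

gammas = [0.1, 1.0, 10.0]        # base RBF kernels shared by all tasks
mu = np.array([0.2, 0.5, 0.3])   # shared combination weights (fixed here; an
                                 # MT-MKL method would learn them jointly)

def combined_kernel(A, B):
    """Convex combination of the shared base kernels."""
    return sum(m * rbf_kernel(A, B, gamma=g) for m, g in zip(mu, gammas))

models = []
for X, y in tasks:
    K = combined_kernel(X, X)                   # precomputed Gram matrix
    models.append((SVC(kernel="precomputed", C=1.0).fit(K, y), X))

# Sanity check: training accuracy of the first task's model.
clf0, X0 = models[0]
print(clf0.score(combined_kernel(X0, X0), tasks[0][1]))
```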

Cited by 7 publications (4 citation statements)
References 17 publications
“…[46] provides a bound on the empirical GRC of the aforementioned hypothesis space. However, similar to the proof of Corollary 21, we can easily convert it to a distribution dependent GRC bound which matches our global bound in (56) (for the defined hypothesis space (65)) and in our notation reads as…”
Section: Comparisons To Related Work (mentioning)
confidence: 90%
“…At the core of each MTL formulation lies a mechanism that encodes task relatedness into the learning problem [24]. Such relatedness mechanism can always be thought of as jointly constraining the tasks' hypothesis spaces, so that their geometry is mutually coupled, e.g., via a block norm constraint [65]. Thus, from a regularization perspective, the tasks mutually regularize their learning based on their inter-task relatedness.…”
Section: Introduction (mentioning)
confidence: 99%
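The block-norm coupling mentioned in the quote above can be written, in one common form, as an L_{2,1} regularizer over the stacked task weight matrix. The exact constraint used in the cited reference [65] is not reproduced here, so the following is only an illustrative formulation under standard MTL notation.

```latex
% Illustrative L_{2,1} block-norm coupling of T tasks (assumed notation):
% W = [w_1, \dots, w_T] stacks the task weight vectors, \ell is a binary loss.
\min_{W} \; \sum_{t=1}^{T} \frac{1}{n_t} \sum_{i=1}^{n_t}
    \ell\bigl(y_{ti}, \langle w_t, \phi(x_{ti}) \rangle\bigr)
  \;+\; \lambda \, \|W\|_{2,1},
\qquad
\|W\|_{2,1} \;=\; \sum_{j} \Bigl(\sum_{t=1}^{T} w_{tj}^{2}\Bigr)^{1/2}
```

Because the norm sums, over features, the Euclidean norm taken across tasks, shrinking a feature's weight toward zero in one task pushes it toward zero in all tasks, which is one way the tasks' hypothesis spaces become mutually coupled.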
“…Nair et al [25] evaluated the Spark primary architecture running ML algorithms and reported extensive results that emphasized Spark's advantages. Other groups [26,27] have employed Apache Spark for Twitter data sentiment analysis, and there have been many important contributions to large data collection analysis (interested readers are referred to [28][29][30][31] for further details).…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…As shown by the past empirical works [5,1,2,31,17], it is beneficial to learn multiple related tasks simultaneously instead of independently as typically done in practice. A commonly utilized information sharing strategy for Multi-Task Learning (MTL) is to use a (partially) common feature mapping φ to map the data from all tasks to a (partially) shared feature space H. Such a method, named kernel-based MTL, not only allows information sharing across tasks, but also enjoys the non-linearity that is brought by the feature mapping φ.…”
Section: Introduction (mentioning)
confidence: 99%
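The shared-feature-map idea quoted above can be sketched concretely: map every task's data through one common (approximate) kernel feature map φ into a shared space H, then fit a separate linear model per task in that space. The sketch below uses a fully shared map for simplicity; the RBFSampler approximation, the toy data, and the hyper-parameters are assumptions for illustration, not the cited paper's method.

```python
# Minimal sketch of kernel-based MTL with a shared feature map phi:
# all tasks are embedded into the same approximate RBF feature space H and a
# task-specific linear SVM is trained there. Everything below is illustrative.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Two related binary tasks in the same input space.
def make_task(shift, n=80):
    X = rng.normal(size=(n, 4)) + shift
    y = (np.sin(X[:, 0]) + X[:, 1] > shift).astype(int)
    return X, y

tasks = [make_task(0.0), make_task(0.5)]

# Shared feature mapping phi: X -> H, fit once and reused by every task.
phi = RBFSampler(gamma=0.5, n_components=200, random_state=0)
phi.fit(np.vstack([X for X, _ in tasks]))

# Task-specific linear models operating in the shared space H.
models = [LinearSVC(C=1.0, max_iter=5000).fit(phi.transform(X), y)
          for X, y in tasks]

for t, ((X, y), clf) in enumerate(zip(tasks, models)):
    print(f"task {t}: train accuracy = {clf.score(phi.transform(X), y):.2f}")
```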