Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '02), 2002
DOI: 10.1145/775107.775111
A theoretical framework for learning from a pool of disparate data sources

Cited by 5 publications (8 citation statements); references 0 publications. Citing publications span 2004–2014.
“…For example: a) the case of having the same outputs ("y's") and different inputs ("x's"), which corresponds to the problem of integrating information from heterogeneous databases [7]; or, b) the case of multi-modal learning or learning by components, where the (x, y) data for each of the tasks do not belong to the same space X × Y, but data for task t come from a space X_t × Y_t; this is, for example, the machine-vision case of learning to recognize a face by first learning to recognize parts of the face, such as the eyes, mouth, and nose [14]. Each of these related tasks can be learned using images of different sizes (or different representations).…”
Section: Notation and Setup
confidence: 99%
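A minimal formalization of case (b) may help (a sketch; the shared space Z, the maps φ_t, the predictor g, and the assumption of a common output space Y are illustrative, not taken from the excerpt): each task t has its own hypothesis h_t : X_t → Y, and task relatedness can be modeled by factoring every hypothesis through a common representation,

h_t = g \circ \phi_t, \qquad \phi_t : X_t \to Z \ \text{(task-specific)}, \qquad g : Z \to Y \ \text{(shared)}.

Under such a factorization, data from all tasks jointly constrain the shared component g, which is one sense in which learning related tasks together can outperform learning each task in isolation.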
“…There has been a lot of experimental work showing the benefits of such multi-task learning relative to individual task learning when tasks are related, see [4,11,15,22]. There have also been various attempts to theoretically study multi-task learning, see [4,5,6,7,8,15,23].…”
Section: Introduction
confidence: 99%
“…Examples include: (1) handwritten digit recognition, where training examples are provided by several persons; (2) medical diagnosis, where a predictive (diagnostic) model, say for lung cancer, is estimated using a training data set of male and female patients; etc. Incorporating this additional information has led to approaches known as Multi-Task Learning [1,2,6,10] and, more recently, to Learning with Structured Data (a.k.a. SVM+) [9], as discussed next.…”
Section: Introduction
confidence: 99%
“…On the other hand, if the group labels of future test samples are given, the problem is a Multi-Task Learning (MTL) problem [1,2,6,8]. The goal in multi-task learning is to find t related mapping functions {h_1, h_2, …, h_t} so that the sum of expected losses for each task…”
Section: Introduction
confidence: 99%
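The excerpt breaks off mid-sentence; a standard way to write the objective it describes (a sketch under common MTL assumptions, with per-task data distributions P_i and a loss function ℓ that the excerpt does not name) is

\min_{h_1, \dots, h_t} \ \sum_{i=1}^{t} \mathbb{E}_{(x,y) \sim P_i} \big[ \ell(h_i(x), y) \big],

i.e., the t mapping functions are chosen jointly so that the sum of the tasks' expected losses is minimized, rather than each task's expected loss being minimized in isolation.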