2014
DOI: 10.1002/sam.11226
Multi‐transfer: Transfer learning with multiple views and multiple sources

Abstract: Transfer learning, which aims to help learning tasks in a target domain by leveraging knowledge from auxiliary domains, has been demonstrated to be effective in applications such as text mining and sentiment analysis. In addition, in many real‐world applications, auxiliary data are described from multiple perspectives and usually carried by multiple sources. For example, to help classify videos on Youtube, which include three perspectives: image, voice and subtitles, one may borrow data from …

Cited by 31 publications (12 citation statements)
References 18 publications
“…Multitask Learning Frameworks for learning representations across two different sources within the same domain follow multitask learning (Caruana, 1997). The ability to utilize knowledge from various sources compensates for missing data and complements existing meta-data (Tan et al, 2013;Ding et al, 2014), thus allowing for effective sharing of task-invariant features (Caruana, 1997;Zhang and Wang, 2016;Zhang et al, 2018). MTL has been utilized for name error recognition (Cheng et al, 2015), tagging-chunking (Collobert et al, 2011), machine translation (Luong et al, 2015) and relation extraction (Gupta et al, 2016).…”
Section: Related Work
confidence: 99%
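The shared-representation idea in the excerpt above (one set of task-invariant features trained jointly, with per-task outputs on top) can be sketched as hard parameter sharing. This is a minimal illustrative toy, not any cited paper's model: the data, dimensions, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy regression tasks generated from the same hidden features,
# so one shared transform plus per-task heads can fit both.
X = rng.normal(size=(256, 4))
H = X @ rng.normal(size=(4, 3))            # hidden "task-invariant" features
y1 = H @ np.array([1.0, -1.0, 0.5])        # task-1 targets
y2 = H @ np.array([0.2, 0.7, -0.3])        # task-2 targets

# Hard parameter sharing: one shared layer W, two task-specific heads.
W = 0.1 * rng.normal(size=(4, 3))
h1 = 0.1 * rng.normal(size=3)
h2 = 0.1 * rng.normal(size=3)
lr = 0.01
for _ in range(5000):
    Z = X @ W                              # shared representation
    e1, e2 = Z @ h1 - y1, Z @ h2 - y2
    # Both task losses send gradients into the shared weights W.
    gW = X.T @ (np.outer(e1, h1) + np.outer(e2, h2)) / len(X)
    g1 = Z.T @ e1 / len(X)
    g2 = Z.T @ e2 / len(X)
    W -= lr * gW
    h1 -= lr * g1
    h2 -= lr * g2

mse1 = np.mean((X @ W @ h1 - y1) ** 2)
mse2 = np.mean((X @ W @ h2 - y2) ** 2)
```

Because the two heads pull the shared layer toward features useful for both tasks, each task regularizes the other, which is the effect the excerpt attributes to MTL.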
“…These approaches can be expressed as instance based or feature based. Instance‐based transfer approaches (in which a portion of the sample points from the source domain is reused in the target domain) have been developed, which use boosting (Dai, Yang, Xue, & Yu, 2007; Pardoe & Stone, 2010), multiple input sources (Tan, Zhong, Xiang, & Yang, 2014), and reweighting approaches based on the covariate shift setting (Cortes & Mohri, 2014; Gretton et al., 2009; Huang et al., 2007; Sugiyama, Nakajima, Kashima, Buenau, & Kawanabe, 2008). In feature‐based transfer approaches, the source and target data are mapped into a space where the information shared by both can be applied to the target domain (Argyriou, Evgeniou, & Pontil, 2007, 2008).…”
Section: Literature Review
confidence: 99%
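The instance-reweighting idea under covariate shift mentioned above can be sketched with a common density-ratio trick: train a discriminator between source and target inputs and use its odds as importance weights. Everything below (the 1-D Gaussian domains, the hand-rolled logistic regression) is an illustrative assumption, not the specific estimators of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source and target samples under covariate shift: the target inputs
# are shifted relative to the source inputs.
X_src = rng.normal(0.0, 1.0, size=(500, 1))
X_tgt = rng.normal(1.0, 1.0, size=(500, 1))

# Logistic discriminator between source (label 0) and target (label 1);
# with equal sample sizes, exp(logit(x)) estimates p_tgt(x) / p_src(x).
X = np.vstack([X_src, X_tgt])
y = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
Xb = np.hstack([X, np.ones((len(X), 1))])      # add a bias column

w = np.zeros(Xb.shape[1])
for _ in range(2000):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

# Importance weight per source example: up-weights source points that
# look as if they were drawn from the target distribution.
logits = np.hstack([X_src, np.ones((len(X_src), 1))]) @ w
weights = np.exp(logits).ravel()
```

A weighted learner trained on `(X_src, weights)` then approximates training on target-distributed inputs, which is the goal of the reweighting approaches cited above.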
“…These algorithms reduce negative transfer by increasing the number of sources. Tan et al [28] presented a novel algorithm to leverage knowledge from different views and sources collaboratively, letting different views from different sources complement each other through a co-training style framework to reduce the differences in distribution across domains. Beyond transferring the source data, Zhuang et al [29] discovered a more powerful feature representation of the data when transferring knowledge from multiple source domains to the target domain.…”
Section: Fuzzy Multiple-source Transfer Learning
confidence: 99%
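The co-training style described in the excerpt (views teaching each other via confident pseudo-labels) can be sketched in miniature. This is classic two-view co-training on synthetic data, an illustrative assumption rather than Tan et al.'s actual multi-source algorithm; the threshold classifier and confidence rule are deliberately trivial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "views" of one binary task: each view is a 1-D feature whose sign
# tends to agree with the label, with independent noise per view.
n_lab, n_unlab = 20, 200
y_lab = rng.integers(0, 2, n_lab)
y_unlab = rng.integers(0, 2, n_unlab)
view = lambda y: (2 * y - 1) + rng.normal(0, 0.5, len(y))
lab = [view(y_lab), view(y_lab)]               # labeled data, views 0 and 1
unlab = [view(y_unlab), view(y_unlab)]         # unlabeled pool, both views

# Per-view classifier: threshold halfway between the two class means.
def fit(x, y):
    return (x[y == 0].mean() + x[y == 1].mean()) / 2.0

feats = [lab[0].copy(), lab[1].copy()]
labels = [y_lab.copy(), y_lab.copy()]
pool = np.arange(n_unlab)
for _ in range(5):
    for v in (0, 1):
        t = fit(feats[v], labels[v])
        conf = np.abs(unlab[v][pool] - t)      # distance from threshold
        pick = pool[np.argsort(conf)[-10:]]    # 10 most confident points
        pseudo = (unlab[v][pick] > t).astype(int)
        o = 1 - v                              # teach the *other* view
        feats[o] = np.concatenate([feats[o], unlab[o][pick]])
        labels[o] = np.concatenate([labels[o], pseudo])
        pool = np.setdiff1d(pool, pick)

t0 = fit(feats[0], labels[0])
acc = np.mean((unlab[0][pool] > t0).astype(int) == y_unlab[pool])
```

Each view confidently labels points the other view may find ambiguous, so the two classifiers bootstrap each other from a small labeled seed, the same mechanism the cited framework uses to align views across source domains.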