2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
DOI: 10.1109/apsipa.2015.7415532

Transfer learning for speech and language processing

Abstract: Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual) and has traditionally been studied under the name of 'model adaptation'. Recent advances in deep learning show that transfer learni…
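
The abstract's example, adapting an acoustic model from one language to another, is typically realized by reusing the trained feature layers and re-training only a small language-specific part. Below is a minimal sketch in a PyTorch-style setup; the model architecture, layer sizes, and phone counts are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    def __init__(self, n_phones, feat_dim=40, hidden=512):
        super().__init__()
        # Shared feature layers, expected to transfer across languages.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Language-specific output layer over phone targets.
        self.classifier = nn.Linear(hidden, n_phones)

    def forward(self, x):
        return self.classifier(self.encoder(x))

# Source model trained on a resource-rich language (hypothetical 100 phones).
source = AcousticModel(n_phones=100)
# ... train `source` on the source language here ...

# Target model for a low-resource language with a different phone set.
target = AcousticModel(n_phones=60)
target.encoder.load_state_dict(source.encoder.state_dict())

# Freeze the transferred encoder and re-train only the new output layer,
# so little or no target-language data is needed beyond this step.
for p in target.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target.classifier.parameters(), lr=1e-3)
```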

Cited by 180 publications (107 citation statements)
References 123 publications (149 reference statements)

“…In natural language processing, transfer learning methods have been successfully applied to speech recognition, document classification and sentiment analysis (Wang and Zheng, 2015). Deep learning models discover multiple levels of representation, some of which may be useful across tasks, which makes them particularly suited to transfer learning (Bengio, 2012).…”
Section: Transfer Learning (mentioning)
confidence: 99%
“…Ideally, transfer learning improves generalization of the model, reduces training times on the target dataset, and reduces the amount of labeled data needed to obtain high performance. The idea has been successfully applied to many fields, such as speech recognition (Wang and Zheng, 2015), finance (Stamate et al., 2015) and computer vision (Zeiler and Fergus, 2013; Yosinski et al., 2014; Oquab et al., 2014). Despite its popularity, few studies have been performed on transfer learning for DNN-based models in the field of natural language processing (NLP).…”
Section: Introduction (mentioning)
confidence: 99%
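
The computer-vision work cited in this passage (notably Yosinski et al., 2014) measures how transferability falls off with layer depth; in practice this becomes a choice of how many early layers to copy before fine-tuning. A hedged sketch under the same PyTorch-style assumptions as above; the helper name `transfer_first_k` and the layer shapes are hypothetical:

```python
import copy
import torch.nn as nn

def transfer_first_k(source_layers, target_layers, k, freeze=True):
    """Copy the first k layers from a trained source network into a
    target network, optionally freezing them (Yosinski-style transfer)."""
    for i in range(k):
        target_layers[i] = copy.deepcopy(source_layers[i])
        if freeze:
            for p in target_layers[i].parameters():
                p.requires_grad = False
    return target_layers

# Illustrative networks: deeper layers are more task-specific, so a small
# k transfers generic features while leaving room to specialize on the
# target task.
source = nn.ModuleList([nn.Linear(40, 512), nn.Linear(512, 512), nn.Linear(512, 512)])
target = nn.ModuleList([nn.Linear(40, 512), nn.Linear(512, 512), nn.Linear(512, 512)])
target = transfer_first_k(source, target, k=2)
```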
“…This opens the opportunity to utilize large amounts of data for a related task to improve performance across all tasks. There has been recent work in NLP demonstrating improved performance for machine translation (Dong et al., 2015) and syntactic parsing.…”
Section: Related Work (mentioning)
confidence: 99%
“…There are several kinds of transfer learning. The predominant one applied to ASR is heterogeneous transfer learning (Wang and Zheng, 2015), which involves training a base model on multiple languages (and tasks) simultaneously. While this achieves some competitive results (Chen and Mak, 2015; Knill et al., 2014), it still requires large amounts of data to yield robust improvements (Heigold et al., 2013).…”
Section: Related Work (mentioning)
confidence: 99%
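
Heterogeneous transfer as described in this passage, one base model trained on several languages at once, is commonly implemented as a shared encoder with per-language output heads. A minimal sketch under the same PyTorch-style assumptions; the two languages and their phone counts are hypothetical:

```python
import torch
import torch.nn as nn

class MultilingualAcousticModel(nn.Module):
    """Shared encoder with one output head per language, trained jointly."""
    def __init__(self, phone_counts, feat_dim=40, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleDict(
            {lang: nn.Linear(hidden, n) for lang, n in phone_counts.items()}
        )

    def forward(self, x, lang):
        return self.heads[lang](self.encoder(x))

model = MultilingualAcousticModel({"en": 100, "sw": 60})
loss_fn = nn.CrossEntropyLoss()
# Joint training: alternate minibatches across languages so the shared
# encoder sees data from every language.
x = torch.randn(8, 40)          # dummy acoustic feature frames
y = torch.randint(0, 60, (8,))  # dummy phone targets for "sw"
loss = loss_fn(model(x, "sw"), y)
loss.backward()
```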