2018
DOI: 10.1109/tnnls.2018.2804895
Optimizing Kernel Machines Using Deep Learning

Abstract: Building highly nonlinear and nonparametric models is central to several state-of-the-art machine learning systems. Kernel methods form an important class of techniques that induce a reproducing kernel Hilbert space (RKHS) for inferring non-linear models through the construction of similarity functions from data. These methods are particularly preferred in cases where the training data sizes are limited and when prior knowledge of the data similarities is available. Despite their usefulness, they are limited b…
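As a generic illustration of the similarity functions the abstract refers to (not the paper's specific construction), a kernel machine's notion of similarity can be sketched with a Gaussian RBF kernel, whose pairwise evaluations form a positive semidefinite Gram matrix defining the RKHS inner products. The function name and parameters below are illustrative assumptions:

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2) over the rows of X.

    This is a standard RBF similarity, shown only as a sketch of what a
    kernel-induced similarity function looks like.
    """
    sq_norms = np.sum(X ** 2, axis=1)
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))  # clamp tiny negatives from round-off

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = rbf_gram(X, gamma=0.5)
# K is symmetric with a unit diagonal; any such PSD matrix induces an RKHS.
```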


Cited by 50 publications (35 citation statements)
References 42 publications
“…The inspiration of adopting deep learning to correct faulty shock signals comes from the aphorism of "diligence redeems stupidity" that humans or even biological entities can improve their skills through continuous training and effort. Moreover, recent research in applying deep learning to process time-series signals [15], [24] also proves the concept in related domains, and paves a new avenue towards the goal of this paper. Similarly, the DNN is trained with the collected shock signal datasets.…”
Section: The Proposed DNN 1) Overview
confidence: 64%
“…Over the past decade, deep learning has achieved great success in a variety of fields. Examples include computer vision [15], fault diagnosis in mechanics [16], aerospace engineering [17], etc. However, little work has been done to introduce deep learning to the field of calibrating industrial high-g accelerometers.…”
Section: Introduction
confidence: 99%
“…Although reference [31] provides a good starting point for learning an optimal kernel matrix, in practice, questions such as how to select or construct suitable kernel matrix bases and how to solve for the optimal kernel remain open. Current research using deep learning and heuristic learning [38,39] offers a new direction for optimal kernel learning. Moreover, kernel methods also hold great promise for other machine learning problems.…”
Section: Outlook
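The optimal-kernel-learning setting mentioned above is often parameterized as a convex combination of base Gram matrices, so that the combined kernel remains positive semidefinite. The sketch below assumes this common multiple-kernel-learning form; the function and variable names are illustrative, not taken from the cited works:

```python
import numpy as np

def combine_kernels(grams, weights):
    """Convex combination K = sum_i w_i K_i of base Gram matrices.

    A common multiple-kernel-learning parameterization, shown only as a
    sketch; in practice the weights themselves would be learned.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize onto the simplex so K stays PSD
    return sum(wi * Ki for wi, Ki in zip(w, grams))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K_lin = X @ X.T  # linear base kernel
d2 = np.sum(X ** 2, 1)[:, None] + np.sum(X ** 2, 1)[None, :] - 2 * X @ X.T
K_rbf = np.exp(-0.5 * d2)  # Gaussian base kernel
K = combine_kernels([K_lin, K_rbf], [2.0, 1.0])
# K is symmetric and PSD because it is a convex combination of PSD matrices.
```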
“…Despite its effectiveness, it is important to note that the feature extraction process is disentangled from the metric learning network and hence cannot support end-to-end inferencing. However, recent success of such end-to-end learning systems in computer vision applications [20,21,22] motivates the design of a deep metric learning architecture that works directly on the temporal sequences. Long Short-Term Memory (LSTM) based recurrent networks have become the de facto solution to sequence modeling tasks including acoustic modeling [23], speech recognition [24] and Natural Language Processing (NLP) [25].…”
Section: Related Work
confidence: 99%