2018
DOI: 10.1016/j.neucom.2017.12.065
Deep hybrid neural-kernel networks using random Fourier features

Cited by 37 publications (33 citation statements) · References 10 publications

“…In the AI community, methods that combine kernel methods with deep learning are now being developed, such as neural kernel networks [58,59], deep neural kernel blocks [60], and deep kernel learning [61,62]. A soft sensor based on deep kernel learning was recently applied in a polymerization process [63].…”
Section: Relationship Between Kernel Methods and Neural Networks
confidence: 99%
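
The hybrid architectures named in this excerpt interleave neural feature maps with kernel-based modules. As a minimal sketch of the idea (the dimensions, names, and tanh layer below are illustrative assumptions, not taken from the cited works), a neural layer can feed an explicit random-feature approximation of an RBF kernel, with a linear readout playing the role of the kernel machine:

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_layer(X, W, b):
    """Illustrative fully connected layer with tanh activation."""
    return np.tanh(X @ W + b)

def rbf_random_features(H, Omega, beta):
    """Explicit feature map whose inner products approximate an RBF kernel."""
    D = Omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(H @ Omega + beta)

# Illustrative sizes: 20 inputs -> 10 hidden units -> 200 random features.
d_in, d_hid, D, sigma = 20, 10, 200, 1.0
W = rng.normal(size=(d_in, d_hid)) / np.sqrt(d_in)      # neural weights (trainable)
b = np.zeros(d_hid)
Omega = rng.normal(scale=1.0 / sigma, size=(d_hid, D))  # random frequencies
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)            # random phases

X = rng.normal(size=(5, d_in))                          # toy mini-batch
Z = rbf_random_features(neural_layer(X, W, b), Omega, beta)
# A linear model on Z acts as the kernel-based output layer of the block.
```
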
“…Meanwhile, random Fourier features were adopted by Wu et al. [279] for kernel PCA. This scheme exploits Bochner's theorem [59,279], in which the kernel mapping is approximated by passing the data through a randomized projection and cosine functions. This results in a map of lower dimension, which saves computational cost.…”
Section: Fast Computation Of Kernel Features
confidence: 99%
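
Concretely, Bochner's theorem states that a continuous shift-invariant kernel is the Fourier transform of a non-negative measure; sampling frequencies from that measure yields exactly the randomized projection followed by cosines that the excerpt describes. A minimal sketch (parameter choices are illustrative) comparing the approximation with the exact Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, D, sigma = 100, 5, 2000, 1.0   # D random features; sigma = RBF bandwidth

X = rng.normal(size=(n, d))

# For k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), the spectral measure is
# Gaussian, so frequencies are drawn as omega ~ N(0, sigma^{-2} I).
Omega = rng.normal(scale=1.0 / sigma, size=(d, D))
beta = rng.uniform(0.0, 2.0 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ Omega + beta)    # n x D explicit feature map

K_approx = Z @ Z.T                                 # plain inner products
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_exact = np.exp(-sq_dists / (2.0 * sigma ** 2))

# The error shrinks as O(1 / sqrt(D)); downstream methods such as kernel PCA
# can then operate on Z instead of on the full n x n kernel matrix.
print(np.abs(K_approx - K_exact).max())
```
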
“…where ω and b are parameters of the model that have to be determined and φ(x) is the nonlinear feature map which maps the input space into a higher-dimensional feature space. The optimal solution is then sought in that space by minimizing the residual between the model outputs and the measurements [47]. To this end, the LS-SVM model in the primal is formulated as the following optimization problem [37,48]:…”
Section: Least Squares Support Vector Machines
confidence: 99%
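
The optimization problem truncated at the end of this excerpt is, in the standard LS-SVM literature (e.g., Suykens et al.), a regularized least-squares problem in the primal; a common form, which may differ in notation from the one actually given in [37,48], reads:

```latex
\min_{\omega,\, b,\, e} \; \frac{1}{2}\,\omega^{\top}\omega
  + \frac{\gamma}{2} \sum_{i=1}^{N} e_i^{2}
\quad \text{subject to} \quad
y_i = \omega^{\top}\phi(x_i) + b + e_i, \qquad i = 1, \dots, N,
```

where γ > 0 balances regularization against the squared residuals e_i. Eliminating ω and e through the KKT conditions reduces training to solving a linear system in the dual variables, which is the usual computational route for LS-SVMs.
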
“…In particular, it learns multiple levels of hierarchical representation from the given raw input data by means of successive nonlinear modules that are stacked in a hierarchical architecture. Thanks to the stacked nonlinear layers, the learnt representation (features) at one level is transformed into a slightly more abstract representation at a higher level [3]. Recent years have witnessed the significant impact of various deep learning architectures including, for instance, Restricted Boltzmann Machines [4,5,6], Stacked Denoising Autoencoders [7,8], Convolutional Neural Networks [9,10], and Long Short-Term Memories [11], among others.…”
Section: Introduction
confidence: 99%
“…Kernel-based models are well established, with strong foundations in learning theory and optimization. They are well suited for problems with limited training instances and are able to extend linear methods to nonlinear ones with theoretical guarantees [3]. However, in their classical formulation, they cannot learn features from raw data and do not scale well with the size of the training dataset.…”
Section: Introduction
confidence: 99%