2015
DOI: 10.1016/j.asoc.2015.07.040
Feedforward kernel neural networks, generalized least learning machine, and its deep learning with application to image classification

Cited by 50 publications (16 citation statements)
References 39 publications (62 reference statements)
“…Our previous work [35] reveals that a TSK fuzzy system can be equivalent to a fuzzy neural network, so CSK-TSK-FS can also be regarded as a special neural network, shown in Fig. 4, where only one feature is involved in each yellow node of the hidden layer. In [8], [9], the authors demonstrate that optimizing a single-layer feedforward neural network is equivalent to solving a ridge regression problem, which can be solved quickly by the least learning machine (LLM). Therefore, the neural network in Fig. 4 can also be solved quickly by LLM, simply by augmenting the original input features into the hidden layer.…”
Section: Training Algorithm of CSK-TSK-FS
confidence: 99%
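The equivalence cited above can be sketched concretely: with the hidden layer fixed, the output weights of a single-hidden-layer feedforward network minimize a ridge-regression objective and have a closed-form solution. The sketch below is a minimal illustration of that idea under assumed toy data and shapes; it is not the cited authors' LLM implementation, and all names (`W`, `H`, `beta`, `lam`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 100 samples, 5 input features, scalar target.
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

# Hidden layer with fixed random weights and a sigmoid activation.
n_hidden = 20
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix

# With H fixed, the output weights solve a ridge regression problem:
#   beta = (H^T H + lam * I)^{-1} H^T y
# This single linear solve is the "fast" step the excerpt refers to.
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

# Predictions and training error of the resulting network.
y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
```

Augmenting the original input features into the hidden layer, as the excerpt suggests, would amount to replacing `H` with `np.hstack([H, X])` before the same linear solve.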
“…In terms of algorithms, various machine-learning models, such as Naïve Bayes [15,16], ensembles [17], or deep learning structures [18], have been used for crime prediction, but Deep Neural Networks (DNNs) provided better results in our previous experiments. This study uses DNNs because they reflect representation learning and have been used in cross-lingual transfer [19], speech recognition [20][21][22][23], image recognition [24][25][26][27], sentiment analysis [28][29][30][31][32], and biomedical applications [33]. Although the upper bound of prediction performance still depends on the problem and the data themselves, the automatic feature extraction of DNNs [34] allows rapid model building without manual feature processing, thus lowering the threshold for applying them.…”
Section: Related Work
confidence: 99%
“…Thus, accessing the concealed data without knowing the topology of the NN appears practically infeasible [1]. Although some researchers prefer models with interpretive power, such as explicit mathematical or statistical models, or even heuristically encoded models such as fuzzy models, it has been shown that black-box models, when learning is feasible, are more capable of capturing complicated knowledge and proving their functionality in real-world systems [2][3][4]. Such black-box models have demonstrated high efficiency in the state of the art of speech recognition, visual object recognition, and many other fields [5].…”
Section: Introduction
confidence: 99%