2020
DOI: 10.1155/2020/9868017
Iterative Deep Neighborhood: A Deep Learning Model Which Involves Both Input Data Points and Their Neighbors

Abstract: Deep learning models, such as the deep convolutional neural network and the deep long short-term memory model, have achieved great successes in many pattern classification applications over shallow machine learning models with hand-crafted features. The main reason is the ability of deep learning models to automatically extract hierarchical features from massive data by multiple layers of neurons. However, in many other situations, existing deep learning models still cannot gain satisfying results due to the limitation…

Cited by 6 publications (3 citation statements)
References 26 publications
“…E(x1, x2) is the squared norm of the difference between x1 and x2; when E(x1, x2) approaches 0, the gradient of E(x1, x2) with respect to the model parameters will vanish [39].…”
Section: Stratification of the Rotor System
confidence: 99%
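The vanishing-gradient claim in this statement follows directly from the chain rule. Writing the energy as a squared norm (notation ours, for illustration, with θ the model parameters):

$$
E(x_1, x_2) = \lVert x_1 - x_2 \rVert_2^2,
\qquad
\frac{\partial E}{\partial \theta} = 2\,(x_1 - x_2)^{\top}\!\left(\frac{\partial x_1}{\partial \theta} - \frac{\partial x_2}{\partial \theta}\right),
$$

so as $E(x_1, x_2) \to 0$, i.e. $x_1 \to x_2$, the factor $(x_1 - x_2)$ drives the entire gradient toward zero regardless of the Jacobians of $x_1$ and $x_2$ with respect to θ.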
“…Figure 1 illustrates the overall framework of our model. The model comprises a CNN model (Jeong et al., 2020), denoted c, one concatenation layer, one Fully Connected (FC) layer, and one Softmax nonlinear transformation layer, inspired by the work and architecture in Liu et al. (2020). The dataflow in the model, along with the function of these layers, is explained as follows:…”
Section: First Proposed Framework: ECCNN
confidence: 99%
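The layer sequence described in this statement (shared CNN extractor c, concatenation, one FC layer, softmax) can be sketched minimally in NumPy. This is not the authors' implementation: the CNN is replaced by a fixed random-projection stand-in, and all dimensions and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical dimensions: two 16-dim inputs, 8-dim embeddings, 3 classes
D_IN, D_EMB, N_CLASSES = 16, 8, 3

W_cnn = rng.standard_normal((D_IN, D_EMB))          # stand-in for the CNN extractor c
W_fc = rng.standard_normal((2 * D_EMB, N_CLASSES))  # the single FC layer
b_fc = np.zeros(N_CLASSES)

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x1, x2):
    # 1) shared feature extractor c applied to both inputs
    h1, h2 = np.tanh(x1 @ W_cnn), np.tanh(x2 @ W_cnn)
    # 2) concatenation layer joins the two embeddings
    h = np.concatenate([h1, h2], axis=-1)
    # 3) one fully connected layer followed by the softmax transformation
    return softmax(h @ W_fc + b_fc)

probs = forward(rng.standard_normal(D_IN), rng.standard_normal(D_IN))
```

The output `probs` is a length-3 vector of class probabilities summing to 1; in the cited architecture, the stand-in projection would be a trained CNN and the FC/softmax weights would be learned.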
“…DE was used to fine-tune a Naïve Bayesian Classifier (NBC) for text classification in Diab and El Hindi (2017). A multiple partially observed view for multilingual text categorization (Amini et al., 2009), an iterative deep neighborhood model for text classification (Liu et al., 2020), integrating bidirectional Long Short-Term Memory (LSTM) with 2D max pooling for text classification (Zhou et al., 2016), a Recurrent Neural Network (RNN) for text classification with multi-task learning (Liu et al., 2016), a recurrent CNN for text classification (Lai et al., 2015), and a character-level convolutional network for text classification (Zhang et al., 2015) are some of the best-known deep learning works proposed in the literature. A ranking-based deep learning representation for efficient text classification (Zheng et al., 2018), a hierarchical neural network document representation approach for text classification using three different models (Kowsari et al., 2017), a C-LSTM neural network for text classification (Zhou, 2015), and a neural attention model leveraging contextual sentences for text classification (Yan, 2019) are further works that have greatly aided the research community in subsequent analyses.…”
Section: Introduction
confidence: 99%