2018 9th International Conference on Awareness Science and Technology (iCAST)
DOI: 10.1109/icawst.2018.8517219
A Single Filter CNN Performance for Basic Shape Classification

Cited by 6 publications (3 citation statements)
References 2 publications
“…This layer uses two types of processes: max-pooling and average pooling, which means selecting the maximum and minimum values in each feature map [11]. The fully-connected layer is a layer that has a complete connection to all activations in the previous layer followed by nonlinear functions, such as ReLU [28].…”
Section: A. Convolutional Neural Network (CNN)
Citation type: mentioning
Confidence: 99%
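The pooling and fully-connected operations mentioned in this statement can be illustrated with a minimal PyTorch sketch (PyTorch is an assumed framework here, not one used by the cited works). Max pooling keeps the maximum value in each window, average pooling keeps the mean, and the fully-connected layer is followed by a ReLU nonlinearity; the tensor sizes and class count are illustrative only.

```python
import torch
import torch.nn as nn

# A single 8x8 feature map with one channel (batch size 1).
feature_map = torch.randn(1, 1, 8, 8)

# Max pooling keeps the maximum value in each 2x2 window;
# average pooling keeps the mean of each 2x2 window.
max_pool = nn.MaxPool2d(kernel_size=2)
avg_pool = nn.AvgPool2d(kernel_size=2)

pooled_max = max_pool(feature_map)   # shape: (1, 1, 4, 4)
pooled_avg = avg_pool(feature_map)   # shape: (1, 1, 4, 4)

# A fully-connected layer connects every activation of the previous
# layer to every output unit, followed by a ReLU nonlinearity.
fc = nn.Sequential(
    nn.Flatten(),           # (1, 1, 4, 4) -> (1, 16)
    nn.Linear(16, 10),      # 10 output units (hypothetical class count)
    nn.ReLU(),
)

output = fc(pooled_max)
print(output.shape)  # torch.Size([1, 10])
```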
“…The process is repeated several times to filter the feature maps obtained with the use of subsequent convolutional kernels. Characteristic parameters of the convolution layer are the number and size of filters in individual layers, the step by which the window corresponding to the filter is moved (Murata et al, 2018). The pooling layer is usually placed between two convolutional layers (Zhao and Wang, 2019).…”
Section: Convolutional Neural Network
Citation type: mentioning
Confidence: 99%
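The parameters named in this statement (the number of filters, their size, and the step by which the filter window is moved), together with the placement of a pooling layer between two convolutional layers, can be sketched as follows. The filter counts, kernel size, and stride are assumed values for illustration, not ones taken from the cited paper.

```python
import torch.nn as nn

# Illustrative values only: 16 and 32 filters, 3x3 kernels, stride 1.
cnn_block = nn.Sequential(
    # First convolutional layer: 16 filters of size 3x3, window moved with stride 1.
    nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
    # Pooling layer placed between the two convolutional layers.
    nn.MaxPool2d(kernel_size=2),
    # Second convolutional layer applied to the pooled feature maps.
    nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1),
    nn.ReLU(),
)
```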
“…Deep learning is an advanced machine learning implementation method based on artificial neural networks, popularly adopted in the past few years. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are commonly used for pattern detection [24]- [26], object detection [27]- [29], image classification [30]- [32], and other purposes. At the same time, RNN has shortcomings, especially the problem of long-term dependence on time series data which causes loss of gradient, leading to the formation of the Long Short-Term Memory (LSTM) algorithm, which is a development of RNN in overcoming these problems.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
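As a rough illustration of the RNN-to-LSTM point in this statement, the sketch below instantiates an LSTM on a toy time-series batch; the sequence length, feature count, and hidden size are assumptions made for the example only.

```python
import torch
import torch.nn as nn

# Toy time-series batch: 4 sequences, 50 time steps, 8 features each.
x = torch.randn(4, 50, 8)

# LSTM cells add gating so gradients survive across long sequences,
# addressing the vanishing-gradient issue of plain RNNs.
lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)

outputs, (h_n, c_n) = lstm(x)
print(outputs.shape)  # torch.Size([4, 50, 32]) -- one hidden state per time step
```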