2023
DOI: 10.1007/s11042-023-14603-x
Novel CNN with investigation on accuracy by modifying stride, padding, kernel size and filter numbers

Cited by 13 publications (4 citation statements)
References 27 publications
“…In addition, many other hyperparameters, including kernel sizes, strides, padding, activation functions, and kernel initializer, are utilized by Huang and Jafari 28 , who proposed CDRAGAN and BAGAN-GP, the state-of-the-art class-conditional GAN methods. Furthermore, the number of kernels is set to powers of two (32, 64, and 128), as commonly used in existing studies on convolutional neural networks 48 , 49 .…”
Section: Methods
confidence: 99%
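The hyperparameters named in this statement map directly onto a convolutional layer's arguments. Below is a minimal Keras sketch, not the cited CDRAGAN/BAGAN-GP models, showing kernel size, stride, padding, activation function, kernel initializer, and filter counts chosen as powers of two (32, 64, 128); the input shape and layer count are illustrative assumptions.

```python
# Minimal illustrative CNN (hypothetical architecture, not from the cited work):
# filter counts follow the powers-of-two convention (32, 64, 128).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed input size
    layers.Conv2D(32, kernel_size=3, strides=1, padding="same",
                  activation="relu", kernel_initializer="he_normal"),
    layers.Conv2D(64, kernel_size=3, strides=2, padding="same",
                  activation="relu", kernel_initializer="he_normal"),
    layers.Conv2D(128, kernel_size=3, strides=2, padding="same",
                  activation="relu", kernel_initializer="he_normal"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),   # assumed 10-class output
])
model.summary()
```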
“…Padding, stride, and filters are important parameters in CNNs that determine the size and resolution of the output feature maps (Feat_Maps) [86][87][88][89][90][91][92][93]. These parameters control how much information is retained in the Feat_Maps and can greatly impact the performance of the network.…”
Section: Padding, Stride and Filters
confidence: 99%
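The claim that padding, stride, and kernel settings determine output feature-map size follows from the standard convolution output-size relation, out = floor((in + 2p - k) / s) + 1. A small self-contained check (general formula, not code from the cited paper):

```python
# Standard 2D-convolution output-size relation (per spatial dimension).
def conv_output_size(in_size: int, kernel: int, stride: int, padding: int) -> int:
    return (in_size + 2 * padding - kernel) // stride + 1

# A 3x3 kernel with stride 1 and padding 1 ("same" padding) preserves
# resolution; stride 2 halves it.
assert conv_output_size(32, kernel=3, stride=1, padding=1) == 32
assert conv_output_size(32, kernel=3, stride=2, padding=1) == 16
```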
“…In this study, we developed an enhanced U-Net model with several improvements [13]. First, we streamlined the model by removing redundant crops, which reduced computational complexity and improved efficiency [14,15]. Second, we simplified training by decreasing the number of epochs, yielding faster training while maintaining performance [16].…”
Section: Model
confidence: 99%
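As a rough illustration of "removing redundant crops": the original U-Net used unpadded convolutions, so encoder feature maps had to be cropped before concatenation with decoder maps. A minimal Keras sketch, assuming the enhancement amounts to using 'same'-padded convolutions so skip connections align without cropping; the shapes and filter counts are illustrative assumptions, not the cited model.

```python
# Minimal encoder-decoder sketch (hypothetical, crop-free via 'same' padding).
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 1))   # assumed input size
# Encoder
c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D(2)(c1)
c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
p2 = layers.MaxPooling2D(2)(c2)
# Bottleneck
b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
# Decoder: 'same' padding keeps encoder/decoder shapes aligned,
# so skip connections concatenate directly with no crop step.
u2 = layers.UpSampling2D(2)(b)
c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(
    layers.concatenate([u2, c2]))
u1 = layers.UpSampling2D(2)(c3)
c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(
    layers.concatenate([u1, c1]))
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)

model = Model(inputs, outputs)
```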