2022
DOI: 10.1016/j.jpi.2022.100114

Number of necessary training examples for Neural Networks with different number of trainable parameters

Cited by 9 publications (4 citation statements)
References 54 publications

“…1, a systematic approach was undertaken within a dedicated repository of image datasets of disasters and emergencies [38]. Next, global variables crucial for model training were modified, encompassing parameters such as the generic seed, number of epochs, learning rates, choice of pre-trained base models including EfficientNetB0, B7, V2B0, and V2L, InceptionV3, ResNet50, and VGG19, together with the pre-processing methods and optimization algorithms like Adam and RMSprop [39]. Subsequently, the dataset was read and decoded into pairs, ensuring proper preparation for training [40].…”
Section: Methods (mentioning)
confidence: 99%
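The pipeline quoted above is a conventional transfer-learning setup. A minimal sketch of what such a configuration could look like in TensorFlow/Keras follows; every concrete value and name (seed, epoch count, learning rate, class count, the decode helper, the choice of EfficientNetB0 among the listed backbones) is an illustrative assumption, not a detail taken from the citing paper:

```python
import tensorflow as tf

# Global training variables (illustrative values, not from the citing paper)
SEED = 42
EPOCHS = 30
LEARNING_RATE = 1e-4
IMG_SIZE = (224, 224)
NUM_CLASSES = 5  # hypothetical number of disaster/emergency categories

tf.keras.utils.set_random_seed(SEED)

# "Read and decoded into pairs": map a file path and label to an (image, label) pair
def decode_pair(path, label):
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    image = tf.image.resize(image, IMG_SIZE)
    return image, label

# Pre-trained base model; EfficientNetB0 stands in for any of the backbones
# named in the quote (EfficientNetB7, V2B0, V2L, InceptionV3, ResNet50, VGG19)
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg",
)
base.trainable = False  # freeze the backbone for feature extraction

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Optimizer choice: Adam here; RMSprop is the alternative named in the quote
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_pairs, epochs=EPOCHS) would then run the training loop
```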
“…During the training process, the CNN was trained using 70% of the available images for training purposes and the remaining 30% for validation, which is considered by the literature as the optimal partition of the dataset for training these types of CNNs [35,76].…”
Section: Training Model Process and Testing (mentioning)
confidence: 99%
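A sketch of how such a 70/30 partition is commonly realized in TensorFlow/Keras; the directory path is a placeholder, and the technique shown is a standard one rather than the citing paper's exact code:

```python
import tensorflow as tf

SEED = 42
IMG_SIZE = (224, 224)
DATA_DIR = "data/images"  # hypothetical folder with one sub-directory per class

# 70% of the available images for training, the remaining 30% for validation
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.3, subset="training",
    seed=SEED, image_size=IMG_SIZE,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.3, subset="validation",
    seed=SEED, image_size=IMG_SIZE,
)
```

Passing the same seed and validation_split to both calls is what keeps the two subsets disjoint and reproducible, so no validation image leaks into the training set.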
“…In addition to data stream combinations, network architecture and training data set size also influence CNN performance. Networks with higher numbers of trainable parameters require more data during training to reduce the effects of overfitting (Götz et al., 2022). Network performance was investigated for PHD‐CNN architectures of varying complexity.…”
Section: Effect Of Network Architecture Complexity and Data Quantity … (mentioning)
confidence: 99%
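The relationship cited above (more trainable parameters require more training data to curb overfitting) is usually examined by comparing models of different size. A minimal Keras sketch for counting trainable parameters; the two backbones are generic stand-ins, not the PHD-CNN variants studied in the citing work:

```python
import numpy as np
import tensorflow as tf

def count_trainable_params(model: tf.keras.Model) -> int:
    # Sum the element counts of all trainable weight tensors
    return int(sum(np.prod(w.shape) for w in model.trainable_weights))

small = tf.keras.applications.MobileNetV2(weights=None)  # roughly 3.5 M parameters
large = tf.keras.applications.ResNet50(weights=None)     # roughly 25.6 M parameters

print(count_trainable_params(small))
print(count_trainable_params(large))
```

Under the finding of Götz et al. (2022), the larger of these two models would need a correspondingly larger training set before its extra capacity stops amplifying overfitting.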