2022
DOI: 10.3390/fractalfract6120706

Optimization of Hyperparameters in Object Detection Models Based on Fractal Loss Function

Abstract: Hyperparameters involved in neural networks (NNs) have a significant impact on the accuracy of model predictions. However, the values of the hyperparameters need to be manually preset, and finding the best hyperparameters has always puzzled researchers. In order to improve the accuracy and speed of target recognition by a neural network, an improved genetic algorithm is proposed to optimize the hyperparameters of the network by taking the loss function as the research object. Firstly, the role of all loss func…
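The abstract describes tuning network hyperparameters with an improved genetic algorithm, using the loss function as the optimization target. As a minimal, hypothetical sketch only (not the paper's actual algorithm, whose improvements are not detailed here), a plain genetic algorithm searching over two loss-related hyperparameters on a toy surrogate objective might look like:

```python
import random

def fitness(hparams):
    # Toy stand-in for (negated) validation loss; in practice this would
    # train/evaluate the detector. Maximized at lr=0.01, loss_weight=0.5.
    lr, loss_weight = hparams
    return -((lr - 0.01) ** 2 + (loss_weight - 0.5) ** 2)

def mutate(hparams, scale=0.05):
    # Gaussian perturbation, clipped to stay non-negative.
    return tuple(max(0.0, h + random.gauss(0, scale)) for h in hparams)

def crossover(a, b):
    # Uniform crossover: each gene taken from one of the two parents.
    return tuple(random.choice(pair) for pair in zip(a, b))

def genetic_search(pop_size=20, generations=50, seed=0):
    random.seed(seed)
    pop = [(random.uniform(0, 0.1), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]  # keep the fittest quarter unchanged
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_search()
```

Because the elite individuals carry over unmutated, the best fitness per generation is non-decreasing; the real method in the paper presumably replaces the toy `fitness` with a training/validation run of the detection model.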

Cited by 2 publications (1 citation statement)
References 39 publications
“…A high-performing neural network model should be characterized by high classification accuracy, low computation time, and a low number of model parameters. Typically, scientists and researchers have employed various approaches to improve the radar target classification performance of neural network models, including feature extraction using preprocessing techniques [1][2][3], optimization of neural network structures [4,5], hyperparameter optimization for network training [6], and application of data augmentation algorithms [7].…”
Section: Introduction
confidence: 99%