2022
DOI: 10.1007/s11042-022-12254-y
Optimized deep learning for genre classification via improved moth flame algorithm


Cited by 11 publications (5 citation statements)
References 44 publications
“…The dataset for analysis was downloaded from Reference 43. Accordingly, the performance of the adopted approach was measured against extant models such as ensemble classifiers + WOA, 44 ensemble classifiers + SLnO, 38 ensemble classifiers + SA‐SLnO, 45 and ensemble classifiers + IMFO 34 with regard to certain positive and negative measures. Here, the performance analysis was carried out for varied learning percentages (LP) of 50, 60, 70, and 80.…”
Section: Results (mentioning)
confidence: 99%
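The evaluation protocol quoted above (training on a learning percentage of the data and testing on the remainder, for LP = 50, 60, 70, 80) can be sketched as follows. This is a minimal illustration under stated assumptions, not the cited authors' code: the RandomForest stand-in, the accuracy metric, and the in-memory feature matrix are all assumptions; the cited works use ensemble classifiers tuned by WOA/SLnO/SA-SLnO/IMFO.

```python
# Minimal sketch of the quoted protocol: train on LP% of the data, test on
# the rest, for LP in {50, 60, 70, 80}. Classifier and metric are stand-ins.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def evaluate_at_learning_percentages(X, y, lps=(0.5, 0.6, 0.7, 0.8), seed=0):
    """Return {learning_percentage: test accuracy} for each LP split."""
    results = {}
    for lp in lps:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=lp, random_state=seed, stratify=y)
        clf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
        results[int(lp * 100)] = accuracy_score(y_te, clf.predict(X_te))
    return results
```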
“…The updated distances 33 of the positive matrices $R_N$ and $H_N$ are shown in Equations () and (), wherein $One$ indicates the identity matrix 34:

$$R := R + \mu_w\left(\frac{B}{RH} - One\right)H^T$$

$$H := H + \mu_h R^T\left(\frac{B}{RH} - One\right)$$
…”
Section: Extraction Of Proposed Features (mentioning)
confidence: 99%
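Read as additive gradient-style updates for a nonnegative factorization B ≈ RH, the two rules quoted above can be sketched in NumPy as below. This is an illustrative reading, not the cited paper's code: treating One as an all-ones matrix (so that B/RH − One is elementwise), the step sizes mu_w and mu_h, the small eps, and the nonnegativity clipping are assumptions.

```python
# Illustrative NumPy sketch of the quoted update rules for B ≈ R @ H.
# Assumptions: "One" is an all-ones matrix, division is elementwise,
# mu_w/mu_h are small step sizes, eps and clipping keep entries positive.
import numpy as np

def update_R_H(B, R, H, mu_w=1e-3, mu_h=1e-3, eps=1e-12):
    One = np.ones_like(B)
    G = B / (R @ H + eps) - One            # elementwise (B / RH - One)
    R = R + mu_w * (G @ H.T)               # R := R + mu_w (B/RH - One) H^T
    H = H + mu_h * (R.T @ G)               # H := H + mu_h R^T (B/RH - One)
    return np.clip(R, eps, None), np.clip(H, eps, None)
```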
“…Kumaraswamy and Poonacha (2021) used the self-adaptive sea lion optimization algorithm to train a neural network that receives a set of features as input. Similarly, Kumaraswamy (2022) used an improved moth flame optimization algorithm for weight learning of the neural network for the MGC task. Our work differs from these approaches by focusing on the architecture design of the CNN for the MGC task rather than tuning parameters using an optimization algorithm.…”
Section: Related Work (mentioning)
confidence: 99%
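The common thread in these works is treating the network weights as the search variables of a metaheuristic and the classification error as the fitness. As a hedged illustration only, using a plain population-based random search rather than the cited SLnO/SA-SLnO/IMFO algorithms, weight learning of a minimal linear classifier could look like this; the fitness function and search loop are simple stand-ins.

```python
# Generic population-based weight search for a tiny linear classifier,
# illustrating metaheuristic weight learning. NOT the cited IMFO/SLnO
# algorithms; fitness and search loop are simplified stand-ins.
import numpy as np

def fitness(w, X, y, n_cls):
    """Classification error of a linear softmax-style classifier."""
    W = w.reshape(X.shape[1], n_cls)
    return np.mean((X @ W).argmax(axis=1) != y)

def search_weights(X, y, iters=200, pop=20, scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n_cls = int(y.max()) + 1
    dim = X.shape[1] * n_cls
    best_w = rng.normal(size=dim)
    best_f = fitness(best_w, X, y, n_cls)
    for _ in range(iters):
        cand = best_w + scale * rng.normal(size=(pop, dim))  # perturb best
        scores = np.array([fitness(c, X, y, n_cls) for c in cand])
        i = int(scores.argmin())
        if scores[i] < best_f:                # keep the best candidate so far
            best_w, best_f = cand[i], scores[i]
    return best_w, best_f
```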
“…Accurate results on the GTZAN dataset were obtained. Balachandra [12] improved the moth flame algorithm (IMFO) and successfully applied it to the task of music genre classification, achieving good classification results by optimizing the weights of a deep belief network (DBN) used for classification.…”
Section: Introduction (mentioning)
confidence: 99%