Global Oceans 2020: Singapore – U.S. Gulf Coast 2020
DOI: 10.1109/ieeeconf38699.2020.9389373
Dealing With Highly Unbalanced Sidescan Sonar Image Datasets for Deep Learning Classification Tasks

Cited by 10 publications
(5 citation statements)
References 5 publications
“…The F1 score was selected due to the context of the problem (multi-class classification) and the nature of the data, which had imbalanced classes. It is noted that accuracy tends to underestimate classes with a smaller number of samples relative to those with a larger number (Steiniger et al., 2020). Accuracy was nevertheless also computed for comparison against the F1-score, as it has been the most widely used evaluation metric across insect song classification problems (Silva et al., 2013; Noda et al., 2016, 2019; Amlathe, 2018; Kim et al., 2021).…”
Section: Methods
confidence: 99%
“…We based performance mostly on the F1-score since classes were unbalanced and accuracy tends to underestimate classes with a smaller number of samples relative to those with a larger number (Steiniger et al., 2020). The F1 measure is a combination of the precision and recall measures and is defined by Eq.…”
Section: Evaluation Metrics
confidence: 99%
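The F1 measure referenced in the excerpt above is conventionally the harmonic mean of precision and recall (the excerpt's own equation is truncated, so this is the standard textbook form, not necessarily the cited paper's exact notation):

$$
F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
$$

Because it goes to zero whenever either precision or recall is zero, it penalizes a classifier that never predicts a minority class, which plain accuracy does not.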
“…We employed macro-F1 since accuracy underestimates classes with a smaller number of samples relative to the larger ones. The macro-F1 score is considered a suitable metric for an unbalanced test set because it better describes performance by class rather than by sample count (Steiniger et al., 2020).…”
Section: Imbalanced Data Bias and Noise Corruption
confidence: 99%
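The excerpts above all make the same point: on unbalanced data, accuracy can look high while the minority class is entirely missed, whereas macro-F1 exposes the failure. A minimal sketch of that effect (toy labels and class names are illustrative, not from the cited paper):

```python
def per_class_f1(y_true, y_pred, label):
    """F1 for one class: harmonic mean of its precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1, so every class counts equally."""
    labels = sorted(set(y_true))
    return sum(per_class_f1(y_true, y_pred, lab) for lab in labels) / len(labels)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Imbalanced toy set: 9 background samples, 1 minority-class sample.
y_true = ["bg"] * 9 + ["target"]
y_pred = ["bg"] * 10  # degenerate classifier: always predicts the majority class

print(accuracy(y_true, y_pred))  # 0.9 — looks strong
print(macro_f1(y_true, y_pred))  # ~0.47 — the missed minority class drags it down
```

The majority-only classifier scores 90% accuracy but roughly 0.47 macro-F1 (bg class F1 ≈ 0.95, target class F1 = 0), which is exactly the underestimation of small classes the citing papers describe.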
“…Meanwhile, Steiniger et al. conducted research on the problem of unbalanced sonar datasets for deep learning classification tasks [7]. SONAR-related research was also carried out by Ghosh, who compared 29 machine learning algorithms, including Logistic Regression, Decision Tree, K-Nearest Neighbor (KNN), Naïve Bayes, Support Vector Machines, Random Forest, AdaBoost, and others [8].…”
Section: Introduction