Boosting Minority Class Prediction on Imbalanced Point Cloud Data
2020 | DOI: 10.3390/app10030973

Abstract: Data imbalance during the training of deep networks can cause the network to skip directly to learning minority classes. This paper presents a novel framework for training segmentation networks on imbalanced point cloud data. PointNet, an early deep network for the segmentation of point cloud data, proved effective in the point-wise classification of balanced data; however, its performance degraded when imbalanced data were used. The proposed approach involves removing between-class data point imbalanc…

Cited by 18 publications (11 citation statements). References 38 publications.
“…Imbalanced data remains a key challenge for classification models [15,18]. The majority of the literature considers re-sampling approaches, i.e., both over-sampling and under-sampling, to alleviate degradation due to imbalanced data [1,17,19,33,37].…”
Section: Theoretical Background
confidence: 99%
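The over- and under-sampling strategies discussed in these citing works can be sketched in a few lines of pure Python. The helper names below are illustrative, not taken from any cited paper, and random duplication/removal is the simplest variant (real pipelines often prefer synthetic generation such as SMOTE):

```python
import random
from collections import Counter

def oversample_minority(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until all classes
    reach the majority-class count (naive random over-sampling)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

def undersample_majority(samples, labels, seed=0):
    """Randomly drop majority-class samples down to the minority-class
    count (naive random under-sampling; discards information)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = min(counts.values())
    out_x, out_y = [], []
    for cls in counts:
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for kept in rng.sample(pool, target):
            out_x.append(kept)
            out_y.append(cls)
    return out_x, out_y
```

The trade-off the citing works point at is visible here: over-sampling repeats minority points (risking overfitting), while under-sampling throws away majority points (risking information loss).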
“…The low volume of the potential target/important customer data (i.e., an imbalanced data distribution) is a major challenge in extracting the latent knowledge in bank marketing data [1,3,10]. There is still a pressing need to handle imbalanced dataset distributions reliably [15][16][17]; commonly used approaches [1,15,16,[18][19][20][21] impose processing overhead or lead to loss of information.…”
Section: Introduction
confidence: 99%
“…To force the model to more reliably predict the minority class, you can up-sample that class through synthetic data generation. Alternatively, if your sample size is sufficient, you may down-sample the majority class [ 181 ] to better balance your input data, although care should be taken not to exclude relevant subgroups. An important distinction is that while synthetic or resampled data can be applied to model training data, it is generally not acceptable to include synthetic or resampled data in testing datasets.…”
Section: Data Extraction and Preprocessing
confidence: 99%
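The distinction drawn in this citation — resample the training data, never the test data — corresponds to splitting first and balancing only the training portion. A minimal sketch (the function name and naive random over-sampling are illustrative assumptions, not from the cited works):

```python
import random
from collections import Counter

def split_then_oversample(samples, labels, test_frac=0.3, seed=0):
    """Split first, then over-sample ONLY the training portion, so the
    test set keeps the natural (imbalanced) class distribution."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_test = int(len(idx) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    X_tr = [samples[i] for i in train_idx]
    y_tr = [labels[i] for i in train_idx]
    X_te = [samples[i] for i in test_idx]
    y_te = [labels[i] for i in test_idx]
    # Naive random over-sampling, applied to the training split only.
    counts = Counter(y_tr)
    target = max(counts.values())
    for cls, n in counts.items():
        pool = [x for x, y in zip(X_tr, y_tr) if y == cls]
        for _ in range(target - n):
            X_tr.append(rng.choice(pool))
            y_tr.append(cls)
    return (X_tr, y_tr), (X_te, y_te)
```

Because the split happens before any resampling, no duplicated sample can leak into the held-out set, and evaluation reflects the real class distribution.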
“…In another view, most sampling methods rely on balancing the mini-batches of training samples [34], [35], [36]; however, many graph neural networks, including GCN, must be trained in full-batch mode [37], [38]. Even when solving class imbalance for non-graph datasets, over-sampling the minority class(es) causes overfitting to the duplicated samples drawn from the minority class(es), and under-sampling the majority class(es) causes the exclusion of samples required for discrimination [39]. These weaknesses lead many methods to cost-sensitive approaches.…”
Section: Related Work
confidence: 99%
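The cost-sensitive alternative this citation ends on keeps the data untouched and instead re-weights the loss so that minority-class errors cost more. A minimal sketch using inverse-frequency weighting, one common heuristic among several (the function names are illustrative):

```python
import math
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: a class seen n times out of N, across
    k classes, gets weight N / (k * n), so rarer classes weigh more."""
    counts = Counter(labels)
    total = len(labels)
    k = len(counts)
    return {cls: total / (k * n) for cls, n in counts.items()}

def weighted_nll(probs, labels, weights):
    """Weighted negative log-likelihood: each sample's log-loss is scaled
    by its class weight, penalizing minority-class mistakes more heavily."""
    loss = 0.0
    for p, y in zip(probs, labels):
        loss -= weights[y] * math.log(p[y])
    return loss / len(labels)
```

Unlike re-sampling, this neither duplicates nor discards points, which is why cost-sensitive losses are attractive for full-batch training regimes such as GCNs.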