2021
DOI: 10.1080/09540091.2021.1970719

Cost-sensitive regression learning on small dataset through intra-cluster product favoured feature selection

Abstract: Many regression and forecasting tasks are cost-sensitive regression learning problems with asymmetric costs between over-prediction and under-prediction. However, existing classic methods, such as clustering and feature selection, have difficulty dealing with small datasets. A key challenge is that the importance of features is hard to validate statistically with traditional algorithms (e.g. the Boruta algorithm) owing to insufficient available data. By leveraging the …
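The asymmetric cost structure described in the abstract can be made concrete with a loss that weights over-prediction and under-prediction differently. The sketch below is illustrative only and is not the paper's intra-cluster feature-selection method; the cost weights, the linear model, and the toy data are assumptions chosen for the example.

```python
import numpy as np

def asymmetric_squared_loss(y_true, y_pred, c_over=2.0, c_under=1.0):
    """Squared error weighted by c_over on over-prediction and by
    c_under on under-prediction (hypothetical cost ratio)."""
    err = y_pred - y_true
    weights = np.where(err > 0, c_over, c_under)
    return np.mean(weights * err ** 2)

def fit_cost_sensitive_linear(x, y, c_over=2.0, c_under=1.0,
                              lr=0.005, n_iter=4000):
    """Fit a 1-D linear model by gradient descent on the asymmetric
    loss. A minimal sketch only, not the method proposed in the paper."""
    X = np.column_stack([np.ones(len(x)), x])  # intercept + feature
    w = np.zeros(2)
    for _ in range(n_iter):
        err = X @ w - y
        weights = np.where(err > 0, c_over, c_under)
        w -= lr * 2 * X.T @ (weights * err) / len(y)
    return w

# Toy usage: over-prediction costs twice as much as under-prediction,
# so the fitted line sits slightly below a symmetric least-squares fit.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=40)
y = 3.0 * x + rng.normal(0.0, 1.0, size=40)
print("intercept, slope:", fit_cost_sensitive_linear(x, y))
```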

Cited by 6 publications (2 citation statements)
References 56 publications
“…Moreover, these studies contribute to forecasting by integrating backorders into traditional inventory models and enhancing our understanding of managing backorders in inventory management. In a different approach, researchers combined ARIMA and ANN for backorder prediction ( [11], [12]). The development of a Bayesian method for demand forecasting based on compound Poisson distributions is a significant contribution that outperforms other current methods [13].…”
Section: Literature Review
confidence: 99%
“…By restricting the number of features considered at each split, the base estimators are forced to make more independent decisions, which can lead to a more robust ensemble. The BBC combines the advantages of bagging and sampling techniques to address the issue of imbalanced datasets [20]. Several researchers in recent times and the past have recommended fuzzy logic (e.g., [11], [20], [21], [22], [23], [24], [25]). The objective is to add human-centric design along with advanced machine-learning algorithms.…”
Section: Modeling and Evaluation
confidence: 99%
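For reference, the balanced-bagging idea quoted above (a BBC with a restricted feature subset per base estimator) can be sketched with the imbalanced-learn library. The dataset, class ratio, and parameter values below are assumptions for illustration, not the cited authors' configuration.

```python
# Minimal sketch of a Balanced Bagging Classifier (BBC) with per-estimator
# feature restriction, assuming imbalanced-learn and scikit-learn are installed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.ensemble import BalancedBaggingClassifier

# Hypothetical imbalanced dataset (roughly 9:1 class ratio).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Each bootstrap sample is balanced by under-sampling the majority class;
# max_features < 1.0 limits the features each base estimator sees,
# encouraging more independent decisions across the ensemble.
bbc = BalancedBaggingClassifier(n_estimators=50,
                                max_features=0.5,
                                random_state=0)
bbc.fit(X_train, y_train)
print(classification_report(y_test, bbc.predict(X_test)))
```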