2013
DOI: 10.1016/j.econmod.2013.05.007
Small sample-oriented case-based kernel predictive modeling and its economic forecasting applications under n-splits-k-times hold-out assessment

Cited by 10 publications (4 citation statements)
References 61 publications
“…The training set is used to train the model and optimize the parameters relating the inputs to the outputs, while the testing set is used to assess the model's performance by comparing its output across the various ML algorithms described in the following subsection. Following the hold-out method, the data were divided into training and testing sets of 2/3 and 1/3 of the total data, respectively [17]. All of the ML algorithms were implemented in MATLAB.…”
Section: Experimental Setup and Acquisition of Datasets
Confidence: 99%
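The 2/3 vs. 1/3 hold-out split described above can be sketched in plain Python. This is a minimal illustration, not code from the cited work; the `hold_out_split` helper and its seed are assumptions for the example:

```python
import random

def hold_out_split(samples, train_fraction=2/3, seed=0):
    """Randomly partition samples into one training set and one testing set."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# A 2/3 vs. 1/3 split of 30 samples yields 20 training and 10 testing items.
train, test = hold_out_split(list(range(30)))
print(len(train), len(test))  # 20 10
```

Fixing the random seed makes the partition reproducible, which matters when results from a single hold-out split are reported.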
“…Allocating more data to the training set generally improves model fit, but the smaller the test set, the less reliable the estimate of the model's generalization error. Common data-set partitioning methods include cross-validation, bootstrapping, and the hold-out method. Among these, the hold-out method is recommended for data sets with small sample sizes and clear classification characteristics. Considering the small data sets in this study and the complexity of the proposed model, we selected the hold-out method to divide the data: 75% of the total samples were randomly selected as the training set, and the remaining 25% were used as the test set to verify the prediction results. Accordingly, all 36 samples were randomly divided into a training set of 75% (27 samples) for model construction and a test set of 25% (9 samples) to verify prediction accuracy.…”
Section: Application of the Proposed Model
Confidence: 99%
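Because a single random split of a small data set can be unrepresentative, the hold-out step is often repeated over several independent random partitions, in the spirit of the n-splits-k-times hold-out assessment named in the article's title. A minimal sketch, with `repeated_hold_out` as a hypothetical helper rather than the cited paper's implementation:

```python
import random

def repeated_hold_out(samples, train_fraction=0.75, repeats=5, seed=0):
    """Yield several independent random train/test partitions of the samples."""
    rng = random.Random(seed)
    n_train = round(len(samples) * train_fraction)
    for _ in range(repeats):
        shuffled = samples[:]      # fresh copy, reshuffled each repeat
        rng.shuffle(shuffled)
        yield shuffled[:n_train], shuffled[n_train:]

# 36 samples at 75% training give 27 training and 9 testing items per repeat.
for train, test in repeated_hold_out(list(range(36))):
    assert len(train) == 27 and len(test) == 9
```

Averaging a metric over the repeats gives a steadier estimate of generalization error than any single 27/9 split.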
“…Training with the models: the data were used to train Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Neural Network (NN), and Decision Tree (DT) models. Training followed the hold-out method, randomly splitting the labeled data set into two subsets: 70% for training and 30% for testing (Li, Hong, He, Xu, & Sun, 2013; Yussupova et al., 2016). Model evaluation and selection: to select the best-trained model, the study used the common approach of evaluating the metrics computed from the confusion matrix, following Kulkarni, Chong, and Batarseh (2020), as described in Table 1.…”
Section: Research Methods (unclassified)
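The confusion-matrix evaluation mentioned above can be illustrated with a small sketch. The helper names and the example labels below are assumptions for illustration; the exact metric set used by Kulkarni, Chong, and Batarseh (2020) is not reproduced here:

```python
from collections import Counter

def confusion_counts(y_true, y_pred, positive):
    """Count TP/FP/FN/TN cells of a binary confusion matrix."""
    c = Counter()
    for t, p in zip(y_true, y_pred):
        if p == positive:
            c["TP" if t == positive else "FP"] += 1
        else:
            c["FN" if t == positive else "TN"] += 1
    return c

def metrics(c):
    """Derive accuracy, precision, and recall from the cell counts."""
    accuracy = (c["TP"] + c["TN"]) / sum(c.values())
    precision = c["TP"] / (c["TP"] + c["FP"])
    recall = c["TP"] / (c["TP"] + c["FN"])
    return accuracy, precision, recall

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
c = confusion_counts(y_true, y_pred, positive=1)
acc, prec, rec = metrics(c)
```

Comparing these metrics across the SVM, NB, RF, NN, and DT models on the 30% test subset is what drives the model selection step described in the excerpt.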