2019
DOI: 10.48550/arxiv.1901.01334
Preprint

An Adaptive Weighted Deep Forest Classifier

Abstract: A modification of the confidence screening mechanism based on adaptive weighing of every training instance at each cascade level of the Deep Forest is proposed. The idea underlying the modification is very simple and stems from the confidence screening mechanism idea proposed by Pang et al. to simplify the Deep Forest classifier by means of updating the training set at each level in accordance with the classification accuracy of every training instance. However, if the confidence screening mechanism just remov…

Cited by 1 publication (1 citation statement) | References 24 publications
“…Only informative features were used for each level of learning, which greatly reduced time cost and memory requirements. Utkin et al (2019) proposed a gcForest algorithm with the idea of AdaBoost. Each layer of gcForest can not only extract the feature information of the data, but also use the wrongly classified samples to update the sampling weights of the samples, and then update the data distribution of the training set at each layer, so that subsequent layers pay more attention to misclassified samples.…”
Section: Integration and Deep Cascade
confidence: 99%
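The AdaBoost-style per-level reweighting described in the citing statement can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the specific weight-update formula (the classic two-class AdaBoost rule) are assumptions chosen to show the idea that misclassified instances gain weight before the next cascade level is trained.

```python
import numpy as np

def update_instance_weights(weights, y_true, y_pred):
    """AdaBoost-style reweighting for one cascade level:
    misclassified instances receive larger weights so that
    the next level's forests pay more attention to them."""
    # weighted error rate of the current level
    err = np.sum(weights * (y_pred != y_true)) / np.sum(weights)
    err = np.clip(err, 1e-10, 1 - 1e-10)      # guard against 0 or 1
    alpha = 0.5 * np.log((1 - err) / err)     # level "importance"
    # up-weight mistakes, down-weight correct predictions
    signs = np.where(y_pred != y_true, 1.0, -1.0)
    weights = weights * np.exp(alpha * signs)
    return weights / weights.sum()            # renormalize to a distribution

# toy usage: 6 instances, 2 misclassified by the current level
w = np.full(6, 1 / 6)
y_true = np.array([0, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 0])        # indices 2 and 5 are wrong
w = update_instance_weights(w, y_true, y_pred)
# each misclassified instance now carries twice the weight
# of each correctly classified one (0.25 vs. 0.125 here)
```

The updated weight vector would then drive weighted sampling (or weighted training) of the next layer's training set, which is how subsequent cascade levels come to focus on the hard examples.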