2020
DOI: 10.1007/s40747-020-00169-w
Optimal subset selection for causal inference using machine learning ensembles and particle swarm optimization

Abstract: We suggest and evaluate a method for optimal construction of synthetic treatment and control samples for the purpose of drawing causal inference. The balance optimization subset selection problem, which formulates minimization of aggregate imbalance in covariate distributions to reduce bias in data, is a new area of study in operations research. We investigate a novel metric, cross-validated area under the receiver operating characteristic curve (AUC), as a measure of balance between treatment and control group…
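The abstract's central idea — using a classifier's cross-validated AUC for distinguishing treated from control units as a measure of covariate imbalance — can be illustrated in a few lines. The sketch below is an illustrative reconstruction, not the authors' implementation: it uses a single simulated covariate directly as the classifier score and a rank-based (Mann-Whitney) AUC. An AUC near 0.5 indicates the groups are indistinguishable on the covariate, i.e. balanced; an AUC well above 0.5 signals imbalance.

```python
import random

def rank_auc(scores_pos, scores_neg):
    """Probability that a positive-class score outranks a negative-class
    score (the Mann-Whitney formulation of AUC); ties count one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

random.seed(0)

# Balanced case: treatment and control drawn from the same covariate
# distribution, so the covariate cannot separate the groups.
treated = [random.gauss(0.0, 1.0) for _ in range(200)]
control = [random.gauss(0.0, 1.0) for _ in range(200)]
auc_balanced = rank_auc(treated, control)    # expected near 0.5

# Imbalanced case: a shifted covariate mean makes the groups separable.
shifted = [random.gauss(1.5, 1.0) for _ in range(200)]
auc_imbalanced = rank_auc(shifted, control)  # expected well above 0.5

print(f"balanced AUC:   {auc_balanced:.3f}")
print(f"imbalanced AUC: {auc_imbalanced:.3f}")
```

In the paper's setting the scores would come from a fitted machine-learning ensemble over all covariates and the AUC would be cross-validated; subset selection (via particle swarm optimization) then searches for the sample whose AUC is closest to 0.5.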

Cited by 10 publications
(7 citation statements)
References 60 publications
“…The proposed method requires only a small amount of feature extraction computation and can still reach a significant accuracy for multichannel sEMG. In the future, we will further study the sEMG based applications in the human-machine interaction, such as the real control of the robot, or some rehabilitation treatments based on sEMG, and seek for better pattern recognition method [44,45].…”
Section: Discussion
confidence: 99%
“…Then, we apply two machine learning methods, i.e., the Logistic Regression (LR) model and the Random Forest (RF) model as the classifiers for training and prediction. The effect of the models is compared using relevant evaluation indexes such as F1-score, G-means, MCC, and AUC [45][46][47].…”
Section: SMOTE-Tomek Link Algorithm
confidence: 99%
“…The F1-measure is the harmonic mean of the recall rate R and precision rate P, which can evaluate the overall classification of unbalanced data sets [45][46][47]. The larger the value of F1, the better is the classification effect of the classifier, with.…”
Section: R = TP/(TP + FN)
confidence: 99%
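The recall, precision, F1, G-means, and MCC mentioned in these excerpts all derive from the binary confusion matrix. A self-contained sketch of those formulas follows; the helper names and toy labels in it are mine, not from the cited papers.

```python
import math

def confusion_counts(y_true, y_pred):
    """Tally a binary confusion matrix as (TP, FP, TN, FN)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

def imbalance_metrics(y_true, y_pred):
    """Metrics suited to unbalanced binary classification."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # R = TP/(TP + FN), a.k.a. sensitivity
    specificity = tn / (tn + fp)
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    # G-means balances accuracy on the positive and negative classes.
    g_means = math.sqrt(recall * specificity)
    # Matthews correlation coefficient uses all four confusion-matrix cells.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"F1": f1, "G-means": g_means, "MCC": mcc}
```

Because F1 is a harmonic mean, it is pulled toward the smaller of precision and recall, which is why the excerpt above recommends it for unbalanced data sets: a classifier cannot score well by sacrificing the minority class.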