2021
DOI: 10.31234/osf.io/7asgz
Preprint

Everything has its Price: Foundations of Cost-Sensitive Learning and its Application in Psychology

Abstract: Psychology has seen an increase in the use of machine learning (ML) methods. In many applications, observations are classified into one of two groups (binary classification). Off-the-shelf classification algorithms assume that the costs of a misclassification (false positive or false negative) are equal. Because this assumption is often not reasonable (e.g., in clinical psychology), cost-sensitive learning (CSL) methods can take different cost ratios into account. We present the mathematical foundations and introduce a taxonomy of …


Cited by 9 publications (13 citation statements)
References 67 publications (108 reference statements)
“…That could be particularly true for Adversity and Deception, both of which were "absent" in the great majority of situational snapshots in our sample, confirming the low variances repeatedly found for these dimensions in past studies (see Jonason & Sherman, 2020; Rauthmann et al., 2014; Sherman et al., 2015). However, we countered this under-representation problem by applying specific weighting techniques for imbalanced class distributions (see Sterner et al., 2021) in our data analyses. Thus, we can rule out the possibility that the differential predictability of the eight DIAMONDS dimensions was merely caused by methodological issues.…”
Section: Differential Prediction Performance Patterns For Situation C... (supporting)
confidence: 87%
“…We ran these models with class-dependent costs to account for imbalanced class distributions (e.g., Deception was only "present" in 3% of all sampled situations). To do so, we assigned a class-dependent theoretical weight to each observation to increase the influence of the minority class and decrease that of the majority-class observations (Sterner et al., 2021).…”
Section: Machine Learning (mentioning)
confidence: 99%
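The weighting scheme described in this statement can be illustrated with a short sketch. The cited work builds on R/mlr3; the version below uses Python with scikit-learn as a stand-in assumption, and the ~3% prevalence and the 10:1 cost ratio are illustrative values only.

```python
# Minimal sketch of class-dependent observation weights for an imbalanced
# binary problem. Assumptions: scikit-learn stands in for the R/mlr3 setup
# of the cited work; the ~3% prevalence and 10:1 cost ratio are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.03).astype(int)   # minority class "present" in ~3%

# Class-dependent weights: up-weight the rare class, down-weight the
# majority class, so minority-class errors cost more during fitting.
costs = {0: 1.0, 1: 10.0}
sample_weight = np.array([costs[int(label)] for label in y])

clf = LogisticRegression()
clf.fit(X, y, sample_weight=sample_weight)
```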
“…This absence of situations high in Deception and Adversity may reflect an actual (fortunate) lack of such situations in everyday life or may be a methodological artifact introduced by a sampling bias if participants do not answer ESs in these situations across studies. Either way, we countered this under-representation by applying specific weighting techniques for imbalanced class distributions in our data analyses, as explained in our methods section (see also Sterner et al., 2021). Thus, we can rule out that the differential predictability of the eight DIAMONDS dimensions was merely a methodological artifact of the algorithms applied in our study.…”
Section: Discussion (mentioning)
confidence: 99%
“…When looking at the results, we notice that while MMCE is very similar for RF and LASSO, the models differ slightly in their respective trade-off between SENS and SPEC. This finding exemplifies the need to consider performance measures beyond mean classification error or accuracy in many applied classification settings, in which the practical costs of false-positive and false-negative predictions are not the same (Sterner et al., 2021).…”
Section: Practical Exercise III: Model Comparisons With Benchmark Exp... (mentioning)
confidence: 77%
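A small sketch of the point made here: two hypothetical confusion matrices (all counts invented for illustration) can share the same MMCE while trading SENS against SPEC very differently, which matters as soon as false negatives and false positives carry unequal costs.

```python
# Two hypothetical confusion matrices (all counts invented) with identical
# MMCE but different SENS/SPEC trade-offs, evaluated under an asymmetric
# cost ratio (false negatives assumed 10x as costly as false positives).
def summarize(tp, fn, fp, tn, cost_fn=10.0, cost_fp=1.0):
    n = tp + fn + fp + tn
    mmce = (fn + fp) / n                       # mean misclassification error
    sens = tp / (tp + fn)                      # sensitivity (true positive rate)
    spec = tn / (tn + fp)                      # specificity (true negative rate)
    cost = (cost_fn * fn + cost_fp * fp) / n   # cost-weighted error
    return mmce, sens, spec, cost

for name, cm in {"model A": (40, 10, 15, 435), "model B": (25, 25, 0, 450)}.items():
    mmce, sens, spec, cost = summarize(*cm)
    print(f"{name}: MMCE={mmce:.3f}  SENS={sens:.3f}  SPEC={spec:.3f}  cost={cost:.3f}")
```

Both toy models share MMCE = 0.05, but under the assumed 10:1 cost ratio model A (higher SENS) incurs less than half the cost-weighted error of model B.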
“…AUC is based on the receiver operating characteristic (ROC) curve, which plots SENS against 1 − SPEC. Each point on the curve results from a different prediction threshold; that is, we predict class 1 if the predicted probability for class 1 exceeds the threshold (0.5 by default). For more advanced techniques, Sterner et al. (2021) give an introduction to cost-sensitive learning for psychologists with mlr3.…”
Section: Table 2 Confusion Matrix (mentioning)
confidence: 99%
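The threshold rule behind the ROC curve can be sketched as follows; scikit-learn in Python serves here purely as an illustration (the source itself points to mlr3 in R), and the toy labels and probabilities are invented.

```python
# Sketch of the threshold rule behind the ROC curve: predict class 1 when
# the predicted probability exceeds a threshold (0.5 by default); sweeping
# the threshold yields one (1 - SPEC, SENS) point per cut-off, and AUC
# summarizes the whole curve. Labels and probabilities are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9])

y_pred_default = (y_prob > 0.5).astype(int)       # default 0.5 threshold

fpr, tpr, thresholds = roc_curve(y_true, y_prob)  # fpr = 1 - SPEC, tpr = SENS
print("AUC:", roc_auc_score(y_true, y_prob))
```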