2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2018.00073
Detecting Compromised Implicit Association Test Results Using Supervised Learning

Abstract: An implicit association test is a human psychological test used to measure subconscious associations. While widely recognized by psychologists as an effective tool in measuring attitudes and biases, the validity of the results can be compromised if a subject does not follow the instructions or attempts to manipulate the outcome. Compared to previous work, we collect training data using a more generalized methodology. We train a variety of different classifiers to identify a participant's first attempt versus a…

Cited by 4 publications (16 citation statements). References 16 publications.
“…Machine Learning Is Able to Detect Fakers: Boldt et al. (2018) used naive Bayes, support vector machines, multinomial logistic regression, multilayer perceptron, simple logistic regression, a propositional rule learner, and random forest on data from a self-developed IAT and showed that machine learning was able to detect fakers successfully. Machine learning performed better than Agosta et al.'s (2011) IAT faking index.…”
Section: Status Quo Faking Detection With Machine Learning
confidence: 99%
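The classifier comparison described in the snippet above can be sketched with scikit-learn. Everything below is a hypothetical illustration, not the paper's actual pipeline: the feature set (mean latency, latency variability, error rate, D-score) and the synthetic honest/faked samples are assumptions, and the model list is a subset of the algorithms named in the citation.

```python
# Hedged sketch: compare several supervised learners on the task of
# separating honest from faked IAT results. Data is synthetic/hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-participant features: mean latency (ms), latency SD (ms),
# error rate, and the conventional IAT D-score.
honest = np.column_stack([
    rng.normal(700, 50, n), rng.normal(120, 20, n),
    rng.normal(0.05, 0.02, n), rng.normal(0.4, 0.2, n)])
faked = np.column_stack([
    rng.normal(950, 90, n), rng.normal(260, 40, n),
    rng.normal(0.12, 0.04, n), rng.normal(-0.1, 0.3, n)])
X = np.vstack([honest, faked])
y = np.array([0] * n + [1] * n)  # 0 = honest attempt, 1 = faked attempt

models = {
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "multilayer perceptron": MLPClassifier(max_iter=2000, random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}
# 5-fold cross-validated accuracy per model.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in models.items()}
```

With clearly separated synthetic classes like these, most of the listed learners score well; on real IAT data the ranking reported by the cited work would depend on the actual feature engineering.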
“…There is more faking of low scores than of high scores, and thus, classifiers should be better at detecting faked low scores than at detecting faked high scores. However, previous studies have either included only one faking direction (i.e., faking good; Calanna et al., 2020) or did not distinguish between faking directions (Boldt et al., 2018). Third, faking differs between naive and informed conditions (e.g., Röhner, 2013), and there is more evidence of faking when participants have information than when they are naive (Röhner et al., 2011).…”
Section: Shortcomings and Open Questions
confidence: 98%
“…Taking a look at the importance of the features offers insights into what (most) fakers did and whether their behavior varied across conditions.…”
Section: Feature Importance
confidence: 99%