2016
DOI: 10.1371/journal.pone.0157914

The Success of Linear Bootstrapping Models: Decision Domain-, Expertise-, and Criterion-Specific Meta-Analysis

Abstract: The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl’s (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study,…
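To make the idea of bootstrapping concrete, here is a minimal illustrative sketch (not the paper's procedure; the cues, weights, and data are hypothetical): regress a judge's past holistic ratings on the cues the judge saw, then let the fitted linear equation stand in for the judge on new cases.

```python
import numpy as np

# Illustrative sketch of judgmental bootstrapping (hypothetical cues, weights,
# and data; not the paper's procedure): fit a linear model to a judge's past
# holistic ratings, then use the fitted equation in place of the judge.

rng = np.random.default_rng(0)

# 50 past cases, 3 cues per case (e.g., test score, prior grade, motivation).
cues = rng.normal(size=(50, 3))
# The judge applies implicit weights, but inconsistently (added noise).
judge_rating = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=50)

# "Bootstrap" the judge: ordinary least squares of the ratings on the cues.
X = np.column_stack([np.ones(len(cues)), cues])          # add an intercept
beta, *_ = np.linalg.lstsq(X, judge_rating, rcond=None)

# The fitted equation now stands in for the judge on a new case.
new_case = np.array([1.0, 0.2, -1.1, 0.4])               # intercept + 3 cues
print("bootstrapped model's rating:", round(float(new_case @ beta), 2))
```

Because the fitted equation applies the judge's own implicit weights with perfect consistency, it can out-predict the judge it was built from; how far that holds across decision domains, expertise levels, and accuracy criteria is what this meta-analysis examines.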

Cited by 18 publications (12 citation statements)
References 54 publications (90 reference statements)
“…The fact that teachers think that human sources are more reliable, accurate, transparent, trustworthy, and beneficial for the student goes a long way in explaining their behavior. Although expert models improve judgment and decision making (e.g., Kaufmann & Wittmann, 2016; Meehl, 1954), it seems that their potential is not recognized within the education field. Thus, the answer to our second question is that, in line with other domains (Dietvorst et al., 2015; Önkal et al., 2009), everything else being equal, teachers favor advice from human sources over computerized expert systems.…”
Section: Discussion (mentioning, confidence: 99%)
“…Good sources of such advice are expert models—formal decision‐making tools, such as mathematical models or algorithms that base their advice on analyses of large databases and can incorporate and combine multiple sources of information (Meehl, 1954). A review of the accuracy of judgments by teachers and such expert models found that models are more accurate and showed that teachers could benefit from expert model advice (see Kaufmann & Wittmann, 2016). Evidence from other fields suggests that the acceptance of expert models varies widely (see Dietvorst, Simmons, & Massey, 2015; Önkal, Goodwin, Thomson, Gönül, & Pollock, 2009; see also Logg, Minson, & Moore, 2019).…”
Section: Research Questions (mentioning, confidence: 99%)
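As a toy illustration of what such an expert model looks like in practice (the cues, weights, and cutoff below are hypothetical, not taken from Kaufmann & Wittmann or the other cited work), a transparent weighted rule can combine several sources of information about a student into one piece of advice:

```python
# Toy illustration of an "expert model" as a transparent decision rule that
# combines several sources of information; the cues, weights, and cutoff are
# hypothetical, not taken from the cited studies.

def expert_model_advice(test_score: float, prior_grade: float, attendance: float) -> str:
    """Return advice for a student from a fixed weighted combination of cues.
    In practice the weights would be estimated from a large database of cases."""
    score = 0.5 * test_score + 0.3 * prior_grade + 0.2 * attendance
    return "recommend advanced track" if score >= 0.7 else "recommend standard track"

print(expert_model_advice(test_score=0.8, prior_grade=0.7, attendance=0.9))
```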
“…The evaluation challenge is in line with the standard application of the Brunswikian Lens Model to (social) judgment theory (Karelaia and Hogarth 2008), bootstrapping (Camerer 1981; Kaufmann and Wittmann 2016), and behavioral decision modeling (Bose 2015; Glöckner and Betsch 2012; Slovic et al. 1977). Many of the above-mentioned planning groups utilized MAUT to identify aspects of consent and dissent among different stakeholder groups for coping with tradeoffs among values (see, e.g., …).…”
Section: The Theory of Probabilistic Functionalism and Multi-Criteria… (mentioning, confidence: 96%)
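For readers unfamiliar with the lens model framework cited above, the standard lens model equation (as commonly stated in this literature; notation given from memory, not quoted from the cited paper) decomposes a judge's achievement as:

```latex
% Lens model equation (standard form in this literature; stated from memory,
% not quoted from the cited paper):
%   r_a : judge-criterion correlation (achievement)
%   R_s : predictability of the judge from the cues
%   R_e : predictability of the criterion from the cues
%   G   : correlation between the two linear components
%   C   : correlation between the two residuals
r_a = G \, R_s \, R_e + C \sqrt{1 - R_s^{2}} \, \sqrt{1 - R_e^{2}}
```

Under least-squares fits on the same cue set, the bootstrapped model of the judge attains a validity of roughly G·R_e, which is why replacing the judge with the model keeps the valid linear policy while dropping the unreliable residual component.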
“…Calculators and analytical tools make fewer mathematical errors than human problem solvers, and in more uncertain domains, algorithmic forecasters outperform human forecasters on average (Dawes, Faust, & Meehl, 1989; Meehl, 1954). Meta-analyses investigating predictions in the domains of clinical health (Ægisdóttir et al., 2006); human health and behavior (Grove, Zald, Lebow, Snitz, & Nelson, 2000); medicine, business, psychology, and education (Kaufmann & Wittmann, 2016); and hiring and academic admissions (Kuncel, Klieger, Connelly, & Ones, 2013) have found that algorithms consistently outperform human judgment. Even simplistic linear models beat experts (Dawes, 1979), and models designed to distill an expert’s prediction process almost always outperform the expert they were based on (Camerer, 1981).…”
(mentioning, confidence: 99%)
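To illustrate the "simplistic linear models" point mechanically (a synthetic sketch with made-up weights and noise levels, not data or results from the cited studies), the snippet below simulates a criterion, a somewhat inconsistent expert, and a Dawes-style unit-weight model, then compares their validities:

```python
import numpy as np

# Synthetic sketch of the "simplistic linear models beat experts" point
# (Dawes-style unit weighting); weights and noise levels are made up, so the
# numbers are illustrative, not results from the cited studies.

rng = np.random.default_rng(1)
n_cases, n_cues = 400, 4
cues = rng.normal(size=(n_cases, n_cues))
true_weights = np.array([0.5, 0.3, 0.15, 0.05])
criterion = cues @ true_weights + rng.normal(scale=1.0, size=n_cases)

# A hypothetical expert: right cues, but applied inconsistently (extra noise).
expert_judgment = cues @ true_weights + rng.normal(scale=1.5, size=n_cases)

# Dawes-style improper model: standardize each cue and add with equal weights.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
unit_weight_pred = z.sum(axis=1)

def validity(pred, crit):
    """Correlation between predictions and the criterion."""
    return np.corrcoef(pred, crit)[0, 1]

print("simulated expert validity: ", round(validity(expert_judgment, criterion), 2))
print("unit-weight model validity:", round(validity(unit_weight_pred, criterion), 2))
```

With the noise levels assumed here the unit-weight model comes out ahead of the simulated expert; the size of any such gap in real judgment data is what the meta-analyses cited above estimate.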