2019
DOI: 10.1016/j.obhdp.2018.12.005

Algorithm appreciation: People prefer algorithmic to human judgment

Cited by 802 publications (674 citation statements). References 58 publications.

“…While robo-fund aversion (see Fig. 2 for an illustration of this aversion on pooled data) is consistent with the literature on algorithm aversion (Dietvorst et al., 2015) and with aversion towards machines making moral decisions (Bigman & Gray, 2018), it should not be treated as an obvious finding, considering that researchers have shown algorithm appreciation under certain conditions (Logg et al., 2019) and were unable to show algorithm aversion in financial investment (Germann & Merkle, 2019). We should also add that while an internal meta-analysis (Goh et al., 2016) suggests a mean effect size of d_weighted = -0.58 across all three studies, our use of the term "computer algorithm" can be considered a conservative measure.…”
Section: Discussion (supporting)
confidence: 73%
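
For context on the weighted effect size quoted above: an internal ("mini") meta-analysis in the sense of Goh et al. (2016) typically averages effect sizes across a paper's own studies, weighting each study's d by its sample size. The short Python sketch below illustrates only that arithmetic; the per-study d values and sample sizes are hypothetical placeholders, not the figures from the cited studies.

    # Sample-size-weighted mean effect size, in the spirit of an internal
    # (mini) meta-analysis (Goh et al., 2016).
    # The per-study d values and sample sizes are hypothetical placeholders.
    studies = [
        {"d": -0.45, "n": 120},  # hypothetical Study 1
        {"d": -0.60, "n": 150},  # hypothetical Study 2
        {"d": -0.65, "n": 130},  # hypothetical Study 3
    ]

    total_n = sum(s["n"] for s in studies)                        # total sample size
    d_weighted = sum(s["d"] * s["n"] for s in studies) / total_n  # weighted mean d

    print(f"d_weighted = {d_weighted:.2f}")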
“…Although a theory of (algorithmic) mind (cf. theory of machine, Logg, Minson, & Moore, 2019) naturally applies to the idea of algorithmic literacy, its connections to other themes are also apparent: Does an accurate internal model of an algorithm's perceptions moderate the degree to which a human user feels a need for control, the degree to which a user requires extrinsic incentivization, the degree to which a user is capable of integrating an algorithm's decision process, or the degree to which a user is able to align with an algorithm's rational decision outcome?…”
Section: Discussion (mentioning)
confidence: 99%
“…In some recent studies, Logg [2,3] has shown that people trust a machine more than other people when they need to make a decision in an objective context (e.g., when they are looking for information). In other studies, in subjective contexts (e.g., looking for book recommendations or joke recommendations), people tend to rely more on other human beings [4,5].…”
Section: Theoretical Framework (mentioning)
confidence: 99%
“…Given the theoretical framework described above, and in particular following Logg's [2,3] results on decision outcomes, we expected that a suggestion provided by an intelligent machine (in a technical and relatively complicated decision task) would reduce the user's opportunity to blame the advisor and should therefore produce lower ratings of counterfactual emotions and responsibility.…”
Section: Hypotheses (mentioning)
confidence: 99%