True and false dependence on technology: Evaluation with an expert system (1991)
DOI: 10.1016/0747-5632(91)90006-m

Cited by 28 publications (13 citation statements)
References 12 publications
“…However, other studies have found that the benefits were not uniform because of malfunctions or failures. For example, Will (1991) showed a decrement in experts' judgment performance when they were given falsified information, a finding consistent with Riley (1996).…”
Section: Automated Decision Aids and Human Judgment Performance (supporting; confidence: 72%)
“…Although the importance of automated decision aids in assisting human operators' judgment and decision-making tasks has been emphasized over decades, there has been relatively little research directly investigating how decision-aid quality impacts human performance. Will (1991) investigated false dependence on technology, performing an experiment with reservoir petroleum engineers in which misleading suggestions were provided about the appropriate methodology to address a given problem situation. Falsified reasoning explanations in an expert information system were provided to investigate their impact on decision confidence and performance depending on the level of expertise.…”
Section: Article In Press (mentioning; confidence: 99%)
“…There is little evidence that users are generally in awe of computers' decision‐making or advisory abilities; nonetheless, they may have high expectations, as the very name of one class of systems—“expert” systems—suggests. As Winograd and Flores (1986, p. 132) point out, “When we talk of a human ‘expert’ we connote someone whose depth of understanding serves not only to solve specific well‐formulated problems, but also to put them into a larger context.” Words such as “intelligence,” “knowledge,” and “understanding” also carry connotations above and beyond expert systems' capabilities and thus may obscure their inherent limitations (Will, 1991; Winograd & Flores, 1986). Unlike human experts, expert systems are often “brittle” in the sense of being unable either to cope with small deviations from their programmed expertise or to apply broader contextual knowledge and common sense to novel situations.…”
Section: Multidimensional Approaches To Credibility (mentioning; confidence: 99%)
“…Anecdotal and empirical evidence have also delineated the nature of the cognitive process through which people assign credibility to computer-based information. Will (1991) suggested that the use of automated systems may subtly induce experts to suspend critical analysis and instead simply accept the information provided by the system. McGuire, Kiesler, and Siegel (1987) found that the use of automated decision aids reduces the extent of group discussion and the amount of information exchanged between group members, suggesting that computer-based information is accepted at face value.…”
(mentioning; confidence: 99%)