The Stakes of Uncertainty: Developing and Integrating Machine Learning in Clinical Care
2018
DOI: 10.1111/1559-8918.2018.01213

Abstract: The widespread deployment of machine learning tools within healthcare is on the horizon. However, the hype around "AI" tends to divert attention toward the spectacular, and away from the more mundane and ground-level aspects of new technologies that shape technological adoption and integration. This paper examines the development of a machine learning-driven sepsis risk detection tool in a hospital Emergency Department in order to interrogate the contingent and deeply contextual ways in which AI technologies …

Cited by 40 publications (55 citation statements)
References 24 publications
“…For example, if a model was able to achieve perfect performance on a prediction task or when it is obvious whether the model correctly predicts the outcome (e.g., image classifications that can be verified by visual inspection), explanations might be considered unnecessary. Often, the argument for explanations centers around instilling user trust in the model; however, Elish [10] argues that trust is not predicated on model interpretability, and can instead be developed by involving stakeholders throughout the model development process. Even if explanations are not required to verify model accuracy or instill trust in a model, they may still prove valuable by providing actionable information.…”
Section: Discussion
confidence: 99%
“…The ensuing decades saw the salad days of what the analytic philosopher John Haugeland names as "Good Old Fashioned AI" (GOFAI). Within the GOFAI paradigm, artificial intelligence essentially referred to "procedural, logic-based reasoning and the capacity to manipulate abstract symbolic representations" [4]. For instance, the commercial "expert systems" of the 1970s and 1980s are typically understood as exemplifying GOFAI.…”
Section: Defining AI and Machine Learning, 2.1 Defining AI Systems
confidence: 99%
“…Critical to understanding AI's consequences for epistemology and social practice, anthropologist of technology M.C. Elish stresses that "the datasets and models used in these systems are not objective representations of reality" as systems that utilize machine learning techniques "can only be thought to 'know' something in the sense that it can correlate certain relevant variables accurately" [4].…”
Section: Defining AI and Machine Learning, 2.1 Defining AI Systems
confidence: 99%
“…This kind of thinking can reveal how and why complex social problems cannot be addressed with new metrics or algorithms alone. If the "smartness" of AI lies, as Clare Elish writes, in its power to process patterns and numbers with statistics (Elish 2018), then anthropologists need to play a role in the creation and deployment of statistical systems. Anthropologists must widen their horizons to focus not just on users and designs, but also on the machine learning algorithms, data architectures, and institutional hierarchies that make up data-driven organizations.…”
Section: Conclusion: Ethnographic Empowerment in Data-Intensive Environments
confidence: 99%