2017
DOI: 10.1001/jama.2017.7797

Unintended Consequences of Machine Learning in Medicine


Cited by 630 publications (426 citation statements)
References 10 publications
“…The consequences of automation on human performance can pose serious safety concerns, with risks depending on the level of automation and the type of automated function [10, 56, 57]. Therefore, the use of conversational agents with unconstrained natural language input capabilities and other artificial intelligence applications in healthcare needs to be carefully monitored [57, 58]…”
Section: Discussion (mentioning)
confidence: 99%
“…The paradox is that these methods, despite their advantages, are far from universal acceptance in medical practice. Arguably, one of the reasons is precisely (lack of) interpretability, expressed as "the need to open the machine learning black box" [13]. As already mentioned, DL-based technologies can worsen the problem, despite having already found their way into biomedicine and healthcare [14,15].…”
Section: Interpretability and Explainability (mentioning)
confidence: 99%
“…It has been argued, though, that this application might lead to a reduction of skills among medical experts. This negative consequence of the use of ML methods in medicine has been described as ML methods' undue "focus on text and the demise of context" [12]. The second example involves the implementation of a European Union directive for general data protection regulation that will be enforced in 2018 and mandates a right to explanation of all decisions made by automated or AI algorithmic systems [13].…”
Section: Present Artificial Intelligence In Medicine: Challenges For … (mentioning)
confidence: 99%
“…Three main challenges for the application of ML in medicine have recently been listed [12], and one of them is precisely interpretability, expressed as "the need to open the machine learning black box." Not that this is a new challenge for ML, because the black box syndrome was already on the table decades ago [16].…”
Section: Present Artificial Intelligence In Medicine: Challenges For … (mentioning)
confidence: 99%