2018
DOI: 10.21552/edpl/2018/3/9
Artificial Intelligence in Medical Diagnoses and the Right to Explanation

Cited by 12 publications (7 citation statements); references 0 publications.
“…For instance, in relation to AI-based recommendations and decisions made by automated vehicle technologies responding to unavoidable road traffic accidents (Cunneen et al, 2019). Concerns have emerged about the use of AI-based recommendations related to the accuracy of medical diagnosis and prognosis (Jain et al, 2020; Thrall et al, 2021), how inaccurate AI-based healthcare recommendations may adversely impact levels of trust between physicians and patients (Hoeren & Niehoff, 2018), as well as new technology acceptance levels among users (Fan et al, 2018). While technology adoption and acceptance issues are raised as potential adverse consequences of questioning AI-based recommendations, the discussion of how espoused national cultural values might affect levels of healthcare IS artifact adoption or acceptance is outside the scope of this study.…”
Section: Literature Review
confidence: 99%
“…Artificial intelligence and machine learning applications are complex and known for their black-box nature, providing predictions without sufficient explanation [52]. Beyond accurate prediction and a reduced workload, trust in algorithmic decisions is essential, especially in medicine and healthcare research [53]. To overcome the lack of transparency, interpreting machine learning models and their decision-making processes has become a growing focus among academic and industrial machine learning experts [54].…”
Section: Discussion
confidence: 99%
“…Such balance is generally guaranteed by data protection legislation such as the Health Insurance Portability and Accountability Act 1996 (HIPAA) of the USA and the General Data Protection Regulation 2016 (GDPR) of the European Union (EU). The effectiveness of this legislation in the environment of big data analytics is still debatable (Casey et al., 2019; Gil González and de Hert, 2019; Hoeren and Niehoff, 2018; Kesa and Kerikmae, 2020; McGraw and Mandl, 2021; Turpin et al., 2020). Despite these concerns, artificial intelligence technologies have already shown outstanding results in medicine and are expected to bring further innovations to medical technologies and future healthcare (Park et al., 2020).…”
Section: Related Work
confidence: 99%