DOI: 10.3990/1.9789036555753

Explainable AI and interpretable computer vision: from oversight to insight

Abstract: The last decades have seen rapid development of advanced machine learning (ML) models, considered a subarea of Artificial Intelligence (AI). Massive computing power and growing data availability have allowed the training of deep artificial neural networks, which learn by themselves, finding task-relevant patterns in data. The size and complexity of these deep learning models grew over the years in pursuit of predictive performance. However, such black boxes prevent users from assessing whether the learned behav…

Cited by 2 publications (2 citation statements)
References 365 publications (693 reference statements)
“…However, it is crucial to ensure that these systems support human users and operators, who bear the final responsibility for decision-making. The models should also act as "super-assistants," providing an additional layer of insight and analysis to aid healthcare professionals in their decision-making process [8]. Yet with these benefits, the limitations of AI cannot be ignored [9].…”
Section: Introduction (mentioning)
confidence: 99%
“…While some progress has been achieved in this endeavor, substantial work remains to be done to successfully dismantle the "black box" phenomenon. This would enable explicability, a critical aspect in comprehending and justifying the responses provided by AI systems, as explored by [14] in their work on understanding AI reasoning.…”
(mentioning)
confidence: 99%