2023
DOI: 10.1007/s12599-023-00806-x

Explanatory Interactive Machine Learning

Abstract: The most promising standard machine learning methods can deliver highly accurate classification results, often outperforming standard white-box methods. However, it is hardly possible for humans to fully understand the rationale behind the black-box results, and thus, these powerful methods hamper the creation of new knowledge on the part of humans and the broader acceptance of this technology. Explainable Artificial Intelligence attempts to overcome this problem by making the results more interpretable, while…

Cited by 9 publications (3 citation statements)
References 80 publications
“…For example, CNNs are applied for image-based cancer detection (e.g., Haenssle et al. 2018). Within our literature population, two studies in which image data is processed with AI address health-related issues (Pfeuffer et al. 2023; Zhang and Ram 2020). Another field that provides interesting use cases for computer vision is social media.…”
Section: Input Data
confidence: 99%
“…In applications such as medical image classification, deep learning models have been observed to focus on non-relevant or confounding parts of medical images, such as artifacts, for their classification or prediction outputs [11], [12]. In addition to promoting a transparent learning process, XBL has the potential to unlearn such wrong correlations, termed confounding regions, confounders, or spurious correlations (used interchangeably in this paper) [12], [13]; confounding regions are parts of training instances that are not correlated with a category but are incorrectly assumed to be so by a learner.…”
Section: A. Explanation Based Learning
confidence: 99%
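The unlearning of confounders described in that statement is commonly implemented as an attribution penalty added to the classification loss, in the spirit of "right for the right reasons" style objectives. The sketch below is a minimal illustration under that assumption; the names `model`, `confounder_mask`, and `lambda_expl` are illustrative, not taken from the cited papers.

```python
# Minimal sketch: unlearning confounders via an input-gradient penalty.
# Assumes `model` is any differentiable classifier and `confounder_mask`
# marks annotated confounding regions (1 = confounder, 0 = relevant).
import torch
import torch.nn.functional as F

def explanation_loss(model, x, y, confounder_mask, lambda_expl=10.0):
    """Cross-entropy plus a penalty on attributions inside confounding regions."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Input-gradient attribution: how strongly each input feature
    # influences the predicted log-probabilities.
    log_probs = F.log_softmax(logits, dim=1)
    grads = torch.autograd.grad(log_probs.sum(), x, create_graph=True)[0]
    # Penalize attribution mass that falls on annotated confounders,
    # pushing the model to be right for the right reasons.
    penalty = (confounder_mask * grads).pow(2).sum()
    return ce + lambda_expl * penalty
```

Training on this combined loss preserves classification accuracy through the cross-entropy term while the second term drives the model's attention away from the annotated confounding regions.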
“…It has received much attention over the past few years. The purpose of a model explanation is to clarify why the model makes a certain prediction, to increase confidence in the model’s predictions [10], and to describe exactly how a machine learning model achieves its properties [11]. Therefore, using machine learning explanations can increase the transparency, interpretability, fairness, robustness, privacy, trust and reliability of machine learning models.…”
Section: Introduction
confidence: 99%
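As a concrete illustration of the first purpose named in that statement, clarifying why a model makes a certain prediction, a plain gradient saliency map is one of the simplest explanation methods. The sketch below assumes an arbitrary differentiable classifier `model` and is illustrative only, not the method of any paper cited here.

```python
# Hypothetical sketch: a gradient saliency map, showing which input
# features most influenced a particular class score.
import torch

def saliency_map(model, x, target_class):
    """Absolute gradient of the target-class score w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]  # scalar score for the class of interest
    score.backward()
    return x.grad.abs().squeeze(0)     # larger values = more influential features
```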