Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization 2020
DOI: 10.1145/3340631.3394873

More Than Accuracy: Towards Trustworthy Machine Learning Interfaces for Object Recognition

Abstract: This paper investigates the user experience of visualizations of a machine learning (ML) system that recognizes objects in images. This is important since even good systems can fail in unexpected ways, as misclassifications on photo-sharing websites have shown. In our study, we exposed users with a background in ML to three visualizations of three systems with different levels of accuracy. In interviews, we explored how the visualization helped users assess the accuracy of systems in use and how the visualization a…

Cited by 8 publications (4 citation statements)
References 35 publications

Citation statements
“…However, a pilot study in October 2020 provided evidence that explanations like "The average number of words per sentence is low", the "Usage of perceptual words related to hear is low" and "The amount of words related to people is high" are not perceived as helpful by users. Even educated participants had trouble understanding explanations of ML-based systems, a problem that I have previously reported on in the context of ML-based curation systems [3] and object recognition systems [6].…”
Section: How People Can Be Best Supported (mentioning)
confidence: 98%
“…The emergence of complex, opaque, and invisible algorithms that learn from data motivated a variety of investigations, including: algorithm awareness, clarity, variance, and bias [94]. Algorithmic bias for instance, whether it occurs in an unintentional or intentional manner, is found to severely limit the performance of an AI model.…”
Section: Recommendations and the Future of AI Assurance (mentioning)
confidence: 99%
“…We included the system predictions as intuitive explanations because they present the predictions in a format that is similar to how news recommendations are encountered by users [35,4,32]. We also presented the participants with the three most important Performance Metrics for ML systems: accuracy, precision, and recall [17,22]. Accuracy is defined as the percentage of correctly predicted news, i.e.…”
Section: Explanations for ML-based Curation System (mentioning)
confidence: 99%
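
The last citation statement above names accuracy, precision, and recall as the performance metrics shown to participants. For reference, the following is a minimal Python sketch of the standard definitions of these metrics; the function name, variable names, and example data are illustrative and are not taken from the cited papers or from the original study.

# Sketch of the three metrics mentioned above (accuracy, precision, recall)
# for binary labels, where 1 = "item is relevant" and 0 = "item is not relevant".
def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall) for two equal-length 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

    accuracy = (tp + tn) / len(y_true)                    # share of correct predictions overall
    precision = tp / (tp + fp) if (tp + fp) else 0.0      # correct among predicted positives
    recall = tp / (tp + fn) if (tp + fn) else 0.0         # correct among actual positives
    return accuracy, precision, recall

# Hypothetical example with six predictions:
acc, prec, rec = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")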