26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450694
How to Support Users in Understanding Intelligent Systems? Structuring the Discussion

Abstract: The opaque nature of many intelligent systems violates established usability principles and thus presents a challenge for human-computer interaction. Research in the field therefore highlights the need for transparency, scrutability, intelligibility, interpretability and explainability, among others. While all of these terms carry a vision of supporting users in understanding intelligent systems, the underlying notions and assumptions about users and their interaction with the system often remain unclear. We rev…

Cited by 24 publications (10 citation statements)
References 86 publications (219 reference statements)
“…However, it is essential that users understand the AI partner in order to build an engaging and trustworthy partnership. Many intelligent systems lack core interaction design principles such as transparency and explainability, which makes them hard to understand and use [49]. To address the challenge of transparency, AI interaction should be designed to support users in understanding and dealing with intelligent systems despite their complex black-box nature.…”
Section: Discussion
Mentioning confidence: 99%
“…However, it is insufficient for an in-depth understanding of the underlying dimensions. Using the terms of recent work on understanding of intelligent systems [9], previews mainly facilitated interaction knowledge: users learned how to use the system effectively, without necessarily being able to explain it explicitly. Here, text labels may help, yet they require an interpretable model.…”
Section: Discovering Interpretations Of The Control Dimensions
Mentioning confidence: 99%
“…It is important to note that different issues and considerations arise from varying concerns. For instance, it is important to ensure that the AI system is not difficult to use or explain to users, difficult to manage or maintain, or perceived as creepy by potential users (as noted above) [29,131]. Additionally, such systems will need awareness and capabilities supporting numerous types of intelligence, such as social, emotional, and physical intelligence.…”
Section: Orthogonal HCAI Concepts
Mentioning confidence: 99%