2021
DOI: 10.1111/cgf.14329

A Survey of Human‐Centered Evaluations in Human‐Centered Machine Learning

Abstract: Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human‐centered machine learning. We particularly focus on human‐related factors that influence trust, interpre…

Cited by 39 publications (39 citation statements). References 121 publications (296 reference statements).
“…Evaluation of interactive machine learning. Several works have looked into evaluating IAI systems (see Boukhelifa, Bezerianos, and Lutton 2018; Sperrle et al. 2021 for recent surveys). These range from human-centered evaluations (focusing on user experience) to algorithm-centered evaluations (studying the robustness of the underlying algorithms).…”
Section: Related Work
confidence: 99%
“…Note that this work focuses on evaluating the backend of an IAI system and its ability to consistently learn high-quality models, which falls under the class of algorithm-centered evaluation approaches (Boukhelifa, Bezerianos, and Lutton 2018). IAI systems can also be evaluated on user experience, leading to human-centered evaluations (Sperrle et al. 2021), which, however, are outside the scope of this work. For additional discussions on related work, please refer to the corresponding section at the end of the paper.…”
Section: Introduction
confidence: 99%
“…Kaluarachchi et al. [11] presented a survey on deep learning approaches in HCML and concluded that HCML models are developed mainly to serve human needs. Sperrle et al. [12] introduced a comprehensive survey of human-centered evaluations and emphasized that humans and machines are equally important throughout the development process. However, a major challenge is how exactly to integrate human mental models into the machine learning process and how to evaluate HCML models [12].…”
Section: Introduction
confidence: 99%
“…Following Bubeck [14], we use the term online optimization rather than online learning. In this work, we argue that the combination of the fourfold HCML [12], quantified human mental models [10], and online optimization [14], coupled with the self-attention mechanism [15], can enable adaptive TKG learning for link prediction. We aim to develop a novel HCML framework that acquires relevant knowledge for TKG representation and uses it for adaptive TKG learning, such that link prediction accuracy can be optimized over time.…”
Section: Introduction
confidence: 99%
“…Related work. Several works have looked into evaluating IAI systems (see Boukhelifa, Bezerianos, and Lutton 2018; Sperrle et al. 2021 for surveys). These range from human-centered evaluations (focusing on user experience) to algorithm-centered evaluations (studying the robustness of the underlying algorithms).…”
Section: Introduction
confidence: 99%