Designing Interactive Systems Conference 2022
DOI: 10.1145/3532106.3533556

“Why Do I Care What’s Similar?” Probing Challenges in AI-Assisted Child Welfare Decision-Making through Worker-AI Interface Design Concepts

Abstract: Data-driven AI systems are increasingly used to augment human decision-making in complex, social contexts, such as social work or legal practice. Yet, most existing design knowledge regarding how to best support AI-augmented decision-making comes from studies in comparatively well-defined settings. In this paper, we present findings from design interviews with 12 social workers who use an algorithmic decision support tool (ADS) to assist their day-to-day child maltreatment screening decisions. We generated a ran…

Cited by 22 publications (29 citation statements)
References: 46 publications
“…Yet in many real-world settings where AI-based decision support tools are used, the total number of model-observable features and relevant unobservables is significantly larger. For example, call workers tasked with screening child maltreatment calls have hundreds of features available to them [8,28]. In settings where a larger number of features are present, it is possible that highlighting the pieces of information that are not available to an AI model would have a greater effect in improving their predictive performance.…”
Section: Discussion and Future Work
confidence: 99%
“…Recent research has also demonstrated the potential of information asymmetry as a source of complementary human-AI performance, showing that humans can successfully integrate contextual information when making use of AI recommendations [5,11,21,22]. However, in many settings human decision-makers do not have a clear understanding of what information is and is not available to the algorithm [24,27,28], which may hinder their ability to integrate contextual information. Furthermore, little is known regarding how to best design ADS to support human decision-makers in effectively integrating across AI outputs and unobservables.…”
Section: Algorithmic Prediction Under Unobservables
confidence: 99%