2022
DOI: 10.48550/arxiv.2204.10806
Preprint

A Unifying Framework for Combining Complementary Strengths of Humans and ML toward Better Predictive Decision-Making

Abstract: Hybrid human-ML systems are increasingly in charge of consequential decisions in a wide range of domains. A growing body of work has advanced our understanding of these systems by providing empirical and theoretical analyses. However, existing empirical results are mixed, and theoretical proposals are often incompatible with each other. Our goal in this work is to bring much-needed organization to this field by offering a unifying framework for understanding conditions under which combining complementary strengths…

Cited by 7 publications (8 citation statements)
References 64 publications
“…We speculate that this may be because humans with contextual knowledge (see Appendix C) are better at certain cases that may involve domain shifts. Prior work has shown that deep neural networks are prone to fail under dataset shifts [36,68,72] and lack the contextual knowledge and commonsense reasoning that humans have [50,73]. Our conjecture aligns with a prior exploration that observed amateur player-machine teams can outperform machines alone and grandmasters alone in chess [42].…”
Section: Within-instance Complementarity Between Human and AI (supporting)
confidence: 85%
“…There have been efforts to model human-machine collaborations in both economics and computer science. Rastogi et al [2022] and Donahue et al [2022] investigate optimal ways to aggregate independent algorithmic and human predictions, and the conditions under which these aggregations outperform the individual predictions. Straitouri et al [2022] propose an algorithm that selects an optimal set of candidate labels and presents it to a human expert for final selection.…”
Section: Agent Decision (mentioning)
confidence: 99%
“…This research is broadly related to two streams of literature: papers providing conceptual or theoretical frameworks for human-AI collaboration, and empirical papers documenting real-world interactions between AI and human analysts. In the first stream, recent literature in computer science aims at the optimal integration of human and AI decisions (Bansal et al 2021, 2019, Donahue et al 2022, Gao et al 2021, Keswani et al 2021, Madras et al 2018, Mozannar and Sontag 2020, Raghu et al 2019, Rastogi et al 2022, Wilder et al 2020). On the one hand, Madras et al (2018) propose a learning-to-defer framework in which the AI can either make a decision on its own or pass the task to the downstream human expert.…”
Section: Related Literature (mentioning)
confidence: 99%
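The learning-to-defer idea mentioned in the statement above can be sketched in a few lines. This is a minimal illustrative sketch, not the formulation from Madras et al (2018): the function names, the confidence-based routing rule, and the threshold are all my own assumptions.

```python
def defer_decision(x, model_confidence, human_predict, model_predict,
                   threshold=0.8):
    """Toy deferral rule: the model answers when its self-reported
    confidence clears the threshold; otherwise the instance is passed
    to the human expert. Returns (prediction, who_decided)."""
    conf = model_confidence(x)
    if conf >= threshold:
        return model_predict(x), "model"
    return human_predict(x), "human"
```

In the actual framework, the deferral rule is learned jointly with the predictor rather than fixed by a hand-set threshold; the sketch only shows the routing structure.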
“…Follow-up papers extend the framework to more complex settings, such as multiple experts (Keswani et al 2021), bandit feedback (Gao et al 2021), and joint optimization of the prediction algorithm and the pass function (Mozannar and Sontag 2020, Wilder et al 2020). On the other hand, Donahue et al (2022) and Rastogi et al (2022) consider a weighted-average aggregation of human and AI decisions and show conditions for human-AI complementarity under which the aggregated decision outperforms both individual decisions. Recently, Grand-Clément and Pauphilet (2022) show that in sequential decision-making, the AI algorithm should be trained differently when a human analyst is involved.…”
Section: Related Literature (mentioning)
confidence: 99%
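The weighted-average aggregation described above admits a simple numerical illustration. The sketch below is my own, not code from the cited papers: it assumes human and model errors are independent and zero-mean, in which case a convex combination of the two predictions has lower error than either alone, which is one intuition behind the complementarity conditions.

```python
import numpy as np

def aggregate(human_pred, model_pred, w=0.5):
    """Convex combination of human and model predictions;
    w is the weight placed on the human."""
    human_pred = np.asarray(human_pred, dtype=float)
    model_pred = np.asarray(model_pred, dtype=float)
    return w * human_pred + (1.0 - w) * model_pred

# Synthetic demo: both predictors see the truth plus independent noise.
rng = np.random.default_rng(0)
truth = rng.normal(size=10_000)
human = truth + rng.normal(scale=1.0, size=truth.shape)
model = truth + rng.normal(scale=1.0, size=truth.shape)
combined = aggregate(human, model, w=0.5)

def mse(pred):
    return float(np.mean((pred - truth) ** 2))
```

With equal, independent noise the equal-weight average roughly halves the mean squared error of either predictor; when the errors are correlated or one predictor dominates, the optimal weight shifts and the gain can vanish, which is exactly the kind of condition the cited papers characterize.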