2021
DOI: 10.1609/aaai.v35i7.16738
Classification Under Human Assistance

Abstract: Most supervised learning models are trained for full automation. However, their predictions are sometimes worse than those by human experts on some specific instances. Motivated by this empirical observation, our goal is to design classifiers that are optimized to operate under different automation levels. More specifically, we focus on convex margin-based classifiers and first show that the problem is NP-hard. Then, we further show that, for support vector machines, the corresponding objective function can be…
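The setting the abstract describes, a classifier that operates at a chosen automation level by handing some instances to a human expert, can be illustrated with a simple confidence-based triage heuristic. This is only a sketch of the general idea, not the paper's optimization procedure; the margin values and `machine_budget` parameter below are made-up assumptions.

```python
import numpy as np

def split_by_automation_level(margins, machine_budget):
    """Given signed decision values f(x) of a trained margin-based
    classifier, keep the `machine_budget` most-confident instances for
    the machine and outsource the rest to the human expert."""
    confidence = np.abs(margins)
    order = np.argsort(-confidence)        # most confident first
    machine_idx = order[:machine_budget]   # handled automatically
    human_idx = order[machine_budget:]     # deferred to the human
    return machine_idx, human_idx

# Toy example: six instances, automation level fixed at four.
margins = np.array([2.1, -0.1, 0.7, -1.5, 0.05, 3.0])
machine_idx, human_idx = split_by_automation_level(margins, 4)
```

The paper's point is precisely that such triage should be optimized jointly with the classifier rather than bolted on afterwards, which is what makes the problem NP-hard.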

Cited by 11 publications (12 citation statements)
References 27 publications (31 reference statements)
“…To tackle this problem, we reformulate it into a cardinality-constrained set function optimization task. Subsequently, we introduce novel set function properties, (m, m)-partial monotonicity and (γ, γ)-weak submodularity, extending recent notions of partial monotonicity (Mualem and Feldman 2022) and approximate submodularity (Elenberg et al. 2018; Harshaw et al. 2019; De et al. 2021). These properties allow us to design a greedy algorithm, GENEX, to compute near-optimal feature subsets with new approximation guarantees.…”
Section: Discrete Continuous Training Framework (mentioning)
confidence: 99%
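The greedy scheme referred to in the citation above, maximizing a set function under a cardinality constraint, follows a standard template for which (weakly) submodular utilities admit approximation guarantees. GENEX itself is not reproduced here; the `utility` function and budget `k` below are placeholder assumptions for illustration.

```python
def greedy_subset(items, utility, k):
    """Greedy maximization of a set function under |S| <= k:
    repeatedly add the element with the largest marginal gain."""
    selected = []
    for _ in range(k):
        remaining = [i for i in items if i not in selected]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda i: utility(selected + [i]) - utility(selected))
        selected.append(best)
    return selected

# Toy modular utility over five candidate features: the marginal gain
# of each element is just its weight, so greedy picks the top-k weights.
weights = {0: 3.0, 1: 1.0, 2: 2.0, 3: 0.5, 4: 1.5}
utility = lambda S: sum(weights[i] for i in S)
chosen = greedy_subset(list(range(5)), utility, 3)
```

For truly modular utilities greedy is exact; the (γ, γ)-weak submodularity notion mentioned above is what extends this kind of guarantee to more general objectives.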
“…In this context, several approaches jointly train the classifier together with a deferral system (Madras, Pitassi, and Zemel 2018; Okati, De, and Rodriguez 2021; Wilder, Horvitz, and Kamar 2020). Further work utilizes objective functions with theoretical guarantees for regression (De et al. 2020) and classification tasks (De et al. 2021). Moreover, Mozannar and Sontag (2020) propose a consistent surrogate loss function inspired by cost-sensitive learning.…”
Section: Related Work (mentioning)
confidence: 99%
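The deferral systems mentioned above pair a predictive model with a routing rule that sends each instance either to the model or to the human. A minimal inference-time sketch follows; the comparison rule and both confidence estimates are illustrative assumptions, not the method of any cited paper.

```python
def route(model_conf, human_acc_estimate):
    """Defer to the human whenever the estimated human accuracy on
    this instance exceeds the model's confidence; otherwise let the
    model predict automatically."""
    return "human" if human_acc_estimate > model_conf else "model"

# Example: the model is 60% confident but the human is estimated to be
# 90% accurate on this instance, so the instance is deferred.
decision = route(0.6, 0.9)
```

Jointly training the classifier and this routing rule, rather than fixing the classifier first, is what distinguishes the learning-to-defer approaches cited above.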
“…Cognitive scientists have developed and tested different theories about the cognitive process underpinning responsibility judgments (Alicke, 2000; Chockler & Halpern, 2004; Gerstenberg & Lagnado, 2010; Shaver, 2012). However, the increasing development of AI systems that assist and collaborate with humans, rather than replacing them (Balazadeh Meresht et al., 2022; De et al., 2020, 2021; Mozannar et al., 2022; Okati et al., 2021; Raghu et al., 2019; Straitouri et al., 2021; Wilder et al., 2021), calls for more empirical and theoretical research to shed light on the way humans make responsibility judgments in situations involving human-AI teams (Cañas, 2022). Recent work in that area has identified several factors that influence responsibility judgments (Awad et al., 2020; Lima et al., 2021; Longin et al., 2023).…”
Section: Introduction (mentioning)
confidence: 99%