Proceedings of the 14th ACM International Conference on Web Search and Data Mining 2021
DOI: 10.1145/3437963.3441705

Providing Actionable Feedback in Hiring Marketplaces using Generative Adversarial Networks

Abstract: Machine learning predictors have been increasingly applied in production settings, including in one of the world's largest hiring platforms, Hired, to provide a better candidate and recruiter experience. The ability to provide actionable feedback is desirable for candidates to improve their chances of achieving success in the marketplace. Until recently, however, methods aimed at providing actionable feedback have been limited in terms of realism and latency. In this work, we demonstrate how, by applying a new…
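
For context on the truncated abstract: the paper's approach is GAN-based counterfactual generation for candidate profiles. The sketch below is a minimal illustration of that general idea, not the paper's implementation; the residual generator, the frozen stand-in classifier, the feature count, and the loss weights are all assumptions, and the discriminator term that would enforce realism is omitted for brevity.

# Illustrative sketch (not the paper's implementation): a residual generator
# proposes a small perturbation to a candidate's feature vector, and a frozen
# classifier scores how much the change improves the predicted outcome.
import torch
import torch.nn as nn

N_FEATURES = 16  # hypothetical feature count (salary, experience, etc.)

class ResidualGenerator(nn.Module):
    """Maps a profile x to a perturbation delta; the counterfactual is x + delta."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_features), nn.Tanh(),  # bounded edits keep changes small
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Frozen stand-in classifier: probability that a profile succeeds in the marketplace.
classifier = nn.Sequential(nn.Linear(N_FEATURES, 1), nn.Sigmoid())
for p in classifier.parameters():
    p.requires_grad_(False)

generator = ResidualGenerator(N_FEATURES)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

x = torch.randn(32, N_FEATURES)  # batch of candidate profiles (dummy data)
delta = generator(x)
counterfactual = x + delta

# The loss trades off reaching the target outcome against the size of the edit,
# so the suggested changes stay small and actionable.
target = torch.ones(32, 1)
loss = nn.functional.binary_cross_entropy(classifier(counterfactual), target) \
       + 0.1 * delta.abs().mean()
opt.zero_grad()
loss.backward()
opt.step()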

Cited by 7 publications (7 citation statements); citing publications appeared in 2022, 2022, 2024, and 2024.
References 5 publications (9 reference statements).

Citation statements (ordered by relevance):
“…Previous related works validated the proposed CEs with respect to explanations obtained with other local explainability methods, like Local Interpretable Model-agnostic Explanations (LIME) or Layer-wise Relevance Propagation (LRP) [3], [8], or with respect to other state-of-the-art methods for the generation of CEs [4], [9], [11]. Often, the validation measure relies on verifying that the CE is correctly associated with its target outcome, based on the prediction of a classifier.…”
Section: Explainability (mentioning)
confidence: 78%
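
The validation measure described in this statement — checking that a classifier assigns each counterfactual to its intended target class — is simple to state in code. A minimal sketch, assuming a fitted scikit-learn-style classifier clf with a predict method (all names are illustrative):

import numpy as np

def counterfactual_validity(clf, counterfactuals: np.ndarray,
                            target_labels: np.ndarray) -> float:
    """Fraction of counterfactuals the classifier maps to their target outcome."""
    predictions = clf.predict(counterfactuals)
    return float(np.mean(predictions == target_labels))

A validity of 1.0 means every counterfactual actually flips the model's prediction to the desired class; lower values indicate CEs that failed to reach their target outcome.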
“…For example, they can be generated in order to understand which changes in the characteristics of a medical image lead to a certain diagnosis of pathology (e.g., [8] and [5]). Another possible use of counterfactuals recently proposed in the literature [11] concerns their application to provide actionable feedback (e.g., realistic changes in expected salary or an increase in work-experience word count) to candidates in a hiring marketplace in order to improve their profiles.…”
Section: Explainability (mentioning)
confidence: 99%
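
To make the actionable-feedback use concrete: once a realistic counterfactual profile is found, the feedback shown to a candidate is essentially the per-feature difference from the original profile, rendered in readable terms. A hypothetical sketch — the feature names, units, and threshold are invented for illustration:

import numpy as np

FEATURE_NAMES = ["expected_salary", "experience_word_count", "years_experience"]

def feedback_from_counterfactual(original: np.ndarray,
                                 counterfactual: np.ndarray,
                                 min_change: float = 1e-3) -> list[str]:
    """Turn per-feature deltas into readable suggestions, skipping tiny changes."""
    suggestions = []
    for name, before, after in zip(FEATURE_NAMES, original, counterfactual):
        delta = after - before
        if abs(delta) < min_change:
            continue
        direction = "increase" if delta > 0 else "decrease"
        suggestions.append(f"{direction} {name} from {before:.1f} to {after:.1f}")
    return suggestions

# e.g. feedback_from_counterfactual(np.array([120.0, 80.0, 4.0]),
#                                   np.array([110.0, 140.0, 4.0]))
# -> ["decrease expected_salary from 120.0 to 110.0",
#     "increase experience_word_count from 80.0 to 140.0"]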
“…Counterfactual explanation methods can be subcategorized into those that incorporate latent space disentanglement (such as DISCOVER) and those that do not. Counterfactual explanation methods without disentanglement (Samangouei et al 2018, Eckstein et al 2021, Narayanaswamy et al 2020, Nemirovsky et al 2020, Shih et al 2020, Joshi et al 2018) can concurrently alter multiple image properties, thus generating less intuitive counterfactual explanations.…”
Section: DISCOVER Was Designed To Overcome Limitations Of Alternative... (mentioning)
confidence: 99%
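
The contrast drawn here — methods with versus without latent disentanglement — can be pictured as two kinds of latent-space edits. In the schematic sketch below (the toy decoder and latent layout are assumptions), a disentangled method traverses a single latent axis tied to one image property, while an entangled perturbation spreads across all axes and can therefore alter several properties at once:

import torch
import torch.nn as nn

LATENT_DIM = 8
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                        nn.Linear(64, 28 * 28))  # toy image decoder

z = torch.randn(1, LATENT_DIM)  # latent code of some input image

# Disentangled-style edit: traverse ONE axis assumed to control one property.
z_disentangled = z.clone()
z_disentangled[0, 3] += 2.0

# Entangled-style edit: a dense perturbation across ALL axes, which can
# concurrently change several image properties together.
z_entangled = z + 0.5 * torch.randn_like(z)

counterfactual_disentangled = decoder(z_disentangled)
counterfactual_entangled = decoder(z_entangled)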
“…In recent years, a wide variety of approaches have been proposed that attempt to explain the predictions of black-box DL models [6], [20–24]. In the image domain, post-hoc attribution-based approaches [20–22] are the most popular; these generate feature importance maps to identify the areas in the image that were most important to the model's prediction.…”
Section: Introduction (mentioning)
confidence: 99%
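
As one concrete instance of the post-hoc attribution maps mentioned in this statement, a vanilla gradient saliency map takes the gradient of the predicted class score with respect to the input pixels and reads large magnitudes as high importance. A minimal sketch with a stand-in model and input:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
image = torch.randn(1, 1, 28, 28, requires_grad=True)

logits = model(image)
score = logits[0, logits.argmax()]  # score of the predicted class
score.backward()

# Feature-importance map: gradient magnitude per input pixel.
saliency = image.grad.abs().squeeze()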