We present a framework for quantifying and mitigating algorithmic bias in mechanisms designed for ranking individuals, typically used as part of web-scale search and recommendation systems. We first propose complementary measures to quantify bias with respect to protected attributes such as gender and age. We then present algorithms for computing fairness-aware re-ranking of results. For a given search or recommendation task, our algorithms seek to achieve a desired distribution of top ranked results with respect to one or more protected attributes. We show that such a framework can be tailored to achieve fairness criteria such as equality of opportunity and demographic parity depending on the choice of the desired distribution. We evaluate the proposed algorithms via extensive simulations over different parameter choices, and study the effect of fairness-aware ranking on both bias and utility measures. We finally present online A/B testing results from applying our framework to representative ranking in LinkedIn Talent Search, and discuss the lessons learned in practice. Our approach resulted in a substantial improvement in the fairness metrics (nearly a threefold increase in the number of search queries with representative results) without affecting the business metrics, which paved the way for deployment to 100% of LinkedIn Recruiter users worldwide. Ours is the first large-scale deployed framework for ensuring fairness in the hiring domain, with potential positive impact for more than 630M LinkedIn members.
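To make the re-ranking idea concrete, the sketch below shows one greedy strategy in the spirit of this abstract: at every top-k prefix it enforces a minimum representation of floor(p_g * k) for each protected-attribute value g with desired proportion p_g, and otherwise takes the highest-scored remaining candidate. The function name, tuple layout, and tie-breaking rule are illustrative assumptions, not the exact deployed algorithm.

```python
import math
from collections import defaultdict

def fairness_aware_rerank(candidates, target_dist):
    """Greedily re-rank so that every top-k prefix approximately matches
    target_dist: whenever some attribute value g falls below its minimum
    representation floor(p_g * k), its best remaining candidate is promoted;
    otherwise the highest-scored remaining candidate is taken.

    candidates: list of (id, score, group) tuples, sorted by score descending.
    target_dist: dict mapping group -> desired fraction in any top-k prefix.
    """
    queues = defaultdict(list)   # group -> its candidates, best first
    for cand in candidates:
        queues[cand[2]].append(cand)
    counts = defaultdict(int)    # group -> number already placed
    reranked = []
    for k in range(1, len(candidates) + 1):
        # groups that would violate their minimum-representation constraint
        below = [g for g, p in target_dist.items()
                 if queues.get(g) and counts[g] < math.floor(p * k)]
        if below:
            # fill the largest representation deficit first
            g = max(below, key=lambda g: math.floor(target_dist[g] * k) - counts[g])
        else:
            # no constraint binds: take the best-scored head across all groups
            g = max((g for g in queues if queues[g]), key=lambda g: queues[g][0][1])
        reranked.append(queues[g].pop(0))
        counts[g] += 1
    return reranked

# Example: a 50/50 desired distribution over two attribute values.
ranking = fairness_aware_rerank(
    [("a", 0.9, "M"), ("b", 0.8, "M"), ("c", 0.7, "F"),
     ("d", 0.6, "M"), ("e", 0.5, "F")],
    {"M": 0.5, "F": 0.5},
)
print([c[0] for c in ranking])   # ['a', 'c', 'b', 'e', 'd']
```

As the abstract notes, the choice of target_dist determines the fairness criterion: deriving it from the distribution of the qualified candidate pool gives an equality-of-opportunity flavor, while a fixed population-level distribution corresponds to demographic parity.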
Online professional social networks such as LinkedIn have enhanced the ability of job seekers to discover and assess career opportunities, and the ability of job providers to discover and assess potential candidates. For most job seekers, salary (or, more broadly, compensation) is a crucial consideration in choosing a new job. At the same time, job seekers face challenges in learning the compensation associated with different jobs, given the sensitive nature of compensation data and the dearth of reliable sources containing compensation data. Towards the goal of helping the world's professionals optimize their earning potential through salary transparency, we present LinkedIn Salary (https://www.linkedin.com/salary), a system for collecting compensation information from LinkedIn members and providing compensation insights to job seekers. We present the overall design and architecture, and describe the key components needed for the secure collection, de-identification, and processing of compensation data, focusing on the unique challenges associated with privacy and security. We perform an experimental study with more than one year of compensation submission history data collected from over 1.5 million LinkedIn members, thereby demonstrating the tradeoffs between privacy and modeling needs. We also highlight the lessons learned from the production deployment of this system at LinkedIn.
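The abstract does not spell out the de-identification mechanism, but a common building block for systems of this kind is cohort-level aggregation with a minimum-support threshold, so that insights are released only for groups large enough to hide any individual submission. The sketch below illustrates that general idea; the cohort keys, the threshold of 5, and the released statistics are assumptions for illustration, not LinkedIn's actual parameters.

```python
from collections import defaultdict
from statistics import median

MIN_COHORT_SIZE = 5  # illustrative k-anonymity-style threshold, assumed

def cohort_insights(submissions, keys=("title", "region")):
    """Group salary submissions into cohorts and release aggregates only
    for cohorts with at least MIN_COHORT_SIZE members, so that no single
    member's compensation can be read off the output (a sketch, not the
    deployed pipeline).

    submissions: list of dicts containing the cohort keys plus "base_salary".
    """
    cohorts = defaultdict(list)
    for s in submissions:
        cohorts[tuple(s[k] for k in keys)].append(s["base_salary"])
    insights = {}
    for cohort, salaries in cohorts.items():
        if len(salaries) >= MIN_COHORT_SIZE:
            insights[cohort] = {"n": len(salaries), "median": median(salaries)}
    return insights
```

The privacy/modeling tradeoff the abstract mentions shows up directly here: a larger threshold or coarser cohort keys protect privacy better but leave fewer cohorts with enough data to model.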
As companies adopt increasingly experimentation-driven cultures, it is crucial to develop methods for understanding any potential unintended consequences of those experiments. We might have specific questions about those consequences (did a change increase or decrease gender representation equality among content creators?); we might also wonder whether we have even asked the right questions yet (that is, we don't know what we don't know). Hence we address the problem of unintended consequences in experimentation from two perspectives: pre-specified vs. data-driven selection of dimensions of interest. For a specified dimension, we introduce a statistic to measure deviation from equal representation (the DER statistic), give its asymptotic distribution, and evaluate its finite-sample performance. We explain how to use this statistic to search across large-scale experimentation systems and alert us to any extreme unintended consequences on group representation. We complement this methodology by discussing a search for heterogeneous treatment effects along a set of dimensions using causal trees, modified slightly for practicalities in our ecosystem and used here as a way to dive deeper into experiments flagged by the DER statistic alerts. We introduce a method for simulating data that closely mimics observed data at LinkedIn, and evaluate the performance of the DER statistic in simulations. Finally, we give a case study from LinkedIn and show how these methodologies empowered us to discover surprising and important insights about group representation. Code for replication is available in an appendix.
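The paper gives the exact definition and asymptotic distribution of the DER statistic; as a hedged illustration of the underlying idea, the sketch below uses a standard two-proportion z-statistic to compare a group's representation in an outcome between treatment and control arms. Under the null of equal representation this statistic is asymptotically N(0, 1), so |z| can be thresholded to flag experiments for follow-up. The counts and the threshold are hypothetical, and the paper's actual DER statistic may differ.

```python
import math

def der_z_statistic(k_treat, n_treat, k_ctrl, n_ctrl):
    """Two-proportion z-statistic as a stand-in for the DER idea: compare a
    protected group's share of an outcome (e.g., active content creators)
    between treatment and control. Approximately N(0, 1) under the null of
    equal representation.

    k_treat / n_treat: group count vs. total with the outcome, treatment arm.
    k_ctrl  / n_ctrl : the same counts in the control arm.
    """
    p_treat, p_ctrl = k_treat / n_treat, k_ctrl / n_ctrl
    pooled = (k_treat + k_ctrl) / (n_treat + n_ctrl)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
    return (p_treat - p_ctrl) / se

# Example: flag an experiment whose group representation shifted by more
# than ~3 standard errors (hypothetical counts and threshold).
z = der_z_statistic(k_treat=4300, n_treat=10000, k_ctrl=4600, n_ctrl=10000)
alert = abs(z) > 3.0   # True here: z is roughly -4.3
```

When such a statistic is scanned across many experiments and many dimensions, as the abstract describes, the alert threshold must account for multiple testing to keep the false-alarm rate manageable.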