The language used in job advertisements contains explicit and implicit cues that signal employers’ preferences for candidates with certain ascribed characteristics, such as gender and ethnicity/race. To capture such biases in language use, existing word inventories have focused predominantly on gender, drawing on general perceptions of the ‘masculine’ or ‘feminine’ orientations of specific words and on socio-psychological understandings of ‘agentic’ and ‘communal’ traits. These approaches, however, are limited to gender and do not consider the specific contexts in which the language is used. To address these limitations, we have developed the first comprehensive word inventory for work and employment diversity, (in)equality, and inclusivity, building on a number of conceptual and methodological innovations. The BIAS Word Inventory was developed as part of our work in an international, interdisciplinary project – BIAS: Responsible AI for Labour Market Equality – in Canada and the United Kingdom (UK). Conceptually, we rely on a sociological approach attuned to documented causes and correlates of labour market inequalities related to gender, sexuality, ethnicity/race, and immigration and family statuses. Methodologically, we rely on ‘expert’ coding of actual job advertisements in Canada and the UK, together with iterative cycles of inter-rater verification. The resulting inventory is particularly suited to studying labour market inequalities: it reflects the language actually used in job postings, and it captures cues along multiple dimensions, including explicit and implicit cues associated with gender, ethnicity, and citizenship and immigration statuses; role specifications; equality, equity, and inclusivity policies and pledges; work-family policies; and workplace context.
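To make the multi-dimensional coding concrete, the following is a minimal sketch of how a category-coded word inventory could be applied to flag cues in a single advertisement. The dimension names and word lists are illustrative placeholders invented for this example, not the actual BIAS Word Inventory or its coding scheme.

```python
# Hypothetical sketch: flagging inventory terms in a job ad, per dimension.
# Dimension names and word lists are illustrative placeholders only.
import re
from collections import defaultdict

INVENTORY = {
    "gender_explicit": ["salesman", "waitress"],
    "gender_implicit": ["ambitious", "competitive", "considerate", "supportive"],
    "citizenship_immigration": ["native english speaker", "work permit"],
    "equality_pledges": ["equal opportunity employer", "inclusive workplace"],
    "work_family": ["parental leave", "flexible hours"],
}

def flag_cues(ad_text: str) -> dict:
    """Return the inventory terms matched in one advertisement, per dimension."""
    text = ad_text.lower()
    hits = defaultdict(list)
    for dimension, terms in INVENTORY.items():
        for term in terms:
            # Word-boundary matching so short terms do not fire inside longer words.
            if re.search(rf"\b{re.escape(term)}\b", text):
                hits[dimension].append(term)
    return dict(hits)

ad = "We seek an ambitious, competitive candidate. We are an equal opportunity employer."
print(flag_cues(ad))
# {'gender_implicit': ['ambitious', 'competitive'],
#  'equality_pledges': ['equal opportunity employer']}
```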
Despite progress toward gender equality in the labor market over the past few decades, gender segregation in labor force composition and in labor market outcomes persists. Evidence shows that job advertisements may express gender preferences, which can selectively attract potential candidates to a given post and thus reinforce gendered labor force composition and outcomes. Removing gender-explicit words from job advertisements does not fully solve the problem, because certain implicit traits are more closely associated with men, such as ambitiousness, while others are more closely associated with women, such as considerateness. However, it is not always possible to find neutral alternatives for these traits, making it hard to search for candidates with desired characteristics without introducing gender discrimination. Existing algorithms mainly focus on detecting the presence of gender bias in job advertisements without offering a solution for how the text should be (re)worded. To address this problem, we propose an algorithm that evaluates gender bias in the input text and provides guidance on how to debias it by offering alternative wording that stays close to the original input. Our proposed method promises broad application across the human resources process, from the drafting of job advertisements to algorithm-assisted screening of job applications.
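The abstract does not spell out the algorithm, so as a rough illustration of the detect-and-reword idea, the sketch below scores the gender coding of an advertisement against small hand-made word lists and substitutes neutral alternatives in place. Everything here is an assumption for illustration; the published method would presumably use richer inventories and similarity measures to keep rewordings close to the original text.

```python
# Hypothetical sketch of detect-and-reword debiasing. The word lists and
# neutral substitutions are illustrative placeholders, not the paper's method.
import re

MASCULINE = {"ambitious": "motivated", "competitive": "goal-oriented",
             "dominant": "influential"}
FEMININE = {"considerate": "attentive", "nurturing": "supportive",
            "loyal": "committed"}
NEUTRAL = {**MASCULINE, **FEMININE}

def debias(ad_text: str) -> tuple[float, str]:
    """Return a coding score in [-1, 1] (positive = masculine-leaning,
    negative = feminine-leaning) and a reworded version of the text."""
    words = re.findall(r"[a-z\-]+", ad_text.lower())
    m = sum(w in MASCULINE for w in words)
    f = sum(w in FEMININE for w in words)
    score = (m - f) / (m + f) if m + f else 0.0
    # Replace each gender-coded word with its neutral alternative in place.
    reworded = re.sub(r"[A-Za-z\-]+",
                      lambda t: NEUTRAL.get(t.group(0).lower(), t.group(0)),
                      ad_text)
    return score, reworded

score, text = debias("We want ambitious and competitive self-starters.")
print(score)  # 1.0 -> only masculine-coded terms were found
print(text)   # We want motivated and goal-oriented self-starters.
```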