Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) 2022
DOI: 10.18653/v1/2022.gebnlp-1.22

Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements

Abstract: The growing capability and availability of generative language models have enabled a wide range of new downstream tasks. Academic research has identified, quantified, and mitigated biases present in language models, but it is rarely tailored to downstream tasks, where the wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated …
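The abstract describes generating job advertisements zero-shot with GPT-3. Below is a minimal sketch of what such a zero-shot setup might look like; the prompt wording, model name, and decoding parameters are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: zero-shot generation of a job advertisement with a
# GPT-3-style completion model. Prompt text, model name, and decoding
# parameters are illustrative assumptions, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_job_ad(job_title: str) -> str:
    prompt = (
        f"Write a realistic job advertisement for a {job_title}. "
        "Describe the role, required qualifications, and benefits."
    )
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed stand-in for GPT-3
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    return response.choices[0].text.strip()


if __name__ == "__main__":
    print(generate_job_ad("carpenter"))
```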

Cited by 12 publications (13 citation statements)
References 24 publications
“…Fairness Techniques. Prior work in language [4,20,31], computer vision [17,51], and graphs [1,27,40,54] has primarily focused on debiasing models trained on unimodal data and is limited in scope, as it only investigates gender bias, racial bias, or their intersections. In particular, their bias mitigation techniques can be broadly categorized into i) pre-processing, which modifies individual input features and labels [6], modifies the weights of the training samples [24], or obfuscates protected attribute information during the training process [59]; ii) in-processing, which uses adversarial techniques to maximize accuracy and reduce bias for a given protected attribute [61], data augmentation [1], or adds a bias-aware regularization term to the training objectives [26]; and iii) post-processing, which changes the output predictions from predictive models to make them fairer [21,25,44].…”
Section: Related Work (mentioning)
confidence: 99%
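The excerpt above groups bias mitigation into pre-, in-, and post-processing. As one concrete illustration of the pre-processing idea of "modifying the weights of the training samples", the sketch below reweighs each (protected group, label) cell toward statistical independence; this shows the general reweighing idea, not necessarily the exact method of the cited reference [24].

```python
# Sketch of the "reweigh the training samples" pre-processing idea: each
# (protected group, label) cell gets weight expected/observed, so that the
# protected attribute and the label look independent after reweighing.
# Illustrative only; not necessarily the cited paper's exact method.
from collections import Counter


def reweigh(groups, labels):
    n = len(labels)
    p_group = Counter(groups)               # counts per protected group
    p_label = Counter(labels)               # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) cell
    weights = []
    for g, y in zip(groups, labels):
        expected = (p_group[g] / n) * (p_label[y] / n)  # if independent
        observed = p_joint[(g, y)] / n                  # actually observed
        weights.append(expected / observed)
    return weights


# Toy example: women are under-represented among positive labels, so
# (F, 1) samples get weight > 1 and (M, 1) samples get weight < 1.
groups = ["M", "M", "M", "F", "F", "F"]
labels = [1, 1, 0, 1, 0, 0]
print([round(w, 2) for w in reweigh(groups, labels)])
```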
“…VLMs are trained on large amounts of data with the aim of matching image and text representations for image-caption pairs, so as to capture diverse visual and linguistic concepts. However, VLMs exhibit societal biases, manifesting as a skew in the similarity between their representations of certain textual concepts and certain kinds of images [2,4,31]. These biases arise from underlying imbalances in the training data [2,3] and flawed training practices [55].…”
Section: Introduction (mentioning)
confidence: 99%
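One simple way to make the "skew in similarity" described above concrete is to compare a text concept's mean cosine similarity against image embeddings from two demographic groups. The sketch below uses random placeholder embeddings; the metric and grouping are illustrative assumptions, not the specific measures used in the cited works.

```python
# Illustrative sketch of a "similarity skew" measurement: compare how strongly
# a CLIP-style text embedding (e.g. "a photo of a doctor") matches image
# embeddings from two demographic groups. The embeddings here are random
# placeholders; in practice they would come from a VLM encoder.
import numpy as np

rng = np.random.default_rng(0)


def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))


def similarity_skew(text_emb, group_a_imgs, group_b_imgs):
    """Difference in mean text-image similarity between two image groups."""
    sim_a = np.mean([cosine(text_emb, img) for img in group_a_imgs])
    sim_b = np.mean([cosine(text_emb, img) for img in group_b_imgs])
    return sim_a - sim_b  # 0 would mean no skew for this concept


# Placeholder 512-d embeddings standing in for real encoder outputs.
text_emb = rng.normal(size=512)
group_a = rng.normal(size=(100, 512))
group_b = rng.normal(size=(100, 512))
print(round(similarity_skew(text_emb, group_a, group_b), 4))
```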
“…It could arguably also be applied to reword job requirements or specific language used in job ads that could discourage individuals from various groups from applying. There is also preliminary work in computer science showing that conversational AI tools like ChatGPT can be used to reduce biased language in job ads (Borchers et al., 2022). Taken together, examining the effects of technology-based support to make job ads more attractive to diverse applicants represents a promising avenue for future research.…”
Section: Future Research and Conclusion (mentioning)
confidence: 99%
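The passage above suggests that conversational models can reword biased language in job ads. A minimal sketch of such a rewrite step is shown below, assuming the openai Python client; the prompt and model name are illustrative and not the procedure of Borchers et al. (2022).

```python
# Sketch of rewording a job ad with a chat model to reduce gendered or
# exclusionary language, as the quoted passage suggests. The prompt and
# model name are assumptions, not the cited paper's actual procedure.
from openai import OpenAI

client = OpenAI()


def debias_job_ad(ad_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You rewrite job advertisements so they remain "
                        "accurate and appealing but contain no gendered, "
                        "ageist, or otherwise exclusionary language."},
            {"role": "user", "content": ad_text},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content


print(debias_job_ad("Looking for a handsome carpenter! Young, strong guys preferred."))
```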
“…Park et al. (2021): Comparing affect in multilingual Wikipedia pages about LGBT people.
Lucy and Bamman (2021): Analyzing gender biases in GPT-3-generated stories.
Gong et al. (2022): Quantifying gender biases and power differentials in Japanese light novels.
Saxena et al. (2022): Examining latent power structures in child welfare case notes.
Borchers et al. (2022): Measuring biases in job advertisements and mitigating them with GPT-3.
Stahl et al. (2022): Joint power-and-agency rewriting to debias sentences.
Wiegand et al. (2022): Identifying implied prejudice and social biases about minority groups.
Giorgi et al. (2023): Examining the portrayal of narrators in moral and social dilemmas.
(2016), were the first to model the connotations of verb predicates with respect to an AGENT and THEME's value, sentiment, and effects (henceforth, sentiment connotation frames).…”
Section: Introduction (mentioning)
confidence: 99%