Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.155

WIKIBIAS: Detecting Multi-Span Subjective Biases in Language

Abstract: Biases continue to be prevalent in modern text and media, especially subjective bias, a special type of bias that introduces improper attitudes or presents a statement with the presupposition of truth. To tackle the problem of detecting and further mitigating subjective bias, we introduce WIKIBIAS, a manually annotated parallel corpus with more than 4,000 sentence pairs from Wikipedia edits. This corpus contains annotations for both sentence-level bias types and token-level biased segments. We present system…
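The corpus pairs each biased sentence with its neutralized rewrite and annotates biased segments down to the token level. The paper's own annotation pipeline is not shown on this page; as a rough illustration only, the sketch below (plain Python, with an invented example pair and a hypothetical `bio_labels` helper) shows one common way such parallel edits can be projected onto token-level span labels via a word diff.

```python
# A minimal sketch, NOT the authors' pipeline: one common way a parallel
# corpus of (biased, neutral) sentence pairs can be converted into
# token-level "biased segment" labels, using a word-level diff. The
# example pair below is invented for illustration.
import difflib

def bio_labels(biased: str, neutral: str) -> list:
    """Tag each token of the biased sentence: B/I = inside an edited
    (presumed biased) span, O = kept verbatim in the neutral rewrite."""
    src, tgt = biased.split(), neutral.split()
    labels = ["O"] * len(src)
    for op, i1, i2, _, _ in difflib.SequenceMatcher(a=src, b=tgt).get_opcodes():
        if op in ("replace", "delete"):  # tokens removed or rewritten
            for k in range(i1, i2):
                labels[k] = "B" if k == i1 else "I"
    return list(zip(src, labels))

pair = ("the senator bravely defended the controversial bill",
        "the senator defended the bill")
for token, label in bio_labels(*pair):
    print(f"{token}\t{label}")
```

Note the example yields two separate labeled spans ("bravely", "controversial"), which is the multi-span setting the title refers to.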

Cited by 3 publications (9 citation statements)
References 47 publications (56 reference statements)
“…Encouragingly, previous research has begun to make progress in identifying prejudiced or emotive language used in factual statements (Recasens et al., 2013; Bhosale et al., 2013; Misra and Basak, 2016; Hube and Fetahu, 2018; Zhong et al., 2021; Madanagopal and Caverlee, 2022), and a few studies have begun to investigate subjective bias correction (Pryzant et al., 2019; Liu et al., 2021; Zhong et al., 2021). These approaches, however, typically face a number of key challenges: Noisy and Limited Training Data.…”
Section: Examples of Biased and Neutral Statements
Mentioning confidence: 99%
“…Training-Testing Mismatch. Second, existing approaches are primarily trained by maximizing the likelihood of each token of the target sequence (Pryzant et al., 2019; Zhong et al., 2021), conditioning on the input sequence and the previous ground-truth tokens, but testing is done on the entire input and output sequence. Training is thus conducted with token-level objective functions, while testing uses sentence-level evaluation metrics, such as BLEU.…”
Section: Examples of Biased and Neutral Statements
Mentioning confidence: 99%
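The mismatch this quote describes (token-level teacher-forced training versus sentence-level metrics such as BLEU) can be made concrete with toy numbers. The sketch below is a generic illustration, not code from any cited paper; `token_nll` and `unigram_precision` are hypothetical stand-ins, and the unigram overlap is only a crude proxy for BLEU.

```python
import math

def token_nll(gold_token_probs):
    """Training-time objective: mean negative log-likelihood of each gold
    token, conditioned on the GOLD prefix (teacher forcing)."""
    return -sum(math.log(p) for p in gold_token_probs) / len(gold_token_probs)

def unigram_precision(hypothesis, reference):
    """Crude stand-in for a sentence-level metric like BLEU (unigrams only)."""
    hyp, ref = hypothesis.split(), reference.split()
    return sum(1 for tok in hyp if tok in ref) / len(hyp)

# Under teacher forcing the per-token loss can look healthy...
print(token_nll([0.9, 0.85, 0.9, 0.8]))  # ~0.15, a low loss

# ...yet at test time the model decodes on its OWN prefix, so one early
# error can derail the rest of the output, and the sentence-level metric
# scores the whole sequence at once:
print(unigram_precision("the bill was widely praised", "the bill was criticized"))  # 0.6
```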
“…In-context finetuning. A different class of studies that considers task evaluation in the prompting setup are those that finetune a pretrained model with prompts from one set of tasks and then evaluate it on another set of tasks (e.g., Zhong et al., 2021; Sanh et al., 2022; Wei et al., 2022). Parallel to the term 'in-context learning', this scenario is often referred to as in-context finetuning.…”
Section: Generalisation Across Tasks
Mentioning confidence: 99%
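As a rough sketch of the protocol this quote describes: build a prompted training mixture from one set of tasks, then evaluate, without further updates, on prompts from a held-out task. The task names, examples, and the `to_prompt` template below are invented placeholders, and the actual finetuning call is elided since it is model-specific.

```python
# Invented placeholder tasks; not data from any cited paper.
train_tasks = {
    "sentiment": [("great movie", "positive"), ("dull plot", "negative")],
    "topic": [("stocks fell sharply today", "finance")],
}
held_out_task = ("nli", [("A man sleeps. | A person rests.", "entailment")])

def to_prompt(task: str, text: str) -> str:
    """Hypothetical prompt template shared by training and evaluation."""
    return f"Task: {task}\nInput: {text}\nAnswer:"

# Step 1: a prompted finetuning mixture drawn ONLY from the training tasks.
finetune_data = [(to_prompt(name, x), y)
                 for name, pairs in train_tasks.items() for x, y in pairs]

# Step 2: model.finetune(finetune_data) would run here (model-specific).

# Step 3: evaluate on prompts from the UNSEEN task, with no gradient
# updates; generalisation across tasks is what this setup measures.
task_name, eval_pairs = held_out_task
for x, gold in eval_pairs:
    print(to_prompt(task_name, x), "| gold:", gold)
```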