Companion Proceedings of the Web Conference 2020
DOI: 10.1145/3366424.3382704
Towards Detection of Subjective Bias using Contextualized Word Embeddings

Citation Types: 0 supporting, 17 mentioning, 0 contrasting
Years Published: 2020, 2024
Cited by 16 publications (17 citation statements). References 1 publication.
“…Most prior work on bias detection (Hube and Fetahu, 2018, 2019; Pant et al., 2020) focuses on predicting the presence of subjective bias in a sentence. We follow their setup.…”
Section: Binary Classification (citation type: mentioning)
confidence: 99%
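The setup this statement describes is sentence-level binary classification of subjective bias. A minimal sketch of that setup in Python, assuming the Hugging Face transformers API; the checkpoint name, label convention, and example sentence are illustrative assumptions, not the cited authors' configuration, and the classification head would still need fine-tuning on labeled data before its predictions are meaningful.

```python
# Minimal sketch: binary subjective-bias classification of one sentence.
# Assumptions: "bert-base-uncased" as the encoder and the label convention
# 0 = neutral, 1 = subjective; neither comes from the cited papers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # binary head: 0 = neutral, 1 = subjective
)
model.eval()

sentence = "The senator's ill-conceived plan predictably failed."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 1 would flag subjective bias
```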
“…tanvidadu (Dadu and Pant, 2020): fine-tuned a RoBERTa-large model (355 million parameters, vocabulary of over 50K tokens) on the response and its two immediate contexts. They reported results on three different input types: a response-only model, the concatenation of the two immediate contexts with the response, and an explicit separator token between the response and the final context.…”
Section: kalaivani.a (Kalaivani A. and D., 2020) (citation type: mentioning)
confidence: 99%
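The three input formats this statement describes (response only, contexts concatenated with the response, and an explicit separator between the final context and the response) map naturally onto a tokenizer's single-sequence and sentence-pair encodings. A sketch under that assumption; the conversation snippets are invented placeholders and this is not the team's released code.

```python
# Sketch of the three input formats described above for a RoBERTa model.
# The conversation texts are illustrative placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

context_1 = "I can't believe they shipped that feature."
context_2 = "It broke half the dashboard overnight."
response = "Oh sure, a flawless launch as always."

# 1) Response-only input.
response_only = tokenizer(response, return_tensors="pt")

# 2) Two immediate contexts concatenated with the response, one sequence.
concatenated = tokenizer(
    " ".join([context_1, context_2, response]), return_tensors="pt"
)

# 3) Explicit separator between context and response: encoding the pair
#    makes the tokenizer insert RoBERTa's </s></s> separator tokens.
with_separator = tokenizer(
    " ".join([context_1, context_2]), response, return_tensors="pt"
)
```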
“…Recently released models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2020) exploit pre-training and bidirectional transformers to deliver efficient solutions with state-of-the-art performance. Pre-trained embeddings significantly outperform the previous state of the art on similar problems such as humor detection (Weller and Seppi, 2019) and subjectivity detection (Pant et al., 2020).…”
Section: Introduction (citation type: mentioning)
confidence: 97%
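The claim here is that pre-trained contextualized embeddings transfer well to subjectivity detection. One way to sketch that, assuming a frozen BERT encoder with a lightweight classifier on top; the toy sentences and labels are illustrative, not data from any cited paper.

```python
# Sketch: frozen pre-trained embeddings as features for subjectivity
# detection. Toy sentences and labels are illustrative placeholders.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(sentences):
    """Return one [CLS] vector per sentence from the frozen encoder."""
    batch = tokenizer(
        sentences, padding=True, truncation=True, return_tensors="pt"
    )
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (batch, seq, 768)
    return hidden[:, 0, :].numpy()  # one [CLS] embedding per sentence

train_sentences = [
    "An unmitigated disaster of a policy.",   # subjective
    "The policy was enacted in March 2019.",  # neutral
]
train_labels = [1, 0]

clf = LogisticRegression().fit(embed(train_sentences), train_labels)
print(clf.predict(embed(["A truly shameful decision."])))
```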