2019
DOI: 10.48550/arxiv.1903.04561
Preprint

Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification

Cited by 6 publications (11 citation statements). References 0 publications.
“…The toxic class label contains several subtypes of toxic comments, such as identity attacks, insults, explicit sexuality, obscenity, and threats. An LSTM model applied to the Civil Comments dataset [6] has been used. The LSTM model is composed of a 300-dimensional embedding layer, two bidirectional LSTM layers (256 units per direction), and finally a dense layer with 128 hidden units.…”
Section: Use Cases (mentioning)
confidence: 99%
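The quoted description is concrete enough to sketch. Below is a minimal, hypothetical Keras reconstruction of the described classifier; the vocabulary size, sequence length, number of subtype outputs, and output activation are assumptions, since the citation statement does not specify them.

```python
from tensorflow.keras import layers, models

# Hypothetical hyperparameters: the quoted description does not specify them.
VOCAB_SIZE = 50_000   # assumed vocabulary size
MAX_LEN = 200         # assumed maximum comment length in tokens
NUM_SUBTYPES = 6      # assumed one output per toxicity subtype

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    # 300-dimensional embedding layer, as in the citation statement
    layers.Embedding(VOCAB_SIZE, 300),
    # two bidirectional LSTM layers, 256 units per direction
    layers.Bidirectional(layers.LSTM(256, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(256)),
    # dense layer with 128 hidden units
    layers.Dense(128, activation="relu"),
    # multi-label head over the toxicity subtypes (an assumption; the
    # source does not describe the output layer)
    layers.Dense(NUM_SUBTYPES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```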
“…In this section, we evaluate the ability of T-EBAnO to adapt to different architectures and different tasks (use cases [3][4][5][6][7][8]). For this purpose, we defined the following additional tasks.…”
Section: Framework Extendibility (mentioning)
confidence: 99%
“…The first task is binary toxic comment classification: predicting whether the input comment is clean or toxic, i.e., whether it contains inappropriate content such as obscenity, threats, insults, identity attacks, or explicit sexual content. An LSTM model applied to the Civil Comments dataset [33] has been used. The toxic class label contains several subtypes of toxic comments, such as identity attacks, insults, explicit sexuality, and threats.…”
Section: Use Cases (mentioning)
confidence: 99%
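For the binary variant described in this statement, the same hypothetical architecture can be given a single sigmoid output unit; as before, the vocabulary size and sequence length are assumptions, not values from the source.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE = 50_000   # assumed, as above
MAX_LEN = 200         # assumed, as above

binary_model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, 300),
    layers.Bidirectional(layers.LSTM(256, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(256)),
    layers.Dense(128, activation="relu"),
    # single output unit: probability that the comment is toxic
    layers.Dense(1, activation="sigmoid"),
])
binary_model.compile(optimizer="adam", loss="binary_crossentropy",
                     metrics=["accuracy"])
```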