Toxic Comment Detection in Online Discussions (2020)
DOI: 10.1007/978-981-15-1216-2_4

Cited by 41 publications (18 citation statements). References 42 publications.
“…Two independent study team members coded the comments to the original posts, in terms of 7 coding categories: (1) tone of the comment (positive, negative, or neutral), (2) nature of contribution (toxic, healthy, or unclear/not applicable), (3) agreement with prevention message (agree, disagree, or seek clarification or advice), (4) mentions of government agency (yes or no), (5) policy/regulation (proregulation, antiregulation, neutral-regulation, or not applicable), (6) promotion/spam (yes or no), and (7) format (text only, meme/sticker/emoji/emoticon only, or both). Toxic contributions were defined as “a rude, disrespectful, or unreasonable comment that is likely to make other users leave a discussion” [ 27 ], whereas healthy contributions were defined as those using non-toxic language, those that were unclear or for which the classification was not applicable, or those using vague terms or emojis/stickers/emoticons, for which toxicity could not be determined.…”
Section: Methods (mentioning)
confidence: 99%
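For readers who want to operationalize the seven-category coding scheme quoted above, the sketch below expresses it as a simple data structure with a validation helper. The dictionary keys, label sets, and the `validate_coding` function are hypothetical names chosen for illustration; they are not part of the cited study.

```python
# Hypothetical encoding of the seven-category comment coding scheme quoted above.
CODING_SCHEME = {
    "tone": {"positive", "negative", "neutral"},
    "nature_of_contribution": {"toxic", "healthy", "unclear/not applicable"},
    "agreement_with_prevention_message": {"agree", "disagree", "seek clarification or advice"},
    "mentions_government_agency": {"yes", "no"},
    "policy_regulation": {"proregulation", "antiregulation", "neutral-regulation", "not applicable"},
    "promotion_spam": {"yes", "no"},
    "format": {"text only", "meme/sticker/emoji/emoticon only", "both"},
}

def validate_coding(coding: dict) -> list:
    """Return a list of problems for one coder's annotation of a single comment."""
    problems = []
    for category, allowed in CODING_SCHEME.items():
        value = coding.get(category)
        if value not in allowed:
            problems.append(f"{category}: {value!r} not in {sorted(allowed)}")
    return problems

# Example annotation of one comment by one coder (made-up values).
example = {
    "tone": "negative",
    "nature_of_contribution": "toxic",
    "agreement_with_prevention_message": "disagree",
    "mentions_government_agency": "no",
    "policy_regulation": "not applicable",
    "promotion_spam": "no",
    "format": "text only",
}
print(validate_coding(example))  # [] -> annotation uses only allowed labels
```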
“…Word2Vec (Mikolov et al., 2013) and GloVe fail to find a good representation of these words because they never occurred at training time. Such words are out-of-vocabulary (OOV) (Risch and Krestel, 2020). However, we can take advantage of these failing representations: toxic tokens that are not in the vocabulary can be represented by setting their embeddings to zero.…”
Section: Multi-embedding Layer (mentioning)
confidence: 99%
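As a rough illustration of the zero-vector treatment of out-of-vocabulary tokens described in the statement above, the following sketch builds an embedding matrix in which any token lacking a pretrained vector keeps an all-zero row. The `pretrained` dictionary, the 50-dimensional toy vectors, and `build_embedding_matrix` are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

# Hypothetical pretrained vectors (in practice loaded from Word2Vec/GloVe files);
# the tokens, dictionary, and 50-dimensional size here are illustrative only.
dim = 50
rng = np.random.default_rng(0)
pretrained = {"comment": rng.random(dim), "rude": rng.random(dim)}

def build_embedding_matrix(vocab, pretrained, dim):
    """Map each vocabulary token to its pretrained vector; tokens with no
    pretrained vector (out-of-vocabulary) keep an all-zero row, so the model
    can treat the missing representation itself as a signal."""
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    for idx, token in enumerate(vocab):
        vec = pretrained.get(token)
        if vec is not None:
            matrix[idx] = vec
        # else: the row stays zero, marking the token as OOV
    return matrix

vocab = ["comment", "rude", "xXtr0ll-sp3akXx"]  # last token is deliberately OOV
embeddings = build_embedding_matrix(vocab, pretrained, dim)
print(embeddings[2])  # all zeros -> flagged as out-of-vocabulary
```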
“…In comments on social media, one often encounters groups of malicious users who obstruct respectful discussion with toxic comments, many of which come from adolescents. A toxic comment is defined as a comment that is rude, disrespectful, unreasonable, or even humiliating toward someone on social media, and that tends to make other users uncomfortable (Risch & Krestel, 2020). Such comments can be detected with a machine learning approach, namely sentiment analysis, which is essential for filtering comments on social media.…”
Section: Introduction (unclassified)