Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, 2019
DOI: 10.18653/v1/d19-5022
Sentence-Level Propaganda Detection in News Articles with Transfer Learning and BERT-BiLSTM-Capsule Model

Abstract: In recent years, the need for communication through online social media has increased. Propaganda is a mechanism that has been used throughout history to influence public opinion, and it is gaining a new dimension with the rising popularity of online social media. This paper presents our submission to the NLP4IF-2019 Shared Task SLC: sentence-level propaganda detection in news articles. The challenge of this task is to build a robust binary classifier able to label each sentence as propaganda or non-propaganda. …
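The abstract names the model's components but not how they are wired together. The sketch below is one plausible assembly, assuming BERT token embeddings feed a BiLSTM whose per-token outputs are routed into two output capsules by dynamic routing (Sabour et al., 2017); the layer sizes, number of routing iterations, and the use of capsule lengths as class scores are illustrative assumptions, not the authors' reported configuration.

```python
# A minimal sketch of a BERT-BiLSTM-Capsule binary classifier.
# Requires: torch, transformers. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

def squash(x, dim=-1, eps=1e-8):
    # Capsule squashing non-linearity from Sabour et al. (2017).
    norm2 = (x * x).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * x / torch.sqrt(norm2 + eps)

class BertBiLstmCapsule(nn.Module):
    def __init__(self, hidden=128, num_capsules=2, capsule_dim=16, routing_iters=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bilstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        # Maps each BiLSTM step to one prediction vector per output capsule.
        self.proj = nn.Linear(2 * hidden, num_capsules * capsule_dim)
        self.num_capsules, self.capsule_dim = num_capsules, capsule_dim
        self.routing_iters = routing_iters

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.bilstm(h)                                # (B, T, 2*hidden)
        B, T, _ = h.shape
        u = self.proj(h).view(B, T, self.num_capsules, self.capsule_dim)
        b = torch.zeros(B, T, self.num_capsules, device=h.device)
        for _ in range(self.routing_iters):                  # dynamic routing
            c = F.softmax(b, dim=-1)                         # coupling coefficients
            v = squash((c.unsqueeze(-1) * u).sum(dim=1))     # (B, capsules, dim)
            b = b + (u * v.unsqueeze(1)).sum(dim=-1)         # agreement update
        # Capsule lengths act as class scores: propaganda vs. non-propaganda.
        return v.norm(dim=-1)                                # (B, 2)
```

Training such a model would typically minimise a margin or cross-entropy loss over the two capsule lengths; the paper's actual loss and hyperparameters are not given in the excerpt above.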

Cited by 36 publications (19 citation statements)
References 15 publications
“…Almost all teams used some Transformer-based models (especially BERT (Devlin et al., 2018)) either to get embeddings or as a pretrained model (Yoosuf and Yang, 2019; Hou and Chen, 2019). Other teams often used ensembles with different features and models inside: LSTM-CRF (Gupta et al., 2019), XGBoost (Tayyar Madabushi et al., 2019), BiLSTM (Vlad et al., 2019).…”
[Figure 1 of the citing paper: Class distribution in the train data, where A is "Loaded Language", B is "Name Calling or Labeling", C is "Repetition", D is "Doubt", E is "Exaggeration or Minimisation", and F represents all the remaining 9 classes.]
Section: Approach
confidence: 99%
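The quote above mentions ensembles but leaves the combination rule open. A minimal majority-vote sketch follows; the three base models are stand-ins for systems like those cited (any classifier producing 0/1 sentence labels would do), not the cited teams' actual systems.

```python
# Majority voting over per-sentence binary predictions from several models.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_sentences) array of 0/1 labels."""
    votes = np.asarray(predictions).sum(axis=0)
    # A sentence is labelled propaganda if at least half the models say so.
    return (votes * 2 >= len(predictions)).astype(int)

# Example: three hypothetical models disagree on four sentences.
preds = [[1, 0, 1, 0],   # e.g. a BiLSTM
         [1, 1, 0, 0],   # e.g. XGBoost over hand-crafted features
         [1, 0, 0, 1]]   # e.g. a fine-tuned BERT
print(majority_vote(preds))  # -> [1 0 0 0]
```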
“…Team Mindcoders (Vlad et al., 2019) combined BERT, Bi-LSTM and Capsule networks (Sabour et al., 2017) into a single deep neural network and pre-trained the resulting network on corpora used for related tasks, e.g., emotion classification.…”
Section: Teams Participating in the Sentence-Level Classification Only
confidence: 99%
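The two-stage transfer this quote describes (pre-train on a related task such as emotion classification, then fine-tune on propaganda labels) can be sketched as follows; the encoder, the six-class emotion setup, and the head sizes are illustrative assumptions, not the team's reported pipeline.

```python
# Two-stage transfer: auxiliary-task pre-training, then task-head swap.
import copy
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Stand-in sentence encoder (e.g. the BERT-BiLSTM stack sketched earlier)."""
    def __init__(self, in_dim=768, out_dim=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
    def forward(self, x):
        return self.body(x)

encoder = SentenceEncoder()
emotion_head = nn.Linear(256, 6)      # stage 1: six emotion classes (assumed)
propaganda_head = nn.Linear(256, 2)   # stage 2: propaganda / non-propaganda

# Stage 1: optimise encoder + emotion head on the auxiliary emotion corpus.
stage1 = nn.Sequential(encoder, emotion_head)
# ... train stage1 here ...

# Stage 2: reuse the pre-trained encoder weights under a fresh task head and
# fine-tune on the SLC propaganda data (optionally freezing the encoder).
stage2 = nn.Sequential(copy.deepcopy(encoder), propaganda_head)
# ... train stage2 here ...
```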
“…For example, [30] describes the detection of government propaganda, [15] focuses on analyzing a title's compliance with the contents of the text, and [27] trains on relatively small data to distinguish between fake news and satire. Attempts to combat fake news using machine learning and artificial intelligence techniques were also undertaken in other works, using methods based on deep neural networks, on recurrent neural networks (RNNs), or on traditional learning algorithms such as random forest (RF), logistic regression (LR), naive Bayes (NB), multilayer perceptron (MLP), or support vector machine (SVM).…”
Section: Introduction
confidence: 99%
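A minimal sketch of the traditional baselines this passage lists, over TF-IDF features in scikit-learn; the pipeline and toy data are illustrative, not the setup of any cited work.

```python
# Classical text-classification baselines: RF, LR, NB, MLP, SVM on TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

baselines = {
    "RF":  RandomForestClassifier(n_estimators=200),
    "LR":  LogisticRegression(max_iter=1000),
    "NB":  MultinomialNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    "SVM": LinearSVC(),
}

# Toy sentences with hypothetical propaganda (1) / non-propaganda (0) labels.
texts = ["they will destroy everything we hold dear",
         "the committee met on tuesday afternoon",
         "only a traitor could oppose this glorious plan",
         "the report was published last week"]
labels = [1, 0, 1, 0]

for name, clf in baselines.items():
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(texts, labels)
    print(name, model.predict(texts))
```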