2022
DOI: 10.1016/j.knosys.2022.108605
Clickbait detection on WeChat: A deep model integrating semantic and syntactic information

Cited by 10 publications (9 citation statements) · References 17 publications
“…The output layer consists of two nodes (0 and 1) with softmax activation units and 0.00001 L1 regularisation. Moreover, four different settings of BERT [15] were implemented to model clickbait classification from online news. The hyper-parameters of all BERT models are: batch size = 32, maximum epochs = 10, and an initial learning rate with early stopping and learning-rate reduction applied.…”
Section: Proposed Methods
confidence: 99%
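To make the quoted configuration concrete, here is a minimal Keras sketch, not the cited authors' code: the encoder callable, the 128-token input length, the Adam optimizer, the patience values, and the data arrays are assumed placeholders; only the two-node softmax output, the 1e-5 L1 regularisation, batch size 32, the 10-epoch cap, early stopping, and learning-rate reduction come from the statement above.

import tensorflow as tf
from tensorflow.keras import layers, regularizers, callbacks

def build_classifier(encoder):
    # `encoder` is a hypothetical callable mapping token ids to a pooled
    # [batch, hidden] vector (e.g. a BERT [CLS] representation).
    token_ids = layers.Input(shape=(128,), dtype=tf.int32, name="token_ids")
    pooled = encoder(token_ids)
    outputs = layers.Dense(
        2,                                           # two output nodes: 0 and 1
        activation="softmax",
        kernel_regularizer=regularizers.l1(1e-5),    # 0.00001 L1 regularisation
    )(pooled)
    return tf.keras.Model(token_ids, outputs)

def train(model, x_train, y_train, x_val, y_val):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(
        x_train, y_train,
        validation_data=(x_val, y_val),
        batch_size=32,                               # batch size = 32
        epochs=10,                                   # maximum epochs = 10
        callbacks=[
            callbacks.EarlyStopping(patience=2, restore_best_weights=True),
            callbacks.ReduceLROnPlateau(factor=0.5, patience=1),
        ],
    )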
“…Thakur has carried out further research on clickbait detection using deep learning [8], [9], which suggests that the recurrent CNN overcomes the heavy feature engineering in clickbait detection. This method has been tested, and its accuracy turns out to be better than that of other clickbait-detection algorithms such as LSTM [10]-[12], CNN [8], [13], [14], BERT [15], [16], or conventional machine learning algorithms [17]-[21].…”
Section: Recent Work
confidence: 99%
“…Nevertheless, GNNs have also demonstrated success in modeling textual content based on syntactic dependency graphs. These approaches typically use GNN-based methods, such as GCN [12,35] and GAT [7,40], to encode the syntax graph predicted by off-the-shelf dependency parsers and then generate textual graph embeddings tailored to specific tasks; more recent research focuses on synergizing semantic and syntactic components so that syntax complements semantic information [17,18,34,46]. However, GNN-based approaches face limitations.…”
Section: Graph Neural Network
confidence: 99%
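The syntax-encoding pattern mentioned in this quote can be sketched briefly. Below is a minimal, assumption-laden PyTorch example (not any cited paper's implementation): a single GCN layer that averages each word's neighbours over a dependency-parse adjacency matrix supplied by an off-the-shelf parser. The dimensions, the adjacency construction, and any later fusion with semantic features are illustrative placeholders.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   [batch, num_words, in_dim] word embeddings (semantic side)
        # adj: [batch, num_words, num_words] dependency-arc matrix (syntactic side)
        adj = adj + torch.eye(adj.size(-1), device=adj.device)  # add self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)      # node degrees
        h = torch.bmm(adj / deg, x)                             # average over neighbours
        return torch.relu(self.linear(h))                       # syntax-aware embeddings

# Hypothetical usage: word embeddings from a text encoder plus an adjacency
# matrix built from a dependency parse; the output can then be fused with the
# semantic representation for the downstream task.
words = torch.randn(2, 20, 300)    # 2 sentences, 20 words, 300-d embeddings
arcs = torch.zeros(2, 20, 20)      # set arcs[i, head, dependent] = 1 from a parser
syntax_aware = GCNLayer(300, 128)(words, arcs)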
“…Attention-based methods [20,26,36,41,51], which are particularly popular, use the attention mechanism [39] to capture relations within or between texts from a global perspective. GNN-based methods focus on constructing textual graphs within documents [38] or on the syntactic dependency relations between words [18,34]. Additionally, methods using external factual verification [15,48,53] contribute to enhanced detection performance.…”
Section: Related Work 2.1 Fake News Detection
confidence: 99%
“…Therefore, tools that detect clickbait automatically have been developed. Detecting clickbait is essential to maintaining the media's credibility (Liu et al., 2022). Clickbait detection has thus become an emerging research field.…”
Section: Introduction
confidence: 99%