2022
DOI: 10.1155/2022/4840997
Rumor Detection with Bidirectional Graph Attention Networks

Abstract: To extract the relevant features of rumors effectively, this paper proposes a novel rumor detection model based on a bidirectional graph attention network over a constructed directed graph, named P-BiGAT. First, the model builds the propagation tree and the diffusion tree from the tweet comment and reposting relationships. Second, an improved graph attention network (GAT) extracts the propagation feature and the diffusion feature along the two different directions, and the multihead …
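As a rough illustration of the bidirectional idea in the abstract — the same tree read in two directions, with the two resulting feature vectors concatenated — the following sketch uses plain mean aggregation as a stand-in for the paper's improved GAT layers. All function names and the aggregation rule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def aggregate(x, edges, n, bottom_up=False):
    """One mean-aggregation pass over a directed tree (schematic
    stand-in for the paper's improved GAT layer).
    edges: (parent, child) pairs from the comment/repost relations."""
    adj = np.eye(n)                        # self-loop: every node keeps its own feature
    for p, c in edges:
        if bottom_up:                      # propagation tree: parent gathers from children
            adj[p, c] = 1.0
        else:                              # diffusion tree: child gathers from its parent
            adj[c, p] = 1.0
    adj /= adj.sum(axis=1, keepdims=True)  # row-normalise -> mean over neighbours
    return adj @ x

def bidirectional_readout(x, edges):
    """Run both directions and concatenate the pooled features."""
    n = x.shape[0]
    diffusion   = aggregate(x, edges, n, bottom_up=False)
    propagation = aggregate(x, edges, n, bottom_up=True)
    return np.concatenate([diffusion.mean(axis=0), propagation.mean(axis=0)])
```

For a source tweet with replies, e.g. edges `[(0, 1), (0, 2), (1, 3)]` over 3-dimensional node features, this yields a 6-dimensional graph representation that a classifier head could score.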

Cited by 7 publications (7 citation statements)
References 33 publications
“… Bi-GCN [ 20 ]: A graph convolutional neural network model based on the propagation direction and diffusion direction of the propagation tree. P-BiGAT [ 22 ]: A bidirectional graph attention network based on the propagation tree and diffusion tree built from the tweet comment and reposting relationships. …”
Section: Methodsmentioning
confidence: 99%
“…P-BiGAT [ 22 ]: A bidirectional graph attention network based on the propagation tree and diffusion tree built from the tweet comment and reposting relationships.…”
Section: Methodsmentioning
confidence: 99%
“…‖_{k=1}^{K} denoted the concatenation of the vectors x_1 through x_K, and W_k^l was a trainable weight matrix. α_ij denoted the attention coefficient of node j, calculated with the same method as in the literature [24], as in the following formula:…”
Section: Graph Attention Network(gat)mentioning
confidence: 99%
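The quoted passage describes the standard GAT attention coefficient with multihead concatenation (the ‖_{k=1}^{K} term). A minimal single-head sketch under the usual GAT formulation — e_ij = LeakyReLU(aᵀ[Wh_i ‖ Wh_j]) masked to edges, softmax-normalised per node — might look as follows; all names and shapes here are assumptions, not the paper's code.

```python
import numpy as np

def gat_layer(h, adj, W, a):
    """Single-head GAT layer in the usual formulation:
    e_ij = LeakyReLU(a^T [W h_i || W h_j]) over edges,
    alpha_ij = softmax_j(e_ij),  h_i' = sum_j alpha_ij W h_j.
    adj should contain self-loops so every row has a neighbour."""
    z = h @ W                                     # (N, F')
    n = z.shape[0]
    e = np.full((n, n), -np.inf)                  # mask non-edges
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                s = a @ np.concatenate([z[i], z[j]])
                e[i, j] = s if s > 0 else 0.2 * s  # LeakyReLU, slope 0.2
    e -= e.max(axis=1, keepdims=True)             # numerical stability
    alpha = np.exp(e)                             # masked entries become 0
    alpha /= alpha.sum(axis=1, keepdims=True)     # attention coefficients alpha_ij
    return alpha @ z

def multi_head(h, adj, heads):
    """Concatenate K heads, as in the ||_{k=1}^K term of the quoted passage."""
    return np.concatenate([gat_layer(h, adj, W, a) for W, a in heads], axis=1)
```

With K heads each producing F' features, the concatenated output has K·F' features per node, which matches the role of the trainable per-head weight matrix W_k^l in the quote.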
“…The MapReduce [13][14][15] parallel computing framework is a parallel computing model that runs on the HDFS distributed storage system [16]. It can process PB-scale data in parallel in a highly fault-tolerant way and realizes the parallel task processing of the Hadoop platform [17, 18]. The core design idea is divide and conquer: rather than pushing data to the computation, the computation is pushed to the data, which greatly reduces communication overhead.…”
Section: Improvement Of Dual Threshold Single-pass Algorithmmentioning
confidence: 99%
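The divide-and-conquer idea in the quoted passage — split the input, run the same computation on each split, then merge grouped intermediate results — can be sketched with an in-process word count. This mimics only the MapReduce programming model (map, shuffle, reduce), not Hadoop's actual distributed execution on HDFS.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    """Map: emit a (key, 1) pair for every word in one input split."""
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: merge each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

def mapreduce(chunks):
    pairs = chain.from_iterable(map_phase(c) for c in chunks)
    return reduce_phase(shuffle(pairs))
```

In a real deployment the map and reduce functions run on the nodes holding each data block ("push the computation to the data"), so only the compact intermediate pairs cross the network.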