2020
DOI: 10.1186/s13673-020-00218-w
Information cascades prediction with attention neural network

Abstract: Online social networks are very popular among people, and they are changing the way people communicate, work, and play, mostly for the better. One of the things that fascinates us most about social network sites is the resharing mechanism that has the potential to spread information to millions of users in a matter of few hours or days. For instance, a user can share the content (e.g., videos on YouTube, tweets on Twitter, and photos on Flickr) with her set of friends, who subsequently can potentially reshare …

Cited by 20 publications (7 citation statements); references 39 publications.
“…The last layer parameters can be trained directly, as they directly impact the loss function, but to train the weights of the previous layers the feedback needs to travel backwards [27].…”
Section: E. Error Back Propagation (mentioning)
confidence: 99%
“…The weights between the input and hidden layers are W1, and the weights between the hidden and output layers are W2. The last layer parameters can be trained directly, as they directly impact the loss function, but to train the weights of the previous layers the feedback needs to travel backwards [27]. The weights are therefore updated in order, from W2 to W1.…”
Section: Introduction (mentioning)
confidence: 99%
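The update order described in these statements can be sketched for a minimal two-layer network. This is an illustrative example, not code from the cited paper: all shapes, the tanh activation, and the learning rate are assumptions. W2's gradient is computed directly from the loss, while W1's gradient requires the error signal to travel backwards through W2, so W2 is updated before W1.

```python
import numpy as np

# Hypothetical two-layer network: W1 maps input -> hidden,
# W2 maps hidden -> output (dimensions chosen for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = rng.normal(size=(4, 1))         # regression targets
W1 = rng.normal(size=(3, 5)) * 0.1  # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.1  # hidden -> output weights
lr = 0.01

# Forward pass
h = np.tanh(X @ W1)                 # hidden activations
y_hat = h @ W2                      # output layer (linear)
loss = np.mean((y_hat - y) ** 2)    # mean squared error

# Backward pass: the last layer's gradient comes directly from the loss;
# the error then travels backwards through W2 to reach W1 (chain rule).
d_out = 2 * (y_hat - y) / len(y)            # dLoss / dy_hat
grad_W2 = h.T @ d_out                       # direct gradient for W2
d_hidden = (d_out @ W2.T) * (1 - h ** 2)    # backpropagated through tanh
grad_W1 = X.T @ d_hidden                    # gradient for W1

# Update in the order the citation describes: W2 first, then W1.
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```

A single small gradient step along both weight matrices should reduce the training loss, which is easy to verify by re-running the forward pass.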
“…Following previous works [11,51], we adopt the mean square log-transformed error (MSLE) and mean absolute error (MAE) as our evaluation metrics to measure the difference between the actual value and the predicted value, which are defined as…”
Section: Datasets (mentioning)
confidence: 99%
“…Liu et al. [115] proposed an end-to-end framework, which incorporates Bi-GRU and an attention mechanism (AM) to make the coarse-grained macro-level prediction. An attention mechanism, involving intra-attention and inter-gate modules, was designed to efficiently capture and fuse the structural and temporal information from the observed period of the cascade.…”
Section: G. RNN-based Models (mentioning)
confidence: 99%
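The intra-attention and inter-gate idea described in this statement can be illustrated with a small numpy sketch. This is not the cited model: the hidden states stand in for Bi-GRU outputs, and all weight matrices and dimensions are hypothetical. Intra-attention pools the observed time steps into one temporal summary; a sigmoid gate then fuses the structural and temporal representations.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Intra-attention: score each time step's hidden state, normalize the
# scores, and take the weighted sum as the temporal summary.
H = rng.normal(size=(6, 8))       # 6 observed steps, 8-dim hidden states
w_att = rng.normal(size=(8,))     # attention scoring vector (hypothetical)
alpha = softmax(H @ w_att)        # attention weights over time steps
temporal = alpha @ H              # attention-pooled temporal representation

# Inter-gate: a learned sigmoid gate mixes the structural and temporal
# representations into a single cascade embedding.
structural = rng.normal(size=(8,))            # stand-in structural features
W_g = rng.normal(size=(16, 8)) * 0.1          # gate weights (hypothetical)
z = np.concatenate([structural, temporal]) @ W_g
g = 1.0 / (1.0 + np.exp(-z))                  # element-wise gate in (0, 1)
fused = g * structural + (1 - g) * temporal   # fused cascade embedding
```

The gate lets the model decide, per dimension, how much to trust the structural versus the temporal view of the cascade.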