2020
DOI: 10.1155/2020/3857871

A Tri-Attention Neural Network Model-Based Recommendation

Abstract: Heterogeneous information network (HIN), which contains various types of nodes and links, has been applied in recommender systems. Although HIN-based recommendation approaches outperform traditional recommendation approaches, they still have the following problems: meta-paths are selected manually rather than automatically; meta-path representations are rarely learned explicitly; and the global and local information of each node in the HIN is not explored simultaneously. To solve the above…

Cited by 2 publications (2 citation statements)
References 37 publications
“…LSTM is an improved RNN (Recurrent Neural Network) model that solves the problem of exploding or vanishing gradients during RNN training. Unlike the single tanh loop structure of a standard RNN, LSTM is a special network with three "gates" [21,22]: the forget gate, the input gate, and the output gate.…”
Section: Long- and Short-Term Memory Network Layer
Mentioning confidence: 99%
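The three-gate structure described in the excerpt can be sketched as a single LSTM step. This is a minimal NumPy illustration of a generic LSTM cell, not the cited paper's exact layer; the packing of the four gate parameter sets into stacked matrices `W`, `U`, `b` is an implementation assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W (4d x input_dim), U (4d x d), and b (4d,)
    stack the parameters of the forget, input, candidate, and output gates."""
    d = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0:d])          # forget gate: how much of c_prev to keep
    i = sigmoid(z[d:2 * d])      # input gate: how much new info to admit
    g = np.tanh(z[2 * d:3 * d])  # candidate cell state
    o = sigmoid(z[3 * d:4 * d])  # output gate: how much of the cell to expose
    c = f * c_prev + i * g       # additive update: this path is what mitigates
                                 # the vanishing-gradient problem of plain RNNs
    h = o * np.tanh(c)           # hidden state passed to the next step
    return h, c
```

The additive cell update `c = f * c_prev + i * g` is the key difference from the standard RNN's single tanh recurrence: gradients can flow through the cell state without repeated squashing.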
“…This paper adds an AAM layer [22] to the method, which can better capture the affective information in the movie box-office data and grasp the core data information. It overcomes the problem that the standard LSTM model uses the same state vector in every prediction step, which prevents it from fully learning the detailed information of the sequence encoding during prediction.…”
Section: Adaptive Attention Mechanism Layer
Mentioning confidence: 99%
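The excerpt does not specify the cited AAM layer's exact form, but the problem it addresses (using the same state vector at every prediction step) is what attention over the encoder states solves. As a hedged illustration only, here is a generic additive-attention sketch: each decoding step builds its own context vector by re-weighting all encoder states against the current query state. The parameter names `W1`, `W2`, `v` are assumptions, not the paper's notation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def additive_attention(states, query, W1, W2, v):
    """Additive attention over encoder states.
    states: (T, d) encoder hidden states; query: (d,) current decoder state.
    Returns a per-step context vector instead of one fixed state vector."""
    scores = np.array([v @ np.tanh(W1 @ s + W2 @ query) for s in states])
    weights = softmax(scores)                     # one weight per time step
    context = (weights[:, None] * states).sum(axis=0)
    return context, weights
```

Because `weights` depends on `query`, each prediction step attends to a different mixture of the sequence encoding, rather than reusing one summary vector throughout.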