2022 1st International Conference on Information System & Information Technology (ICISIT)
DOI: 10.1109/icisit54091.2022.9873070

Recommender System Using Transformer Model: A Systematic Literature Review

Cited by 2 publications (1 citation statement)
References 19 publications
“…Transformers are a deep learning architecture consisting of encoders and decoders that use attention mechanisms [25] to generate an optimal weighting of the data [26]. Unlike recurrent neural networks (RNNs), they do not process inputs sequentially, which allows them to process data in parallel far more efficiently [26]. Large language models are built on this architecture.…”
Section: Transformers and Large Language Models
Mentioning confidence: 99%
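
The statement above captures why attention-based models parallelize better than RNNs: the weighting of every position against every other position is a single matrix product rather than a step-by-step recurrence. The NumPy sketch below is a minimal illustration of scaled dot-product self-attention; it is not from the cited paper, and the function name and toy shapes are assumptions chosen for clarity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend over all positions at once.

    Q, K, V: (seq_len, d_k) arrays. Every position is compared with
    every other position in one matrix multiply -- no sequential loop,
    which is what lets transformers process data in parallel, unlike RNNs.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len) similarity scores
    # Softmax turns the scores into a weighting of the data.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of the value vectors

# Toy usage (hypothetical shapes): 4 tokens, 8-dimensional representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (4, 8)
```

An RNN would instead fold the four tokens into a hidden state one at a time, so each step waits on the previous one; here the entire sequence is handled in a single pass of matrix operations.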