Temporal Density-aware Sequential Recommendation Networks with Contrastive Learning (2023)
DOI: 10.1016/j.eswa.2022.118563

Cited by 14 publications (10 citation statements)
References 44 publications
“…Recently, the emergence of large language models (LLMs), such as GPT-3.5, has garnered significant attention from both the research community and industry. These LLMs have shown remarkable capabilities, achieving human-level performance on benchmark datasets, including SDS [17] and cross-language summarization (CLS) [18]. The introduction of LLMs has sparked growing interest in harnessing their potential for various applications.…”
Section: Contradictions
Confidence: 99%
“…This is because the summaries generated by LLM-based models can sometimes surpass the quality of the standard reference summaries within the dataset. Although some meticulously curated summary datasets are available to address the aforementioned issues, most of them consist of single documents [17]. Consequently, there remains a shortage of curated datasets specifically designed for MDS.…”
Section: Contradictions
Confidence: 99%