2020
DOI: 10.48550/arxiv.2009.13292
Preprint

RecoBERT: A Catalog Language Model for Text-Based Recommendations

Cited by 1 publication (2 citation statements) | References: 0 publications

“…Penha and Hauff [38] show that off-the-shelf pretrained BERT stores both collaborative- and content-based knowledge about the items to recommend in its parameters; furthermore, fine-tuned BERT is highly effective at distinguishing relevant responses from irrelevant ones. ReXPlug [19] exploits pretrained LMs to produce high-quality explainable recommendations by generating synthetic reviews on behalf of the user, and RecoBERT [32] builds upon BERT and introduces a technique for self-supervised pre-training of catalogue-based language models for text-based item recommendations.…”
Section: CRSs and LMs
Mentioning, confidence: 99%
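The quoted statement describes RecoBERT only at a high level. As a rough illustration of the general idea of text-based item recommendation over a catalogue (not RecoBERT's actual self-supervised pre-training objective), the sketch below scores candidate catalogue items against a seed item with an off-the-shelf BERT encoder; the model name, mean pooling, cosine scoring, and example catalogue entries are all illustrative assumptions.

```python
# Minimal sketch (not the RecoBERT procedure itself): text-based item
# recommendation over a catalogue using a generic pretrained BERT encoder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)             # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (B, H)

# Illustrative seed item and catalogue, each described only by free text.
seed = "Full-bodied Bordeaux red with dark fruit, cedar, and firm tannins"
catalog = [
    "Napa Valley Cabernet Sauvignon - rich black currant, oak, structured tannins",
    "Light Provence rosé - crisp and dry, with notes of strawberry and citrus",
    "Vintage Port - sweet fortified wine with plum and chocolate flavours",
]

seed_vec = embed([seed])
item_vecs = embed(catalog)
scores = torch.nn.functional.cosine_similarity(seed_vec, item_vecs)
for text, score in sorted(zip(catalog, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {text}")
```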
“…Numerous studies have shown that these transformer-based LMs, such as BERT [12], RoBERTa [30], and GPT [40], pretrained on large corpora, can learn universal language representations and are extraordinarily powerful for many downstream tasks via fine-tuning [39]. Recently, CRSs have started to leverage pretrained LMs for their ability to semantically interpret a wide range of preference-statement variations, and these models have demonstrated their potential for building a variety of strong CRSs [19,32,38].…”
Section: Introduction
Mentioning, confidence: 99%
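To make the fine-tuning claim concrete, the following is a minimal sketch of fine-tuning a pretrained LM on a downstream relevance task (dialogue context plus candidate response classified as relevant or irrelevant); the model name, toy data, and training loop are assumptions for illustration, not the exact procedure of any cited paper.

```python
# Minimal sketch of fine-tuning a pretrained LM for response-relevance
# classification; model, labels, and data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = irrelevant, 1 = relevant
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy training pairs: (conversation context, candidate response, label).
pairs = [
    ("I loved Inception, anything similar?", "Try Interstellar, same director.", 1),
    ("I loved Inception, anything similar?", "The weather is nice today.", 0),
]

model.train()
for context, response, label in pairs:
    # Encode the (context, response) pair as a single BERT input.
    batch = tokenizer(context, response, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor([label]))
    out.loss.backward()        # standard fine-tuning step: backprop the CE loss
    optimizer.step()
    optimizer.zero_grad()
```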