Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.154

RecoBERT: A Catalog Language Model for Text-Based Recommendations

Abstract: Language models that utilize extensive self-supervised pre-training on unlabeled text have recently been shown to significantly advance state-of-the-art performance on a variety of language understanding tasks. However, it remains unclear whether and how these models can be harnessed for text-based recommendations. In this work, we introduce RecoBERT, a BERT-based approach for learning catalog-specialized language models for text-based item recommendations. We suggest novel training and inference …
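The abstract is truncated before it describes RecoBERT's training and inference procedures, so nothing below reproduces them. As a purely illustrative sketch of the task the paper names, scoring text-based item similarity with a BERT encoder, the following uses the Hugging Face `transformers` API; the checkpoint, the mean-pooling scheme, and the function names are all assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch only: text-based item similarity with a generic
# BERT encoder. Checkpoint, pooling, and names are assumptions; this is
# NOT RecoBERT's actual training or inference procedure.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    """Mean-pool BERT token states into one vector per item text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()   # (B, T, 1)
    summed = (hidden * mask).sum(dim=1)                    # ignore padding
    return summed / mask.sum(dim=1).clamp(min=1e-9)

def score_similarity(item_a: str, item_b: str) -> float:
    """Cosine similarity between two catalog item texts."""
    vecs = embed([item_a, item_b])
    return torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0).item()

# Example: compare two catalog item descriptions.
print(score_similarity(
    "Dry red wine with notes of dark cherry and oak.",
    "Full-bodied red with cherry aromas and a long oak finish.",
))
```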

Cited by 15 publications (14 citation statements); references 40 publications.

Citation statements:
“…Semantic similarity has been studied in many fields, such as computer vision (Parmar et al., 2018; Huang et al., 2017), recommender systems (Wang and Fu, 2020; Barkan et al., 2020a, 2021; Malkiel et al., 2020), and natural language processing (Devlin et al., 2019; Reimers and Gurevych, 2019; Mikolov et al., 2013). Recently, transformer-based Language Models (LMs) ushered in significant performance gains in various natural language understanding tasks, but mainly on relatively short texts (Devlin et al., 2019; Liu et al., 2019).…”
Section: Related Work
confidence: 99%
“…We employed two content analyzers: (1) For categorical features and tags, we used a single linear layer with a 100-dimensional output vector. (2) For textual description, we used a variant of the RecoBERT model [23] that produces an embedding of size 100. Finally, the multiview content analyzer produces a 40-dimensional output vector.…”
Section: Baselines and Hyperparameter Configuration
confidence: 99%
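Read as an architecture, the quoted configuration describes a two-view content analyzer: a 100-dimensional linear projection of categorical features and tags, a 100-dimensional text embedding from a RecoBERT variant, and a 40-dimensional fused output. A hypothetical PyTorch sketch follows; the fusion by concatenation plus a linear layer is an assumption (the quote does not state how the views are combined), and the text encoder is stubbed rather than an actual RecoBERT model.

```python
# Hypothetical sketch of the two-view content analyzer quoted above.
# Only the dimensions (100 + 100 -> 40) come from the quote; the fusion
# scheme, class names, and stub encoder are assumptions.
import torch
import torch.nn as nn

class MultiViewContentAnalyzer(nn.Module):
    def __init__(self, num_categorical_features: int, text_encoder: nn.Module):
        super().__init__()
        # View 1: categorical features and tags -> 100-dim vector.
        self.categorical = nn.Linear(num_categorical_features, 100)
        # View 2: textual description -> 100-dim embedding
        # (stand-in for the RecoBERT variant the citing paper used).
        self.text_encoder = text_encoder
        # Fusion to the 40-dim output (concat + linear is assumed).
        self.fuse = nn.Linear(100 + 100, 40)

    def forward(self, categorical_inputs, text_inputs):
        cat_view = self.categorical(categorical_inputs)   # (B, 100)
        txt_view = self.text_encoder(text_inputs)         # (B, 100)
        return self.fuse(torch.cat([cat_view, txt_view], dim=-1))  # (B, 40)

# Usage with a dummy encoder standing in for RecoBERT embeddings:
dummy_encoder = nn.Sequential(nn.Linear(768, 100))
analyzer = MultiViewContentAnalyzer(num_categorical_features=32,
                                    text_encoder=dummy_encoder)
out = analyzer(torch.randn(4, 32), torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 40])
```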