Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331268

Warm Up Cold-start Advertisements

Cited by 110 publications (20 citation statements)
References 34 publications
“…To validate the recommendation performance of the proposed method in different phases, we preprocess the datasets following [51,26,55]. Specifically, we label items as old or new based on their frequency, where items with more than N labeled instances are old and others are new.…”
Section: Experiments Setting
confidence: 99%
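
A minimal sketch of the frequency-based old/new item split described in the statement above; the DataFrame layout, column names, and the threshold n are illustrative assumptions, not the cited papers' exact preprocessing code.

```python
# Sketch of the frequency-based old/new item split described above.
# Column names and the threshold n are illustrative assumptions.
import pandas as pd

def split_old_new(interactions: pd.DataFrame, n: int) -> pd.DataFrame:
    """Label items 'old' if they appear in more than n labeled instances, else 'new'."""
    counts = interactions["item_id"].value_counts()
    phase = interactions["item_id"].map(counts).gt(n).map({True: "old", False: "new"})
    return interactions.assign(item_phase=phase)
```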
“…-DropoutNet [39] improves the robustness by applying dropout technology to mitigate the model's dependency on item ID. -Meta embedding (Meta-E) [26] learns the ID embedding for new items fast adaption with meta-learning technology.…”
Section: Backbones and Baselines
confidence: 99%
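
A minimal PyTorch-style sketch of the Meta-Embedding ("Meta-E") idea referenced above: a meta-trained generator maps a new item's content features to an initial ID embedding, which is then fast-adapted with a few gradient steps on the item's first labeled instances. The generator, score_fn, loss, and hyperparameters are illustrative assumptions, not the paper's exact method.

```python
# Sketch of Meta-E-style fast adaptation of a new item's ID embedding.
# generator, score_fn, the loss, and all hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warm_up_embedding(generator: nn.Module,
                      content: torch.Tensor,   # (content_dim,) new item's features
                      x_few: torch.Tensor,     # (k, feat_dim) first labeled instances
                      y_few: torch.Tensor,     # (k,) binary labels (float), e.g. clicks
                      score_fn,                # score_fn(x, emb) -> (k,) logits
                      inner_lr: float = 0.1,
                      steps: int = 2) -> torch.Tensor:
    # Detach so adaptation treats the embedding as a free leaf variable;
    # the generator itself is assumed to be meta-trained separately.
    emb = generator(content).detach().requires_grad_()
    for _ in range(steps):
        loss = F.binary_cross_entropy_with_logits(score_fn(x_few, emb), y_few)
        (grad,) = torch.autograd.grad(loss, emb)
        emb = (emb - inner_lr * grad).detach().requires_grad_()  # fast adaptation
    return emb.detach()
```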
“…One promising research line falls into the category of content-based methods, which usually train a generative model to project a cold item's content, e.g. attributes, text, and image, etc., onto a warm item embedding space [2,3,16,19,21,24,27,32,46]. For example, DropoutNet [32] implicitly transforms a cold-start item's content to a warm embedding with randomly dropping the warm item embeddings that are learned during the training process.…”
Section: Introduction
confidence: 99%
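
A minimal PyTorch sketch of the DropoutNet-style training trick quoted above: learned warm ID embeddings are randomly zeroed out during training, so the model learns to produce a usable representation from content features alone, which is exactly the situation a cold-start item is in. Module and tensor names are illustrative assumptions, not DropoutNet's actual implementation.

```python
# Sketch of DropoutNet-style training: randomly drop whole ID embeddings
# so the projection learns to fall back on content features. All names,
# shapes, and the architecture here are illustrative assumptions.
import torch
import torch.nn as nn

class DropoutNetSketch(nn.Module):
    def __init__(self, num_items: int, id_dim: int, content_dim: int,
                 out_dim: int, p_drop: float = 0.5):
        super().__init__()
        self.id_emb = nn.Embedding(num_items, id_dim)
        self.p_drop = p_drop
        self.proj = nn.Sequential(
            nn.Linear(id_dim + content_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, item_ids: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
        ids = self.id_emb(item_ids)
        if self.training:
            # Zero the entire ID embedding for a random subset of the batch,
            # simulating cold items whose ID embedding is unavailable.
            keep = (torch.rand(ids.size(0), 1, device=ids.device) > self.p_drop).float()
            ids = ids * keep
        return self.proj(torch.cat([ids, content], dim=-1))
```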
“…There are many papers discussing few-shot learning with meta-learning [12], data augmentation [53] or regularization [50], but most of them don't consider the graph data. Furthermore, there have been several methods for few-shot node classification [20,38,56] and few-shot link prediction [10,23,32,43], but they only focus on node-level embedding. Recently, Chauhan et al [7] proposed few-shot graph classification based on graph spectral measures and got satisfactory performance.…”
Section: Introduction
confidence: 99%