2023
DOI: 10.1109/tmm.2022.3187556
Self-Supervised Learning for Multimedia Recommendation

Cited by 45 publications (24 citation statements)
References 31 publications
“…• MMGCL [49]: It incorporates graph contrastive learning into the recommender through modality edge dropout and masking. • SLMRec [33]: This method designs data augmentation on multimodal content with two components, i.e., noise perturbation over features and multi-modal pattern uncovering augmentation.…”
Section: Evaluation Protocols
confidence: 99%
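The two augmentations quoted above (modality edge dropout in MMGCL and noise perturbation over features in SLMRec) can be sketched roughly as follows. The function names, the Gaussian noise model, and the drop rate are illustrative assumptions, not the papers' exact implementations:

```python
import numpy as np

def feature_noise_perturbation(features, noise_scale=0.1, seed=None):
    """SLMRec-style augmentation sketch: add small random noise to modality
    features. The Gaussian distribution and scale here are assumptions."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_scale, size=features.shape)
    return features + noise

def modality_edge_dropout(edges, drop_rate=0.2, seed=None):
    """MMGCL-style augmentation sketch: randomly drop user-item edges from
    one modality's interaction graph. The drop rate is illustrative."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(edges)) >= drop_rate
    return [e for e, k in zip(edges, keep) if k]
```

Each call produces one corrupted "view" of the features or graph; contrastive training then treats two views of the same user or item as a positive pair.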
See 1 more Smart Citation
“…• MMGCL [49]: It incorporates the graph contrastive learning into recommender through modality edge dropout and masking. • SLMRec [33]: This method designs data augmentation on multimodal content with two components, i.e., noise perturbation over features and multi-modal pattern uncovering augmentation.…”
Section: Evaluation Protocolsmentioning
confidence: 99%
“…
Metrics         Amazon-Baby       Allrecipes        Tiktok
                R@20     N@20     R@20     N@20     R@20     N@20
SLMRec-w-ASL    0.0835   0.0351   0.0342   0.0125   0.0871   0.0366
SLMRec-w-CL     0.0790   0.0331   0.0327   0.0119   0.0853   0.0359

SLMRec-w-CL directly uses the output of the GNNs, without the spatial transformation in Eq. 10 of SLMRec [33]. SLMRec-w-ASL adds the operations in Eqs. 2, 3, 4, 5, and 7 of MMSSL.…”
Section: Data
confidence: 99%
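The CL variant compared above trains with a contrastive objective that pulls together two augmented views of the same embedding. A generic InfoNCE sketch under that assumption (not SLMRec's exact formulation) looks like:

```python
import numpy as np

def info_nce(view_a, view_b, temperature=0.2):
    """Generic InfoNCE loss between two batches of augmented embedding views.
    Row i of each view is assumed to come from the same user/item (a positive
    pair); every other row in the batch serves as a negative."""
    # L2-normalize rows so dot products become cosine similarities.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # pairwise similarity matrix
    # Log-softmax over each row; the diagonal holds the positive pairs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```

When the views are aligned (matching rows are similar), the loss is low; misaligning the positives raises it, which is what drives the self-supervised signal.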
“…Amazon: VBPR [15], VMCF [38], AMR [47], VECF [7], PAMD [14], PMGT [32], LATTICE [67], JRL [69], BM3 [80], GraphCAR [62], DVBPR [19], MAML [27], DMRL [26], MVGAE [64], MMCPR [33], HCGCN [36], FREEDOM [73], DRAGON [71]
Kwai: MMGCN [59], SLMRec [48], GRCN [58], EliMRec [31], InvRL [10], A2BM2GL [3]
Tiktok: MMGCN [59], DualGNN [53], MGAT [49], SLMRec [48], GRCN [58], MMGCL [65], EgoGCN [5], EliMRec [31], InvRL [10], A2BM2GL [3], LUDP [24]
MovieLens: MMGCN [59], DualGNN [53], MGAT [49], SLMRec [48], GRCN [58], MMGCL [65], MKGAT [46], EgoGCN [5], EliMRec [31], InvRL [10], A2BM2GL [3], LUDP [24]
Yelp: PAMD [...…”
Section: Dataset Models
confidence: 99%
“…Different from traditional recommendation, these applications utilize item multimodal content information such as video frames, audio tracks, and item descriptions. MMGCN [59], MGAT [49], DualGNN [53] and SLMRec [48] are micro-video recommendation models that utilize the descriptions, captions, audio, and frames inside a video to model multimodal user preference on micro-videos. Fashion recommendation struggles to build an efficient recommender because of the complex features involved and their subjectivity.…”
Section: Introduction
confidence: 99%
“…In embedding-based approaches, additional user and item features are used to inform either the neural network producing these embeddings or the neural network that performs recommendations based on them; such approaches include the widely used DeepCoNN [83], YouTube recommendations [10], embeddings based on topic models [72], and more. As the primary baseline for this work, we selected the Personalized Content Discovery (PCD) model [29] because it was tailored to a similar problem of content discovery for brands but also note several other works that extend recommender systems with extra features and extra data modalities [8,20,28,34,45,46,68,69,73,77,81].…”
Section: Related Work
confidence: 99%