2023
DOI: 10.1109/tpami.2022.3153112
Disentangled Representation Learning for Recommendation

Abstract: Disentangled Representation Learning (DRL) aims to learn a model capable of identifying and disentangling the underlying factors hidden in the observable data in representation form. The process of separating underlying factors of variation into variables with semantic meaning benefits in learning explainable representations of data, which imitates the meaningful understanding process of humans when observing an object or relation. As a general learning strategy, DRL has demonstrated its power in improving the…

Cited by 27 publications (10 citation statements)
References 129 publications (195 reference statements)
“…Specifically, the learned representations are expected to isolate information about each specific factor in only a few (or a group of) dimensions. Benefiting from separating out the underlying structure of the data into disjoint parts, disentangled representations are inherently more interpretable, robust to adversarial attacks, and capable of enhancing the generalization ability of learning systems (Wang et al 2023b; Steenkiste et al 2019; Reddy, Godfrey, and Balasubramanian 2022; Ma et al 2019).…”
Section: Disentangled Representation Learning
confidence: 99%
“…Existing studies [58,59] have demonstrated the potential of DRL in modeling human learning and understanding of the world, mainly because DRL encourages the representations to carry interpretable semantic information with independent factors, and shows significant potential for representing invariance [60,61], integrity [62,63], and generalization [64,65]. In view of this, DRL is now widely applied in computer vision [66][67][68][69][70], natural language processing [71][72][73][74][75], and graph learning [76][77][78]. The most typical approaches of DRL are based on generative models such as the variational auto-encoder (VAE) [79] or the generative adversarial network (GAN) [80].…”
Section: Disentangled Representation Learning
confidence: 99%
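The VAE route mentioned in the excerpt above optimizes a reconstruction term plus a weighted KL penalty toward a factorized prior; a minimal NumPy sketch of a β-VAE-style objective (the function name and the β value are illustrative, not taken from the cited works) is:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Sketch of a beta-VAE objective: reconstruction + weighted KL."""
    # Reconstruction term: squared error between input and decoder output.
    recon = np.sum((x - x_recon) ** 2)
    # KL divergence from N(mu, exp(logvar)) to the N(0, I) prior;
    # weighting it by beta > 1 pressures the latent dimensions toward
    # the factorized, disentangled structure the excerpt describes.
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

When the posterior matches the prior exactly (mu = 0, logvar = 0) and reconstruction is perfect, the loss is zero; the β weight then controls how strongly training trades reconstruction fidelity for latent independence.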
“…The multimodal features are fed into different modality encoders. The modality encoders extract the representations and are general architectures used in other fields, such as ViT [13] for images…” The remainder of this excerpt is a flattened survey table (Refs / Fusion approach / Auxiliary technique, where CL = contrastive learning and DRL = disentangled representation learning):

[34]              Coarse-grained Attention                    CL
[40]              Coarse-grained Attention                    None
[6], [21]         Fine-grained Attention                      None
[30], [27], [57]  Combined Attention                          None
[44], [39]        User-item Graph + Fine-grained Attention    None
[56]              User-item Graph                             CL
[59]              Item-item Graph                             CL
[58], [38]        Item-item Graph                             None
[33]              Item-item Graph + Fine-grained Attention    None
[50], [45]        Knowledge Graph                             None
[2], [46]         Knowledge Graph                             CL
[8]               Knowledge Graph + Fine-grained Attention    None
[43]              Knowledge Graph + Filtration (graph)        None
[63], [55], [31]  Filtration (graph)                          None
[49], [4]         MLP / Concat                                DRL
[15], [28]        Fine-grained Attention                      DRL
[61], [36], [48]  None                                        DRL

Section: Procedures of MRS
confidence: 99%
“…Besides, contrastive learning guarantees the consistency within, and the gap between, the separated modal representations. Compared with MacridVAE, SEM-MacridVAE [48] considers item semantic information when learning disentangled representations from user behaviors.…”
Section: Disentangled Representation Learning
confidence: 99%
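The contrastive component referenced above is commonly an InfoNCE-style loss that pulls a matching pair of representations together while pushing the anchor away from negatives; a generic NumPy sketch (a standard formulation, not the exact objective of MacridVAE or SEM-MacridVAE) is:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE loss: one positive pair vs. a list of negatives."""
    def cos(a, b):
        # Cosine similarity between two vectors.
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Positive similarity first, then all negative similarities.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Cross-entropy with the positive at index 0, in log-sum-exp form.
    return -logits[0] + np.log(np.sum(np.exp(logits)))
```

The loss is small when the anchor aligns with its positive and is far from the negatives, which is exactly the "consistency and gap" behavior the excerpt attributes to contrastive learning.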