Recommender systems automatically suggest items that users are likely to enjoy. Their goal is to model user-item interactions by effectively representing users and items. Existing methods primarily learn users' preferences and items' features as vectorized embeddings, and model a user's general preference for items through the interaction between the two. In practice, however, users hold specific preferences for item attributes, and these preferences are often related. Exploring such fine-grained preferences, as well as modeling the relationships among a user's different preferences, can therefore improve recommendation performance. Toward this end, we propose a dual preference distribution learning framework (DUPLE), which jointly learns a general preference distribution and a specific preference distribution for a given user: the former corresponds to the user's general preference for items, while the latter captures the user's specific preferences for item attributes. Notably, the mean vector of each Gaussian distribution captures the user's preferences, and the covariance matrix models their relationships. Moreover, we can summarize a preferred attribute profile for each user, depicting his/her preferred item attributes, and then explain each recommended item by checking the overlap between its attributes and the user's preferred attribute profile. Extensive quantitative and qualitative experiments on six public datasets demonstrate the effectiveness and explainability of DUPLE.
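The core idea of a Gaussian preference distribution can be sketched as follows. This is a minimal illustration, not the paper's actual DUPLE model: the parameters, dimension, and scoring function are all hypothetical stand-ins for learned quantities. It shows how a mean vector can act as a preference center and a covariance matrix can encode how preference dimensions co-vary, with items ranked by their log-density under the user's distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (illustrative)

# Hypothetical learned parameters for one user: the mean captures the
# user's preferences, the covariance captures how preferences relate.
mu = rng.normal(size=d)          # preference center
A = rng.normal(size=(d, d))
cov = A @ A.T + 0.1 * np.eye(d)  # positive-definite covariance

def preference_score(item_emb, mu, cov):
    """Score an item by the Gaussian log-density of its embedding."""
    diff = item_emb - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (maha + logdet + len(mu) * np.log(2 * np.pi))

item = rng.normal(size=d)
score = preference_score(item, mu, cov)
# Items are ranked by this score: a higher log-density means the item
# lies closer to the user's preference distribution.
```

An item whose embedding coincides with the mean achieves the maximal score, which is why the mean can be read as the user's preference and the covariance as the coupling between preference dimensions.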
Egocentric early action prediction aims to recognize an action from the first-person view by observing only a partial video segment, which is challenging due to the limited context the partial video provides. In this paper, we propose a novel multi-modal adversarial knowledge distillation framework to tackle this problem. In particular, our approach involves a teacher network that learns an enhanced representation of the partial video by considering the future, unobserved video segment, and a student network that mimics the teacher to produce a powerful representation of the partial video and, based on that, predicts the action label. To promote knowledge distillation between the teacher and the student network, we seamlessly integrate adversarial learning with latent and discriminative knowledge regularizations, encouraging the learned representations of the partial video to be more informative and discriminative for action prediction. Finally, we devise a multi-modal fusion module to comprehensively predict the action label. Extensive experiments on two public egocentric datasets validate the superiority of our method over state-of-the-art methods. We have released the code and the parameters involved to benefit other researchers.
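The teacher-student distillation idea above can be sketched in miniature. This is an assumed simplification, not the paper's adversarial framework: it shows only a latent-knowledge regularization, where a mean-squared-error loss pulls the student's representation of the partial video toward the teacher's enhanced representation, with the teacher frozen.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # feature dimension (illustrative)

# Hypothetical representations: the teacher sees the full video
# (observed + future segment), the student sees only the partial video.
teacher_repr = rng.normal(size=d)  # enhanced representation (frozen)
student_repr = rng.normal(size=d)  # student's partial-video representation

def distillation_loss(student, teacher):
    """Latent-knowledge regularization: MSE between the student's and
    the teacher's representations of the (partial) video."""
    return float(np.mean((student - teacher) ** 2))

before = distillation_loss(student_repr, teacher_repr)

# One gradient step on the student; the teacher is not updated.
lr = 0.5
grad = 2 * (student_repr - teacher_repr) / d
student_repr = student_repr - lr * grad

after = distillation_loss(student_repr, teacher_repr)
# The step shrinks the gap between student and teacher representations,
# so the student learns to approximate the future-aware representation.
```

In the full framework, this regularization is combined with adversarial learning and a discriminative (classification) loss so the distilled representation stays useful for predicting the action label, not just for matching the teacher.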