Sequential recommender systems (SRSs) aim to predict the next item of interest to a user by learning the user's dynamic preferences over items from sequential user-item interactions. Most existing SRSs make recommendations by modeling only a user's main preference for the functions of items, while ignoring the user's auxiliary visual preference for the appearances and styles of items. Although visual preference is less significant than the main preference, it may still play an important role in most users' choices of items. On the one hand, given multiple items with the same function, a user often prefers the one that best matches her/his visual preference. For example, from several dresses with the same function, a lady may choose the one whose style suits her best. On the other hand, some users (e.g., young girls) are particularly concerned about the appearances of certain items (e.g., clothes and jewelry). Therefore, overlooking users' visual preferences may generate unsatisfying recommendations that fail to match a user's various types of preferences and thus degrade the consumption experience. To address this gap, in this paper we propose modeling users' visual preferences to improve the performance of sequential recommendation. Specifically, we devise a coupled Double-chain Preference learning Network (DPN) to jointly learn a user's main preference, her/his visual preference, and the interactions between them. In DPN, one chain models a user's main preference by taking the IDs of items as input, and the other chain models the user's visual preference by taking the appearance images of items as input. Finally, the two types of preferences are carefully integrated with an attention module for next-item prediction. Extensive experiments on two real-world transaction datasets show the superiority of the proposed DPN over representative and state-of-the-art SRSs.
INDEX TERMS visual preference, recommendations, deep neural networks
I. INTRODUCTION
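The double-chain idea sketched in the abstract — one chain over item IDs for the main preference, one over item appearance features for the visual preference, fused by attention before scoring candidate items — can be illustrated with a toy example. The following self-contained Python sketch is only an assumption-laden stand-in: the dimensions, the plain RNN cells, and the attention scoring vector are all illustrative choices, not the paper's actual design.

```python
import math
import random

random.seed(0)

# Toy dimensions (assumptions for illustration, not taken from the paper).
N_ITEMS, D, IMG_DIM = 10, 4, 6  # item vocabulary, hidden size, image-feature size

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Item-ID embeddings, an image-feature projection, one set of simple RNN
# weights per chain, and an attention scoring vector.
E_id = rand_mat(N_ITEMS, D)
W_img = rand_mat(D, IMG_DIM)
W_m, U_m = rand_mat(D, D), rand_mat(D, D)
W_v, U_v = rand_mat(D, D), rand_mat(D, D)
w_att = [random.uniform(-0.1, 0.1) for _ in range(D)]

def rnn_step(h, x, W, U):
    """One plain recurrent update: h' = tanh(W x + U h)."""
    return [math.tanh(a + b) for a, b in zip(matvec(W, x), matvec(U, h))]

def dpn_forward(item_ids, item_images):
    """Run the two chains over a session and fuse them with attention."""
    h_main = [0.0] * D  # main-preference chain state (driven by item IDs)
    h_vis = [0.0] * D   # visual-preference chain state (driven by image features)
    for i, img in zip(item_ids, item_images):
        h_main = rnn_step(h_main, E_id[i], W_m, U_m)
        h_vis = rnn_step(h_vis, matvec(W_img, img), W_v, U_v)
    # Softmax attention over the two preference representations.
    s = [sum(a * b for a, b in zip(w_att, h)) for h in (h_main, h_vis)]
    m = max(s)
    e = [math.exp(x - m) for x in s]
    alpha = [x / sum(e) for x in e]
    user = [alpha[0] * a + alpha[1] * b for a, b in zip(h_main, h_vis)]
    # Score every candidate next item by inner product with its ID embedding.
    return [sum(a * b for a, b in zip(emb, user)) for emb in E_id]

session = [1, 4, 7]  # observed item IDs in the session
images = [[random.uniform(-1, 1) for _ in range(IMG_DIM)] for _ in session]
print(len(dpn_forward(session, images)))  # 10 — one score per candidate item
```

In a real system the image features would come from a pretrained vision encoder and the recurrent cells would likely be gated units, but the control flow — two parallel chains, attention-weighted fusion, then scoring against all item embeddings — mirrors the description above.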