2018
DOI: 10.1002/cpe.4507
An efficient method for autoencoder‐based collaborative filtering

Abstract: Collaborative filtering (CF) is a widely used technique in recommender systems. With the rapid development of deep learning, neural network‐based CF models have gained great attention in recent years, especially the autoencoder‐based CF model. Although the autoencoder‐based CF model is faster than some existing neural network‐based models (eg, Deep Restricted Boltzmann Machine‐based CF), it is still impractical for handling extremely large‐scale data. In this paper, we practically verify that most non‐ze…
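The abstract describes an autoencoder that reconstructs each user's rating vector while learning only from the observed entries. As a rough illustration (not the authors' implementation), the following is a minimal AutoRec-style sketch in NumPy; the toy rating matrix, hidden size, learning rate, and masked squared-error loss are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix; 0 denotes an unobserved rating (assumption).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)
mask = (R > 0).astype(float)              # 1 where a rating is observed

n_items, hidden = R.shape[1], 3           # hidden size is an assumption
W1 = rng.normal(0.0, 0.1, (n_items, hidden))
W2 = rng.normal(0.0, 0.1, (hidden, n_items))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, lam = 0.05, 0.01                      # learning rate and L2 weight (assumptions)
for _ in range(2000):
    H = sigmoid(R @ W1)                   # encode each user's rating vector
    R_hat = H @ W2                        # decode to a reconstructed rating vector
    err = (R_hat - R) * mask              # loss is computed on observed entries only
    grad_W2 = H.T @ err + lam * W2
    grad_H = (err @ W2.T) * H * (1.0 - H) # backprop through the sigmoid encoder
    grad_W1 = R.T @ grad_H + lam * W1
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

# Entries of R_hat at masked positions act as predicted ratings.
print(np.round(sigmoid(R @ W1) @ W2, 2))
```

The masking step is the key design choice: zero entries denote unknown ratings, so they contribute nothing to the gradient, and the decoder's output at those positions serves as the predicted score.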

Year Published: 2019–2023

Cited by 8 publications (6 citation statements)
References: 18 publications
“…Moreover, some methods use both types. The use of implicit data requires some particular processes and methods, such as the use of data mining and machine learning techniques [67]. Therefore, in this research, this metric is used to classify these types of algorithms.…”
Section: Metrics Definitions (mentioning)
confidence: 99%
“…More recently, it is often cited, especially in the neural networks literature, that adaptive gradient computations, which borrow ideas from second‐order methods, and using momentum terms can bring superior convergence results. Since some neural networks can be seen as universal approximators and many CF models can be expressed as neural networks, such computations can also be useful in the general LtR setting for CF. Example applications are also supportive of this.…”
Section: Extensions To Pltr Algorithms (mentioning)
confidence: 99%
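The statement above attributes faster convergence to adaptive gradient computations combined with momentum terms. Purely as an illustration of that kind of update (not code from either cited work), here is an Adam-style step applied to a toy quadratic objective; the objective, hyperparameters, and iteration count are assumptions.

```python
import numpy as np

def grad(theta):
    # Gradient of a toy objective f(theta) = 0.5 * ||theta - target||^2 (assumption).
    return theta - np.array([1.0, -2.0])

theta = np.zeros(2)
m = np.zeros_like(theta)              # first-moment (momentum) estimate
v = np.zeros_like(theta)              # second-moment (adaptive scaling) estimate
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 501):
    g = grad(theta)
    m = beta1 * m + (1 - beta1) * g               # momentum term
    v = beta2 * v + (1 - beta2) * g * g           # per-coordinate curvature proxy
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive, momentum-smoothed step

print(np.round(theta, 3))                         # approaches [1.0, -2.0]
```

The first-moment estimate m plays the role of the momentum term, while the per-coordinate second-moment estimate v is the part that borrows second-order intuition by rescaling each coordinate's step size.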
“…For instance, supervised learning techniques, including neural networks (NN) [9][10][11][12][13][14][15][16][17][18][19], convolutional neural networks (CNN) [20][21][22][23][24][25][26], and recurrent neural networks (RNN) [27][28][29][30][31][32], can be adopted for prediction applications and classification applications in the electronics industries. Unsupervised learning techniques, including restricted Boltzmann machine (RBM) [33,34], deep belief networks (DBN) [35], deep Boltzmann machine (DBM) [36], auto-encoders (AE) [37,38], and denoising auto-encoders (DAE) [39], can be used for denoising and generalization. Furthermore, reinforcement learning techniques, including generative adversarial networks (GANs) [40,41] and deep Q-networks (DQNs) [42], can be used to obtain generative networks and discriminative networks for contesting and optimizing in a zero-sum game framework.…”
Section: Introduction (mentioning)
confidence: 99%
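Among the unsupervised techniques listed, denoising auto-encoders are trained to reconstruct a clean input from a corrupted copy. The snippet below is a minimal, self-contained sketch of that training signal; the corruption rate, layer sizes, and toy data are assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 8))                     # toy clean inputs (assumption)
W_enc = rng.normal(0.0, 0.1, (8, 4))
W_dec = rng.normal(0.0, 0.1, (4, 8))
lr = 0.1                                         # learning rate (assumption)

for _ in range(1000):
    noise_mask = rng.random(X.shape) > 0.3       # randomly drop ~30% of input entries
    X_noisy = X * noise_mask                     # corrupted copy fed to the encoder
    H = np.tanh(X_noisy @ W_enc)                 # encode the noisy input
    X_hat = H @ W_dec                            # decode
    err = (X_hat - X) / len(X)                   # target is the *clean* input
    grad_dec = H.T @ err
    grad_enc = X_noisy.T @ ((err @ W_dec.T) * (1.0 - H ** 2))
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(float(np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2)))  # reconstruction error
```

Feeding the corrupted copy while penalizing reconstruction of the clean input is what gives the denoising auto-encoder the denoising and generalization role the statement describes.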