2015
DOI: 10.1002/cpe.3722

GPUSGD: A GPU‐accelerated stochastic gradient descent algorithm for matrix factorization

Abstract: Matrix factorization is one of the leading techniques for many applications, such as social network-based recommendation systems. To date, many parallel stochastic gradient descent (SGD) methods have been proposed to address the matrix factorization problem on shared-memory (multi-core) systems and on distributed systems. However, these methods cannot be accelerated significantly on graphics processing units (GPUs) because the serious over-writing problem and thread divergence may occur. The fundamental reason…
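To make the over-writing problem concrete, below is a minimal sketch of a naive per-rating SGD kernel for matrix factorization. This is an illustrative assumption of ours, not the paper's GPUSGD kernel: each thread updates the feature rows of one rating, so two threads whose ratings share a user or an item write the same rows of P or Q concurrently. The names sgdNaiveKernel, P, Q, K, lr, and lambda are placeholders.

```cuda
// Minimal sketch of a naive per-rating SGD kernel (illustrative, NOT the
// paper's GPUSGD kernel). P is |users| x K, Q is |items| x K, row-major.
// Two threads whose ratings share a user or an item write the same rows
// of P or Q concurrently -- the "over-writing" problem the paper targets.
__global__ void sgdNaiveKernel(const int *userIdx, const int *itemIdx,
                               const float *rating, int numRatings,
                               float *P, float *Q, int K,
                               float lr, float lambda) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numRatings) return;

    float *p = P + (size_t)userIdx[t] * K;  // user feature row
    float *q = Q + (size_t)itemIdx[t] * K;  // item feature row

    // Prediction error e = r_ui - p . q
    float e = rating[t];
    for (int k = 0; k < K; ++k) e -= p[k] * q[k];

    // Standard SGD update with L2 regularization; these unsynchronized
    // writes are where concurrent updates to a shared row collide.
    for (int k = 0; k < K; ++k) {
        float pk = p[k], qk = q[k];
        p[k] = pk + lr * (e * qk - lambda * pk);
        q[k] = qk + lr * (e * pk - lambda * qk);
    }
}
```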

Cited by 13 publications (6 citation statements)
References 18 publications
“…Compared with the methods mentioned in the previous section, our method is the fastest, as it does not require any time for shuffling the dataset and/or sorting portions of the dataset before processing it. BSGD processed ratings from the first 50,000 users on the first 2,000 movies of the Netflix data [19,20]. We used the same dataset to compare the performance of ESGD with BSGD.…”
Section: Experiments and Results
confidence: 99%
“…First, pipelining the data transfer from/to the GPU and kernel execution. Second, using GPU clustering to parallelize the execution [20].…”
Section: Discussion
confidence: 99%
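To illustrate the first suggestion, here is a minimal CUDA-streams sketch, again an assumption of ours rather than code from the cited paper: while one stream copies the next chunk of ratings to the device, the other stream runs SGD on the chunk that has already arrived. It reuses the illustrative sgdNaiveKernel from above; the double-buffer layout and chunk size are placeholders, and the host arrays are assumed to be page-locked so that cudaMemcpyAsync can overlap with kernel execution.

```cuda
// Illustrative host-side pipelining with two CUDA streams (not from the
// cited paper). Work within one stream is serialized, so a kernel waits
// for its own chunk's copies, but copies on the other stream overlap with
// it. Note this sketch only overlaps transfer and compute; it inherits
// the naive kernel's write races across concurrently processed chunks.
void pipelinedSgd(const int *hUser, const int *hItem, const float *hRating,
                  int numRatings, int chunk,
                  int *dUser[2], int *dItem[2], float *dRating[2],
                  float *dP, float *dQ, int K, float lr, float lambda) {
    cudaStream_t stream[2];
    cudaStreamCreate(&stream[0]);
    cudaStreamCreate(&stream[1]);

    for (int off = 0, buf = 0; off < numRatings; off += chunk, buf ^= 1) {
        int n = (numRatings - off < chunk) ? numRatings - off : chunk;
        // Async copies need page-locked (cudaHostAlloc'd) host buffers.
        cudaMemcpyAsync(dUser[buf],   hUser + off,   n * sizeof(int),
                        cudaMemcpyHostToDevice, stream[buf]);
        cudaMemcpyAsync(dItem[buf],   hItem + off,   n * sizeof(int),
                        cudaMemcpyHostToDevice, stream[buf]);
        cudaMemcpyAsync(dRating[buf], hRating + off, n * sizeof(float),
                        cudaMemcpyHostToDevice, stream[buf]);
        int threads = 256, blocks = (n + threads - 1) / threads;
        sgdNaiveKernel<<<blocks, threads, 0, stream[buf]>>>(
            dUser[buf], dItem[buf], dRating[buf], n, dP, dQ, K, lr, lambda);
    }
    cudaStreamSynchronize(stream[0]);
    cudaStreamSynchronize(stream[1]);
    cudaStreamDestroy(stream[0]);
    cudaStreamDestroy(stream[1]);
}
```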
“…(b) Decompose M into U and I, where U is the user feature matrix and I is the item feature matrix. (c) According to (11), iteratively update U and I; the optimal value is reached once the stopping condition is met.…”
Section: An Improved FunkSVD Algorithm
confidence: 99%
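Equation (11) of the citing paper is not reproduced in this report. For orientation, the standard FunkSVD stochastic update, which such update rules typically take the form of, is shown below with learning rate \(\eta\) and regularization weight \(\lambda\); treating it as the citing paper's exact equation would be an assumption.

```latex
% Standard FunkSVD SGD update (a common form; the citing paper's Eq. (11)
% is not reproduced here). e_{ui} is the prediction error for entry m_{ui}
% of M, \eta the learning rate, \lambda the regularization weight.
\begin{align}
  e_{ui} &= m_{ui} - U_u I_i^{\top},\\
  U_u &\leftarrow U_u + \eta\,(e_{ui}\,I_i - \lambda\,U_u),\\
  I_i &\leftarrow I_i + \eta\,(e_{ui}\,U_u - \lambda\,I_i).
\end{align}
```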
“…Kang [10] runs the non-negative matrix factorization algorithm on the GPU platform, which accelerates the algorithm and improves its effectiveness. Jin [11] proposed an efficient GPU algorithm based on the stochastic gradient descent (SGD) method to solve the matrix factorization problem and improve the efficiency of the algorithm.…”
Section: Introduction
confidence: 99%
“…However, these methods cannot be accelerated significantly on graphics processing unit (GPU) systems because the serious over‐writing problem and thread divergence may occur. The fourth paper, ‘GPUSGD: a GPU‐accelerated stochastic gradient descent algorithm for matrix factorization’ , by Jing Jin, Siyan Lai, Su Hu, Jing Lin, and Xiaola Lin, proposes an efficient GPU algorithm, named GPUSGD, to solve the matrix factorization problem based on SGD method. The proposed GPUSGD not only can handle the over‐writing problem but also can avoid the performance loss caused by the thread divergence.…”
Section: Themes of This Special Issue
confidence: 99%
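How GPUSGD itself handles the over-writing problem is not detailed in this report. As a generic illustration of scheduling conflict-free work, the sketch below greedily packs ratings into batches in which no two ratings share a user or an item, so a per-rating kernel can process one batch with no two threads touching the same feature row. This greedy batching is our assumption for illustration only, not the paper's actual scheme.

```cuda
#include <algorithm>
#include <vector>

// Illustrative host-side batching (our sketch, NOT GPUSGD's scheme):
// place each rating in the first batch after every batch that already
// uses its user or its item. Within one batch no two ratings share a
// user or an item, so a per-rating SGD kernel (e.g. sgdNaiveKernel,
// launched once per batch) runs without conflicting row updates.
std::vector<std::vector<int>> conflictFreeBatches(
        const std::vector<int> &userIdx, const std::vector<int> &itemIdx,
        int numUsers, int numItems) {
    std::vector<int> lastUser(numUsers, -1);  // last batch touching user
    std::vector<int> lastItem(numItems, -1);  // last batch touching item
    std::vector<std::vector<int>> batches;
    for (int r = 0; r < (int)userIdx.size(); ++r) {
        int b = std::max(lastUser[userIdx[r]], lastItem[itemIdx[r]]) + 1;
        if (b == (int)batches.size()) batches.emplace_back();
        batches[b].push_back(r);  // r is a rating index
        lastUser[userIdx[r]] = b;
        lastItem[itemIdx[r]] = b;
    }
    return batches;
}
```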