2021
DOI: 10.1109/tc.2020.2997051
BaPa: A Novel Approach of Improving Load Balance in Parallel Matrix Factorization for Recommender Systems

Cited by 6 publications (2 citation statements) · References 29 publications
“…DSGD, DSGD++ and NOMAD have the same total communication volume during an SGD epoch per processor, which equals F × M × K as discussed in Section 3.1. The number of messages sent per processor during an … [figure: Amz vs. Books; legend: DSGD, P2P, H&C; panel (a) F = 16] [15] proposed a novel framework, BaPa, for improving the nonzero load balance of DSGD through a novel algorithm for balancing per-processor and per-epoch ratings. Their BaPa-based DSGD shows a significant runtime improvement on a small number of processors (< 16).…”
Section: Related Work
confidence: 99%
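The nonzero load-balance goal the citation describes can be illustrated with a toy greedy partitioner: rows of the rating matrix are assigned to processors so that each processor handles a roughly equal number of nonzeros (ratings). This is a minimal sketch under assumed names; the greedy heuristic is illustrative only and is not BaPa's actual algorithm, which balances per-processor and per-epoch ratings within the DSGD block structure.

```python
# Illustrative sketch (not BaPa itself): greedy balancing of per-row
# nonzero counts across processors, the kind of load balance that
# parallel matrix-factorization SGD (DSGD and variants) benefits from.
import heapq

def balance_rows(nnz_per_row, num_procs):
    """Assign rows to processors so nonzero counts are roughly equal."""
    # Min-heap of (current load, processor id); place the largest rows first.
    heap = [(0, p) for p in range(num_procs)]
    heapq.heapify(heap)
    assignment = {}
    for row in sorted(range(len(nnz_per_row)),
                      key=lambda r: -nnz_per_row[r]):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assignment[row] = p
        heapq.heappush(heap, (load + nnz_per_row[row], p))
    return assignment

nnz = [50, 30, 20, 20, 10, 10]   # toy nonzero counts per row
assign = balance_rows(nnz, 2)
loads = [sum(nnz[r] for r, p in assign.items() if p == q)
         for q in range(2)]       # per-processor nonzero totals
```

On this toy input the greedy rule splits the 140 nonzeros evenly (70 per processor); skewed real rating matrices are exactly where such balancing pays off in per-epoch SGD runtime.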
“…In addition, distributed learning (DistL) also covers approaches that focus on the efficient use of computational resources in the presence of big data or large-scale modeling. Recently, some studies have focused on developing efficient enablers for distributed learning, e.g., matrix factorization techniques and the distributed alternating direction method of multipliers [2], [3]. Some of those works shed light on privacy concerns [4].…”
Section: Introduction
confidence: 99%