Proceedings of the 2022 International Conference on Management of Data
DOI: 10.1145/3514221.3517860
NuPS: A Parameter Server for Machine Learning with Non-Uniform Parameter Access

Cited by 11 publications (25 citation statements: 1 supporting, 24 mentioning, 0 contrasting)
References 43 publications
“…This approach is popular for models with dense parameter access [28,31,45]. For sparse workloads, however, it is inefficient because it overcommunicates: it constantly synchronizes updates for all parameters to all nodes, although at each point in time, each node accesses only a small subset of these parameters [41]. A classic PS partitions the parameters among the nodes, and provides global reads and writes to these parameters by transparently communicating with the node that holds the accessed parameter.…”
Section: Model Quality (mentioning)
confidence: 99%
“…In contrast to static full replication, a classic PS uses network bandwidth only when parameters are actually accessed. However, a classic PS is often inefficient due to access latency [41,42]. Figure 1 depicts the performance of both approaches for a task of training large-scale knowledge graph embeddings.…”
Section: Model Quality (mentioning)
confidence: 99%