2023 IEEE 30th International Conference on High Performance Computing, Data, and Analytics (HiPC)
DOI: 10.1109/hipc58850.2023.00042

Optimizing the Training of Co-Located Deep Learning Models Using Cache-Aware Staggering

Kevin Assogba,
Bogdan Nicolae,
M. Mustafa Rafique

Abstract: Despite significant advances, training deep learning models remains a time-consuming and resource-intensive task. One of the key challenges in this context is the ingestion of the training data, which involves non-trivial overheads: reading the training data from a remote repository, applying augmentations and transformations, shuffling the training samples, and assembling them into mini-batches. Despite the introduction of abstractions such as data pipelines that aim to hide such overheads asynchronously, it is often t…
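The ingestion steps the abstract enumerates can be sketched as a minimal pipeline. This is an illustrative stand-in only, not the paper's implementation or cache-aware staggering technique: the names `load_samples` and `augment` are hypothetical placeholders for the remote read and transformation stages.

```python
import random

def load_samples(n):
    # Stand-in for reading training data from a remote repository.
    return list(range(n))

def augment(sample):
    # Stand-in for per-sample augmentations/transformations.
    return sample * 2

def minibatches(samples, batch_size, seed=0):
    # Apply transformations, shuffle the samples, and assemble
    # them into mini-batches, mirroring the steps in the abstract.
    rng = random.Random(seed)
    transformed = [augment(s) for s in samples]
    rng.shuffle(transformed)
    for i in range(0, len(transformed), batch_size):
        yield transformed[i:i + batch_size]

batches = list(minibatches(load_samples(10), batch_size=4))
print(len(batches))  # 3 mini-batches (4 + 4 + 2 samples)
```

In practice, frameworks overlap these stages with computation (e.g., asynchronous prefetching in data-pipeline abstractions), which is the overhead-hiding mechanism the abstract refers to.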

Cited by 0 publications
References 36 publications