2015 IEEE International Symposium on Multimedia (ISM) 2015
DOI: 10.1109/ism.2015.119
Efficient Multi-training Framework of Image Deep Learning on GPU Cluster

Cited by 3 publications (1 citation statement)
References 13 publications
“…However, the training procedures of deep belief networks are highly serial and dependent, which makes them difficult to parallelize. Chen et al. developed a pipelining system for image deep learning on a GPU cluster to handle the heavy workload of the training procedure. They organized the training of multiple deep learning models in parallel, where each stage of the pipeline is managed by a particular GPU holding a partition of the training data.…”
Section: Data Mining Tasks and Techniques
confidence: 99%
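The pipelining scheme described in the excerpt can be illustrated with a small scheduling sketch. This is not the authors' code: the function below is a hypothetical model of the idea that each pipeline stage is owned by one GPU (here just a stage index) and that successive models enter the pipeline one step apart, so every GPU stays busy with a different model once the pipeline is full.

```python
# Illustrative sketch (assumed, not from the paper): a pipeline schedule
# for training multiple models across GPU-owned stages.
from collections import defaultdict

def pipeline_schedule(num_models, num_stages):
    """Map each time step to the (model, stage) pairs that run concurrently.

    Model m enters the pipeline at step m, so model m occupies stage s
    at step m + s. At any step, each stage (i.e. each GPU) processes at
    most one model, which is what keeps the cluster fully utilized.
    """
    schedule = defaultdict(list)
    for m in range(num_models):
        for s in range(num_stages):
            schedule[m + s].append((m, s))
    return dict(schedule)

# Example: 3 models flowing through a 4-stage (4-GPU) pipeline.
sched = pipeline_schedule(num_models=3, num_stages=4)
for t in sorted(sched):
    print(t, sorted(sched[t]))
```

With 3 models and 4 stages the pipeline drains after 3 + 4 - 1 = 6 steps, and in the middle steps all models are in flight on distinct GPUs at once, matching the "multi-training" idea of overlapping several models' training rather than running them back to back.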