2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID)
DOI: 10.1109/ccgrid49817.2020.00-13

Performance Analysis of Distributed and Scalable Deep Learning

Cited by 5 publications (1 citation statement) · References 5 publications

“…We resort to distributed training of high-resolution images in the medical domain to handle larger effective batch sizes. While distributed GPU training has shown impressive improvements in [10], there exists a limited body of work on distributed training of deep networks in healthcare [11]. In [12], the authors introduced a scalable, intuitive deep learning toolkit called R2D2 for medical image segmentation, offering novel distributed versions of two well-known and widely used CNN segmentation architectures.…”
Section: Related Work
confidence: 99%