2022
DOI: 10.1109/tsc.2019.2928551
A Comparative Measurement Study of Deep Learning as a Service Framework

Abstract: Big data powered Deep Learning (DL) and its applications have blossomed in recent years, fueled by three technological trends: a large amount of digitized data openly accessible, a growing number of DL software frameworks in open source and commercial markets, and a selection of affordable parallel computing hardware devices. However, no single DL framework, to date, dominates in terms of performance and accuracy even for baseline classification tasks on standard datasets, making the selection of a DL framewor…

Cited by 36 publications (15 citation statements); references 18 publications.
“…This is with the exception of an ensemble of multiple networks, which often demonstrated superior results 24,36. Previous studies examining the accuracy of different computational frameworks at general image classification tasks also showed comparable performance 37,38. Although there have been no specific studies addressing the effect of compression of retinal images on DL algorithms' detection of DR, our study reinforces previous studies that have demonstrated the robustness of DL models to compression of general non-medical images up to a compression threshold 23.…”
Section: Discussion
confidence: 84%
“…), or the hyperparameter settings (e.g., λ, training epochs, and random seed for weight initialization), because different ways of generating denoisers can have different effects with respect to the manifold and the convergence of the DNN learning [21]–[23]. The third approach to generating different denoisers is to use different optimization objectives, such as changing the distance function d from a simple per-pixel loss to a perceptual loss [24], or using an advanced regularization such as a sparsity constraint [25].…”
Section: B. Strategic Teaming of Multiple DNN Denoisers
confidence: 99%
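The teaming idea in the statement above can be illustrated with a minimal sketch: several independently generated denoisers (toy stand-in functions here, not the cited paper's actual DNNs) are combined by averaging their outputs. All names (`team_denoisers`, `make_denoiser`) and the smoothing factors are hypothetical.

```python
import numpy as np

def team_denoisers(noisy, denoisers):
    """Combine a team of denoisers by averaging their outputs
    (a simple stand-in for strategic ensemble teaming)."""
    outputs = [d(noisy) for d in denoisers]
    return np.mean(outputs, axis=0)

def make_denoiser(strength):
    """Toy 'denoiser' that shrinks pixel values; varying `strength`
    mimics denoisers produced with different hyperparameters or
    optimization objectives."""
    def denoise(x):
        return x * (1.0 - strength)
    return denoise

# Three diverse toy denoisers, as in the quoted diversity strategies.
denoisers = [make_denoiser(s) for s in (0.1, 0.2, 0.3)]
noisy = np.array([1.0, -2.0, 3.0])
clean = team_denoisers(noisy, denoisers)
```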
“…First, we construct a set of base models as verifiers for examining and repairing the target model's prediction output, each trained on the same dataset for the same task as the target model. Several techniques can be used to produce the base candidate models, such as varying neural network structures [23], varying training hyperparameters, or performing data augmentation [29]. One can also use snapshot learning [22] to obtain a set of model verifiers efficiently in a single run of model training.…”
Section: A. Model Verification Ensemble Defense
confidence: 99%
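A minimal sketch of the verification-and-repair step described above, assuming a simple majority vote among base verifier models over class labels (the function name and the voting rule are illustrative assumptions, not the cited paper's exact procedure):

```python
import numpy as np

def verify_and_repair(target_pred, base_preds):
    """If the target model's predicted label disagrees with the majority
    label of the base verifier models, replace it with the majority label;
    otherwise keep the target prediction."""
    labels, counts = np.unique(base_preds, return_counts=True)
    majority = labels[np.argmax(counts)]
    return target_pred if target_pred == majority else majority

# Target model predicts class 3, but four of five verifiers say class 7,
# so the ensemble repairs the output to 7.
repaired = verify_and_repair(3, [7, 7, 3, 7, 7])
```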
“…), the prediction results of a deep model can be quite difficult to reproduce. To make things worse, implementations of deep models in different DL frameworks can also induce variations in accuracy, running time, etc. [17,18]. In other words, one prerequisite for impartial model comparison is to employ the same DL framework with comparable training settings and schemes.…”
Section: Introduction
confidence: 99%
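One of the comparable training settings the quoted passage alludes to is the random seed for weight initialization. A minimal sketch of seed-controlled initialization, using NumPy as a stand-in for a DL framework's initializer (the function name and shapes are hypothetical):

```python
import numpy as np

def init_weights(seed, shape=(3, 3)):
    """Seed-controlled weight initialization: the same seed yields
    identical weights, which is one prerequisite for reproducible,
    impartial model comparisons."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

w1 = init_weights(42)
w2 = init_weights(42)  # identical to w1: same seed
w3 = init_weights(43)  # differs: different seed
```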