2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase.2019.00080
An Empirical Study Towards Characterizing Deep Learning Development and Deployment Across Different Frameworks and Platforms

Abstract: Deep Learning (DL) has recently achieved tremendous success. A variety of DL frameworks and platforms play a key role to catalyze such progress. However, the differences in architecture designs and implementations of existing frameworks and platforms bring new challenges for DL software development and deployment. Till now, there is no study on how various mainstream frameworks and platforms influence both DL software development and deployment in practice. To fill this gap, we take the first step towards under…

Cited by 94 publications (75 citation statements)
References 37 publications
“…Some recent efforts have been made to debug DL models [31], and to study DL program bugs [60], library bugs [42] and DL software bugs across different frameworks and platforms [16]. The results of this paper provide a new angle to characterize DL model defects, which could be useful for other quality assurance activities besides testing.…”
Section: Related Work
confidence: 91%
“…For training the neural networks, we limit the vocabularies of the two sources to the top 50,000 tokens that are most frequently used in code changes and reviews. For implementation, we use PyTorch [37], an open-source deep learning framework, which is widely-used in previous research [38], [39]. We train our model in a server with one Tesla P40 GPU with 12GB memory.…”
Section: E. Model Training and Testing
confidence: 99%
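The vocabulary truncation described in this excerpt (keeping only the top 50,000 most frequent tokens) is a standard preprocessing step. A minimal sketch of the idea, with illustrative names that are not the cited paper's code:

```python
from collections import Counter

def build_vocab(token_stream, max_size=50_000):
    """Keep only the most frequent tokens; everything else maps to <unk>."""
    counts = Counter(token_stream)
    # Reserve ids 0/1 for padding and unknown tokens.
    vocab = {"<pad>": 0, "<unk>": 1}
    for token, _ in counts.most_common(max_size):
        vocab[token] = len(vocab)
    return vocab

def encode(tokens, vocab):
    """Map tokens to integer ids, falling back to <unk> for rare tokens."""
    unk = vocab["<unk>"]
    return [vocab.get(t, unk) for t in tokens]

# Tiny demo with max_size=2: only "x" and "if" survive; "y" becomes <unk>.
vocab = build_vocab(["if", "x", "x", "return", "x", "if"], max_size=2)
ids = encode(["x", "if", "y"], vocab)
print(ids)  # → [2, 3, 1]
```

In practice the same counting would run over the full training corpus of code changes and reviews before the two source vocabularies are frozen.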
“…As a result of the current inadequate support for the operators in TensorFlow Lite, the structures of currently applied deep neural networks are relatively simple. However, with a more complicated neural network, the quantization technique will definitely provide a performance boost during the prediction phase [46]. Counting the unzipping, analysis, and prediction time together, the total time is always acceptable for mobile users (i.e., less than 3 seconds on average, less than 1 second in the best case).…”
Section: Effectiveness Evaluation of Mobitive on Mobile Devices
confidence: 99%
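The quantization technique mentioned in this excerpt shrinks models by storing weights as 8-bit integers plus a scale factor. The sketch below shows symmetric per-tensor int8 quantization in NumPy; it is an illustration of the idea only, not TensorFlow Lite's actual scheme (which additionally uses per-axis scales and zero-points):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
max_err = np.abs(dequantize(q, scale) - w).max()
print(q.dtype, float(max_err))
```

Each weight drops from 4 bytes to 1, which is where the reported prediction-phase speedup on mobile hardware largely comes from: smaller tensors mean less memory traffic and cheaper integer arithmetic.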