2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse.2019.00107
CRADLE: Cross-Backend Validation to Detect and Localize Bugs in Deep Learning Libraries

Cited by 117 publications (55 citation statements)
References 46 publications
“…ADS relies on ML models that are trained using various deep learning and machine learning frameworks. Prior studies [13,28] found that different frameworks may result in a model with slightly different performance. As shown in Table 2, Apollo developers use ML models that are trained using various frameworks (i.e., CaffeNet, Paddle, PyTorch, and TensorRT).…”
Section: Discussion and Implication
Confidence: 99%
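The cross-framework discrepancy these citing papers describe can be illustrated with a small comparison script. The following is a minimal sketch, not CRADLE's actual implementation: it assumes the predictions of the same Keras model, run on the same inputs under two different backends, have been saved to the hypothetical files preds_tensorflow.npy and preds_cntk.npy, and it simply reports how far the two outputs diverge.

```python
# Minimal cross-backend output comparison (illustrative sketch only; the file
# names and tolerance are assumptions, and CRADLE itself uses more elaborate
# distance metrics and localizes inconsistencies to specific layers).
import numpy as np

preds_a = np.load("preds_tensorflow.npy")  # (n_samples, n_classes), hypothetical
preds_b = np.load("preds_cntk.npy")        # same model and inputs, other backend

max_abs_diff = float(np.max(np.abs(preds_a - preds_b)))
label_flip_rate = float(np.mean(preds_a.argmax(axis=1) != preds_b.argmax(axis=1)))

print(f"max absolute output difference: {max_abs_diff:.3e}")
print(f"top-1 label disagreement rate:  {label_flip_rate:.2%}")

# Small floating-point drift across backends is expected; large deviations or
# frequent label flips are the kind of cross-backend inconsistency that
# motivates bug detection in the underlying libraries.
```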
“…This poses great concern and calls for attention that the dependency libraries of an ML system could also be an important factor that impacts its quality. Some recent work [28] has started to effectively detect potential quality issues in the framework. When it comes to ADS, Joshua et al. [18] perform an early study to investigate the common bug symptoms and causes.…”
Section: Related Work
Confidence: 99%
“…Some recent efforts have been made to debug DL models [31], and to study DL program bugs [60], library bugs [42] and DL software bugs across different frameworks and platforms [16]. The results of this paper provide a new angle to characterize DL model defects, which could be useful for other quality assurance activities besides testing.…”
Section: Related Work
Confidence: 91%
“…They play a more important role in ML development than in traditional software development. ML framework testing thus checks whether machine learning frameworks contain bugs that may lead to problems in the final system [48]. Bug Detection in Learning Program.…”
Section: Testing Components
Confidence: 99%
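As a concrete, hedged illustration of differential framework testing in this spirit, the sketch below computes the same 2D convolution with TensorFlow and PyTorch on identical random inputs and flags outputs that disagree beyond a small tolerance. The choice of operation and the tolerance threshold are assumptions for illustration, not part of the cited work.

```python
# Differential-testing sketch (illustration only): run the same convolution in
# TensorFlow and PyTorch and compare the outputs.
import numpy as np
import tensorflow as tf
import torch
import torch.nn.functional as F

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 8, 3)).astype(np.float32)  # NHWC input
w = rng.standard_normal((3, 3, 3, 4)).astype(np.float32)  # HWIO kernel

# TensorFlow: NHWC input, HWIO kernel
out_tf = tf.nn.conv2d(x, w, strides=1, padding="VALID").numpy()

# PyTorch expects NCHW input and OIHW kernel, so transpose accordingly
x_pt = torch.from_numpy(np.ascontiguousarray(x.transpose(0, 3, 1, 2)))
w_pt = torch.from_numpy(np.ascontiguousarray(w.transpose(3, 2, 0, 1)))
out_pt = F.conv2d(x_pt, w_pt).numpy().transpose(0, 2, 3, 1)  # back to NHWC

max_dev = float(np.max(np.abs(out_tf - out_pt)))
print(f"max absolute deviation: {max_dev:.3e}")
if max_dev > 1e-4:  # assumed tolerance for illustration
    print("possible cross-framework inconsistency worth investigating")
```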