2019 6th Swiss Conference on Data Science (SDS)
DOI: 10.1109/sds.2019.00-14
Improving Reproducible Deep Learning Workflows with DeepDIVA

Abstract: The field of deep learning is experiencing a trend towards producing reproducible research. Nevertheless, reproducing scientific results is still often a frustrating experience. This is especially true in the machine learning community, where it is considered acceptable to have black boxes in your experiments. We present DeepDIVA, a framework designed to facilitate easy experimentation and its reproduction. This framework allows researchers to share their experiments with others, while providing function…

Cited by 5 publications (5 citation statements). References 8 publications.
“…Software significantly simplifies creating and documenting training data set composition and size. Tools for computing and recording summary statistics and metadata are available (Alberti et al., 2019; Gundersen et al., 2018; Holland et al., 2018; Isdahl and Gundersen, 2019; Schelter et al., 2017; Wibisono et al., 2014), along with visualization tools (Beg et al., 2021; Souza et al., 2019).…”
Section: Results
confidence: 99%
“…Additionally, methods for preventing biases in the decisions of the AI can be documented (Arnold et al., 2019; Cobbe et al., 2021; Mitchell et al., 2019; Walsh et al., 2021). In addition, papers acknowledge that AI applications are maintained and changed and state that the version can be documented using the version number of the AI (Alberti et al., 2018; Alberti et al., 2019; Mitchell et al., 2019) or by making the code used for training the AI publicly accessible (Heil et al., 2021; Stodden and Miguez, 2013; Walsh et al., 2021; Wibisono et al., 2014). Additionally, software tools for automatically documenting the chosen parameters and decisions of the researchers and a discussion of possible alternate workflows have been proposed (Alberti et al., 2019; Beg et al., 2021; Mora-Cantallops et al., 2021; Schelter et al., 2017; Wang et al., 2021).…”
Section: Documenting the Design Decisions Made During Artificial Inte...
confidence: 99%
“…By making datasets, tasks, workflows, as well as results accessible to the public, OpenML [42] focused on the reproducibility of experiments. The same focus was taken by DeepDIVA [7] for Computer Vision (CV) as well as DeepZensols [31] for Natural Language Processing (NLP). They made DL experiments intuitive and fast to set up by providing different out-of-the-box experiments and visualizations.…”
Section: Related Work
confidence: 99%
“…For semantic segmentation, we used the deep learning framework DeepDIVA [14,15] in order to ensure reproducibility of our experiments. The network is trained from scratch for 300 epochs using a momentum of 0.9 and an initial learning rate of 0.001, with a policy of decaying it by a factor of 10 at epochs 100 and 200.…”
Section: Setup
confidence: 99%
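
The training recipe quoted above (momentum 0.9, initial learning rate 0.001, decayed by a factor of 10 at epochs 100 and 200) amounts to a step learning-rate schedule. A minimal sketch in plain Python; the function name `step_lr` and the driver loop are illustrative, not taken from DeepDIVA:

```python
def step_lr(epoch, base_lr=0.001, milestones=(100, 200), gamma=0.1):
    """Step schedule as described in the citation: start at base_lr
    and multiply by `gamma` at each milestone epoch already passed."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Hypothetical 300-epoch loop showing where the decays take effect.
for epoch in (0, 99, 100, 200, 299):
    print(f"epoch {epoch:3d}: lr = {step_lr(epoch):.6f}")
```

In a PyTorch setup this same policy is typically expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 200], gamma=0.1)`.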