2020
DOI: 10.1147/jrd.2019.2960225
Troubleshooting deep-learner training data problems using an evolutionary algorithm on Summit

Cited by 5 publications (3 citation statements)
References 21 publications
“…The Dask library was designed to enable parallel computing in Python by distributing scientific calculations such as linear algebra operations and singular-value decomposition [23,24]; it can also be used purely as a distributed workflow manager via Python subprocess calls. On Summit, Dask for the former use case was tested on small numbers of nodes [25], and deployed on larger numbers of nodes for parallel, distributed dataframe and database query processing tasks [19]; it was deployed as a workflow manager for large-scale evolutionary algorithms at the scale of 500 Summit nodes [26,27].…”
Section: Leadership-Scale HPC Workflows
confidence: 99%
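The workflow-manager pattern this excerpt describes (Dask futures fanning external commands out to workers via Python subprocess calls) can be sketched as follows. This is a minimal illustration, not the cited deployment: the scheduler address, the train.py script, and the command list are all assumptions.

    # Minimal sketch: Dask used purely as a distributed workflow manager,
    # fanning shell commands out to workers via subprocess calls.
    # The scheduler address and commands below are placeholders, not
    # taken from the cited work.
    import subprocess

    from dask.distributed import Client

    def run_command(cmd):
        """Run one external task on a worker; return its exit code and output."""
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.returncode, result.stdout

    if __name__ == "__main__":
        # Connect to an already-running Dask scheduler (e.g. one per batch job).
        client = Client("tcp://scheduler-address:8786")

        # Each entry is one independent task; Dask schedules them across
        # workers but never touches the tasks' data itself.
        commands = [["python", "train.py", f"--seed={s}"] for s in range(8)]
        futures = client.map(run_command, commands)

        for code, out in client.gather(futures):
            print(code, out[:80])

Because each task is an opaque shell command, the same scheduling machinery serves both roles the excerpt contrasts: distributed numerical kernels in the library's native use, and plain task farming in the workflow-manager use.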
“…In particular, asynchronous steady-state evolutionary algorithms (ASSEAs) can often reach near-perfect resource utilization throughout an experiment (see Figure 1), particularly when the algorithm's computational cost is dominated by fitness evaluation on the workers, as is often the case when, say, tuning the parameters or behaviours of an expensive simulation (D'Auria et al., 2020; Gunaratne & Garibay, 2020). Asynchronous EAs are growing in popularity, and have most recently been applied to a variety of computationally challenging problems such as deep neural network hyperparameter tuning (Jaderberg et al., 2017; Coletti et al., 2019), evolutionary reinforcement learning (Lee et al., 2020), and simulation problems in air traffic management (Pellegrini et al., 2020).…”
Section: Introduction
confidence: 99%
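A sketch of the asynchronous steady-state pattern this excerpt describes, assuming a toy fitness function and local process-pool workers rather than the cited papers' HPC setups: whenever any evaluation finishes, the offspring immediately competes for a population slot and a replacement candidate is dispatched, so no worker idles at a generation barrier. All names here are illustrative assumptions.

    # Hedged sketch of an asynchronous steady-state EA (ASSEA): no generation
    # barrier; each finished evaluation triggers one insertion and one new
    # dispatch, keeping every worker busy. Toy problem, not the cited method.
    import random
    from concurrent.futures import FIRST_COMPLETED, ProcessPoolExecutor, wait

    def fitness(genome):
        """Stand-in for an expensive evaluation such as a long simulation run."""
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome, sigma=0.1):
        return [g + random.gauss(0.0, sigma) for g in genome]

    def assea(pop_size=20, dim=5, workers=4, budget=200):
        population = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
        scored = []  # (fitness, genome) pairs: the steady-state population
        evaluated = 0
        with ProcessPoolExecutor(max_workers=workers) as pool:
            pending = {pool.submit(fitness, g): g for g in population}
            while evaluated < budget:
                # Block only until at least one evaluation has finished.
                done, _ = wait(pending, return_when=FIRST_COMPLETED)
                for fut in done:
                    genome = pending.pop(fut)
                    scored.append((fut.result(), genome))
                    scored.sort(reverse=True)
                    del scored[pop_size:]              # steady-state replacement
                    evaluated += 1
                    parent = random.choice(scored)[1]  # refill the worker at once
                    child = mutate(parent)
                    pending[pool.submit(fitness, child)] = child
        return scored[0]

    if __name__ == "__main__":
        best_fitness, best_genome = assea()
        print(best_fitness)

The contrast with a generational EA is the single wait(..., return_when=FIRST_COMPLETED) call: utilization stays high because idle capacity is refilled one evaluation at a time instead of once per generation.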
“…1), particularly when the algorithm's computational cost is dominated by fitness evaluation on the workers (as is often the case when, say, tuning the parameters or behaviors of an expensive simulation [7,11]). Asynchronous EAs are growing in popularity, and have most recently been applied to a variety of computationally challenging problems such as deep neural network hyperparameter tuning [5,13], evolutionary reinforcement learning [15], and simulation problems in air traffic management [19].…”
Section: Introduction
confidence: 99%