2016 IEEE International Conference on Big Data (Big Data) 2016
DOI: 10.1109/bigdata.2016.7840870
Making massive computational experiments painless

Cited by 43 publications (22 citation statements)
References 12 publications
“…It is possible that the emerging emphasis on open data will help us detect and characterize instances of nonergodicity broadly in the life sciences. For example, open-data efforts have led to increasing researcher interactions and open conversations about replicability and validity in comparative bioinformatics (40), breast cancer prediction (48), and massive computational experiments (49). At the very least, open-data access would allow researchers to examine the extent to which previously published results may evince varying degrees of nonergodicity.…”
Section: Discussion
confidence: 99%
“…It is entirely feasible to extend analyses on NeuroCAAS with more sophisticated methods of parallelization as offered by these tools on a per-analysis basis, as an immutable analysis environment contains all of the infrastructure available in typical computing environments, ranging from a workstation to a compute cluster. Lastly, other tools offer reproducible analyses to researchers as a service (Monajemi et al., 2016; Šimko et al., 2019; Brinckman et al., 2019; https://flywheel.io). NeuroCAAS is again complementary to these methods and services because it is designed explicitly for a heterogeneous community of neuroscientists, giving developers access to all the resources required to create powerful, general-purpose analyses, while simultaneously removing all major barriers of entry to these analyses for a diverse population of users.…”
Section: Discussion
confidence: 99%
“…For example, limitations on hardware capacity have real consequences for the performance of analysis algorithms (Radiuk, 2017), and can interfere with analysis performance in ways that preclude the most careful attempts to reproduce the appropriate infrastructure. Further, the difficulty of configuring analysis infrastructure drives yet more divergence (Demchenko et al., 2013; Monajemi et al., 2016), and can make errors extremely difficult to detect, let alone repair. This challenge persists even if the original developer provides clear and comprehensive instructions for use, which is rarely the case and an ongoing challenge across science (Zhao and Deek, 2005; Stodden et al., 2018; Raff, 2019).…”
Section: Introduction
confidence: 99%
“…The statistical performance assessment procedure (TruMAP) and its results represent an example of massive computational experiments used to quantify the effectiveness of a new algorithm. As Monajemi et al. (2016) discussed, with the advancement of computational power and tools, researchers are expected to present findings that involve such massive computations (on the order of one million CPU-hours) rather than deducing conclusions from a limited number of test cases.…”
Section: Discussion
confidence: 99%