2019
DOI: 10.1038/s41467-019-09406-4

Systematic benchmarking of omics computational tools

Abstract: Computational omics methods packaged as software have become essential to modern biological research. The increasing dependence of scientists on these powerful software tools creates a need for systematic assessment of these methods, known as benchmarking. Adopting a standardized benchmarking practice could help researchers who use omics data to better leverage recent technological innovations. Our review summarizes benchmarking practices from 25 recent studies and discusses the challenges, advantages, and lim…

Cited by 121 publications (124 citation statements)
References 62 publications
“…Differences between the reads and the reference genome were considered as errors. Since this approach is unable to differentiate between real errors and single-nucleotide variants, it oversimplifies the error correction problem and may bias benchmarking results 12 . In addition, error correction algorithms have undergone significant development since the prior benchmarking studies, and the performance of the newest methods has not yet been evaluated.…”
Section: Introduction
Mentioning confidence: 99%
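The naive approach criticized above can be sketched in a few lines. This is an illustrative toy, not any tool's actual implementation; the sequences and the ungapped-alignment assumption are hypothetical. It shows why the approach conflates sequencing errors with real variants: both surface as the same kind of mismatch.

```python
# Illustrative sketch of the naive error-counting approach: every
# read/reference mismatch is treated as a sequencing error, so a true
# single-nucleotide variant (SNV) is indistinguishable from an error.

def naive_error_count(read: str, reference: str, start: int) -> int:
    """Count positions where the read disagrees with the reference,
    assuming an ungapped alignment beginning at `start`."""
    ref_window = reference[start:start + len(read)]
    return sum(1 for r, g in zip(read, ref_window) if r != g)

reference = "ACGTACGTACGT"
read = "ACGAACGT"  # mismatch at offset 3: an error OR a genuine SNV
print(naive_error_count(read, reference, 0))  # -> 1
```

A benchmarking pipeline built on this metric would penalize an error corrector for correctly preserving real SNVs, which is exactly the bias the citing authors describe.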
“…There is now a growing body of evidence that systematic benchmarking studies stimulate research communities, help to set operating standards for evaluating computational models and methods, and lower the barriers for introducing new ideas in the field (36–38).…”
Section: Discussion
Mentioning confidence: 99%
“…Conversely, NeuSomatic, which uses a previously trained neural network for variant detection, worked quite well. An issue of general concern in benchmarking studies is whether results from simulated data generalize to empirical data sets [20]. Whenever ground truth information is required for a multitude of data sets, simulations are a useful tool.…”
Section: Discussion
Mentioning confidence: 99%
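The reason simulations are useful here is that the true variant set is known by construction, so evaluation reduces to set overlap. A minimal sketch, with hypothetical variant coordinates:

```python
# Minimal sketch of benchmarking a variant caller against simulated
# ground truth: precision and recall follow directly from the overlap
# between the called and the true variant sets.

def precision_recall(called: set, truth: set) -> tuple:
    tp = len(called & truth)  # true positives: calls matching the truth
    precision = tp / len(called) if called else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

truth_variants = {("chr1", 100), ("chr1", 250), ("chr2", 42)}
called_variants = {("chr1", 100), ("chr2", 42), ("chr2", 99)}

p, r = precision_recall(called_variants, truth_variants)
```

No such exact truth set exists for empirical data, which is why the question of whether simulation results generalize to real data sets recurs throughout benchmarking studies.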
“…To our knowledge, an independent assessment of variant callers using M-seq tumor data has never been carried out. Indeed, third-party benchmarking is important because authors evaluating their own methods against others may be prone to the "self-assessment trap" [20], i.e., implicit biases in the evaluation conditions. Besides, previous comparisons have explored a limited range of rather simple multiregional scenarios.…”
Section: Introduction
Mentioning confidence: 99%