2019
DOI: 10.1038/s41467-019-11146-4
Comprehensive evaluation and characterisation of short read general-purpose structural variant calling software

Abstract: In recent years, many software packages for identifying structural variants (SVs) using whole-genome sequencing data have been released. When published, a new method is commonly compared with those already available, but this tends to be selective and incomplete. The lack of comprehensive benchmarking of methods presents challenges for users in selecting methods and for developers in understanding algorithm behaviours and limitations. Here we report the comprehensive evaluation of 10 SV callers, selected follo…

Cited by 225 publications (314 citation statements)
References 74 publications
“…SV calling using SNP genotyping data is notoriously difficult and it has been repeatedly reported that this method can result in a high false positive rate. [6][7][8] Due to this factor, SVs require functional validation, which was not presented for the MIDN deletions described in the Obara and colleagues' studies. Therefore, the lack of validation of the reported SVs, supported by the lack of evidence of these events in both the gnomAD-SV data and our WGS analysis, suggests that the MIDN deletions reported require further study before they can be unequivocally associated with PD.…”
mentioning
confidence: 99%
“…Sensitivity and precision of SV callsets were evaluated based on two true-positive criteria: (1) the SV type reported for a candidate SV must match the simulated SV, and (2) the genomic position of the reported breakpoints must be within a pre-defined distance from the simulated SV. Unless otherwise stated, evaluation results presented in this study are based on the default breakpoint-resolution threshold of 200 bp, as used in similar studies [8,9].…”
Section: Methods
mentioning
confidence: 99%
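The two true-positive criteria quoted above can be sketched as a small matching routine. This is an illustrative sketch only, not the paper's actual evaluation code; the record layout (`type`, `start`, `end` dictionaries) and function names are assumptions.

```python
# Sketch of the evaluation criteria described in the excerpt:
# a call is a true positive if (1) its SV type matches the simulated SV
# and (2) both breakpoints lie within a distance threshold (default 200 bp).
# Record layout and names are illustrative assumptions.

THRESHOLD_BP = 200  # default breakpoint-resolution threshold from the text

def is_true_positive(call, truth, threshold=THRESHOLD_BP):
    """Check the two criteria: matching SV type and breakpoints
    within `threshold` bp of the simulated (truth) SV."""
    return (call["type"] == truth["type"]
            and abs(call["start"] - truth["start"]) <= threshold
            and abs(call["end"] - truth["end"]) <= threshold)

def evaluate(calls, truths, threshold=THRESHOLD_BP):
    """Compute (sensitivity, precision) of a callset against a truth set.
    Each truth SV may be matched by at most one call."""
    matched = set()
    tp = 0
    for call in calls:
        for i, truth in enumerate(truths):
            if i not in matched and is_true_positive(call, truth, threshold):
                matched.add(i)
                tp += 1
                break
    sensitivity = tp / len(truths) if truths else 0.0
    precision = tp / len(calls) if calls else 0.0
    return sensitivity, precision
```

Under this scheme a deletion called 100 bp away from a simulated deletion counts as a true positive, while a call of the wrong SV type at the exact simulated coordinates does not.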
“…Aiming to overcome inherent limitations and to take advantage of the different approaches, a number of SV callers have incorporated multiple methods; Delly [20] and Lumpy [21], for example, combine several detection signals. In general, SV callers leveraging multiple detection methods have the best balance between sensitivity and precision for the detection of germline SVs, though there are notable differences in their performance for different SV types and sizes [8,9]. For somatic SVs, the recent ICGC-TCGA DREAM Somatic Mutation Calling Challenge, which evaluated the performance of 13 SV callers, found the overall sensitivity and precision of somatic SV calling to be highly influenced by lower allelic fractions of subclonal variants, tumour sequencing depth and read-alignment quality at SV breakpoints [10].…”
Section: Structural Variant Detection Methods and Callers
mentioning
confidence: 99%
“…To fill this gap, we used a recently generated somatic SV truth set for the COLO829 tumor-normal cell line pair, built using a combination of Illumina, PacBio, Oxford Nanopore, and 10X Genomics sequencing, followed by targeted capture, PCR-based validations and manual curation (J.E.V., E.C., unpublished results). Based on benchmarking results [16,17], we selected the widely used structural variant caller Manta [5] as a well-performing comparison tool. To test sensitivity and reproducibility, we ran both tools on 3 independent sequencing runs of the COLO829T/COLO829BL tumor/normal cell line pairs and used the truth set to determine false positives and negatives.…”
Section: Somatic Structural Variation (GRIDSS)
mentioning
confidence: 99%