2020
DOI: 10.1101/2020.02.11.944223
Preprint
DeepNano-blitz: A Fast Base Caller for MinION Nanopore Sequencers

Abstract:

Motivation: Oxford Nanopore MinION is a portable DNA sequencer that is marketed as a device that can be deployed anywhere. Current base callers, however, require a powerful GPU to analyze data produced by MinION in real time, which hampers field applications.

Results: We have developed a fast base caller DeepNano-blitz that can analyze a stream from up to two MinION runs in real time…


Cited by 16 publications (26 citation statements)
References 9 publications
“…Our software can readily be adapted to work with the output of other neural network basecallers. Application to the recent DeepNano-blitz [11] showed a similar gain in accuracy from consensus decoding. We also applied our algorithm to the ONT basecaller Bonito [12], a research basecaller inspired by recent successes of purely convolutional neural networks in speech recognition, and compared results with Guppy, an earlier ONT basecaller which can make use of 1D².…”
Section: Main Text
confidence: 89%
“…We deployed a Gated Linear Unit (GLU) [30] and a fully-connected layer before and after the convolution module, respectively, and the kernel sizes are [3, 5, 7, 31×3] for the overall six encoder blocks.…”
Section: Methods
confidence: 99%
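The Gated Linear Unit mentioned in that excerpt halves the feature dimension: the input is split into two halves, and one half gates the other through a sigmoid. A minimal NumPy sketch of that formulation (the array shapes here are illustrative, not the cited model's configuration):

```python
import numpy as np

def glu(x, axis=-1):
    """Gated Linear Unit: split x into halves (a, b) along `axis`
    and return a * sigmoid(b), halving that dimension."""
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))

# Toy input: a batch of 2 feature vectors with 8 features each.
x = np.arange(16, dtype=float).reshape(2, 8)
y = glu(x)
print(y.shape)  # (2, 4) — the feature dimension is halved
```

The learnable part in practice is a linear layer producing the pre-split activations; the gating itself is the fixed elementwise operation shown here.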
“…Therefore, it is not trivial to capture the complicated correlations between each signal and its surroundings. Here, we employed a convolution-augmented transformer architecture instead of opting for the already adopted RNN [7, 16] or CNN [6] approaches. As shown in Fig. 1, the CATCaller model has two normal convolution layers and N encoder blocks followed by a fully-connected layer to produce the predicted probabilities of each base.…”
Section: Model Architecture
confidence: 99%