2019
DOI: 10.1038/s41591-019-0536-x
Author Correction: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography

Abstract: In the version of this article originally published, there was an error in the phrase "This dataset contained 1,739 cases (27 cancer-positives)" in the main text. The number 1,739 should have been 1,139. There was also an error in the Fig. 4c legend. In the phrase "comprising n = 1,739 cases", the number 1,739 again should have been 1,139. Additionally, in the Extended Data Fig. 5 legend, the phrase "AUC curve for the independent data test set with n = 1,739 cases" contained the same error. The number should h…

Cited by 34 publications (17 citation statements)
References: 0 publications
“…Second, we trained a CNN model with residual connections and 3D spatiotemporal convolutions across frames to predict ejection fraction. Unlike prior 3D CNN architectures for medical imaging machine learning, our approach integrates spatial as well as temporal information with temporal variation across frames as the third dimension in our network convolutions 25,31,32 . Spatiotemporal convolutions, which incorporate spatial information in two dimensions as well as temporal information in the third dimension has been previously used in non-medical video classification tasks 31,32 , however has not been previously attempted on medical imaging given the relative scarcity of video medical imaging datasets nor used for regression tasks instead of classification tasks.…”
Section: Results (citation type: mentioning; confidence: 99%)
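The excerpt above describes 3D spatiotemporal convolutions in which two kernel dimensions cover space and the third spans time across video frames. A minimal sketch of that idea (not the cited authors' implementation; all shapes and values are illustrative assumptions):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive valid-mode 3D convolution (cross-correlation) over a (T, H, W)
    volume: the first kernel axis slides over time, the other two over space."""
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                patch = volume[i:i + kt, j:j + kh, k:k + kw]
                out[i, j, k] = np.sum(patch * kernel)
    return out

rng = np.random.default_rng(0)
video = rng.standard_normal((8, 16, 16))   # 8 frames of 16x16 pixels
kernel = rng.standard_normal((3, 3, 3))    # 3 frames x 3x3 spatial window
features = conv3d_valid(video, kernel)
print(features.shape)  # (6, 14, 14): time and space both shrink by kernel size minus 1
```

In practice such layers are stacked (e.g. `torch.nn.Conv3d` in PyTorch) with residual connections, but the shape arithmetic above is the core of treating time as the third convolved dimension.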
“…Being the most common first-line cardiovascular imaging modality, there is great interest in using deep learning techniques to determine ejection fraction [22][23][24] . Limitations in human interpretation, including laborious manual segmentation and inability to perform beat-to-beat quantification may be overcome by sophisticated automated approaches 6,25,26 . Recent advances in deep learning suggest that it can accurately and reproducibly identify human-identifiable phenotypes as well as characteristics unrecognized by human experts 25,[27][28][29] .…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Unlike radiomics feature analysis scheme, DL based scheme use the convolutional neural network (CNN) to build an end-to-end classification model by learning a hierarchy of internal representations (15)(16)(17). Although DL scheme can improve the classification performance and reduce the workload of hand-craft feature engineering (i.e., tumor boundary delimitation), it needs to be trained with larger dataset than radiomics feature based scheme (18,19). However, under common medical diagnosis conditions, collecting, and building a large uniform image dataset is very difficult because of the inconformity of CT screening standard and lacking surgical pathological confirmed GGNs.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
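The excerpt above contrasts radiomics pipelines, which compute hand-crafted features from a delineated region of interest, with end-to-end CNNs that learn features from raw voxels. A minimal sketch of the hand-crafted side of that contrast (the function and values are illustrative assumptions, not any cited paper's feature set):

```python
import numpy as np

def radiomics_features(roi):
    """A few classic hand-crafted first-order intensity features from a 3D ROI:
    mean intensity, intensity spread, and dynamic range."""
    return np.array([roi.mean(), roi.std(), roi.max() - roi.min()])

rng = np.random.default_rng(1)
# Hypothetical 16x16x16 nodule ROI with Hounsfield-unit-like intensities.
roi = rng.normal(loc=-600.0, scale=120.0, size=(16, 16, 16))
feats = radiomics_features(roi)
print(feats.shape)  # (3,)
```

A radiomics model then classifies this small fixed-length vector, which is why it needs less training data than a CNN that must learn its representation from the voxels themselves; the trade-off is that the features depend on an accurate manual tumor delineation.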