2021
DOI: 10.1016/j.mri.2020.10.003
Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging

Cited by 32 publications (18 citation statements)
References 22 publications
“…Under different numbers of training cycles, the maximum Dice coefficient of the CNN-based algorithm reached 0.946, while that of the AAM algorithm was 0.843. This indicated that the study's algorithm had better segmentation performance, in line with the research results of Fujioka et al. (2021) [17].…”
Section: Discussion (supporting)
confidence: 89%
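The quoted comparison rests on the Dice coefficient, a standard overlap metric for segmentation masks. The following is a minimal illustrative sketch of how it is typically computed for binary masks; the array names, shapes, and example masks are assumptions for demonstration and are not taken from either study.

```python
# Minimal sketch of the Dice coefficient for binary segmentation masks
# (illustrative only; arrays below are hypothetical stand-ins).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Example: score a predicted mask against a reference, the way the quoted
# study compares CNN-based (0.946) and AAM (0.843) segmentations.
reference = np.zeros((64, 64), dtype=np.uint8)
reference[16:48, 16:48] = 1
predicted = np.roll(reference, 1, axis=0)  # nearly identical mask
print(dice_coefficient(predicted, reference))  # value close to 1.0
```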
“…This is called the black box problem. It will be necessary to develop and research deep learning systems that can both provide a diagnosis and clarify the reason for the same [64,65].…”
Section: Discussion (mentioning)
confidence: 99%
“…Research on image classification using DCNNs to solve specific needs has been prominent [13][14][15][16][17][18][19][20][21], as shown in Table 1. In Zhou et al.'s paper [15], a visual perception technology (VPT) framework based on deep learning was proposed, which relied on an image preprocessing (IP) scheme and the DCNN WR-IPDCNN.…”
Section: Related Work (mentioning)
confidence: 99%
“…The underlying algorithms in the proposed method were built upon the concept of the DCNN, where the execution time is much less than that of visual inspection, and the detection and classification process is expected to be significantly less error-prone than visual inspection. In Fujioka et al.'s paper [18], a DCNN was used to distinguish between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging (MRI), and the model showed comparable diagnostic performance. In Alencastre-Miranda et al.'s paper [19], computer vision and deep learning networks were used to select and plant healthy billets, which increased the plant population and yield per hectare of sugarcane planting.…”
Section: Related Work (mentioning)
confidence: 99%
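As context for the classification task described in the statement above, here is a hypothetical minimal sketch of a binary CNN classifier for 2D maximum intensity projection images (benign vs. malignant). The architecture, input size, and channel counts are illustrative assumptions only and do not reproduce the network used by Fujioka et al.

```python
# Hypothetical minimal CNN for benign/malignant classification of 2D MIP images.
# Architecture and input size are illustrative assumptions, not the cited model.
import torch
import torch.nn as nn

class MIPClassifier(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 64-dim descriptor
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = MIPClassifier()
mip_batch = torch.randn(4, 1, 224, 224)   # stand-in for preprocessed MIP images
logits = model(mip_batch)                  # shape: (4, 2)
probs = torch.softmax(logits, dim=1)       # benign / malignant probabilities
```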