2018
DOI: 10.1117/1.jmi.6.1.011007
Breast ultrasound lesions recognition: end-to-end deep learning approaches

Abstract: Multistage processing of automated breast ultrasound lesions recognition is dependent on the performance of prior stages. To improve the current state of the art, we propose the use of end-to-end deep learning approaches using fully convolutional networks (FCNs), namely FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s for semantic segmentation of breast lesions. We use pretrained models based on ImageNet and transfer learning to overcome the issue of data deficiency. We evaluate our results on two datasets, which con…
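The FCN variants named in the abstract differ in how far they upsample the coarse score map and which skip connections they fuse before reaching input resolution. A minimal numpy sketch of that fusion scheme, with hypothetical map sizes for a 256x256 input and nearest-neighbor upsampling standing in for the learned transposed convolution the actual networks use:

```python
import numpy as np

def upsample(x, factor):
    # Nearest-neighbor upsampling; real FCNs learn a
    # transposed-convolution ("deconvolution") filter instead.
    return np.kron(x, np.ones((factor, factor)))

# Hypothetical per-pixel lesion scores at three backbone depths
# (spatial sizes 1/32, 1/16, and 1/8 of a 256x256 input).
score32 = np.random.rand(8, 8)    # coarsest prediction
score16 = np.random.rand(16, 16)  # pool4 skip scores
score8 = np.random.rand(32, 32)   # pool3 skip scores

# FCN-32s: upsample the coarsest map straight to input resolution.
fcn32 = upsample(score32, 32)

# FCN-16s: fuse with the stride-16 skip, then upsample x16.
fcn16 = upsample(upsample(score32, 2) + score16, 16)

# FCN-8s: fuse the stride-8 skip as well, then upsample x8.
fcn8 = upsample(upsample(upsample(score32, 2) + score16, 2) + score8, 8)

print(fcn32.shape, fcn16.shape, fcn8.shape)  # all (256, 256)
```

The deeper skips recover finer boundary detail, which is why FCN-8s typically segments lesion edges more sharply than FCN-32s.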

Cited by 33 publications (27 citation statements)
References 43 publications
“…In [24] and [26], both the feature extraction (AUC = 0.849) and fine-tuning (AUC = 0.895) approaches were used, and the fine-tuning approach exhibited better performance. These results justify the fact that almost all of the previous studies on transfer learning applied to breast ultrasound [24][25][26][27][28][29] used fine-tuning to achieve superior performance (AUC = 0.895). However, in the performance analysis, the above conclusion does not provide sufficient insights into drawing a clear conclusion, because different studies used different methods (see Section 2.5) in terms of pre-processing, which highly affected performance; others even used different performance analysis metrics [23][24][25][26][27][28][29].…”
Section: Feature Extracting vs. Fine-Tuning (supporting)
confidence: 64%
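The distinction the citing authors draw can be sketched without any deep-learning framework: feature extraction freezes the pretrained backbone and trains only a newly attached head, while fine-tuning lets the optimizer update every layer. The layer names and the trainable-set mechanism below are purely illustrative, not taken from either paper:

```python
# Toy model: a pretrained network as an ordered list of layer names.
pretrained = ["conv1", "conv2", "conv3", "fc_imagenet"]

def build_transfer_model(layers, mode):
    """Swap the ImageNet head for a task head, then choose
    which layers the optimizer is allowed to update."""
    model = layers[:-1] + ["fc_breast_us"]  # replace the top layer
    if mode == "feature_extraction":
        trainable = {"fc_breast_us"}        # backbone stays frozen
    elif mode == "fine_tuning":
        trainable = set(model)              # every layer updates
    else:
        raise ValueError(mode)
    return model, trainable

model, t_fe = build_transfer_model(pretrained, "feature_extraction")
_, t_ft = build_transfer_model(pretrained, "fine_tuning")
print(sorted(t_fe))  # ['fc_breast_us']
print(len(t_ft))     # 4
```

Fine-tuning's larger trainable set is what lets it adapt low-level filters to ultrasound texture, consistent with the higher AUC the quoted studies report for it.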
“…For example, the last layer in a network that has been trained for classification would be highly specific to that classification task [49]. If the model was trained to classify tumors, one unit would respond only to the images of a specific tumor [23][24][25][26][27][28]. Transferring all layers except the top layer is the most common type of transfer learning [17][18][19][20].…”
Section: Advantages of Transfer Learning (mentioning)
confidence: 99%
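"Transferring all layers except the top layer" amounts to initializing the new network from the pretrained weights everywhere except the final, task-specific layer, which is re-initialized for the new label set. A schematic sketch, where the dict-of-arrays weight store and layer names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained weights keyed by layer name;
# "top" is the 1000-way ImageNet classifier.
pretrained = {
    "conv1": rng.standard_normal((16, 3)),
    "conv2": rng.standard_normal((32, 16)),
    "top": rng.standard_normal((1000, 32)),
}

def transfer_init(source, n_classes):
    """Copy every layer except the top; give the top a fresh
    randomly initialized weight sized for the new task."""
    new = {k: v.copy() for k, v in source.items() if k != "top"}
    new["top"] = rng.standard_normal((n_classes, 32))
    return new

model = transfer_init(pretrained, n_classes=2)  # e.g. benign vs. malignant
print(model["top"].shape)  # (2, 32)
```

Only the copied layers carry over the general features learned on ImageNet; the fresh top layer is what gets specialized to the new task.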