2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
DOI: 10.1109/embc.2019.8857218
Ultrasound segmentation using U-Net: learning from simulated data and testing on real data

Abstract: Segmentation of ultrasound images is an essential task in both diagnosis and image-guided interventions given the ease-of-use and low cost of this imaging modality. As manual segmentation is tedious and time consuming, a growing body of research has focused on the development of automatic segmentation algorithms. Deep learning algorithms have shown remarkable achievements in this regard; however, they need large training datasets. Unfortunately, preparing large labeled datasets in ultrasound images is prohibit…

Cited by 25 publications (17 citation statements).
References 12 publications.
“…The 2 × 2 convolution layer kernel was the same as that of the contracting path and upsampling layer. The U-net included 23 total convolutional layers, the last of which exhibited a 1 × 1 convolution kernel, and was trained over 200 epochs with a batch size of 16 [40,42]. A binary cross-entropy loss function was applied with a learning rate of 0.0001, an Adam optimizer, and a He-normal weight initializer function [40,48,49].…”
Section: Methods
confidence: 99%
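
The configuration quoted above pairs a binary cross-entropy loss with He-normal weight initialization. As a hedged illustration only (not the authors' code), the two building blocks and the reported hyperparameters can be sketched in NumPy; the function names, shapes, and the config dict layout are assumptions:

```python
import numpy as np

def he_normal(shape, fan_in, rng=None):
    """He-normal initialization: zero-mean Gaussian with std = sqrt(2 / fan_in)."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean per-pixel binary cross-entropy; predictions clipped for numerical safety."""
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))

# Hyperparameters as reported in the quoted Methods text
config = {
    "epochs": 200,
    "batch_size": 16,
    "learning_rate": 1e-4,
    "optimizer": "Adam",
    "loss": "binary_crossentropy",
    "kernel_initializer": "he_normal",
}
```

A perfect prediction drives the loss toward zero, while confident wrong predictions are penalized heavily, which is why this loss is standard for per-pixel binary segmentation masks.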
“…The U-net included 23 total convolutional layers, the last of which exhibited a 1 × 1 convolution kernel, and was trained over 200 epochs with a batch size of 16 [40,42]. A binary cross-entropy loss function was applied with a learning rate of 0.0001, an Adam optimizer, and a He-normal weight initializer function [40,48,49]. All experiments were performed on an Intel Core i5 (12G) computer with a single GeForce GTX 1080 Ti GPU.…”
Section: Methods
confidence: 99%
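
The final 1 × 1 convolution mentioned in the quote reduces the decoder's feature channels to a single-channel probability map, applying the same per-pixel linear map at every spatial location followed by a sigmoid. A minimal NumPy sketch, with shapes and names assumed for illustration:

```python
import numpy as np

def conv1x1_sigmoid(features, weights, bias):
    """1x1 convolution as a per-pixel linear map over channels, then sigmoid.

    features: (H, W, C) feature map; weights: (C, 1); bias: scalar.
    Returns an (H, W, 1) probability mask in (0, 1).
    """
    logits = np.tensordot(features, weights, axes=([-1], [0])) + bias
    return 1.0 / (1.0 + np.exp(-logits))

# Zero weights and bias give logits of 0, i.e. a uniform 0.5 mask
mask = conv1x1_sigmoid(np.zeros((4, 4, 8)), np.zeros((8, 1)), 0.0)
```

Because the kernel is 1 × 1, no spatial context is mixed at this stage; the layer only collapses channels, which is why it is a natural final layer before thresholding the mask.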