2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro 2010
DOI: 10.1109/isbi.2010.5490181
Robust left ventricle segmentation from ultrasound data using deep neural networks and efficient search methods

Abstract: The automatic segmentation of the left ventricle of the heart in ultrasound images has been a core research topic in medical image analysis. Most solutions are based on low-level segmentation methods, which use a prior model of the appearance of the left ventricle, but imaging conditions that violate the assumptions built into the prior can damage their performance. Recently, pattern recognition methods have become more robust to imaging conditions by automatically building an appearance model from traini…

Cited by 25 publications (26 citation statements) · References 17 publications
“…This complexity has been reduced in several ways, such as with the branch-and-bound framework [19], which allows a reduction to O(K^{R/2} + S), or with marginal space learning (MSL) [4], which partitions the search space into subspaces of increasing complexity, achieving a complexity reduction of O(K + (R − 1) × #scales × K_fine + S), where #scales denotes the number of scales and K_fine represents a reduced number of promising samples (note that in [1,3,4,20], #scales = 3 and K_fine = O(10^1)). Coarse-to-fine derivative-based search has also been proposed in [1,3,20], which uses a GA approach in the R-dimensional space [1,20], achieving a complexity of O(K + #scales × K_fine × R + S). The use of sparse manifolds in the rigid detection has been implemented in [3], achieving a complexity of O((…”
Section: Running-time Complexity Comparisons
confidence: 99%
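To make the O(K + #scales × K_fine × R + S) bound above concrete, here is a minimal 1-D sketch of the coarse-to-fine idea: one dense coarse pass, then a few cheap refinement passes around the current best candidate. The function and parameter names are illustrative assumptions, not the cited methods' actual implementations.

```python
def coarse_to_fine_search(score, low, high, coarse_k=100, fine_k=10, n_scales=3):
    """Toy 1-D coarse-to-fine maximisation: one coarse pass of `coarse_k`
    score evaluations, then `n_scales` refinement passes of only `fine_k`
    evaluations each, i.e. O(K + #scales * K_fine) calls per dimension
    instead of a single dense grid of O(K * #scales) samples."""
    def grid(a, b, n):
        step = (b - a) / (n - 1)
        return [a + i * step for i in range(n)]

    best = max(grid(low, high, coarse_k), key=score)   # coarse pass: K samples
    width = (high - low) / coarse_k                    # initial refinement window
    for _ in range(n_scales):                          # #scales cheap passes
        best = max(grid(best - width, best + width, fine_k), key=score)
        width /= fine_k                                # shrink window each scale
    return best

# toy usage: locate the maximum of a smooth unimodal score function
peak = coarse_to_fine_search(lambda x: -(x - 0.37) ** 2, 0.0, 1.0)
```

With the defaults this uses 100 + 3 × 10 score evaluations, rather than the thousands a uniformly fine grid of the same final resolution would need.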
“…Table I shows the general characteristics of the following methods: 1) bottom-up approaches [32], [33]; 2) active contours methods [7]; 3) active shape models (ASMs) [22]; 4) deformable templates [14]-[18], [36]; 5) active appearance models (AAMs) [3], [23], [25]; 6) level-set approaches [4]-[6], [8]-[13], [34], [35]; and 7) database-guided (DB-guided) segmentation [19]-[21], [24], [26], [27], [37]. In this table, five properties are used to define each method, where one mark indicates the presence of the property and a second symbol means that, although the property is present in the latest developments, it was not part of the original formulation.…”
Section: Literature Review
confidence: 99%
“…The smallest point-to-contour distance, given in Eq. (19), is the distance to the closest point (DCP). The average distance (AV) between the two contours is defined in Eq. (20). The HDF is defined as the maximum DCP between the two contours, as in the following: …”
Section: Error Measures
confidence: 99%
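The excerpt above names three standard contour error measures whose equations were lost in extraction. As a minimal sketch of the usual formulations (assuming contours given as sampled point lists; these are the conventional definitions, not necessarily the paper's exact Eqs. (19)-(20)):

```python
from math import dist

def dcp(point, contour):
    """Distance to the closest point (DCP): the smallest distance from
    `point` to any sampled point of `contour`."""
    return min(dist(point, q) for q in contour)

def average_distance(c1, c2):
    """AV: the mean of the DCPs from every point of c1 to contour c2."""
    return sum(dcp(p, c2) for p in c1) / len(c1)

def hausdorff(c1, c2):
    """HDF: the maximum DCP between the contours, taken symmetrically
    in both directions."""
    return max(max(dcp(p, c2) for p in c1),
               max(dcp(p, c1) for p in c2))

# toy usage: two horizontal segments one unit apart, so every DCP is 1
c1 = [(0.0, 0.0), (1.0, 0.0)]
c2 = [(0.0, 1.0), (1.0, 1.0)]
```

Unlike the symmetric Hausdorff distance, AV as written is directed (from c1 to c2); averaging both directions is a common variant.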