2020
DOI: 10.1109/tip.2019.2941265

Mumford–Shah Loss Functional for Image Segmentation With Deep Learning

Abstract: Recent state-of-the-art image segmentation algorithms are mostly based on deep neural networks, thanks to their high performance and fast computation time. However, these methods are usually trained in a supervised manner, which requires a large number of high-quality ground-truth segmentation masks. On the other hand, classical image segmentation approaches such as level-set methods are formulated in a self-supervised manner by minimizing energy functions such as the Mumford–Shah functional, so they are still useful…
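To make the idea concrete, below is a minimal PyTorch sketch of a Mumford–Shah-type segmentation loss of the kind the abstract describes. The function name, tensor shapes, and the regularization weight lam are illustrative assumptions rather than the paper's exact formulation: the data term fits a piecewise-constant intensity to each soft class mask, and the length term is a total-variation penalty on the softmax output.

```python
import torch

def mumford_shah_loss(image, softmax_probs, lam=1e-3, eps=1e-8):
    """Illustrative sketch of a Mumford–Shah-type segmentation loss.

    image:         (B, C, H, W) input images
    softmax_probs: (B, K, H, W) softmax class probabilities from the network
    lam:           weight of the contour-length (total-variation) term
    """
    K = softmax_probs.shape[1]

    # Piecewise-constant data term: for each class k, compute the
    # probability-weighted mean intensity c_k and penalize deviation from it.
    data_term = 0.0
    for k in range(K):
        p_k = softmax_probs[:, k:k + 1]                      # (B, 1, H, W)
        c_k = (image * p_k).sum(dim=(2, 3), keepdim=True) / (
            p_k.sum(dim=(2, 3), keepdim=True) + eps)         # (B, C, 1, 1)
        data_term = data_term + (((image - c_k) ** 2) * p_k).mean()

    # Length (regularization) term: total variation of the soft masks,
    # approximated with finite differences along each spatial axis.
    dy = (softmax_probs[:, :, 1:, :] - softmax_probs[:, :, :-1, :]).abs().mean()
    dx = (softmax_probs[:, :, :, 1:] - softmax_probs[:, :, :, :-1]).abs().mean()

    return data_term + lam * (dx + dy)
```

Because the loss depends only on the input image and the network's own predictions, it can be minimized without ground-truth masks, which is what makes it usable as a self-supervised or semi-supervised training signal.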

Cited by 104 publications (73 citation statements)
References 46 publications
“…We compare the boundary-aware prior-induced loss (L_CE + P_BA) with three state-of-the-art segmentation losses: CE (L_CE) alone, the boundary prior-induced loss (L_CE + P_BD) [5], and the AC prior-induced loss (L_CE + P_AC) [2,3]. Table 1 reports the regional accuracy for the four losses.…”
Section: Quantitative Evaluation
confidence: 99%
“…However, it requires additional computational cost. Recent works [2,3] avoid this computational cost by integrating a smoothness prior into the typical losses. Unfortunately, this prior smooths the prediction map everywhere, even across boundaries, resulting in a blank prediction map where small targets are smoothed out (Fig.…”
mentioning
confidence: 99%
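For context, the excerpt above refers to integrating a smoothness prior into a standard loss; a minimal sketch of that mechanism is shown below, assuming a cross-entropy base loss and a total-variation prior (the name ce_tv_loss, the tensor shapes, and tv_weight are hypothetical, not taken from the cited works). Because the prior penalizes variation uniformly over the prediction map, it also suppresses sharp transitions at true object boundaries, which is the failure mode the excerpt describes.

```python
import torch
import torch.nn.functional as F

def ce_tv_loss(logits, target, tv_weight=1e-2):
    """Cross-entropy plus a uniform total-variation smoothness prior (sketch).

    logits: (B, K, H, W) raw network outputs
    target: (B, H, W) integer class labels
    """
    ce = F.cross_entropy(logits, target)

    probs = torch.softmax(logits, dim=1)
    # The TV penalty acts everywhere, including across true object
    # boundaries, so strong smoothing can erase small targets.
    tv = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean() + \
         (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()

    return ce + tv_weight * tv
```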
“…To this end, it is difficult to train a 3D DCNN with limited training data and hardware resources. In addition, some studies have been carried out to improve the segmentation accuracy by adding loss function constraints, such as the boundary loss [20], the Hausdorff distance [21], the Mumford–Shah loss function [22], and the signed distance map [23]. These methods contribute to improving the tissue edge segmentation accuracy of 2D slices, but they can hardly account for the continuity between layers.…”
Section: Introduction
confidence: 99%