2020
DOI: 10.1109/access.2020.3020475

An Improved Dice Loss for Pneumothorax Segmentation by Mining the Information of Negative Areas

Abstract: The lesion regions of a medical image account for only a small part of the image, and a critical imbalance exists in the distribution of the positive and negative samples, which affects the segmentation performance of the lesion regions. Dice loss is beneficial for the image segmentation involving an extreme imbalance of the positive and negative samples but it ignores the background regions, which also contain a large amount of information. In this work, we propose an improved dice loss that can mine the info…
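The abstract's idea can be illustrated with a small sketch: a standard soft Dice loss extended with a symmetric term over the negative (background) regions. This is a hypothetical formulation for exposition only, assuming a simple weighted sum with weight `w`; the paper's actual loss may differ.

```python
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice coefficient between a probability map and a binary mask."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def dice_loss(pred, target):
    """Standard Dice loss: only foreground overlap contributes."""
    return 1.0 - soft_dice(pred, target)

def background_aware_dice_loss(pred, target, w=0.5):
    """Hypothetical 'improved' Dice loss: also scores the negative
    (background) regions by computing Dice on the complements."""
    fg = 1.0 - soft_dice(pred, target)
    bg = 1.0 - soft_dice(1.0 - pred, 1.0 - target)
    return w * fg + (1.0 - w) * bg
```

A perfect prediction drives both terms to zero; the background term gives the optimizer a direct signal on the negative areas that plain Dice loss lacks.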

Cited by 66 publications (30 citation statements)
References 29 publications
“…It should also be noted that none of these losses penalize the misclassifications of the background information, making it difficult to optimize them for accurate background predictions [ 26 ]. Moreover, these loss functions and their variants [ 21 , 22 , 23 ] have so far been applied to medical segmentation tasks only where the foreground/minority class is characterized by a small but compact RoI as opposed to PL detection tasks where the RoIs are thin and more widespread. Lastly, the class imbalance in PL datasets is more severe, i.e., 2:98 for the minority versus majority class, respectively, as compared to the medical datasets where the class imbalance levels are of the order of 20:80, approximately.…”
Section: Related Work and Theoretical Foundation
confidence: 99%
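The excerpt's claim that these losses do not reward correct background predictions can be seen directly: the Dice loss has no true-negative term, so padding an image with any number of correctly predicted background pixels leaves the loss unchanged. A minimal numpy demonstration (illustrative only):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

pred = np.array([1.0, 1.0, 0.0, 0.0])  # one true positive, one false positive
mask = np.array([1.0, 0.0, 0.0, 0.0])

# Append 100 correctly predicted background pixels (true negatives):
pred_big = np.concatenate([pred, np.zeros(100)])
mask_big = np.concatenate([mask, np.zeros(100)])

# The loss is identical: true negatives contribute nothing to it.
assert abs(dice_loss(pred, mask) - dice_loss(pred_big, mask_big)) < 1e-12
```

False positives and false negatives do enter the denominator, but correctly predicted background carries no gradient signal at all, which is why accurate background prediction is hard to optimize with these losses.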
“…These loss functions also introduce additional weighting parameters to weight the constituent losses and thus require tuning to achieve optimal performance. Furthermore, these losses have also been applied to medical segmentation tasks only [ 22 , 33 ] where the class imbalance levels are less severe and the RoIs are smaller and more compact than in the PL detection tasks. Apart from the tuning of weights, the learning rates (LRs) are also difficult to configure and optimize for the compound losses due to the varying nature of the constituent losses.…”
Section: Related Work and Theoretical Foundation
confidence: 99%
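The compound losses the excerpt describes can be sketched as a weighted sum of a region term and a pixel term; the weight `alpha` below is exactly the kind of extra parameter the excerpt says must be tuned. The function names and the Dice+BCE pairing are illustrative assumptions, not taken from any cited paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, with clipping for stability."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))

def compound_loss(pred, target, alpha=0.5):
    """alpha trades the region term against the pixel term and
    must be tuned per task, as the excerpt notes."""
    return alpha * dice_loss(pred, target) + (1.0 - alpha) * bce_loss(pred, target)
```

Because the two constituent losses have different scales and gradient behavior, both `alpha` and the learning rate interact, which is the tuning difficulty the excerpt points out.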
“…By compounding with CE loss, IoU-WCE loss is more stable than IoU loss, and the problem of non-convergence is successfully overcome. Finally, the poor performance of Dice loss may stem from its directly ignoring the background regions [51], which causes information loss during training.…”
Section: Experiments On α
confidence: 99%
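The IoU-WCE combination mentioned in the excerpt can be sketched as a soft IoU (Jaccard) loss plus a class-weighted cross-entropy term. The specific weighting scheme and parameter names here are assumptions for illustration; the cited work's exact formulation may differ.

```python
import numpy as np

def iou_loss(pred, target, eps=1e-6):
    """Soft IoU (Jaccard) loss over the foreground."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)

def weighted_ce(pred, target, pos_weight=10.0, eps=1e-7):
    """Cross-entropy with the rare positive class up-weighted."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(pos_weight * target * np.log(p)
                    + (1.0 - target) * np.log(1.0 - p))

def iou_wce_loss(pred, target, beta=1.0):
    """Compounding with (W)CE keeps per-pixel gradients informative even
    where the region term is flat, one way the stability gain can arise."""
    return iou_loss(pred, target) + beta * weighted_ce(pred, target)
```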
“…Because we planned to create an ensemble of different models, we trained several models based on U-Net with an EfficientNet backbone. Regarding the loss function, the common choices are based on region characteristics, such as intersection over union, Tversky loss, or Sorensen-Dice loss (Wang et al 2020), or on pixel characteristics, like binary cross-entropy (BCE) or focal loss (Lin et al 2017b). Each of them has pros and cons.…”
Section: Architecture and Training Of The Derived Model
confidence: 99%
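Of the region-based options the excerpt lists, the Tversky loss makes the false-positive/false-negative trade-off explicit, and with alpha = beta = 0.5 it reduces to the Sorensen-Dice loss. A minimal sketch (the parameter values below are illustrative defaults, not from the cited works):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """alpha weights false positives, beta false negatives;
    alpha = beta = 0.5 recovers the Dice loss."""
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1.0 - target))
    fn = np.sum((1.0 - pred) * target)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def dice_loss(pred, target, eps=1e-6):
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```

Setting beta > alpha penalizes missed lesion pixels more than spurious ones, which is one way to bias a model toward recall on small foreground regions.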