2021 | Preprint | DOI: 10.48550/arxiv.2108.04016

Deep Learning methods for automatic evaluation of delayed enhancement-MRI. The results of the EMIDEC challenge

Abstract: A key factor for assessing the state of the heart after myocardial infarction (MI) is to measure whether the myocardium segment is viable after reperfusion or revascularization therapy. Delayed enhancement-MRI or DE-MRI, which is performed several minutes after injection of the contrast agent, provides high contrast between viable and nonviable myocardium and is therefore a method of choice to evaluate the extent of MI. To automatically assess myocardial status, the results of the EMIDEC challenge that focused…

Cited by 3 publications (3 citation statements) | References 41 publications
“…When evaluated on the EMIDEC training dataset with ground-truth labels, the Attri-VAE approach provided accuracy results (0.98) equivalent to the best challenge participants reporting their performance on the same dataset (1.0 (Lourenço et al., 2021), 0.95 (Shi et al., 2021), 0.94 (Ivantsits et al., 2021) and 0.90 (Sharma et al., 2021)). For the EMIDEC testing dataset (Lalande et al., 2021), the best participant method obtained a decreased accuracy (0.82; Lourenço et al., 2021; Girum et al., 2021), increasing to 0.92 for the challenge organizers (Shi et al., 2021). As for the ACDC dataset, which was tested as an external database (i.e., without considering it in training), classification accuracy was substantially reduced (0.59), being worse than the results reported by challenge participants (Bernard et al., 2018) (0.96) for classifying between the different pathologies (not only between healthy and myocardial infarction).…”
Section: Discussion | Citation type: mentioning | Confidence: 99%
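The accuracy values quoted in the statement above are plain binary classification accuracies (normal versus myocardial infarction cases). As a point of reference only, a minimal sketch of how such a score is computed is given below; the label arrays and the helper name are hypothetical placeholders, not data or code from the challenge.

```python
import numpy as np

def classification_accuracy(y_true, y_pred):
    """Fraction of cases whose predicted class matches the reference label."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

# Hypothetical labels: 0 = normal case, 1 = myocardial infarction case.
reference = [0, 1, 1, 0, 1, 1, 0, 1, 0, 1]
predicted = [0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
print(f"Accuracy: {classification_accuracy(reference, predicted):.2f}")  # 0.90
```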
“…Based on accurate and robust segmentation results, the morphological attributes of physiological and pathological structures can be quantitatively analyzed, so as to provide a useful basis for clinicians to diagnose diseases. Recently, deep learning-based methods have shown significant improvements and achieved state-of-the-art performance in many medical image segmentation tasks, such as cardiac segmentation [3], [4], [5] and abdominal segmentation [6], [7]. However, the success of most existing deep learning-based methods relies on a large amount of labeled training data to ease the sub-optimal performance caused by over-fitting and to ensure reliable generalization on the test set, while it is hard and expensive to obtain large amounts of well-annotated data in the medical imaging domain, where only experts can provide reliable annotations.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
“…Comparative study for EMIDEC myocardial segmentation in LGE-MRI (test leaderboard) [62]. Best values are marked in bold font.…”
Citation type: mentioning | Confidence: 99%
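The leaderboard comparison cited above ranks segmentation methods using overlap- and volume-based metrics. As an illustration of the overlap component only, here is a minimal sketch of the Dice similarity coefficient between two binary masks; the synthetic masks below are placeholders and do not correspond to EMIDEC data.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    return float(2.0 * intersection / (mask_a.sum() + mask_b.sum() + eps))

# Synthetic 2D example: two overlapping square "myocardium" masks.
gt = np.zeros((64, 64), dtype=bool)
pred = np.zeros((64, 64), dtype=bool)
gt[16:48, 16:48] = True
pred[20:52, 20:52] = True
print(f"Dice: {dice_coefficient(gt, pred):.3f}")  # ~0.766 for this synthetic case
```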