2020 | DOI: 10.1007/s11265-020-01612-4
Semi-Supervised Deep Learning for Multi-Tissue Segmentation from Multi-Contrast MRI


Cited by 9 publications (6 citation statements) | References 24 publications
“…found in the literature. Such studies, which used deep learning methods to discriminate thigh and leg tissues from MRI scans, achieved very high accuracy, namely DSC of 0.97, 0.94 and 0.80 [4] and 0.96, 0.92 and 0.93 [3] for muscle, fat and inter-muscular adipose tissue, respectively. In our study, however, as in [9], we took a different approach: we started from ground-truth segmentations of muscles based on their anatomy, resulting in a network capable of replicating the manual segmentation of muscle ROIs.…”
Section: Discussion
confidence: 99%
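For context only (this snippet is not from the cited works): the Dice similarity coefficient (DSC) quoted in these studies compares a predicted binary mask with a ground-truth mask, with 1.0 meaning perfect overlap. A minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred ∩ target| / (|pred| + |target|)
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * intersection / total
```

For example, a prediction overlapping half of a two-pixel ground-truth region yields DSC = 2·1/(1+2) ≈ 0.67.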
“…In particular, recent studies applied diverse approaches, including variational segmentation methods combined with statistical clustering-based techniques on T1-weighted scans [10,22], machine-learning classification techniques on intensity-based features extracted from multi-contrast Dixon scans [29], Deep Neural Network (DNN) methods based on convolutional architectures combined with a variational contour detector on T1-w scans [30], and DNN methods based on an encoder-decoder U-net architecture [27] combined with a clustering algorithm on T2 and proton density (PD) maps from multi-spin-echo scans [3]. Finally, Anwar et al. applied a semi-supervised deep learning approach based on an encoder-decoder architecture on multi-contrast Dixon scans [4]. This latter work provided a unified framework to automatically segment both the multiple tissue regions and the edges of the fascia lata, which separates the adipose tissue domain into subcutaneous and inter-muscular.…”
Section: Introduction
confidence: 99%
“…Distinction between adipose and healthy muscle tissue was performed using the same networks, and the corresponding DSC values were also high, i.e., 0.91 (66) and 0.94 ± 0.07 (65) for muscle detection. Recently, impressive DSC scores of 0.97 were obtained with an improved U-Net structure using residual connections and dense blocks (67). However, such a classification did not allow perimuscular and intramuscular adipose tissue to be distinguished.…”
Section: Deep Learning-Based Segmentation Methods
confidence: 99%
“…In that case, each image does not have to be annotated before the network training phase, and one can enlarge the database without human intervention for the labeling process. Anwar et al. (67) proposed using a CED on unlabeled data to create labels and thus enlarge their dataset. However, unlabeled data are not always available, especially for the study of rare diseases.…”
Section: Deep Learning-Based Segmentation Methods
confidence: 99%
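The pseudo-labeling idea described above — keeping model-generated labels only for unlabeled samples the model is confident about — can be sketched as follows; the function name and the 0.9 confidence threshold are illustrative assumptions, not details from Anwar et al.:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Select confident pseudo-labels from model predictions.

    probs: (n_samples, n_classes) array of per-class probabilities
    (e.g., softmax outputs) for unlabeled samples.
    Returns the indices of samples whose top-class probability meets
    the threshold, together with their predicted class labels.
    """
    confidence = probs.max(axis=1)       # top-class probability per sample
    labels = probs.argmax(axis=1)        # predicted class per sample
    keep = np.where(confidence >= threshold)[0]
    return keep, labels[keep]
```

The selected samples and their pseudo-labels would then be merged into the labeled training set for the next training round, which is how such schemes enlarge a dataset without manual annotation.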
“…Several studies have proposed either semi-automated or fully automated methods for segmenting muscles on MRI images [16–22]. These automations have been mainly applied to whole-limb muscles.…”
Section: Introduction
confidence: 99%