2018
DOI: 10.1002/nbm.3931

Automated pixel‐wise brain tissue segmentation of diffusion‐weighted images via machine learning

Abstract: The diffusion-weighted (DW) MR signal sampled over a wide range of b-values potentially allows for tissue differentiation in terms of cellularity, microstructure, perfusion, and T2 relaxivity. This study aimed to implement a machine learning algorithm for automatic brain tissue segmentation from DW-MRI datasets, and to determine the optimal sub-set of features for accurate segmentation. DWI was performed at 3 T in eight healthy volunteers using 15 b-values and 20 diffusion-encoding directions. The pixel-wise si…
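
A minimal sketch of the kind of pixel-wise classification the abstract describes, assuming scikit-learn; the classifier choice, the feature layout (one normalized signal value per b-value), and the class labels are illustrative assumptions, and the random arrays stand in for real data, so this is not the paper's actual pipeline:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One feature vector per voxel: the DW signal sampled at each b-value,
# normalized to the b=0 signal (placeholder random data used here).
n_voxels, n_bvalues = 10000, 15
X = np.random.rand(n_voxels, n_bvalues)
y = np.random.randint(0, 3, n_voxels)      # 0 = CSF, 1 = GM, 2 = WM reference labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # rough pixel-wise accuracy estimate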

Cited by 16 publications (18 citation statements) | References 44 publications
“…Our proposed method performed tissue segmentation prediction directly from the dMRI data and thus could avoid obvious segmentation errors when transferring the anatomical T2w-based "ground truth" segmentation to the dMRI space. In the literature, anatomical-MRI-based segmentation, e.g., the one obtained by SPM, is usually used as the "ground truth" data (Ciritsis et al, 2018; Schnell et al, 2009; Cheng et al, 2020), since the segmentation appears in good agreement with the known anatomy. However, transferring T1w- or T2w-based segmentation into the dMRI space is challenging due to the image distortions in dMRI data, which significantly affect inter-modality registration (Albi et al, 2018; Wu et al, 2008; Jones and Cercignani, 2010).…”
Section: Discussion
confidence: 99%
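
The transfer step this statement calls challenging can be illustrated with a short sketch. This is not the cited authors' pipeline; it is a minimal example, assuming DIPY is installed and that the placeholder files below exist, of the rigid inter-modality registration (anatomical image to the dMRI b=0 volume) that such a transfer typically relies on:

import nibabel as nib
from dipy.align.imaffine import AffineRegistration, MutualInformationMetric
from dipy.align.transforms import RigidTransform3D

# Placeholder inputs: a mean b=0 dMRI volume (static) and a skull-stripped
# anatomical image (moving) that carries the tissue segmentation.
static_img = nib.load("mean_b0.nii.gz")
moving_img = nib.load("T2w_brain.nii.gz")

affreg = AffineRegistration(metric=MutualInformationMetric(nbins=32))
rigid_map = affreg.optimize(static_img.get_fdata(), moving_img.get_fdata(),
                            RigidTransform3D(), None,
                            static_img.affine, moving_img.affine)

# rigid_map is an AffineMap; it can later resample the anatomical labels into
# dMRI space with nearest-neighbour interpolation (see the next sketch).

EPI distortions in the dMRI data are exactly what make this step error-prone, which is the limitation the quoted discussion points to.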
“…The labels were projected into the co-registered dMRI data using nearest neighbor interpolation. We note that SPM is often the method of choice in generating high quality reference brain segmentation to compare with dMRI-based tissue segmentation (Schnell et al, 2009;Ciritsis et al, 2018;Cheng et al, 2020), although other segmentation tools, e.g., FMRIB's Automated Segmentation Tool (FSL FAST) (Jenkinson et al, 2012) could also be used instead.…”
Section: Datasets
confidence: 99%
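
The nearest-neighbour label projection mentioned in this statement can be sketched as follows, assuming nibabel and that the two images are already co-registered in world coordinates; file names are placeholders:

import nibabel as nib
from nibabel.processing import resample_from_to

labels_anat = nib.load("spm_tissue_labels.nii.gz")   # tissue labels in anatomical space
dwi_ref = nib.load("mean_b0.nii.gz")                 # target dMRI geometry

# order=0 selects nearest-neighbour interpolation, so integer tissue labels
# are copied rather than averaged across classes.
labels_dwi = resample_from_to(labels_anat, dwi_ref, order=0)
nib.save(labels_dwi, "tissue_labels_in_dwi.nii.gz")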
“…Recently, Yap et al. applied ℓ0 sparse-group representation classification to the raw DWI data and achieved Dice scores of ~0.8 for GM and ~0.86 for WM on five subjects when using the T1w segmentation as the ground truth [10]. Another method using machine learning achieved an average Dice score of 0.79 by taking the diffusion-weighted signal as well as the MD and FA values as features [11].…”
Section: Introduction
confidence: 99%
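
The Dice scores quoted above (~0.8 for GM, ~0.86 for WM, 0.79 on average) are overlap measures that can be computed in a few lines; the arrays and label coding below are illustrative placeholders:

import numpy as np

def dice(pred, truth, label):
    # Dice coefficient for one tissue class between two label volumes.
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else float("nan")

pred = np.random.randint(0, 3, (64, 64, 30))    # predicted labels (0 = CSF, 1 = GM, 2 = WM)
truth = np.random.randint(0, 3, (64, 64, 30))   # reference labels in the same space
print(dice(pred, truth, 1), dice(pred, truth, 2))   # GM and WM Dice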
“…Related studies show that, as algorithms can be improved automatically through experience, machine learning has higher accuracy and scalability than more common methods (Zhang & El‐Gohary, 2020). Moreover, machine learning has been widely used in various classification problems such as image classification and text classification (Ciritsis, Boss, & Rossi, 2018; Zablith & Osman, 2019), which makes it suitable for POI matching.…”
Section: Introduction
confidence: 99%