2022
DOI: 10.3390/jimaging8100259

DS6, Deformation-Aware Semi-Supervised Learning: Application to Small Vessel Segmentation with Noisy Training Data

Abstract: Blood vessels of the brain provide the human brain with the required nutrients and oxygen. As a vulnerable part of the cerebral blood supply, pathology of small vessels can cause serious problems such as Cerebral Small Vessel Diseases (CSVD). It has also been shown that CSVD is related to neurodegeneration, such as Alzheimer’s disease. With the advancement of 7 Tesla MRI systems, higher spatial image resolution can be achieved, enabling the depiction of very small vessels in the brain. Non-Deep Learning-based …

Cited by 11 publications (15 citation statements) · References 53 publications
“…The UNet architecture (Ronneberger et al, 2015), including its 3D version (Çiçek et al, 2016), is a versatile neural network consisting of two paths: contraction and expansion. Originally proposed for image segmentation, different flavours of UNet have been developed and deployed in plenty of applications such as image segmentation (Milletari et al, 2016; Zhou et al, 2018; Oktay et al, 2018; Chatterjee et al, 2020a), audio source separation (Jansson et al, 2017; Stoller et al, 2018; Choi et al, 2019) and image reconstruction (Hyun et al, 2018; Iqbal et al, 2019). 3D UNet and its variants have been used for MR super-resolution as well (Pham et al, 2019; Sarasaen et al, 2021; Chatterjee et al, 2021b).…”
Section: Related Work
confidence: 99%
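The two-path structure described in this citation statement can be illustrated with a minimal sketch. The following assumes PyTorch; the depth, channel counts (`base`) and 2D setting are illustrative choices, not the configuration of DS6 or any of the cited UNet variants.

```python
# A minimal sketch of the contraction/expansion (UNet) idea, assuming PyTorch.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic UNet building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        # Contraction path: downsample while increasing channels.
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        # Expansion path: upsample and fuse skip connections.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel logits for segmentation


if __name__ == "__main__":
    y = TinyUNet()(torch.randn(1, 1, 64, 64))
    print(y.shape)  # torch.Size([1, 1, 64, 64])
```

The skip connections (`torch.cat`) are what distinguish the UNet from a plain encoder-decoder: fine spatial detail from the contraction path is reinjected into the expansion path at matching resolutions.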
“…Following the hypothesis that a batch size of one makes it possible to learn an exact mapping function between a specific pair of low- and high-resolution images (Chatterjee et al, 2021a), the batch size during training and inference in this research was also set to one. The loss during training was calculated using perceptual loss (Johnson et al, 2016), with the help of a perceptual loss network (Chatterjee et al, 2020a), and was minimised using the Adam optimiser with a learning rate of 10⁻⁴ for 100 epochs. The code of the implementation is available on GitHub: https://github.com/soumickmj/DDoS.…”
Section: Implementation, Training and Inference
confidence: 99%
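The training recipe quoted above (batch size one, perceptual loss, Adam at 10⁻⁴ for 100 epochs) could look roughly like the sketch below. The actual perceptual-loss network of Chatterjee et al (2020a) is swapped here for a stand-in built on torchvision's VGG16 features, and `model` and `dataset` (yielding low-/high-resolution pairs) are assumed to be defined elsewhere.

```python
# A hedged sketch of the quoted training setup, not the DDoS repository's code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models import vgg16


class PerceptualLoss(nn.Module):
    """Compare feature maps of prediction and target in a frozen network.
    Stand-in for the perceptual-loss network of Chatterjee et al (2020a)."""

    def __init__(self, layers=16):
        super().__init__()
        self.features = vgg16(weights="DEFAULT").features[:layers].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.l1 = nn.L1Loss()

    def forward(self, pred, target):
        # VGG expects 3 channels; repeat single-channel MR slices.
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return self.l1(self.features(pred3), self.features(target3))


def train(model, dataset, device="cuda"):
    loader = DataLoader(dataset, batch_size=1, shuffle=True)  # batch size one
    criterion = PerceptualLoss().to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr = 10^-4
    model.to(device).train()
    for epoch in range(100):  # 100 epochs, as in the quoted setup
        for low_res, high_res in loader:
            low_res, high_res = low_res.to(device), high_res.to(device)
            loss = criterion(model(low_res), high_res)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```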
“…The proposed architecture provides post-hoc interpretability and explainability methods, incorporating libraries such as LIME, SHAP and TorchRay, and extends them to apply to 2D and 3D deep learning models for images. The authors used the segmentation models from the DS6 [210] paper: UNet, UNet-MSS (multi-scale supervision) and UNet-MSS with deformation. Vessel segmentation was chosen to evaluate the proposed architecture on a segmentation model.…”
Section: Image Segmentation Using UNet With XAI Model
confidence: 99%
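As a rough illustration of the post-hoc attribution idea behind libraries such as LIME, SHAP and TorchRay, the sketch below hand-rolls occlusion sensitivity for a segmentation model. This is not the quoted architecture's method, only the underlying principle; `model` (returning per-pixel logits for a 4D input) is an assumption.

```python
# A minimal, library-agnostic sketch of post-hoc occlusion sensitivity.
import torch


@torch.no_grad()
def occlusion_map(model, image, patch=8, baseline=0.0):
    """How much does masking each patch lower the total foreground
    score of the segmentation output?"""
    model.eval()
    reference = torch.sigmoid(model(image)).sum()
    _, _, h, w = image.shape  # assumes a 4D (N, C, H, W) input
    rows, cols = (h + patch - 1) // patch, (w + patch - 1) // patch
    heat = torch.zeros(rows, cols)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.clone()
            occluded[..., i:i + patch, j:j + patch] = baseline
            score = torch.sigmoid(model(occluded)).sum()
            heat[i // patch, j // patch] = (reference - score).item()
    return heat  # high values mark regions the model relies on
```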
“…Unlike typical deep learning models, this model receives the input image at the original scale and at three downsampled scales, which are supplied to the inner encoding blocks. Similarly, the output of the model is compared at different scales as well, known as deep supervision [15] or multi-scale supervision [16]. The final output of the model (the output from the final decoding block), along with three more outputs from the inner decoding blocks, is compared against the original ground truth (used in [7]), as well as three downscaled versions of the ground truth, respectively.…”
Section: B. Turbolift Learning
confidence: 99%
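The multi-scale supervision scheme described above can be sketched as a weighted sum of losses over decoder outputs. The weights, the BCE loss and the four-scale list below are illustrative assumptions, not the exact configuration used in [15] or [16].

```python
# A hedged sketch of multi-scale (deep) supervision: each decoder output is
# compared against the ground truth downscaled to the matching resolution.
import torch
import torch.nn.functional as F


def multiscale_loss(outputs, ground_truth, weights=(1.0, 0.5, 0.25, 0.125)):
    """outputs: [full-res, 1/2, 1/4, 1/8] logits, finest first, as would be
    returned by the final decoding block plus three inner decoding blocks."""
    total = 0.0
    for out, w in zip(outputs, weights):
        # Downscale the ground-truth mask to this output's resolution.
        gt = F.interpolate(ground_truth, size=out.shape[2:], mode="nearest")
        total = total + w * F.binary_cross_entropy_with_logits(out, gt)
    return total
```

Weighting the coarser scales less keeps the full-resolution output dominant while the inner decoder blocks still receive a direct gradient signal, which is the stated point of supervising at multiple scales.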