2019
DOI: 10.1007/978-3-030-32248-9_35

U-ReSNet: Ultimate Coupling of Registration and Segmentation with Deep Nets

Abstract: In this study, we propose a 3D deep neural network called U-ReSNet, a joint framework that can accurately register and segment medical volumes. The proposed network learns to automatically generate linear and elastic deformation models, trained by minimizing the mean square error and the local cross-correlation similarity metrics. In parallel, a coupled architecture is integrated, seeking to provide segmentation maps for anatomies or tissue patterns using an additional decoder part trained with the dice coefficient…
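The abstract outlines a composite training objective: mean square error plus local cross-correlation for the registration branch, and a Dice term for the segmentation decoder. Below is a minimal PyTorch sketch of how such a joint loss could be assembled; the helper names, the window size, and the weights (w_mse, w_lcc, w_dice) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def local_cc(fixed, warped, win=9, eps=1e-5):
    # Windowed (local) cross-correlation over 3D volumes of shape (B, 1, D, H, W).
    kernel = torch.ones(1, 1, win, win, win, device=fixed.device) / win ** 3
    pad = win // 2
    mu_f = F.conv3d(fixed, kernel, padding=pad)
    mu_w = F.conv3d(warped, kernel, padding=pad)
    var_f = F.conv3d(fixed * fixed, kernel, padding=pad) - mu_f ** 2
    var_w = F.conv3d(warped * warped, kernel, padding=pad) - mu_w ** 2
    cov = F.conv3d(fixed * warped, kernel, padding=pad) - mu_f * mu_w
    return (cov ** 2 / (var_f * var_w + eps)).mean()

def soft_dice_loss(probs, target, eps=1e-5):
    # Soft Dice loss; probs and target have shape (B, C, D, H, W).
    inter = (probs * target).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def joint_loss(fixed, warped, seg_probs, seg_target,
               w_mse=1.0, w_lcc=1.0, w_dice=1.0):
    # MSE + local cross-correlation for registration, Dice for segmentation.
    # The correlation term enters with a negative sign: it is a similarity
    # to be maximized, while the other two terms are minimized.
    return (w_mse * F.mse_loss(warped, fixed)
            - w_lcc * local_cc(fixed, warped)
            + w_dice * soft_dice_loss(seg_probs, seg_target))
```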

Cited by 42 publications (38 citation statements)
References 18 publications
“…In particular, deep convolutional neural networks (CNNs) have proved to outperform all existing strategies in other fundamental tasks of computer vision, such as image segmentation [27] and classification [23]. In recent years, we have witnessed the advent of deep learning-based image registration methods [25,49,35,39,46,45,1,6,11], which achieve state-of-the-art performance and drastically reduce the required computational time. These works have made a fundamental contribution by setting novel architectures for CNN-based deformable image registration (following supervised, unsupervised and semi-supervised training approaches).…”
Section: Introduction (mentioning)
confidence: 99%
“…Our resulting loss function can be written as

$$\mathcal{L} = \sum_{l} \Big[ \mathcal{L}_{\text{sim}}\big(F^{(l)},\, M^{(l)} \circ \phi^{(l)}\big) + \alpha\, \big\|\phi_a^{(l)} - \phi_0^{(l)}\big\|_2^2 + \beta\, \big\|\nabla \phi_d^{(l)}\big\|_2^2 \Big].$$

We use the (soft) Dice coefficient (DSC) [6] for structural similarity and the normalized cross-correlation (NCC) [7] for image similarity. To ensure smooth displacements, we regularize the affine displacement field with the L2 loss between the estimated value and an identity displacement field ($\phi_0^{(l)}$), and the deformable field with the spatial gradient of the displacement field [8], where $M^{(l)}$ and $F^{(l)}$ represent downsampled versions of the moving and fixed images at each level, and $\phi_a^{(l)}$ and $\phi_d^{(l)}$ indicate the estimated affine and deformable registrations (for each level). The hyperparameters $\alpha$ and $\beta$ determine the importance of the corresponding terms.…”
Section: Methods (mentioning)
confidence: 99%
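To make the structure of this multi-level objective concrete, the following is a minimal PyTorch sketch under the reading above: one similarity term per pyramid level, an L2 penalty tying the affine field to the identity field $\phi_0^{(l)}$, and a spatial-gradient penalty on the deformable field. The helper names and the weights alpha and beta are assumptions for illustration; the citing paper's exact formulation is not fully recoverable from the excerpt.

```python
import torch

def grad_penalty(disp):
    # Mean squared spatial gradient of a dense displacement field (B, 3, D, H, W),
    # approximated with forward finite differences along each spatial axis.
    dz = (disp[:, :, 1:] - disp[:, :, :-1]) ** 2
    dy = (disp[:, :, :, 1:] - disp[:, :, :, :-1]) ** 2
    dx = (disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]) ** 2
    return dz.mean() + dy.mean() + dx.mean()

def multilevel_loss(sim_terms, affine_fields, identity_fields, deform_fields,
                    alpha=0.1, beta=1.0):
    # Per pyramid level l:
    #   similarity
    #   + alpha * ||phi_a^(l) - phi_0^(l)||_2^2   (affine field tied to identity)
    #   + beta  * mean ||grad phi_d^(l)||^2       (smooth deformable field)
    total = 0.0
    for sim, phi_a, phi_0, phi_d in zip(sim_terms, affine_fields,
                                        identity_fields, deform_fields):
        total = total + sim \
            + alpha * ((phi_a - phi_0) ** 2).mean() \
            + beta * grad_penalty(phi_d)
    return total
```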
“…Least-squares matching is then employed to refine these initial points and finally obtain the transformation coefficients. In recent years, advances in deep learning have led the computer vision community toward neural network-based registration solutions, especially in the medical field [12][13][14][15]. Despite this progress, little effort has been made to adapt these frameworks for image registration in remote sensing, where semi-automated algorithms are still widely employed [1,16,17].…”
Section: Introduction (mentioning)
confidence: 99%