2019
DOI: 10.1007/s11548-019-02089-8
Multimodal 3D medical image registration guided by shape encoder–decoder networks

Cited by 32 publications (20 citation statements)
References 15 publications
“…These approaches, generally called feature-based registration, are of particular interest when the relation between image contents is unknown or cannot be assumed (for example, when validating a new probe or a new imaging modality). Finding the matching and computing the transformation can follow two main paradigms: converting the image data into localizations, with potential additional features, for point-based registration ([89,91] for the AutoFINDER part of ec-clem), or shape-based registration, potentially with intensity-based machine learning approaches [99]. Note that a plethora of variants exists for point-cloud registration, some of which sound particularly promising for feature-based multimodal registration [100].…”
Section: Correlation Software
confidence: 99%
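Once correspondences between localizations are established, the point-based registration paradigm described in this excerpt reduces to a least-squares rigid alignment. A minimal sketch, assuming known point correspondences and using the standard Kabsch/SVD solution (an illustrative NumPy version, not the implementation from the cited tools):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch algorithm).

    Finds rotation R and translation t minimizing ||src @ R.T + t - dst||,
    given two (N, 3) arrays of corresponding points.
    """
    # Center both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance and its SVD give the optimal rotation.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation maps the source centroid onto the destination centroid.
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice the harder part is establishing the correspondences themselves; iterative schemes such as ICP alternate between matching and this closed-form alignment step.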
“…The model was trained on 60 CT volumes and reached a Dice value of 0.914. Max et al. [26] conducted multi-modality (i.e., CT and MRI) 3D whole-heart segmentation using supervised deep learning by defining a shape encoder–decoder network. Their model was trained on 15 CT volumes and reached a Dice value of 0.653.…”
Section: Related Work
confidence: 99%
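The Dice values quoted above measure volumetric overlap between a predicted and a reference segmentation (1.0 is perfect agreement). A minimal sketch of the metric, as an illustrative NumPy version rather than the cited papers' code:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0
```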
“…Later, deep learning approaches showed promising performance [7,9,10]. Deep learning comprises two learning paradigms: supervised learning [22][23][24][25][26], which requires ground truth to define the loss function, and unsupervised learning [27][28][29], which learns features without ground-truth labeling.…”
Section: Introduction
confidence: 99%
“…For years, multiple independent research groups have tried to improve current IF visualization [13] and to introduce more advanced guidance techniques, such as automatic cross-modal image registration [14][15][16][17][18], image-based tracking [14,19], and automatic compensation of heartbeat and respiratory motion [20][21][22], to overcome these specific issues. Introducing these techniques as product functionality is, however, not easy due to the proprietary character of commercial XR systems.…”
Section: Introduction
confidence: 99%