2022
DOI: 10.1109/tcsvt.2022.3178178
Cross-Resolution Distillation for Efficient 3D Medical Image Registration

Cited by 22 publications (4 citation statements)
References 51 publications
“…To investigate the impact of various block sizes on the performance of IDwNet during DCT, experimental simulations were conducted utilizing the CIFAR-10 dataset and the ResNet-18 classification network model, and block sizes were configured with four typical combinations: (1, 1), (2, 2), (4, 4), and (8, 8), where (x, y) denotes the block size in the horizontal and vertical directions. Subsequently, the embedding strength was adjusted to achieve comparable image quality for these combinations.…”
Section: Effect Of Block Size On Performancementioning
confidence: 99%
“…In recent years, deep neural network (DNN) models have garnered significant success across diverse fields. For instance, within conventional computer domains, like image processing [1][2][3], video processing [4,5], and natural language processing [6,7], as well as in interdisciplinary applications, such as medical image processing [8], cross-media retrieval [9], and pedestrian detection [10], deep neural networks (DNNs) have consistently exhibited superior performance when compared to traditional methodologies. Nevertheless, the construction, optimization, and training of deep neural networks (DNNs) demand substantial expertise, extensive training data, and considerable computational resources, thereby transforming the resulting trained DNN models into crucial assets.…”
Section: Introductionmentioning
confidence: 99%
“…DistillFlow [43] further improves the above two-stage data distillation by introducing multiple teacher models and a confidence mechanism. As for the related medical image registration task, the work CRD [63] distills knowledge from a feature-shifted teacher model with high-resolution input to a student model with low-resolution input for greater efficiency. Different from their works, our MDFlow transfers knowledge between teacher and student networks mutually, so as to decouple matching outliers in augmentation regularization and exploit a recent advanced architecture as the student for better final prediction, while maintaining real-time inference.…”
Section: Knowledge Distillation and Mutual Distillationmentioning
confidence: 99%
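The cross-resolution distillation idea quoted above — matching a high-resolution teacher's features against a low-resolution student's — can be sketched as a feature-distillation loss. This is a minimal illustration, not the loss from CRD [63]: the function names, the average-pooling alignment between resolutions, and the MSE objective are all assumptions for the sake of the example.

```python
import numpy as np

def avg_pool2d(x, k):
    # Downsample an (H, W, C) feature map by k x k average pooling,
    # a simple way to align teacher and student spatial resolutions.
    h, w, c = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def distill_loss(teacher_feat, student_feat):
    # MSE between the pooled teacher features and the student features.
    k = teacher_feat.shape[0] // student_feat.shape[0]
    return float(np.mean((avg_pool2d(teacher_feat, k) - student_feat) ** 2))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 8, 4))   # high-resolution teacher features
student = rng.standard_normal((4, 4, 4))   # low-resolution student features
loss = distill_loss(teacher, student)
```

In practice such a term is added to the task loss during student training; the actual CRD method also involves a feature-shifting step that this sketch does not model.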
“…Consistently aligning 3D point clouds from different views to the same view is called 3D point cloud registration. As a very important task in computer vision, 3D point cloud registration has a wide range of applications in medical science [22], robotics, and other fields.…”
Section: Introductionmentioning
confidence: 99%