Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks, in part because such diverse data have not been available. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration benchmark for the comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analyses of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
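To illustrate the kind of complementary metrics mentioned above, the snippet below sketches two of them: accuracy via label overlap (Dice) and plausibility via the smoothness of the deformation (standard deviation of the log Jacobian determinant). This is a minimal NumPy sketch with illustrative function names, not the official Learn2Reg evaluation code.

```python
# Minimal sketch of two registration evaluation metrics: label overlap (accuracy)
# and smoothness of the displacement field (plausibility). Illustrative only.
import numpy as np

def dice(fixed_seg, warped_seg, label):
    """Dice overlap of one anatomical label after registration."""
    a = fixed_seg == label
    b = warped_seg == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else np.nan

def sd_log_jacobian(disp):
    """Standard deviation of the log Jacobian determinant of a dense
    displacement field disp with shape (3, D, H, W), in voxel units."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    # Jacobian of the transform x + u(x) is the identity plus the displacement gradients.
    jac = np.stack([np.stack(g, axis=0) for g in grads], axis=0)   # (comp, axis, D, H, W)
    jac = jac + np.eye(3).reshape(3, 3, 1, 1, 1)
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))        # (D, H, W)
    det = np.clip(det, 1e-9, None)                                 # guard against folded voxels
    return float(np.log(det).std())
```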
In this paper, we present our contribution to the Learn2Reg challenge. We applied the Fraunhofer MEVIS registration library RegLib to all four tasks of the challenge. For tasks 1–3, we used a classic iterative registration method with the normalized gradient fields (NGF) distance measure, a second-order curvature regularizer, and a multi-level optimization scheme. For task 4, we applied a deep-learning approach in which a U-Net was trained in a weakly supervised fashion using the same cost function as in the iterative approach.
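To make the structure of such a multi-level scheme concrete, the following is a hedged sketch of a coarse-to-fine registration loop with an NGF-style distance and a simplified curvature-type regularizer, written with PyTorch autograd for brevity. It is not the Fraunhofer MEVIS RegLib implementation; all function names and parameter values are illustrative.

```python
# Coarse-to-fine iterative registration sketch: NGF-style similarity plus a
# simplified curvature-type regularizer, optimised per pyramid level.
import torch
import torch.nn.functional as F

def ngf(fixed, moving, eps=1e-2):
    """NGF-style distance: 1 - <n_F, n_M>^2 averaged over voxels."""
    def ngrad(img):
        g = torch.cat(torch.gradient(img, dim=[2, 3, 4]), dim=1)   # (B, 3, D, H, W)
        return g / torch.sqrt((g ** 2).sum(1, keepdim=True) + eps ** 2)
    nf, nm = ngrad(fixed), ngrad(moving)
    return (1.0 - (nf * nm).sum(1) ** 2).mean()

def curvature(disp):
    """Simplified curvature-style regularizer: mean squared second differences
    of each displacement component along every spatial axis."""
    reg = 0.0
    for d in (2, 3, 4):
        second = torch.diff(torch.diff(disp, dim=d), dim=d)
        reg = reg + second.pow(2).mean()
    return reg

def warp(moving, disp):
    """Warp a moving image (B,1,D,H,W) with a displacement field (B,3,D,H,W)
    given in voxels and ordered (dz, dy, dx) to match the spatial axes."""
    B, _, D, H, W = moving.shape
    theta = torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1)
    base = F.affine_grid(theta, moving.shape, align_corners=True)   # identity grid, (x, y, z) order
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)])
    offsets = disp.permute(0, 2, 3, 4, 1).flip(-1) * scale          # voxels -> normalised coords
    return F.grid_sample(moving, base + offsets, align_corners=True)

def register(fixed, moving, levels=3, iters=100, alpha=0.1, lr=0.05):
    """Multi-level optimisation, coarsest level first; returns the final displacement."""
    disp = None
    for lvl in reversed(range(levels)):
        f = F.interpolate(fixed, scale_factor=1 / 2 ** lvl, mode='trilinear')
        m = F.interpolate(moving, scale_factor=1 / 2 ** lvl, mode='trilinear')
        if disp is None:
            disp = torch.zeros(f.shape[0], 3, *f.shape[2:])
        else:                                                        # upsample previous estimate
            disp = F.interpolate(disp, size=f.shape[2:], mode='trilinear') * 2.0
        disp = disp.requires_grad_(True)
        opt = torch.optim.Adam([disp], lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            loss = ngf(f, warp(m, disp)) + alpha * curvature(disp)
            loss.backward()
            opt.step()
        disp = disp.detach()
    return disp
```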
Accurate optic disc (OD) segmentation and fovea detection in retinal fundus images are crucial for ophthalmological diagnosis. We propose a broadly applicable algorithm for automated, robust, and consistent fovea detection based on OD segmentation. The OD segmentation is performed with morphological operations and Fuzzy C-Means clustering, combined with iterative thresholding on a foreground segmentation. The fovea detection builds on a vessel segmentation obtained via morphological operations and uses the resulting OD segmentation to determine multiple regions of interest; the fovea is selected from the largest vessel-free candidate region. We tested the method on a total of 190 images from three publicly available databases: DRIONS, DRIVE, and HRF. Compared with the results of two human experts on the DRIONS database, our OD segmentation yielded a Dice coefficient of 0.83; note that missing ground truth and inter-expert variability remain an issue. The new scheme achieved an overall success rate of 99.44% for OD detection and 96.25% for fovea detection, which is superior to state-of-the-art approaches.
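The following is a rough, self-contained sketch of the OD-localisation idea described above: suppress dark vessels with a grayscale morphological closing, cluster intensities with a plain Fuzzy C-Means implementation, and keep the brightest cluster as the disc candidate. The disk radius, number of clusters, and fuzziness parameter are illustrative guesses, not the values used by the authors.

```python
# Sketch: vessel suppression by morphological closing, then Fuzzy C-Means on
# intensities; the brightest cluster is taken as the optic disc candidate.
import numpy as np
from skimage.morphology import closing, disk

def fuzzy_cmeans_1d(values, n_clusters=3, m=2.0, iters=50):
    """Plain Fuzzy C-Means on 1-D intensity values; returns centres and memberships."""
    rng = np.random.default_rng(0)
    centres = rng.choice(values, n_clusters, replace=False)
    for _ in range(iters):
        dist = np.abs(values[:, None] - centres[None, :]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1 per pixel
        centres = (u ** m * values[:, None]).sum(0) / (u ** m).sum(0)
    return centres, u

def segment_optic_disc(green_channel):
    """Boolean mask of the brightest fuzzy cluster after vessel suppression."""
    smoothed = closing(green_channel, disk(8))             # closing fills dark vessels
    vals = smoothed.astype(float).ravel()
    centres, u = fuzzy_cmeans_1d(vals)
    labels = np.argmax(u, axis=1).reshape(green_channel.shape)
    return labels == np.argmax(centres)
```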