Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and accommodate scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience through the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user-friendly data sharing via online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image-analysis workflows, freely available on GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks.
Background: With increasing research on system integration for image-guided therapy (IGT), there has been a strong demand for standardized communication among devices and software to share data such as target positions, images, and device status. Method: We propose a new, open, simple, and extensible network communication protocol for IGT, named OpenIGTLink, to transfer transform, image, and status messages. We conducted performance tests and use-case evaluations in five clinical and engineering scenarios. Results: The protocol was able to transfer position data with submillisecond latency at up to 1024 fps and images with latency of <10 ms at 32 fps. The use-case tests demonstrated that the protocol is feasible for integrating devices and software. Conclusion: The protocol proved capable of handling the data required in the IGT setting with sufficient time resolution and latency. The protocol not only improves the interoperability of devices and software but also promotes the transition of research prototypes to clinical applications.
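Every OpenIGTLink message begins with a fixed 58-byte big-endian header carrying the protocol version, message type, device name, timestamp, body size, and a body CRC. A minimal sketch of packing a version-1 header with only the Python standard library (the 64-bit CRC computation required by the spec is omitted and stubbed with 0 here):

```python
import struct

# OpenIGTLink v1 header layout (big-endian, 58 bytes total):
#   uint16 version | char[12] message type | char[20] device name
#   uint64 timestamp | uint64 body size | uint64 body CRC
HEADER_FMT = ">H12s20sQQQ"

def pack_header(msg_type: str, device: str, timestamp: int, body: bytes) -> bytes:
    # struct pads the fixed-width name fields with trailing null bytes.
    return struct.pack(HEADER_FMT,
                       1,                         # protocol version
                       msg_type.encode("ascii"),  # e.g. "TRANSFORM", "IMAGE", "STATUS"
                       device.encode("ascii"),
                       timestamp,
                       len(body),
                       0)                         # placeholder for the body CRC-64

header = pack_header("STATUS", "TestDevice", 0, b"")
print(len(header))  # 58
```

The header is followed directly by the message body on the wire, so a receiver can read exactly 58 bytes, unpack the body size, and then read the remainder of the message.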
The diverse composition of mammalian tissues poses challenges for understanding the cell–cell interactions required for organ homeostasis and how spatial relationships are perturbed during disease. Existing methods such as single-cell genomics, lacking a spatial context, and traditional immunofluorescence, capturing only two to six molecular features, cannot resolve these issues. Imaging technologies have been developed to address these problems, but each possesses limitations that constrain widespread use. Here we report a method that overcomes major impediments to highly multiplex tissue imaging. “Iterative bleaching extends multiplexity” (IBEX) uses an iterative staining and chemical bleaching method to enable high-resolution imaging of >65 parameters in the same tissue section without physical degradation. IBEX can be employed with various types of conventional microscopes and permits use of both commercially available and user-generated antibodies in an “open” system to allow easy adjustment of staining panels based on ongoing marker discovery efforts. We show how IBEX can also be used with amplified staining methods for imaging strongly fixed tissues with limited epitope retention and with oligonucleotide-based staining, allowing potential cross-referencing between flow cytometry, cellular indexing of transcriptomes and epitopes by sequencing, and IBEX analysis of the same tissue. To facilitate data processing, we provide an open-source platform for automated registration of iterative images. IBEX thus represents a technology that can be rapidly integrated into most current laboratory workflows to achieve high-content imaging to reveal the complex cellular landscape of diverse organs and tissues.
When choosing an electromagnetic tracking system (EMTS) for image-guided procedures, several factors must be taken into consideration. Among others, these include the system's refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system's interaction with the environment, whether the sensors can be embedded into the tools and provide the desired transformation data, and tracking accuracy and robustness. To date, the only factors that have been studied extensively are the accuracy and the susceptibility of EMTSs to distortions caused by ferromagnetic materials. In this paper the authors shift the focus from analysis of system accuracy and stability to the broader set of factors influencing the utility of EMTS in the clinical environment. The authors provide an analysis based on all of the factors specified above, as assessed in three clinical environments. They evaluate two commercial tracking systems, the Aurora system from Northern Digital Inc. and the 3D Guidance system with three different field generators from Ascension Technology Corp. The authors show that these systems are applicable to specific procedures and specific environments, but that currently, no single system configuration provides a comprehensive solution across procedures and environments.
We present a gradient-based method for rigid registration of a patient's preoperative computed tomography (CT) to the intraoperative situation using a few fluoroscopic X-ray images obtained with a tracked C-arm. The method is noninvasive, anatomy-based, requires simple user interaction, and includes validation. It is generic and easily customizable for a variety of routine clinical uses in orthopaedic surgery. Gradient-based registration consists of three steps: 1) initial pose estimation; 2) coarse geometry-based registration on bone contours; and 3) fine gradient projection registration (GPR) on edge pixels. It optimizes speed, accuracy, and robustness. Its novelty resides in using volume gradients to eliminate outliers and foreign objects in the fluoroscopic X-ray images, to speed up computation, and to achieve higher accuracy. It overcomes the drawbacks of intensity-based methods, which are slow and have a limited convergence range, and of geometry-based methods, which depend on the image segmentation quality. Our simulated, in vitro, and cadaver experiments on a human pelvis CT, dry vertebra, dry femur, fresh lamb hip, and human pelvis under realistic conditions show a mean 0.5-1.7 mm (0.5-2.6 mm maximum) target registration accuracy. Index Terms: fluoroscopic X-ray to CT registration, gradient-based, image registration, 2D/3D rigid registration.
Many types of medical and scientific experiments acquire raw data in the form of images. Various forms of image processing and image analysis are used to transform the raw image data into quantitative measures that are the basis of subsequent statistical analysis. In this article we describe the SimpleITK R package. SimpleITK is a simplified interface to the Insight Segmentation and Registration Toolkit (ITK). ITK is an open-source C++ toolkit that has been actively developed over the past 18 years and is widely used by the medical image analysis community. SimpleITK provides packages for many interpreter environments, including R. Currently, it includes several hundred classes for image analysis, covering a wide range of image input and output, filtering operations, and higher-level components for segmentation and registration. Using SimpleITK, development of complex combinations of image and statistical analysis procedures is feasible. This article includes several examples of computational image analysis tasks implemented using SimpleITK, including spherical marker localization, multi-modal image registration, segmentation evaluation, and cell image analysis.
This article describes FRACAS, a computer-integrated orthopedic system for assisting surgeons in performing closed medullary nailing of long bone fractures. FRACAS's goal is to reduce the surgeon's cumulative exposure to radiation and surgical complications associated with alignment and positioning errors of bone fragments, nail insertion, and distal screw locking. It replaces uncorrelated, static fluoroscopic images with a virtual reality display of three-dimensional bone models created from preoperative computed tomography and tracked intraoperatively in real time. Fluoroscopic images are used to register the bone models to the intraoperative situation and to verify that the registration is maintained. This article describes the system concept, software prototypes of preoperative modules (modeling, nail selection, and visualization), intraoperative modules (fluoroscopic image processing and tracking), and preliminary in vitro experimental results to date. Our experiments suggest that the modeling, nail selection, and visualization modules yield adequate results and that fluoroscopic image processing with submillimetric accuracy is practically feasible on clinical images.
This paper presents a novel image-guided robot-based system to assist orthopedic surgeons in performing distal locking of long bone intramedullary nails. The system consists of a bone-mounted miniature robot fitted with a drill guide that provides rigid mechanical guidance for hand-held drilling of the distal screws' pilot holes. The robot is automatically positioned so that the drill guide and nail distal locking axes coincide, using a single fluoroscopic X-ray image. Since the robot is rigidly attached to the intramedullary nail or bone, no leg immobilization or real-time tracking is required. We describe the system and protocol and present a method for accurate and robust drill guide and nail hole localization and registration. The in vitro system accuracy experiments for fronto-parallel viewing show a mean angular error of 1.3 degrees (std = 0.4 degrees ) between the computed drill guide axes and the actual locking holes axes, and a mean 3.0 mm error (std = 1.1 mm) in the entry and exit drill point, which is adequate for successfully locking the nail.