Long-term integration of neuroprosthetic devices is challenged by reactive tissue responses that compromise the brain-device interface. The contribution of physical insertion parameters to immediate damage is not well described. We have developed an ex vivo preparation to capture real-time images of tissue deformation during device insertion, using thick tissue slices from rat brains prepared with fluorescently labeled vasculature. Qualitative and quantitative assessments of damage were made for insertions using devices with different tip shapes inserted at different speeds. Direct damage to the vasculature included severing, rupturing, and dragging, and was often observed several hundred micrometers from the insertion site. Slower insertions generally resulted in more vascular damage. Cortical surface features greatly affected insertion success; insertions attempted through pial blood vessels resulted in severe tissue compression. Automated image analysis techniques were developed to quantify tissue deformation and calculate mean effective strain. Quantitative measures demonstrated that, within the range of experimental conditions studied, faster insertion of sharp devices resulted in lower mean effective strain. The wide variability observed among insertions made under identical conditions indicates that multiple biological factors may contribute to tissue distortion and influence insertion success.
A computational approach is presented for modeling and quantifying the structure and dynamics of the nematode C. elegans observed by time-lapse microscopy. Worm shape and conformations are expressed in a decoupled manner. Complex worm movements are expressed in terms of three primitive patterns: peristaltic progression, deformation, and translation. The model has been incorporated into algorithms for segmentation and simultaneous tracking of multiple worms in a field, some of which may be interacting in complex ways. A recursive Bayesian filter is used for tracking. Unpredictable behaviors associated with interactions are resolved by multiple-hypothesis tracking. Our algorithm can track worms of diverse sizes and conformations (coiled/uncoiled) in the presence of imaging artifacts and clutter, even when worms overlap one another. A two-observer performance assessment was conducted over 16 image sequences representing wild-type and uncoordinated mutants, as a function of worm size, conformation, presence of clutter, and worm entanglement. Overall, detected tracking failures were 1.41%, undetected tracking failures were 0.41%, and segmentation errors were 1.11% of worm length. When worms overlap, our method reduced undetected failures from 12% to 1.75%, and segmentation error from 11% to 5%. Our method provides the basis for reliable morphometric and locomotory analysis of freely behaving worm populations.
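The recursive Bayesian filter mentioned above alternates a motion-model prediction with a measurement update each frame. The following is a minimal 1-D discrete sketch of that predict-update cycle; the grid size, motion kernel, and likelihood values are illustrative assumptions, not the paper's actual state representation or implementation.

```python
import numpy as np

def predict(belief, kernel):
    # Prediction step: convolve the belief with a displacement kernel
    # encoding the motion model (small random shifts per frame).
    return np.convolve(belief, kernel, mode="same")

def update(belief, likelihood):
    # Update step: weight the predicted belief by the observation
    # likelihood and renormalize to a probability distribution.
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Toy example: track a centroid along a 1-D axis of 10 bins.
belief = np.full(10, 0.1)                 # uniform prior
kernel = np.array([0.25, 0.5, 0.25])      # hypothetical motion model
likelihood = np.zeros(10)
likelihood[4], likelihood[5] = 0.8, 0.2   # hypothetical detection evidence
belief = update(predict(belief, kernel), likelihood)
```

In a real multi-worm tracker, the same cycle runs per worm, and ambiguous measurements during overlaps spawn the multiple hypotheses the abstract describes.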
This updates an earlier publication by the authors describing a robust framework for detecting vasculature in noisy retinal fundus images. We improved the handling of the "central reflex" phenomenon, in which a vessel has a "hollow" appearance. This is particularly pronounced in dual-wavelength images acquired at 570 and 600 nm for retinal oximetry, and is prominent in the 600 nm images that are sensitive to blood oxygen content. Improved segmentation of these vessels is needed to improve oximetry. We show that the use of a generalized dual-Gaussian model for the vessel intensity profile, instead of a single Gaussian, yields a significant improvement. Our method can account for variations in the strength of the central reflex, the relative contrast, width, orientation, scale, and imaging noise. It also enables the classification of regular and central reflex vessels. The proposed method yielded a sensitivity of 72%, compared to 38% by the algorithm of Can et al. and 60% by the robust detection based on a single-Gaussian model. The specificities for the three methods were 95%, 97%, and 98%, respectively.
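The intuition behind a dual-Gaussian profile can be sketched as a wide Gaussian trough for the dark vessel plus a narrower Gaussian peak for the bright central reflex. The parameterization and all numeric values below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dual_gaussian_profile(x, a1, s1, a2, s2, offset=0.0):
    # Illustrative vessel cross-section model: a wide Gaussian trough
    # (the vessel) with a narrower Gaussian bump (the central reflex)
    # superimposed at its center.
    vessel = a1 * np.exp(-x**2 / (2.0 * s1**2))
    reflex = a2 * np.exp(-x**2 / (2.0 * s2**2))
    return offset - vessel + reflex

x = np.linspace(-10.0, 10.0, 201)
profile = dual_gaussian_profile(x, a1=1.0, s1=4.0, a2=0.5, s2=1.0, offset=1.0)
# The reflex term lifts intensity at the vessel center, producing the
# "hollow" appearance; with a2 = 0 the model reduces to a single Gaussian.
```

Fitting such a model per cross-section lets the detector accommodate reflex strength (a2), vessel width (s1), and contrast (a1), and a negligible fitted a2 naturally classifies the vessel as "regular" rather than "central reflex."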
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules are combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain five fluorescent channels. Each channel consists of 6,000 × 10,000 × 500 voxels at 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types and to perform large-scale analysis of the spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server, 10 cores per socket, 2 threads per core, and 1 TB of RAM, running Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all of these processing steps in a collaborative multi-user, multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all processing steps, and performs fully multi-threaded execution of all code, including open- and closed-source third-party libraries.
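The orchestration pattern described above, running compiled modules stage by stage with logging and thread-level parallelism across channels, can be sketched in a few lines of Python. The stage names, channel count, and command layout here are hypothetical placeholders, not FARSIGHT's actual executables or flags.

```python
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

# Hypothetical stage names; real FARSIGHT modules and invocations differ.
STAGES = ["mosaic", "preprocess", "segment", "extract_features"]

def run_stage(stage, channel):
    log.info("start %s on channel %d", stage, channel)
    # In the real pipeline this would invoke a compiled C++ module, e.g.
    # subprocess.run([stage, f"ch{channel}.tif"], check=True)
    log.info("done %s on channel %d", stage, channel)
    return (stage, channel)

results = []
for stage in STAGES:  # stages run strictly in order...
    # ...while the fluorescent channels within a stage run in parallel
    with ThreadPoolExecutor(max_workers=5) as pool:
        results += list(pool.map(lambda c: run_stage(stage, c), range(5)))
```

Because each stage writes its own log lines, a failed run can be resumed at the last completed stage, which matters when a single dataset exceeds 250 GB.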