Quantifying cell behaviors in animal early embryogenesis remains a challenging issue requiring in toto imaging and automated image analysis. We designed a framework for imaging and reconstructing unstained whole zebrafish embryos for their first 10 cell division cycles and report measurements along the cell lineage with micrometer spatial resolution and minute temporal accuracy. Point-scanning multiphoton excitation optimized to preferentially probe the innermost regions of the embryo provided intrinsic signals highlighting all mitotic spindles and cell boundaries. Automated image analysis revealed the phenomenology of cell proliferation. Blastomeres continuously drift out of synchrony. After the 32-cell stage, the cell cycle lengthens according to cell radial position, leading to apparent division waves. Progressive amplification of this process is the rule, contrasting with classical descriptions of abrupt changes in the system dynamics.
Summary: This work describes a systematic evaluation of several autofocus functions used in analytical fluorescent image cytometry studies of counterstained nuclei. Focusing is the first step in the automatic fluorescence in situ hybridization analysis of cells. Thirteen functions were evaluated using qualitative and quantitative procedures. For the last of these procedures, a figure-of-merit (FOM) is proposed that takes into account five important features of the focusing function. Our results show that functions based on correlation measures have the best performance for this type of image.
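The abstract does not detail the correlation-based functions it found best; as an illustration only, one classic autocorrelation-based focus measure (Vollath's F4) can be sketched in a few lines. The function name and the list-of-lists image format are choices made here for the sketch, not the paper's:

```python
def vollath_f4(image):
    """Vollath's F4 autocorrelation focus measure.

    image: 2-D list of grayscale pixel intensities (rows of equal length).
    For typical images, sharper (better-focused) frames yield larger values,
    because the one-pixel autocorrelation exceeds the two-pixel one.
    """
    rows, cols = len(image), len(image[0])
    # Sum of products of horizontally adjacent pixels (lag 1).
    sum1 = sum(image[y][x] * image[y][x + 1]
               for y in range(rows) for x in range(cols - 1))
    # Sum of products of pixels two columns apart (lag 2).
    sum2 = sum(image[y][x] * image[y][x + 2]
               for y in range(rows) for x in range(cols - 2))
    return sum1 - sum2
```

When sweeping the focal axis, the frame maximizing such a score is taken as best focused; an FOM like the one the paper proposes would then rate the whole focus curve rather than a single frame.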
Respiratory motion in emission tomography leads to reduced image quality. Correction methodology developed to date has concentrated on the use of respiratory-synchronized acquisitions leading to gated frames. Such frames, however, have a low signal-to-noise ratio because they contain reduced statistics. In this work, we describe the implementation of an elastic transformation within a list-mode-based reconstruction for the correction of respiratory motion over the thorax, allowing the use of all data available throughout a respiration-averaged acquisition. The developed algorithm was evaluated using datasets of the NCAT phantom generated at different points throughout the respiratory cycle. List-mode-data-based PET-simulated frames were subsequently produced by combining the NCAT datasets with Monte Carlo simulation. A non-rigid registration algorithm based on B-spline basis functions was employed to derive transformation parameters accounting for the respiratory motion using the NCAT dynamic CT images. The displacement matrices derived were subsequently applied during the image reconstruction of the original emission list-mode data. Two different implementations for the incorporation of the elastic transformations within the one-pass list-mode EM (OPL-EM) algorithm were developed and evaluated. The corrected images were compared with those produced using an affine transformation of list-mode data prior to reconstruction, as well as with uncorrected respiration-averaged images. Results demonstrate that although both correction techniques lead to significant improvements in accounting for respiratory motion artefacts in the lung fields, the elastic-transformation-based correction leads to a more uniform improvement across the lungs for different lesion sizes and locations.
Abstract: We propose a new spatio-temporal elastic registration algorithm for motion reconstruction from a series of images. The specific application is to estimate displacement fields from two-dimensional ultrasound sequences of the heart. The basic idea is to find a spatio-temporal deformation field that effectively compensates for the motion by minimizing a difference with respect to a reference frame. The key feature of our method is the use of a semi-local spatio-temporal parametric model for the deformation using splines, and the reformulation of the registration task as a global optimization problem. The scale of the spline model controls the smoothness of the displacement field. Our algorithm uses a multiresolution optimization strategy to obtain higher speed and robustness. We evaluated the accuracy of our algorithm using a synthetic sequence generated with an ultrasound simulation package, together with a realistic cardiac motion model. We compared our new global multiframe approach with a previous method based on pairwise registration of consecutive frames to demonstrate the benefits of introducing temporal consistency. Finally, we applied the algorithm to the regional analysis of the left ventricle. Displacement and strain parameters were evaluated, showing significant differences between the normal and pathological segments, thereby illustrating the clinical applicability of our method.
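A minimal sketch of the central ingredient, a displacement field parameterized by spline control points, assuming a 1-D field on a uniform cubic B-spline grid padded by one control point on the left (a common free-form-deformation convention chosen here for illustration; the paper's actual model is spatio-temporal):

```python
def cubic_bspline_basis(t):
    """The four cubic B-spline basis weights for fractional position t in [0, 1)."""
    return (
        (1 - t) ** 3 / 6.0,
        (3 * t**3 - 6 * t**2 + 4) / 6.0,
        (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
        t**3 / 6.0,
    )

def displacement(x, coeffs, spacing):
    """Evaluate a 1-D B-spline-parameterized displacement field at position x.

    coeffs: control-point displacements on a uniform grid with the given
    spacing; the grid is assumed to start one spacing to the left of x = 0,
    so coeffs[0] pads the left boundary (an assumption of this sketch).
    """
    i = int(x // spacing)          # index of the spanning control interval
    t = x / spacing - i            # fractional position within that interval
    w = cubic_bspline_basis(t)
    # Only four control points influence any given x (local support).
    return sum(w[k] * coeffs[i + k] for k in range(4))
```

Because the basis weights sum to one, a constant set of control points reproduces a constant displacement, which is a quick sanity check; registration then amounts to optimizing the coefficients so that the deformed frames match the reference, and the control-point spacing plays the role of the smoothness scale mentioned above.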
Magnetic Resonance Imaging (MRI), a reference examination for cardiac morphology and function in humans, allows imaging of the cardiac right ventricle (RV) with high spatial resolution. The segmentation of the RV is a difficult task due to the variable shape of the RV and its ill-defined borders in these images. The aim of this paper is to evaluate several RV segmentation algorithms on common data. More precisely, we report here the results of the Right Ventricle Segmentation Challenge (RVSC), held as an on-site competition during the MICCAI 2012 conference. Seven automated and semi-automated methods have been considered, among them three atlas-based methods, two prior-based methods, and two prior-free, image-driven methods that make use of cardiac motion. The obtained contours were compared against a manual tracing by an expert cardiac radiologist, taken as a reference, using the Dice metric and the Hausdorff distance. We herein describe the cardiac data, composed of 48 patients, the evaluation protocol, and the results. The best results show that an average 80% Dice accuracy and a 1 cm Hausdorff distance can be expected from semi-automated algorithms for this challenging task on these datasets, and that an automated algorithm can reach similar performance, at the expense of a high computational burden. Data are now publicly available and the website remains open for new submissions (http://www.litislab.eu/rvsc/).
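The two figures of merit named above are standard and easy to state precisely; the following is an illustrative pure-Python version operating on pixel sets, not the challenge's evaluation code:

```python
import math

def dice(a, b):
    """Dice overlap between two segmentations given as sets of (row, col) pixels."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance (Euclidean) between two non-empty pixel sets."""
    def directed(p, q):
        # Largest distance from a point of p to its nearest point of q.
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))
```

Dice rewards volume overlap (1.0 is perfect agreement) while the Hausdorff distance penalizes the single worst contour deviation, which is why the two are usually reported together.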
Glaucoma detection in color fundus images is a challenging task that requires expertise and years of practice. In this study we explored the application of different Convolutional Neural Network (CNN) schemes to show how performance is influenced by relevant factors such as dataset size, architecture, and the use of transfer learning versus newly defined architectures. We also compared the performance of the CNN-based system with that of human evaluators and explored the influence of integrating images with data collected from the patients' clinical history. We achieved the best performance using a transfer learning scheme with VGG19, reaching an AUC of 0.94 with sensitivity and specificity ratios similar to those of the expert evaluators in the study. The experimental results, obtained using three different datasets with 2313 images, indicate that this solution can be a valuable option for the design of a computer-aided system for the detection of glaucoma in large-scale screening programs.
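An AUC such as the one reported above can be computed from raw classifier scores without plotting a ROC curve, via the Mann-Whitney formulation. This is a minimal sketch assuming binary 0/1 labels, not the authors' evaluation code:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic.

    Equals the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one; ties count one half.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the scores separate the classes perfectly, while 0.5 is chance level; the 0.94 quoted above sits close to the former.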
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset, available online for the evaluation of new algorithms; (2) a quantitative scoring system for the objective comparison of algorithms; and (3) a performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.