We present a new shape prior segmentation method using graph cuts that is capable of segmenting multiple objects. The shape prior energy is based on a shape distance popular with level set approaches. We also present a multiphase graph cut framework to simultaneously segment multiple, possibly overlapping objects. The multiphase formulation differs from multiway cuts in that the former can account for object overlaps by allowing a pixel to have multiple labels. We then extend the shape prior energy to encompass multiple shape priors. Unlike variational methods, our approach minimizes the segmentation energy directly, without having to compute its gradient, which can be a cumbersome task and often relies on approximations. Experiments demonstrate that our algorithm can cope with image noise and clutter, as well as partial occlusions and affine transformations of the shape.
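The energy described above can be illustrated with a toy example. The sketch below, which is an assumption-laden illustration rather than the paper's implementation, minimizes a segmentation energy with a data term, a pairwise smoothness term, and a shape-distance prior term on a 3×3 image; the image values, the template, and the weights `lam` and `mu` are all made up, and the exhaustive search stands in for the exact graph-cut minimization used in the paper so the sketch stays self-contained.

```python
import numpy as np
from itertools import product

# Hypothetical tiny example: minimize
#   E(L) = sum_p D_p(L_p) + lam * sum_{p~q} [L_p != L_q] + mu * sum_p [L_p != shape_p]
# by exhaustive search over all 2^9 labelings of a 3x3 image. The paper
# minimizes such energies exactly with a graph cut; brute force is used
# here only for illustration.

H = W = 3
img = np.array([[0.9, 0.8, 0.1],
                [0.9, 0.7, 0.2],
                [0.2, 0.1, 0.1]])   # bright pixels are likely foreground
shape = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]])       # prior template: a 2x2 square
lam, mu = 0.3, 0.5                  # smoothness / shape-prior weights

def energy(L):
    data = np.sum(L * (1 - img) + (1 - L) * img)          # data fidelity
    smooth = (np.sum(L[:, 1:] != L[:, :-1])               # horizontal pairs
              + np.sum(L[1:, :] != L[:-1, :]))            # vertical pairs
    prior = np.sum(L != shape)                            # shape distance
    return data + lam * smooth + mu * prior

best = min((np.array(bits).reshape(H, W)
            for bits in product([0, 1], repeat=H * W)),
           key=energy)
```

With these (made-up) weights the minimizer recovers the template, since the bright region agrees with the prior; no gradient of the energy is ever computed.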
Identifying pathogens in complex samples such as blood, urine, and wastewater is critical to detect infection and inform optimal treatment. Surface-enhanced Raman spectroscopy (SERS) and machine learning (ML) can distinguish among multiple pathogen species, but processing complex fluid samples to sensitively and specifically detect pathogens remains an outstanding challenge. Here, we develop an acoustic bioprinter to digitize samples into millions of droplets, each containing just a few cells, which are identified with SERS and ML. We demonstrate rapid printing of 2 pL droplets from solutions containing S. epidermidis, E. coli, and blood; when they are mixed with gold nanorods (GNRs), SERS enhancements of up to 1500× are achieved. We then train a ML model and achieve ≥99% classification accuracy from cellularly pure samples and ≥87% accuracy from cellularly mixed samples. We also obtain ≥90% accuracy from droplets with pathogen:blood cell ratios <1. Our combined bioprinting and SERS platform could accelerate rapid, sensitive pathogen detection in clinical, environmental, and industrial settings.
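The classification step can be mimicked in miniature. The sketch below is purely illustrative: the synthetic Gaussian-peak spectra, the peak positions assigned to each species, and the nearest-centroid classifier are all assumptions for demonstration, not the paper's data or its (unspecified) ML model.

```python
import numpy as np

# Illustrative stand-in for per-droplet spectral classification: generate
# synthetic "SERS-like" spectra as sums of Gaussian peaks plus noise, then
# classify fresh spectra by nearest class centroid. Peak positions are
# invented for this sketch.

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 300)

def spectrum(peaks, noise=0.05):
    """Sum of Gaussian peaks plus Gaussian noise (a toy spectrum)."""
    s = sum(np.exp(-0.5 * ((wavenumbers - c) / 15) ** 2) for c in peaks)
    return s + noise * rng.standard_normal(wavenumbers.size)

species_peaks = {"S. epidermidis": [730, 1330],   # assumed peak positions
                 "E. coli": [650, 1450]}

# "Train": average 50 noisy spectra per species into a class centroid.
centroids = {name: np.mean([spectrum(p) for _ in range(50)], axis=0)
             for name, p in species_peaks.items()}

def classify(s):
    return min(centroids, key=lambda n: np.linalg.norm(s - centroids[n]))

# Evaluate on 100 fresh droplets per species.
correct = sum(classify(spectrum(p)) == name
              for name, p in species_peaks.items() for _ in range(100))
acc = correct / 200
```

A real pipeline would replace the toy spectra with measured droplet spectra and the centroid rule with a trained model, but the train-then-classify structure is the same.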
Automatic interpretation of Transmission Electron Micrograph (TEM) volumes is central to advancing current understanding of neural circuitry. In the context of TEM image analysis, tracing 3D neuronal structures is a significant problem. This work proposes a new model using the conditional random field (CRF) framework with higher order potentials for tracing multiple neuronal structures in 3D. The model consists of two key features. First, the higher order CRF cost is designed to enforce label smoothness in 3D and capture rich textures inherent in the data. Second, a technique based on semi-supervised edge learning is used to propagate high confidence structural edges during the tracing process. In contrast to predominantly edge based methods in the TEM tracing literature, this work simultaneously combines regional texture and learnt edge features into a single framework. Experimental results show that the proposed method outperforms more traditional models in tracing neuronal structures from TEM stacks.
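The 3D label-smoothness idea can be sketched with a much simpler relative of the model. The code below is not the paper's higher-order CRF: it uses only a unary data term and pairwise 3D smoothness, minimized with iterated conditional modes (ICM), on a synthetic volume; the volume, the weight `beta`, and the sweep count are all assumptions, and the texture and learnt-edge potentials are omitted.

```python
import numpy as np

# Simplified pairwise-CRF stand-in for 3D tracing:
#   E(L) = sum_v |I_v - L_v| + beta * sum_{v~w} [L_v != L_w]
# over a 6-connected 3D grid, minimized approximately with ICM.

rng = np.random.default_rng(1)
vol = np.zeros((6, 6, 6))
vol[2:5, 2:5, 2:5] = 1.0                       # a bright 3x3x3 "structure"
vol += 0.3 * rng.standard_normal(vol.shape)    # imaging noise

beta = 0.4                                     # assumed smoothness weight
L = (vol > 0.5).astype(int)                    # noisy initial labeling

def local_energy(L, idx, label):
    """Energy contribution of assigning `label` at voxel `idx`."""
    z, y, x = idx
    e = abs(vol[idx] - label)                  # unary data term
    for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        nz, ny, nx = z + dz, y + dy, x + dx
        if 0 <= nz < 6 and 0 <= ny < 6 and 0 <= nx < 6:
            e += beta * (label != L[nz, ny, nx])   # 3D label smoothness
    return e

for _ in range(5):                             # ICM sweeps
    for idx in np.ndindex(L.shape):
        L[idx] = min((0, 1), key=lambda lab: local_energy(L, idx, lab))
```

The smoothness term removes isolated noisy voxels while keeping the contiguous structure, which is the role the (much richer) higher-order potentials play in the full model.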
The expression levels of rod opsin and glial fibrillary acidic protein (GFAP) capture important structural changes in the retina during injury and recovery. Quantitatively measuring these expression levels in confocal micrographs requires identifying the retinal layer boundaries and spatially registering the layers across different images. In this paper, a method to segment the retinal layers using a parametric active contour model is presented. Spatially aligned expression levels across different images are then determined by thresholding the solution to a Dirichlet boundary value problem. Our analysis provides quantitative metrics of retinal restructuring that are needed for improving retinal therapies after injury.
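The Dirichlet boundary value step can be sketched in a few lines. In the sketch below, which assumes flat layer boundaries, a 40×40 grid, plain Jacobi relaxation, and a 0.5 threshold purely for illustration, Laplace's equation is solved with u = 0 on one boundary and u = 1 on the other; thresholding the harmonic solution then places an intermediate contour at a consistent relative depth between the two layers.

```python
import numpy as np

# Solve Laplace's equation on a strip with Dirichlet conditions
# (u = 0 on the top layer boundary, u = 1 on the bottom one) by Jacobi
# relaxation; level sets of u interpolate between the two boundaries.

n = 40
u = np.full((n, n), 0.5)
u[0, :], u[-1, :] = 0.0, 1.0             # the two layer boundaries

for _ in range(2000):                     # Jacobi sweeps toward harmonic u
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    u[:, 0], u[:, -1] = u[:, 1], u[:, -2] # reflecting left/right sides

upper_half = u >= 0.5   # boundary of this mask is the halfway contour
```

Because u varies smoothly from 0 to 1 between the boundaries, the same threshold picks out spatially corresponding depths in every image, regardless of how the layers bend in real micrographs.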
Recent advances in bio-molecular imaging have afforded biologists a more thorough understanding of cellular functions in complex tissue structures. For example, high resolution fluorescence images of the retina reveal details about tissue restructuring during detachment experiments. Time sequence imagery of microtubules provides insight into subcellular dynamics in response to cancer treatment drugs. However, technological progress is accompanied by a rapid proliferation of image data. Traditional analysis methods, namely manual measurements and qualitative assessments, become time consuming and are often nonreproducible. Computer vision tools can efficiently analyze these vast amounts of data with promising results. This paper provides an overview of several challenges faced in bioimage processing and our recent progress in addressing these issues.