We present a new shape prior segmentation method using graph cuts that is capable of segmenting multiple objects. The shape prior energy is based on a shape distance popular with level set approaches. We also present a multiphase graph cut framework to simultaneously segment multiple, possibly overlapping objects. The multiphase formulation differs from multiway cuts in that the former can account for object overlaps by allowing a pixel to have multiple labels. We then extend the shape prior energy to encompass multiple shape priors. A major advantage of our approach over variational methods is that the segmentation energy is minimized directly, without having to compute its gradient, which can be a cumbersome task and often relies on approximations. Experiments demonstrate that our algorithm can cope with image noise and clutter, as well as partial occlusions and affine transformations of the shape.
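As a concrete illustration of the shape prior term, the sketch below computes the symmetric-difference shape distance that is common in the level-set literature; the exact distance used by the method may differ, and the masks here are hypothetical.

```python
import numpy as np

def shape_distance(mask_a, mask_b):
    # Symmetric-difference shape distance between two binary masks:
    # the area where exactly one of the two shapes is present.
    a = np.asarray(mask_a, dtype=float)
    b = np.asarray(mask_b, dtype=float)
    return float(np.sum((a - b) ** 2))

square = np.zeros((8, 8)); square[2:6, 2:6] = 1    # 4x4 square
shifted = np.zeros((8, 8)); shifted[2:6, 3:7] = 1  # same square, shifted right
```

A prior of this form penalizes segmentations whose mask deviates from a template shape, and is invariant extensions (alignment over translations and affine maps) can be layered on top.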
Identifying pathogens in complex samples such as blood, urine, and wastewater is critical to detect infection and inform optimal treatment. Surface-enhanced Raman spectroscopy (SERS) and machine learning (ML) can distinguish among multiple pathogen species, but processing complex fluid samples to sensitively and specifically detect pathogens remains an outstanding challenge. Here, we develop an acoustic bioprinter to digitize samples into millions of droplets, each containing just a few cells, which are identified with SERS and ML. We demonstrate rapid printing of 2 pL droplets from solutions containing S. epidermidis, E. coli, and blood; when they are mixed with gold nanorods (GNRs), SERS enhancements of up to 1500× are achieved. We then train an ML model and achieve ≥99% classification accuracy from cellularly pure samples and ≥87% accuracy from cellularly mixed samples. We also obtain ≥90% accuracy from droplets with pathogen:blood cell ratios <1. Our combined bioprinting and SERS platform could accelerate rapid, sensitive pathogen detection in clinical, environmental, and industrial settings.
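The classification step can be pictured with a toy stand-in: the abstract does not name the model, so the sketch below uses a simple nearest-centroid classifier on synthetic per-droplet "spectra" (random templates plus noise), purely to illustrate per-droplet species assignment.

```python
import random

random.seed(0)
n_bins = 32

# Hypothetical stand-ins for per-droplet SERS spectra: each "species" has a
# template spectrum, and each droplet is a noisy copy of its template.
templates = [[random.random() for _ in range(n_bins)] for _ in range(3)]
labels = [random.randrange(3) for _ in range(300)]
spectra = [[t + random.gauss(0, 0.05) for t in templates[k]] for k in labels]

def dist2(u, v):
    # Squared Euclidean distance between two spectra.
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Nearest-centroid classifier: assign each spectrum to the closest class mean.
centroids = []
for k in range(3):
    members = [s for s, l in zip(spectra, labels) if l == k]
    centroids.append([sum(col) / len(members) for col in zip(*members)])

pred = [min(range(3), key=lambda k: dist2(s, centroids[k])) for s in spectra]
accuracy = sum(p == l for p, l in zip(pred, labels)) / len(labels)
```

Real SERS spectra overlap far more than these synthetic templates, which is why the reported accuracies drop from ≥99% on pure samples to ≥87% on mixtures.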
Automatic interpretation of Transmission Electron Micrograph (TEM) volumes is central to advancing current understanding of neural circuitry. In the context of TEM image analysis, tracing 3D neuronal structures is a significant problem. This work proposes a new model using the conditional random field (CRF) framework with higher order potentials for tracing multiple neuronal structures in 3D. The model consists of two key features. First, the higher order CRF cost is designed to enforce label smoothness in 3D and capture rich textures inherent in the data. Second, a technique based on semi-supervised edge learning is used to propagate high confidence structural edges during the tracing process. In contrast to predominantly edge based methods in the TEM tracing literature, this work simultaneously combines regional texture and learnt edge features into a single framework. Experimental results show that the proposed method outperforms more traditional models in tracing neuronal structures from TEM stacks.
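The label-smoothness component can be pictured with the standard pairwise Potts model, which higher-order potentials generalize beyond voxel pairs; the sketch below is a hypothetical baseline, not the paper's exact cost.

```python
def potts_energy(labels, lam=1.0):
    # Pairwise Potts smoothness over a 3D label volume: every 6-connected
    # voxel pair with differing labels pays a penalty lam.
    z, y, x = len(labels), len(labels[0]), len(labels[0][0])
    e = 0.0
    for k in range(z):
        for j in range(y):
            for i in range(x):
                for dk, dj, di in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
                    kk, jj, ii = k + dk, j + dj, i + di
                    if kk < z and jj < y and ii < x:
                        if labels[k][j][i] != labels[kk][jj][ii]:
                            e += lam
    return e

# A 2x2x2 volume split into two labels along z: four discordant pairs.
vol = [[[0, 0], [0, 0]], [[1, 1], [1, 1]]]
```

Higher-order terms replace the per-pair penalty with a cost over larger voxel cliques, which lets the model reward consistent labelings of whole textured patches rather than individual edges.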
The expression levels of rod opsin and glial fibrillary acidic protein (GFAP) capture important structural changes in the retina during injury and recovery. Quantitatively measuring these expression levels in confocal micrographs requires identifying the retinal layer boundaries and establishing spatial correspondence between the layers across different images. In this paper, a method to segment the retinal layers using a parametric active contour model is presented. Spatially aligned expression levels across different images are then determined by thresholding the solution of a Dirichlet boundary value problem. Our analysis provides quantitative metrics of retinal restructuring that are needed for improving retinal therapies after injury.
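The Dirichlet-thresholding idea can be sketched in one dimension: solve Laplace's equation with value 0 on one layer boundary and 1 on the other, then threshold the harmonic solution to pick out intermediate, spatially corresponding curves. The setup below (grid size, Jacobi iteration) is illustrative, not the paper's solver.

```python
# 1D Laplace problem between two layer boundaries: u = 0 at one end,
# u = 1 at the other, solved by Jacobi iteration.
n = 21
u = [0.0] * n
u[n - 1] = 1.0
for _ in range(2000):
    # Interior points relax toward the average of their neighbors;
    # boundary values stay fixed (Dirichlet conditions).
    u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, n - 1)] + [u[n - 1]]

# The harmonic solution in 1D is linear, so thresholding at level 0.5
# recovers the point halfway between the two boundaries.
mid = min(range(n), key=lambda i: abs(u[i] - 0.5))
```

Because the level sets of the harmonic solution interpolate smoothly between the two boundaries, thresholding at the same level in every image yields layers that correspond spatially even when the boundary shapes differ.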
Recent advances in bio-molecular imaging have afforded biologists a more thorough understanding of cellular functions in complex tissue structures. For example, high resolution fluorescence images of the retina reveal details about tissue restructuring during detachment experiments. Time sequence imagery of microtubules provides insight into subcellular dynamics in response to cancer treatment drugs. However, technological progress is accompanied by a rapid proliferation of image data. Traditional analysis methods, namely manual measurements and qualitative assessments, become time consuming and are often nonreproducible. Computer vision tools can efficiently analyze these vast amounts of data with promising results. This paper provides an overview of several challenges faced in bioimage processing and our recent progress in addressing these issues.
The presence of pathogens in complex, multi-cellular samples such as blood, urine, mucus, and wastewater can serve as an indicator of active infection, and identifying them can inform how threats to human and environmental health are treated [1][2][3][4][5][6][7]. Surface-enhanced Raman spectroscopy (SERS) and machine learning (ML) can distinguish multiple pathogen species and strains [8][9][10][11], but processing complex fluid samples to sensitively and specifically detect pathogens remains an outstanding challenge. Here, we develop an acoustic bioprinting platform to digitize samples into millions of droplets, each containing just a few cells, which are then identified with SERS and ML. As a proof of concept, we focus on bacterial bloodstream infections. We demonstrate ∼2 pL droplet generation at 1 kHz ejection rates from solutions containing S. epidermidis, E. coli, and mouse red blood cells (RBCs) mixed with gold nanorods (GNRs); use of parallel printing heads would enable processing of mL-volume samples in minutes [12]. Droplets printed with GNRs achieve spectral enhancements of up to 1500× compared to samples printed without GNRs. With this improved signal-to-noise ratio, we train an ML model on droplets consisting of either pure cells with GNRs or mixed, multicellular species with GNRs, using scanning electron microscopy images as our ground truth. We achieve ≥99% classification accuracy for droplets printed from cellularly pure samples, and ≥87% accuracy for droplets printed from mixtures of S. epidermidis, E. coli, and RBCs. We compute the feature importance at each wavenumber and confirm that the most significant spectral bands for classification correspond to biologically relevant Raman peaks within our cells. This combined acoustic droplet ejection, SERS, and ML platform could enable clinical and industrial translation of SERS-based cellular identification.
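The per-wavenumber feature-importance step can be pictured with a simple stand-in: rank each spectral bin by how far the class-mean spectra spread apart there. The abstract's model-based importance is more sophisticated; this sketch only illustrates the idea, with hypothetical inputs.

```python
def band_importance(spectra_by_class):
    # For each wavenumber bin, score importance as the spread (max - min)
    # of the class-mean spectra at that bin.
    n_bins = len(next(iter(spectra_by_class.values()))[0])
    means = {k: [sum(s[i] for s in v) / len(v) for i in range(n_bins)]
             for k, v in spectra_by_class.items()}
    imp = []
    for i in range(n_bins):
        vals = [m[i] for m in means.values()]
        imp.append(max(vals) - min(vals))
    return imp

# Toy example: class "A" has a peak in bin 1 that class "B" lacks.
imp = band_importance({"A": [[0, 1, 0], [0, 1, 0]],
                       "B": [[0, 0, 0], [0, 0, 0]]})
```

Bins that score highly under any such measure can then be checked against known Raman peak assignments, which is how spectral importance is tied back to cell biology.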
Main

Reliable detection and identification of microorganisms is crucial for medical diagnostics, environmental monitoring, food production and safety, biodefense, biomanufacturing, and pharmaceutical development. Samples in these settings typically contain as few as 1-100 colony-forming units (CFU)/mL [13-15], necessitating the use of in vitro liquid culturing for pathogen detection. It is estimated that less than 2% of all bacteria can be readily cultured using current laboratory protocols, and even amongst that 2%, culturing can take hours to days depending on the species [16][17][18][19]. In the case of medical diagnostics, broad-spectrum antibiotics are often administered while waiting for culture results, contributing to an alarming rise in antibiotic-resistant bacteria. Antimicrobial resistance currently leads to ∼700,000 deaths per year and is predicted to become the leading cause of death by 2050 [20]. To combat these trends, it is crucial to develop methods to rapidly detect and identify bacteria in diverse, complex samples. Raman spectroscopy is a label-free, vibrational spectroscopic technique that has recently emerged as a promising platform for bacterial species identification.
We present a novel method to quantitatively analyze confocal microscope images of retinas. We automatically detect nuclei within the outer nuclear layer (ONL) in a retinal image. Based on nuclei detection results, we also automatically measure the thickness of the ONL and the local cell density within the ONL. These measurements provide the first thorough quantitative analysis of retinal images. Our results not only verify previous conclusions about retinal restructuring during detachment, but also provide biologists with significant information about the regional responses in the ONL.
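The local-density measurement can be illustrated with a minimal helper: count detected nuclei in a sliding window along the layer and normalize by window size. The function name, window scheme, and coordinates below are all hypothetical; they only sketch the kind of regional statistic described.

```python
def local_density(centers, x0, x1, window, step):
    # centers: (x, y) positions of detected nuclei within the ONL.
    # Slide a window of width `window` from x0 to x1 in increments of `step`
    # and report nuclei per unit length in each window.
    xs = sorted(x for x, _ in centers)
    out = []
    x = x0
    while x + window <= x1:
        count = sum(1 for c in xs if x <= c < x + window)
        out.append(count / window)
        x += step
    return out

# Toy example: ten nuclei spaced one unit apart give uniform density 1.0.
densities = local_density([(i, 0) for i in range(10)], 0, 10, window=5, step=5)
```

Profiling such densities along the layer is what makes regional responses in the ONL, rather than a single global count, visible.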
In many neurophysiological studies, understanding the neuronal circuitry of the brain requires detailed 3D models of the nerve cells and their synapses. Typically, researchers build the 3D models by manually tracing the 2D cross-sectional profiles of the 3D structures from serial electron micrograph (EM) stacks and then construct the models from these 2D contours. While current computer-aided techniques can reduce the tracing time, they often require extensive user interaction. We propose a segmentation framework to extract the 2D profiles that is fast and requires minimal user interaction. The framework uses graph cuts to minimize an energy defined over the image intensity and the flux of the intensity gradient field. Furthermore, to correct segmentation errors, our framework allows for efficient and intuitive editing of the initial results.
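The graph-cut machinery behind such energies can be sketched on a toy problem. The example below segments a four-pixel 1D "image" as an s-t min cut with a simple intensity data term and pairwise smoothness; the paper's actual energy additionally uses the flux of the gradient field, which this illustrative construction omits.

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp max-flow on a dense capacity matrix; by the max-flow/
    # min-cut theorem, the flow value equals the minimum cut (energy).
    n = len(cap)
    flow = [[0.0] * n for _ in range(n)]
    total = 0.0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total, flow
        path, v = [], t
        while v != s:                       # walk the BFS tree back to s
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug
        total += aug

# Node 0 = source (foreground terminal), node 5 = sink, nodes 1-4 = pixels.
intensity = [0.9, 0.8, 0.2, 0.1]            # bright pixels look like foreground
n = 6
cap = [[0.0] * n for _ in range(n)]
for p, g in enumerate(intensity, start=1):
    cap[0][p] = g                           # data term: evidence for foreground
    cap[p][5] = 1.0 - g                     # data term: evidence for background
for p in range(1, 4):                       # pairwise smoothness between neighbors
    cap[p][p + 1] = cap[p + 1][p] = 0.3

cut_value, flow = max_flow(cap, 0, 5)

# Pixels still reachable from the source in the residual graph are foreground.
seen, stack = {0}, [0]
while stack:
    u = stack.pop()
    for v in range(n):
        if v not in seen and cap[u][v] - flow[u][v] > 1e-12:
            seen.add(v)
            stack.append(v)
labels = [1 if p in seen else 0 for p in range(1, 5)]
```

Production graph-cut segmenters use specialized max-flow solvers rather than Edmonds-Karp, but the construction (terminal weights for data terms, neighbor weights for smoothness, labels from the cut) is the same.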