Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test set of 14 images from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated by the average aggregated Jaccard index (AJI) on the test set, a metric that prioritizes accurate instance segmentation over mere semantic segmentation. More than half of the teams that completed the challenge outperformed a previous baseline [1]. Among the trends that contributed to increased accuracy were color normalization and heavy data augmentation. Fully convolutional networks inspired by variants of U-Net [2], FCN [3], and Mask R-CNN [4] were popular, typically built on ResNet [5] or VGG [6] backbones. Watershed segmentation applied to predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
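Since entries were ranked by AJI, it helps to make the metric concrete. The sketch below is a minimal NumPy implementation of the aggregated Jaccard index as commonly defined for this challenge (match each ground-truth nucleus to its highest-IoU prediction, accumulate their intersections and unions, and add the areas of unmatched predictions to the union); it is an illustration, not the official evaluation code.

```python
import numpy as np

def aggregated_jaccard_index(gt, pred):
    """Aggregated Jaccard Index (AJI) for instance segmentation.

    gt, pred: integer label maps of the same shape; 0 is background,
    each positive integer labels one nucleus instance.
    """
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    used = set()
    inter_sum, union_sum = 0, 0
    for i in gt_ids:
        g = gt == i
        # If no prediction overlaps this nucleus, its whole area
        # still counts toward the union (intersection stays 0).
        best_j, best_iou = None, 0.0
        best_inter, best_union = 0, int(g.sum())
        for j in pred_ids:
            p = pred == j
            inter = int(np.logical_and(g, p).sum())
            if inter == 0:
                continue
            union = int(np.logical_or(g, p).sum())
            iou = inter / union
            if iou > best_iou:
                best_j, best_iou = j, iou
                best_inter, best_union = inter, union
        inter_sum += best_inter
        union_sum += best_union
        if best_j is not None:
            used.add(best_j)
    # Penalize unmatched (false-positive) predictions.
    for j in pred_ids:
        if j not in used:
            union_sum += int((pred == j).sum())
    return inter_sum / union_sum if union_sum else 0.0
```

Note how the metric differs from a plain Jaccard index on the binary masks: splitting one nucleus into two predicted instances, or leaving spurious predictions, lowers the score even when the semantic segmentation is pixel-perfect.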
Motivation: Neural networks have been widely used to analyze high-throughput microscopy images. However, the performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Highly relevant to the goal of automated cell phenotyping from microscopy image data is rotation invariance. Here we consider the application of two schemes for encoding rotation equivariance and invariance in a convolutional neural network, namely, the group-equivariant CNN (G-CNN), and a new architecture with simple, efficient conic convolution, for classifying microscopy images. We additionally integrate the 2D discrete Fourier transform (2D-DFT) as an effective means for encoding global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet).
Results: We evaluated the efficacy of CFNet and G-CNN as compared to a standard CNN for several different image classification tasks, including simulated and real microscopy images of subcellular protein localization, and demonstrated improved performance. We believe CFNet has the potential to improve many high-throughput microscopy image analysis applications.
Availability and implementation: Source code of CFNet is available at: https://github.com/bchidest/CFNet.
Supplementary information: Supplementary data are available at Bioinformatics online.
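To make the role of the DFT concrete: the magnitude of a DFT taken along a cyclic axis is invariant to circular shifts along that axis, and rotating the input cyclically shifts the responses of rotated filter copies. The toy sketch below illustrates this principle with 90-degree rotations of the raw image standing in for the conic/group-convolution feature axis; it is not the CFNet architecture itself, whose details are in the paper and repository.

```python
import numpy as np

def rotation_stack(img, n_rot=4):
    """Responses of n_rot rotated copies of the input, stacked along
    a new leading 'rotation' axis. Here the 'feature' is simply the
    image rotated by k*90 degrees; in CFNet this axis would come from
    conic/group convolutions instead."""
    return np.stack([np.rot90(img, k) for k in range(n_rot)], axis=0)

def invariant_descriptor(stack):
    """Magnitude of the DFT along the rotation axis.

    Rotating the input by 90 degrees cyclically shifts the stack
    along axis 0, and |FFT| is invariant to circular shifts, so the
    descriptor is unchanged by 90-degree rotations of the input."""
    return np.abs(np.fft.fft(stack, axis=0))
```

Any permutation-sensitive pooling over the rotation axis (e.g. max) would also give invariance, but the DFT magnitude retains more information about the relative phase of the rotated responses, which is the motivation the abstract attributes to the 2D-DFT stage.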
Abstract: Estimating dense correspondence or depth information from a pair of stereoscopic images is a fundamental problem in computer vision with a range of important applications. Despite intensive past research on this topic, it remains challenging to recover depth information both reliably and efficiently, especially when the input images contain weakly textured regions or are captured under uncontrolled, real-life conditions. Striking a desired balance between computational efficiency and estimation quality, a hybrid minimum spanning tree-based stereo matching method is proposed in this paper. Our method performs efficient nonlocal cost aggregation at the pixel level and the region level, and then adaptively fuses the resulting costs to leverage their respective strengths in handling large textureless regions and fine depth discontinuities. Experiments on the standard Middlebury stereo benchmark show that the proposed method outperforms all prior local and nonlocal aggregation-based methods, with particularly noticeable improvements in low-texture regions. To further demonstrate the effectiveness of the proposed stereo method, and motivated by the increasing desire to generate expressive depth-induced photo effects, this paper next addresses the emerging application of interactive depth-of-field rendering from a real-world stereo image pair. To this end, we propose an accurate thin-lens model for synthetic depth-of-field rendering, which accounts for user-stroke placement and camera-specific parameters and performs pixel-adapted Gaussian blurring in a principled way. Taking ∼1.5 s to process a pair of 640 × 360 images in the off-line step, our system, named Scribble2focus, allows users to interactively select in-focus regions with simple touch-screen strokes and returns the synthetically refocused images instantly.
Index Terms: Stereo matching, depth estimation, cost aggregation, depth of field, post-capture refocusing.
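The thin-lens depth-of-field model referred to above can be sketched with the textbook circle-of-confusion formula. The function below is a minimal illustration under that standard model; its parameterization is a hypothetical stand-in, since the paper's exact formulation is not reproduced here.

```python
def coc_diameter(depth, focus_depth, focal_len, aperture):
    """Circle-of-confusion diameter (same units as the inputs) for a
    scene point at `depth`, under the standard thin-lens model:

        c = A * f / (d_f - f) * |d - d_f| / d

    where A is the aperture diameter, f the focal length, and d_f the
    in-focus depth. A textbook formula for illustration; the
    Scribble2focus paper may use a different but equivalent form."""
    return (aperture * focal_len / (focus_depth - focal_len)
            * abs(depth - focus_depth) / depth)
```

In rendering, each pixel's Gaussian blur sigma would be set proportional to `coc_diameter` evaluated at that pixel's estimated depth, with `focus_depth` taken from the depth under the user's stroke, which is presumably how "pixel-adapted Gaussian blurring" ties the stroke and camera parameters together.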
Recently, fluorescence-based super-resolution techniques such as stimulated emission depletion (STED) and stochastic optical reconstruction microscopy (STORM) have been developed to achieve near molecular-scale resolution. However, a comparable super-resolution technique for nonlinear label-free microscopy based on second harmonic generation (SHG) is lacking. Because SHG is label-free and does not involve real-energy-level transitions, fluorescence-based super-resolution techniques such as STED cannot be applied to improve its resolution. In addition, owing to the coherent and non-isotropic emission of SHG, single-molecule localization techniques that rely on the isotropic emission of fluorescent molecules, such as STORM, are not appropriate. Single-molecule SHG microscopy is further hindered by the very weak nonlinear optical scattering cross section of the SHG process. Thus, enhancing SHG with plasmonic nanostructures and nanoantennas has recently gained much attention, owing to the potential of various nanoscale geometries to tightly confine electromagnetic fields into small volumes. This confinement provides substantial enhancement of the electromagnetic field in nanoscale regions of interest, which can significantly boost the nonlinear signal produced by molecules located in the plasmonic hotspots. To date, however, plasmon-enhanced SHG has been applied primarily to measuring bulk properties of materials and molecules; single-molecule SHG imaging, along with orientation information, has not yet been realized. Herein, we achieved simultaneous visualization and three-dimensional (3D) orientation imaging of individual rhodamine 6G (R6G) molecules in the presence of plasmonic silver nanohole arrays.
SHG and two-photon fluorescence microscopy experiments, together with finite-difference time-domain (FDTD) simulations, revealed a ∼10⁶-fold nonlinear enhancement factor at the hotspots on the plasmonic silver nanohole substrate, enabling detection of single molecules using SHG. The position and 3D orientation of R6G molecules were determined with a template-matching algorithm that compares the experimental data with calculated dipole emission images. These findings could enable SHG-based single-molecule detection and orientation imaging, which could lead to a wide range of applications from nanophotonics to super-resolution SHG imaging of biological cells and tissues.
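The template-matching step can be illustrated with normalized cross-correlation: score the measured image against each precomputed dipole-emission template and keep the best-scoring orientation. The sketch below is a generic NumPy illustration with hypothetical names; the paper's actual templates come from electromagnetic dipole-emission calculations, and its matching criterion may differ.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-size images;
    1.0 for identical patterns (up to offset and scale)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def best_orientation(measured, templates):
    """Return the orientation key of the simulated dipole-emission
    template that best matches the measured image.

    `templates` maps an orientation label (e.g. a (theta, phi) tuple)
    to a 2D template array; all names here are hypothetical."""
    return max(templates, key=lambda k: ncc(measured, templates[k]))
```

Because NCC is invariant to the overall brightness and offset of the measured spot, the comparison is driven by the spatial pattern of the emission, which is what encodes the molecule's 3D orientation.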