Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call "in silico labeling" (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire.
Background: Large image datasets acquired on automated microscopes typically contain some fraction of low-quality, out-of-focus images, despite the use of hardware autofocus systems. Identifying these images with high accuracy through automated image analysis is important for obtaining a clean, unbiased image dataset. Complicating this task is the fact that image focus quality is only well defined in foreground regions of images; as a result, most previous approaches compute only the relative difference in quality between two or more images, rather than an absolute measure of quality.

Results: We present a deep neural network model capable of predicting an absolute measure of image focus on a single image in isolation, without any user-specified parameters. The model operates at the image-patch level and also outputs a measure of prediction certainty, enabling interpretable predictions. The model was trained on only 384 in-focus Hoechst (nuclei) stain images of U2OS cells, which were synthetically defocused to one of 11 absolute defocus levels during training. The trained model generalizes to previously unseen real Hoechst stain images, identifying the absolute image focus to within one defocus level (approximately a 3-pixel difference in blur diameter) with 95% accuracy. On a simpler binary in/out-of-focus classification task, the trained model outperforms previous approaches on both Hoechst and Phalloidin (actin) stain images (F-scores of 0.89 and 0.86, versus 0.84 and 0.83), despite having been trained only on Hoechst stain images.
Lastly, we observe qualitatively that the model generalizes to two additional stains, Hoechst and Tubulin, of an unseen cell type (human MCF-7) acquired on a different instrument.

Conclusions: Our deep neural network classifies out-of-focus microscope images with both higher accuracy and greater precision than previous approaches via interpretable patch-level focus and certainty predictions. The use of synthetically defocused images removes the need for a manually annotated training dataset. The model also generalizes to different image and cell types. The framework for model training and image prediction is available as a free software library, and the pre-trained model is available for immediate use in Fiji (ImageJ) and CellProfiler.
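The training strategy above, synthetically defocusing in-focus images to one of a fixed set of absolute defocus levels, can be sketched as follows. This is a minimal illustration only: the Gaussian blur model and the `sigma_step` value are assumptions standing in for whatever defocus kernel the authors actually used, and the random array is a placeholder for a real in-focus image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_defocus(image, level, sigma_step=1.5):
    """Blur an in-focus image to an absolute defocus level (0 = in focus).

    Gaussian blur is used here as a stand-in defocus model; `sigma_step`
    controls how much blur each level adds and is a hypothetical parameter.
    """
    if level == 0:
        return image.astype(float)
    return gaussian_filter(image.astype(float), sigma=level * sigma_step)

# Build an 11-level training stack from a single in-focus image
# (random pixels here as a placeholder for a real Hoechst image).
rng = np.random.default_rng(0)
in_focus = rng.random((64, 64))
stack = [synthetic_defocus(in_focus, k) for k in range(11)]

# A simple sharpness proxy (image variance) should fall as defocus grows,
# which is what makes the absolute level a learnable regression target.
sharpness = [img.var() for img in stack]
```

Because every training image's defocus level is known by construction, no manual focus annotation is needed, which is the point made in the Conclusions.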
We present a study on grocery detection using our object detection system, ShelfScanner, which is designed to let a visually impaired user shop at a grocery store without additional human assistance. ShelfScanner performs online detection of items on a shopping list in video streams in which any or all of the items may appear simultaneously. To handle the scale of the object detection task, the system exploits the approximate planarity of grocery store shelves to build a mosaic in real time using an optical flow algorithm. The system is then free to apply any object detection algorithm without losing data to processing time. For speed, we use a multiclass naive Bayes classifier inspired by NIMBLE, trained on enhanced SURF descriptors extracted from images in the GroZi-120 dataset; it computes per-class probability distributions over video keypoints for final classification. Our results suggest ShelfScanner could be useful in cases where high-quality training data is available.
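The classification step above, a multiclass naive Bayes model over keypoint descriptors, can be sketched as below. This is not NIMBLE or the authors' exact classifier: the `NaiveBayes` class is a minimal Gaussian naive Bayes written for illustration, and the random 8-dimensional vectors are placeholders for real SURF descriptors.

```python
import numpy as np

class NaiveBayes:
    """Minimal Gaussian naive Bayes over descriptor vectors (illustrative)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        # Per-class mean and variance of each descriptor dimension.
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict_log_proba(self, X):
        # Sum of per-dimension Gaussian log-likelihoods, per class.
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return ll + self.logprior

    def predict(self, X):
        return self.classes[self.predict_log_proba(X).argmax(axis=1)]

# Toy usage: two well-separated descriptor clusters stand in for two grocery items.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)), rng.normal(4.0, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)
clf = NaiveBayes().fit(X, y)
accuracy = (clf.predict(X) == y).mean()
```

In the actual system, each video keypoint's descriptor would be scored this way and the per-class probabilities aggregated for the final shopping-list decision.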
Research has shown that inverting faces significantly disrupts the processing of configural information, leading to a face inversion effect. We recently used a contextual priming technique to show that the presence or absence of the face inversion effect can be determined via the top-down activation of face versus non-face processing systems [Ge, L., Wang, Z., McCleery, J., & Lee, K. (2006). Activation of face expertise and the inversion effect. Psychological Science, 17(1), 12-16]. In the current study, we replicate these findings using the same technique but under different conditions. We then extend these findings through the application of a neural network model of face and Chinese character expertise systems. Results provide support for the hypothesis that a specialized face expertise system develops through extensive training of the visual system with upright faces, and that top-down mechanisms are capable of influencing when this face expertise system is engaged.
We present a method for estimating the distance between a calibrated camera and a human head in 2D images. Leading head pose estimation algorithms focus mainly on head orientation (yaw, pitch, and roll) and on translations perpendicular to the camera's principal axis. Our contribution is a system that can estimate head pose under large translations parallel to the camera's principal axis. Our method uses a set of exemplar 3D human heads to estimate the distance between a camera and a previously unseen head. The distance is estimated by solving for the camera pose using Efficient Perspective-n-Point (EPnP). We present promising experimental results on the Texas 3D Face Recognition Database.
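The core depth-recovery idea can be illustrated with the pinhole camera relation, used here as a deliberately simplified stand-in for full EPnP (which solves for all six pose parameters from 2D-3D correspondences). For a roughly fronto-parallel head, depth along the principal axis satisfies Z = f * W / w, where W is a known 3D width from an exemplar head model and w is the observed width in pixels. The focal length and head-width numbers below are hypothetical.

```python
# Simplified pinhole-model sketch of depth-from-known-size; a stand-in for
# EPnP, which would use full 2D-3D point correspondences instead.

def depth_from_width(f_pixels, model_width_m, observed_width_px):
    """Depth along the principal axis: Z = f * W / w (pinhole model)."""
    return f_pixels * model_width_m / observed_width_px

# Forward model: a 0.15 m wide head at 2.0 m, focal length 800 px.
f, W, Z_true = 800.0, 0.15, 2.0
w = f * W / Z_true          # apparent width in pixels (60 px here)

# Inverse: recover the depth from the observed pixel width.
Z_est = depth_from_width(f, W, w)
```

EPnP generalizes this by using many correspondences between exemplar 3D head points and their 2D projections, so the estimate remains valid when the head is rotated rather than fronto-parallel.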