Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
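The abstract above describes refining CNN saliency scores with a superpixel-based Laplacian propagation step. As a rough illustration (not the paper's exact formulation; the affinity construction, regularization weight, and variable names here are assumptions), the idea can be sketched as graph smoothing: given an initial per-superpixel score vector s0 and an affinity matrix W over neighboring superpixels, minimizing ||s - s0||^2 + lam * s^T L s with graph Laplacian L = D - W has the closed-form solution s = (I + lam*L)^{-1} s0:

```python
import numpy as np

def laplacian_propagation(s0, W, lam=0.5):
    """Refine per-superpixel saliency scores by graph smoothing.

    Illustrative sketch: W is a symmetric pairwise affinity matrix
    (e.g., built from color/depth similarity of adjacent superpixels),
    s0 holds the initial CNN saliency scores, and lam trades fidelity
    to s0 against spatial smoothness.
    """
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # unnormalized graph Laplacian
    I = np.eye(len(s0))
    # Closed-form minimizer of ||s - s0||^2 + lam * s^T L s
    return np.linalg.solve(I + lam * L, s0)

# Toy example: 4 superpixels in a chain; the noisy low score in the
# middle is pulled toward its high-saliency neighbors.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
s0 = np.array([1.0, 0.2, 0.9, 1.0])
s = laplacian_propagation(s0, W)
```

The smoothing enforces the spatial consistency the abstract mentions: superpixels with similar appearance receive similar saliency values.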
Deep-tissue three-dimensional (3D) optical imaging of live mammals with high spatiotemporal resolution in a non-invasive manner has been challenging due to light scattering. Here, we developed near-infrared II (NIR-II, 1000–1700 nm) light sheet microscopy (LSM) with excitation and emission up to ~1320 nm and ~1700 nm, respectively, for optical sectioning through live tissues at ~750 μm penetration depth without any invasive surgery. Suppressed light scattering allowed imaging at ~2 mm depth in glycerol-cleared brain tissues. NIR-II LSM in normal and oblique configurations enabled in vivo imaging of live mice through intact tissue, revealing abnormal blood flow and T-cell motion in tumor microcirculation and mapping out programmed death-ligand 1 (PD-L1) and programmed cell death protein 1 (PD-1) in tumors with cellular resolution. 3D imaging through the intact mouse head resolved vascular channels between the skull and brain cortex and monitored recruitment of macrophages/microglia to a traumatic brain injury site after injury.
3D hand pose tracking/estimation will be very important in the next generation of human-computer interaction. Most currently available algorithms rely on low-cost active depth sensors. However, these sensors are easily interfered with by other active sources and require relatively high power consumption. As a result, they are currently not suitable for outdoor environments and mobile devices. This paper aims at tracking/estimating hand poses using passive stereo, which avoids these limitations. A benchmark with 18,000 stereo image pairs and 18,000 depth images captured from different scenarios, together with the ground-truth 3D positions of palm and finger joints (obtained by manual labeling), is thus proposed. This paper demonstrates that the performance of state-of-the-art tracking/estimation algorithms can be maintained with most stereo matching algorithms on the proposed benchmark, as long as the hand segmentation is correct. Accordingly, a novel stereo-based hand segmentation algorithm specially designed for hand tracking/estimation is proposed. The quantitative evaluation demonstrates that the proposed algorithm is suitable for state-of-the-art hand pose tracking/estimation algorithms, and the tracking quality is comparable to that obtained with active depth sensors under different challenging scenarios.
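The abstract notes that hand pose estimation from stereo works as long as hand segmentation is correct. The paper's segmentation algorithm is not described here, but a naive baseline (purely illustrative, and an assumption rather than the paper's method) is to segment the nearest object in the disparity map, since in typical interaction setups the hand is closest to the camera and therefore has the largest disparity:

```python
import numpy as np

def segment_nearest(disparity, margin=5.0):
    """Naive hand-segmentation baseline (illustrative, not the paper's
    algorithm): keep pixels whose disparity is within `margin` of the
    maximum, i.e., the object closest to the camera.

    A value of 0 in `disparity` marks unmatched/invalid pixels.
    """
    valid = disparity > 0
    if not valid.any():
        return np.zeros_like(disparity, dtype=bool)
    peak = disparity[valid].max()
    return valid & (disparity >= peak - margin)

# Toy disparity map: background at ~10 px disparity, a 3x3 "hand" blob
# at ~40 px disparity (closer to the camera).
d = np.full((6, 6), 10.0)
d[2:5, 2:5] = 40.0
mask = segment_nearest(d)
```

A real segmenter must also handle disparity noise near depth edges and skin-colored background objects, which is presumably what motivates a purpose-built algorithm in the paper.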
Light scattering by biological tissues sets a limit on the penetration depth of high-resolution optical microscopy for imaging live mammals in vivo. An effective approach to reduce light scattering and increase imaging depth is to extend the excitation and emission wavelengths into the second near-infrared (NIR-II, >1000 nm) window, also called the short-wavelength infrared (SWIR) window. Here, we show that biocompatible core-shell lead sulfide/cadmium sulfide (PbS/CdS) quantum dots emitting at ~1880 nm, combined with superconducting nanowire single-photon detectors (SNSPDs) capable of single-photon detection up to 2000 nm, enable a one-photon fluorescence imaging window in the 1700–2000 nm (NIR-IIc) range with 1650 nm excitation, the longest one-photon excitation and emission wavelengths for in vivo mouse imaging to date. Confocal fluorescence imaging in NIR-IIc reached an imaging depth of ~1100 μm through the intact mouse head and enabled non-invasive cellular-resolution imaging in the inguinal lymph nodes (LNs) of mice without any surgery. We achieved in vivo molecular imaging of high endothelial venules (HEVs) with diameters down to ~6.6 μm, as well as CD169+ macrophages and CD3+ T cells in the lymph nodes, opening the possibility of non-invasive, longitudinal intravital imaging of immune trafficking in lymph nodes at the single-cell/vessel level.