In this paper, we investigate the following question: when performing next best view selection for volumetric 3D reconstruction of an object by a mobile robot equipped with a dense (camera-based) depth sensor, what formulation of information gain is best? To address this question, we propose several new ways to quantify the volumetric information (VI) contained in the voxels of a probabilistic volumetric map, and compare them to the state of the art with extensive simulated experiments. Our proposed formulations incorporate factors such as visibility likelihood and the likelihood of seeing new parts of the object. The results of our experiments allow us to draw some clear conclusions about the VI formulations that are most effective in different mobile-robot reconstruction scenarios. To the best of our knowledge, this is the first comparative survey of VI formulation performance for active 3D object reconstruction. Additionally, our modular software framework is adaptable to other robotic platforms and general reconstruction problems, and we release it as open source for autonomous reconstruction tasks.
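To make the idea of volumetric information concrete, here is a minimal sketch of one common VI formulation: per-voxel occupancy entropy accumulated along each camera ray and weighted by the likelihood that the ray actually reaches the voxel. All names, the grid representation, and the constants are hypothetical illustrations, not the paper's exact definitions.

```python
# Sketch (hypothetical names): score a candidate view by the entropy of the
# voxels its rays would observe, attenuated by visibility likelihood.
import numpy as np

def voxel_entropy(p):
    """Shannon entropy (bits) of a voxel's occupancy probability p."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def view_information_gain(occupancy, rays, max_steps=64):
    """Sum entropy along each ray, weighted by visibility likelihood.

    occupancy : dict mapping voxel index (i, j, k) -> P(occupied)
    rays      : iterable of (origin, direction) numpy-array pairs
    """
    gain = 0.0
    for origin, direction in rays:
        visibility = 1.0  # probability the ray reaches the current voxel
        for step in range(max_steps):
            voxel = tuple(np.floor(origin + step * direction).astype(int))
            p_occ = occupancy.get(voxel, 0.5)  # unknown voxels get prior 0.5
            gain += visibility * voxel_entropy(p_occ)
            visibility *= 1.0 - p_occ  # attenuate by occlusion probability
            if visibility < 1e-3:
                break  # ray is almost surely blocked; stop casting
    return gain

# Next best view selection then reduces to an argmax over candidate poses:
# best = max(candidates, key=lambda v: view_information_gain(grid, v.rays))
```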
This paper addresses the problem of simultaneously estimating the vehicle ego-motion and the motions of multiple moving objects in the scene (called eoru-motions) through a monocular vehicle-mounted camera. Localizing multiple moving objects and estimating their motions is crucial for autonomous vehicles. Conventional localization and mapping techniques (e.g., Visual Odometry and SLAM) can only estimate the ego-motion of the vehicle, and the ability of robot localization pipelines to handle multiple motions has not been widely investigated in the literature. We present a theoretical framework for robust estimation of multiple relative motions in addition to the camera ego-motion. A framework for general unconstrained motion is introduced first and then adapted to exploit the vehicle kinematic constraints for greater efficiency. The method is based on projective factorization of the multiple-trajectory matrix: the ego-motion is segmented first, then several hypotheses are generated for the eoru-motions, and all hypotheses are evaluated so that the one with the smallest reprojection error is selected. The proposed framework does not need any a priori knowledge of the number of motions and is robust to noisy image measurements. The method with the constrained motion model is evaluated on a popular street-level image dataset collected in urban environments (the KITTI dataset), covering several relative ego-motion and eoru-motion scenarios, and a benchmark dataset (Hopkins 155) is used to evaluate the method with the general motion model. The results are compared with those of state-of-the-art methods addressing a similar problem, referred to as Multi-Body Structure from Motion in the computer vision community.
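The hypothesis-selection step the abstract describes can be illustrated with a short sketch: each candidate motion hypothesis (a rotation and translation) is scored by the reprojection error of tracked points, and the smallest-error hypothesis wins. The pinhole projection and all names here are assumptions for illustration, not the paper's implementation.

```python
# Sketch (hypothetical names, simple pinhole model): evaluate candidate
# motion hypotheses and keep the one with the smallest reprojection error.
import numpy as np

def project(K, R, t, points_3d):
    """Project Nx3 points into the image with intrinsics K and pose (R, t)."""
    cam = points_3d @ R.T + t      # transform points into the camera frame
    uv = cam @ K.T                 # apply the pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide to pixel coordinates

def best_hypothesis(K, hypotheses, points_3d, observed_uv):
    """Return the (R, t) pair with the smallest mean reprojection error.

    hypotheses  : list of (R, t) candidate motions
    points_3d   : Nx3 triangulated points on the tracked object
    observed_uv : Nx2 measured image coordinates in the next frame
    """
    def mean_error(hyp):
        R, t = hyp
        residuals = project(K, R, t, points_3d) - observed_uv
        return np.linalg.norm(residuals, axis=1).mean()
    return min(hypotheses, key=mean_error)
```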
Phospholipids in brain cell membranes contain different polyunsaturated fatty acids (PUFAs), which are critical to the structure and function of the nervous system. In particular, brain function critically depends on the uptake of the so-called "essential" fatty acids, such as omega-3 (n-3) and omega-6 (n-6) PUFAs, which cannot be readily synthesized by the human body. We extracted natural lecithin rich in various PUFAs from a marine source and transformed it into nanoliposomes. These nanoliposomes increased the neurite outgrowth, network complexity, and neural activity of rat cortical neurons in vitro. We also observed an upregulation of synapsin I (SYN1), which supports a positive role for lecithin in synaptogenesis and in synaptic development and maturation. These findings suggest that lecithin nanoliposomes enhance neuronal development, which may inform new lecithin delivery strategies for therapeutic applications.