Abstract: We describe CHISEL: a system for real-time house-scale (300 square meters or more) dense 3D reconstruction onboard a Google Tango [1] mobile device, using a dynamic spatially-hashed truncated signed distance field [2] for mapping and visual-inertial odometry for localization. By aggressively culling parts of the scene that do not contain surfaces, we avoid needless computation and wasted memory. Even under very noisy conditions, we produce high-quality reconstructions through the use of space carving…
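The abstract's core data structure is a spatially-hashed TSDF: voxel chunks are allocated lazily in a hash map, so unobserved space costs no memory. A minimal sketch of the idea follows; the class name, chunk size, and truncation distance are illustrative assumptions, not the CHISEL implementation itself:

```python
TRUNCATION = 0.1   # truncation distance in metres (assumed value)
CHUNK_SIZE = 16    # voxels per chunk side, typical for hash-based TSDF schemes

class SpatiallyHashedTSDF:
    """Voxel chunks allocated on demand in a hash map, so empty space is free."""

    def __init__(self):
        # (chunk_x, chunk_y, chunk_z) -> {voxel: (sdf, weight)}
        self.chunks = {}

    def integrate(self, voxel, measured_sdf):
        """Fuse one truncated signed-distance observation into a voxel."""
        # Skip voxels far behind the observed surface
        if measured_sdf < -TRUNCATION:
            return
        sdf = min(measured_sdf, TRUNCATION)
        key = tuple(v // CHUNK_SIZE for v in voxel)
        chunk = self.chunks.setdefault(key, {})  # lazy chunk allocation
        old_sdf, old_w = chunk.get(voxel, (0.0, 0.0))
        # Running weighted average of all observations of this voxel
        new_w = old_w + 1.0
        chunk[voxel] = ((old_sdf * old_w + sdf) / new_w, new_w)
```

Because chunks only exist where depth measurements landed, culling non-surface regions (as the abstract describes) amounts to simply never allocating them.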
“…The raw point clouds used in the present study disregard non-surface information and can be susceptible to sensor noise. Future work may therefore seek to improve the quality of 3D reconstruction by experimenting with alternative techniques such as optimised variants of occupancy grid mapping and truncated signed distance fields (TSDF) which use the passthrough data of emanating rays to provide more detailed volumetric information [82,83]. Ideally, the performance of motion tracking and depth sensing would also be tested with a wider range of environments, vegetation types, and movement speeds to closer emulate conditions found in more challenging field deployments.…”
Background: Animal-attached sensors are increasingly used to provide insights on behaviour and physiology. However, such tags usually lack information on the structure of the surrounding environment from the perspective of a study animal and thus may be unable to identify potentially important drivers of behaviour. Recent advances in robotics and computer vision have led to the availability of integrated depth-sensing and motion-tracking mobile devices. These enable the construction of detailed 3D models of an environment within which motion can be tracked without reliance on GPS. The potential of such techniques has yet to be explored in the field of animal biotelemetry. This report trials an animal-attached structured light depth-sensing and visual-inertial odometry motion-tracking device in an outdoor environment (coniferous forest) using the domestic dog (Canis familiaris) as a compliant test species. Results: A 3D model of the forest environment surrounding the subject animal was successfully constructed using point clouds. The forest floor was labelled using a progressive morphological filter. Tree trunks were modelled as cylinders and identified by random sample consensus. The predicted and actual presence of trees matched closely, with an object-level accuracy of 93.3%. Individual points were labelled as belonging to tree trunks with a precision, recall, and F_β score of 1.00, 0.88, and 0.93, respectively. In addition, ground-truth tree trunk radius measurements were not significantly different from random sample consensus model coefficient-derived values. A first-person view of the 3D model was created, illustrating the coupling of both animal movement and environment reconstruction.
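The F_β score reported in the results above combines precision and recall into a single figure. A minimal sketch of its computation (β = 1 is assumed here, since the abstract does not state the weighting used):

```python
def f_beta(precision, recall, beta=1.0):
    """F_beta score: weighted harmonic mean of precision and recall.

    beta > 1 weighs recall more heavily; beta < 1 favours precision.
    """
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With the rounded precision and recall reported above:
score = f_beta(1.00, 0.88)
```

With β = 1 this reduces to the familiar 2·P·R / (P + R); the small difference from the reported 0.93 would stem from rounding of the published precision and recall.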
Conclusions: Using data collected from an animal-borne device, the present study demonstrates how terrain and objects (in this case, tree trunks) surrounding a subject can be identified by model segmentation. The device pose (position and orientation) also enabled recreation of the animal's movement path within the 3D model. Although some challenges such as device form factor, validation in a wider range of environments, and direct sunlight interference remain before routine field deployment can take place, animal-borne depth sensing and visual-inertial odometry have great potential as visual biologging techniques to provide new insights on how terrestrial animals interact with their environments.
“…We fuse all depth images obtained at different camera poses into a global dense map using an uncertainty-aware truncated signed distance field (TSDF) fusion approach. Our method is developed from the open source CHISEL TSDF implementation. Improvements include uncertainty-aware depth fusion (Section ) and algorithm parallelization (Section ).…”
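The uncertainty-aware fusion described in the snippet above can be sketched as a running inverse-variance-weighted average per voxel, so noisier depth readings contribute less. This is an illustrative sketch, not the cited implementation; the weighting scheme is an assumption:

```python
def fuse_voxel(sdf, weight, measured_sdf, sigma):
    """Fuse one truncated signed-distance observation into a voxel.

    The observation weight is the inverse variance of the depth
    measurement, so uncertain (e.g. distant or noisy) readings
    pull the stored value less than confident ones.
    """
    obs_weight = 1.0 / (sigma * sigma)
    new_weight = weight + obs_weight
    new_sdf = (sdf * weight + measured_sdf * obs_weight) / new_weight
    return new_sdf, new_weight
```

Plain TSDF fusion is the special case where every observation gets the same weight; making the weight depend on sigma is what makes the scheme "uncertainty-aware".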
Autonomous micro aerial vehicles (MAVs) have cost and mobility benefits, making them ideal robotic platforms for applications including aerial photography, surveillance, and search and rescue. As the platform scales down, MAVs become more capable of operating in confined environments, but it also introduces significant size and payload constraints. A monocular visual-inertial navigation system (VINS), consisting only of an inertial measurement unit (IMU) and a camera, becomes the most suitable sensor suite in this case, thanks to its light weight and small footprint. In fact, it is the minimum sensor suite allowing autonomous flight with sufficient environmental awareness. In this paper, we show that it is possible to achieve reliable online autonomous navigation using monocular VINS. Our system is built on a customized quadrotor testbed equipped with a fisheye camera, a low-cost IMU, and heterogeneous onboard computing resources. The backbone of our system is a highly accurate optimization-based monocular visual-inertial state estimator with online initialization and self-extrinsic calibration. An onboard GPU-based monocular dense mapping module that conditions on the estimated pose provides wide-angle situational awareness. Finally, an online trajectory planner that operates directly on the incrementally built three-dimensional map guarantees safe navigation through cluttered environments. Extensive experimental results are provided to validate individual system modules as well as the overall performance in both indoor and outdoor environments.
“…The sparse maps are aligned and refined using visual keypoint based loop closure [24] and batch optimization. The dense 3D reconstruction is based on the Google Tango framework as well, which is closely related to its OpenSource version OpenChisel [25].…”
Abstract: Robots that operate for extended periods of time need to be able to deal with changes in their environment and represent them adequately in their maps. In this paper, we present a novel 3D reconstruction algorithm based on an extended Truncated Signed Distance Function (TSDF) that enables continuous refinement of the static map while simultaneously obtaining 3D reconstructions of dynamic objects in the scene. This is a challenging problem because map updates happen incrementally and are often incomplete. Previous work typically performs change detection on point clouds, surfels, or maps, which are not able to distinguish between unexplored and empty space. In contrast, our TSDF-based representation naturally contains this information and thus allows us to more robustly solve the scene differencing problem. We demonstrate the algorithm's performance as part of a system for unsupervised object discovery and class recognition. We evaluated our algorithm on challenging datasets that we recorded over several days with RGB-D enabled tablets. To stimulate further research in this area, all of our datasets are publicly available.
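The key advantage claimed above is that a TSDF distinguishes unexplored space (never observed) from empty space (observed and free), which point clouds cannot. A minimal sketch of how that distinction feeds scene differencing; the state names and thresholds are illustrative assumptions:

```python
UNKNOWN, FREE, SURFACE = "unknown", "free", "surface"
TRUNC = 0.1  # truncation distance (assumed value)

def voxel_state(sdf, weight):
    """Classify a TSDF voxel. A zero fusion weight explicitly marks
    space that was never observed, which a raw point cloud cannot express."""
    if weight == 0.0:
        return UNKNOWN
    return SURFACE if abs(sdf) < TRUNC else FREE

def changed(old_state, new_state):
    """Declare a change only when both maps carry actual evidence:
    a comparison against UNKNOWN is inconclusive, not a change."""
    if UNKNOWN in (old_state, new_state):
        return False
    return old_state != new_state
```

A surface voxel that is later observed as free signals a removed object, while a previously unexplored voxel yields no false positive, which is the robustness gain the abstract attributes to the TSDF representation.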