A monocular visual-inertial system (VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), forms the minimum sensor suite for metric six degrees-of-freedom (DOF) state estimation. However, the lack of direct distance measurement poses significant challenges in IMU processing, estimator initialization, extrinsic calibration, and nonlinear optimization. In this work, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization and failure recovery. A tightly-coupled, nonlinear optimization-based method is used to obtain high-accuracy visual-inertial odometry by fusing pre-integrated IMU measurements and feature observations. A loop detection module, in combination with our tightly-coupled formulation, enables relocalization with minimal computational overhead. We additionally perform four degrees-of-freedom pose graph optimization to enforce global consistency. We validate the performance of our system on public datasets and in real-world experiments, and compare against other state-of-the-art algorithms. We also perform onboard closed-loop autonomous flight on a micro aerial vehicle (MAV) platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system applicable to different applications requiring high-accuracy localization. We open-source our implementations for both PCs and iOS mobile devices.
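The IMU pre-integration mentioned above can be illustrated with a deliberately simplified sketch: it accumulates relative rotation, velocity, and position deltas between two camera frames so that they do not need to be re-integrated when the estimated state changes. The function name and interface below are hypothetical, and biases and noise covariances (which the actual formulation carries) are omitted.

```python
import numpy as np

def preintegrate_imu(gyro, accel, dt):
    """Minimal, bias-free IMU pre-integration between two camera frames.

    gyro:  (N, 3) angular velocities in the body frame [rad/s]
    accel: (N, 3) specific forces in the body frame [m/s^2]
    dt:    sampling interval [s]

    Returns the pre-integrated rotation (3x3 matrix), velocity delta, and
    position delta, all expressed relative to the first body frame.
    """
    R = np.eye(3)    # delta rotation
    v = np.zeros(3)  # delta velocity
    p = np.zeros(3)  # delta position
    for w, a in zip(gyro, accel):
        # integrate position and velocity with the current rotation estimate
        p = p + v * dt + 0.5 * (R @ a) * dt ** 2
        v = v + (R @ a) * dt
        # rotation increment via the Rodrigues formula (exponential map)
        theta = w * dt
        angle = np.linalg.norm(theta)
        if angle > 1e-12:
            k = theta / angle
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        else:
            dR = np.eye(3)
        R = R @ dR
    return R, v, p
```

Because the deltas are relative quantities, a change to the estimated pose or velocity of the first frame during optimization does not invalidate them, which is what makes the tightly-coupled fusion tractable.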
We propose a 3D object detection method for autonomous driving that fully exploits the sparse and dense, semantic and geometric information in stereo imagery. Our method, called Stereo R-CNN, extends Faster R-CNN to stereo inputs to simultaneously detect and associate objects in the left and right images. We add extra branches after the stereo Region Proposal Network (RPN) to predict sparse keypoints, viewpoints, and object dimensions, which are combined with 2D left-right boxes to calculate a coarse 3D object bounding box. We then recover the accurate 3D bounding box by a region-based photometric alignment using the left and right RoIs. Our method requires neither depth input nor 3D position supervision, yet outperforms all existing fully supervised image-based methods. Experiments on the challenging KITTI dataset show that our method outperforms the state-of-the-art stereo-based method by around 30% AP on both the 3D detection and 3D localization tasks. Code has been released at https://github.com/HKUST-Aerial-Robotics/Stereo-RCNN.
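The idea behind region-based photometric alignment can be sketched with a heavily simplified 1-D search: slide candidate patches from the right image across a left-image RoI, pick the offset with the lowest photometric (SSD) error, and convert it to depth via z = f·b/d. This is a toy illustration under strong assumptions (rectified images, a fronto-parallel surface, brute-force search), not the paper's actual dense object-level alignment; the function name is hypothetical.

```python
import numpy as np

def align_roi_depth(left_roi, right_strip, focal, baseline, d_range):
    """Brute-force offset search minimizing the SSD photometric error
    between a left-image RoI and candidate patches cut from a strip of
    the right image, then depth = focal * baseline / best_offset."""
    h, w = left_roi.shape
    best_d, best_err = None, np.inf
    for d in d_range:
        patch = right_strip[:, d:d + w]  # candidate patch at offset d
        if patch.shape[1] < w:
            break  # ran off the edge of the strip
        err = np.sum((left_roi - patch) ** 2)
        if err < best_err:
            best_err, best_d = err, d
    depth = focal * baseline / best_d
    return depth, best_d
```

In the real method the alignment operates on the whole object RoI with a single depth variable, which makes it robust to individual pixel mismatches in a way a sparse keypoint match is not.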
We propose a stereo vision-based approach for tracking camera ego-motion and 3D semantic objects in dynamic autonomous driving scenarios. Instead of directly regressing the 3D bounding box with end-to-end approaches, we propose to use easy-to-label 2D detection and discrete viewpoint classification together with a light-weight semantic inference method to obtain rough 3D object measurements. Based on object-aware camera pose tracking, which is robust in dynamic environments, in combination with our novel dynamic object bundle adjustment (BA) approach that fuses temporal sparse feature correspondences and the semantic 3D measurement model, we obtain 3D object pose, velocity, and anchored dynamic point cloud estimates with instance-level accuracy and temporal consistency. The performance of our proposed method is demonstrated in diverse scenarios. Both the ego-motion estimation and object localization are compared with state-of-the-art solutions.
While modelling studies suggest that mesoscale eddies strengthen the subduction of mode waters, this eddy effect has never been observed in the field. Here we report results from a field campaign from March 2014 that captured the eddy effects on mode-water subduction south of the Kuroshio Extension east of Japan. The experiment deployed 17 Argo floats in an anticyclonic eddy (AC) with enhanced daily sampling. Analysis of over 3,000 hydrographic profiles following the AC reveals that potential vorticity and apparent oxygen utilization distributions are asymmetric outside the AC core, with enhanced subduction near the southeastern rim of the AC. There, the southward eddy flow advects newly ventilated mode water from the north into the main thermocline. Our results show that subduction by eddy lateral advection is comparable in magnitude to that by the mean flow—an effect that needs to be better represented in climate models.
In Spring 2014, two subthermocline eddies (STEs) were observed by rapid‐sampling Argo floats in the subtropical northwestern Pacific (STNWP). The first one is a warm, salty, and oxygen‐poor lens, with its temperature/salinity/dissolved oxygen (T/S/DO) anomalies reaching 1.16°C/0.21 practical salinity unit (psu)/−29.9 µmol/kg, respectively, near the 26.62σ0 surface. The other is a cold, fresh, and oxygen‐rich lens, with its T/S/DO anomalies reaching −1.95°C/−0.34 psu/88.0 µmol/kg, respectively, near the 26.54σ0 surface. The vertical extent of the water mass anomalies in the warm (cold) STE is about 190 m (150 m), and its horizontal length scale is 22 ± 7 km (18 ± 10 km). According to their water mass properties, we speculate that the warm and cold STEs are generated in the North Pacific Subtropical and Subarctic Front regions, respectively. The observed STEs may play an important role in modifying the intermediate‐layer water properties in the STNWP, and this needs to be confirmed by more focused observations in the future.
Background and Purpose— High-resolution vessel wall magnetic resonance imaging is a promising technique for assessing wall structures of unruptured intracranial aneurysms (UIAs). However, the relationship between aneurysmal high-resolution vessel wall magnetic resonance imaging features and their histopathologic mechanism remains poorly understood. Methods— From February 2016 to February 2018, a total of 19 men and 28 women with 54 UIAs treated surgically were prospectively enrolled. The intraoperatively observed gross pathology of the aneurysmal wall was compared with the enhancement features on high-resolution vessel wall magnetic resonance imaging. Specimens of the UIAs were harvested for histopathologic and immunohistochemical analysis. Results— An irregular shape and large size were significantly related to UIA wall enhancement. Both uniform and focal wall enhancement may reflect inflammatory processes in UIA walls, although the latter may indicate more atherosclerotic plaque formation. Conclusions— Different high-resolution vessel wall magnetic resonance imaging enhancement features may represent variable inflammation status of a UIA wall, which may provide new insights into assessing the UIA wall structure and optimizing treatment.
The monocular visual-inertial system (VINS), which consists of one camera and one low-cost inertial measurement unit (IMU), is a popular approach to achieving accurate 6-DOF state estimation. However, such locally accurate visual-inertial odometry is prone to drift and cannot provide absolute pose estimation. Leveraging historical information to relocalize and correct drift has become a hot topic. In this paper, we propose a monocular visual-inertial SLAM system that can relocalize the camera and obtain the absolute pose in a previously built map. Then 4-DOF pose graph optimization is performed to correct drift and achieve global consistency. The four degrees of freedom are x, y, z, and the yaw angle, which are the directions in which drift actually accumulates in a visual-inertial system. Furthermore, the proposed system can reuse a map by saving and loading it in an efficient way. The current map and a previous map can be merged together by the global pose graph optimization. We validate the accuracy of our system on public datasets and compare against other state-of-the-art algorithms. We also evaluate the map-merging ability of our system in a large-scale outdoor environment. The source code of map reuse is integrated into our public code, VINS-Mono.
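The 4-DOF formulation can be illustrated by the relative-pose residual it minimizes: roll and pitch are observable from the IMU's gravity measurement and are held fixed, so each pose graph edge constrains only relative translation and yaw. The following is a minimal sketch with a hypothetical function name, not the system's actual implementation.

```python
import numpy as np

def residual_4dof(pi, yi, pj, yj, dp_meas, dyaw_meas):
    """4-DOF relative-pose residual for yaw-only pose graph optimization.

    pi, pj:    world-frame positions of keyframes i and j (3-vectors)
    yi, yj:    yaw angles of keyframes i and j [rad]
    dp_meas:   measured relative translation, expressed in frame i
    dyaw_meas: measured relative yaw [rad]
    """
    c, s = np.cos(yi), np.sin(yi)
    Ri = np.array([[c, -s, 0.0],
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])          # yaw-only rotation of frame i
    r_p = Ri.T @ (pj - pi) - dp_meas          # translation residual in frame i
    r_yaw = (yj - yi - dyaw_meas + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return np.concatenate([r_p, [r_yaw]])
```

Summing such residuals over both sequential edges and loop closure (or map merging) edges and minimizing with a nonlinear least-squares solver yields a globally consistent trajectory while leaving the gravity-aligned roll and pitch untouched.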