2023
DOI: 10.1109/tro.2022.3188121
SOFT2: Stereo Visual Odometry for Road Vehicles Based on a Point-to-Epipolar-Line Metric

Cited by 29 publications (20 citation statements). References 51 publications.
“…Perception methods such as object classification, object tracking, and vehicle localisation make sense of the raw data delivered by sensors and V2X communication after the pre-processing step. AD and ICC can use cameras to estimate velocity via visual odometry; yaw, roll, and pitch rates; road lines [149, 150]; road curve and bank angles; intersections, railroad and pedestrian crossings; road surface type and conditions [118]; road lanes and boundaries [151, 152]; road signs including warnings and speed limits [153]; horizontal marking detection and recognition [154]; road damage, potholes, and distress [155]; location [156]; velocity and displacement of the vehicle [157]; and other vehicles on the road, even from their taillights [158]. ICC systems can use the previously described advanced sensors to acquire horizon prediction and longitudinal and lateral road surface slopes, which, together with in-vehicle IMU and GNSS/INS, enable a more accurate decomposition of linear and gravity-induced acceleration.…”
Section: Common Controller Layout for Automated Vehicles
confidence: 99%
“…Other researchers [26,27,28] used deep neural networks to eliminate the scale ambiguity of monocular cameras and extract high-level semantic features to enhance the system robustness and accuracy. The classic variants of stereo SLAM include ORBSLAM2 [29], ORBSLAM3 [30], PL-SLAM [31], and SOFT2 [32]. An event camera [33] was used to address the problems of high dynamics and low light, and the depth estimation of multiple viewpoints was merged in a probabilistic manner to build a semidense point cloud map.…”
Section: Related Work
confidence: 99%
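SOFT2 [32], cited in the excerpt above, takes its name from a point-to-epipolar-line metric. As an illustration only (not the authors' implementation), the sketch below computes the standard point-to-epipolar-line distance from a fundamental matrix; the matrix F and the correspondence (x1, x2) are hypothetical values.

import numpy as np

def point_to_epipolar_line_distance(F, x1, x2):
    # Epipolar line in image 2 induced by x1: l' = F @ x1 = (a, b, c).
    line = F @ x1
    # Point-to-line distance: |x2 . l'| / sqrt(a^2 + b^2).
    return abs(x2 @ line) / np.hypot(line[0], line[1])

# Hypothetical fundamental matrix and correspondence (homogeneous pixel coordinates).
F = np.array([[0.0,   -1e-4,  0.02],
              [1e-4,   0.0,  -0.03],
              [-0.02,  0.03,  1.0]])
x1 = np.array([320.0, 240.0, 1.0])
x2 = np.array([318.5, 242.0, 1.0])
print(point_to_epipolar_line_distance(F, x1, x2))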
“…In the front-end, two common methods are the feature-based method [21, 23, 24, 25, 26] and the direct method (including the semi-direct method) [27, 28, 29]. ORB-SLAM2 [24] is a classic feature-point-based Visual SLAM system that estimates camera motion based on feature point extraction, matching, and optimization of the reprojection error.…”
Section: Related Work
confidence: 99%
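The reprojection error mentioned in the excerpt above is the quantity that feature-based systems such as ORB-SLAM2 minimize over camera poses and landmarks. A minimal sketch follows, assuming a simple pinhole model; the intrinsics K, pose (R, t), and landmark/observation arrays are illustrative placeholders, not values from any cited system.

import numpy as np

def reprojection_residuals(K, R, t, points_3d, observations):
    # Transform landmarks from the world frame into the camera frame.
    cam = (R @ points_3d.T).T + t
    # Project with the pinhole model and perform the perspective division.
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    # Pixel residuals that a PnP or bundle-adjustment solver would minimize.
    return observations - proj

# Hypothetical intrinsics, identity pose, and two hypothetical landmarks.
K = np.array([[718.0,   0.0, 607.0],
              [  0.0, 718.0, 185.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.2, 15.0]])
obs = (K @ pts.T).T
obs = obs[:, :2] / obs[:, 2:3] + 0.3   # simulate small measurement noise
print(reprojection_residuals(K, R, t, pts, obs))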
“…When recovering the depth of road feature points from stereo disparity, the depth uncertainty is greater than for non-road feature points. Consequently, the spatial accuracy of ground feature points is lower, making them unsuitable for direct use in vehicle pose estimation and road model fitting [18, 21, 22].…”
Section: Introduction
confidence: 99%
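The excerpt's claim about depth uncertainty follows from stereo triangulation: with focal length f (in pixels), baseline B, and disparity d, depth is Z = f·B/d, and first-order error propagation gives σ_Z ≈ Z²·σ_d/(f·B), so uncertainty grows quadratically with depth. The sketch below illustrates this with assumed, KITTI-like numbers; none of the values come from the cited papers.

import numpy as np

def stereo_depth_and_sigma(disparity_px, focal_px, baseline_m, sigma_d_px=0.5):
    # Depth from stereo disparity: Z = f * B / d.
    Z = focal_px * baseline_m / disparity_px
    # First-order uncertainty: sigma_Z ~= Z^2 * sigma_d / (f * B).
    sigma_Z = (Z ** 2) * sigma_d_px / (focal_px * baseline_m)
    return Z, sigma_Z

f, B = 718.0, 0.54                 # assumed KITTI-like focal length (px) and baseline (m)
for d in [40.0, 10.0, 2.5]:        # large disparity = near point, small = distant road point
    Z, s = stereo_depth_and_sigma(d, f, B)
    print(f"disparity {d:5.1f} px -> depth {Z:6.2f} m, sigma {s:5.2f} m")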