2022
DOI: 10.1109/tmm.2021.3087017
ToF and Stereo Data Fusion Using Dynamic Search Range Stereo Matching

Cited by 7 publications (6 citation statements)
References 46 publications
“…The fusion of two or more types of depth-sensing devices has been studied in the past. Notably, the fusion of raw depth maps from two different sensors, such as RGB stereo and time-of-flight (ToF) [70,13,2,21,16,38,3,17], RGB stereo and Lidar [36], RGB and Lidar [55,48,50], RGB stereo and monocular depth [40], as well as the fusion of multiple RGB stereo algorithms [53], is well studied and explored. Yet these methods target specific sensor combinations and are not inherently equipped with 3D reasoning.…”
Section: Multi-sensor Depth Fusion
confidence: 99%
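A common baseline for the raw-depth-map fusion surveyed in the excerpt above is per-pixel confidence-weighted averaging of two aligned depth maps, e.g. one from ToF and one from stereo. The sketch below is a minimal illustration of that idea only; the function name and the confidence values are hypothetical, not any cited method's actual implementation:

```python
import numpy as np

def fuse_depth_maps(depth_a, depth_b, conf_a, conf_b, eps=1e-6):
    """Fuse two aligned depth maps by per-pixel confidence-weighted averaging.

    depth_a, depth_b: HxW depth maps from two sensors (e.g. ToF and stereo).
    conf_a, conf_b:   HxW per-pixel confidence weights in [0, 1].
    Pixels where both confidences are ~0 are returned as 0 (invalid).
    """
    num = conf_a * depth_a + conf_b * depth_b
    den = conf_a + conf_b
    return np.where(den > eps, num / np.maximum(den, eps), 0.0)

# Toy example: ToF is trusted at near range, stereo in textured regions.
tof    = np.array([[1.0, 2.0], [3.0, 4.0]])
stereo = np.array([[1.2, 2.0], [2.8, 0.0]])
c_tof  = np.array([[0.9, 0.5], [0.5, 1.0]])
c_st   = np.array([[0.1, 0.5], [0.5, 0.0]])
fused = fuse_depth_maps(tof, stereo, c_tof, c_st)
```

Real systems derive the confidences from sensor-specific cues (ToF amplitude, stereo matching cost), but the weighted-average core stays the same.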
“…In contrast, pre-fusion incorporates Lidar data as prior information into the stereo vision pipeline to improve the disparity map obtained by stereo matching. Using the Lidar prior to constrain the stereo matching algorithm keeps the data volume small and can speed up stereo matching [63][64][65]. Figure 9 shows the data fusion flowchart.…”
Section: Lidar + Stereo Vision
confidence: 99%
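The pre-fusion idea in this excerpt — using sparse Lidar priors to constrain the disparity search — can be sketched as below. The function name, window margin, and input layout are illustrative assumptions, not the interface of any cited method:

```python
import numpy as np

def disparity_search_ranges(lidar_disp, valid, full_range=(0, 64), margin=4):
    """Derive a per-pixel disparity search window from sparse Lidar priors.

    lidar_disp: HxW disparities projected from Lidar into the left view
                (hypothetical input layout).
    valid:      HxW bool mask marking pixels that received a Lidar point.
    Returns (d_min, d_max) per pixel: a narrow window of +/- margin around
    the prior where Lidar is available, otherwise the full search range.
    """
    d_min = np.full(lidar_disp.shape, full_range[0], dtype=np.int32)
    d_max = np.full(lidar_disp.shape, full_range[1], dtype=np.int32)
    d_min[valid] = np.clip(lidar_disp[valid] - margin, *full_range)
    d_max[valid] = np.clip(lidar_disp[valid] + margin, *full_range)
    return d_min, d_max

# Toy example: Lidar measured disparities 20 and 50 at two of four pixels.
lidar = np.array([[20, 0], [0, 50]], dtype=np.int32)
valid = np.array([[True, False], [False, True]])
d_min, d_max = disparity_search_ranges(lidar, valid)
```

Shrinking the window from 64 candidates to 2*margin+1 at covered pixels is what reduces both the matching cost volume and the runtime.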
“…Structured light is projected onto the surface of the object to be measured; the distorted patterns on the object surface are captured by two cameras and processed to obtain the positions of the corresponding points on the light stripes. Finally, combined with the intrinsic and extrinsic parameters of the two cameras, the 3D spatial information of the illuminated object surface is recovered. A structured-light binocular vision system consists of two optical cameras, a structured light source, a computer, and other hardware [69]. The structured-light binocular visual measurement model is shown in Figure 11.…”
Section: Structured Light + Stereo Vision
confidence: 99%
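The two-camera recovery step described above is, at its core, two-view triangulation from calibrated cameras. The following sketch illustrates it with the standard linear (DLT) method; the projection matrices and pixel values are hypothetical toy data, not the cited system's calibration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2: 3x4 camera projection matrices (intrinsics times extrinsics).
    x1, x2: (u, v) pixel coordinates of the same light-stripe point in
            each camera.
    Returns the 3D point in the world frame.
    """
    # Each observation contributes two linear constraints A @ X_h = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X_h = vt[-1]                 # null-space vector = homogeneous solution
    return X_h[:3] / X_h[3]

# Toy setup: identical intrinsics, camera 2 shifted by a unit baseline in x.
K = np.array([[100.0, 0, 0], [0, 100.0, 0], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate_point(P1, P2, (0.0, 0.0), (-20.0, 0.0))
```

With noise-free correspondences the SVD null vector recovers the point exactly; with real stripe detections it gives the least-squares solution.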
“…In indoor scenarios where firefighting robots perform tasks, GPS signals are often unavailable [5]. Many indoor positioning methods exist, including ultra-wideband (UWB), inertial measurement units (IMU), infrared depth sensors (IDS), and cameras [4,[6][7][8][9][10][11]. UWB offers high temporal resolution but is susceptible to interference from indoor environments [4,6].…”
Section: Introduction
confidence: 99%