2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00512
An Extensible Multi-Sensor Fusion Framework for 3D Imaging

Cited by 8 publications (3 citation statements) | References 32 publications
“…Cao et al. (2018) show that collaborative fusion of the different sources via a CNN, together with the use of semantic features, yields better image semantic segmentation results. Siddiqui et al. (2020) use three data sources (lidar, single-photon lidar and camera) to improve semantic segmentation of data for autonomous cars. use two data sources (lidar and hyperspectral imagery) to better classify the points and pixels in their data.…”
Section: Multi-source Semantic Segmentation
confidence: 99%
“…Interpreting spatio-temporal data visualizations is an example of a larger research problem in vehicle telemetry analytics and the development of multi-sensor autonomous vehicles. Research on data visualization techniques for developing multi-sensor vehicles involves both autonomous vehicle operation systems [6] [7] and human analyst systems [8] [9], where one goal is to minimize the visual interference that occurs when working with real-time multi-sensor data. In the context of human analyst systems there are two forms of visual interference: occlusion [10] [11] and loss of context when zooming in to reveal detail [12].…”
Section: Introduction
confidence: 99%
“…The challenge of depth recovery is being tackled in many ways, with data fusion techniques used to combine single-pixel [19,26,24,11], dual-pixel [11] and SPAD arrays [23,15]. However, we approach this challenge differently, with an emphasis on a compact solution and new applications for wearable tech, drones and mobility vehicles.…”
confidence: 99%