2020
DOI: 10.3390/s20041110

Stabilization and Validation of 3D Object Position Using Multimodal Sensor Fusion and Semantic Segmentation

Abstract: The stabilization and validation process of the measured position of objects is an important step for high-level perception functions and for the correct processing of sensory data. The goal of this process is to detect and handle inconsistencies between different sensor measurements, which result from the perception system. The aggregation of the detections from different sensors consists in the combination of the sensorial data in one common reference frame for each identified object, leading to the creation…

Cited by 63 publications (30 citation statements)
References 41 publications
“…This means that in case of a camera failure their solution would not function properly. The authors in [28] and [29] address this issue by ensuring that each sensor is able to perform its role reliably and independently. The overall system performance improves when all sensors are functioning; however, if one sensor stops working, the whole system does not crash.…”
Section: B. Pedestrian Tracking
confidence: 99%
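The graceful-degradation idea quoted above (the system keeps producing an estimate from whichever sensors are still working) can be illustrated with a minimal inverse-variance weighted fusion sketch. The function name, sensor names, and variance values are illustrative assumptions, not the cited papers' actual method or API.

```python
import numpy as np

def fuse_positions(measurements):
    """Fuse position estimates from whichever sensors are available.

    measurements: dict mapping sensor name -> (position, variance),
    or None if that sensor has failed. Uses inverse-variance
    weighting (an illustrative stand-in for the cited fusion methods).
    """
    available = {k: v for k, v in measurements.items() if v is not None}
    if not available:
        return None  # every sensor failed; no estimate possible
    weights = {k: 1.0 / var for k, (_, var) in available.items()}
    total = sum(weights.values())
    fused = sum(w * np.asarray(available[k][0])
                for k, w in weights.items()) / total
    return fused

# Camera has failed (None); lidar and radar still yield a fused estimate,
# so the system degrades gracefully instead of crashing.
readings = {
    "camera": None,
    "lidar": (np.array([10.2, 4.9]), 0.04),
    "radar": (np.array([10.0, 5.1]), 0.25),
}
print(fuse_positions(readings))
```

Sensors with lower variance dominate the estimate, and a failed sensor simply drops out of the weighted sum.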
“…The overall system performance improves when all sensors are functioning; however, if one sensor stops working, the whole system does not crash. The solution in [28] uses deep learning to fuse the different modalities, while the method presented in [29] uses a combination of an Unscented Kalman Filter (UKF) and a single-layer perceptron to fuse the data. In another approach [30], the authors use deep neural networks to jointly detect and track 3D objects using a stereo camera system.…”
Section: B. Pedestrian Tracking
confidence: 99%
“…where ⊗ is the convolution operator in (6). To improve calculation speed, the fast Fourier transform (FFT) is used, computed as presented in (7).…”
Section: Learning Spatial Context Model
confidence: 99%
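The FFT speed-up quoted above rests on the convolution theorem: a linear convolution equals the inverse FFT of the product of the (zero-padded) FFTs. A minimal NumPy sketch, not the cited paper's implementation:

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution of x and h via the FFT.

    Zero-padding both signals to length len(x)+len(h)-1 turns the
    circular convolution implied by the DFT into a linear one.
    """
    n = len(x) + len(h) - 1
    X = np.fft.rfft(x, n)   # FFT of x, zero-padded to length n
    H = np.fft.rfft(h, n)   # FFT of h, zero-padded to length n
    return np.fft.irfft(X * H, n)  # pointwise product back to time domain

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.25])
direct = np.convolve(x, h)   # O(N*M) direct convolution
fast = fft_convolve(x, h)    # O(N log N) via FFT
print(np.allclose(direct, fast))  # True
```

For large kernels the FFT route is asymptotically much cheaper than direct convolution, which is exactly why tracking methods of this kind use it.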
“…Visual Object Tracking (VOT) is an active research topic in computer vision and machine learning due to extensive applications in areas including gesture recognition [1], sports analysis [2], visual surveillance [3], medical diagnosis [4], autonomous vehicles [5,6] and radar navigation systems [7][8][9]. Various factors, such as partial or full occlusion, background clutter, illumination variation and deformation, complicate the tracking problem [10][11][12].…”
Section: Introduction
confidence: 99%
“…In this work, we assume the use of video cameras. Second, the perception module [34], which processes and fuses the measurements coming from the sensors in order to provide the vehicle with relevant information about the driving context (e.g. velocities, free drivable areas, obstacles' locations, etc.).…”
Section: B. Visual Attention For Autonomous Vehicles
confidence: 99%