2019 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2019.8814065
Occlusion aware sensor fusion for early crossing pedestrian detection

Abstract: Early and accurate detection of crossing pedestrians is crucial in automated driving to execute emergency manoeuvres in time. This is a challenging task in urban scenarios, however, where people are often occluded (not visible) behind objects, e.g. other parked vehicles. In this paper, an occlusion aware multi-modal sensor fusion system is proposed to address scenarios with crossing pedestrians behind parked vehicles. Our proposed method adjusts the detection rate in different areas based on sensor visibility. …
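The abstract's idea of adjusting the detection rate per area by sensor visibility can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, false-alarm rate, and weights below are all hypothetical, and the example only shows the general principle of scaling a sensor's detection probability by how much of a region it can actually observe before a Bayesian existence update.

```python
# Illustrative sketch (not the paper's method): an occlusion-aware
# existence update, where the probability of detection p_d used in a
# Bayesian filter is scaled by the sensor's visibility of the region.
# All names and numeric values are hypothetical.

def visibility_weighted_pd(base_pd, visibility):
    """Scale a sensor's nominal detection probability by the fraction
    of the target region the sensor can actually observe (0..1)."""
    return base_pd * max(0.0, min(1.0, visibility))

def update_existence(prior, detected, p_d, p_fa=0.05):
    """One Bayesian existence update for a hypothesised pedestrian.
    detected: whether the sensor reported a detection in the region.
    p_d: occlusion-adjusted detection probability; p_fa: false-alarm rate."""
    if detected:
        num = p_d * prior
        den = p_d * prior + p_fa * (1.0 - prior)
    else:
        num = (1.0 - p_d) * prior
        den = (1.0 - p_d) * prior + (1.0 - p_fa) * (1.0 - prior)
    return num / den

# A camera seeing only 10% of the area behind a parked car barely lowers
# the belief on a miss, while a radar detection still raises it.
prior = 0.5
cam_pd = visibility_weighted_pd(0.9, visibility=0.1)    # heavily occluded
radar_pd = visibility_weighted_pd(0.6, visibility=0.8)  # mostly visible
belief = update_existence(prior, detected=False, p_d=cam_pd)    # small drop
belief = update_existence(belief, detected=True, p_d=radar_pd)  # rises
```

The key design point, consistent with the abstract, is that a miss from a sensor that cannot see into an occluded area carries little negative evidence, so a hidden pedestrian hypothesis is not prematurely discarded.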

Cited by 24 publications (18 citation statements) · References 29 publications
“…Note that even VRUs in occlusion (see Fig. 5a, 5b, 5g) are often classified correctly caused by the multi-path propagation of radar [8]. This, and its uniform performance in darkness/shadows/bright environments makes radar a useful complementary sensor for camera.…”
Section: Discussion (mentioning)
confidence: 95%
“…Target-level class labels are valuable for sensor fusion operating on intermediate-level, i.e. handling multiple measurements per object [8], [9]. Our targetlevel classification is more robust than cluster-wise classification where the initial clustering step must manage to separate radar targets from different objects, and keep those coming from the same object together, see Fig.…”
Section: Introduction (mentioning)
confidence: 99%
“…In the past years, many sensor fusion methods have been proposed for autonomous driving applications [13][14][15][16]. In addition, [17][18][19][20][21] examine the existing problems of the current sensor fusion algorithm. According to different data processing methods, sensor fusion can be divided into three levels: the data layer, feature layer and decision layer.…”
Section: Introduction (mentioning)
confidence: 99%
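The citing passage above distinguishes three fusion levels (data layer, feature layer, decision layer). As a hedged, toy-scale illustration only, and not the approach of any cited work, the distinction can be sketched with hypothetical functions:

```python
# Illustrative only: a toy sketch of the three sensor-fusion levels
# (data layer, feature layer, decision layer). Function names, inputs,
# and weights are hypothetical, not from the cited works.

def data_level_fusion(radar_points, lidar_points):
    """Data layer: pool raw measurements from both sensors before any
    per-sensor processing (here, simple concatenation of point lists)."""
    return radar_points + lidar_points

def feature_level_fusion(cam_features, radar_features):
    """Feature layer: each sensor extracts features independently; the
    feature vectors are then joined for a single downstream classifier."""
    return cam_features + radar_features

def decision_level_fusion(p_cam, p_radar, w_cam=0.6, w_radar=0.4):
    """Decision layer: each sensor produces its own detection score in
    [0, 1]; only the scores are combined, e.g. by a weighted sum."""
    return w_cam * p_cam + w_radar * p_radar
```

The trade-off the taxonomy captures: fusing earlier (data layer) preserves the most information but demands tight sensor calibration and synchronisation, while fusing later (decision layer) is simpler and more modular but discards cross-sensor cues.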
“…Another approach for addressing the issues of pedestrian recognition in blind areas is a remote sensing-based scheme using cameras and lidars [ 9 ]. Because these sensors can recognize different measurement areas, the sensor fusion approach to detect pedestrians in the blind area (occluded pedestrians) has been studied [ 10 ]. However, these techniques assumed that the pedestrians are partially visible and almost all studies do not assume the pedestrians that are completely invisible.…”
Section: Introduction (mentioning)
confidence: 99%