2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00115
Road Scene Understanding by Occupancy Grid Learning from Sparse Radar Clusters using Semantic Segmentation

Abstract: Occupancy grid mapping is an important component in road scene understanding for autonomous driving. It encapsulates information about the drivable area and road obstacles, and enables safe autonomous driving. Radars are an emerging sensor in autonomous vehicle vision, becoming more widely used due to their long sensing range, low cost, and robustness to severe weather conditions. Despite recent advances in deep learning technology, occupancy grid mapping from radar data is still mostly done using classical filtering…

Cited by 56 publications (41 citation statements)
References 26 publications (61 reference statements)
“…Even though CNNs function extraordinarily well on images, they can also be tried and applied to other sensors that can yield image-like data [ 108 ]. The two-dimensional radar grid representations accumulated according to different occupancy grid map algorithms have already been exploited in deep learning domains for various autonomous system tasks, such as static object classification [ 109 , 110 , 111 , 112 , 113 , 114 ] and dynamic object classification [ 115 , 116 , 117 ]. In this case, the objects denote any road user within an autonomous system environment, like the pedestrian, vehicles, motorcyclists, etc.…”
Section: Detection and Classification of Radar Signals Using Deep
confidence: 99%
“…One is the occupancy-based grid-mapping, and the other is the amplitude-based grid-mapping [ 13 ]. Traditionally, the most widely used method to perform grid-mapping is using an inverse sensor model (ISM) and Bayesian filtering techniques [ 14 ].…”
Section: Data Models and Representations From MMW Radar
confidence: 99%
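The citation above refers to the classical pipeline: an inverse sensor model (ISM) assigns per-cell occupancy probabilities for each scan, and Bayesian filtering fuses scans over time, conveniently done in log-odds form. Below is a minimal illustrative sketch of that log-odds update — not the paper's method; the ISM probabilities and cell lists are assumed placeholder values.

```python
import numpy as np

# Assumed ISM probabilities (illustrative, not from the paper):
P_OCC, P_FREE, P_PRIOR = 0.7, 0.3, 0.5

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_grid(log_odds, hit_cells, free_cells):
    """Fuse one scan into the grid via the standard log-odds update."""
    for r, c in hit_cells:   # cells the ISM marks as occupied
        log_odds[r, c] += logit(P_OCC) - logit(P_PRIOR)
    for r, c in free_cells:  # cells the beam passed through
        log_odds[r, c] += logit(P_FREE) - logit(P_PRIOR)
    return log_odds

grid = np.zeros((4, 4))  # log-odds 0 corresponds to the prior p = 0.5
grid = update_grid(grid, hit_cells=[(1, 2)], free_cells=[(1, 0), (1, 1)])
prob = 1.0 / (1.0 + np.exp(-grid))  # back to occupancy probabilities
```

Repeated updates on the same cell accumulate evidence, which is why the log-odds form is preferred over multiplying probabilities directly.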
“…Besides the methods of building grid maps listed above, new studies try to use deep learning to solve the same problem. They use ground truth from LIDAR and supervised learning to realize occupancy grid-mapping for static obstacles, from radar data on nuScenes [ 14 ].…”
Section: Data Models and Representations From MMW Radar
confidence: 99%
“…Driving scene understanding is a crucial task for autonomous cars, and it has taken a big leap with recent advances in artificial intelligence [1]. Collision-free space (or simply freespace) detection is a fundamental component of driving scene understanding [27]. Freespace detection approaches generally classify each pixel in an RGB or depth/disparity image as drivable or undrivable.…”
Section: Introduction
confidence: 99%
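The citation above frames freespace detection as per-pixel binary classification over an image. A minimal sketch of that final step — thresholding a segmentation network's per-pixel logits into a drivable/undrivable mask — is shown below; `logits` and the threshold are illustrative assumptions, not the cited approach's actual output.

```python
import numpy as np

def freespace_mask(logits, threshold=0.5):
    """Label each pixel as drivable (True) or undrivable (False).

    `logits` stands in for a segmentation network's raw per-pixel
    scores; the sigmoid maps them to probabilities.
    """
    prob = 1.0 / (1.0 + np.exp(-logits))
    return prob > threshold

# Toy 2x2 "image" of logits (illustrative values only):
logits = np.array([[2.0, -1.0],
                   [0.0, 3.5]])
mask = freespace_mask(logits)
```

A pixel with logit 0 sits exactly at probability 0.5 and, with a strict `>` comparison, is labeled undrivable — a conservative choice for a safety-relevant mask.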