2019
DOI: 10.48550/arxiv.1906.00208
Preprint

RGB and LiDAR fusion based 3D Semantic Segmentation for Autonomous Driving

Cited by 4 publications (5 citation statements)
References 0 publications
“…(2) Ant species that aggressively trim and/or consume acacia leaves (i.e. C. nigriceps and T. penzigi) make their host trees less verdant, which would likely be detected with infrared or RGB colour sensors (Madawy et al, 2019). (3) Tetraponera penzigi is the best disperser among the four ant species (Stanton et al, 2002).…”
Section: Limitations and Future Improvements
confidence: 99%
“…Another approach [21] used a network consisting of three CNN encoder-decoder sub-networks to fuse RGB images, lidar, and radar for road detection. Additionally, Khaled et al in [23] use two networks, SqueezeSeg and PointSeg, for semantic segmentation and to apply a feature fusion level to fuse a 3D lidar with an RGB camera for pedestrian detection.…”
Section: Introduction
confidence: 99%
“…Semantic segmentation is a well-developed algorithm based on image data [23] and that leverages recent work on sensor fusion carried out in [21] to handle multi-modal fusion. In this work, an asymmetrical CNN architecture was used that consisted of three encoder sub-networks, where each sub-network was assigned to a particular sensor stream: camera, lidar, and radar.…”
Section: Introduction
confidence: 99%
“…Panoramic camera and multi-sensor fusion are good solutions [6] [7]. For example, by installing multiple cameras on the vehicle, or adding additional ultrasonic radar and LiDAR sensors, we can increase the amount of acquired information [8] [9]. However, these methods require additional calibration and matching of multiple sensors, and repeated calibration and matching are required for different designed hardware collocation methods [10].…”
Section: Introduction
confidence: 99%