2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00404
Domain Adaptation for Vehicle Detection from Bird's Eye View LiDAR Point Cloud Data

Abstract: Point cloud data from 3D LiDAR sensors is one of the most crucial sensor modalities for versatile safety-critical applications such as self-driving vehicles. Since annotating point cloud data is an expensive and time-consuming process, the use of simulated environments and 3D LiDAR sensors for this task has recently gained popularity. With simulated sensors and environments, obtaining annotated synthetic point cloud data becomes much easier. However, the ge…

Cited by 49 publications (28 citation statements)
References 29 publications
“…Moreover, upsampling the entire point cloud will lead to a significantly higher latency. A third approach is to leverage style transfer techniques: [80,40,12,20,48,21,47] render point clouds as 2D pseudo-images and enforce the renderings from different domains to be resemblant in style. However, these methods introduce an information bottleneck during rasterization [79] and they are not applicable to modern point-based 3D detectors [49].…”
Section: Previous Methods to Address the Domain Gap
confidence: 99%
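The 2D pseudo-image rendering these style-transfer pipelines start from can be illustrated with a minimal BEV rasterization. This is a sketch only: the 0.1 m cell size, the ranges, and the height/intensity/density channels are illustrative assumptions, not the configuration of any cited method.

```python
import numpy as np

def lidar_to_bev_pseudo_image(points, x_range=(0.0, 60.0), y_range=(-30.0, 30.0),
                              z_range=(-2.5, 1.5), resolution=0.1):
    """Rasterize an (N, 4) LiDAR cloud [x, y, z, intensity] into a 3-channel
    BEV pseudo-image (max height, max intensity, log point density).
    All ranges and the cell size are illustrative assumptions."""
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
         (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1]))
    pts = points[m]

    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    rows = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / resolution).astype(int)

    bev = np.zeros((h, w, 3), dtype=np.float32)
    # Channel 0: maximum normalized height per cell.
    z_norm = (pts[:, 2] - z_range[0]) / (z_range[1] - z_range[0])
    np.maximum.at(bev[:, :, 0], (rows, cols), z_norm)
    # Channel 1: maximum reflectance intensity per cell.
    np.maximum.at(bev[:, :, 1], (rows, cols), pts[:, 3])
    # Channel 2: log-scaled point density (the 64-point cap is arbitrary).
    np.add.at(bev[:, :, 2], (rows, cols), 1.0)
    bev[:, :, 2] = np.log1p(bev[:, :, 2]) / np.log(64.0)
    return bev
```

All points falling into one cell are collapsed into three scalars, which is exactly the rasterization information bottleneck the quoted passage refers to.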
“…[77,45] align the global and local features for object-level tasks. To reduce the sparsity, [59] projects the point cloud to a 2D view, while [47] projects the point cloud to bird's-eye view (BEV). [15] creates a car model set and adapts their features to the detection object features.…”
Section: Unsupervised Domain Adaptation for 2D Detection
confidence: 99%
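Feature alignment of the kind [77,45] and [15] describe is commonly implemented with an adversarial domain classifier behind a gradient-reversal layer. The sketch below shows only that generic mechanism, assuming pooled per-object feature vectors and a small `domain_classifier` MLP ending in a single logit; it is not the architecture of any cited method.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda on
    the way back, so the feature extractor learns to fool the classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def domain_adversarial_loss(features, domain_labels, domain_classifier, lambd=1.0):
    """features: (B, D) pooled object or global features from both domains.
    domain_labels: (B,) with 0 = source, 1 = target (hypothetical names)."""
    logits = domain_classifier(GradReverse.apply(features, lambd)).squeeze(-1)
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, domain_labels.float())
```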
“…Apart from UDA on object point clouds, several methods are proposed to address specific domain gaps on LiDAR point clouds, where the common factors are depth missing and sampling difference between sensors. Both [33] and [23] use CycleGAN [34] to generate more realistic LiDAR point clouds from synthetic data, i.e., sim2real. Complete & Label [31] leverages segmentation on completed surface reconstructed from sparse point cloud for better adaptation.…”
Section: 2D and 3D Unsupervised Domain Adaptation
confidence: 99%
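The sim2real mapping in [33] and [23] builds on CycleGAN's cycle-consistency constraint. Below is a minimal sketch of that term alone, assuming two generic image-to-image generators; the names `G_s2r`/`G_r2s` are assumptions, the weight 10.0 is the original CycleGAN default, and the adversarial losses of the full objective are omitted.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_s2r, G_r2s, sim_batch, real_batch, lambda_cyc=10.0):
    """Cycle-consistency term for sim2real adaptation of LiDAR pseudo-images."""
    # Simulated -> real -> simulated should reconstruct the simulated input...
    loss_sim = F.l1_loss(G_r2s(G_s2r(sim_batch)), sim_batch)
    # ...and real -> simulated -> real should reconstruct the real input.
    loss_real = F.l1_loss(G_s2r(G_r2s(real_batch)), real_batch)
    return lambda_cyc * (loss_sim + loss_real)
```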
“…A more recent LiDAR-focused domain adaptation survey [12] classifies methods into: 1) Domain-invariant data representation methods [13], [14], mainly based on hand-crafted data preprocessing to move different domains into a common representation (e.g. LiDAR data rotation and normalization), 2) Domain-invariant feature learning for finding a common representation space for the source and target domains [15], [16], 3) Normalization statistics that attempt to align the domain distributions by a normalization of the mean and variance of activations, and 4) Domain mapping, where source data is transformed, usually using GANs or adversarial training to appear like target data [17], [18], [19].…”
Section: Introduction and Prior Work
confidence: 99%
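Of these four categories, normalization statistics (3) is the simplest to sketch: an AdaBN-style pass that re-estimates BatchNorm running statistics on unlabeled target data while leaving the learned weights untouched. This assumes a trained PyTorch model with BatchNorm layers and a target-domain loader yielding input tensors; it illustrates the category, not any specific surveyed method.

```python
import torch

@torch.no_grad()
def adapt_bn_statistics(model, target_loader, device="cuda"):
    """Re-estimate BatchNorm mean/variance on the target domain (AdaBN-style)."""
    model.to(device).train()  # train mode so BN layers update running stats
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # None -> cumulative moving average in PyTorch
    for batch in target_loader:  # forward passes only: no labels, no gradients
        model(batch.to(device))
    model.eval()
    return model
```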
“…A few examples of domain mapping methods in the context of training data generation from simulated synthetic frames are [17], [18], [20], where a CycleGAN [21] is trained using unpaired simulated and real projected bird's eye view (BEV) images to generate pseudo-labeled simulated data for off-line training of a BEV YOLOv3 [22] object detection network. SqueezeSegV2 [23] is another example that uses simulators to generate large quantities of labeled spherical projections of synthetic LiDAR data to train perception models.…”
Section: Introduction and Prior Work
confidence: 99%
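The spherical projection SqueezeSegV2 trains on maps each LiDAR return to (azimuth, inclination) pixel coordinates. The sketch below is one common formulation; the 64×512 grid and the vertical field-of-view bounds are illustrative assumptions, not the paper's exact sensor configuration.

```python
import numpy as np

def spherical_projection(points, h=64, w=512, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 4) LiDAR cloud [x, y, z, intensity] onto a 2-channel
    range image (depth, intensity). Grid size and FOV are assumptions."""
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    depth = np.linalg.norm(points[:, :3], axis=1)

    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h, 0, h - 1).astype(int)

    image = np.zeros((h, w, 2), dtype=np.float32)
    # Write farthest points first so nearer returns overwrite them per pixel
    # (a common simplification relying on last-write-wins fancy indexing).
    order = np.argsort(-depth)
    image[v[order], u[order], 0] = depth[order]
    image[v[order], u[order], 1] = intensity[order]
    return image
```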