2016
DOI: 10.48550/arxiv.1611.07759
Preprint

Multi-View 3D Object Detection Network for Autonomous Driving

Cited by 35 publications (28 citation statements)
References 24 publications
“…Li [15] used 3D point cloud data and proposed to use 3D convolutions on a voxelized representation of point clouds. Chen et al [3] combined image and 3D point clouds with a fusion network. They exploited 2D convolutions in BEV; however, they used hand-crafted height features as input.…”
Section: 3D Object Detection
confidence: 99%
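
The hand-crafted BEV height features mentioned in the statement above can be sketched as follows. This is an illustrative reconstruction, not the exact pipeline of [3]: the range, resolution, and slice count are assumptions, and real implementations vectorize and tune these choices.

```python
import numpy as np

def bev_height_maps(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                    res=0.1, n_slices=4):
    """Discretize an (N, 3) LiDAR point cloud into BEV height-feature maps.

    Hypothetical sketch of hand-crafted height features: the vertical
    extent is split into n_slices, and each BEV cell stores the maximum
    point height falling into that slice.
    """
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    maps = np.zeros((n_slices, nx, ny), dtype=np.float32)

    # Keep only points inside the BEV region of interest.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Cell indices in the ground plane.
    xi = ((pts[:, 0] - x_range[0]) / res).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / res).astype(int)

    # Assign each point to a vertical slice.
    z = pts[:, 2]
    z_min, z_max = z.min(), z.max() + 1e-6
    si = np.clip(((z - z_min) / (z_max - z_min) * n_slices).astype(int),
                 0, n_slices - 1)

    # Unbuffered max-reduce: each cell keeps its highest point per slice.
    np.maximum.at(maps, (si, xi, yi), z)
    return maps
```

Feeding such maps to a 2D CNN keeps inference cheap compared with 3D convolutions over a dense voxel grid, at the cost of discarding fine vertical structure.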
“…LiDARs have been widely used for 3D object detection and tracking in autonomous driving applications in recent years. The majority of LiDAR-based methods either use 3D voxels [12,35] or 2D projections [13,5,29,31] for point cloud representation. Voxel-based methods are usually slow as a result of the voxel grid's high dimensionality, and projection-based methods might suffer from large variances in object shapes and sizes depending on the projection plane.…”
Section: Single-Modality Methods
confidence: 99%
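
To make the contrast drawn above concrete, a minimal 3D voxel occupancy grid — the representation whose high dimensionality the statement blames for slow voxel-based methods — might look like this. The extent and resolution are illustrative assumptions.

```python
import numpy as np

def voxelize(points, extent=((0.0, 70.0), (-40.0, 40.0), (-3.0, 1.0)),
             res=0.25):
    """Binary 3D occupancy grid from an (N, 3) point cloud.

    Even at this coarse 0.25 m resolution the grid has
    280 * 320 * 16 ≈ 1.4M cells, and the count grows cubically as the
    resolution is refined — the cost that projection-based methods avoid.
    """
    (x0, x1), (y0, y1), (z0, z1) = extent
    shape = (int((x1 - x0) / res), int((y1 - y0) / res), int((z1 - z0) / res))
    grid = np.zeros(shape, dtype=bool)

    # Drop points outside the grid extent.
    mask = ((points[:, 0] >= x0) & (points[:, 0] < x1) &
            (points[:, 1] >= y0) & (points[:, 1] < y1) &
            (points[:, 2] >= z0) & (points[:, 2] < z1))

    # Map each surviving point to its voxel index and mark it occupied.
    idx = ((points[mask] - np.array([x0, y0, z0])) / res).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

A 2D projection of the same scene (e.g. the BEV height maps above, or a range image) collapses one axis, trading shape fidelity for a representation that standard 2D CNNs handle efficiently.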
“…Sensory fusion approaches have been widely used in computer vision. LIDAR and camera are a popular sensor combination employed in detection and tracking [21], [22], [23]. Other papers also exploit radar [24].…”
Section: Related Work
confidence: 99%
“…1) Detection θ_W^det(x): We exploit object proposals in order to reduce the search space over all possible detections. In particular, we employ the MV3D detector [22] to produce oriented 3D object proposals from LIDAR and RGB data (i.e., regions in 3D where there is a high probability that a vehicle is present). To make sure that the tracker produces accurate trajectories, we need a classifier that decides whether or not an object proposal is a true positive (i.e., actually represents a vehicle).…”
Section: Deep Scoring and Matching
confidence: 99%
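
The propose-then-classify step described above can be caricatured as a score threshold followed by greedy overlap-based association. This is a hypothetical sketch, not the paper's learned scoring and matching networks; the thresholds, the axis-aligned 2D boxes, and the greedy strategy are all simplifying assumptions.

```python
def iou_2d(a, b):
    """Axis-aligned IoU between two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_detections(tracks, detections, scores,
                     score_thr=0.5, iou_thr=0.3):
    """Keep proposals the classifier accepts (score >= score_thr), then
    greedily assign each to the best-overlapping unused track.

    Returns a list of (detection_index, track_index) pairs.
    """
    keep = [i for i, s in enumerate(scores) if s >= score_thr]
    matches, used = [], set()
    for i in keep:
        best_j, best_iou = -1, iou_thr
        for j, t in enumerate(tracks):
            if j in used:
                continue
            iou = iou_2d(detections[i], t)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            matches.append((i, best_j))
    return matches
```

In the cited work both the proposal score and the match cost come from learned networks; the thresholding here only stands in for the "true positive" decision the statement describes.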