2019 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2019.8814058

BoxNet: A Deep Learning Method for 2D Bounding Box Estimation from Bird's-Eye View Point Cloud

Abstract: We present a learning-based method to estimate the object bounding box from its 2D bird's-eye view (BEV) LiDAR points. Our method, entitled BoxNet, exploits a simple deep neural network that can efficiently handle unordered points. The method takes as input the 2D coordinates of all the points, and the output is a vector consisting of both the box pose (position and orientation in the LiDAR coordinate system) and its size (width and length). In order to deal with the angle discontinuity problem, we propose to estim…
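The abstract does not spell out BoxNet's architecture, but a network that "efficiently handles unordered points" is commonly built PointNet-style: a shared per-point MLP, a symmetric max-pool over points, and a regression head. Below is a minimal, untrained sketch of that input/output shape; the [cx, cy, sin θ, cos θ, w, l] output layout and the sin/cos angle encoding are assumptions (a common workaround for angle discontinuity), not details confirmed by the truncated abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random, untrained weights -- this only illustrates the I/O structure.
W1 = rng.standard_normal((2, 64))   # shared per-point MLP: 2D coords -> 64 features
W2 = rng.standard_normal((64, 6))   # regression head: pooled features -> 6-vector

def boxnet_sketch(points):
    """Permutation-invariant box regressor sketch.

    points: (N, 2) array of BEV LiDAR point coordinates.
    Returns a 6-vector, here interpreted as [cx, cy, sin(theta), cos(theta), w, l].
    """
    h = np.maximum(points @ W1, 0.0)   # shared per-point MLP with ReLU
    g = h.max(axis=0)                  # symmetric max-pool: order-invariant
    return g @ W2                      # regression head
```

Because the pooling is symmetric, shuffling the input points leaves the output unchanged, which is the property the abstract's "unordered points" claim requires.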

Cited by 11 publications (3 citation statements)
References 24 publications
“…Application of pose estimation: To demonstrate the utility of accumulated radar points for downstream applications, we apply a pose estimation method, i.e., BoxNet [25], on the accumulated 2D radar points via our full velocity. Table 3: Comparison of pose estimation performance: average error in center and orientation as well as Intersection over Union (IoU), by using BoxNet [25] on radar points accumulated using our velocity and the radial velocity as a baseline.…”
Section: Quantitative Results
confidence: 99%
“…In Fig. 2, each heat map shows point-wise velocity error under different depth ranges, i.e., [0, 25), [25, 50), and [50, ∞) meters, as well as various α ranges, i.e., [0, 30), [30, 60), and [60, 90] degrees, where α is the angle between the actual moving direction and the radial direction of a radar point and ranges from 0 to 90 degrees.…”
Section: Velocity Estimation Error for Different Depths and α
confidence: 99%
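The depth and α binning described in the excerpt above can be reproduced in a few lines. This is a sketch under stated assumptions: 2D position and velocity vectors, and function names of our own choosing, none of which come from the cited paper.

```python
import numpy as np

def alpha_deg(position, velocity):
    """Angle alpha (degrees, in [0, 90]) between a radar point's actual
    moving direction and its radial (line-of-sight) direction."""
    radial = position / np.linalg.norm(position)
    motion = velocity / np.linalg.norm(velocity)
    # Absolute value folds the angle into [0, 90] degrees.
    c = abs(float(radial @ motion))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def depth_alpha_bin(position, velocity):
    """Assign a point to one of the 3x3 (depth, alpha) cells from the excerpt:
    depth in [0, 25), [25, 50), [50, inf) meters;
    alpha in [0, 30), [30, 60), [60, 90] degrees."""
    depth = float(np.linalg.norm(position))
    d_bin = 0 if depth < 25 else 1 if depth < 50 else 2
    a = alpha_deg(position, velocity)
    a_bin = 0 if a < 30 else 1 if a < 60 else 2
    return d_bin, a_bin
```

For example, a point 10 m away moving purely radially falls in the nearest-depth, smallest-α cell, while a point 30 m away moving tangentially falls in the middle-depth, [60, 90]° cell.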
“…In such a way, it places a high demand on the tooth centroid prediction in our method. Previous detection methods for 3D point clouds [3,13,18] generally use the furthest point sampling (FPS) method to uniformly select sampling points for generating proposals. For tooth centroid detection, the points sampled by FPS generally contain irrelevant points, such as those located on the tooth crown and gingiva, which may lead to inaccurate proposals for tooth centroids.…”
Section: Introduction
confidence: 99%
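Furthest point sampling, referenced in the last excerpt, is a standard greedy procedure: repeatedly pick the point farthest from the set already selected, yielding spatially uniform samples. A minimal NumPy sketch (a generic version, not the implementation of any cited paper):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS over an (N, D) point array; returns k selected indices."""
    chosen = [0]                                   # start from an arbitrary point
    # dist[i] = distance from point i to the nearest already-chosen point.
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(dist.argmax())                   # farthest from the current set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.asarray(chosen)
```

The excerpt's complaint is visible in this sketch: FPS optimizes spatial coverage only, so nothing prevents it from selecting geometrically extreme but semantically irrelevant points (e.g., on the crown or gingiva) when the goal is centroid proposals.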