2016 Fourth International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2016.51
CNN-Based Object Segmentation in Urban LIDAR with Missing Points

Cited by 17 publications (21 citation statements). References 29 publications.
“…The study results performed well on the segmentation of cars, trees, pedestrians, and so forth [16]. CNN-based object segmentation for LiDAR data was conducted on large-area LiDAR data of New York City and showed good results in vehicle object detection and segmentation even with missing points [17]. In 2019, Shaoshuai Shi et al. proposed the PointRCNN model, which generates and detects objects from 3D point cloud data. This work comprises two stages: bottom-up 3D proposal generation from the point cloud data and refinement of the 3D proposals in canonical coordinates.…”
Section: Related Work
confidence: 81%
“…The signed angle feature described in [24] measures the elevation of the vector formed by two consecutive points and indicates the convexity or concavity of three consecutive points. Input features converted from depth images of normalized depth (D), normalized relative height (H), angle with up-axis (A), signed angle (S), and missing mask (M) were used in [11]. We use DHS in this work to project the 3D depth image to 2D since, as shown in [11], adding more channels did not significantly affect classification accuracy.…”
Section: Related Work
confidence: 99%
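The signed angle feature quoted above can be sketched as follows. This is a hypothetical reconstruction from the description only (elevation of the vector between consecutive points, with the sign indicating convexity or concavity); the cited papers' exact formulation may differ, and the function name `signed_angle` is an assumption.

```python
import numpy as np

def signed_angle(points):
    """Signed angle feature along a scanline of 3D points, shape (N, 3).

    For each pair of consecutive points, compute the elevation of the
    connecting vector above the horizontal (x-y) plane. The sign of the
    angle change across three consecutive points hints at convexity
    (upward bend) versus concavity (downward bend).
    Hypothetical sketch -- not the papers' exact formulation.
    """
    d = np.diff(points, axis=0)               # vectors between consecutive points
    horiz = np.linalg.norm(d[:, :2], axis=1)  # length of each vector in the x-y plane
    return np.arctan2(d[:, 2], horiz)         # signed elevation angle per vector
```

For example, three points climbing a 45-degree slope yield a constant angle of pi/4, while a step down yields a negative angle, so sign changes mark bends in the surface.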
“…Input features converted from depth images of normalized depth (D), normalized relative height (H), angle with up-axis (A), signed angle (S), and missing mask (M) were used in [11]. We use DHS in this work to project the 3D depth image to 2D since, as shown in [11], adding more channels did not significantly affect classification accuracy. Keeping the total number of channels at three allows us to initialize our training from networks with pre-trained weights.…”
Section: Related Work
confidence: 99%
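The three-channel trick quoted above can be sketched as stacking the D, H, and S feature maps into an image with the same layout as an RGB input, so that an ImageNet-pretrained CNN accepts it without architectural changes. The per-channel min-max normalization here is an assumption; the cited work may normalize differently, and `make_dhs_image` is a hypothetical helper name.

```python
import numpy as np

def make_dhs_image(depth, height, signed_angle):
    """Stack normalized depth (D), relative height (H), and signed angle (S)
    maps into an H x W x 3 array, mirroring an RGB layout so that a CNN
    with pre-trained three-channel weights can consume it directly.
    The min-max normalization is an assumption, not the papers' recipe.
    """
    def norm(x):
        rng = x.max() - x.min()
        # Constant channels normalize to all zeros to avoid division by zero.
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return np.stack([norm(depth), norm(height), norm(signed_angle)], axis=-1)
```

Because the result has exactly three channels in [0, 1], it can be fed to a standard pretrained backbone after the backbone's usual input scaling.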