2017 IEEE Intelligent Vehicles Symposium (IV) 2017
DOI: 10.1109/ivs.2017.7995901

Mastering data complexity for autonomous driving with adaptive point clouds for urban environments

Cited by 13 publications (5 citation statements); References 6 publications
“…Directly converting 3D point cloud data into a 2D image inevitably loses information; some methods therefore exploit the LiDAR scanner's operating principle and target the raw data from the LiDAR sensor, known as packet data. Yin et al. [15] focused on this raw packet data and on the LiDAR system's data format. In a previous study [3], we proposed converting LiDAR packet data into a 2D matrix and then compressing it with an existing image compression method.…”
Section: Related Work
confidence: 99%
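The packet-to-matrix scheme described in that excerpt can be sketched as follows. This is a minimal illustration, not the method of [3] or [15]: the firing order, the 64-laser layout, 16-bit range quantization, and zlib as a stand-in for the image codec are all assumptions.

```python
import struct
import zlib

def packets_to_range_matrix(ranges, n_lasers=64):
    """Arrange a flat stream of quantized range returns (one block of
    n_lasers values per firing) into a 2D matrix: one row per laser
    channel, one column per azimuth step."""
    n_cols = len(ranges) // n_lasers
    return [[ranges[j * n_lasers + i] for j in range(n_cols)]
            for i in range(n_lasers)]

def compress_matrix(matrix):
    """Serialize the matrix row-major as little-endian uint16 and
    deflate it (a generic stand-in for a 2D image codec)."""
    flat = [v for row in matrix for v in row]
    raw = struct.pack("<%dH" % len(flat), *flat)
    return zlib.compress(raw, 9)

def decompress_matrix(blob, n_lasers=64):
    """Invert compress_matrix for a lossless round trip."""
    raw = zlib.decompress(blob)
    flat = struct.unpack("<%dH" % (len(raw) // 2), raw)
    n_cols = len(flat) // n_lasers
    return [list(flat[i * n_cols:(i + 1) * n_cols])
            for i in range(n_lasers)]
```

Because neighbouring cells of such a matrix are strongly correlated (adjacent azimuth steps usually hit nearby surfaces), a 2D codec can typically compress it far better than the raw packet stream.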
“…This way, the obstacle detection and tracking steps in the perception pipeline can be performed faster. The computational speedup also depends on the data structures adopted to store the data; we refer the readers to the work done in [44]. Updatability.…”
Section: Motivation
confidence: 99%
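A uniform voxel hash is one example of such a data structure: radius queries for obstacle detection then touch only nearby cells instead of scanning the whole cloud. A minimal sketch (the cell size and query API are illustrative, not taken from [44]):

```python
from collections import defaultdict

class VoxelGrid:
    """Hash 3D points into fixed-size voxels so that neighbourhood
    queries inspect only nearby cells rather than every point."""

    def __init__(self, cell=0.5):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, p):
        # Integer voxel coordinates of point p.
        return tuple(int(c // self.cell) for c in p)

    def insert(self, p):
        self.cells[self._key(p)].append(p)

    def neighbours(self, p, radius):
        """All stored points within `radius` of p."""
        r2 = radius * radius
        kx, ky, kz = self._key(p)
        reach = int(radius // self.cell) + 1
        out = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for q in self.cells.get((kx + dx, ky + dy, kz + dz), ()):
                        if sum((a - b) ** 2 for a, b in zip(p, q)) <= r2:
                            out.append(q)
        return out
```

Insertion is O(1) per point, and a query visits a number of cells proportional to the search radius, not to the cloud size.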
“…Such shapes can only be effectively represented using 3D models [39], as it is impossible to reflect all the necessary details of such an environment using merely 2D and 2.5D maps, which are used in many terrestrial applications [40]. Unfortunately, the precise representation of 3D models using point clouds is highly resource-consuming, especially for real-time applications [41]. We use a triangular-mesh-based map representation and localize our robot relative to this triangular mesh map because it allows a much more compact representation without significant loss of accuracy [42,43], is more effective for vehicle navigation than other map representations [44] such as Elevation Mapping [45], OctoMap [46] or VoxBlox [47], and is more convenient for texture mapping [48].…”
Section: Introduction
confidence: 99%
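Localizing against a triangular mesh map typically reduces to minimizing the distance from each scan point to the nearest mesh triangle. A sketch of the core geometric primitive, the standard closest-point-on-triangle routine (textbook computational geometry, not the cited authors' implementation):

```python
def _sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def _dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def _axpy(p, t, d):
    # p + t * d, componentwise.
    return (p[0] + t * d[0], p[1] + t * d[1], p[2] + t * d[2])

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle abc; the distance to this point
    is the per-point residual a mesh-based localizer would minimize."""
    ab, ac, ap = _sub(b, a), _sub(c, a), _sub(p, a)
    d1, d2 = _dot(ab, ap), _dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                              # vertex region a
    bp = _sub(p, b)
    d3, d4 = _dot(ab, bp), _dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                              # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return _axpy(a, d1 / (d1 - d3), ab)   # edge ab
    cp = _sub(p, c)
    d5, d6 = _dot(ab, cp), _dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                              # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return _axpy(a, d2 / (d2 - d6), ac)   # edge ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return _axpy(b, w, _sub(c, b))        # edge bc
    denom = 1.0 / (va + vb + vc)
    return _axpy(_axpy(a, vb * denom, ab), vc * denom, ac)  # face interior
```

In practice a spatial index over the triangles (e.g. a bounding-volume hierarchy) restricts which triangles each scan point is tested against.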