2021
DOI: 10.1016/j.robot.2020.103662
An integrated algorithm for ego-vehicle and obstacles state estimation for autonomous driving

Abstract: This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, a…

Cited by 33 publications (10 citation statements)
References 49 publications
“…The ego-vehicle state and the obstacles' positions and velocities are directly computed by the simulator and assumed to be known. In [53], a real implementation of an estimator for these quantities, based on a Kalman filter algorithm that combines lidar and radar data, is presented. Simulations are performed on a laptop featuring an Intel i7-3610QM CPU and 8 GB of RAM.…”
Section: Simulation Results
confidence: 99%
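As a rough illustration of the sensor-fusion idea in the excerpt above, the following is a minimal sketch of a constant-velocity Kalman filter that sequentially fuses position measurements from two sensors (e.g. lidar and radar) with different noise levels. It is not the implementation of [53]; the motion model, matrices and noise values are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the estimator of [53]).
import numpy as np

dt = 0.1  # sample time [s], assumed

# State: [x, y, vx, vy]; constant-velocity motion model
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = 0.01 * np.eye(4)                       # process noise (assumed)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # both sensors measure (x, y)
R_lidar = 0.05 * np.eye(2)                 # lidar: accurate position (assumed)
R_radar = 0.50 * np.eye(2)                 # radar: noisier position (assumed)

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: one predict step followed by sequential lidar and radar updates
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 2.0]), R_lidar)
x, P = update(x, P, np.array([1.1, 1.9]), R_radar)
```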
“…The first two are natural evolutions of the robotics approach; projection methods project the 3D point cloud onto a 2D plane and then identify obstacles on the resulting 2D grid. Two classes of solutions have been identified for this processing phase: the first is based on geometry and computer vision [13], [8], while the second leverages the increased available computational power, employing deep learning techniques to process the 2D grid with convolutional neural networks [14], [15].…”
Section: Related Work
confidence: 99%
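The projection approach described in the excerpt above can be illustrated with a short sketch: flatten a 3D point cloud onto a 2D occupancy grid and mark cells that collect enough points as obstacle candidates. Grid size, resolution, height band and threshold below are assumptions, not values from the cited works.

```python
# Minimal sketch of point-cloud projection onto a 2D occupancy grid (assumed parameters).
import numpy as np

def pointcloud_to_grid(points, resolution=0.2, grid_size=50.0,
                       z_min=0.2, z_max=2.0, min_points=3):
    """points: (N, 3) array of x, y, z coordinates in the sensor frame."""
    # Keep only points in a height band of interest (drops ground and overhangs)
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    xy = points[mask, :2]

    n_cells = int(grid_size / resolution)
    grid = np.zeros((n_cells, n_cells), dtype=np.int32)

    # Shift so the sensor sits at the grid centre, then discretise into cells
    idx = np.floor((xy + grid_size / 2.0) / resolution).astype(int)
    valid = np.all((idx >= 0) & (idx < n_cells), axis=1)
    np.add.at(grid, (idx[valid, 0], idx[valid, 1]), 1)

    return grid >= min_points   # boolean occupancy grid

# occupied = pointcloud_to_grid(np.random.rand(1000, 3) * 10.0)
```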
“…Despite using similar sensor suites, autonomous vehicles and logistics robots have some significant differences. Indoor robots can navigate the environment with low-level representations such as grid maps [7]; in contrast, planning algorithms for autonomous vehicles, which move at high speed in dynamic environments, require a higher-level representation, such as a list of 3D bounding boxes [8]. Because of these differences, classical robotics solutions, which work well with limited planar lidars, cannot be directly employed on autonomous vehicles.…”
Section: Introduction
confidence: 99%
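As a concrete example of the higher-level representation mentioned in the excerpt above, a list of 3D bounding boxes could be modelled as below. The field names and example values are illustrative assumptions, not a format defined in the cited works.

```python
# Minimal sketch of an obstacle list of 3D bounding boxes (illustrative fields).
from dataclasses import dataclass
from typing import List

@dataclass
class BoundingBox3D:
    x: float         # centre position [m]
    y: float
    z: float
    length: float    # box dimensions [m]
    width: float
    height: float
    yaw: float       # heading [rad]
    vx: float = 0.0  # estimated velocity [m/s]
    vy: float = 0.0

ObstacleList = List[BoundingBox3D]

# Example: a single tracked vehicle ahead of the ego vehicle
obstacles: ObstacleList = [BoundingBox3D(x=15.0, y=0.5, z=0.8,
                                         length=4.5, width=1.8, height=1.5,
                                         yaw=0.0, vx=8.0)]
```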
“…In particular, the adopted solution uses the road reference frame for both obstacle tracking and ego-vehicle state estimation, instead of the more traditional Cartesian reference frame in which sensors provide data. This allows an accurate estimate of the position of each obstacle and of the ego vehicle in the same reference frame used by the planning algorithm, removing the need for an intermediate conversion that would introduce approximation errors and delays [15]. A common approach in the literature to deal with the overall driving problem is to decouple it into a sequence of different tasks by means of a hierarchical structure composed of different layers [16], [17], as reported in fig.…”
Section: Software Architecture
confidence: 99%
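To illustrate the road-reference-frame idea described in the excerpt above, the sketch below converts a Cartesian position into road coordinates: longitudinal arc length s along a discretised lane centreline and signed lateral offset d from it. This is a generic Frenet-style projection under the assumption of a polyline centreline, not the paper's estimator.

```python
# Minimal sketch of a Cartesian-to-road-frame conversion (polyline centreline assumed).
import numpy as np

def cartesian_to_road(point, centerline):
    """point: (2,) x, y; centerline: (N, 2) ordered polyline of the lane axis."""
    deltas = np.diff(centerline, axis=0)             # (N-1, 2) segment vectors
    seg_len = np.linalg.norm(deltas, axis=1)         # segment lengths
    s_cum = np.concatenate(([0.0], np.cumsum(seg_len)))

    # Project the point onto every segment and keep the closest foot point
    rel = point - centerline[:-1]                    # (N-1, 2)
    t = np.clip(np.einsum('ij,ij->i', rel, deltas) / seg_len**2, 0.0, 1.0)
    proj = centerline[:-1] + t[:, None] * deltas     # candidate foot points
    i = int(np.argmin(np.linalg.norm(point - proj, axis=1)))

    s = s_cum[i] + t[i] * seg_len[i]                 # longitudinal coordinate
    normal = np.array([-deltas[i, 1], deltas[i, 0]]) / seg_len[i]
    d = float(np.dot(point - proj[i], normal))       # signed lateral offset
    return s, d

# Example: a point 3 m along and 1 m to the left of a straight centreline
# s, d = cartesian_to_road(np.array([3.0, 1.0]),
#                          np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]]))
```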