2023
DOI: 10.3390/s23052845

LiDAR-as-Camera for End-to-End Driving

Abstract: The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving commands, e.g., steering angle, as output. However, simulation studies have shown that depth-sensing can make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging due to the difficulty of obtaining good spatial an…
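The abstract describes end-to-end driving as a learned mapping from a sensor image directly to a low-level command such as a steering angle. The following is a minimal toy sketch of that idea only; the image, weights, and linear model are hypothetical stand-ins, not the paper's actual network (which would be a CNN trained on human driving data):

```python
# Toy sketch: end-to-end driving as a function image -> steering angle.
# All names and values here are illustrative, not from the paper.

def steering_from_image(image, weights, bias=0.0):
    """Flatten a 2D image and apply a linear model to get a steering angle."""
    flat = [px for row in image for px in row]
    return sum(w * x for w, x in zip(weights, flat)) + bias

# Hypothetical 2x2 "image" (e.g., normalized range or intensity values).
image = [[0.1, 0.9],
         [0.4, 0.6]]
weights = [0.5, -0.5, 0.25, -0.25]
angle = steering_from_image(image, weights)  # a single low-level command
```

In a real system the linear model would be replaced by a deep network, and the input could be a camera image, a LiDAR-generated image, or both, as the abstract discusses.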

Cited by 9 publications (12 citation statements) · References 24 publications
“…While we did not study invariability of object detection or segmentation of the same class of objects across different environments (e.g., indoors and outdoors) or environmental conditions (e.g., light rain or fog, day or night), we could assume, based on the works in the literature [ 16 ], that the data characteristics did not change significantly. Indeed, one of the key benefits of lidar-generated images is that they are not affected by environmental conditions.…”
Section: Methods (mentioning confidence: 99%)
“…A relevant recent work in the literature is [ 16 ], where the authors presented a novel dataset of lidar-generated images with the same lidar-as-a-camera sensor that we used in this paper. The work in [ 16 ] showed the potential of these images, as they remained almost invariant across seasonal changes and environmental conditions. For example, unpaved roads could be perceived in very similar ways in summer weather, snow cover, or light rain.…”
Section: Related Work (mentioning confidence: 99%)
“…In our previous work [ 29 ], we noticed that the driving speed differed between the data collection drives (including data for off-policy evaluation) and deployment (on-policy testing). In particular, we deployed steering-only models on a real vehicle at speeds 0.5 to 0.8 times the speed the human used in the given GPS location.…”
Section: Introduction (mentioning confidence: 99%)
“…Our proposed methods show robust performance in long corridor environments and survive in narrow spaces where a 180° U-turn occurs. LiDAR technology has advanced rapidly in recent years, with new sensors capable of generating image-like data in addition to point clouds [7], [8], and new solid-state LiDAR sensors that offer dense 3D point clouds with non-repetitive scan patterns while at the same time lowering the cost [9], [10]. Despite the clear advantages of solid-state LiDARs, the naturally narrow horizontal FoV leads, in a similar way to monocular pinhole camera systems, to the sensing volume being blocked by objects, or even entirely occupied by a nearby wall, resulting in an insufficient number of feature points to estimate a 6-degree-of-freedom (6-DoF) pose.…”
Section: Introduction (mentioning confidence: 99%)
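The citation above refers to LiDAR sensors that expose image-like data in addition to point clouds. A common way such "LiDAR images" are formed is by spherical projection of the point cloud into a range image. The sketch below illustrates that general idea under simplified assumptions (uniform angular binning, a hypothetical ±15° vertical field of view); it is not the specific sensor's internal pipeline:

```python
import math

def points_to_range_image(points, h=4, w=8, fov_up=15.0, fov_down=-15.0):
    """Project 3D LiDAR points (x, y, z) into an h x w range image:
    azimuth selects the column, elevation selects the row, and the
    stored pixel value is the measured range in meters."""
    fov_up_r = math.radians(fov_up)
    fov_r = math.radians(fov_up - fov_down)
    img = [[0.0] * w for _ in range(h)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        yaw = math.atan2(y, x)       # azimuth in [-pi, pi]
        pitch = math.asin(z / r)     # elevation
        u = min(w - 1, max(0, int((0.5 * (1.0 - yaw / math.pi)) * w)))
        v = min(h - 1, max(0, int(((fov_up_r - pitch) / fov_r) * h)))
        img[v][u] = r                # intensity/reflectivity layers work too
    return img

# A single point 10 m straight ahead lands near the image center.
img = points_to_range_image([(10.0, 0.0, 0.0)])
```

Because each pixel carries a direct range (and, on real sensors, intensity or near-IR) measurement rather than reflected sunlight, such images remain largely invariant to lighting and weather, which is the robustness property the citing works emphasize.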