2022
DOI: 10.1109/tits.2020.3013234
Multimodal End-to-End Autonomous Driving

Cited by 149 publications
(71 citation statements)
References 59 publications
“…3, the textual representation (i.e., a word embedding) is very sparse compared to the image representation, which makes it challenging to combine these two different representations into a unified model. As another example, a car driving autonomously likely carries LiDAR and other embedded sensors (e.g., depth sensors) [81] to perceive its surroundings. Here, poor weather conditions can degrade visual perception of the environment.…”
Section: Data Acquisition and Sampling
confidence: 99%
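The statement above contrasts a sparse word embedding with a dense image feature and notes the difficulty of combining them in one model. A minimal numpy sketch of one common approach, projecting each modality into a shared space and concatenating: this is an illustrative assumption, not the method of the cited paper, and the dimensions and random projection matrices are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_modalities(text_emb, img_feat, dim=64, rng=rng):
    """Project a sparse text embedding and a dense image feature
    into a shared dim-sized space, then concatenate them."""
    W_t = rng.standard_normal((text_emb.size, dim)) / np.sqrt(text_emb.size)
    W_i = rng.standard_normal((img_feat.size, dim)) / np.sqrt(img_feat.size)
    t = text_emb @ W_t               # (dim,) dense projection of the sparse text
    i = img_feat @ W_i               # (dim,) projection of the image feature
    return np.concatenate([t, i])    # (2*dim,) unified representation

# A mostly-zero ("sparse") word embedding vs. a dense CNN-style image feature.
text_emb = np.zeros(10_000)
text_emb[[7, 4096, 9999]] = 1.0      # only 3 active dimensions
img_feat = rng.standard_normal(512)

fused = fuse_modalities(text_emb, img_feat)
print(fused.shape)  # (128,)
```

In practice the projections would be learned jointly, but the shape bookkeeping (sparse high-dimensional text mapped down to the same width as the image feature) is the crux the quote points at.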
“…In autonomous driving, end-to-end trained controllers learn from raw perception data, as opposed to maps [15] or other object representations [16]-[18]. Previous works have explored learning with expert information for lane following [1], [2], [19], [20], full point-to-point navigation [3], [8], [21], and shared human-robot control [22], [23], as well as in the context of RL by allowing the vehicle to repeatedly drive off the road [4]. However, when trained using state-of-the-art model-based simulation engines, these techniques cannot be directly deployed in real-world driving conditions.…”
Section: Related Work
confidence: 99%
“…However, they used the World Rally Championship 6 (WRC6) environment for their experiments, which is not intended for research use and is not open source; thus, it is unsuitable as a generalization benchmark. Using multimodal inputs can improve end-to-end driving [23]. They trained their end-to-end driving agent with RGB images, depth images, and vehicle measurements.…”
Section: Background A: End-to-End Driving
confidence: 99%
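The last statement describes an end-to-end agent trained on RGB images, depth images, and measurements. A toy numpy sketch of an early-fusion forward pass for such an agent; the layer sizes, the single linear policy, and the control outputs are illustrative assumptions, not the architecture of the cited work.

```python
import numpy as np

rng = np.random.default_rng(42)

def drive_step(rgb, depth, speed, rng=rng):
    """Early-fusion forward pass: flatten RGB and depth frames, append
    scalar measurements, and map through a tiny linear policy to controls."""
    x = np.concatenate([rgb.ravel(), depth.ravel(), [speed]])
    W = rng.standard_normal((x.size, 2)) / np.sqrt(x.size)
    steer, throttle = np.tanh(x @ W)   # tanh bounds both controls to [-1, 1]
    return float(steer), float(throttle)

# Toy 8x8 camera frames plus one measurement (speed in m/s).
rgb = rng.random((8, 8, 3))
depth = rng.random((8, 8))
steer, throttle = drive_step(rgb, depth, speed=5.0)
print(-1.0 <= steer <= 1.0 and -1.0 <= throttle <= 1.0)  # True
```

Real systems replace the random linear map with trained convolutional encoders per modality, but the input plumbing (all three modalities fused into one vector before the policy) is the "multimodal end-to-end" idea the quote refers to.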