2021
DOI: 10.3390/s21238152
Vehicle Trajectory Prediction with Lane Stream Attention-Based LSTMs and Road Geometry Linearization

Abstract: It is essential for autonomous vehicles at level 3 or higher to have the ability to predict the trajectories of surrounding vehicles in order to plan and drive safely and effectively in complex traffic situations. However, predicting the future behavior of vehicles is a challenging problem because traffic vehicles have different drivers with different driving tendencies and intentions, and they interact with each other. This paper presents a Long Short-Term Memory (LSTM) encoder–decoder model that …

Cited by 11 publications (5 citation statements)
References 61 publications
“…LSTM addresses the vanishing-gradient problem of RNNs and can capture long-term dependencies in sequential data by incorporating a memory state and a gating mechanism that updates what the cell state retains. Each LSTM unit consists of three gates, the forget gate, the input gate, and the output gate, which selectively control the flow of information into the unit [52]. Figure 4a depicts the structure of the LSTM.…”
Section: BiLSTM
confidence: 99%
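The gate mechanism described in that excerpt can be sketched in a few lines of NumPy. This is a minimal illustration of a single LSTM step, not the cited paper's model; all shapes, weight initializations, and names here are arbitrary assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the forget, input, output,
    and candidate transforms (4 * hidden rows)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[:hidden])            # forget gate: what to keep of c_prev
    i = sigmoid(z[hidden:2*hidden])    # input gate: what new info to write
    o = sigmoid(z[2*hidden:3*hidden])  # output gate: what to expose
    g = np.tanh(z[3*hidden:])          # candidate cell update
    c = f * c_prev + i * g             # updated memory (cell) state
    h = o * np.tanh(c)                 # hidden output
    return h, c

rng = np.random.default_rng(0)
x_dim, h_dim = 3, 4
W = rng.normal(size=(4 * h_dim, x_dim))
U = rng.normal(size=(4 * h_dim, h_dim))
b = np.zeros(4 * h_dim)
h, c = np.zeros(h_dim), np.zeros(h_dim)
for t in range(5):                     # unroll over a short sequence
    h, c = lstm_step(rng.normal(size=x_dim), h, c, W, U, b)
print(h.shape)  # (4,)
```

Because the output gate and tanh both bound their outputs, the hidden state stays in (-1, 1) regardless of how long the sequence is, while the cell state c carries information across steps unsquashed.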
“…To prevent overfitting during RNN training, a larger data set was used and dropout layers were added between the input layer and the hidden layer and between the hidden layer and the output layer [49,50], randomly dropping some neurons during training. This keeps the model from depending too heavily on any single neuron, thereby avoiding overfitting.…”
Section: Training and Testing
confidence: 99%
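The dropout scheme that excerpt describes is commonly implemented as inverted dropout. The sketch below is a generic illustration with arbitrary layer sizes and weights, not the cited model:

```python
import numpy as np

def dropout(a, p, rng, train=True):
    """Inverted dropout: zero each neuron with probability p during training
    and rescale survivors by 1/(1-p) so the expected activation is unchanged."""
    if not train or p == 0.0:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

rng = np.random.default_rng(42)
relu = lambda z: np.maximum(z, 0)

x = np.ones((8, 16))                         # a batch of 8 inputs
W1 = rng.normal(size=(16, 32))               # input -> hidden weights
W2 = rng.normal(size=(32, 4))                # hidden -> output weights

h = dropout(relu(x @ W1), p=0.5, rng=rng)    # dropout after the input layer
y = dropout(h @ W2, p=0.5, rng=rng)          # dropout before the output layer
```

At inference time the same call with train=False is an identity, so no rescaling is needed when the network is deployed.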
“…As shown in Figure 4, the social tensor is added to the convolution module to further improve the extraction of interaction information and to record the subject vehicle. Once the convolution operation is performed, features extracted from grid cells far from the subject can be regarded as observation information, while features extracted from cells close to the subject can be regarded as interaction information [8]. In addition, avg-pooling is added to extract and fuse the overall relationship between the subject and the adjacent vehicles, and finally an FC layer fuses the above information.…”
Section: Convolution Module Design
confidence: 99%
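The pipeline in that excerpt (social tensor → convolution → avg-pooling → FC fusion) can be sketched as below. All shapes are hypothetical assumptions: a 13×3 occupancy grid around the subject vehicle with 8-channel cell encodings, a single 3×3 kernel, and a scalar FC layer stand in for whatever the cited model actually uses.

```python
import numpy as np

# Hypothetical social tensor: a 13x3 occupancy grid around the subject
# vehicle, each cell holding an 8-dim encoding of the vehicle occupying it.
H, Wg, C = 13, 3, 8
social_tensor = np.zeros((H, Wg, C))
social_tensor[6, 1] = 1.0   # subject vehicle recorded at the grid centre
social_tensor[4, 0] = 0.5   # a nearby neighbour

def conv2d_valid(x, k):
    """Naive valid 2-D convolution over the grid, summed across channels."""
    kh, kw, _ = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw, :] * k)
    return out

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3, C))
feat = conv2d_valid(social_tensor, kernel)   # observation/interaction features
pooled = feat.mean()                         # avg-pooling over the whole grid
fc_w = rng.normal(size=(1,))
fused = fc_w * pooled                        # FC layer fusing the pooled summary
print(feat.shape)  # (11, 1)
```

Because the kernel slides over the whole grid, responses at cells far from the subject's centre cell summarize observed neighbours, while responses near the centre mix the subject's own encoding with its neighbours', matching the observation-vs-interaction distinction the excerpt draws.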