2021 IEEE International Intelligent Transportation Systems Conference (ITSC) 2021
DOI: 10.1109/itsc48978.2021.9564883
Towards Data-Driven GRU based ETA Prediction Approach for Vessels on both Inland Natural and Artificial Waterways

Cited by 6 publications (2 citation statements) | References 19 publications
“…Collectively, these studies are significant and provide valuable references for the application of data mining to predicting ship arrival times at specific ports. Noman et al. [46] investigated the use of Gradient Boosting Decision Trees (GBDT), Multi-Layer Perceptron neural networks (MLP), and Gated Recurrent Unit neural networks (GRU) for predicting vessel ETA on inland waterways. The study trained on historical AIS data and compared the accuracy of these methods.…”
Section: Waterway Application
confidence: 99%
“…Because the wide-and-deep model does not suit sequential data, some researchers propose variants of the recurrent neural network as the solution. For example, the LSTM [1] is a widely used RNN model for natural-language sequence-to-sequence translation, with forget, input, and output gates; the model is described in Figure 8. The GRU [10] (Gated Recurrent Unit) is a streamlined variant of the LSTM that combines the forget and input gates into a single update gate (see Figure 9 for the gate micro-structure); this variant is preferred over the LSTM for smaller datasets and has been shown to perform better on them. RNNs are inherently poor at exploiting the parallel computing capability of modern GPUs and accelerator hardware, since the input of state hn relies on the outputs of the previous states (h1, h2, …, hn-1).…”
Section: Figure 6 Google Wide and Deep Learning For Recommendation Sy...
confidence: 99%
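The GRU gating described in the statement above can be sketched as a single time step in NumPy. This is a minimal illustrative sketch, not code from the cited paper: the weights here are random placeholders, and the function names are hypothetical. It shows how the update gate z plays the combined role of the LSTM's forget and input gates by interpolating between the old hidden state and the candidate state.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU time step. W, U, b hold the z/r/h parameter triples."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])              # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])              # reset gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])  # candidate state
    # z interpolates old state and candidate: the merged forget/input gate.
    return (1.0 - z) * h + z * h_tilde

# Random placeholder parameters for a 4-input, 8-unit cell.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = {k: rng.normal(size=(n_hid, n_in)) * 0.1 for k in "zrh"}
U = {k: rng.normal(size=(n_hid, n_hid)) * 0.1 for k in "zrh"}
b = {k: np.zeros(n_hid) for k in "zrh"}

# Run a short input sequence; note each step depends on the previous h,
# which is exactly the sequential dependency that limits parallelism.
h = np.zeros(n_hid)
for t in range(5):
    h = gru_step(rng.normal(size=n_in), h, W, U, b)
print(h.shape)  # (8,)
```

The final loop makes the parallelism point concrete: h at step t cannot be computed before h at step t-1, unlike a feed-forward pass over all inputs at once.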