2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7966138
State initialization for recurrent neural network modeling of time-series data

Cited by 22 publications (12 citation statements) | References 17 publications
“…In other words, at time t = 0 the recurrent layer can use a valid input (x_0) and a valid previous hidden state (h_{-1}) that should reduce the reconstruction error of the next frames. We train the additional parameters of this sub-network jointly with the rest of the system, without any modification to the loss function, as opposed to Mohajerin and Waslander [2017], who design a more complex hidden state initialization procedure that requires a separate loss function and a sub-sequence of timesteps as inputs for the state initializer.…”
Section: Hidden State Initializer
confidence: 99%
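The learned-initializer idea quoted above can be sketched in a few lines: a small sub-network maps the first frame x_0 to a "previous" hidden state h_{-1}, and its parameters are updated jointly with the recurrent cell. This is a minimal illustrative sketch with a vanilla tanh RNN cell; the names (`W_init`, `rnn_forward`), dimensions, and cell choice are assumptions, not the cited authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8

# Hypothetical initializer sub-network: maps the first frame x_0
# to the "previous" hidden state h_{-1} (trained jointly, no extra loss).
W_init = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
b_init = np.zeros(hidden_dim)

# Vanilla RNN cell parameters.
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

def rnn_forward(xs):
    """Run the RNN over a sequence, seeding it with the learned h_{-1}."""
    h = np.tanh(W_init @ xs[0] + b_init)  # learned initial state from x_0
    states = []
    for x in xs:                          # t = 0, 1, ...
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

xs = rng.normal(size=(5, input_dim))      # toy 5-step sequence
states = rnn_forward(xs)
print(states.shape)                       # (5, 8)
```

Because `W_init` and `b_init` sit inside the same forward graph, ordinary backpropagation through the sequence loss trains them alongside the cell weights, which is what "without any modification to the loss function" amounts to.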
“…The network initializes the hidden and cell states to zero for the first cell. Depending on the learning task, other initialization mechanisms can be adopted for improving the learning performance or accelerating the training process [21].…”
Section: Network Architecture
confidence: 99%
“…During the first phase, where k = 1, ..., τ_I, the observed OGMs are given to the model. The first phase is mainly intended for RNN state initialization [16], and therefore we will refer to it as the initialization phase, or init-phase for short. The second phase, namely the prediction phase, starts at k = τ_I + 1 and is intended for multi-step prediction, as the input OGMs are blank.…”
Section: OGM Prediction in Autonomous Driving
confidence: 99%
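The two-phase rollout described in that statement can be sketched as follows: the recurrent state is warmed up on observed frames during the init-phase, then rolled forward on blank inputs during the prediction phase. This is an illustrative toy with random vectors standing in for occupancy grid maps; the phase lengths, dimensions, and single-layer cell are assumptions, not the cited paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
input_dim, hidden_dim = 4, 8
tau_init, tau_pred = 3, 4  # init-phase and prediction-phase lengths (illustrative)

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_ho = rng.normal(scale=0.1, size=(input_dim, hidden_dim))

observed = rng.normal(size=(tau_init, input_dim))  # toy stand-ins for observed OGMs
h = np.zeros(hidden_dim)

# Init-phase (k = 1..tau_init): feed observed frames to warm up the state.
for x in observed:
    h = np.tanh(W_xh @ x + W_hh @ h)

# Prediction phase (k = tau_init+1 ..): inputs are blank; roll out predictions.
blank = np.zeros(input_dim)
preds = []
for _ in range(tau_pred):
    h = np.tanh(W_xh @ blank + W_hh @ h)
    preds.append(W_ho @ h)
preds = np.stack(preds)
print(preds.shape)  # (4, 4)
```

The design point is that no separate initializer network is needed here: the init-phase itself drives the hidden state from zero to an informative value before multi-step prediction begins.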