2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00572

Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN

Abstract: Recurrent neural networks (RNNs) are known to be difficult to train due to the gradient vanishing and exploding problems, and thus it is difficult to learn long-term patterns and construct deep networks. To address these problems, this paper proposes a new type of RNN with the recurrent connection formulated as a Hadamard product, referred to as the independently recurrent neural network (IndRNN), where neurons in the same layer are independent of each other and connected across layers. The gradient vanishing and exploding…
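
The key change described in the abstract is the recurrent connection: instead of a full recurrent weight matrix, each neuron keeps a single recurrent weight that is applied element-wise (Hadamard product) to its own previous state, i.e. h_t = act(W x_t + u ⊙ h_{t-1} + b). The sketch below illustrates this recurrence in plain NumPy; the shapes, ReLU activation, and initialization are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a single IndRNN layer, following the recurrence described
# in the abstract: h_t = act(W x_t + u * h_{t-1} + b), where * is the
# element-wise (Hadamard) product over a per-neuron recurrent weight vector u.
import numpy as np

def indrnn_layer(x_seq, W, u, b, act=lambda z: np.maximum(z, 0.0)):
    """x_seq: (T, input_dim); W: (hidden_dim, input_dim); u, b: (hidden_dim,).
    Returns the hidden states, shape (T, hidden_dim)."""
    h = np.zeros(u.shape[0])
    hs = []
    for x_t in x_seq:
        # Each neuron only sees its own previous state (u * h is element-wise),
        # so neurons in the same layer are independent of each other.
        h = act(W @ x_t + u * h + b)
        hs.append(h)
    return np.stack(hs)

# Example usage with arbitrary, illustrative sizes.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))                 # 50 time steps, 8 input features
W = rng.normal(scale=0.1, size=(16, 8))      # input-to-hidden weights
u = rng.uniform(-1.0, 1.0, size=16)          # one recurrent weight per neuron
b = np.zeros(16)
print(indrnn_layer(x, W, u, b).shape)        # (50, 16)
```

Because the recurrence is element-wise, each hidden unit's per-step gradient depends only on its own recurrent weight, which is what allows gradient growth to be controlled over long sequences and layers to be stacked into deeper networks.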

Cited by 699 publications (473 citation statements). References: 40 publications.
“…We aim to obtain z_{t+1}, which can generate the flow information at the next time step, by advancing z_t in a low dimension. Here, IndyLSTM [34,35], a kind of recurrent neural network (RNN), is used to update z_t to z_{t+1}. A similar approach that applies RNNs to the time advancement in a low dimension was used for the prediction of unsteady flows [36,37,14].…”
Section: RNN-GAN for Time-Varying Flow Generation
Citation type: mentioning; confidence: 99%
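
The excerpt above describes advancing a low-dimensional latent code z_t to z_{t+1} with a recurrent network (IndyLSTM in the cited work) before decoding it into the next flow field. The sketch below is a minimal, hypothetical illustration of that time-advancement pattern; it uses a standard torch.nn.LSTMCell as a stand-in for IndyLSTM, and the latent and hidden sizes are arbitrary assumptions rather than the cited setup.

```python
# Sketch: roll a latent code forward in time with a recurrent cell.
# LSTMCell stands in for IndyLSTM; sizes are illustrative only.
import torch
import torch.nn as nn

class LatentAdvancer(nn.Module):
    def __init__(self, latent_dim=32, hidden_dim=64):
        super().__init__()
        self.cell = nn.LSTMCell(latent_dim, hidden_dim)     # stand-in for IndyLSTM
        self.to_latent = nn.Linear(hidden_dim, latent_dim)  # map hidden state back to z

    def forward(self, z_t, state=None):
        h, c = self.cell(z_t, state)
        z_next = self.to_latent(h)       # predicted z_{t+1}
        return z_next, (h, c)

# Advance a batch of latent codes autoregressively for a few steps.
model = LatentAdvancer()
z = torch.randn(4, 32)                   # batch of 4 initial latent codes z_0
state = None
for _ in range(10):
    z, state = model(z, state)           # z now approximates the next time step
print(z.shape)                           # torch.Size([4, 32])
```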
“…For protein representation, we have chosen SSE as the resolution for interpretability due to the known sequence-size limitation of RNN models (Li et al., 2018). One can easily increase the resolution to residue level by simply feeding to our models amino-acid sequences (preferentially of length below 1,000) instead of SPS sequences, but needs to be aware of the much increased computational burden and much worse convergence when training RNNs.…”
Section: Results
Citation type: mentioning; confidence: 99%
“…All these are achieved with a much smaller alphabet of size 76, which leads to around 100-times more compact representation of a protein sequence than the baseline. In addition, the SPS sequences are much shorter than amino-acid sequences and prevent convergence issues when training RNNs and LSTMs for sequences longer than 1,000 (Li et al., 2018).…”
Section: Protein Data Representation
Citation type: mentioning; confidence: 99%
“…However, this naïve method may suffer from long-term memory loss. Li et al. [27] showed that LSTM models could only memorize less than 1,000 steps. Our experiments shown in Sec.…”
Section: Pooling on Feature
Citation type: mentioning; confidence: 99%
“…4) were calculated on 1.7M mixtures synthesized with the setup described in Sec. 3.3. Memory retaining: Li et al. [27] showed that LSTM can only keep a mid-range memory (about 500-1,000 time steps). To check if LSTM models have a similar memory forgetting issue on AEC, we can look at the red curves of 'LastFrame' in Fig.…”
Section: Dynamics of LSTM Models on AEC
Citation type: mentioning; confidence: 99%