1999
DOI: 10.1023/a:1008320413168


Cited by 26 publications (2 citation statements)
References 55 publications
“…RNNs can be modified to generate adaptive neural network structures like long short-term memory (LSTM), which enhances the performance of the RNNs by storing information for long or short durations of time and preventing the problems of vanishing gradients during parameter estimation. While existing adaptive NN models such as adaptive bidirectional associative memory (ABAM) and transversal/recursive filters have found applications in signal processing and communication, they may be inadequate for many chemical engineering applications where learning must be accomplished with data that may be noisy, limited, and time-varying.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
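
The mechanism this statement invokes is easy to make concrete. Below is a minimal sketch of a single LSTM cell step in NumPy; the stacked weight layout and all variable names are illustrative assumptions, not details from the cited work. The key point is the additive cell-state update, which lets gradient information flow across many time steps without repeated squashing, mitigating vanishing gradients.

import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates decide what to forget, what to store, what to emit.

    x: input (n_in,); h_prev, c_prev: previous hidden/cell state (n_hid,)
    W: stacked gate weights (4*n_hid, n_in + n_hid); b: bias (4*n_hid,)
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b      # all four gate pre-activations at once
    f = 1.0 / (1.0 + np.exp(-z[0*n:1*n]))        # forget gate: what to discard from the cell
    i = 1.0 / (1.0 + np.exp(-z[1*n:2*n]))        # input gate: what new information to store
    o = 1.0 / (1.0 + np.exp(-z[2*n:3*n]))        # output gate: what to expose
    g = np.tanh(z[3*n:4*n])                      # candidate cell update
    c = f * c_prev + i * g                       # additive update: long- or short-duration storage
    h = o * np.tanh(c)                           # new hidden state
    return h, c

# Toy usage over a short random sequence (sizes are arbitrary).
n_in, n_hid = 3, 5
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):
    h, c = lstm_step(x, h, c, W, b)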
“…Related approaches have been proposed in the literature (see Kireev and Schädler and Wysotzki); however, no successful methods for training these networks have been proposed. In this work, we propose to employ the well-known back-propagation procedure in combination with stochastic gradient descent to train the proposed network structure in order to perform supervised learning on data sets comprised of molecules and corresponding output properties.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
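
For readers unfamiliar with the training procedure this statement describes, here is a minimal sketch of supervised learning by back-propagation with stochastic gradient descent on a toy one-hidden-layer regressor. The data, layer sizes, and learning rate are illustrative assumptions, not details of the proposed network; the point is the pattern of forward pass, chain-rule backward pass, and per-mini-batch parameter update.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))               # toy inputs
y = np.sin(X.sum(axis=1, keepdims=True))    # toy regression targets

W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05                                   # learning rate

for epoch in range(200):
    for k in rng.permutation(len(X)).reshape(-1, 32):   # shuffled mini-batches of 32
        xb, yb = X[k], y[k]
        # forward pass
        a = np.tanh(xb @ W1 + b1)
        pred = a @ W2 + b2
        # backward pass: chain rule for mean squared error
        d_pred = 2.0 * (pred - yb) / len(xb)
        dW2 = a.T @ d_pred; db2 = d_pred.sum(axis=0)
        d_a = (d_pred @ W2.T) * (1.0 - a**2)            # tanh derivative
        dW1 = xb.T @ d_a; db1 = d_a.sum(axis=0)
        # stochastic gradient descent update
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1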