Modeling word perception using the Elman network
2008
DOI: 10.1016/j.neucom.2008.04.030

Cited by 120 publications (58 citation statements)
References 16 publications
“…An auto-encoder is an artificial neural network used for learning efficient codings [4], which aims to learn a compressed representation (encoding) from a data set. It is a powerful tool for dimension reduction, and a basic tool in deep learning.…”
Section: Auto-encoder
confidence: 99%
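As a concrete illustration of the auto-encoder described in the statement above, here is a minimal sketch in plain NumPy. The 8-4-8 layer sizes, the toy data, and the training settings are assumptions for illustration only; they are not taken from the cited paper.

```python
# Minimal auto-encoder sketch (hypothetical sizes and toy data, not from
# the cited paper): an 8-4-8 network trained to reproduce its own input,
# so the 4-unit hidden layer learns a compressed encoding.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 8))              # toy data set: 100 samples, 8 features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.1, (8, 4))     # encoder weights (8 -> 4)
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.1, (4, 8))     # decoder weights (4 -> 8)
b2 = np.zeros(8)

lr = 0.5
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)          # encoding (compressed representation)
    Y = sigmoid(H @ W2 + b2)          # reconstruction of the input
    dY = (Y - X) * Y * (1.0 - Y)      # backprop through output sigmoid
    dH = (dY @ W2.T) * H * (1.0 - H)  # backprop through hidden sigmoid
    W2 -= lr * (H.T @ dY) / len(X)
    b2 -= lr * dY.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X)
    b1 -= lr * dH.mean(axis=0)

print("reconstruction MSE:", float(((Y - X) ** 2).mean()))
```

With a linear hidden layer this setup recovers a subspace close to PCA, which is why the statement describes auto-encoders as a dimension-reduction tool.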
“…where $f^{(1)}(\cdot)$, $f^{(2)}(\cdot)$ and $f^{(3)}(\cdot)$ are the activation functions of the first hidden layer, the second hidden layer, and the output layer, respectively (all the activation functions are of sigmoid type). $w$…”
Section: Context Layer
confidence: 99%
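The fragment above is cut off at "$w$…", so the quoted paper's exact equations cannot be recovered from it. As a hedged sketch only, feed-forward equations in this notation would typically take the form (the summation index and absence of bias terms here are assumptions):

$$
v^{(2)}_j(k) = f^{(2)}\!\Big(\sum_h w^{(2)}_{jh}(k)\, v^{(1)}_h(k)\Big),
\qquad
v^{(3)}_j(k) = f^{(3)}\!\Big(\sum_h w^{(3)}_{jh}(k)\, v^{(2)}_h(k)\Big),
$$

with $f^{(2)}$ and $f^{(3)}$ sigmoid functions, consistent with the symbols $w^{(3)}_{jh}(k)$ and $v^{(2)}_j(k)$ used in the next statement.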
“…For example, $w^{(3)}_{jh}(k)$ is the weight of the connection between the $j$th neuron in the output layer and the $h$th output vector in the second hidden layer. $v^{(2)}_j(k)$ and $v^{(3)}_j(k)$ are the activity levels of the second hidden layer and the output layer, respectively. $\beta_j(k)$ and $\alpha_j(k)$ are the reversal parameter and the quantum phase of the $j$th qubit neuron in the first hidden layer, respectively.…”
Section: Context Layer
confidence: 99%
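The statement names a reversal parameter $\beta_j(k)$ and a quantum phase $\alpha_j(k)$ but does not give the neuron equation. In qubit-neuron models of the Kouda-Matsui style, which this notation resembles, the first-layer activity is commonly written in a form like the following; this specific expression is an assumption for illustration, not a quote from the paper:

$$
v^{(1)}_j(k) \;=\; \sin^{2}\!\Big(\frac{\pi}{2}\,g\big(\beta_j(k)\big) \;-\; \alpha_j(k)\Big),
$$

where $g(\cdot)$ is a sigmoid: the reversal parameter $\beta_j(k)$ rotates the qubit state toward its complement, while the phase $\alpha_j(k)$ aggregates the phase-encoded inputs to the neuron.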