2014
DOI: 10.1109/tnnls.2013.2281217

Local Stability Analysis of Discrete-Time, Continuous-State, Complex-Valued Recurrent Neural Networks With Inner State Feedback

Abstract: Recurrent neural networks (RNNs) are well known for their capability to minimize suitable cost functions without the need for a training phase. This is possible because they can be Lyapunov stable. Although global stability analysis has attracted a lot of interest, local stability is desirable for specific applications. In this brief, we investigate the local asymptotical stability of two classes of discrete-time, continuous-state, complex-valued RNNs with parallel update and inner state feedback. We show …
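To make the setting concrete, here is a minimal sketch of a discrete-time, continuous-state, complex-valued RNN with a parallel (synchronous) update and an inner state feedback term. The model structure, the split-tanh activation, and all names are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def split_tanh(u):
    # Split activation: tanh applied to real and imaginary parts separately,
    # one common choice for complex-valued networks (an assumption here).
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

def rnn_step(u, W, b, alpha=0.5):
    # Parallel update with inner state feedback: the new state mixes the
    # previous inner state u with the weighted neuron outputs.
    # alpha in (0, 1] is a hypothetical feedback weight.
    return (1.0 - alpha) * u + alpha * (W @ split_tanh(u) + b)

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = 0.5 * (A + A.conj().T)     # Hermitian weight matrix ...
np.fill_diagonal(W, 0.0)       # ... with zero diagonal (typical assumptions)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

u = np.zeros(n, dtype=complex)
for _ in range(200):           # iterate; for suitable W and alpha the state
    u = rnn_step(u, W, b)      # settles into a locally stable fixed point
print(np.round(u, 3))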


Cited by 19 publications (13 citation statements)
References 24 publications
“…11. If we consider only the equalization loop in Figure 19, we notice that it describes exactly the dynamical behavior of discrete-time recurrent neural networks [19, 25, 33, 54–56]: u[l+1] = [I − …”
Section: Joint Equalization and Decoding
confidence: 99%
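The update equation in the quoted excerpt is truncated, and the matrix inside the brackets is not recoverable from this excerpt. For orientation only, a generic discrete-time RNN update of the kind such equalization loops instantiate can be written as follows; this specific form is an assumption:

\[
  u[l+1] = (I - \Delta)\, u[l] + \Delta \big( W\, \varphi(u[l]) + b \big),
\]

where \( u[l] \) is the inner state at step \( l \), \( \varphi \) the activation function, \( W \) the weight matrix, \( b \) a bias, and \( \Delta \) a diagonal feedback/step-size matrix.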
“…3 shows a continuous-time RNN. The dynamical behavior is given by the following pair of state space equations … time cases [9].…”
Section: Vector-valued Transmission Model
confidence: 99%
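The pair of state-space equations referenced in this excerpt is elided. A commonly cited continuous-time RNN (Hopfield-type) form, offered here as an assumption rather than the excerpt's exact equations, is:

\[
  \tau\,\frac{\mathrm{d}u(t)}{\mathrm{d}t} = -u(t) + W\, v(t) + b, \qquad v(t) = \varphi(u(t)),
\]

where \( u(t) \) is the inner state vector, \( v(t) \) the neuron output, \( \varphi \) the activation function, and \( \tau \) a time constant.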
“…Reviewing the conditions which an activation function in an RNN must fulfill to guarantee Lyapunov stability [8], [9], and comparing them with the properties listed above, we notice that the optimum activation function for vector equalization based on RNNs fulfills these conditions. The importance of this approximation for analog implementation lies in the fact that the tanh can easily be realized in analog hardware by differential amplifiers.…”
Section: A. Properties of the Optimum Activation Function
confidence: 99%
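The properties invoked in this excerpt are easy to verify numerically. A small illustrative sketch checking that tanh is bounded, odd, and strictly increasing — the kind of conditions typically required of an activation function for Lyapunov stability:

import numpy as np

x = np.linspace(-6.0, 6.0, 1001)
y = np.tanh(x)
dy = 1.0 - y**2                        # derivative of tanh

assert np.all(np.abs(y) < 1.0)         # bounded: |tanh(x)| < 1
assert np.allclose(np.tanh(-x), -y)    # odd symmetry: tanh(-x) = -tanh(x)
assert np.all(dy > 0.0)                # strictly increasing: tanh'(x) > 0
print("tanh is bounded, odd, and strictly monotonic on the sampled range")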
“…Recently, neural networks have been implemented electronically and used in real-time applications. However, in electronic implementations of neural networks, essential parameters such as the release rate of neurons, the connection weights between neurons, and the transmission delays may deviate from their nominal values because of the tolerances of the electronic components used in the design (Aizenberg, Paliy, Zurada, & Astola, 2008; Hu & Wang, 2012; Mostafa, Teich, & Lindner, 2013; Wang, Xue, Fei, & Li, 2013; Wu, Shi, Su, & Chu, 2011). As is well known, time delays commonly arise in neural networks because of network traffic congestion and the finite speed of information transmission.…”
Section: Introduction
confidence: 99%
“…Over the past decades, some work has been done to analyze the dynamic behavior of the equilibrium points of various CVNNs. In Mostafa et al. (2013), a local stability analysis of discrete-time, continuous-state, complex-valued recurrent neural networks with inner state feedback was presented. In Zhou and Song (2013), the authors studied the boundedness and complete stability of complex-valued neural networks with time delay by using free weighting matrices.…”
Section: Introduction
confidence: 99%