“…Up until now, many researchers have studied the stability of stochastic delayed recurrent neural networks (1.1) in depth, and the mainstream results fall into two categories. The first is traditional stability, such as asymptotic stability [4-6], exponential stability [7-16], and almost sure exponential stability [17,18]; the second is nontraditional stability, namely input-to-state stability [19-29], in which the states of stochastic neural networks do not converge to the equilibrium point as time goes to infinity. Unfortunately, both types of stability results for stochastic delayed recurrent neural networks (1.1) rely on the assumption that the activation functions satisfy a uniform Lipschitz condition, which is somewhat restrictive.…”