Abstract: We study how individual memory items are stored, assuming that situations given in the environment can be represented in the form of synaptic-like couplings in recurrent neural networks. Previous numerical investigations have shown that specific architectures based on suppression or max units can successfully learn static or dynamic stimuli (situations). Here we provide a theoretical basis for the convergence of the learning process and for the network response to a novel stimulus. We show that, besides learning "s…
“…As we shall show further the network can learn different static and time-evolving situations by an appropriate adjustment of the coupling matrix W. Learning rules can be described as teacher forcing based on the classical delta rule using the mismatch between the internal and external inputs (Kühn et al., 2007; Makarov et al., 2008):…”
Section: Universal Network Model
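The teacher-forcing delta rule is elided in the quote above, before the equation. The following is therefore only a generic sketch of such a mismatch-driven update; the learning rate, the outer-product form, and the linear internal input W @ xi are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def delta_rule_step(W, xi, eta=0.1):
    """One teacher-forced delta-rule update (illustrative sketch).

    The internal input is modeled as W @ xi and compared with the
    external input xi; the mismatch drives a Hebbian-like correction.
    """
    mismatch = xi - W @ xi                  # external minus internal input
    return W + eta * np.outer(mismatch, xi)

# Repeated presentation of a static stimulus drives the mismatch to zero.
xi = np.array([0.6, 0.0, 0.8])              # unit-norm stimulus (assumed data)
W = np.zeros((3, 3))
for _ in range(200):
    W = delta_rule_step(W, xi)
print(np.linalg.norm(xi - W @ xi))          # mismatch shrinks toward 0
```

With a unit-norm stimulus the mismatch contracts by the factor (1 − eta) on every presentation, which is the mechanism the convergence analysis below relies on.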
“…In (Makarov et al., 2008) we have shown that such a dynamic situation can be learned by using the following learning rule:…”
Section: Learning Phase: Universal Structure Of W∞ During Learning
“…Thus the minimal training sequence of vectors consists of four time steps: ξ(1),...,ξ(4). Then the limit matrix is given by (Makarov et al., 2008; Villacorta-Atienza et al., 2010):…”
Section: Learning Phase: Universal Structure Of W∞ During Learning
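Both the dynamic learning rule and the limit matrix W∞ are elided in the quotes above. As a generic illustration of learning a time-evolving situation ξ(1),...,ξ(4), one can train a coupling matrix by teacher-forced next-step prediction; the cycling scheme, the rate eta, and the particular update form here are assumptions of this sketch, not the cited construction.

```python
import numpy as np

# Generic sketch: learn a four-step sequence xi(1)..xi(4) by teacher-forced
# next-step prediction. The input is clamped to the true xi(k) at every step
# (teacher forcing), and the mismatch with the next stimulus drives the update.
rng = np.random.default_rng(1)
seq = [v / np.linalg.norm(v) for v in rng.standard_normal((4, 4))]  # assumed data
W = np.zeros((4, 4))
eta = 0.2
for _ in range(2000):                       # cycle through the training sequence
    for k in range(3):
        mismatch = seq[k + 1] - W @ seq[k]  # next stimulus minus prediction
        W += eta * np.outer(mismatch, seq[k])

# After training, W maps each xi(k) approximately onto xi(k+1).
residual = max(np.linalg.norm(seq[k + 1] - W @ seq[k]) for k in range(3))
print(residual)
```

Because three generic vectors in a four-dimensional space are linearly independent, an exact interpolating matrix exists and the cyclic updates converge to it for a small enough rate.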
“…6). At each learning step k the network is exposed to one of the vectors and the learning follows the rule for static cases (Kühn et al., 2007; Makarov et al., 2008):…”
Section: Learning and Retrieval Of CIRs
“…It has been shown (Makarov et al., 2008) that the learning of static situations (i.e., of CIRs) can always be achieved by using a small enough learning rate satisfying:…”
Section: Convergence Of the Network Training Procedures
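The exact learning-rate condition is elided at the end of the quote above. For a rank-one mismatch-driven update of the form W ← W + η(ξ − Wξ)ξᵀ — an assumed stand-in for the cited rule — the mismatch is multiplied by (1 − η‖ξ‖²) at every step, so convergence requires η < 2/‖ξ‖². The sketch below demonstrates both regimes.

```python
import numpy as np

def final_mismatch(eta, xi, steps=50):
    """Mismatch norm after `steps` delta-rule updates with rate eta.

    Illustrative sketch only: for W <- W + eta * outer(xi - W @ xi, xi)
    the mismatch scales by (1 - eta * ||xi||^2) per step, so training
    converges iff eta < 2 / ||xi||^2. This stands in for the exact
    (elided) condition of the quoted text.
    """
    W = np.zeros((len(xi), len(xi)))
    for _ in range(steps):
        W += eta * np.outer(xi - W @ xi, xi)
    return np.linalg.norm(xi - W @ xi)

xi = np.array([1.0, 2.0, 2.0])    # ||xi||^2 = 9, so the sketch requires eta < 2/9
small = final_mismatch(0.1, xi)   # |1 - 0.1*9| = 0.1: fast contraction
large = final_mismatch(0.3, xi)   # |1 - 0.3*9| = 1.7: divergence
print(small, large)
```

The same small-rate reasoning extends to sequences of training vectors, which is why a sufficiently small learning rate guarantees convergence of the static (CIR) training procedure.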