2014 International Joint Conference on Neural Networks (IJCNN) 2014
DOI: 10.1109/ijcnn.2014.6889658
Long-term learning behavior in a recurrent neural network for sound recognition

Abstract: In this paper, the long-term learning properties of an artificial neural network model, designed for sound recognition and computational auditory scene analysis in general, are investigated. The model is designed to run for long periods of time (weeks to months) on low-cost hardware, used in a noise monitoring network, and builds upon previous work by the same authors. It consists of three neural layers, connected to each other by feedforward and feedback excitatory connections. It is shown that the d…


Cited by 3 publications (5 citation statements)
References 17 publications (21 reference statements)
“…The neural network builds on previous work by the same authors, and many of its mechanics are the same as described in detail in [21] [22], but for clarity the essential elements and differences are described in this paragraph. The network consists of a first layer, called the input layer of 768 neurons as mentioned before.…”
Section: Model (citation type: mentioning)
confidence: 99%
“…The output layer has excitatory feedback connections to the middle layer, with a time delay of one timestep, making the excitation pattern of the middle layer dependent on both the current input layer activation and the output layer activation on the previous timestep. Excitation of a neuron is calculated as the sum of the exciting inputs weighed by their respective neural connection weights, after which a normalization and saturation procedure is applied, as described in [22]. Final activation of the neuron is then calculated by means of a biologically inspired competitive selection procedure as will be explained in more detail below.…”
Section: Model (citation type: mentioning)
confidence: 99%
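The excitation rule quoted above (weighted sum of feedforward input and one-timestep-delayed output feedback, then normalization and saturation) can be sketched as follows. The function name, parameter names, and the specific normalization (sum-to-one) and saturation (clipping) choices are illustrative assumptions; the paper's exact procedure is the one described in its reference [22].

```python
import numpy as np

def middle_layer_step(input_act, prev_output_act, w_ff, w_fb, saturation=1.0):
    """One timestep of middle-layer excitation, as described in the quote:
    the middle layer is driven by the current input-layer activation and the
    output-layer activation from the previous timestep.

    Assumed shapes: input_act (n_in,), prev_output_act (n_out,),
    w_ff (n_mid, n_in), w_fb (n_mid, n_out).
    """
    # Excitation: sum of exciting inputs weighted by connection weights.
    excitation = w_ff @ input_act + w_fb @ prev_output_act
    # Normalization (assumed form: scale total layer excitation to one).
    total = excitation.sum()
    if total > 0:
        excitation = excitation / total
    # Saturation (assumed form: clip each neuron at an upper bound).
    return np.minimum(excitation, saturation)
```

The final neuron activations would then be obtained from this excitation via the competitive selection procedure the paper describes separately; that step is not modeled here.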
“…The same technique was recently used in a wearable device to detect sound from a heartbeat [40] and to monitor a person's health. In hardware, Boes et al [41] investigated NN learning models on low-cost devices to detect volatile events by learning signalling patterns. In their study, the authors discovered that better results could be achieved by providing greater flexibility and accommodation (in the learning model) but without losing prior knowledge.…”
Section: Neural Network Classifier Description (citation type: mentioning)
confidence: 99%