A stepwise procedure for building and training a neural network intended to perform classification tasks, based on single-layer learning rules, is presented. This procedure breaks up the classification task into subtasks of increasing complexity in order to make learning easier. The network structure is not fixed in advance: it is subject to a growth process during learning. Therefore, after training, the architecture of the network is guaranteed to be well adapted to the classification problem.
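As a rough illustration of the decomposition idea, the sketch below splits a multi-class problem into two-class subtasks, each handled by a unit trained with a single-layer rule, so that the number of units grows with the task. The perceptron rule, the pairwise decomposition, and the voting scheme are assumptions made here for illustration; they are not the paper's exact procedure.

```python
# Minimal sketch: pairwise two-class subtasks, one single-layer unit each.
# The perceptron rule and majority vote are illustrative assumptions.
import numpy as np
from itertools import combinations

def perceptron(X, y, epochs=100, lr=0.1):
    """Single-layer perceptron rule on targets y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                 # misclassified example
                w += lr * yi * xi
    return w

def build_pairwise_net(X, labels):
    """One unit per pair of classes; the architecture grows with the task."""
    units = {}
    for a, b in combinations(np.unique(labels), 2):
        mask = (labels == a) | (labels == b)
        y = np.where(labels[mask] == a, 1.0, -1.0)
        units[(a, b)] = perceptron(X[mask], y)
    return units

def classify(units, x):
    """Majority vote over the pairwise units."""
    xb = np.append(x, 1.0)
    votes = {}
    for (a, b), w in units.items():
        winner = a if (w @ xb) > 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```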
The link between the structure of a neural network and its attractor states is investigated, with a view to designing associative memories based on such networks. It is shown that, for any preassigned set of states to be memorized, the parameters of the network can in most cases be completely calculated so as to guarantee the stability of these states. The spin glass formulation of the neural network problem leads to particularly simple results which, in some cases, allow an analytical evaluation of the attractivity of the memorized states.
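One classical construction of this kind is the projection (pseudo-inverse) rule, W = S S⁺, which makes every column of the state matrix S a fixed point of the dynamics x ← sign(Wx). The sketch below assumes this rule for illustration; whether it matches the paper's exact calculation is not established here.

```python
# Hedged sketch: projection rule W = S S^+ for an associative memory.
# Every stored +/-1 state is a fixed point of x <- sign(W x), because
# the pseudo-inverse identity S S^+ S = S gives W S = S exactly.
import numpy as np

def projection_weights(patterns):
    """patterns: (n_units, n_patterns) matrix S of +/-1 states."""
    S = np.asarray(patterns, dtype=float)
    return S @ np.linalg.pinv(S)          # W = S S^+

def recall(W, x, steps=10):
    """Synchronous dynamics; stored states are stable by construction."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# usage: store two 8-unit states and verify their stability
rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(8, 2))
W = projection_weights(S)
assert np.array_equal(recall(W, S[:, 0]), S[:, 0])
```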
We present an original initialization procedure for the parameters of feedforward wavelet networks, prior to training by gradient-based techniques. It takes advantage of wavelet frames stemming from the discrete wavelet transform, and uses a selection method to determine the best wavelets, whose centers and dilation parameters serve as initial values for subsequent training. Results obtained for the modeling of two simulated processes are compared to those obtained with a heuristic initialization procedure, and demonstrate the effectiveness of the proposed method.
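A minimal sketch of the initialization idea follows: candidate wavelets are drawn from a dyadic grid, as in a discrete wavelet frame, ranked by how well they fit the training output, and the best ones supply the initial centers and dilations. The Mexican-hat wavelet and the greedy ranking criterion are assumptions for illustration, not necessarily the paper's selection method.

```python
# Hedged sketch: initialize wavelet parameters from a dyadic candidate
# family by ranking candidates against the training data.
import numpy as np

def mexican_hat(x, center, dilation):
    z = (x - center) / dilation
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def init_wavelets(x, y, n_select=3, levels=(1, 2, 4)):
    """Pick the dyadic-grid wavelets best matched to the output y."""
    candidates = [(c, 1.0 / m)
                  for m in levels
                  for c in np.arange(x.min(), x.max(), 1.0 / m)]
    scores = []
    for c, d in candidates:
        phi = mexican_hat(x, c, d)
        scores.append(abs(phi @ y) / (np.linalg.norm(phi) + 1e-12))
    order = np.argsort(scores)[::-1]
    return [candidates[i] for i in order[:n_select]]  # (center, dilation) pairs
```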
This article presents a segmental vocoder driven by ultrasound and optical images (standard CCD camera) of the tongue and lips for a "silent speech interface" application, usable either by a laryngectomized patient or for silent communication. The system is built around an audiovisual dictionary which associates visual with acoustic observations for each phonetic class. Visual features are extracted from ultrasound images of the tongue and from video images of the lips using a PCA-based image coding technique. Visual observations of each phonetic class are modeled by continuous HMMs. The system then combines a phone recognition stage with corpus-based synthesis. In the recognition stage, the visual HMMs are used to identify phonetic targets in a sequence of visual features. In the synthesis stage, these phonetic targets constrain the dictionary search for the sequence of diphones that maximizes similarity to the input test data in the visual space, subject to a concatenation cost in the acoustic domain. A prosody template is extracted from the training corpus, and the final speech waveform is generated using "Harmonic plus Noise Model" concatenative synthesis techniques. Experimental results are based on an audiovisual database containing one hour of continuous speech from each of two speakers.
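The PCA-based image coding step can be pictured as follows: frames are flattened, a principal-component basis is fitted on training frames, and each new frame is encoded by its first few projections. The array shapes and the number of components below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: PCA coding of image frames into low-dimensional
# visual feature vectors.
import numpy as np

def fit_pca(frames, n_components=20):
    """frames: (n_frames, height*width) flattened grayscale images."""
    mean = frames.mean(axis=0)
    _, _, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, Vt[:n_components]        # basis of principal images

def encode(frame, mean, basis):
    """Project one flattened frame onto the PCA basis -> visual feature."""
    return basis @ (frame - mean)
```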
In the framework of nonlinear process modeling, we propose training algorithms for feedback wavelet networks used as nonlinear dynamic models. An original initialization procedure is presented that takes the locality of the wavelet functions into account. Results obtained for the modeling of several processes are presented; a comparison with networks of neurons with sigmoidal activation functions is performed.
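To make the feedback structure concrete, the sketch below runs a recurrent wavelet model in which the regressor at each time step contains past model outputs (fed back) and past inputs, and the prediction is a weighted sum of radial wavelets. The radial Mexican hat, the regressor orders, and all names are assumptions for illustration, not the paper's network.

```python
# Hedged sketch: feedback (recurrent) wavelet model of a dynamic process.
import numpy as np

def wavelet(z):
    """Radial Mexican hat on a vector argument."""
    r2 = np.dot(z, z)
    return (len(z) - r2) * np.exp(-0.5 * r2)

def simulate(u, centers, dilations, weights, ny=2, nu=2):
    """Run the feedback model over an input sequence u."""
    y = np.zeros(len(u))
    for t in range(max(ny, nu), len(u)):
        reg = np.concatenate([y[t-ny:t], u[t-nu:t]])   # fed-back outputs
        y[t] = sum(w * wavelet((reg - c) / d)
                   for w, c, d in zip(weights, centers, dilations))
    return y
```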
The paper proposes a general framework which encompasses the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general non-linear filters which can be trained adaptively, i.e., which can undergo continual training with a possibly infinite number of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms classically used in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases within a general classification of training algorithms for feedback networks.
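The canonical-form idea can be sketched as a state-space model driven by external inputs, so that one sample-by-sample adaptive loop covers both feedforward and feedback networks. The single-layer state update and the truncated gradient below are simplifying assumptions for illustration; the actual canonical form and training algorithms are more general.

```python
# Hedged sketch: a network in canonical (state-space) form, trained
# continually on time-ordered examples, as in adaptive filtering.
import numpy as np

def step(state, u, W):
    """One canonical-form update: next state from current state and input.
    W: (n_state, n_state + n_inputs + 1)."""
    z = np.concatenate([state, u, [1.0]])   # state + input + bias
    return np.tanh(W @ z)

def adapt(W, inputs, targets, lr=0.01):
    """Sample-by-sample training; inputs is a sequence of input vectors."""
    state = np.zeros(W.shape[0])
    for u, d in zip(inputs, targets):
        z = np.concatenate([state, u, [1.0]])
        state = step(state, u, W)           # canonical-form update
        err = d - state[0]                  # first state unit = output
        # instantaneous gradient for the output row only (truncated,
        # ignoring recurrent dependencies); full algorithms such as
        # RTRL or BPTT fit in the same framework
        W[0] += lr * err * (1 - state[0]**2) * z
    return W
```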