Identification of a time-varying channel is considered using superimposed training. A sequence of known symbols with low power is arithmetically added to the information symbols before modulation and transmission, and the channel is estimated by exploiting this known superimposed data in the transmitted signal. Two iterative algorithms are considered in this paper: recursive least squares (RLS) and expectation maximization (EM). The performance of the proposed algorithms is compared with a simple averaging scheme and the LMS algorithm. For short data blocks RLS outperforms EM, but for large blocks EM is superior.
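As an illustrative sketch only (not the paper's algorithm; the pilot power, forgetting factor, and noise level below are assumptions), the superimposed-training idea can be demonstrated on a scalar channel: a low-power known pilot is added to the data, and an RLS recursion driven by the pilot alone averages out the unknown data term.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
c = 0.3 * (2 * rng.integers(0, 2, N) - 1)          # low-power known superimposed pilot
d = (2 * rng.integers(0, 2, N) - 1).astype(float)  # unit-power data, unknown at the receiver
h = 0.8                                            # channel gain (held constant for simplicity)
y = h * (d + c) + 0.05 * rng.standard_normal(N)    # received signal

# Scalar RLS using only the known pilot as regressor; the unknown data
# term h*d acts as extra noise that the recursion averages out.
lam, P, h_hat = 0.999, 1e3, 0.0                    # forgetting factor, inverse-correlation init
for n in range(N):
    k = P * c[n] / (lam + c[n] * P * c[n])         # RLS gain
    h_hat += k * (y[n] - c[n] * h_hat)             # update channel estimate
    P = (P - k * c[n] * P) / lam

print(h_hat)
```

The forgetting factor `lam < 1` is what lets the recursion track a channel that varies slowly over time; with `lam = 1` it reduces to plain least-squares averaging over the whole block.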
This paper presents expressions for the optimal step length to use when training a vector quantizer by stochastic approximation. By treating each update as an estimation problem, it provides a unified framework covering both batch and incremental training, which were previously treated separately, and extends existing results to the semibatch case. In addition, the new results presented here provide a measurable improvement over results which were previously thought to be optimal.
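The flavour of the step-length question can be seen in the simplest case, using a standard identity rather than the paper's general result: a single codevector updated incrementally with step length 1/n ends the pass equal to the exact batch sample mean, which is why step-length choice links incremental and batch training.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((500, 2)) + np.array([2.0, 0.0])

# Incremental (stochastic-approximation) update with step length 1/n:
# after the pass, c is exactly the batch mean of all samples it saw.
c = np.zeros(2)
for n, x in enumerate(data, start=1):
    c += (x - c) / n

print(c)
```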
This paper describes two improvements on a recently proposed winner-take-all (WTA) architecture with linear circuit complexity based on the cellular neural network paradigm. The general design technique originally used to select parameter values is extended to allow values to be optimized for robustness against relative parameter variations as well as absolute variations. In addition, a modified architecture, called clipped total feedback winner-take-all (CTF-WTA), is proposed. This architecture is shown to share most properties of standard cellular neural networks while being better suited to the WTA application: it is less sensitive to parameter variations and, under some conditions, converges faster than the standard cellular version. Finally, the effect of asymmetry between the neurons on the reliability of the circuit is examined, and CTF-WTA is found to be superior.
This paper presents a novel adaptive vector quantisation scheme based on the SOFM neural network. All adaptation is performed directly from the quantised image, with no explicit adaptation information transmitted or stored; thus the network learns an input distribution it has never actually seen. Training sets are generated from the received image by scaling the image to approximate the statistics of the original image and by selecting blocks in such a way as to capture edges and other image features. This data is fed to a SOFM neural network to update the codebook. A new method is also presented for ensuring that all neurons are well used, by estimating directly from the quantised image how much distortion each neuron introduces. The ability of this scheme to adapt successfully is verified by simulation.
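A minimal sketch of the SOFM codebook-update step on image blocks, assuming a 1-D neuron topology and illustrative schedules (the training blocks here are random stand-ins, not blocks scaled and selected from a received image as the paper describes):

```python
import numpy as np

rng = np.random.default_rng(2)
codebook = rng.standard_normal((16, 4))   # 16 codevectors for 2x2 image blocks
blocks = rng.standard_normal((400, 4))    # stand-in for blocks selected from the received image

def distortion(cb, xs):
    """Mean squared error of nearest-codevector quantisation."""
    d = ((xs[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

init = codebook.copy()
idx = np.arange(len(codebook))
epochs = 3
T = epochs * len(blocks)
t = 0
for _ in range(epochs):
    for x in blocks:
        frac = t / T
        eta = 0.5 * (1 - frac) + 0.01     # decaying learning rate (illustrative schedule)
        sigma = 2.0 * (1 - frac) + 0.1    # shrinking neighbourhood width
        w = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))   # winning neuron
        h = np.exp(-((idx - w) ** 2) / (2 * sigma ** 2))        # 1-D Gaussian neighbourhood
        codebook += eta * h[:, None] * (x - codebook)
        t += 1

print(distortion(init, blocks), distortion(codebook, blocks))
```

Because the winner and its neighbours all move toward each training block, nearby neurons end up representing similar blocks, and the quantisation distortion on the training data drops relative to the random initial codebook.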
By the use of noise-robust compression, the need for separate error correction can be reduced. This paper studies a number of neighbourhood functions for the SOFM for designing image vector quantiser codebooks for noisy channels. They include a neighbourhood recently proposed for the scalar coding of speech and a novel neighbourhood which makes the SOFM functionally equivalent to the popular LBG algorithm. The simulation results of these neighbourhood functions on two images provide insight into the problem of selecting an appropriate topology for the design of vector quantiser codebooks for noisy channels.
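One illustrative way a neighbourhood function can encode channel behaviour (an assumed construction for illustration, not necessarily one of the neighbourhoods studied in the paper) is to define it over the binary codeword indices, so that a single channel bit error maps to a neighbouring, and therefore similar, codevector:

```python
import numpy as np

def hamming(a, b):
    """Number of differing bits between two codeword indices."""
    return bin(a ^ b).count("1")

K = 8  # 8 codevectors, so each index is a 3-bit codeword
# Neighbourhood strength decays with the Hamming distance between indices,
# so neurons whose indices differ in one bit (the likely channel
# confusions) are trained to hold similar codevectors.
H = np.array([[np.exp(-hamming(i, j)) for j in range(K)] for i in range(K)])
print(H)
```

During SOFM training, row `H[w]` would replace the usual spatial neighbourhood of the winning neuron `w`, pulling bit-error neighbours toward the same region of the input space.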