This paper introduces a new method of analysis for Delta-Sigma Modulators based on modeling the nonlinear quantizer with a linearized gain, obtained by minimizing a mean-square-error criterion [7], followed by an additive noise source representing distortion components. In the paper, input-signal amplitude dependencies of Delta-Sigma Modulator stability and signal-to-noise ratio are analyzed. It is shown that, due to the nonlinearity of the quantizer, the signal-to-noise ratio of the modulator may decrease as the input amplitude increases prior to saturation. Also, a stable third-order Delta-Sigma Modulator may become unstable when the input amplitude is increased beyond a certain threshold. Both of these phenomena are explained by the nonlinear analysis of this paper. The analysis is carried out for both DC and sinusoidal excitations.
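The linearized-gain idea described above can be illustrated numerically: for a quantizer input x, the gain k minimizing the mean-square error E[(Q(x) - kx)^2] is k = E[x·Q(x)]/E[x^2], and the residual Q(x) - kx is the additive noise source. A minimal sketch, assuming a 1-bit (sign) quantizer and a Gaussian input for illustration (these specifics are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
x = rng.normal(0.0, sigma, 200_000)      # quantizer input (illustrative)
q = np.sign(x)                           # 1-bit quantizer output

# MSE-optimal linearized gain: k = E[x*q] / E[x^2]
k = np.mean(x * q) / np.mean(x * x)

# Residual additive "noise" source; by construction it is
# uncorrelated with the input signal.
e = q - k * x
corr = np.mean(x * e)
```

For a zero-mean Gaussian input the closed form is k = sqrt(2/pi)/sigma, so the gain grows as the input amplitude shrinks; it is this amplitude dependence of k that drives the stability and SNR behavior the abstract describes.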
This paper describes elements necessary for a general-purpose low-cost very large scale integration (VLSI) neural network. By choosing a learning algorithm that is tolerant of analog nonidealities, the promise of high-density analog VLSI is realized. A 64-synapse, 8-neuron proof-of-concept chip is described. The synapse, which occupies only 4900 μm² in a 2-μm technology, includes a hybrid of nonvolatile and dynamic weight storage that provides fast and accurate learning as well as reliable long-term storage with no refreshing. The architecture is user-configurable in any one-hidden-layer topology. The user interface is fully microprocessor compatible. Learning is accomplished with minimal external support; the user need only present inputs, targets, and a clock. Learning is fast and reliable. The chip solves four-bit parity in an average of 680 ms and is successful in about 96% of the trials.
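The four-bit parity benchmark mentioned above can be stated compactly; a minimal sketch in plain Python (no chip specifics), showing the 16-pattern training set the benchmark implies:

```python
from itertools import product

def parity4(bits):
    # four-bit parity: target is 1 iff an odd number of input bits is set
    return sum(bits) % 2

# all 16 input patterns with their parity targets
patterns = [(b, parity4(b)) for b in product((0, 1), repeat=4)]
```

Parity is a standard hard benchmark for small networks because it is not linearly separable in any subset of its inputs, which is why a one-hidden-layer topology is needed.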
This paper describes concepts that optimize an on-chip learning algorithm for implementation of VLSI neural networks with conventional technologies. The network considered comprises an analog feedforward network with digital weights and update circuitry, although many of the concepts are also valid for analog weights. A general, semi-parallel form of perturbation learning is used to accelerate hidden-layer update, while the infinity-norm error measure greatly simplifies error detection. Dynamic gain adaptation, coupled with an annealed learning rate, produces consistent convergence and maximizes the effective resolution of the bounded weights. The use of logarithmic analog-to-digital conversion during the backpropagation phase obviates the need for digital multipliers in the update circuitry without compromising learning quality. These concepts have been validated through network simulations of continuous mapping problems.
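Perturbation learning with an infinity-norm error and an annealed learning rate can be sketched in software. This is a minimal single-node illustration only: the toy mapping, `w_true`, and all constants are invented for the example, and a serial one-weight-at-a-time perturbation stands in for the paper's semi-parallel scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: fit a 2-input linear node (w_true is illustrative).
w_true = np.array([0.5, -0.3])
X = rng.uniform(-1.0, 1.0, (2, 50))   # 50 training patterns
t = w_true @ X

def err_inf(w):
    # infinity-norm error measure: worst-case error over the training set
    return np.max(np.abs(w @ X - t))

w = np.zeros(2)
pert = 1e-3
for epoch in range(400):
    eta = 0.1 / (1.0 + epoch / 50.0)   # annealed learning rate
    for j in range(2):
        base = err_inf(w)
        w[j] += pert                   # perturb a single weight
        delta = err_inf(w) - base      # measured change in the error
        w[j] -= pert                   # restore the weight
        w[j] -= eta * np.sign(delta)   # simple sign-based update
```

The sign-based update reflects why the infinity norm simplifies error detection in hardware: only the worst-case error and the direction of its change need to be measured, not a full gradient, so no digital multiplier is required in the update path.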