The most popular neural network strategy is back propagation. This strategy initiated general interest in neural networks among researchers. While back propagation can solve nonlinear problems, it is considered to be a poor example of neuron functioning. Recently, Gardner (1993) has made a strong case for a back-propagating phenomenon in networks of living neurons. In this paper, we present a few simple computational examples that investigate another component of the typical back propagation network. The effects of varying transfer functions are illustrated, along with the resulting variations in possible synaptic weights. Graphic presentations in 3-D space of the relationship between transfer functions and synaptic weights suggest neural analogies of cell-firing rate and network control.

The back propagation strategy used in neural modeling is generally considered to be a poor approximation of the real biological events that change neurons during learning (Harvey, 1994; MacGregor, 1993). In particular, the strategy of retroflexively affecting a presynaptic membrane, which is the sine qua non of back propagation, is contrary to long-standing biological evidence. Nevertheless, evidence elegantly presented by Gardner (1993) makes a strong case for a retrograde effect of the postsynaptic cell membrane on the presynaptic membrane in a way analogous to back propagation. In the light of this new evidence, it seems that back propagation is more neuromorphic than was originally believed.

Gardner's (1993) insight into the neural analogy of back propagation has prompted us to reconsider the neuromorphology of other mathematical components used by the back propagation strategy. This paper focuses on the transfer function and its relationship to the synaptic weights generated via back propagation. The transfer function bounds the output of the simulated neuron by adjusting the activation function.
In the typical back propagation computer program, the activation function is a multiplication of the incoming signal by the potency of the synapses (the weight, w_ji) and the addition of a value generated by a limiting function (the threshold). The resultant of these equations is then squashed by the transfer function, which sets upper, lower, and rate-of-change limits on the transmitted signal (Mpitsos, Burton, & Creech, 1988). Similarly, the biological neuron is bounded by its rate of firing, changes in firing rate, and its firing pattern. Although the maximum firing rate of a neuron is considered to be 1,000 Hz, few neurons, if any, fire at their maximum rate (Bremner & Denham, 1993; Hinton & Anderson, 1981).

The purpose of this experiment was to investigate two transfer functions representing the neuron's manipulation of the incoming signal to produce the biological output. One of these transfer functions was linear and the other was sigmoid. These two transfer functions were applied to two chaotic data sets, one in...

Correspondence should be addressed to F. J. Bremner, Department of Psychology, Trinity University, 715 Stadium Dr., San Antonio, TX 78212-7200.
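The computation described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' program: the function names, the slope and clipping bounds of the linear transfer function, and the example signal, weight, and threshold values are all assumptions chosen for demonstration.

```python
import math

def activation(signal, weights, threshold):
    """Weighted sum: each incoming signal multiplied by its synaptic
    weight (w_ji), plus the threshold value."""
    return sum(s * w for s, w in zip(signal, weights)) + threshold

def linear_transfer(x, slope=0.25, lower=0.0, upper=1.0):
    """Linear transfer: scale the activation, then clip it to the
    upper and lower limits (illustrative slope and bounds)."""
    return max(lower, min(upper, slope * x))

def sigmoid_transfer(x):
    """Sigmoid transfer: smoothly squashes any activation into (0, 1),
    bounding the simulated neuron's output."""
    return 1.0 / (1.0 + math.exp(-x))

# A simulated neuron with three input synapses (illustrative values).
signal = [0.9, -0.4, 0.6]
weights = [0.5, 0.8, -0.3]
a = activation(signal, weights, threshold=0.1)
print(linear_transfer(a), sigmoid_transfer(a))
```

Whatever the magnitude of the weighted sum, both transfer functions return a value within fixed limits, which is the sense in which the transfer function bounds the simulated neuron's firing, analogous to a biological neuron's maximum firing rate.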