Recent research on multiple schedule interactions is reviewed. Contrary to formulations that view contrast as the result of elicited behavior controlled by the stimulus-reinforcer contingency (e.g., additivity theory), the major controlling variable is the relative rate of reinforcement, which cannot be reduced to some combination of stimulus-reinforcer and response-reinforcer effects. Other recent theoretical formulations are also reviewed and all are found to face serious counterevidence. The best description of the available data continues to be in terms of the "context of reinforcement," but Herrnstein's (1970) formulation of the basis of such context effects appears to be inadequate. An alternative conception is provided by Catania's concept of "inhibition by reinforcement," according to which rate of responding is inversely related to the average rate of reinforcement in the situation. Such a conception is related to Gibbon's recent scalar-expectancy account of autoshaping and Fantino's delay-reduction model of conditioned reinforcement, suggesting that a common set of principles determines several diverse conditioning phenomena. However, the empirical status of such a description remains uncertain, because recent evidence shows that schedule interactions are temporally asymmetric, depending primarily upon the conditions of reinforcement that follow a schedule component.
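The context-of-reinforcement account discussed above has a standard quantitative form: Herrnstein's (1970) equation, in which response rate in one component of a multiple schedule is a hyperbolic function of that component's reinforcement rate relative to all reinforcement in the situation. A minimal sketch follows; the parameter values `k` (asymptotic rate) and `re` (background reinforcement) are illustrative assumptions, not values from the studies reviewed.

```python
def herrnstein_component_rate(r1, r2, k=100.0, re=10.0):
    """Herrnstein's (1970) multiple-schedule equation (simplified form):
    response rate in component 1 is proportional to its reinforcement
    rate r1 divided by the total reinforcement context (r1 + r2 + re).

    r1, r2 -- reinforcement rates in the target and alternate components
    k      -- asymptotic response rate (illustrative value)
    re     -- background ("extraneous") reinforcement rate (illustrative)
    """
    return k * r1 / (r1 + r2 + re)

# Positive behavioral contrast: holding r1 fixed, removing reinforcement
# from the alternate component raises responding in the target component.
baseline = herrnstein_component_rate(60, 60)   # equal components
contrast = herrnstein_component_rate(60, 0)    # other component extinguished
```

On this account, contrast arises purely from the change in the denominator, i.e., the total reinforcement context, which is the feature the reviewed evidence on temporal asymmetry calls into question.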
Pigeons chose between fixed-interval schedules of different durations presented in the terminal links of concurrent-chains schedules. The pair of schedules was always in the ratio of 2:1, but the absolute duration of the fixed intervals varied. In one set of conditions, the different terminal-link schedules were associated with different keylight stimuli (cued conditions). In a second set of conditions, the different terminal-link schedules were associated with the same stimulus (uncued conditions). Results from the cued conditions replicated previous findings that preference for the shorter fixed-interval schedule increased with fixed-interval duration. Preferences in the uncued conditions were lower than in the corresponding cued conditions but also increased with fixed-interval length. In addition, the degree of control under the uncued conditions was correlated with the extent to which the schedule during the terminal link was discriminated immediately upon entry into the terminal link. The pattern of results in both conditions was inconsistent with the notion that choice behavior matches relative immediacy of reinforcement. Reanalysis of previous evidence for matching (Chung and Herrnstein, 1967) showed that matching in fact did not occur, as the preferences of their subjects for the shorter of two delays also increased with the absolute size of the delays.
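The matching account rejected above makes a sharp prediction that the data contradict: if choice proportions match relative immediacy of reinforcement (immediacy being the reciprocal of delay), then any pair of fixed intervals in a 2:1 ratio should yield the same preference, regardless of absolute duration. A minimal sketch of that prediction:

```python
def matching_prediction(t_short, t_long):
    """Predicted preference for the shorter fixed interval under strict
    matching of choice proportions to relative immediacy of
    reinforcement, where immediacy is the reciprocal of the delay."""
    i_short, i_long = 1.0 / t_short, 1.0 / t_long
    return i_short / (i_short + i_long)

# Matching predicts an identical preference (2/3) for every 2:1 pair,
# no matter the absolute fixed-interval durations:
for pair in [(10, 20), (30, 60), (90, 180)]:
    print(pair, round(matching_prediction(*pair), 3))  # each pair: 0.667
```

Because observed preference for the shorter schedule instead grew with absolute fixed-interval duration, in both the cued and uncued conditions, the flat 2/3 prediction fails, which is the basis of the reanalysis of Chung and Herrnstein (1967).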
Haptic perception is an active process that provides an awareness of objects that are encountered as an organism scans its environment. In contrast to the sensation of touch produced by contact with an object, the perception of object location arises from the interpretation of tactile signals in the context of the changing configuration of the body. A discrete sensory representation and a low number of degrees of freedom in the motor plant make the ethologically prominent rat vibrissa system an ideal model for the study of the neuronal computations that underlie this perception. We found that rats with only a single vibrissa can combine touch and movement to distinguish the location of objects that vary in angle along the sweep of vibrissa motion. The patterns of this motion and of the corresponding behavioral responses show that rats can scan potential locations and decide which location contains a stimulus within 150 ms. This interval is consistent with just one to two whisk cycles and provides constraints on the underlying perceptual computation. Our data argue against strategies that do not require the integration of sensory and motor modalities. The ability to judge angular position with a single vibrissa thus connects previously described, motion-sensitive neurophysiological signals to perception in the behaving animal.
The concept of conditioned reinforcement has received decreased attention in learning textbooks over the past decade, in part because of criticisms of its validity by major behavior theorists and in part because its explanatory function in a variety of different conditioning procedures has become uncertain. Critical data from the major procedures that have been used to investigate the concept (second-order schedules, chain schedules, concurrent chains, observing responses, delay-of-reinforcement procedures) are reviewed, along with the major issues of interpretation. Although the role played by conditioned reinforcement in some procedures remains unresolved, the results taken together leave little doubt that the underlying idea of conditioned value is a critical component of behavior theory that is necessary to explain many different types of data. Other processes (marking, bridging) may also operate to produce effects similar to those of conditioned reinforcement, but these clearly cannot explain the full domain of experimental effects ascribed to conditioned reinforcement and should be regarded as complements to the concept rather than theoretical competitors. Examples of practical and theoretical applications of the concept of conditioned reinforcement are also considered.

Key words: conditioned reinforcement, behavior theory, observing behavior, chain schedules, delay of reinforcement, concurrent chains

A general assumption in contemporary behavior analysis is that human behavior is best understood in terms of the contingencies of reinforcement operating on that behavior. Yet much, if not most, human behavior has little immediate impact on satisfying the biological motives that underlie the reinforcement contingencies commonly studied in the laboratory. People are not born with a tendency to work for money, to like the taste of alcohol or coffee, or to discover laws of behavior.
We are also not born with the motivation to engage in compulsive hand washing or to be fearful of speaking in public. Such motives, both positive and negative, are learned, and a major task of any behavior theory is to specify how such learning occurs, both in order to have a complete theory of behavior
Pigeons' pecks were reinforced according to a variable-interval schedule. A delay-of-reinforcement procedure was then added to the schedule, or a yoked-control procedure was arranged in which the reinforcers occurred independently of responding according to the same variable-interval schedule. During the delay-of-reinforcement procedure, the first peck after a reinforcer had been scheduled started a delay timer, and the reinforcer was delivered at the end of the delay. No stimulus change signalled the delay interval, and responses could occur during it, so the obtained delays were often shorter than those scheduled. Responding under this procedure was highly variable but, in general, behavior was substantially reduced even with the shortest delay used, 3 sec. In addition, the rates maintained by delayed reinforcement were only slightly greater than those maintained by the yoked-control procedure, suggesting that adventitious pairings of response and reinforcer were responsible for some of the maintenance of behavior that did occur. The results challenge recent conceptions of reinforcement as involving response-reinforcer correlations and re-emphasize the role of temporal proximity between response and reinforcer.
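The reason obtained delays fall short of scheduled ones in this unsignalled, nonresetting procedure can be made concrete with a small simulation: because pecks continue during the delay interval, the effective response-reinforcer delay is the time from the last peck to food, not the full scheduled value. The response rate and seed below are illustrative assumptions, not parameters from the experiment.

```python
import random

def obtained_delay(scheduled_delay, response_rate, rng):
    """Simulate one unsignalled, nonresetting delay interval.

    A peck at time 0 starts the delay timer; further pecks may occur
    during the delay (modeled here as a Poisson process), so the
    obtained delay -- the time from the LAST peck to reinforcement --
    can be much shorter than the scheduled delay.
    """
    t, last_peck = 0.0, 0.0
    while True:
        gap = rng.expovariate(response_rate)  # time to the next peck
        if t + gap >= scheduled_delay:
            break                              # no more pecks before food
        t += gap
        last_peck = t
    return scheduled_delay - last_peck

rng = random.Random(1)
delays = [obtained_delay(3.0, 0.5, rng) for _ in range(10000)]
mean_obtained = sum(delays) / len(delays)
# mean obtained delay falls below the scheduled 3 sec
```

This gap between scheduled and obtained delay is what makes the adventitious response-reinforcer pairings noted above possible, since some reinforcers arrive shortly after a peck despite the programmed delay.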