1991
DOI: 10.1901/jeab.1991.56-455
Choice as a Function of Local Versus Molar Reinforcement Contingencies

Abstract: Rats were trained on a discrete-trial probability learning task. In Experiment 1, the molar reinforcement probabilities for the two response alternatives were equal, and the local contingencies of reinforcement differentially reinforced a win-stay, lose-shift response pattern. The win-stay portion was learned substantially more easily and appeared from the outset of training, suggesting that its occurrence did not depend upon discrimination of the local contingencies but rather only upon simple strengthening e…

Cited by 29 publications (31 citation statements)
References 25 publications
“…One question to ask is whether the opposed primacy of each of these complementary factors is absolute (e.g., Williams, 1991, 1993). That is, is IRT reinforcement irrelevant to measures of preference, and preference irrelevant to the determination of local response rates to a schedule?…”
Section: Primacy of Molecular and Molar Variables
confidence: 99%
“…A possible answer is suggested by experiments showing that the magnitude of the difference in reinforcement probability across competing response patterns determines whether subjects learn the momentary maximizing sequence of the schedule (e.g., Williams, 1972, 1991). Then consider how, in the three items listed above, a response pattern changes the momentary payoff probabilities.…”
Section: Operant Variability
confidence: 99%
“…Another possibility is that the pigeons did not learn the stable patterns because frequency-dependent schedules are intrinsically challenging. To see this, consider the discrete-trials schedules commonly designed to test momentary maximizing (e.g., Hiraoka, 1984; Shimp, 1966; Silberberg & Williams, 1974; Williams, 1972, 1991). In most of these schedules, a salient event such as food or a response occurrence resets the reinforcement contingencies (Staddon, Hinson, & Kram, 1981), whereas in frequency-dependent schedules there is no such event.…”
Section: Choice Behavior
confidence: 99%
“…I partly make this recommendation in recognition of the fact that one's own personal belief about a paradigm, no matter how strongly held, may be sadly mistaken (Shimp, 1999). Perhaps, for example, we should consider the possibility that local and global analyses complement each other, and that it makes perfectly good sense to ask, not which of the two analyses is universally correct for all important problems, but what each means and in what contexts each applies (see also Hawkes & Shimp, 1998; Hineline, 2001; Williams, 1991). My reading of Staddon's book suggests he, too, sees potential value in both kinds of analysis.…”
Section: Two Recommendations for Peer Review
confidence: 99%