Abstract

Methods used previously to test a Markov model for leadership selection in small groups were applied to a special set of reinforcement conditions for the decisions of the leader. In earlier runs, members were reinforced on a partial schedule, but in the present experiment one member was reinforced with probability 1 for his decisions. Partly, it appeared, because of an experimental artifact, fits to predictions from the model were obtained for only one of the two sets of Ss.
Introduction

Recent work by Binder, Wolin, & Terebinski (1965a, 1965b, in press) has been directed at developing and testing a Markov model for a leader-selection game in teams of three members. The game consists of a sequence of trials, on each of which the members of a team vote for a leader and the designated leader then makes a decision for the team. The members of a team are not informed of the actual decision of the leader on a trial, only whether his decision was right or wrong. This made it possible to control the reinforcement probability (% right) for the decisions of each member when he acted as the team's leader.

The parameters in the mathematical model consisted of the reinforcement probabilities (fixed experimentally) of each member and the probability (estimated from data) of a shift in voting choice from trial n to trial n+1 for those contingencies where this probability was not assumed to be zero. Reinforcement probabilities were run in various combinations of the values .9, .7, .5, .3, and .1; thus, for example, five teams were run in the combination .9, .5, .1, meaning that one member of the team of three was informed that he was right for 90% of the decisions he made when leader, another was so reinforced 50% of the time, and the third 10%. Despite certain weaknesses, the model provided generally good fits between obtained and expected results, especially over asymptotic trials. Matching and maximizing models were clearly inferior.
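The trial structure described above can be sketched as a small simulation. The majority-vote rule with random tie-breaking, the shift-only-after-"wrong" rule, and the shift probability used here are illustrative assumptions for the sketch, not the authors' procedure or estimated values; only the fixed per-member reinforcement probabilities come from the text.

```python
import random

def simulate(reinforce=(0.9, 0.5, 0.1), shift_prob=0.4, trials=200, seed=1):
    """Sketch of the leader-selection game (hypothetical mechanics).

    Each member holds a current voting choice; the modal choice becomes
    the trial's leader; the leader's decision is called "right" with that
    member's fixed reinforcement probability. After a "wrong" outcome,
    each member who voted for the leader switches to another member with
    probability shift_prob; after "right", no shift occurs (an assumed
    zero-probability contingency). Returns how often each member led.
    """
    rng = random.Random(seed)
    n = len(reinforce)
    votes = [rng.randrange(n) for _ in range(n)]  # initial voting choices
    leader_counts = [0] * n
    for _ in range(trials):
        # Leader = modal vote, ties broken at random.
        tally = [votes.count(i) for i in range(n)]
        top = max(tally)
        leader = rng.choice([i for i in range(n) if tally[i] == top])
        leader_counts[leader] += 1
        right = rng.random() < reinforce[leader]
        if not right:
            # Members who backed the leader may shift their vote.
            for m in range(n):
                if votes[m] == leader and rng.random() < shift_prob:
                    votes[m] = rng.choice([i for i in range(n) if i != leader])
    return leader_counts
```

Under these assumed dynamics, votes tend to concentrate over trials on the member with the highest reinforcement probability, which is the qualitative pattern the model is meant to capture at asymptote.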