  • Parallel experiments with rats and pigeons examined whether the size of a pre-trial ratio requirement would affect choices in a self-control situation. In different conditions, either 1 response or 40 responses were required before each trial. In the first half of each experiment, an adjusting-ratio schedule was used, in which subjects could choose a fixed-ratio schedule leading to a small reinforcer, or an adjusting-ratio schedule leading to a larger reinforcer. The size of the adjusting ratio requirement was increased and decreased over trials based on the subject's responses, in order to estimate an indifference point: a ratio at which the two alternatives were chosen about equally often. The second half of each experiment used an adjusting-delay procedure, in which fixed and adjusting delays to the small and large reinforcers were used instead of ratio requirements. In some conditions, particularly with the reinforcer delays, the rats had consistently longer adjusting delays with the larger pre-trial ratios, reflecting a greater tendency to choose the larger, delayed reinforcer when more responding was required to reach the choice point. No consistent effects of the pre-trial ratio were found for the pigeons in any of the conditions. These results may indicate that rats are more sensitive to the long-term reinforcement rates of the two alternatives, or they may result from a shallower temporal discounting rate for rats than for pigeons, a difference that has been observed in previous studies.
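The adjusting procedures described in these abstracts estimate indifference points by titration: the adjusting requirement is raised after choices of the adjusting alternative and lowered after choices of the fixed alternative. The following is a minimal sketch of such a titration loop; the function name, step size, and stopping rule are illustrative assumptions, not the published procedure (which adjusts in blocks of trials and applies stability criteria across sessions).

```python
def titrate(choose_adjusting, start_delay=5.0, step=1.0, trials=200,
            min_delay=0.0):
    """Estimate an indifference point by titrating the adjusting delay.

    choose_adjusting(delay) -> True if the subject picks the adjusting
    alternative when its current delay is `delay`.
    """
    delay = start_delay
    history = []
    for _ in range(trials):
        if choose_adjusting(delay):
            # Subject tolerated the delay: make the adjusting delay longer.
            delay += step
        else:
            # Subject rejected it: make the adjusting delay shorter.
            delay = max(min_delay, delay - step)
        history.append(delay)
    # Estimate the indifference point as the mean delay over the
    # second (stable) half of the session.
    tail = history[len(history) // 2:]
    return sum(tail) / len(tail)
```

With a deterministic chooser that accepts any delay below 10 s, the titrated delay settles near 10 s, illustrating how the procedure converges on the point of indifference.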

  • Two experiments on discrete-trial choice examined the conditions under which pigeons would exhibit exclusive preference for the better of two alternatives as opposed to distributed preference (making some choices for each alternative). In Experiment 1, pigeons chose between red and green response keys that delivered food after delays of different durations, and in Experiment 2 they chose between red and green keys that delivered food with different probabilities. Some conditions of Experiment 1 had fixed delays to food and other conditions had variable delays. In both experiments, exclusive or nearly exclusive preference for the better alternative was found in some conditions, but distributed preference was found in other conditions, especially in Experiment 2 when key location varied randomly over trials. The results were used to evaluate several different theories about discrete-trial choice. The results suggest that exclusive preference for one alternative is a frequent outcome in discrete-trial choice. When distributed preference does occur, it is not the result of inherent tendencies to sample alternatives or to match response percentages to the values of the alternatives. Rather, distributed preference may occur when two factors (such as reinforcer delay and position bias) compete for the control of choice, or when the consequences for the two alternatives are similar and difficult to discriminate.

  • The use of mathematical models in the experimental analysis of behavior has increased over the years, and these models offer several advantages. Mathematical models require theorists to be precise and unambiguous, often allowing comparisons of competing theories that sound similar when stated in words. Sometimes different mathematical models may make equally accurate predictions for a large body of data. In such cases, it is important to find and investigate situations for which the competing models make different predictions because, unless two models are actually mathematically equivalent, they are based on different assumptions about the psychological processes that underlie all observed behavior. Mathematical models developed in basic behavioral research have been used to predict and control behavior in applied settings, and they have guided research in other areas of psychology. A good mathematical model can provide a common framework for understanding what might otherwise appear to be diverse and unrelated behavioral phenomena. Because psychologists vary in their quantitative skills and in their tolerance for mathematical equations, it is important for those who develop mathematical models of behavior to find ways (such as verbal analogies, pictorial representations, or concrete examples) to communicate the key premises of their models to nonspecialists.

  • Rats chose between alternatives that differed in the number of reinforcers and in the delay to each reinforcer. A left leverpress led to two reinforcers, each delivered after a fixed delay. A right leverpress led to one reinforcer after an adjusting delay. The adjusting delay was increased or decreased many times in a session, depending on the rat's choices, in order to estimate an indifference point: a delay at which the two alternatives were chosen about equally often. Both the number of reinforcers and their individual delays affected the indifference points. The overall pattern of results was well described by the hyperbolic-decay model, which states that each additional reinforcer delivered by an alternative increases preference for that alternative but that a reinforcer's effect is inversely related to its delay. Two other possible delay-discounting equations, an exponential equation and a reciprocal equation, did not produce satisfactory predictions for these data. Adding a free parameter to the hyperbolic equation as an exponent for delay did not appreciably improve the predictions, suggesting that raising delay to some power other than 1.0 was unnecessary. The results were qualitatively similar to those from a previous experiment with pigeons (Mazur, 1986), but quantitative differences suggested that the rates of delay discounting were several times slower for rats than for pigeons.
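The hyperbolic-decay model referred to throughout these abstracts gives a reinforcer of amount A delivered after delay D the value V = A / (1 + KD), where K is a discounting-rate parameter. The sketch below compares it with the exponential and reciprocal alternatives mentioned above, and shows how the model sums the discounted values of multiple reinforcers; the parameter values are illustrative, not fitted values from the experiments.

```python
import math

def hyperbolic(amount, delay, k=0.2):
    # Hyperbolic-decay model: V = A / (1 + K*D)
    return amount / (1.0 + k * delay)

def exponential(amount, delay, k=0.2):
    # Exponential alternative: V = A * exp(-K*D)
    return amount * math.exp(-k * delay)

def reciprocal(amount, delay, k=0.2):
    # Reciprocal alternative: V = A / (K*D); undefined at D = 0
    return amount / (k * delay) if delay > 0 else float("inf")

def hyperbolic_total(amounts_and_delays, k=0.2):
    # For an alternative delivering several reinforcers, the model sums
    # the discounted value of each one.
    return sum(hyperbolic(a, d, k) for a, d in amounts_and_delays)
```

Under this model, the indifference delay for a single adjusting reinforcer against two fixed-delay reinforcers is the delay at which `hyperbolic(...)` equals `hyperbolic_total(...)`, which is how the adjusting-delay data were evaluated against the candidate equations.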

  • Pigeons responded in a successive-encounters procedure that consisted of a search period, a choice period, and a handling period. The search period was either a fixed-interval or a mixed-interval schedule presented on the center key of a three-key chamber. Upon completion of the search period, the center key was turned off and the two side keys were lit. A pigeon could either accept a delay followed by food (by pecking the right key) or reject this option and return to the search period (by pecking the left key). During the choice period, a red right key represented the long alternative (a long handling delay followed by food), and a green right key represented the short alternative (a short handling delay followed by food). The experiment consisted of a series of comparisons for which optimal diet theory predicted no changes in preference for the long alternative (because the overall rates of reinforcement were unchanged), whereas the hyperbolic-decay model predicted changes in preference (because the delays to the next possible reinforcer were varied). In all comparisons, the results supported the predictions of the hyperbolic-decay model, which states that the value of a reinforcer is inversely related to the delay between a choice response and reinforcer delivery.

  • Pigeons responded on concurrent-chains schedules with equal variable-interval schedules as initial links. One terminal link delivered a single reinforcer after a fixed delay, and the other terminal link delivered either three or five reinforcers, each preceded by a fixed delay. Some conditions included a postreinforcer delay after the single reinforcer to equate the total durations of the two terminal links, but other conditions did not include such a postreinforcer delay. With short initial links, preference for the single-reinforcer alternative decreased when a postreinforcer delay was present, but with long initial links, the postreinforcer delays had no significant effect on preference. In conditions with a postreinforcer delay, preference for the single-reinforcer alternative frequently switched from above 50% to below 50% as the initial links were lengthened. This pattern of results was consistent with delay-reduction theory (Squires & Fantino, 1971), but not with the contextual-choice model (Grace, 1994) or the hyperbolic value-added model (Mazur, 2001) as they have usually been applied. However, the hyperbolic value-added model could account for the results if its calculations were expanded to include reinforcers delivered in later terminal links. The implications of these findings for models of concurrent-chains performance are discussed.

  • Pigeons responded in a successive-encounters procedure that consisted of a search state, a choice state, and a handling state. The search state was either a fixed-interval or mixed-interval schedule presented on the center key of a three-key chamber. Upon completion of the search state, the choice state was presented, in which the center key was off and the two side keys were lit. A pigeon could either accept a delay followed by food (by pecking the right key) or reject this option and return to the search state (by pecking the left key). During the choice state, a red right key represented the long alternative (a long handling delay followed by food), and a green right key represented the short alternative (a short handling delay followed by food). In some conditions, both the short and long alternatives were fixed-time schedules, and in other conditions both were mixed-time schedules. Contrary to the predictions of both optimal foraging theory and delay-reduction theory, the percentage of trials on which pigeons accepted the long alternative depended on whether the search and handling schedules were fixed or mixed. They were more likely to accept the long alternative when the search states were fixed-interval rather than mixed-interval schedules, and more likely to reject the long alternative when the handling states were fixed-time rather than mixed-time schedules. This pattern of results was in qualitative agreement with the predictions of the hyperbolic-decay model, which states that the value of a reinforcer is inversely related to the delay between a choice response and reinforcer delivery.

  • An adjusting-delay procedure was used to study rats' choices with probabilistic and delayed reinforcers, and to compare them with previous results from pigeons. A left lever press led to a 5-s delay signaled by a light and a tone, followed by a food pellet on 50% of the trials. A right lever press led to an adjusting delay signaled by a light followed by a food pellet on 100% of the trials. In some conditions, the light and tone for the probabilistic reinforcer were present only on trials that delivered food. In other conditions, the light and tone were present on all trials that the left lever was chosen. Similar studies with pigeons [Mazur, J.E., 1989. Theories of probabilistic reinforcement. J. Exp. Anal. Behav. 51, 87-99; Mazur, J.E., 1991. Conditioned reinforcement and choice with delayed and uncertain primary reinforcers. J. Exp. Anal. Behav. 63, 139-150] found that choice of the probabilistic reinforcer increased dramatically when the delay-interval stimuli were omitted on no-food trials, but this study found no such effect with the rats. In other conditions, the probability of food was varied, and comparisons to previous studies with pigeons indicated that rats showed greater sensitivity to decreasing reinforcer probabilities. The results support the hypothesis that rats' choices in these situations depend on the total time between a choice response and a reinforcer, whereas pigeons' choices are strongly influenced by the presence of delay-interval stimuli. (c) 2007 Elsevier B.V. All rights reserved.

  • In Experiment 1 with rats, a left lever press led to a 5-s delay and then a possible reinforcer. A right lever press led to an adjusting delay and then a certain reinforcer. This delay was adjusted over trials to estimate an indifference point, or a delay at which the two alternatives were chosen about equally often. Indifference points increased as the probability of reinforcement for the left lever decreased. In some conditions with a 20% chance of food, a light above the left lever was lit during the 5-s delay on all trials, but in other conditions, the light was only lit on those trials that ended with food. Unlike previous results with pigeons, the presence or absence of the delay light on no-food trials had no effect on the rats' indifference points. In other conditions, the rats showed less preference for the 20% alternative when the time between trials was longer. In Experiment 2 with rats, fixed-interval schedules were used instead of simple delays, and the presence or absence of the fixed-interval requirement on no-food trials had no effect on the indifference points. In Experiment 3 with rats and Experiment 4 with pigeons, the animals chose between a fixed-ratio 8 schedule that led to food on 33% of the trials and an adjusting-ratio schedule with food on 100% of the trials. Surprisingly, the rats showed less preference for the 33% alternative in conditions in which the ratio requirement was omitted on no-food trials. For the pigeons, the presence or absence of the ratio requirement on no-food trials had little effect. The results suggest that there may be differences between rats and pigeons in how they respond in choice situations involving delayed and probabilistic reinforcers.

  • In Experiment 1, an adjusting-delay procedure was used to measure pigeons' choices between a single delayed reinforcer and a range of different variable-time schedules. Indifference points showed an inverse relation between rate of reinforcement and delay that was well described by a hyperbolic equation. An adjusting-amount procedure was used in Experiment 2, in which pigeons chose between an adjusting amount of food delivered after a 0.5-s delay and 3 s of food delivered after a range of different delays, and the effects of delay were similar to those found in Experiment 1. The results from both experiments indicated that, for pigeons, the strength of a reinforcer decreased rapidly with increasing delay. Estimates of a decay rate parameter in the hyperbolic equation were similar to those found in other studies with pigeons, but the rates of temporal discounting were three or four times faster than those found in studies with rats, suggesting a possible species difference. (C) 2000 Published by Elsevier Science B.V. All rights reserved.

  • In Experiment 1, pigeons responded on concurrent-chains schedules with equal variable-interval schedules as initial links and fixed delays to food as terminal links. One terminal-link delay was always three times as long as the other. As terminal-link delays increased, response percentages on the key with the shorter terminal link increased according to a curvilinear function. This result supported the predictions of the hyperbolic value-added model and the contextual-choice model but not delay-reduction theory. In Experiment 2, the terminal links were always delays of 2 s and 12 s, followed by food, and the durations of the initial links varied across conditions. As initial-link durations increased, pigeons' response percentages on the key with the shorter terminal link decreased, but toward an asymptote greater than 50%, indicating a continued preference for the shorter terminal link with very long initial links. This result was more consistent with the predictions of the hyperbolic value-added model than with those of the contextual-choice model or of delay-reduction theory. (C) 2004 Elsevier B.V. All rights reserved.

  • Two experiments with pigeons used concurrent-chain procedures with variable-interval schedules as initial links and different delays to food as terminal links. Two schedules were present in all sessions, but a 3rd schedule was alternately present and absent in successive sessions. When the 3rd schedule delivered food with no terminal-link delay, the presence of this schedule led to an increase in preference for the schedule with the shorter terminal link of the 2 unchanged schedules. When the terminal-link delay for the 3rd schedule was 30 s, the presence of this schedule led to a decrease in preference for the schedule with the shorter terminal link of the 2 unchanged schedules. These results are inconsistent with the predictions of R. Grace's (1994) contextual-choice model, but they are consistent with 2 other mathematical models: delay-reduction theory and the hyperbolic value-added model.

  • Three mathematical models of choice were compared in their ability to predict the results from a wide variety of experiments with animal subjects: the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
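Of the three models compared in these abstracts, delay-reduction theory has the simplest core rule: a terminal-link stimulus is valued by how much its onset reduces the expected time to food. The sketch below implements only that core choice rule; the full Squires and Fantino (1971) model also weights the alternatives by their initial-link reinforcement rates, and the numbers used here are illustrative.

```python
def delay_reduction_choice(T, t_left, t_right):
    """Predicted proportion of initial-link responses to the left key
    under a simplified delay-reduction rule.

    T       -- overall mean time to food from the onset of the initial links
    t_left  -- terminal-link delay on the left key
    t_right -- terminal-link delay on the right key
    """
    dr_left = T - t_left    # delay reduction signaled by the left terminal link
    dr_right = T - t_right  # delay reduction signaled by the right terminal link
    if dr_left <= 0:
        return 0.0          # left terminal link signals no improvement
    if dr_right <= 0:
        return 1.0          # right terminal link signals no improvement
    return dr_left / (dr_left + dr_right)
```

For example, with an overall mean time to food of 30 s and terminal-link delays of 2 s and 12 s, the rule predicts a moderate (roughly 61%) preference for the shorter terminal link, rather than exclusive choice.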

  • Savastano and Fantino (1996) reported that in concurrent-chains schedules, initial-link choice proportions remained constant as terminal-link durations increased as long as the subtractive difference between the two terminal-link schedules remained constant. Two experiments with pigeons were conducted to examine this constant-difference effect. Both experiments used equal variable-interval schedules as initial links. The terminal links were fixed delays to reinforcement in Experiment 1 and variable delays to reinforcement in Experiment 2. The durations of the terminal links were varied across conditions, but the difference between pairs of terminal links was always 10 s. In both experiments, preference for the shorter terminal link became less extreme as terminal-link durations increased, so a constant-difference effect was not found. It is argued, however, that this choice situation does not provide clear evidence for or against delay-reduction theory versus other theories of choice.

  • Experiments with pigeons and rats on concurrent-chains schedules examined a paradoxical effect reported by R. A. Preston and E. Fantino (1991). One schedule in the concurrent chain had a variable-interval (VI) 60-s initial link, and its terminal link was a 10-s delay to food. The other schedule had an initial link that ranged from VI 60 s to VI 2 s, and its terminal link was a 20-s delay to food. The paradoxical effect (a decrease in preference for the 20-s delay as its initial link was shortened) was found in some conditions but not in others. An analysis of response-reinforcer delays suggested that the paradoxical effect occurred in conditions in which responding on the short VI schedule almost always led to the 20-s delay, eliminating the possibility of switching to the alternative with the shorter delay. Copyright 2005 APA.

  • Pigeons responded on concurrent-chain schedules with variable-interval initial links and equal delays as terminal links. The terminal-link delays were 1 sec in some conditions and 20 sec in other conditions. The percentages of reinforcers delivered for responses on the left key were 10%, 30%, 70%, or 90%, and this percentage was switched every five to nine sessions. The rate of change in the pigeons' response percentages after a switch was the same whether the terminal-link delays were 1 sec or 20 sec. Analysis of the effects of individual reinforcers showed that after a response on one key had been reinforced, response percentages on that key were higher for at least the next 100 responses. Small effects of individual reinforcers were evident after eight or nine additional reinforcers had been delivered. The effects of individual reinforcers were about equally large during times of transition and during periods in which overall response percentages were relatively stable.

  • Pigeons responded on a concurrent-chains schedule with two equal variable-interval (VI) schedules as initial links and delays to food of 3 and 12 s as the two terminal links. In even-numbered sessions, no other reinforcement schedule was present, and all pigeons showed a strong preference for the response key that had the shorter, 3-s terminal-link delay. In odd-numbered sessions, the initial links were interrupted at random times by one of three different types of events. When the interruptions were immediate food deliveries, the response percentages increased on the key that had the 3-s delay. When the interruptions were 30-s delays followed by food, the response percentages remained approximately unchanged. When the interruptions were 30-s delays with no food, the response percentages decreased. The results were used to compare the predictions of different mathematical models of concurrent-chains performance. The results favored models that assume that preference is determined by the relative amount of advantage that is gained when a terminal link is entered. (C) 2003 Elsevier B.V. All rights reserved.

  • This research on decision-making heuristics is similar to research on animal learning in at least two ways. First, optimality modeling has not proven to be very useful for either research area. Second, both of these research areas seek to find general principles (or heuristics) that are applicable to different species in different settings. However, the basic principles of classical and operant conditioning seem to be more uniform across species and situations, whereas decision-making heuristics can vary for different species and different situations, even for tasks with very similar characteristics.

  • In Experiment 1, pigeons' pecks on a green key led to a 5-s delay with green houselights, and then food was delivered on 20% (or, in other conditions, 50%) of the trials. Pecks on a red key led to an adjusting delay with red houselights, and then food was delivered on every trial. The adjusting delay was used to estimate indifference points: delays at which the two alternatives were chosen about equally often. Varying the presence or absence of green houselights during the delays that preceded possible food deliveries had large effects on choice. In contrast, varying the presence of the green or red houselights in the intertrial intervals had no effects on choice. In Experiment 2, pecks on the green key led to delays of either 5 s or 30 s with green houselights, and then food was delivered on 20% of the trials. Varying the duration of the green houselights on nonreinforced trials had no effect on choice. The results suggest that the green houselights served as a conditioned reinforcer at some times but not at others, depending on whether or not there was a possibility that a primary reinforcer might be delivered. Given this interpretation of what constitutes a conditioned reinforcer, most of the results were consistent with the view that the strength of a conditioned reinforcer is inversely related to its duration.

  • Pigeons pecked on two response keys that delivered reinforcers on a variable-interval schedule. The proportion of reinforcers delivered by one key was constant for a few sessions and then changed, and subjects' choice responses were recorded during these periods of transition. In Experiment 1, response proportions approached a new asymptote slightly more slowly when the switch in reinforcement proportions was more extreme. In Experiment 2, slightly faster transitions were found with higher overall rates of reinforcement. The results from the first session after a switch in the reinforcement proportions were generally consistent with a mathematical model that assumes that the strength of each response is increased by reinforcement and decreased by nonreinforcement. However, neither this model nor other similar models predicted the "spontaneous recovery" observed in later sessions: At the start of these sessions, response proportions reverted toward their preswitch levels. Computer simulations could mimic the spontaneous recovery by assuming that subjects store separate representations of response strength for each session, which are averaged at the start of each new session.
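The response-strength account in this last abstract can be sketched as a linear-operator update, with a per-session averaging step standing in for the stored-representations assumption used in the simulations. The function names, learning rates, and averaging weight below are illustrative assumptions, not the parameters of the published model.

```python
def update(strength, reinforced, up=0.1, down=0.02):
    """Linear-operator rule: reinforcement moves response strength toward
    1.0; nonreinforcement decays it toward 0.0."""
    if reinforced:
        return strength + up * (1.0 - strength)
    return strength - down * strength

def start_of_session(session_strengths, weight_recent=0.6):
    """Hypothetical averaging of stored per-session strengths: each new
    session begins at a blend of the most recent value and the mean of
    older values, so response proportions revert partway toward their
    preswitch levels (spontaneous recovery)."""
    if len(session_strengths) == 1:
        return session_strengths[0]
    recent = session_strengths[-1]
    older = sum(session_strengths[:-1]) / (len(session_strengths) - 1)
    return weight_recent * recent + (1 - weight_recent) * older
```

For example, if a key's strength was 0.8 before a switch and 0.2 at the end of the first post-switch session, the averaging step starts the next session at 0.44, partway back toward the preswitch level, which is the qualitative signature of the spontaneous recovery described above.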

Last update from database: 3/13/26, 4:15 PM (UTC)