  • In Experiment 1, pigeons' pecks on a green key led to a 5-s delay with green houselights, and then food was delivered on 20% (or, in other conditions, 50%) of the trials. Pecks on a red key led to an adjusting delay with red houselights, and then food was delivered on every trial. The adjusting delay was used to estimate indifference points: delays at which the two alternatives were chosen about equally often. Varying the presence or absence of green houselights during the delays that preceded possible food deliveries had large effects on choice. In contrast, varying the presence of the green or red houselights in the intertrial intervals had no effects on choice. In Experiment 2, pecks on the green key led to delays of either 5 s or 30 s with green houselights, and then food was delivered on 20% of the trials. Varying the duration of the green houselights on nonreinforced trials had no effect on choice. The results suggest that the green houselights served as a conditioned reinforcer at some times but not at others, depending on whether or not there was a possibility that a primary reinforcer might be delivered. Given this interpretation of what constitutes a conditioned reinforcer, most of the results were consistent with the view that the strength of a conditioned reinforcer is inversely related to its duration.
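The adjusting-delay titration described in this abstract can be sketched in code. This is a minimal illustrative simulation, not the authors' actual procedure: the step size, the hyperbolic value function, the parameter values, and the deterministic "subject" that simply picks the higher-valued alternative are all assumptions made for the example.

```python
# Hedged sketch of an adjusting-delay titration loop.
# All parameter values are illustrative, not taken from the experiments above.

def value(amount, delay, k=0.2):
    """Hyperbolic discounting: value declines hyperbolically with delay."""
    return amount / (1 + k * delay)

def estimate_indifference(standard_amount=2.0, standard_delay=10.0,
                          adjusting_amount=4.0, step=1.0, trials=200):
    """Titrate the adjusting delay: lengthen it after choices of the
    adjusting alternative, shorten it after choices of the standard,
    so the delay settles near the indifference point."""
    adj_delay = 0.0
    for _ in range(trials):
        if value(adjusting_amount, adj_delay) > value(standard_amount, standard_delay):
            adj_delay += step  # adjusting side preferred -> make it worse
        else:
            adj_delay = max(0.0, adj_delay - step)  # make it better
    return adj_delay

print(estimate_indifference())
```

With these made-up parameters the delay oscillates near 25 s, the point at which the two alternatives have equal hyperbolic value; real procedures use blocks of forced and free-choice trials rather than a deterministic chooser.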

  • Two experiments studied the phenomenon of procrastination, in which pigeons chose a larger, more delayed response requirement over a smaller, more immediate response requirement. The response requirements were fixed-interval schedules that did not lead to an immediate food reinforcer, but that interrupted a 55-s period in which food was delivered at random times. The experiments used an adjusting-delay procedure in which the delay to the start of one fixed-interval requirement was varied over trials to estimate an indifference point: a delay at which the two alternatives were chosen about equally often. Experiment 1 found that as the delay to a shorter fixed-interval requirement was increased, the adjusting delay to a longer fixed-interval requirement also increased, and the rate of increase depended on the duration of the longer fixed-interval requirement. Experiment 2 found a strong preference for a fixed delay of 10 s to the start of a fixed-interval requirement compared to a mixed delay of either 0 or 20 s. The results help to distinguish among different equations that might describe the decreasing effectiveness of a response requirement with increasing delay, and they suggest that delayed reinforcers and delayed response requirements have symmetrical but opposite effects on choice.

  • For several decades, choice has been the focus of considerable research by those who study operant behavior. This is not surprising, because the topics of choice and operant behavior are intimately intertwined. In everyday life, people can choose among a large, almost infinite set of operant behaviors, and they can choose not only which behaviors to perform, but under what conditions, at what rate, and for how long. Because choice is an essential part of human (and animal) life, it has been studied with great interest not only by behavioral psychologists, but also by decision theorists, economists, political scientists, biologists, and others. The research methods used in these different disciplines vary widely, and a review of all of the different methods for studying choice is well beyond the scope and purpose of this chapter. Instead, the chapter will focus on the techniques most frequently used in operant research—techniques that involve single-subject designs, that allow precise control of the reinforcement contingencies, and that produce (in most cases) large and clear effects on each subject’s behavior.

  • In two experiments with pigeons, a single variable-interval schedule assigned reinforcers to two response keys on a percentage basis. The percentage of reinforcers assigned to each key was changed every few sessions, and subjects' choice responses were recorded before and after each change. In Experiment 1, the overall rate of reinforcement was varied across conditions. The pigeons' choice responses adapted more quickly to a change in the reinforcement percentages when the overall reinforcement rates were higher, but acquisition rates varied by only about a factor of 3, whereas reinforcement rates were varied by about a factor of 9. In Experiment 2, the reinforcement percentages changed about every 8 sessions in Phases 1 and 3, but every 1 or 2 sessions in Phase 2. Pigeons' choice responses adapted to a change in reinforcement percentages more quickly in Phase 2 than in Phases 1 and 3. The results from both experiments pose difficulties for several prominent models of transitional choice behavior. The results suggest that each successive reinforcer has more impact on a subject's subsequent choice behavior when the overall rate of reinforcement is lower and when the reinforcement contingencies have changed frequently in the recent past.

  • This 2-year longitudinal study examined the affective nature of communication between mothers and adolescents from early to mid-adolescence. Eleven- to 16-year-old adolescents and their mothers were videotaped while engaging in conversations about everyday topics, dating and sexuality, and conflicts. Nonverbal displays of affiliation, embarrassment, and contempt were found to be fairly stable across conversations, across members of the same dyad, and across time for the mothers. However, there were some effects of conversational topic in that adolescents displayed less affiliation in the conflict conversation than in the other conversations during the 1st session. In addition, boys displayed more contempt when talking about dating and sexuality than about conflicts. Over the 2-year period, the level of affiliation decreased for adolescents, and maternal conversational dominance increased. In all conversations and at both time periods, adolescents displayed more embarrassment and contempt and less affiliation than did mothers. Both maternal and adolescent levels of affiliation during the conversations in the 1st session predicted degree of satisfaction with certain family characteristics expressed in the 2nd session. Copyright © 1997, Lawrence Erlbaum Associates, Inc.

  • The hyperbolic-decay model is a mathematical expression of the relation between delay and reinforcer value. The model has been used to predict choices in discrete-trial experiments on delay-amount tradeoffs, on preference for variable over fixed delays, and on probabilistic reinforcement. Experiments manipulating the presence or absence of conditioned reinforcers on trials that end without primary reinforcement have provided evidence that the hyperbolic-decay model actually predicts the strength of conditioned reinforcers rather than the strength of delayed primary reinforcers. The model states that the strength of a conditioned reinforcer is inversely related to the time spent in its presence before a primary reinforcer is delivered. A possible way to integrate the model with Grace's (1994) contextual-choice model for concurrent-chain schedules is presented. Also discussed are unresolved difficulties in determining exactly when a stimulus will or will not serve as a conditioned reinforcer.
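The hyperbolic-decay model summarized above is usually written as V = A / (1 + KD), where V is the value of a reinforcer of amount A delayed by D seconds and K is a fitted decay parameter. A minimal sketch of the equation and one of the predictions mentioned in these abstracts, preference for variable over fixed delays (the parameter values here are illustrative, not fitted estimates):

```python
# The hyperbolic-decay model: V = A / (1 + K*D).
# K here is an arbitrary illustrative value, not a fitted estimate.

def hyperbolic_value(amount, delay, k=1.0):
    """Value of a reinforcer of size `amount` delayed by `delay` seconds."""
    return amount / (1 + k * delay)

# Value drops steeply at short delays, then more gradually:
for d in (0, 1, 5, 20):
    print(d, round(hyperbolic_value(1.0, d), 3))

# Because the decay curve is convex, a 50/50 mix of 10-s and 30-s delays is
# worth more than a fixed 20-s delay with the same mean -- the model's account
# of preference for variable over fixed delays:
mixed = (hyperbolic_value(1.0, 10) + hyperbolic_value(1.0, 30)) / 2
fixed = hyperbolic_value(1.0, 20)
print(mixed > fixed)  # True
```

The same convexity argument is what makes the model's predictions for probabilistic reinforcement work, once a probabilistic reinforcer is treated as a reinforcer delivered after a variable delay.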

  • A discrete-trials adjusting-delay procedure was used to investigate the conditions under which pigeons might show a preference for partial reinforcement over 100% reinforcement, an effect reported in a number of previous experiments. A peck on a red key always led to a delay with red houselights and then food. In each condition, the duration of the red-houselight delay was adjusted to estimate an indifference point. In 100% reinforcement conditions, a peck on a green key always led to a delay with green houselights and then food. In partial-reinforcement conditions, a peck on the green key led either to the green houselights and food or to white houselights and no food. In some phases of the experiment, statistically significant preference for partial reinforcement over 100% reinforcement was found, but this effect was observed in only about half of the pigeons. The effect was largely eliminated when variability in the delay stimulus colors was equated for 50% reinforcement conditions and 100% reinforcement conditions. Idiosyncratic preferences for certain colors or for stimulus variability may be at least partially responsible for the effect.

  • The current study examined the nature and style of mother-adolescent conversations and how these conversations differ by subject matter and by dyadic and individual differences. Thirty-one mother-adolescent dyads (17 boys, 14 girls) with a child between the ages of 11 and 14 had a nonstructured conversation and conversations about conflict and sexuality. They also completed questionnaires on beliefs about acquired immunodeficiency syndrome (AIDS). Conversations were measured for turn taking, total number of words, and conversational dominance, as well as nonverbal measures of affiliation, shame, and contempt. Conversations about sexuality involved less turn taking, fewer words, and more mother dominance than nonstructured conversations. Conversations about conflicts involved less turn taking but more words than nonstructured conversations. Some gender and age differences were found. More interactive conflict conversations contained higher levels of affiliation and lower levels of child shame than conversations with fewer turns or higher mother dominance. In addition, children in more interactive dyads possessed a larger percentage of their mother's AIDS knowledge and worried about AIDS a moderate amount.

  • Two experiments investigated how individuals use explicit memory cues that designate different probabilities of test. As in typical directed forgetting studies, subjects received words explicitly cued as having either a 0% or a 100% chance of being on a subsequent memory test (i.e. forget and remember cues, respectively). In addition, some words were explicitly cued as having the potential to be either forgotten or remembered (i.e. a 50% cue). Recall of 50% words was between that of 0% and 100% words. In addition, the presence of 50% words lowered recall of the 100% words compared to that of a control group that did not receive the 50% words, but received the same number of 100% words. A think-aloud task indicated that these results were due to the 50% words being treated like either 100% or 0% words at encoding. The results are discussed in terms of the effect of different probabilities of test on the strategic processing and representation of information.

  • Pigeons' responses on two keys were recorded before and after the percentage of reinforcers delivered by each key was changed. In each condition of Experiment 1, the reinforcement percentage for one key was 50% for several sessions, then either 70% or 90% for one, two, or three sessions, and then 50% for another few sessions. At the start of the second and third sessions after a change in reinforcement percentages, choice percentages often exhibited spontaneous recovery: a reversion to the response percentages of earlier sessions. The spontaneous recovery consisted of a shift toward a more extreme response percentage in some cases and toward a less extreme response percentage in other cases, depending on what reinforcement percentages were previously in effect. In Experiment 2, some conditions included a 3-day rest period before a change in reinforcement percentages, and other conditions included no such rest days. Slightly less spontaneous recovery was observed in conditions with the rest periods, suggesting that the influence of prior sessions diminished with the passage of time. The results are consistent with the view that choice behavior at the start of a new session is based on a weighted average of the events of the past several sessions.

  • STIR (Success Through Individual Recreation) is examined to determine the benefits of individualized recreational activities geared towards the lower functioning...

  • In three experiments, pigeons chose between alternatives that required the completion of a small ratio schedule early in the trial or a larger ratio schedule later in the trial. Completion of the ratio requirement did not lead to an immediate reinforcer, but simply allowed the events of the trial to continue. In Experiment 1, the ratio requirements interrupted periods in which food was delivered on a variable-time schedule. In Experiments 2 and 3, each ratio requirement was preceded and followed by a delay, and only one reinforcer was delivered, at the end of each trial. Two of the experiments used an adjusting-ratio procedure in which the ratio requirement was increased and decreased over trials so as to estimate an indifference point: a ratio size at which the two alternatives were chosen about equally often. These experiments found clear evidence for “procrastination”: the choice of a larger but more delayed response requirement. In some cases, subjects chose the more delayed ratio schedule even when it was larger than the more immediate alternative by a factor of four or more. The results suggest that as the delay to the start of a ratio requirement is increased, it has progressively less effect on choice behavior, in much the same way that delaying a positive reinforcer reduces its effect on choice.

  • Two experiments investigated how college students answered direction-giving questions when a confederate asked for directions to a destination on a university campus. The experiments applied the QUEST model (Graesser and Franklin, 1990) to direction giving, emphasizing the pragmatic component of the model that focuses on establishing common ground and dealing with the questioner's goals. The two experiments had different articulations of the direction-giving question (i.e., “How do you get to destination X?” versus “Where is destination X?”), and a different destination. The answers generated by subjects supported both aspects of the pragmatic component.

  • In an adjusting-delay choice procedure, pigeons could peck on either a red key or a green key. A peck on the red key always led to a delay associated with red houselights and then food. The delay was adjusted over trials to estimate an indifference point: a delay at which the two keys were chosen about equally often. In some conditions, a peck on the green key led to food on all trials after delays of either 10 s or 30 s, and green houselights were lit during the delays. In other conditions, food was presented on only half of the green-key trials. If the green houselights continued to occur on both reinforcement and nonreinforcement trials, preference for the green key always decreased. Preference for the green key also decreased if half of the trials had 30-s houselights followed by food and the other half had no green houselights and no food. However, preference for the green key actually increased if half of the trials had 10-s green houselights followed by food and the other half had no green houselights and no food. The latter condition therefore demonstrated a case in which preference for an alternative increased when food was removed from half of the trials. The results suggest that the red and green houselights served as conditioned reinforcers. A hyperbolic decay model (Mazur, 1989) provided good predictions for all conditions by assuming that the strength of a conditioned reinforcer is inversely related to the total time spent in its presence before food is delivered.

  • Pigeons pecked on two response keys that delivered reinforcers on a variable-interval schedule. The proportion of reinforcers delivered by one key was constant for a few sessions and then changed, and subjects' choice responses were recorded during these periods of transition. In Experiment 1, response proportions approached a new asymptote slightly more slowly when the switch in reinforcement proportions was more extreme. In Experiment 2, slightly faster transitions were found with higher overall rates of reinforcement. The results from the first session after a switch in the reinforcement proportions were generally consistent with a mathematical model that assumes that the strength of each response is increased by reinforcement and decreased by nonreinforcement. However, neither this model nor other similar models predicted the “spontaneous recovery” observed in later sessions: At the start of these sessions, response proportions reverted toward their preswitch levels. Computer simulations could mimic the spontaneous recovery by assuming that subjects store separate representations of response strength for each session, which are averaged at the start of each new session.
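The session-averaging idea in the last sentence of this abstract can be illustrated with a small sketch. This is only a hedged illustration of the averaging mechanism: the weights, the stored strength values, and the recency ordering are invented for the example and are not the parameters of the simulations reported above.

```python
# Hedged sketch: response strength at the start of a session is taken to be a
# weighted average of representations stored from prior sessions, so a recent
# contingency change is partly "undone" at session start (spontaneous recovery).
# Weights and stored values are illustrative assumptions.

def session_start_strength(stored, weights):
    """Weighted average of per-session stored strengths (most recent first)."""
    total = sum(w * s for w, s in zip(weights, stored))
    return total / sum(weights[:len(stored)])

# Choice proportion for one key was ~0.5 for several sessions, then shifted
# toward 0.8 in the most recent session after the contingency changed:
stored = [0.8, 0.5, 0.5, 0.5]   # most recent session first
weights = [0.4, 0.3, 0.2, 0.1]  # recency-weighted averaging

print(round(session_start_strength(stored, weights), 2))
```

The averaged value falls between 0.5 and 0.8, i.e., the new session begins with behavior pulled back toward the preswitch level, which is the signature of spontaneous recovery that the simulations reproduced.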

  • In three experiments, pigeons chose between a small amount of food delivered after a short delay and a larger amount delivered after a longer delay. A discrete-trial adjusting-delay procedure was used to estimate indifference points: pairs of delay-amount combinations that were chosen about equally often. In Experiment 1, when additional reinforcers were available during intertrial intervals on a variable-interval schedule, preference for the smaller, more immediate reinforcer increased. Experiment 2 found that this shift in preference occurred partly because the variable-interval schedule started sooner after the smaller, more immediate reinforcer, but there was still a small shift in preference when the durations and temporal locations of the variable-interval schedules were identical for both alternatives. Experiment 3 found greater increases in preference for the smaller, more immediate reinforcer with a variable-interval 15-s schedule than with a variable-interval 90-s schedule. The results were generally consistent with a model that states that the impact of any event that follows a choice response declines according to a hyperbolic function with increasing time since the moment of choice.

  • Earlier studies have demonstrated a significant relation between scores on the Physical Anhedonia Scale-but not on the Perceptual Aberration Scale-and premorbid social adjustment in schizophrenics (Chapman, Chapman, & Raulin, 1976, 1978; Schuck, Leventhal, Rothstein, & Irizarry, 1984). A similar relation between scores on these 2 scales and interpersonal competence in college students has also been noted (Beckfield, 1985; Haberman, Chapman, Numbers, & McFall, 1979; Numbers & Chapman, 1982). The present study extends this work by examining the relation of premorbid adjustment to scores on these 2 scales among young, nonpsychotic psychiatric inpatients. Consistent with the earlier findings, anhedonic Ss had poorer premorbid social competence when compared with nonanhedonic Ss, whereas no relation was found between scores on perceptual aberration and premorbid social competence.

  • In a discrete-trials procedure with pigeons, a response on a green key led to a 4-s delay (during which green houselights were lit) and then a reinforcer might or might not be delivered. A response on a red key led to a delay of adjustable duration (during which red houselights were lit) and then a certain reinforcer. The delay was adjusted so as to estimate an indifference point: a duration for which the two alternatives were equally preferred. Once the green key was chosen, a subject had to continue to respond on the green key until a reinforcer was delivered. Each response on the green key, plus the 4-s delay that followed every response, was called one “link” of the green-key schedule. Subjects showed much greater preference for the green key when the number of links before reinforcement was variable (averaging four) than when it was fixed (always exactly four). These findings are consistent with the view that probabilistic reinforcers are analogous to reinforcers delivered after variable delays. When successive links were separated by 4-s or 8-s “interlink intervals” with white houselights, preference for the probabilistic alternative decreased somewhat for 2 subjects but was unaffected for the other 2 subjects. When the interlink intervals had the same green houselights that were present during the 4-s delays, preference for the green key decreased substantially for all subjects. These results provided mixed support for the view that preference for a probabilistic reinforcer is inversely related to the duration of conditioned reinforcers that precede the delivery of food.

  • Twenty acquisition curves were obtained from each of 8 pigeons in a free-operant choice procedure. Every condition began with a phase in which two response keys had equal probabilities of reinforcement, and, as a result, subjects' responses were divided fairly evenly between the two keys. This was followed by a phase in which one key had a higher probability of reinforcement than the other, and the development of preference was observed. In all but a few cases, response proportions increased for the key with the higher probability of reinforcement. In most conditions, the two probabilities differed by .06, but the actual probabilities varied (from .16 and .10 in one condition to .07 and .01 in another). Development of preference for the key with the higher probability of reinforcement was slower when the ratio of the two reinforcement probabilities was small (.16/.10) than when it was large (.07/.01). This finding is inconsistent with the predictions of several different quantitative models of acquisition, including the kinetic model (Myerson & Miezin, 1980) and the ratio-invariance model (Horner & Staddon, 1987). However, the finding is consistent with a hypothesis based on Weber's law, which states that the two alternatives are more discriminable when the ratio of their reinforcement probabilities is larger, and, as a result, the acquisition of preference is faster.

Last update from database: 5/1/26, 4:15 PM (UTC)
