Chapter 14 Glossary

temporal precedence: changes in the suspected cause occur before changes in behavior. Because causes come before effects, researchers trying to establish causality must establish temporal precedence. Experimental designs establish temporal precedence by manipulating the treatment variable. (p. 429)

 

covariation: changes in the treatment are accompanied by changes in the behavior. To establish causality, you must establish covariation. (p. 429)
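
One common way researchers quantify covariation is with a correlation coefficient. The sketch below is a hypothetical illustration (the data are simulated, not from the text): a behavior score rises with treatment intensity, and the Pearson correlation summarizes how strongly the two vary together.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: 50 sessions with varying treatment intensity.
    treatment = rng.uniform(0, 10, size=50)
    # Behavior tends to increase with treatment, plus random noise.
    behavior = 2.0 * treatment + rng.normal(0, 3, size=50)

    # Pearson r quantifies how strongly changes in treatment are
    # accompanied by changes in behavior (covariation).
    r = np.corrcoef(treatment, behavior)[0, 1]
    print(f"correlation between treatment and behavior: r = {r:.2f}")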

 

spurious: when the covariation observed between two variables is not due to the variables influencing each other, but because both are being influenced by some third variable. For example, the relationship between ice cream sales and assaults in New York is spurious—not because it does not exist (it does!)—but because ice cream does not cause increased assaults, and assaults do not cause increased ice cream sales. Instead, high temperatures probably cause both increased assaults and ice cream sales. Beware of spuriousness whenever you look at research that does not use an experimental design. (p. 430)
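
As a hypothetical illustration of spuriousness (the numbers below are simulated, not real New York data), the following sketch lets temperature drive both ice cream sales and assaults. The raw correlation between sales and assaults is sizable, but after statistically removing temperature's influence from each variable, the leftover (residual) correlation is near zero.

    import numpy as np

    rng = np.random.default_rng(1)

    # A "third variable" (daily temperature) causes both outcomes.
    temperature = rng.uniform(10, 35, size=365)             # degrees C
    ice_cream = 50 + 8 * temperature + rng.normal(0, 20, 365)
    assaults = 5 + 0.9 * temperature + rng.normal(0, 4, 365)

    # Raw covariation looks impressive...
    raw_r = np.corrcoef(ice_cream, assaults)[0, 1]

    # ...but it vanishes once temperature is controlled: correlate the
    # residuals left after regressing each variable on temperature.
    def residuals(y, x):
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    partial_r = np.corrcoef(residuals(ice_cream, temperature),
                            residuals(assaults, temperature))[0, 1]

    print(f"raw r = {raw_r:.2f}, r controlling for temperature = {partial_r:.2f}")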

 

single-n designs: designs that try to establish causality by studying a single participant and arguing that the covariation between treatment and changes in behavior could not be due to anything other than the treatment. A key to this approach is preventing factors other than the treatment from varying. (p. 431)

 

baseline: the participant’s behavior on the task prior to receiving the treatment. (p. 433)

 

stable baseline: when the participant’s behavior, prior to receiving the treatment, is consistent. To establish a stable baseline, the researcher may have to keep many factors constant. If the researcher establishes a stable baseline and then is able to change the behavior after administering the treatment, the researcher can make the case that the treatment caused the effect. (p. 433)
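
A minimal sketch of what "stable" can mean in practice, using made-up baseline data: little session-to-session variability and no systematic trend. The checks below are illustrative rules of thumb, not the textbook's criteria.

    import numpy as np

    # Hypothetical baseline: number of target responses in 10 sessions.
    baseline = np.array([12, 11, 13, 12, 12, 11, 13, 12, 12, 13])

    sessions = np.arange(len(baseline))
    slope, _ = np.polyfit(sessions, baseline, 1)   # trend per session
    spread = baseline.std(ddof=1)                  # session-to-session variability

    # Illustrative check: a baseline is "stable" when spread is small
    # relative to its mean and there is no meaningful upward/downward trend.
    print(f"mean = {baseline.mean():.1f}, SD = {spread:.2f}, trend = {slope:+.2f} per session")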

 

A–B design: the simplest single-n design, consisting of measuring the participant’s behavior at baseline (A) and then measuring the participant after the participant has received the treatment (B). (p. 433)

 

A–B–A design (reversal design): a single-n design in which baseline measurements are made of the target behavior (A), then a treatment is administered and the participant’s behavior is recorded (B), and then the treatment is removed and the target behavior is measured again (A). The A–B–A design makes a more convincing case for the treatment’s effect than the A–B design. (p. 436)
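
A hedged sketch of the A–B–A logic with simulated data: behavior sits near baseline during the first A phase, shifts while the treatment is in place (B), and returns toward baseline when the treatment is withdrawn (second A).

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated sessions for one participant (responses per session).
    a1 = rng.normal(10, 1, 8)   # A: baseline
    b = rng.normal(18, 1, 8)    # B: treatment in effect
    a2 = rng.normal(10, 1, 8)   # A: treatment withdrawn

    for label, phase in [("A (baseline)", a1), ("B (treatment)", b), ("A (withdrawal)", a2)]:
        print(f"{label:15s} mean = {phase.mean():5.1f}")

    # The case for causality rests on behavior rising only in B and
    # reverting in the second A; failure to revert suggests carryover.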

 

carryover effects: the effects of a treatment condition persist into later conditions. Because of carryover, investigators using an A–B–A design frequently find that the participant’s behavior does not return to the original baseline. The possibilities for carryover effects increase dramatically when you use more levels of the independent variable and/or when you use more than one independent variable. Because carryover effects are a serious concern, many single-n researchers minimize carryover’s complications by doing experiments that use only two levels of a single independent variable. (p. 437)

 

multiple-baseline design: a single-n design in which the researcher studies several behaviors at a time. The researcher collects a baseline on these different behaviors. The researcher then introduces a treatment to try to modify one of the behaviors. The researcher hopes that the treatment will change the selected behavior, but that the other behaviors will stay at baseline. Next, the researcher tries to modify the second behavior and so on. For example, a manager might collect baseline data on employee absenteeism, tardiness, and cleanliness. Then, the manager would reward cleanliness while continuing to collect data on all three variables. Then, the manager would reward punctuality, and so on. (p. 439)
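
The sketch below simulates the manager example in miniature (all numbers are invented, and the behaviors are recoded so that higher scores mean improvement): three behaviors are tracked throughout, and each behavior improves only after its own intervention begins, which is the pattern a multiple-baseline design looks for.

    import numpy as np

    rng = np.random.default_rng(3)
    weeks = 12
    # Staggered start of the reward program for each behavior (week index).
    starts = {"cleanliness": 4, "punctuality": 7, "attendance": 10}

    for behavior, start in starts.items():
        scores = rng.normal(50, 3, weeks)          # baseline level
        scores[start:] += 20                       # improvement once rewarded
        pre, post = scores[:start].mean(), scores[start:].mean()
        print(f"{behavior:12s} baseline mean = {pre:5.1f}, "
              f"post-intervention mean = {post:5.1f} (intervention at week {start})")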

 

history: events in the environment—other than the treatment—that have changed. Differences between conditions that are believed to be due to treatment may sometimes be due to history. (p. 445)

 

instrumentation: differences between conditions that are due to differences in how the conditions were measured. If, for example, the actual measurement instrument used in the pretest was different from the measure used in the posttest (or the way the instrument was administered changed from pretest to posttest), you should be concerned about instrumentation. (p. 445)

 

testing: participants score differently on the posttest as a result of what they learned from taking the pretest. Practice effects could be considered a type of testing effect. (p. 445)

 

maturation: changes in the participant that naturally occur over time. Physiological changes such as fatigue, growth, and development are common sources of maturation. (p. 445)

 

mortality (attrition): differences between conditions are due to participants dropping out of the study. (p. 445)

 

selection: treatment and no-treatment groups were different at the end of the study because the groups differed before the treatment was administered. (p. 455)

 

selection-maturation interaction: treatment and no-treatment groups, although similar at one point, would have grown apart (developed differently) even if no treatment had been administered. That is, the effect of maturation is different for one group than another. (p. 445)

 

regression toward the mean (also known as regression and statistical regression): one reason we don’t know an individual’s true score on a variable is that measurements are affected by random error. Averaged over all scores, random error has no net effect. That is, although random error pushes some individuals’ scores up higher than their true scores, it pushes other individuals’ scores down. However, if we select only participants with extremely high scores, we are selecting—for the most part—only those pretest scores that random error increased (if it had decreased them, those scores wouldn’t be so high). If we retest these individuals, their retest scores will be lower because random error will probably not push all of their scores up two times in a row. Consequently, these participants’ retest scores will be less extreme. That is, their scores will move back toward more average levels. In other words, their retest scores will regress toward the mean. Likewise, if you select participants based on their having extremely low pretest scores, you will find that those participants’ scores will tend to be not as low on the posttest. (p. 445)
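
The simulation below (hypothetical numbers) makes the argument concrete: observed scores are true scores plus random error, the top 5% of pretest scorers are selected, and on retest, with fresh random error, their average falls back toward the overall mean even though nothing about them changed.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 10_000

    true_score = rng.normal(100, 10, n)           # stable "true" ability
    pretest = true_score + rng.normal(0, 10, n)   # observed = true + random error
    posttest = true_score + rng.normal(0, 10, n)  # new, independent random error

    top = pretest >= np.quantile(pretest, 0.95)   # select extreme pretest scorers

    print(f"overall pretest mean:  {pretest.mean():6.1f}")
    print(f"top group, pretest:    {pretest[top].mean():6.1f}")
    print(f"top group, posttest:   {posttest[top].mean():6.1f}  <- regresses toward the mean")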

 

quasi-experiment: a study that resembles an experiment except that random assignment played no role in determining which participants got which level of treatment. Quasi-experiments have less internal validity than experiments. The time-series design and the nonequivalent control-group design are considered quasi-experimental designs. (p. 447)

 

pretest–posttest design: a before–after design in which each participant is given the pretest, administered the treatment, and then given the posttest. The pretest–posttest design is not vulnerable to selection or selection-maturation interactions. It is, however, extremely vulnerable to history, maturation, and testing effects. The pretest–posttest design does not have enough internal validity to be considered an experimental design. In fact, because of its poor internal validity, it is usually not even considered to be a quasi-experimental design. (p. 449)
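
To see why the design is so vulnerable, the hypothetical simulation below applies no treatment at all, yet maturation alone (everyone improving a bit with time) produces a pre-to-post gain that could be mistaken for a treatment effect.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 40

    pretest = rng.normal(60, 8, n)
    maturation_gain = rng.normal(5, 2, n)        # natural improvement over time
    posttest = pretest + maturation_gain         # note: no treatment was given

    print(f"mean pretest  = {pretest.mean():.1f}")
    print(f"mean posttest = {posttest.mean():.1f}")
    print(f"gain = {(posttest - pretest).mean():.1f}  (looks like an effect, but is maturation)")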

 

time-series design: a quasi-experimental design in which a series of observations is taken from a group of participants over time before and after they receive treatment. Collecting a series of observations on each participant allows the researcher to estimate the extent to which the participant’s behavior tends to change. Thus, the researcher is in a good position to see whether the changes that occur after the treatment is introduced are greater than the changes that normally occur. Because the time-series design is in a better position to estimate the effects of many potential threats to internal validity, it is an improvement over the pretest–posttest design. However, it is still extremely vulnerable to history effects. (p. 449)
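
A minimal interrupted-time-series sketch with simulated observations: fit the pre-treatment trend, project it forward, and ask whether the post-treatment observations exceed what that ordinary drift would predict. The specific analysis shown is one common approach, not necessarily the chapter's.

    import numpy as np

    rng = np.random.default_rng(6)

    weeks = np.arange(24)
    treatment_week = 12

    # Simulated behavior with a mild natural upward drift plus noise,
    # and a jump of 8 points once the treatment is introduced.
    behavior = 40 + 0.5 * weeks + rng.normal(0, 2, weeks.size)
    behavior[treatment_week:] += 8

    # Estimate the pre-treatment trend and project it into the post period.
    pre_w, pre_b = weeks[:treatment_week], behavior[:treatment_week]
    slope, intercept = np.polyfit(pre_w, pre_b, 1)
    projected = slope * weeks[treatment_week:] + intercept

    effect = (behavior[treatment_week:] - projected).mean()
    print(f"average post-treatment lift beyond the pre-treatment trend: {effect:.1f}")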

 

nonequivalent control-group design: a quasi-experimental design that, like a simple experiment, has a treatment group and a no-treatment comparison group. However, unlike the simple experiment, random assignment does not determine which subjects get the treatment and which do not. Having a comparison group is better than not having one, but if the comparison group was not really equivalent to the treatment group at the start of the study, you may be comparing apples with oranges. Thus, you may mistakenly believe that the treatment had an effect when it did not. In other words, selection is a serious threat to the validity of the nonequivalent control-group design. (p. 455).
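
The hypothetical simulation below shows the selection problem: the treatment group starts out higher than the comparison group, so a naive posttest-only comparison exaggerates the treatment's effect, whereas comparing each group's pretest-to-posttest gain (one common, imperfect remedy) comes closer to the truth.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 50
    true_effect = 5

    # Groups were NOT randomly assigned: the treatment group starts higher.
    treat_pre = rng.normal(70, 8, n)
    control_pre = rng.normal(60, 8, n)

    treat_post = treat_pre + true_effect + rng.normal(0, 3, n)
    control_post = control_pre + rng.normal(0, 3, n)

    naive_diff = treat_post.mean() - control_post.mean()   # contaminated by selection
    gain_diff = (treat_post - treat_pre).mean() - (control_post - control_pre).mean()

    print(f"posttest-only difference: {naive_diff:5.1f}  (inflated by selection)")
    print(f"difference in gains:      {gain_diff:5.1f}  (closer to the true effect of {true_effect})")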

 

law of parsimony: the assumption that the simplest explanation is the most likely. Quasi-experimenters often argue that their results are more parsimoniously explained by the treatment having an effect than by a cyclical effect of some other variable or by a selection-maturation interaction. (p. 459)