Chapter 13 Glossary

matched-pairs design: an experimental design in which the participants are paired off by matching them on some variable assumed to be correlated with the dependent variable. Then, for each matched pair, one member is randomly assigned to one treatment condition, whereas the other is assigned to the other treatment condition (or to a control condition). This design usually has more power than a simple between-groups experiment. (p. 394)
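As a sketch of the assignment procedure described above (participant IDs and scores are made up), matched-pair assignment amounts to sorting participants on the matching variable, pairing adjacent participants, and randomly assigning one member of each pair to each condition:

```python
import random

# Hypothetical participants: (id, score on the matching variable,
# e.g., a pretest assumed to correlate with the dependent variable).
participants = [("P1", 88), ("P2", 72), ("P3", 91), ("P4", 75),
                ("P5", 60), ("P6", 64)]

# Pair off participants who have similar matching-variable scores.
ranked = sorted(participants, key=lambda p: p[1])
pairs = [ranked[i:i + 2] for i in range(0, len(ranked), 2)]

# Within each pair, randomly assign one member to each condition.
assignment = {}
for first, second in pairs:
    treated, control = random.sample([first, second], 2)
    assignment[treated[0]] = "treatment"
    assignment[control[0]] = "control"
```

Because the random choice happens within each matched pair, every pair contributes exactly one participant to each condition.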


dependent groups t test (also called within-subjects t test): a statistical test for analyzing matched-pairs designs or within-subjects designs that use only two levels of the treatment. (p. 398)
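To illustrate the computation (with made-up scores and only standard-library Python), the dependent groups t statistic is a one-sample t test on the difference scores of the matched pairs:

```python
import math
from statistics import mean, stdev

# Hypothetical scores for five matched pairs (or five participants
# measured under both levels of the treatment).
a_scores = [10, 12, 14, 16, 11]
b_scores = [12, 16, 17, 21, 12]

# The dependent groups t test works on the difference scores.
d = [b - a for a, b in zip(a_scores, b_scores)]   # [2, 4, 3, 5, 1]

n = len(d)
t = mean(d) / (stdev(d) / math.sqrt(n))   # t = M_d / (SD_d / sqrt(n))
df = n - 1                                # degrees of freedom = pairs - 1

print(f"t({df}) = {t:.3f}")               # t(4) = 4.243
```

Because each difference score removes the variability shared by the two members of a pair, this test is typically more powerful than an independent groups t test on the same scores.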


repeated-measures design: see within-subjects design.


within-subjects design: an experimental design in which each participant is tested under more than one level of the independent variable. Because each participant is measured more than once (for example, after receiving Treatment A, and again after receiving Treatment B), this design is also called a repeated-measures design. In a within-subjects (repeated-measures) design, a participant may receive Treatment A first, Treatment B second, and so on. (p. 401)


randomized within-subjects design: to make sure that not every participant receives a treatment series in the same sequence, within-subjects researchers may randomly determine which treatment comes first, which comes second, and so on. In other words, participants all get the same treatments, but they receive different sequences of treatments. (p. 407)
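A minimal sketch of that randomization (treatment labels and participant count are hypothetical): every participant receives all the treatments, but each participant's sequence is shuffled independently.

```python
import random

treatments = ["A", "B", "C"]
n_participants = 4

# Each participant receives every treatment, in an independently
# randomized sequence.
schedule = {}
for i in range(1, n_participants + 1):
    sequence = treatments[:]      # copy so each shuffle is independent
    random.shuffle(sequence)
    schedule[f"P{i}"] = sequence
```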


order: the position in a sequence (first, second, third, etc.) in which a treatment occurs. (p. 403)


order effects (trial effects): a major problem with within-subjects designs: the position in the sequence (first, second, etc.) at which a participant receives a treatment affects how the participant behaves. Order effects may be due to practice effects, fatigue effects, carryover effects, or sensitization. Do not confuse with sequence effects. (p. 403)


practice effects: after doing the dependent-measure task several times, a participant's performance may improve. In a within-subjects design, this improvement might be incorrectly attributed to having received a treatment. (p. 404)


fatigue effects: decreased performance on the dependent measure due to being tired or less enthusiastic as the experiment continues. In a within-subjects design, this decrease in performance might be incorrectly attributed to a treatment. Fatigue effects could be considered negative practice effects. (p. 404)


carryover effects (also called treatment carryover effects): the effects of a treatment administered earlier in the experiment persist so long that they are present even while participants are receiving additional treatments. Carryover effects create problems for within-subjects designs because you may believe that the participant's behavior is due to the treatment just administered when, in reality, the behavior is due to the lingering effects of a treatment administered some time earlier. (p. 404)


sensitization: after getting several different treatments and performing the dependent-variable task several times, participants in a within-subjects design may realize (become sensitive to) what the hypothesis is. Consequently, a participant in a within-subjects design may behave very differently during the last trial of the experiment (now that the participant knows what the experiment is about) than the participant did in the early trials (when the participant was naïve). (p. 404)


counterbalanced within-subjects designs: designs that give participants the treatments in systematically different sequences. These designs balance out routine order effects. (p. 409)


sequence effects (do not confuse with order effects): if participants who receive one sequence of treatments score differently from participants who receive the treatments in a different sequence, there is a sequence effect. (p. 413)


mixed design: a design that has at least one within-subjects factor and one between-subjects factor. Counterbalanced designs are a type of mixed design. (p. 420)


power: the ability to find statistically significant results when variables are related. Within-subjects designs are popular because of their power. (p. 395)