Chapter 13 Glossary

matched-pairs design: a between-subjects experimental design in which participants are paired off by matching them on some variable assumed to be correlated with the dependent variable. Then, for each matched pair, one member is randomly assigned to one treatment condition, whereas the other is assigned to the other treatment condition (or to a control condition). This design usually has more power than a simple experiment (a two-group, independent groups experiment). Note that whereas the simple experiment is often analyzed using an independent groups t test, the matched-pairs design is typically analyzed using the dependent groups t test. (p. 520)
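The random assignment step described in this entry can be sketched in Python. The participant labels and function name here are hypothetical, used only to illustrate the idea:

```python
import random

def assign_pairs(pairs, rng=random):
    """For each matched pair, randomly assign one member to the
    treatment condition and the other to the control condition."""
    assignments = []
    for a, b in pairs:
        # A coin flip decides which pair member gets the treatment
        treated, control = (a, b) if rng.random() < 0.5 else (b, a)
        assignments.append({"treatment": treated, "control": control})
    return assignments

# Hypothetical participants, already paired on a matching variable
pairs = [("P1", "P2"), ("P3", "P4"), ("P5", "P6")]
assignments = assign_pairs(pairs)
```

Because assignment is random within each pair, the two conditions end up matched on the pairing variable.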

 

dependent groups t test (also called within-subjects t test): a statistical test for analyzing matched-pairs designs or within-subjects designs that use only two levels of the treatment. The first step in calculating a dependent groups t test is to pair up scores (in a matched-pairs design, that would be the scores from the same pair of participants; in a within-subjects design, that would be a pair of scores from the same participant). Then, for each pair, you get the difference between pair members by subtracting one member's score from the other's. Next, you get the average of those differences by summing all the differences and dividing by the number of pairs. Finally, you divide that average difference by an error term. Note that this is different from an independent t test: In an independent t test, you would get an average for each group, subtract those averages, and then divide by an error term. Because of this difference, some people call the dependent groups t test the paired t test and call the independent groups t test the unpaired t test.  (p. 527)
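The steps in this entry can be sketched in Python using only the standard library; the data and function name are hypothetical, and the error term used here is the standard error of the mean difference:

```python
import math
import statistics

def dependent_t(scores_a, scores_b):
    """Dependent (paired) t statistic for two lists of paired scores."""
    # Step 1-2: pair up the scores and take the difference within each pair
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    # Step 3: average the differences (divide by the number of pairs)
    mean_diff = sum(diffs) / n
    # Step 4: divide by the error term (standard error of the mean difference)
    se = statistics.stdev(diffs) / math.sqrt(n)
    return mean_diff / se

# Five hypothetical pairs of scores
t = dependent_t([12, 15, 11, 14, 13], [10, 13, 10, 12, 11])  # t = 9.0
```

Note how the calculation never computes a group mean: everything is done on the within-pair differences, which is what distinguishes it from the independent groups t test.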

 

repeated-measures design: see within-subjects design.

 

within-subjects design: an experimental design in which each participant is tested under more than one level of the independent variable. Because each participant is measured more than once (for example, after receiving Treatment A, and again after receiving Treatment B), this design is also called a repeated-measures design. In a within-subjects (repeated-measures) design, a participant may receive Treatment A first, be measured, then receive Treatment B, be measured again, and so on.  (p. 531)

 

randomized within-subjects design: to make sure that not every participant receives the treatments in the same sequence, within-subjects researchers may randomly determine, for each participant, which treatment comes first, which comes second, and so on. In other words, all participants get the same treatments, but they may receive those treatments in different sequences. (p. 539)
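A minimal sketch of this randomization in Python (the treatment labels and function name are hypothetical):

```python
import random

treatments = ["A", "B", "C"]

def randomized_order(treatments, rng=random):
    """Return the same set of treatments in a randomly determined sequence."""
    sequence = list(treatments)  # copy so the original list is untouched
    rng.shuffle(sequence)
    return sequence

# Four participants: everyone gets A, B, and C, but possibly in different sequences
orders = [randomized_order(treatments) for _ in range(4)]
```

Every participant still receives all three treatments; only the sequence varies from participant to participant.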

 

order: the position in a sequence (first, second, third, etc.) in which a treatment level occurs; the trial (first, second, etc.) in which a treatment level occurs. Do not confuse order with sequence. Order (e.g., first versus second treatment) varies within subjects, whereas sequence varies between subjects (e.g., some participants will get one sequence of treatments whereas other participants will get a different sequence of treatments). (p. 533)

 

order effects (trial effects): a major problem with within-subjects designs: the order in which the participant receives a treatment (first, second, etc.) may affect how the participant behaves. For example, if participants score best on whatever treatment comes first, there is an order effect. Order effects may be due to practice effects, fatigue effects, carryover effects, or sensitization. Researchers often try to deal with order effects by randomizing the order of treatments or by counterbalancing. (p. 533)

 

practice effects: improvements in a participant's performance that come from having done the dependent-measure task several times. In a within-subjects design, this improvement might be incorrectly attributed to having received a treatment. (p. 534)

 

fatigue effects: decreased performance on the dependent measure due to being tired or less enthusiastic as the experiment continues. In a within-subjects design, this decrease in performance might be incorrectly attributed to a treatment. Fatigue effects could be considered negative practice effects. (p. 534)

 

carryover effects (also called treatment carryover effects): the effects of a treatment administered earlier in the experiment persist so long that they are present even while participants are receiving additional treatments. Carryover effects create problems for within-subjects designs because you may believe that the participant's behavior is due to the treatment just administered when, in reality, the behavior is due to the lingering effects of a treatment administered some time earlier. (p. 535)

 

sensitization: after getting several different treatments and performing the
dependent-variable task several times, participants in a within-subjects design may realize (become sensitive to) what the hypothesis is. Consequently, a participant in a within-subjects design may behave very differently during the last trial of the experiment (now that the participant knows what the experiment is about) than the participant did in the early trials (when the participant did not know what the manipulation was). (p. 535)

 

counterbalanced within-subjects designs: designs that give participants the treatments in systematically different sequences. These designs balance out routine order effects. These designs also allow researchers to estimate the effect of order/trial (e.g., whether participants do better on the first trial than on the second trial) as well as sequence effects (e.g., whether groups getting Treatment A first and Treatment B second do better than groups getting Treatment B first and Treatment A second). (p. 544)
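One simple form of counterbalancing, complete counterbalancing, uses every possible sequence of the treatments. A minimal Python sketch (the treatment labels and function name are hypothetical):

```python
from itertools import permutations

def counterbalanced_sequences(treatments):
    """Complete counterbalancing: every possible sequence of the treatments,
    so each sequence can be given to a different group of participants."""
    return [list(p) for p in permutations(treatments)]

# With two treatments, half the participants would get A then B,
# and the other half B then A
sequences = counterbalanced_sequences(["A", "B"])  # [['A', 'B'], ['B', 'A']]
```

Note that the number of sequences grows factorially (two treatments give 2 sequences, three give 6), which is one reason researchers sometimes use only a subset of the possible sequences.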

 

sequence effects (do not confuse with order effects): if participants who receive one sequence of treatments score differently than those participants who receive the treatments in a different sequence, there is a sequence effect. For example, the group getting sleep and then caffeine might score better than the group getting caffeine and then sleep. Similarly, it is often assumed that students taking the sequence Chemistry 1 then Chemistry 2 will do better than if they had taken Chemistry 2 then Chemistry 1. (p. 552)

 

mixed design: a design that has at least one within-subjects factor (all participants get more than one level of that factor) and at least one between-subjects factor (some participants get one level of that factor; other participants get another level). Counterbalanced designs are a type of mixed design: the treatment variable is a within-subjects variable because all participants get its different levels, and the sequence of treatments is a between-subjects variable because some groups get one sequence of treatments whereas other groups get different sequences. (p. 553)

 

power: the ability to find statistically significant results when variables are related. Within-subjects designs are popular because they are powerful. They are powerful because they (a) eliminate between-subjects error (differences among participants) and (b) obtain more than one observation per participant. (p. 523)