In Chapter 11, we extend the logic of the two-group experiment to experiments that have more than two groups. The advantages of using more than two groups are:

1. We can compare several treatments at once (e.g., chiropractor vs. physician vs. no-treatment)

2. We can map the functional relationship between our variables, if we

- choose the levels of our treatment variable so that they differ by either a constant amount (1, 2, 3) or a constant proportion (4, 8, 16)

- use a measure that gives us interval or ratio data

3. We can improve our construct validity by

- adding control groups
- not basing our conclusions solely on comparing a treatment group to an empty control group

To analyze the results of the multiple-group experiment, we rely on the same basic logic as when we analyze a two-group experiment. That is, in both cases we compare:

variability (differences) *between* the means of our groups

to

variability (differences) of scores *within* each group

Why do we compare these two variances? Because the group means could differ even if the treatment didn't have an effect. More specifically, the groups could differ because (1) the treatment made them different and/or (2) random error (for example, the groups were probably somewhat different to start with).

To determine whether the difference between the group means that you observed at the end of the experiment is greater than would be expected from random error alone, you need to know how much of a difference you can reasonably expect random error to make.

You can get that estimate by looking at the variability of scores within each group. Scores within each group can't differ from each other because of the treatment--they are all getting the same treatment!

So, you compare the variability between your groups (which is influenced by both random error and any treatment effect)

with

variability within your groups (which is influenced by only one thing--random error)

If the variability between group means is much bigger than the variability within the groups, the results are statistically significant. To find out how much bigger it must be, you need to use the *F* table in Appendix E.

To use the *F* table, you need to know the degrees of freedom for variability between groups (the number of groups minus 1) and the degrees of freedom for the error term (the number of participants minus the number of groups).
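The logic above can be sketched in a few lines of Python. The treatments and scores below are invented purely for illustration; the code computes the between-groups and within-groups sums of squares, their degrees of freedom, and the *F* ratio you would take to the table in Appendix E.

```python
# Hypothetical data for a three-group experiment (all numbers invented):
# pain ratings after each treatment, 5 participants per group.
groups = {
    "chiropractor": [4, 5, 3, 4, 5],
    "physician":    [3, 3, 2, 4, 3],
    "no-treatment": [6, 7, 6, 5, 7],
}

scores = [x for g in groups.values() for x in g]
N = len(scores)                     # total number of participants
k = len(groups)                     # number of groups
grand_mean = sum(scores) / N

# Between-groups variability: how far each group mean is from the grand mean
# (influenced by both any treatment effect and random error).
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())

# Within-groups variability: how far scores stray from their own group's mean
# (random error only -- everyone in a group got the same treatment).
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

df_between = k - 1                  # number of groups minus 1
df_within = N - k                   # participants minus groups (error term)

ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within

print(f"F({df_between}, {df_within}) = {F:.2f}")
# Compare F to the tabled critical value for your df
# (roughly 3.89 for 2 and 12 df at the .05 level).
```

With these made-up numbers, the between-groups mean square is many times larger than the within-groups mean square, so the result would be significant at conventional levels.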

If you get a significant effect, you know that at least two of your groups differ, but you may not know which two. To find out, use a post hoc test, such as the Tukey test (see Appendix E).

If you aren't interested in which particular groups differ from each other but rather you want to map the shape of the functional relationship between amount of the treatment variable and score on your measure, then you will want to use a post hoc trend analysis. You need to know something about post hoc trend analyses so that you can design an experiment that will let you do one (see Box 11-2).

If you've properly designed your experiment, you can do a trend analysis by following the steps described on pages 344-345--or by having someone (a professor, a statistics consultant) help you. However, if you don't design your study properly, nobody will be able to help you!
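When the treatment levels are equally spaced and the group sizes are equal (one reason the design matters so much), linear and quadratic trends can be tested with orthogonal polynomial contrasts. The dose-response data below are invented for illustration; the coefficient sets (-1, 0, 1) and (1, -2, 1) are the standard ones for three equally spaced levels.

```python
# Hypothetical experiment with three equally spaced doses (0, 1, 2 units);
# all scores are invented for illustration, n = 5 per group.
doses = {0: [2, 3, 2, 3, 2], 1: [4, 5, 4, 4, 5], 2: [6, 6, 7, 5, 6]}
n = 5
k = len(doses)
N = n * k

means = [sum(g) / n for g in doses.values()]

# Within-groups (error) mean square, as in the overall analysis.
ms_error = sum((x - sum(g) / n) ** 2
               for g in doses.values() for x in g) / (N - k)

# Orthogonal polynomial coefficients for three equally spaced levels.
linear    = (-1, 0, 1)
quadratic = (1, -2, 1)

def trend_F(coefs):
    # Contrast value L, its sum of squares (1 df), and the F for the trend.
    L = sum(c * m for c, m in zip(coefs, means))
    ss = n * L ** 2 / sum(c ** 2 for c in coefs)
    return ss / ms_error

print(f"linear trend F    = {trend_F(linear):.2f}")
print(f"quadratic trend F = {trend_F(quadratic):.2f}")
```

Here the linear trend's *F* is large while the quadratic trend's is near zero, which is what a straight-line dose-response relationship looks like. Each trend *F* is tested with 1 and N - k degrees of freedom.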
