In Chapter 11, we extend the logic of the two-group experiment to experiments that have more than two groups. The advantages of using more than two groups are:
1. We can compare several treatments at once (e.g., chiropractor vs. physician vs. no-treatment)
2. We can map the functional relationship between our variables, if we
- choose the levels of our treatment variable so that they differ by either a constant amount (1,2,3) or a constant proportion (4, 8, 16)
- use a measure that gives us interval or ratio data
3. We can improve our construct validity by
- adding control groups
- not basing our conclusions solely on comparing a treatment group to an empty control group
To analyze the results of the multiple-group experiment, we rely on the same basic logic as when we analyze a two-group experiment. That is, in both cases we compare:
- variability (differences) between the means of our groups
- variability (differences) of scores within each group

Why do we compare these two variances? Because the group means could differ even if the treatment didn't have an effect. More specifically, the groups could differ because (1) the treatment made them different and/or (2) random error (for example, the groups were probably somewhat different to start with).
If the variability between group means is much bigger than the variability within the groups, the results are statistically significant. To find out how much bigger it must be, you need to use the F table in Appendix E.
To use the F table, you need to know the degrees of freedom for variability between groups (the number of groups minus 1) and the degrees of freedom for the error term (the number of participants minus the number of groups).
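As a concrete sketch, the between/within comparison and both degrees-of-freedom values can be computed by hand. The three groups and all of the scores below are made-up illustration data, not figures from the chapter:

```python
# One-way ANOVA by hand: compare between-group to within-group variability.
# The groups and scores are hypothetical illustration data.
groups = {
    "chiropractor": [5, 6, 5, 6, 5],
    "physician":    [7, 9, 8, 8, 9],
    "no-treatment": [4, 5, 4, 5, 5],
}

scores = [s for g in groups.values() for s in g]
n_total = len(scores)                       # total participants (15)
k = len(groups)                             # number of groups (3)
grand_mean = sum(scores) / n_total

# Between-groups sum of squares: how far each group mean is from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())

# Within-groups (error) sum of squares: how far each score is from its own
# group's mean.
ss_within = sum((s - sum(g) / len(g)) ** 2
                for g in groups.values() for s in g)

df_between = k - 1                          # number of groups - 1
df_within = n_total - k                     # number of participants - number of groups

ms_between = ss_between / df_between        # variability between group means
ms_within = ss_within / df_within           # variability within the groups

F = ms_between / ms_within
print(df_between, df_within, round(F, 2))
```

The resulting F is then compared against the tabled critical value for (df_between, df_within) in Appendix E.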
If you get a significant effect, you know that at least two of your groups differ, but you may not know which two. To find out, use a post hoc test, such as the Tukey test (see Appendix E).
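The Tukey test's arithmetic can be sketched as follows. The scores are hypothetical illustration data, and q = 3.77 is the tabled studentized-range value for 3 groups and 12 error degrees of freedom at the .05 level:

```python
# Post hoc Tukey HSD sketch (hypothetical data, equal group sizes).
# Any pair of group means differing by more than HSD is declared
# significantly different.
from itertools import combinations
from math import sqrt

groups = {
    "chiropractor": [5, 6, 5, 6, 5],
    "physician":    [7, 9, 8, 8, 9],
    "no-treatment": [4, 5, 4, 5, 5],
}
n = 5                                    # participants per group (equal n)
k = len(groups)                          # number of groups
means = {name: sum(g) / n for name, g in groups.items()}

# Mean square error: within-group variability pooled over groups
# (error df = 15 participants - 3 groups = 12).
ms_error = sum((s - means[name]) ** 2
               for name, g in groups.items() for s in g) / (k * n - k)

q = 3.77                                 # from a studentized-range table
hsd = q * sqrt(ms_error / n)             # honestly significant difference

for a, b in combinations(groups, 2):
    diff = abs(means[a] - means[b])
    print(a, "vs", b, "significant" if diff > hsd else "not significant")
```

With these made-up numbers, the pairs involving the physician group clear the HSD threshold while the chiropractor/no-treatment pair does not, which illustrates how a significant overall F can come from only some of the pairwise differences.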
If you aren't interested in which particular groups differ from each other but instead want to map the shape of the functional relationship between the amount of the treatment variable and scores on your measure, use a post hoc trend analysis. You need to know something about post hoc trend analyses so that you can design an experiment that will let you do one (see Box 11-2).
If you've properly designed your experiment, you can do a trend analysis by following the steps described on pages 344-345 -- or by having someone (a professor, a stat consultant) help you. However, if you don't design your study properly, nobody will be able to help you!
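The computations behind a trend analysis can be sketched with orthogonal polynomial contrasts. The dose levels and scores below are hypothetical, and the contrast coefficients are the standard tabled values for three equally spaced levels (which is why choosing levels that differ by a constant amount matters). Each trend F is compared against the F table with 1 and 12 degrees of freedom:

```python
# Post hoc trend analysis sketch: orthogonal polynomial contrasts for
# three equally spaced treatment levels (hypothetical dose/score data).

# Scores at doses of 1, 2, and 3 units; equal spacing is what makes the
# standard contrast coefficients below valid.
groups = [
    [3, 4, 3, 5, 4],   # dose 1
    [6, 7, 6, 8, 7],   # dose 2
    [7, 8, 8, 9, 8],   # dose 3
]
n = 5                                    # participants per group
means = [sum(g) / n for g in groups]

# Standard orthogonal-polynomial coefficients for 3 equally spaced levels.
linear = [-1, 0, 1]       # tests a straight-line trend
quadratic = [1, -2, 1]    # tests curvature (leveling off or U-shape)

def contrast_ss(coeffs, means, n):
    # Sum of squares for a contrast: n * (sum c_i * mean_i)^2 / sum c_i^2,
    # with 1 degree of freedom.
    num = sum(c * m for c, m in zip(coeffs, means)) ** 2
    return n * num / sum(c * c for c in coeffs)

# Mean square error, pooled within-group variability (df = 15 - 3 = 12).
ms_error = sum((s - means[i]) ** 2
               for i, g in enumerate(groups)
               for s in g) / (len(groups) * n - len(groups))

f_linear = contrast_ss(linear, means, n) / ms_error
f_quadratic = contrast_ss(quadratic, means, n) / ms_error
print(round(f_linear, 1), round(f_quadratic, 1))
```

Here the linear trend F is much larger than the quadratic trend F, suggesting scores rise roughly in a straight line with dose; a large quadratic F instead would have suggested the relationship bends.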