Chapter 11 Glossary

analysis of variance (ANOVA): a statistical test that is especially useful when data are interval, and there are more than two groups. For the experiments discussed in this chapter, ANOVA involves dividing between-groups variance by within-groups variance. (p. 339)
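
For a concrete (hypothetical) illustration, the following minimal Python sketch runs a one-way ANOVA on three made-up groups of scores using SciPy's f_oneway; the group names and numbers are invented purely for illustration.

```python
# One-way ANOVA on three hypothetical groups (made-up scores).
from scipy import stats

no_exercise = [4, 5, 3, 6, 5]   # hypothetical happiness ratings
moderate    = [6, 7, 6, 8, 7]
intense     = [7, 8, 9, 8, 9]

f_stat, p_value = stats.f_oneway(no_exercise, moderate, intense)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```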


between-groups variance (treatment variance, variability between group means, Mean Square Treatment, Mean Square Between): at one level, between-groups variance is just a measure of how much the group means differ from each other. Thus, if all the groups had the same mean, between-groups variance would be zero. At another level, between-groups variance is an estimate of the combined effects of the two factors that would make group means differ—treatment effects and random error. (p. 337)
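
As a sketch of the arithmetic (with the same made-up scores as in the ANOVA example above), between-groups variance can be computed by weighting each group mean's squared deviation from the grand mean by the group's size, summing, and dividing by the between-groups degrees of freedom (number of groups minus 1).

```python
# Between-groups variance (Mean Square Between) for three hypothetical groups.
import numpy as np

groups = [np.array([4, 5, 3, 6, 5]),   # made-up scores, group 1
          np.array([6, 7, 6, 8, 7]),   # group 2
          np.array([7, 8, 9, 8, 9])]   # group 3

grand_mean = np.mean(np.concatenate(groups))
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (len(groups) - 1)      # degrees of freedom = k - 1
print(f"Mean Square Between = {ms_between:.2f}")
```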


within-groups variance (error variance, variability within groups, Mean Square Error, Mean Square Within): at one level, within-groups variance is just a measure of the degree to which scores within each group differ from each other. A small within-groups variance means that participants within each group are all scoring similarly. At another level, within-groups variance is an estimate of the effects of random error (because participants in the same treatment group score differently due to random error, not due to treatment). Thus, within-groups variance is also called error variance. (p. 336)
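
As a companion sketch (same made-up scores), within-groups variance pools each score's squared deviation from its own group's mean and divides by the within-groups degrees of freedom (total number of participants minus number of groups).

```python
# Within-groups variance (Mean Square Within) for the same hypothetical groups.
import numpy as np

groups = [np.array([4, 5, 3, 6, 5]),
          np.array([6, 7, 6, 8, 7]),
          np.array([7, 8, 9, 8, 9])]

ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(len(g) for g in groups) - len(groups)    # N - k
ms_within = ss_within / df_within
print(f"Mean Square Within = {ms_within:.2f}")
```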


F ratio: at the numerical level, the F ratio is the Mean Square Between divided by the Mean Square Within. At the conceptual level, F is the between-groups variance (treatment plus random error) divided by within-groups variance (random error).

       If the treatment has no effect, the F ratio will tend to be close to 1.0, indicating that the difference between the groups could be due to random error. If the treatment has an effect, the F ratio will tend to be substantially above 1.0, indicating that the difference between the groups is bigger than would be expected if only random error were at work. (p. 340)
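
Putting the two pieces together (again with the made-up scores used above), the following sketch computes the F ratio by hand and checks it against SciPy's one-way ANOVA.

```python
# F ratio: Mean Square Between divided by Mean Square Within,
# checked against SciPy's one-way ANOVA on the same made-up data.
import numpy as np
from scipy import stats

groups = [np.array([4, 5, 3, 6, 5]),
          np.array([6, 7, 6, 8, 7]),
          np.array([7, 8, 9, 8, 9])]

grand_mean = np.mean(np.concatenate(groups))
ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in groups) / (len(groups) - 1)
ms_within = sum(((g - g.mean()) ** 2).sum()
                for g in groups) / (sum(len(g) for g in groups) - len(groups))

f_ratio = ms_between / ms_within
f_check, p_value = stats.f_oneway(*groups)       # should agree with f_ratio
print(f"F = {f_ratio:.2f} (SciPy: {f_check:.2f}), p = {p_value:.4f}")
```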


confounding variables: variables, other than the independent variable, that may be responsible for the differences between your conditions. There are two types of confounding variables: ones that are manipulation-irrelevant and ones that are the result of the manipulation. Confounding variables that are irrelevant to the treatment manipulation threaten internal validity. For example, the difference between groups may be due to one group being older than the other, rather than to the treatment. Random assignment can control for the effects of those confounding variables. Confounding variables that are produced by the treatment manipulation hurt the construct validity of the study. They hurt the construct validity because even though we may know that the treatment manipulation had an effect, we don’t know what it was about the treatment manipulation that had the effect. For example, we may know that an “exercise” manipulation increases happiness (internal validity), but not know whether the “exercise” manipulation worked because people exercised more, got more encouragement, had a more structured routine, practiced setting and achieving goals, or met new friends. In such a case, construct validity is harmed because we don’t know what variable(s) are being manipulated by the “exercise” manipulation. (p. 331)


empty control group: a group that gets no treatment, not even a placebo. Usually, you should try to avoid empty control groups: They hurt construct validity because they don’t allow you to discount the effects of treatment-related, confounding variables. For example, empty control groups may make your study very vulnerable to hypothesis-guessing. (p. 332)


hypothesis-guessing: participants trying to figure out what the study is designed to prove. Hypothesis-guessing can hurt a study’s construct validity. (p. 332)


levels of the independent variable: values (amounts) of the treatment variable. In the simple experiment, you have only two levels of the independent variable; in the multiple-group experiment, you have more than two. Having more than two levels of the independent variable can help you determine the functional relationship between the independent and dependent variables. (p. 322)


functional relationship: the shape of the relationship between variables. For example, the functional relationship between the independent and dependent variables might be linear or curvilinear. (p. 325)


linear relationship: a functional relationship between an independent and dependent variable that is graphically represented by a straight line. (p. 326)


nonlinear relationship (curvilinear relationship): a functional relationship between an independent and dependent variable that is graphically represented by a curved line. (p. 327)
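
One informal way to see the difference is to fit both a straight line and a curved (quadratic) line to a set of group means. The sketch below uses made-up means at four hypothetical treatment levels that follow an inverted-U pattern, so the curved fit describes them better than the straight one.

```python
# Comparing a linear and a curvilinear (quadratic) description of how
# hypothetical group means change across four levels of the treatment.
import numpy as np

levels = np.array([0.0, 1.0, 2.0, 3.0])   # amounts of the independent variable
means  = np.array([4.0, 7.0, 8.0, 5.0])   # made-up group means (inverted U)

linear    = np.polyfit(levels, means, deg=1)   # straight-line fit
quadratic = np.polyfit(levels, means, deg=2)   # curved-line fit

linear_misfit    = np.abs(means - np.polyval(linear, levels)).sum()
quadratic_misfit = np.abs(means - np.polyval(quadratic, levels)).sum()
print(f"linear misfit = {linear_misfit:.2f}, quadratic misfit = {quadratic_misfit:.2f}")
```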


post hoc trend analysis: a type of post hoc test designed to determine whether a linear or curvilinear relationship is statistically significant (reliable). (p. 344)
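
As a simplified illustration (not a full trend-analysis procedure), a linear trend across three equally spaced levels can be tested with contrast weights of -1, 0, and +1: the contrast's sum of squares is compared with the ANOVA's within-groups (error) variance. The scores below are the same made-up data used in the earlier entries.

```python
# A hand-rolled linear trend contrast (weights -1, 0, +1) tested against
# the ANOVA error term (Mean Square Within); illustration only.
import numpy as np

groups = [np.array([4, 5, 3, 6, 5]),
          np.array([6, 7, 6, 8, 7]),
          np.array([7, 8, 9, 8, 9])]
weights = np.array([-1.0, 0.0, 1.0])              # linear trend weights

group_means = np.array([g.mean() for g in groups])
n = np.array([len(g) for g in groups])

ms_within = (sum(((g - g.mean()) ** 2).sum() for g in groups)
             / (n.sum() - len(groups)))

contrast  = (weights * group_means).sum()
ss_linear = contrast ** 2 / (weights ** 2 / n).sum()   # 1 df for the contrast
f_linear  = ss_linear / ms_within
print(f"F(linear trend) = {f_linear:.2f}")
```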


post hoc test: a statistical test done after (1) doing a general test such as an ANOVA and (2) finding a significant effect. Post hoc tests are used to follow up on significant results obtained from a more general test. Because a significant ANOVA says only that at least two of the groups are significantly different from one another, post hoc tests may be performed to find out which groups are significantly different from one another. (p. 343)
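
For example, Tukey's HSD is one widely used post hoc test. The sketch below runs it on the made-up groups from the earlier examples, using SciPy's tukey_hsd function (available in recent SciPy versions).

```python
# Tukey's HSD post hoc test comparing every pair of the hypothetical groups.
from scipy.stats import tukey_hsd

no_exercise = [4, 5, 3, 6, 5]   # made-up scores
moderate    = [6, 7, 6, 8, 7]
intense     = [7, 8, 9, 8, 9]

result = tukey_hsd(no_exercise, moderate, intense)
print(result)   # pairwise mean differences, confidence intervals, and p values
```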