Chapter 12 Glossary

factorial experiment: an experiment that examines the effects of two or more independent variables (factors) at the same time. (p. 456)

 

simple main effect: the effect of one independent variable at a specific level of a second independent variable. A single simple main effect could have been obtained merely by doing a simple experiment. The simplest factorial design, the 2 × 2 experiment, produces four simple main effects. (p. 471)
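For concreteness, here is a short sketch in Python of the four simple main effects in a 2 × 2 design, using invented cell means (the factor names and numbers are hypothetical, not from the text):

```python
# Hypothetical cell means for a 2 x 2 factorial experiment:
# Factor A (drug: absent vs. present) crossed with Factor B (therapy: absent vs. present).
cell_means = {
    ("no_drug", "no_therapy"): 10,
    ("no_drug", "therapy"):    14,
    ("drug",    "no_therapy"): 12,
    ("drug",    "therapy"):    20,
}

# Two simple main effects of drug (one at each level of therapy)
drug_without_therapy = cell_means[("drug", "no_therapy")] - cell_means[("no_drug", "no_therapy")]  # 2
drug_with_therapy    = cell_means[("drug", "therapy")]    - cell_means[("no_drug", "therapy")]     # 6

# Two simple main effects of therapy (one at each level of drug)
therapy_without_drug = cell_means[("no_drug", "therapy")] - cell_means[("no_drug", "no_therapy")]  # 4
therapy_with_drug    = cell_means[("drug", "therapy")]    - cell_means[("drug", "no_therapy")]     # 8

print(drug_without_therapy, drug_with_therapy, therapy_without_drug, therapy_with_drug)
```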

 

main effect (overall main effect): the average of a variable's simple main effects; in other words, the overall or average effect of an independent variable. (p. 472)
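Continuing the same invented numbers from the sketch above, each main effect is just the average of that variable's simple main effects:

```python
# Hypothetical simple main effects from the 2 x 2 sketch above (invented numbers).
# Main (overall) effect = the average of a variable's simple main effects.
main_effect_drug = (2 + 6) / 2      # drug's simple main effects were 2 and 6 -> 4.0
main_effect_therapy = (4 + 8) / 2   # therapy's simple main effects were 4 and 8 -> 6.0
print(main_effect_drug, main_effect_therapy)
```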

 

interaction: when the effect of combining two variables is different from the sum of their individual effects. If you need to know how much of one variable participants have received to say what the effect of another variable is, you have an interaction between those two variables. If you have an interaction, then (1) the simple main effect of a variable is different in one condition than in another, and (2) the lines in a graph of your data will not be parallel. (p. 474)
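With the same hypothetical means, the interaction can be seen as the difference between a variable's simple main effects; a nonzero difference corresponds to nonparallel lines:

```python
# Invented simple main effects of drug from the sketch above:
# 2 without therapy, 6 with therapy.
interaction = 6 - 2  # the drug's effect changes across levels of therapy -> 4
# Because the simple main effects differ, a graph of the four cell means
# would show nonparallel lines, i.e., an interaction.
print("difference between simple main effects:", interaction)
```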

 

ordinal interaction: reflects the fact that an independent variable seems to have more of an effect under one level of a second independent variable than under another level. If you graph an ordinal interaction, the lines will not be parallel, but they will not cross. It is called an ordinal interaction because the interaction may be due to having ordinal data. That is, despite the existence of an interaction at the statistical level, the independent variable may have the same psychological effect under all levels of the second independent variable. Ordinal interactions may result from ceiling or floor effects. (p. 496)

 

ceiling effect: the effect of a treatment or combination of treatments is underestimated because the dependent measure cannot distinguish between participants who have somewhat high and those who have very high levels of the construct. The measure puts an artificially low ceiling on how high a participant may score and thus produces ordinal, rather than interval, data. Consequently, a ceiling effect may cause an ordinal interaction. (for more, see p. 204)
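A minimal numeric sketch (invented "true" means and an assumed scale maximum) of how a ceiling can produce an apparent ordinal interaction:

```python
# Hypothetical "true" cell means that are perfectly additive (no interaction):
true_means = {("control", "easy"): 10, ("control", "hard"): 16,
              ("treatment", "easy"): 14, ("treatment", "hard"): 20}

CEILING = 18  # assume the dependent measure cannot register scores above 18

observed = {cell: min(mean, CEILING) for cell, mean in true_means.items()}

# True simple main effects of treatment are equal (4 and 4) -> parallel lines.
# Observed effects are 4 and 2 -> nonparallel lines, an apparent (ordinal)
# interaction produced entirely by the measurement ceiling.
for condition in ("easy", "hard"):
    effect = observed[("treatment", condition)] - observed[("control", condition)]
    print(condition, effect)
```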

 

floor effect: the effect of a treatment or combination of treatments is underestimated because the dependent measure artificially restricts how low scores can be. The measure puts an artificially high floor on how low a participant may score and thus produces ordinal, rather than interval, data. Consequently, a floor effect may cause an ordinal interaction. (for more, see p. 204)

 

crossover (disordinal) interaction: when an independent variable has one kind of effect in the presence of one level of a second independent variable, but a different kind of effect in the presence of a different level of the second independent variable. Example: Getting closer to someone may increase their attraction to you if you have just complimented them, but may decrease their attraction to you if you have just insulted them. It is called a crossover interaction because the lines in a graph will cross. It is called a disordinal interaction because it cannot be explained as an artifact of having ordinal, rather than interval, data. (p. 494)

 

replication factor: a factor sometimes included in a factorial design to see whether an effect replicates (occurs again) under slightly different conditions. For example, suppose an investigator wants to see whether a new memory strategy is superior to a conventional one. Instead of having all the participants memorize the same story, the researcher assigns different participants to memorize different stories. Type of story is the replication factor in the study. The researcher hopes that the memory strategy manipulation will have the same effect regardless of which story is used. But if story type matters (there is an interaction between memory strategy and story type), the researcher might do further research to understand why the strategy was less effective in helping participants remember certain types of stories. (p. 505)

 

stimulus set: the particular stimulus materials shown to two or more groups of participants. Researchers may use more than one stimulus set in a study so that they can see whether the treatment effect replicates across different stimulus sets. In those cases, stimulus sets would be a replication factor. (p. 505)

 

systematic replication: a study that varies from the original study only in some minor aspect, such as using different stimulus materials. Thus, if you include stimulus set as a factor in your design, your study, in a sense, contains a systematic replication. (p. 505)

 

blocked design: a factorial design in which participants are first divided into groups (blocks) on a subject variable (e.g., a low-IQ block and a high-IQ block). Then, participants from each block are randomly assigned to the experimental conditions. Ideally, a blocked design will be more powerful than a simple between-subjects design. (p. 513)
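A brief sketch of the assignment procedure, with hypothetical participants and an assumed IQ cutoff of 110 (names, scores, and conditions are illustrative only):

```python
import random

# Hypothetical participants with measured IQ (a subject variable).
participants = [("P1", 95), ("P2", 118), ("P3", 102), ("P4", 130),
                ("P5", 88), ("P6", 125), ("P7", 99), ("P8", 121)]

# Step 1: divide participants into blocks on the subject variable
# (an assumed cutoff of 110 splits the low-IQ block from the high-IQ block).
low_block  = [name for name, iq in participants if iq < 110]
high_block = [name for name, iq in participants if iq >= 110]

# Step 2: within each block, randomly assign participants to the
# experimental conditions, so every condition gets both low- and high-IQ people.
assignments = {}
for block in (low_block, high_block):
    shuffled = random.sample(block, len(block))
    for i, person in enumerate(shuffled):
        assignments[person] = "treatment" if i % 2 == 0 else "control"

print(assignments)
```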