Chapter 10 Glossary

**placebo treatment:** a fake
treatment that we know has no effect, except through the power of suggestion.
For example, in medical experiments, a participant may be given a pill that
does not have a drug in it. By using placebo treatments, you may be able to
keep participants "blind" to whether they are getting the real treatment. (p. 390)

**single blind:** when either
the participant or the experimenter is unaware of whether the participant is
getting the real treatment or a placebo treatment. Making the participant
"blind" prevents the participant from biasing the results of the study; making
the experimenter blind prevents the experimenter from biasing the results of
the study. (p. 390)

**double blind (double masked):**
neither the participant nor the research assistant knows what type of treatment
(placebo treatment or real treatment) the participant is getting. By making both
the participant and the assistant "blind," you reduce both subject (participant)
and experimenter bias. (p. 390)

**experimental group:** the
participants who are randomly assigned to get the treatment. Note that the
experimental group should not be a group--in most experiments, nobody in the
experimental group would see or talk to anybody else in that group. (p. 371)

**control group:** the participants
who are randomly assigned to *not* receive the treatment. The scores of
these participants are compared to the scores of the experimental group to see
if the treatment had an effect. Note that the control group should not be a
group--in most experiments, nobody in the control group would see or talk to
anybody else in that group. (p. 371)

**empty control group:** a control
group that does not receive any kind of treatment, not even a placebo treatment.
One problem with an empty control group is that if the treatment group does
better, we do not know whether the difference is due to the treatment itself or
to a placebo effect. To maximize construct validity, most researchers avoid
using an empty control group. (p. 390)

**independent variable:** the
treatment variable; the variable manipulated by the experimenter. The experimental
group gets more of the independent variable than the control group.
Note: Do not confuse independent variable with dependent variable. (p. 371)

**levels of an independent
variable:** the treatment variable must vary. The different amounts or
kinds of treatments are called *levels*. (p. 371)

**dependent variable (dependent
measure):** participants' scores; the response that the researcher is
measuring. (If it helps you keep the independent and dependent variable
straight, you can think of the *dv* (dependent variable) as the "*d*ata
*v*ariable" or the "what the participant is *d*oing *v*ariable.") In the simple experiment, the experimenter hypothesizes that the
*dependent* variable will be affected by (*depend* on) the independent variable.
(p. 376)

**independently,
independence:** a key assumption of almost any statistical test. In the
simple experiment, observations must be independent. That is, what one
participant does should have no influence on what another participant does, and
what happens to one participant should not influence what happens to another participant.
*Individually* assigning participants to treatment or no-treatment condition and
*individually* testing each participant are ways to achieve independence. (p.
373)

**independent random
assignment:** randomly determining, for each individual participant,
and without regard to what group the previous participant was assigned to,
whether that participant gets the treatment. For example, you might flip a coin
for each participant to determine whether that participant receives the
treatment. Independent random assignment to experimental condition is the
cornerstone of the simple experiment. (p. 365)
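The coin-flip procedure described above can be sketched in a few lines of Python (the participant labels and group names are illustrative):

```python
import random

def assign_condition():
    """Independently 'flip a coin' for one participant."""
    return "treatment" if random.random() < 0.5 else "no treatment"

# Each participant's assignment is made individually, without regard
# to what group any previous participant was assigned to.
participants = [f"P{i}" for i in range(1, 7)]
assignments = {p: assign_condition() for p in participants}
print(assignments)
```

Because each call to `assign_condition()` ignores every earlier call, the assignments are independent, unlike, say, alternating participants between groups.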

**experimental hypothesis:** a
prediction that the treatment will cause an effect. In other
words, a prediction that the independent variable will have an effect on the
dependent variable. (p. 366)

**null hypothesis:** the hypothesis
that there is no treatment effect. Basically, this hypothesis states that any
difference between the treatment and no-treatment groups is due to chance. This
hypothesis can be disproven, but it cannot be proven.
Often, disproving the null hypothesis lends support to the experimental
hypothesis. (p. 369)

**simple experiment:** a study
in which participants
are independently and randomly assigned to one of two conditions, often either
a treatment condition or a no-treatment condition. The simple experiment is the easiest
way to establish that a treatment causes an effect. (p. 377)

**internal validity:** a
study has internal validity if it can accurately determine whether an
independent variable causes an effect. Only experimental designs have internal
validity. (p. 363)

**inferential statistics:** the
science of chance. More specifically, the science of
inferring the characteristics of a population from a sample of that population.
(p. 377)

**population:** the entire group
that you are interested in. You can estimate the characteristics of a population
by taking large random samples from that population. Often, in experiments, the
population is just all the participants in your study. (p. 392)

**mean:** an average calculated
by adding up all the scores and then dividing by the number of scores. (p. 394)

**central limit theorem:** the
fact that, with large enough samples, the distribution of sample means will be normally
distributed. Note that an assumption of the *t* test is that the
distribution of sample means will be normally distributed. Therefore, to make
sure they are meeting that assumption, many researchers try to have "large
enough samples," which they often interpret as at least 30 participants per
group. (p. 408)
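You can see the central limit theorem at work in a small simulation. The sketch below (sample sizes and population values are arbitrary choices) draws many samples of 30 scores from a uniform, clearly non-normal population and records each sample's mean:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is repeatable

# Draw 2,000 samples of n = 30 from a uniform population of scores
# between 0 and 100, and record each sample's mean.
sample_means = [
    statistics.mean(random.uniform(0, 100) for _ in range(30))
    for _ in range(2000)
]

# The individual scores are spread evenly from 0 to 100, but the
# sample means pile up around the population mean of 50.
print(statistics.mean(sample_means), statistics.stdev(sample_means))
```

If you histogram `sample_means`, the familiar bell shape emerges even though no individual score comes from a normal distribution, which is why "at least 30 per group" is a common rule of thumb.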

***t* test:** the
most common way of analyzing data from a simple experiment. It involves computing
a ratio between two things: (1) the difference between your group means; and
(2) the standard error of the difference (an index of the degree to which group
means could differ by chance alone).

As
a general rule, if the difference you observe is more than three times bigger than
the standard error of the difference, then your results will probably be statistically
significant. However, the exact ratio that you need for statistical significance
depends on your level of significance and on how many participants you have.
You can find the exact ratio by looking at the *t* table in Appendix F and
looking for where the column relating to your significance level meets the row
relating to your degrees of freedom. (In the simple experiment, the degrees of
freedom will be two less than the number of participants.) If the absolute
value of the *t* you obtained from your experiment is bigger than the tabled
value, then your results are significant. (p. 400)
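The ratio described above can be sketched directly. The code below (the scores are hypothetical, and the pooled-variance formula assumed here is the standard one for two independent groups) computes the difference between the group means divided by the standard error of the difference:

```python
import statistics
from math import sqrt

def t_ratio(treatment, control):
    """t = (difference between group means) /
           (standard error of the difference)."""
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    n1, n2 = len(treatment), len(control)
    # Pool the two sample variances, weighting by degrees of freedom.
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se_diff = sqrt(pooled * (1 / n1 + 1 / n2))
    return (m1 - m2) / se_diff

# Hypothetical scores; degrees of freedom = 5 + 5 - 2 = 8.
treatment = [12, 14, 11, 15, 13]
control = [9, 10, 8, 11, 10]
print(round(t_ratio(treatment, control), 2))  # prints 3.9
```

Here the difference between the means (3.4) is almost four times the standard error of the difference (about 0.87), so by the rule of thumb above these results would probably be statistically significant; the tabled value would confirm it.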

**statistical significance:** when
a statistical test says that the relationship we have observed is probably not
due to chance alone, we say that the results are statistically significant. See also *p* < .05. (p. 377)

***p* < .05:** in the simple experiment,
a common criterion for statistical significance: if the null hypothesis were
true, the chance of obtaining a difference between the groups as large as the
one observed would be less than 5 in 100 (.05). Researchers who set their
significance level at *p* < .05 declare results statistically significant only
when the results meet this criterion.
**Type 1 error:** rejecting the null hypothesis when it is
really true. In other words, declaring a difference statistically significant
when the difference is really due to chance. Thus, Type 1 errors lead to "false
discoveries." If you set *p *< .05, there is less than a 5% (.05)
chance that you will make a Type 1 error. (p. 381)

**Type 2 error:** failure to reject the null hypothesis
when it is really false; failing to declare that a difference is statistically
significant, even though the treatment had an effect. Thus, Type 2 errors lead
to failing to make discoveries. (p. 383)

**power:** the ability to find
differences; or, put another way, the ability to avoid making Type 2 errors.
Much of research design involves trying to increase power. (p. 384)

**null results (nonsignificant results):** results that *fail* to
disprove the null hypothesis. Null results do not prove the null hypothesis
because null results may be due to lack of power. Indeed, many null results are
Type 2 errors. (p. 379)