Measuring and Manipulating Variables:

Reliability and Construct Validity


I. Overview

II. Measurement

A. Where observers can go wrong

1. Observer bias

2. Random observer error

B. Minimizing observer error

1. Reducing observer bias

2. Reducing random observer error

C. Errors in administering the measure

D. Errors due to the participant

1. Random error

2. Subject bias

a. Social desirability bias

b. Obeying demand characteristics

3. Reducing subject bias

E. Reliability

1. Assessing overall reliability

2. Assessing observer reliability

3. Assessing random error due to participants

4. Conclusions about reliability

F. Beyond reliability: Construct validity

1. Discriminant validation strategies: Showing that you aren't measuring the wrong construct

2. Convergent validation strategies: Showing that you are measuring the right construct

3. Content validity: Is everything represented?

4. Summary of construct validity

III. Manipulating Variables

A. Common threats to a manipulation's validity

1. Random error

2. Researcher bias

3. Subject biases

B. Evidence used to argue for validity

1. Consistency with theory

2. Manipulation checks

C. Tradeoffs among three common types of manipulations

1. Instructional manipulations

2. Environmental manipulations

3. Manipulations involving stooges

D. Manipulating variables: A summary

IV. Concluding remarks


Key terms

