Chapter 7 Summary: Research Design Explained

Brief Summary of Chapter 7

In Chapter 7, we discuss descriptive methods. At their most basic, descriptive methods may involve measuring a single variable. Thus, the key is to accurately measure your variable(s) using a good sample. Some methods, such as giving a psychological test to a large random sample of participants, make it relatively easy to accurately measure your variable(s) in a good sample. Archival methods may be good for getting large samples and for avoiding subject bias (participants faking their responses), but may use measures that don't quite fit the construct you want to measure. Observation may provide data about what most individuals actually do--if observers can be objective and if you can observe a large enough sample. The tradeoffs among these methods are summarized in Table 7.1 (p. 244).

Regardless of the method used, researchers usually want to do more than measure a single variable. They want to see the relationships between variables. Because they often use correlation coefficients to see how variables are related, descriptive methods are also called correlational methods.

Correlational methods allow you to find out that there is a relationship, but they don't tell you why the relationship exists. That is, they don't tell you which variable is influencing which (the chicken-and-egg problem)--or whether both variables are side effects of some third variable. After looking at Figure 7-1 (page 227), you will know why correlational methods don't allow you to make cause-effect conclusions--and you will know why the following cartoon (from XKCD.com) is funny.

[XKCD cartoon on correlation and causation]

As the name suggests, the results of correlational studies can often be summarized using correlation coefficients. If both variables are interval, you can use the Pearson r (if one or both variables are not interval, see Box 7.3, p. 257). Positive correlations mean that participants who score high on one variable tend to score high on the other variable. Negative correlations, on the other hand, mean that participants who score high on one variable tend to score low on the other variable. (To visualize this, see Figure 7.6, page 254.)
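To see this in action, here is a minimal Python sketch (ours, not the textbook's) that computes Pearson r with numpy and scipy; the variables and numbers are invented for illustration.

```python
# A minimal sketch of positive vs. negative correlations.
# The data are made up: exam scores rise with study hours (positive r)
# while anxiety falls with study hours (negative r).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
study_hours = rng.uniform(0, 10, size=50)
exam_scores = 50 + 4 * study_hours + rng.normal(0, 5, size=50)
anxiety = 80 - 3 * study_hours + rng.normal(0, 5, size=50)

r_pos, _ = pearsonr(study_hours, exam_scores)     # high hours, high scores
r_neg, _ = pearsonr(study_hours, anxiety)         # high hours, low anxiety
print(f"positive correlation: r = {r_pos:+.2f}")  # roughly +.9
print(f"negative correlation: r = {r_neg:+.2f}")  # roughly -.9
```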

Some relationships are stronger than others. That is, in some cases, knowing a person's score on one variable (their height as a 21-year-old) is a very good predictor of their score on another variable (their height as a 22-year-old). In other cases, a relationship may be relatively weak (SAT scores and college grade-point average); knowing one variable doesn't allow you to predict the other very well. As you would expect, you can tell from the correlation coefficient whether the relationship is strong or weak. However, you can't tell by looking at the sign of the correlation coefficient! The sign of the correlation coefficient has nothing to do with its strength: a -.6 correlation is just as strong as a +.6, and a -.7 correlation is stronger than a +.6. Instead, you judge the strength of the relationship by seeing how far the correlation coefficient is from zero--the farther, the stronger. Alternatively, you could square the correlation coefficient to get the coefficient of determination. The bigger the coefficient of determination, the stronger the relationship. Thus, you could say that a -.5 correlation was stronger than a +.4 correlation because squaring -.5 gives you a coefficient of determination of .25 (-.5 × -.5 = .25), which is bigger than the .16 you would get from squaring +.4 (.4 × .4 = .16).
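You can verify that arithmetic in a couple of lines of Python: squaring removes the sign, so only the distance from zero matters.

```python
# Squaring a correlation gives the coefficient of determination.
for r in (-0.7, -0.6, -0.5, +0.4, +0.6):
    print(f"r = {r:+.1f}  ->  r^2 = {r * r:.2f}")
# -0.7 -> 0.49, -0.6 -> 0.36, -0.5 -> 0.25, +0.4 -> 0.16, +0.6 -> 0.36
```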

    If you find a correlation between two variables in your random sample, does that mean the two variables are really related? No, because it could just be due to random error. For example, if you had taken a different sample, you might have found no relationship or even the opposite relationship.
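A small simulation makes the point. In this sketch (again ours, not the book's), the two variables are truly unrelated in the population, yet some random samples still show sizable correlations.

```python
# Draw 1,000 small samples from a population in which x and y are
# independent (population r = 0) and record each sample's correlation.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
rs = []
for _ in range(1000):
    x = rng.normal(size=20)   # x and y are generated independently,
    y = rng.normal(size=20)   # so any correlation is pure sampling error
    r, _ = pearsonr(x, y)
    rs.append(abs(r))
print(f"largest |r| in 1,000 samples: {max(rs):.2f}")  # typically well above .5
```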

    To find out whether the correlation you found in your sample means that there is a relationship in the population, you need to do a significance test. You can do a t test (if you divide your participants into two groups), an F test, or a test to see whether the correlation is significantly different from zero. If you have only two groups (e.g., men and women), all three tests will give you essentially the same results (see for yourself by looking at Box 7.4, p. 269). However, using the t test or F test can sometimes lead to two problems.
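For the correlation version of the test, scipy reports the p-value along with r. The sketch below uses invented data; with real data you would substitute your own scores.

```python
# Test whether a sample correlation differs significantly from zero.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
x = rng.normal(size=40)
y = 0.5 * x + rng.normal(size=40)   # a genuine, moderate relationship
r, p = pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.4f}")
# If p < .05, conclude the population correlation is unlikely to be zero.
```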

    First, some people assume that because they have seen t tests used on data from experiments to help show that a treatment caused an effect, any use of a t test licenses cause-effect conclusions. What they are overlooking is that those cause-effect statements were justified only because the data came from an experiment--not because the data were analyzed with a t test.

    Second, what do you do if you don't have two groups? For example, suppose you have a group of individuals who all scored differently on an introversion-extroversion test. You could arbitrarily categorize half of them as introverts and the other half as extroverts, and then do a t test. Note, however, that you have (as far as the t test is concerned) thrown away each participant's individual score and merely lumped him or her into a group of participants with somewhat similar scores. The cost of throwing away so much information about each individual's score is that you lose power: by comparing your two "groups" with a t test rather than testing whether the correlation is significantly different from zero, you may fail to find a significant relationship.
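The power cost is easy to demonstrate on simulated data. In this sketch (the variable names are invented), the same relationship is tested once with a correlation test and once with a t test on the artificial median-split "groups."

```python
# Median split vs. testing the correlation directly.
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(3)
extraversion = rng.normal(size=60)                 # continuous scores
talkativeness = 0.35 * extraversion + rng.normal(size=60)

r, p_corr = pearsonr(extraversion, talkativeness)

median = np.median(extraversion)
low = talkativeness[extraversion <= median]        # labeled "introverts"
high = talkativeness[extraversion > median]        # labeled "extroverts"
t, p_split = ttest_ind(high, low)

print(f"correlation test:    p = {p_corr:.3f}")
print(f"median-split t test: p = {p_split:.3f}")   # usually larger (less power)
```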

    Unfortunately, improper analyses are not the only reason you may obtain null results. You may fail to find a relationship because you didn't have enough participants, because you had an insensitive measure, because your test was looking for a linear relationship when the actual relationship was not linear, or because of restriction of range: you couldn't see whether your variables varied together because at least one of them didn't vary much.
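The last two problems are easy to see with simulated data. In this sketch (ours, not the book's), a perfect inverted-U relationship produces a Pearson r near zero, and restricting the range of one variable shrinks a strong correlation.

```python
# Two ways to get null results even when variables are related.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

# 1. A nonlinear (inverted-U) relationship: Pearson r looks for a line.
arousal = rng.uniform(-3, 3, size=200)
performance = -(arousal ** 2) + rng.normal(0, 0.5, size=200)
r_nonlinear, _ = pearsonr(arousal, performance)

# 2. Restriction of range: keep only a thin slice of x and r shrinks.
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(0, 0.7, size=500)
r_full, _ = pearsonr(x, y)
narrow = (x > -0.3) & (x < 0.3)
r_restricted, _ = pearsonr(x[narrow], y[narrow])

print(f"nonlinear relationship: r = {r_nonlinear:+.2f}")   # near 0
print(f"full range:             r = {r_full:+.2f}")        # about +.7
print(f"restricted range:       r = {r_restricted:+.2f}")  # much closer to 0
```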

