Bonus Article for Chapter 1 of Research design explained



If you need to convince your students that media reports of research are not always accurate, assign the following article:

Bushman, B. J., & Anderson, C. A. (2001, June/July). Media violence and the American public: Scientific facts versus media misinformation. American Psychologist, 56, 477-489.


The article is relatively easy for students to read. To make the article even easier for students to read, give students Table 1.


Table 1

Helping Students Understand the Article


 Tips, Comments, and Problem Areas


Discontinuity: gap; lack of connection

Vested interest: biased because promoting that viewpoint will make one richer; self-interest




Last paragraph of page 477

Ambrosia: desirable food (has many meanings)


Page 478


Content analysis: an objective, systematic strategy for categorizing behavior and content (in this case, the content of television shows). To get a general idea of how television violence could be content analyzed, follow either of the links below.



If you want to know more about content analysis, go to the site below to see professional researchers’ actual coding sheets and codebooks: rules of how to content analyze data.



Commodity: product


Page 480: note that the catharsis idea has been around a long time, but that does not mean it is true. In fact, research has not supported it. To see an example of research casting doubt on that idea, click on the link below.


Narrative review: an article that verbally summarizes the results of several studies. In a narrative review, the author’s subjective judgment is a factor in the author’s conclusions about the extent to which variables are, according to past research, related. Usually, the strength of the relationship is described verbally (e.g., “strong,” “moderate,” “weak”) rather than mathematically.


Meta-analytic review: an article that uses some objective, mathematical (numerical) index of effect size to summarize the degree to which variables are, according to past research, related.


Significantly different from zero: reliably different from zero


Footnote 1:

 This footnote does a good job of describing correlation coefficients. If you need more help understanding correlation coefficients, download “The Correlator,” a tutorial that will teach you about correlation coefficients. You can download either the

          Windows version


or the Mac version



If you believe you have a good understanding of correlation coefficients, you can put your knowledge to the test at the following site:



Dichotomous variable: a variable that can have only two values (e.g., in their example, people can either watch or not watch television; in gender research, gender is typically coded as either male or female). If a researcher puts responses into one of two categories (e.g., watches television or does not watch television), the researcher has created a dichotomous variable.
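The idea of turning raw responses into a dichotomous variable can be sketched in a few lines of Python (the hours-watched numbers below are hypothetical, made up only for illustration):

```python
# Hypothetical survey responses: hours of television watched per week.
hours_watched = [0, 3.5, 0, 12, 1, 0, 7]

# Collapse each response into one of two categories:
# 1 = watches television, 0 = does not watch television.
watches_tv = [1 if hours > 0 else 0 for hours in hours_watched]

print(watches_tv)  # [0, 1, 0, 1, 1, 0, 1]
```

However the raw responses vary, the new variable has only two possible values, which is what makes it dichotomous.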



Figure 2

Note that positive correlation does not mean good. For example, the correlation between smoking and lung cancer is positive. Note also that negative correlation coefficients are not necessarily bad. There is a negative correlation between self-examination and extent of breast cancer (the more self-examination, the less breast cancer). Finally, note that negative correlation coefficients indicate that variables are related. For example, the negative correlation between exposure to lead and IQ scores tells us that the more exposure to lead, the lower IQ scores tend to be.


Page 481


Merely correlational: correlational research involves looking for statistical relationship between variables without manipulating any of those variables. With correlational research, you could find that people who watch violent television are more likely to be violent. However, you would not know that violent television caused them to be violent. For example, it could be that their hunger for violence causes them to watch violent television. Correlational research is different from experimental research, in which the researcher determines who gets which treatment. In short, typically, experimental research has internal validity whereas correlational research does not.


More Logical Analyses: The Smoking and Media Violence Analogy


The first of these six points is a very important one. Many students dismiss psychological research findings because they “have a friend who did not behave (the way the research says people behave).” The authors’ analogy may help explain why the “I have a friend who” attack does not destroy the credibility of a research finding.


Dissipate: go away; fade

If you are having trouble coming up with a research idea, you might take another look at the fourth point. Can you think of other instances where the short-term effect of treatment might be different from the long-term effect?


Page 482

When small is big: As we note throughout Research design explained, psychologists are increasingly interested in effect size: how strong the observed relationship between the variables was in their study. As we also note in Research design explained (see pages 280-281), for some studies, “effect size” measures will not tell you about a relationship’s practical importance.


In your own computerized searches, you can use the authors’ strategy of putting an asterisk at the end of the stem of a key term. For example, if you want articles on adolescents, your search term might be “adolesc*” so that you could capture “adolescent,” “adolescents,” and “adolescence.” For more about doing computerized searches, see Appendix B of Research design explained.


Reliability: In this case, the authors are interested in inter-rater reliability: the degree to which judges agree. High reliability would indicate that different raters judge the same behavior similarly. Low reliability would mean that raters are not agreeing. Low reliability between raters would be bad. At best, it would mean that one rater’s failure to focus on the behavior and the rules of scoring behavior caused the rater to make inconsistent ratings. At worst, it could mean that the raters’ subjective biases caused the raters to make biased ratings.
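Inter-rater reliability in its simplest form (percent agreement) can be computed directly. The ratings below are hypothetical, invented to show the idea:

```python
# Hypothetical data: two judges categorize the same 10 behaviors.
rater_a = ["hit", "push", "none", "hit", "none", "push", "hit", "none", "none", "hit"]
rater_b = ["hit", "push", "none", "push", "none", "push", "hit", "none", "hit", "hit"]

# Percent agreement: the proportion of behaviors both raters scored the same way.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)

print(percent_agreement)  # 0.8 here: the raters agree on 8 of the 10 behaviors
```

Percent agreement is only the simplest index; more sophisticated indexes (such as Cohen's kappa) also correct for the agreement raters would reach by chance alone.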

Page 483

SD: abbreviation for standard deviation. Standard deviation is a measure of the variability (spread) of the scores.

t (635) = 6.60, p < .0001: They used a t test with 635 degrees of freedom to determine whether they could conclude that the average value of all the articles, even taking random rating error into account, was reliably less than 5. The value of that t test was 6.60. If the real average value of the articles were 5 or greater, obtaining a t of 6.60 or greater would happen less than 1 in 10,000 times by chance alone. Therefore, they determined that the average rating of articles in the media was below 5. If you want to do their analysis online, you can enter the data at the following website:

When you get to that website, plug in the mean article rating (4.15) in the “Mean 2” space, the standard deviation for the article ratings in the “Standard Deviation 2” space, and put the number of articles in the “N of Cases 2” spot. Plug in “5” for “Mean 1,” and leave “N of Cases 1” and “Standard Deviation 1” blank. Then, click on the “population” button. You will get approximately the same result as the authors.
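If you would rather see the arithmetic behind their t test, the sketch below applies the one-sample t formula to summary statistics. The mean (4.15), the number of articles (636), and the comparison value (5) come from the article; the standard deviation below is a hypothetical value, chosen only to show how a t near 6.60 could arise:

```python
from math import sqrt

# One-sample t test from summary statistics: t = (mean - mu) / (sd / sqrt(n))
mean, mu, n = 4.15, 5.0, 636
sd = 3.25  # HYPOTHETICAL: the article reports t, not the standard deviation

t = (mean - mu) / (sd / sqrt(n))
print(round(abs(t), 1))
```

Note that the degrees of freedom for this test would be n - 1 = 635, matching the 635 the authors report.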

95% confidence interval: In this case, a range in which the true mean probably falls. For the mean, the 95% confidence interval often extends from about 2 standard errors below the sample mean to 2 standard errors above the sample mean. Thus, if the sample mean was 16 and the standard error was 3, the 95% confidence interval might extend from 10 to 22. If you took 100 samples and created 95% confidence intervals for the mean from each sample, the true population mean would be within the confidence interval in about 95 of those samples.
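The worked example above (sample mean 16, spread estimate 3) can be checked in two lines:

```python
# Rough 95% confidence interval: about 2 "spread" units on either
# side of the sample mean (numbers from the worked example above).
mean, spread = 16, 3
lower, upper = mean - 2 * spread, mean + 2 * spread

print(lower, upper)  # 10 22
```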

Polynomial regression to examine the linear and quadratic effects: Regression is a technique for predicting an outcome (in this case, article ratings) using one or more predictors (in this case, year). Standard regression only looks for linear (straight line) effects. Thus, in standard regression, the strategy would be to see how well the data points in Figure 3 fit a straight line. As you can see, the points do not fit a straight line very well. Indeed, the correlation of .02 is very close to no relationship (zero correlation). Polynomial regression can look at how different types of lines—curved as well as straight—fit the data. In this case, they looked for a certain type of curved line (curvilinear): a “u”-shaped (quadratic) line. As you can see from Figure 3, the data seem to fit an upside-down “u” shape. The statistical analysis confirms that this is a reliable (statistically significant) pattern.
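To see the difference between fitting only a straight line and adding a quadratic term, here is a sketch using NumPy with made-up, deliberately u-shaped data (not the authors' data):

```python
import numpy as np

# Hypothetical data: ratings that rise and then fall over the years,
# an upside-down "u" shape like the one in Figure 3.
year = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
rating = np.array([2, 4, 5.5, 6.5, 7, 6.5, 5.5, 4, 2])

# Degree 1 = straight line only; degree 2 adds a quadratic (curved) term.
linear_fit = np.polyfit(year, rating, 1)
quadratic_fit = np.polyfit(year, rating, 2)

# The straight line misses the pattern entirely (its slope is near zero),
# while the quadratic fit's negative leading coefficient captures the
# inverted-u curve.
print(quadratic_fit[0] < 0)  # True
```

This mirrors the article's logic: a near-zero linear correlation does not mean "no relationship" if the relationship is curvilinear.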

d = 0.31 is a measure of effect size. By tradition, psychologists consider a d of .31 small. However, note that the difference between the means was about 1 point on the 0 to 10 scale. Thus, you might consider the effect size moderate to large.
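Cohen's d is simply the difference between two means divided by a (pooled) standard deviation. The numbers below are hypothetical, picked so that a roughly 1-point difference on the 0 to 10 scale yields a d near 0.31:

```python
# Cohen's d: standardized difference between two group means.
mean_1, mean_2 = 5.15, 4.15  # hypothetical: a 1-point difference on a 0-10 scale
pooled_sd = 3.23             # hypothetical pooled standard deviation

d = (mean_1 - mean_2) / pooled_sd
print(round(d, 2))  # about 0.31
```

The same 1-point difference would count as a larger d if the scores varied less, which is why judging practical importance from d alone can be misleading.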

PsycINFO: the American Psychological Association has put Abstracts (summaries) of articles in an electronic (computerized) format. PsycINFO is a great tool to help you find articles. To learn how to use PsycINFO, see Appendix B of your text or go to the following site:

Page 484

(first part)

Empirical studies: studies that involved collecting data from participants; different from theoretical papers or review articles. 

Affect: feelings; emotions; moods

Cognition: thought

Inanimate: not living. (Most people think that objects are inanimate. For most people, the phrase “inanimate objects”—like the phrase “living, breathing person”— seems redundant.)


Results Page 484



99.9% confidence interval: In this case, a range of values in which the true mean probably falls. If you took 1000 samples and created 99.9% confidence intervals for the mean from each sample, the true population mean would be within the confidence interval in about 999 of those samples. Note that because, in the authors’ research, zero is not in those 99.9% intervals, we can be confident that television violence has a nonzero effect. (For more about confidence intervals, see the earlier discussion of 95% confidence intervals.)


Experimental versus nonexperimental studies

Note that experimental studies allow researchers to make cause-effect statements; nonexperimental studies do not.

Footnote 3

This footnote addresses an important distinction that we emphasize throughout Research design explained: Experimental research has internal validity, but nonexperimental research does not. It also makes another point that we will repeatedly discuss later in the text: random assignment of participants to different levels of treatment is a key element of most experimental designs. If you want to learn more about random assignment now, go to the following site:


confounds: factors, other than the suspected cause, that are responsible for the effect. For example, suppose the group that watched television was more aggressive than the group that did not watch television. It might be tempting to conclude that television caused the one group to be more aggressive than the other. However, depending on the study, a potential confound could be personality. For example, frustrated people may choose to watch violent television whereas happy people may choose not to watch violent television. In that case, the higher levels of aggression in the group that watches more violent television could be due to their increased level of frustration rather than to their watching violent television shows.


Page 485

Note that the statements in Footnote 4 argue against looking at effect size in experimental studies. That is, the effect size will depend on factors unrelated to the variable’s real life importance, such as (a) how well the experimenter controls nontreatment (extraneous) variables, (b) how intense the experimenter’s manipulation is, and (c) how quickly after the experimenter exposes participants to the treatment the experimenter measures participants’ behavior.


In experimental studies, as you can see from Figure 5, the relationship between media violence and aggression did not change over time. The correlation between media violence and aggression did not go steadily up or down (a linear relationship). It did not go up and then go back down (a curvilinear relationship). Indeed, the correlation never got below .__ or above .___ .


In nonexperimental studies, the relationship between media violence and aggression tends to increase over time. As you can see from Figure 5, the correlation between media violence and aggression was about ___ in 1975, but about ___ in 2000. As you can also see from Figure 5, a line that fits the correlation coefficients for nonexperimental studies is a combination of an upward-moving straight line (linear component) and a “u”-shaped curve (curvilinear component).


For now, you probably do not need to understand the discussion about the possible explanation for the increase in effect sizes. Later, you may want to go back to this section because it points out factors that affect how big an effect size is (issues that are relevant both when (a) interpreting nonsignificant correlational results [see pages 175-176] and when (b) debating the value of using statistical significance tests—tests that do not consider effect size [see Box 9-3]).


Negative correlation:  an inverse relationship; a relationship that can be described as

 “the more _________________________, the less __________________________.”

 To understand what the authors mean, fill in the blanks above with the names of the variables discussed in the last paragraph of page 485. Note that the authors would have hoped for a positive correlation between those two variables.



Page 486

1st paragraph

In addition to the correlation between average effect size and news report rating that they reported on the previous page, they computed the correlation between the lowest reasonable estimate of effect size and news report rating.

General discussion:

Note the very strong statement that journalism does not seem to value truth. Note also (a) that popular press reports are often inconsistent with the findings of psychological research and (b) the reasons the authors give for this inconsistency.


Page 487

Heuristic: general rule; simple strategy

Note that, as we stress in Chapter 1, science is not put to popular vote. Note also that facts (e.g., the moon exists; television violence causes violence; Saddam did not have nuclear weapons) are true—there is no valid other side. Finally, note that an opinion that is supported by facts is more informed and much more likely to be valid than an opinion that is not supported by facts.

Obfuscations: statements designed to make something unclear; statements designed to confuse people so they do not know the truth

Note that if you go into counseling or some applied area of psychology and you want to use up-to-date, valid treatments, you will probably need to educate the general public about key research findings.

If you still need a reason to doubt secondhand information, see the statement in parentheses at the end of the next-to-last paragraph.

Caveats: qualifications

Page 488

Note the costs of being truthful. If you are not willing to pay those costs, psychology may not be the right field for you.




