A Content Analysis of Content Analyses: Twenty-Five Years of Journalism Quarterly
Rationale

Why examine the use of a research method such as content analysis?
Methodology - the study of research methods - helps a discipline improve. Thus, Lowry critiqued the population validity of samples in communication studies. Chase and Baran examined the statistical power of tests that are used. Lacy and Riffe described "sins of omission and commission" in manuscripts reviewed for a dozen journals (including failure to report coder reliability in content analyses).
More compelling as a rationale for the study is the method's widespread use. Fowler showed 84.1% of master's-level research methods courses include content analysis, slightly more than the 78% and 73% covering experimental and survey research. Not surprisingly, Wilhoit found 21.9% of 1981-82 theses and dissertations employed content analysis (28.1% employed survey, 8.4% were experiments, and 20% were categorized as historical method).
Content analysis is also well-represented in the journals. Wilhoit's comparison of 1978-80 Communication Abstracts data with 1944-64 data collected with Danielson showed one-tenth of published mass communication

Daniel Riffe is professor in the E.W. Scripps School of Journalism at Ohio University, where Alan Freitag is a doctoral student. J&MC Quarterly, Vol. 74, No. 3, Autumn 1997, 515-524.
author yielded a presumably exhaustive listing of 486 articles using content analysis across the twenty-five-year period.
Variables recorded for each article included: authorship (solo or
multiple faculty, and student involvement); medium (newspaper, television,
etc.); domestic or international focus; and content (news/editorial, advertis-
ing, etc.).
"Theoretical linkage" variables included presence of explicit hypoth-
eses or research questions; explicit linkage to theory, whether to antecedents
(e.g., conditions of production) or consequences (e.g., possible effects); and
use with another research method (e.g., a survey) or extra-media data (e.g., Census figures).
Finally, "quality of method" variables included: use of convenience or purposive sampling (instead of representative data from a random sample, or a census); whether intercoder reliability based on randomly drawn (representative) reliability samples was reported; and whether data analysis was limited to descriptive statistics (e.g., percentages or means).
One trained mass communication doctoral student performed the coding. A reliability check on a random sample of articles with a second coder yielded a .85 level of agreement by Holsti's coefficient of reliability.
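Holsti's coefficient of reliability is a simple proportion-agreement measure: CR = 2M / (N1 + N2), where M is the number of coding decisions on which the two coders agree and N1 and N2 are the total decisions each coder made. A minimal sketch in Python (the function name and the example counts are illustrative, not drawn from the article):

```python
def holsti_cr(agreements: int, n1: int, n2: int) -> float:
    """Holsti's coefficient of reliability: CR = 2M / (N1 + N2).

    agreements -- coding decisions on which both coders agreed (M)
    n1, n2     -- total decisions made by coder 1 and coder 2
    """
    return (2 * agreements) / (n1 + n2)

# Hypothetical check: two coders each make 100 decisions and agree on 85,
# which would yield the .85 level of agreement the article reports.
print(holsti_cr(85, 100, 100))  # → 0.85
```

When both coders code the same units, N1 = N2 and the coefficient reduces to the simple percentage of agreement.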
Results

The 486 full-length articles using content analysis published during 1971-95 represented a fourth (24.6%) of the total 1,977 research articles published. Recall Wilhoit's estimate that 21.9% of theses and dissertations used content analysis, and Moffett and Dominick's report of similar (21%) prevalence of content analyses in Journal of Broadcasting.
But if our data suggest that scholars in JMCQ are slightly more likely
than graduate students and scholars published in JOB to employ content
analysis, recall that our research objective was to examine change over time.
Table 1 data reflect the annual total of articles in the journal, annual frequency
of content analyses in particular, and percentage of the annual total repre-
sented by that frequency.
Annual totals of full-length articles published, viewed across the twenty-five years, support Stempel's observation that the number of articles in JMCQ is increasing. When ranks are assigned years (the most recent is "1") and annual totals (the largest number is "1"), between-ranks correlation indexes a statistically significant increasing trend (tau = 0.25, p < .04) - it is not monotonic because of the drop in annual totals beginning in 1991 after the giant 1988-90 issues.
When ranks are based just on annual "raw frequency" of content analyses, tau is stronger (0.51, p < .001), indicating a significant trend toward increased publication of content analyses. Peak years were 1985 and 1989. After 1976 no volume had fewer than ten content analyses; four of six before 1977 had ten or fewer.
But is that growth in content analysis a result of increasing publication of all articles? No. When ranks are based on the relative percentage of each year's total which are content analyses (e.g., 1971's 6.3%, or the 1995 peak of 34.8%), tau (0.48, p < .001) indicates a twenty-five-year trend of increased content analyses. JMCQ was indeed publishing more articles overall, but a significantly increasing percentage of them were content analyses.
Table 2 shows a plurality (38.5%) of articles were solo-authored by faculty. When faculty-coauthored articles (26.1%) are added, faculty account for two-thirds of the total (64.6%). Another one in six (17.5%) was coauthored with one or more students, but a respectable percentage (11.7%)
TABLE 1
Articles and Content Analyses in Journalism & Mass Communication Quarterly, by Year

Year   Total articles (f)   Content analyses (f)   %
1971 64 4 6.3
1972 66 10 15.1
1973 76 17 28.0
1974 73 10 13.7
1975 75 17 22.7
1976 69 10 14.5
1977 70 18 25.7
1978 74 14 18.9
1979 87 18 20.9
1980 64 13 20.3
1981 57 13 24.1
1982 64 20 31.2
1983 73 19 26.0
1984 97 27 27.8
1985 93 31 33.3
1986 88 25 28.4
1987 92 29 31.5
1988 112 29 25.9
1989 107 35 32.7
1990 113 28 24.8
1991 76 15 19.5
1992 77 22 28.6
1993 71 20 28.2
1994 73 19 26.0
1995 66 23 34.8
conceptual reasons specific to a particular study, but generalizability and use of some statistics can become questionable in such instances.
Table 3 shows just over a fourth (27.6%) of the articles involved an explicit theoretical framework. Under half (45.7%) involved explicit statements of hypotheses or research questions that guide research design and focus the analysis. About 10% used content analysis with another research
method, such as a survey or experiment, and 16.9% incorporated extra-media
data, such as Census or circulation/ratings data.
Just over half reported specific intercoder reliability levels, in all but
a very few instances citing only an overall figure rather than variable-by-
variable reliabilities. One author reported only that reliability was within
"acceptable" levels. Reliability measures were usually identified ("simple
percentage of agreement," "Holsti's method," etc.), but only 10.1% explicitly
specified random sampling in reliability tests. Intracoder reliability reports
were even rarer (6.6%). One researcher who did all the coding in one study
observed that, as a result, no measure of reliability was necessary. Some
content analyses may, of course, involve measurement that requires little
coder judgment (e.g., the number of photos on a front page), but reporting of
reliability seems essential.
TABLE 4
Trends in Content Analyses
in Journalism & Mass Communication Quarterly (1971-95)

Percentage of content analyses that have [column headings not recoverable from source]

1986-90 (n = 146): 28.5, 52.1, 28.8, 24.7, 26.6, 8.9, 56.2, 33.6