
A CONTENT ANALYSIS OF CONTENT ANALYSES:

TWENTY-FIVE YEARS OF JOURNALISM QUARTERLY

By Daniel Riffe and Alan Freitag

Daniel Riffe is professor in the E.W. Scripps School of Journalism at Ohio University, where Alan Freitag is a doctoral student.

Examination of the increasing number of articles employing quantitative content analysis in 1971-95 Journalism & Mass Communication Quarterly showed primary focus on news/editorial content in U.S. media. Nearly half examined newspapers, and half were coauthored. Most used convenience or purposive samples. Few involved a second research method or extra-media data, explicit theoretical grounding, or research questions or hypotheses. Half reported intercoder reliability, and two-fifths used only descriptive statistics. Analysis of trends shows growth in coauthorship and reporting of reliability, and increasing emphasis on more sophisticated statistical analysis. No parallel trend exists, however, in use of explicit hypotheses/research questions or theoretical grounding.

Research journals, what Weaver and Wilhoit called "the nerves of a discipline,"¹ are a barometer of the substantive focus of scholarship and the research methods most important to the discipline. This longitudinal study examines trends in content analysis in Journalism & Mass Communication Quarterly² (or JMCQ) over the last quarter century (1971-95). Schweitzer has shown JMCQ's predominance among journals which publish the "vast majority" of mass communication research.³

Rationale

Why examine the use of a research method such as content analysis? Methodology - the study of research methods - helps a discipline improve. Thus, Lowry critiqued population validity of samples in communication studies.⁴ Chase and Baran examined statistical power of tests that are used.⁵ Lacy and Riffe⁶ described "sins of omission and commission" in manuscripts reviewed for a dozen journals (including failure to report coder reliability in content analyses).
More compelling as a rationale for the study is the method's widespread use. Fowler⁷ showed 84.1% of master's level research methods courses include content analysis, slightly more than the 78% and 73% covering experimental and survey research. Not surprisingly, Wilhoit⁸ found 21.9% of 1981-82 theses and dissertations employed content analysis (28.1% employed survey, 8.4% were experiments, and 20% were categorized as historical method).
Content analysis is also well-represented in the journals. Wilhoit's⁹ comparison of 1978-80 Communication Abstracts data with 1944-64 data collected with Danielson¹⁰ showed one-tenth of published mass communication research articles in both periods used content analysis. Yet Lowry's¹¹ examination of 1970-75 JMCQ "empirical articles" showed 20% were content analyses. Evans and Priest¹² found more than a third of journal articles on science coverage used content analysis. A recent sampling study¹³ found that JMCQ has included a total of eighty-five magazine content analyses through Autumn 1994, while Gerlach¹⁴ showed that 53% of all 1964-83 JMCQ studies of magazines were content analyses. Finally, Moffett and Dominick¹⁵ found that 21% of 1970-85 articles in Journal of Broadcasting were content analyses.
The centrality of content analysis in journalism/mass communication scholarship was also shown in a citation study.¹⁶ While the forty-three "most cited references" in 1978, 1979, and 1980 JMCQ volumes included scholars as pivotal as Klapper, Berelson, and Lazarsfeld, only four were methodological: Siegel's Nonparametric Statistics for the Behavioral Sciences; a statistical package user's manual; Holsti's Content Analysis for the Social Sciences and Humanities; and Budd, Thorp, and Donohew's Content Analysis in Communication Research.
But if these studies are evidence of the past stature of content analysis, there is evidence scholarly use of content analysis is increasing.¹⁷ Stempel¹⁸ reported that the first forty volumes of JMCQ (1924-63) included fifty content analyses. The next ten volumes (1964-73) alone included fifty-one content analyses, while the ten volumes from 1974 to 1983 included 106 content analyses. (His 1990 analysis of research trends¹⁹ did not provide an update on content analyses since 1983.)
Growth in content analysis research may parallel growth in the number of scholars in journalism and mass communication²⁰ and an increased emphasis on scholarly publication. Another indication of those trends is the fact that JMCQ has increased numbers of pages and articles devoted to research since 1973. Other factors may include the growth of graduate research programs, the movement of graduates into the academy, and the broadening purview of the discipline itself - signaled by the change from Journalism Quarterly to Journalism & Mass Communication Quarterly. Of course, the last quarter century has also witnessed increased availability of archived and database content, and a renewed interest in content driven by questions or criticism of how mass media represent reality.
But is more content analysis better? Not, critics say, unless those analyses contribute to theory. Shoemaker and Reese²¹ lamented that development of theories of media content is "stuck on a plateau" because most content analyses are not linked "in any systematic way to either the forces that created the content or to its effects." Stevenson urged more explicit linkages of international content analysis research to theory.²²

Objective

Within this context - of a widely used research technique allegedly becoming more important in theory-building, and concern over various methodological "sins" - we conducted a content analysis of content analyses in order to answer a single research question that has several dimensions:

    How have published content analyses changed during the last
    twenty-five years, in terms of frequency, authorship, focus on
    different media and content, sampling, reliability reporting,
    links to theory, and use with other methods and data?

Method

We examined all full-length (i.e., no "Research in Brief") content analyses in JMCQ's²³ 1971-95 issues. Article-by-article examination by each author yielded a presumably exhaustive listing of 486 articles using content analysis across the twenty-five-year period.²⁴
Variables recorded for each article included: authorship (solo or multiple faculty, and student involvement); medium (newspaper, television, etc.); domestic or international focus; and content (news/editorial, advertising, etc.).
"Theoretical linkage" variables included presence of explicit hypoth-
eses or research questions; explicit linkage to theory, whether to antecedents
(e.g., conditions of production) or consequences (e.g., possible effects); and
use with another research method (e.g., a survey) or extra-media data (e.g..
Census figures).
Finally, "quality of method" variables included: use of convenienceor
purposive sampling (instead of representative d.ita from a random sample,
or a census); whether intercoder reliability based on randomly drawn (rep-
resentative) reliability samples-^ was reported; and whether data analysis
was limited to descriprive statistics (e.g., percentages or means).
One trained mass communication doctoral student performed the coding. A reliability check on a random sample of articles with a second coder yielded a .85 level of agreement by Holsti's coefficient of reliability.²⁶
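
For two coders, Holsti's coefficient of reliability is 2M/(N1 + N2): twice the number of agreements divided by the total coding decisions both coders made. A minimal sketch in Python (the function and the sample codings below are ours, for illustration only):

    def holsti_reliability(coder_a, coder_b):
        # Holsti's coefficient: 2M / (N1 + N2), where M is the number of
        # agreements and N1, N2 are the numbers of coding decisions made
        # by each coder (equal here, since both code every unit).
        if len(coder_a) != len(coder_b):
            raise ValueError("both coders must judge the same units")
        agreements = sum(a == b for a, b in zip(coder_a, coder_b))
        return 2 * agreements / (len(coder_a) + len(coder_b))

    # Hypothetical codings of ten articles on one nominal variable
    # (medium: N = newspaper, T = television, O = other).
    coder_a = ["N", "N", "T", "O", "N", "T", "N", "O", "T", "N"]
    coder_b = ["N", "N", "T", "N", "N", "T", "N", "O", "T", "T"]
    print(holsti_reliability(coder_a, coder_b))  # 0.8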

Results

The 486 full-length articles using content analysis published during 1971-95 represented a fourth (24.6%) of the total 1,977 research articles published. Recall Wilhoit's estimate that 21.9% of theses and dissertations used content analysis, and Moffett and Dominick's report of similar (21%) prevalence of content analyses in Journal of Broadcasting.²⁷

But if our data suggest that scholars in JMCQ are slightly more likely than graduate students and scholars published in JOB to employ content analysis, recall that our research objective was to examine change over time. Table 1 data reflect the annual total of articles in the journal, annual frequency of content analyses in particular, and percentage of the annual total represented by that frequency.
Annual totals of full-length articles published, viewed across the twenty-five years, support Stempel's observation²⁸ that the number of articles in JMCQ is increasing. When ranks are assigned years (the most recent is "1") and annual totals (the largest number is "1"), between-ranks correlation indexes a statistically significant increasing trend (tau = 0.25, p < .04) - it is not monotonic because of the drop in annual totals beginning in 1991 after the giant 1988-90 issues.
When ranks are based just on annual "raw frequency" of content analyses, tau is stronger (0.51, p < .001), indicating a significant trend toward increased publication of content analyses. Peak years were 1985 and 1989. After 1976 no volume had fewer than ten content analyses; four of six before 1977 had ten or fewer.
But is that growth in content analysis a result of increasing publication of all articles? No. When ranks are based on the relative percentage of each year's total which are content analyses (e.g., 1971's 6.3%, or the 1995 peak of 34.8%), tau (0.48, p < .001) indicates a twenty-five-year trend of increased content analyses. JMCQ was indeed publishing more articles overall, but a significantly increasing percentage of them were content analyses.
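
The trend tests here are rank-order correlations: years are ranked by recency and annual figures by magnitude, and Kendall's tau indexes agreement between the two rankings. A sketch of an equivalent computation in Python, using the annual content-analysis frequencies from Table 1 (we assume scipy's tau-b; the authors' exact tie-handling procedure is not specified):

    from scipy.stats import kendalltau

    years = list(range(1971, 1996))
    # Annual frequencies of content analyses, from Table 1
    counts = [4, 10, 17, 10, 17, 10, 18, 14, 18, 13, 13, 20, 19,
              27, 31, 25, 29, 29, 35, 28, 15, 22, 20, 19, 23]

    # Correlating year with count directly is equivalent to correlating
    # the two reversed rankings ("most recent = 1" vs. "largest = 1").
    tau, p = kendalltau(years, counts)
    print(f"tau = {tau:.2f}, p = {p:.4f}")  # roughly tau = .51, as reported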


TABLE 1
Twenty-Five Years of Content Analysis Research Articles
in Journalism & Mass Communication Quarterly

          Total Articles      Articles Employing Content Analysis
Year      Appearing (f)       f         %
1971      64                  4         6.3
1972      66                  10        15.1
1973      76                  17        28.0
1974      73                  10        13.7
1975      75                  17        22.7
1976      69                  10        14.5
1977      70                  18        25.7
1978      74                  14        18.9
1979      87                  18        20.9
1980      64                  13        20.3
1981      57                  13        24.1
1982      64                  20        31.2
1983      73                  19        26.0
1984      97                  27        27.8
1985      93                  31        33.3
1986      88                  25        28.4
1987      92                  29        31.5
1988      112                 29        25.9
1989      107                 35        32.7
1990      113                 28        24.8
1991      76                  15        19.5
1992      77                  22        28.6
1993      71                  20        28.2
1994      73                  19        26.0
1995      66                  23        34.8

Total     1,977               486       24.6

Score for trend:
tau =     .25                 .51       .48
p =       .038                .0003     .0006

Table 2 shows a plurality (38.5%) of articles were solo-authored by faculty. When faculty-coauthored articles (26.1%) are added, faculty account for two-thirds of the total (64.6%). Another one in six (17.5%) was coauthored with one or more students, but a respectable percentage (11.7%) was student-authored (10.7% by a single student and 1.0% by multiple students).²⁹ In sum, 50.2% were solo-authored and 49.8% were coauthored.
Newspapers (46.7%) were the medium of choice. A fourth (24.3%) of content analyses, however, focused on television. The 15% "Other" figure reflects content ranging from wire services to presidential debate transcripts to catechisms.

The primary focus on news/editorial content (71%) is consistent with JMCQ's news "heritage," but the limited focus on entertainment belies the nature of most consumers' use of media.³⁰
Nine of ten studies focused on U.S. media (86%) or on U.S. and other
nations' media (5.1%). While one expects accessible domestic media to be
examined, this imbalance may limit the usefulness of some content data in
building comparative or cross-cultural theory.
Few studies were based on data from a representative random sample or census. Most (77.8%) used convenience or purposive samples. Nonrandom samples are often desirable - even necessary - for logistical or conceptual reasons specific to a particular study,³¹ but generalizability and use of some statistics can become questionable in such instances.


TABLE 2
Authorship and Focus Profile for Content Analyses
in Journalism & Mass Communication Quarterly (1971-95)
(N = 486)

                                                      %
Authorship      solo faculty                       38.5
                multiple faculty                   26.1
                faculty and student(s)             17.5
                student(s) only                    11.7
                other                               6.2
                total                             100.0

Focus: Medium   newspaper(s)                       46.7
                television                         24.3
                magazines                          13.6
                other                              15.4
                total                             100.0

Content         news/editorial                     71.0
                advertising                        10.1
                entertainment                       7.2
                visual/graphics/other              11.7
                total                             100.0

Nation          U.S.                               86.0
                non-U.S.                            8.8
                combination                         5.1
                total                             100.0

Sampling        census or representative           22.2
                purposive                          68.1
                convenience                         9.7
                total                             100.0

Table 3 shows just over a fourth (27.6%) of the articles involved an explicit theoretical framework.³² Under half (45.7%) involved explicit statements of hypotheses or research questions that guide research design and focus the analysis.³³ About 10% used content analysis with another research method, such as a survey or experiment, and 16.9% incorporated extra-media data, such as Census or circulation/ratings data.
Just over half reported specific intercoder reliability levels, in all but
a very few instances citing only an overall figure rather than variable-by-
variable reliabilities. One author reported only that reliability was within
"acceptable" levels. Reliability measures were usually identified ("simple
percentage of agreement," "Holsti's method," etc.), but only 10.1% explicitly
specified random sampling in reliability tests. Intracoder reliability reports
were even rarer (6.6%). One researcher who did all the coding in one study
observed that, as a result, no measure of reliability was necessary. Some
content analyses may, of course, involve measurement that requires little
coder judgment (e.g., the number of photos on a front page), but reporting of
reliability seems essential.
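
A reliability sample that is itself randomly drawn from all coded units - the practice the authors recommend - guards against checking agreement only on easy or atypical content. A minimal sketch in Python; the population and sample sizes are hypothetical:

    import random

    coded_units = list(range(1, 487))  # IDs of all coded articles (hypothetical)
    random.seed(7)                     # reproducible selection
    # Both coders independently recode this randomly drawn subset, and
    # agreement (e.g., Holsti's coefficient) is then computed on it.
    reliability_sample = random.sample(coded_units, 50)
    print(sorted(reliability_sample)[:5])  # first few sampled IDs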



TABLE 3
Theory and Methodology of Content Analyses
in Journalism & Mass Communication Quarterly (1971-95)

Percentage of content analyses (N = 486) that included:

Explicit theoretical framework                         27.6
Hypotheses or research questions                       45.7
Other research method                                  10.1
Extra-media data                                       16.9
Report of intercoder reliability                       56.0
Reliability based on random sample of coded content    10.1
Descriptive statistics only                            40.1

Four of ten (40.1%) articles used only descriptive statistics (e.g., percentages, averages) in presenting results. Of course, there is no intent here to suggest what constitutes "appropriate" statistical procedures; research questions or hypotheses, level of measurement, desire for parsimony, etc., guide such decisions. And some statistical procedures require random sampling absent in many studies.
Table 4 presents selected frequency, authorship, theoretical, and method data by time period, with data presented in five five-year periods.
Content analyses as a percentage of total articles clearly accelerated during the first fifteen years and have leveled off in the past decade (from 1971-75's 16.5% to the twin peaks of 28.6% and 28.5% in 1981-85 and 1986-90).
The data reflect increased coauthorship, with the percentage increasing from 34.5% during 1971-75 to 64.6% during 1991-95. Stempel³⁴ reported in 1990 that 25% of all JMCQ articles in 1973 and 40% in 1989 were coauthored. He predicted correctly that coauthorship would continue to increase - yielding "better research" because "two heads are better than one" - due to larger faculties, multimethod approaches requiring different talents, and student-mentor coauthoring.
Table 4 data suggest an initial trend to diminished use of convenience or purposive samples. One in five used representative data in 1971-75, compared to 28.8% in 1986-90. However, the 1991-95 period saw the percentage return to the 1970s level (18.2%).
There was no clear trend or pattern in use of explicit theoretical frameworks. The largest percentage - by a margin of 1% - was 1976-80's 30.1%. That same period, however, saw one of the smallest percentages (16.2%) of studies with explicit hypotheses or research questions. And during 1986-90, when use of theoretical underpinnings reached its nadir (24.7%), use of explicit hypotheses or research questions paradoxically reached its zenith (26.6%). Overall, the percentage with hypotheses or research questions increased monotonically until 1991-95, when it dropped rather sharply.

TABLE 4
Trends in Content Analyses
in Journalism & Mass Communication Quarterly (1971-95)

                               Percentage of Content Analyses that have:
         Content   As % of   Co-       Represent.    Theoret.   Res. Quest./  Other    Reported     Descript.
Years    Anal. (f) All Art.  authors   Sample/Cens.  Framework  Hypotheses    Method   Reliability  Statistics
1971-75  58        16.5      34.5      19.0          27.6       14.0          6.9      50.0         51.7
1976-80  73        20.1      45.2      17.8          30.1       16.2          12.3     46.6         42.5
1981-85  110       28.6      42.7      21.8          29.1       23.0          9.1      50.9         48.2
1986-90  146       28.5      52.1      28.8          24.7       26.6          8.9      56.2         33.6
1991-95  99        27.2      64.6      18.2          28.3       20.3          13.1     71.7         32.3

The data show no recognizable trend, however, in use of content analyses with other methods, despite an increase between 1971-75's 6.9% and 1991-95's 13.1%.
But if Table 4 data do not reflect movement toward more theoretical rigor or representative data, they do show a near-monotonic rise in articles reporting intercoder reliability, from 50% during the first five-year period to 71.7% most recently.
Moreover, statistical procedures have become increasingly sophisticated. More than half (51.7%) of the 1971-75 studies reported only descriptive statistics, but in 1976-80 that had dropped to 42.5%, and to 32.3% in 1991-95.

Conclusions

As suggested earlier, growth in JMCQ content analyses may reflect growth in number of journalism/mass communication scholars, growth in graduate programs, and broadening of the field to include broader definitions of content. Easier access to content through databases or archives may continue to contribute to use of the method. Computerized content analysis, as old as the General Inquirer³⁵ and as new as free Web site programs, may signal continued growth.³⁶ These new sources of content and alternatives to human coding, of course, raise new questions about data representativeness, reliability, and validity.
On the other hand, we optimists would like to believe that a changing JMCQ profile reflects an increasing theoretical interest in content. Whether studying effects or processes of communication, one can't ultimately understand or explain either unless one proceeds from a theoretical context to examine the content that is produced or that yields a particular effect.
But, paradoxically, if growth reflects increased theoretical importance of content, it is all the more disconcerting that there is no clear parallel trend toward more theoretical rigor that would allow linkage of content to process and effects. In other words, growth has not been paired with increased theoretical grounding, nor with more frequent formulation of explicit research questions or hypotheses. The latter seem, at minimum, to be essential to tightly designed studies and focused analyses - in short, to relationship-testing and theory building.
Similarly, continued reliance on nonrepresentative data from available domestic media, and on news content that may be of decreasing importance to many consumers, may limit the contribution of content analysis to theory.
On the plus side, progress has occurred in reporting reliability, though too few studies provide clear information on whether reliability samples represent the population of coded data, as they should.³⁷ Still, reliability is a necessary condition for validity, and improved reliability reporting is progress.
And movement away from simple description of means and percentages might be viewed as progress, if descriptive statistics have been supplanted by procedures allowing tests of hypotheses and relationships. But recall the lack of growth in explicit theoretical development or testable research hypotheses, or in types of sampling that permit statistical inference. Improving reliability and statistical sophistication of analyses of nonrepresentative data in studies that are not theoretically grounded or don't permit clear, focused research design is not progress.
A final note: it is far too easy to criticize decisions that researchers have made, often for the most legitimate of reasons. The goal, if not the accomplishment, of this study, however, has been to describe changing patterns in content analysis, in order to help researchers and students see the progress that has been made in achieving appropriate levels of rigor, and how they might contribute to continued progress.

NOTES

1. David Weaver and G. Cleveland Wilhoit, "A Profile of JMC Educators: Traits, Attitudes and Values," Journalism Educator 43 (summer 1988): 4-41.

2. We will refer to the journal, previously (before 1995) known as Journalism Quarterly or JQ, by its current acronym, JMCQ.

3. John C. Schweitzer, "Research Article Productivity by Mass Communication Scholars," Journalism Quarterly 65 (summer 1988): 479-84.

4. Dennis T. Lowry, "Population Validity of Communication Research: Sampling the Samples," Journalism Quarterly 56 (spring 1979): 62-68, 76.

5. Lawrence J. Chase and Stanley J. Baran, "An Assessment of Quantitative Research in Mass Communication," Journalism Quarterly 53 (summer 1976): 308-311.

6. Stephen Lacy and Daniel Riffe, "Sins of Omission and Commission in Mass Communication Quantitative Research," Journalism Quarterly 70 (spring 1993): 133-39.

7. Gilbert L. Fowler Jr., "Content and Teacher Characteristics for Master's Level Research Course," Journalism Quarterly 63 (autumn 1986): 594-99.

8. Frances Goins Wilhoit, "Student Research Productivity: Analysis of Journalism Abstracts," Journalism Quarterly 61 (autumn 1984): 655-61.

9. G. Cleveland Wilhoit, "Introduction," in Mass Communication Review Yearbook Vol. 2, ed. Wilhoit and Harold de Bock (Beverly Hills, CA: Sage, 1981), 13-15.

10. Wayne A. Danielson and G. Cleveland Wilhoit, A Computerized Bibliography of Mass Communication Research, 1944-1964 (NY: Magazine Publishers Association, 1967), xvii.

11. Dennis T. Lowry, "An Evaluation of Empirical Studies Reported in Seven Journals in the '70s," Journalism Quarterly 56 (summer 1979): 262-68, 288.

12. William Evans and Susanna Hornig Priest, "Content in Context: Can Content Studies of Science News Be Theory-Driven in an Information Age?" (paper presented at the annual meeting of AEJMC, Atlanta, August 1994).

13. Daniel Riffe, Stephen Lacy, and Michael Drager, "Sample Size in Content Analysis of Weekly News Magazines," Journalism & Mass Communication Quarterly 73 (autumn 1996): 635-44.

14. Peter Gerlach, "Research About Magazines Appearing in Journalism Quarterly," Journalism Quarterly 64 (spring 1987): 178-82.

15. E. Albert Moffett and Joseph R. Dominick, "Statistical Analysis in the JOB 1970-85: An Update," Feedback 28 (spring 1987): 13-16.

16. James W. Tankard Jr., Tsan-Kuo Chang, and Kuo-Jen Tsang, "Citation Networks as Indicators of Journalism Research Activity," Journalism Quarterly 61 (spring 1984): 89-96, 124.

17. While he did not address this question specifically, Perloff's twenty-year look at journalism research showed an increased emphasis on quantitative methods (51% during the 1955-64 period, followed by 60% during the 1965-74 period) and statistical analysis (38% and 56% respectively). See Richard M. Perloff, "Journalism Research: A 20-Year Perspective," Journalism Quarterly 53 (spring 1976): 123-26.

18. Guido H. Stempel III, "Introduction," in Journalism Quarterly Special Supplement: Cumulative Index to Vols. 51-60, 1974-83 61 (1984): iii-iv.

19. Guido H. Stempel III, "Trends in Journalism Quarterly: Reflections of the Retired Editor," Journalism Quarterly 67 (summer 1990): 277-81.

20. Stempel, "Trends," however, points out that the growth in number of faculty in journalism/mass communication may not have been matched by growth in scholarly activity (p. 278).

21. Pamela J. Shoemaker and Stephen D. Reese, "Exposure to What? Integrating Media Content and Effects Studies," Journalism Quarterly 67 (winter 1990): 649-52.

22. Robert L. Stevenson, "Defining International Communication as a Field," Journalism Quarterly 69 (autumn 1992): 543-53.

23. The journal's "Research in Brief" section was discontinued in 1989. To maximize comparability across all twenty-five years, therefore, we limited our analysis to only full-length articles. We recognize that some important content analyses were omitted from the study because of this exclusion decision. For example, Stempel (1997, personal correspondence) notes that "Research in Brief" was home to pioneering content analyses of coverage of African Americans in U.S. media, of MTV, and of women in public television.

24. We chose not to rely on the published annual index. First, there were omissions, perhaps due to how method or focus of a study was labeled. Second, we were also interested in studies which used content analysis in conjunction with other methods but were not included in the index's "Content Analysis" category. An agenda-setting study involving content analysis, for example, might be indexed under "Effects Analysis" or "Communication Theory."

25. Stephen Lacy and Daniel Riffe, "Sampling Error and Selecting Intercoder Reliability Samples for Nominal Content Categories," Journalism & Mass Communication Quarterly 73 (winter 1996): 963-73.

26. Ole R. Holsti, Content Analysis for the Social Sciences and Humanities (Reading, MA: Addison-Wesley, 1969).

27. Moffett and Dominick, "Statistical Analysis in the JOB 1970-85."

28. Stempel, "Trends in Journalism Quarterly."

29. The "other" category (6.2%) includes studies authored or coauthored by researchers outside academic settings.

30. On the other hand, analyses of entertainment content are likely dominated by a focus on television, and may be more at home in the Journal of Broadcasting and Electronic Media. Obviously, patterns of publication in JMCQ are a function of what is (or is not) submitted.

31. Daniel Riffe, Charles F. Aust, and Stephen R. Lacy, "The Effectiveness of Random, Consecutive Day and Constructed Week Sampling in Newspaper Content Analysis," Journalism Quarterly 70 (spring 1993): 133-39; Daniel Riffe, Jason Nagovan, Stephen Lacy, and Larry Burkum, "The Effectiveness of Simple and Stratified Random Sampling in Broadcast News Content Analysis," Journalism & Mass Communication Quarterly 73 (spring 1996): 159-68.

32. Measurement criteria here were admittedly fairly rigorous. Merely reviewing literature was not sufficient. For example, author(s) had to describe explicitly (we did not "infer" theoretical context) an antecedent process (e.g., gatekeeping and news values) or effect theory (e.g., cultivation or agenda-setting), some aspect of which was related to the data or used to justify its collection.

33. Measurement criteria were equally severe for this variable; simply stating a goal of "finding out" how much newspaper space was devoted to news of industrial strikes, or how many models in magazine ads were smiling, did not constitute hypotheses or research questions. Both required literature review, and hypotheses required specification and prediction that a level of one variable would be associated with another. Research questions, to be credited, needed to reflect a literature-directed purpose beyond simply "measuring things," but without the additional directional prediction of hypotheses. For example, research questions might ask whether a pattern of coverage found in a previous study has changed over time; or how media portrayal corresponded to real-world norms or data. A hypothesis, by contrast, might predict greater emphasis on conflict in Third World coverage than in news from Western nations.

34. Stempel, "Trends in Journalism Quarterly."

35. Philip J. Stone, D.C. Dunphy, M.S. Smith, and D.M. Ogilvie, The General Inquirer: A Computer Approach to Content Analysis in the Behavioral Sciences (Cambridge, MA: MIT Press, 1966).

36. For example, Hauss describes software programs capable of analyzing media coverage, both print and broadcast transcripts, for the presence of as many as forty variables. Deborah Hauss, "Measuring the Impact of Public Relations: New Electronic Research Methods Improve Campaign Evaluations," Public Relations Journal 49 (February 1993): 14-21.

37. Lacy and Riffe, "Sampling Error and Selecting Intercoder Reliability Samples."
