Sample Selection Matters: Moving Toward Empirically Sound Qualitative Research
Reviewed by: Stefanie DeLuca, Johns Hopkins University, Baltimore, MD, USA
DOI: 10.1177/00491241221140425
Abstract
Increasingly, the broader public, media, and policymakers are looking to qualitative research to provide answers to our most pressing social questions. While this is an exciting and perhaps overdue moment for qualitative researchers, it is also a time when the method is coming under increasing scrutiny for a lack of reliability and transparency. The question of how to assess the quality of qualitative research is therefore paramount, but the field still lacks clear standards to evaluate qualitative work. In their new book, Qualitative Literacy, Mario Luis Small and Jessica McCrory Calarco aim to fill this gap. I argue that Qualitative Literacy offers a compelling set of standards for consumers to assess whether an in-depth interview or participant observation was of sufficient quality and, to an extent, whether sufficient time was spent in the field. However, by ignoring the vital importance of employing systematic, well-justified, and transparent sampling strategies, the book implies that such essential criteria can be set aside, undermining the potential contribution of qualitative research to a more cumulative creation of scientific knowledge.
Keywords
Qualitative methods, sampling, interview methods, mixed methods, transparency
The goal of research is to produce results that can be falsifiable and in some way affirmable by rational processes of actors other than the author. Most important is that the researcher provide an account of how the conclusions were reached, why the reader should believe the claims and how one might go about trying to produce a similar account. What makes science morally, and rationally, compelling is that it is a public enterprise… distinguished by the claim to produce shared […]

[…] the write-up of the results where people are coming from. It reveals the world according to the participants rather than the researcher.
When a qualitative study demonstrates heterogeneity of outcomes or mechanisms, it is a signal that the researcher did not succumb to the temptation of presenting only the findings consistent with their priors or extant theory. Rather, respondents’ life stories, world views, motivations, and behaviors are presented with their messy complexity. Observing heterogeneity in qualitative findings also suggests that interviews were systematically conducted and coded (so as to reveal possible heterogeneity), and that there was adequate “exposure”—the term S&C use to indicate time spent in the field—to allow such heterogeneity to reveal itself.
The palpability standard is met when the data take a reader into the world of another person, including visual and auditory descriptions of the context in which the interview or observation took place. Palpability depends on providing sufficient concrete details—especially in a participant’s own words—for a reader to imagine the participant’s reality, rather than relying on a researcher’s abstract summaries (see also Katz 2004). Meeting this standard signals that the researcher did not merely ask questions that could be answered with a “yes” or “no,” but rather pushed for richer responses by employing questions such as, “Tell me the whole story about the last time that happened,” and detailed follow-up probes. Satisfying this criterion requires cognitive empathy. The payoff is a more authentic, emotionally rich account of aspects of a participant’s story or circumstances.
Evidence of follow-up shows that an iterative approach was used, one that deployed multiple tools in the sociologist’s toolbox—survey and administrative data, archival materials, additional interviews, additional sites, and so on—to confirm findings gleaned from interviews (sometimes involving follow-up with the participants themselves). Meeting this criterion indicates that the researcher resisted the temptation to be lazy or to jump to conclusions too soon.
S&C define self-awareness as “the extent to which the researcher understands the impact of who they are on those interviewed or observed” (119). Their description of this criterion goes beyond empty statements about positionality and gives readers and researchers specific dimensions (access, disclosure, and interpretation) through which to identify and account for their impact on the research and/or to minimize it when necessary.
I am especially appreciative of the reflection that demographic matching between interviewers and participants cannot replace self-awareness (S&C, 129–131). In situations where interviewers differ from participants on socioeconomic or racial characteristics, participants may be less likely to assume shared understandings, and more likely to explain their experiences and views in detail.
                              All Articles Using       Articles Using Interviews
                              Interview Methods        as Primary Method
                              N         %              N         %
Transparency
  Low/none                    88        53.7           47        48.0
  Partial                     57        34.8           36        36.7
  High                        19        11.6           15        15.3
Response rate provided        12         7.3            9         9.2
Observations                  164                      98
Notes: These data are sourced from American Sociological Review, American Journal of Sociology,
Social Forces, and Social Problems over the past 5 years (January 2018–October 2022).
Transparency refers to the degree to which sample selection techniques and justifications were
explicitly described. For example, “low/none” typically means that the article contained no or few
details describing the sample selection methods used, limited details on the sites, no mention of
participants who were not reached, and scant recruitment details (e.g., participants responded to
fliers posted in a non-specific geography or location); “partial” transparency typically means that
the article included some description of how research participants were selected, possibly with a
sampling method (purposive, snowball), but no sampling frame or well-defined pool of potential
participants, or no clarity on which groups or participants were likely missed with this strategy;
“high” transparency articles typically included a detailed sampling frame or well-defined
population of interest, explicit justification for sample inclusion or systematic sampling approach,
response rates when possible, and clarity on the limits of the sample selection strategy or
attempts to mitigate such limits.
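
To make the table’s arithmetic explicit, the short Python sketch below recomputes the reported percentages from the raw counts (e.g., 88 of 164 articles, or 53.7 percent, were coded as low/no transparency). The counts are taken directly from the table above; the variable names and structure are illustrative only and are not part of the original analysis.

```python
# Minimal sketch (illustrative, not from the review): recompute the table's
# percentages from the raw counts reported for each group of articles.

counts = {
    "All articles using interview methods": {
        "Low/none": 88, "Partial": 57, "High": 19,
        "Response rate provided": 12, "Observations": 164,
    },
    "Articles using interviews as primary method": {
        "Low/none": 47, "Partial": 36, "High": 15,
        "Response rate provided": 9, "Observations": 98,
    },
}

for group, rows in counts.items():
    n = rows["Observations"]
    print(group)
    for category, count in rows.items():
        if category == "Observations":
            continue
        # Percentages are computed over all articles in the group,
        # e.g., 88 / 164 = 53.7 percent coded "low/none".
        print(f"  {category}: {count} ({100 * count / n:.1f}%)")
```

Run as written, the sketch reproduces each percentage in the table to one decimal place.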
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Smith Richardson Foundation and the Bill and Melinda Gates Foundation.
ORCID iD
Stefanie DeLuca https://orcid.org/0000-0002-4122-1032
Notes
1. Importantly, perfect replication is an elusive goal in any kind of data analysis. For example, in large scale survey research, it is not always practical to locate everyone in the same sample. Even if one could, time has passed and circumstances have likely changed. Thus, the goal is not exact replication of results, but instead replicability through a clear enumeration of the methods used to generate results.
2. At least one limitation of this exercise is that it leaves out books using qualitative data and interviews, which typically allow for more explication and detail on different methodological dimensions.
3. I am grateful to Kendall Dorland, Thelonious Goerz, Matthew Gonzalez, Claire Smith, and Margaret Tydings for their research assistance with these analyses.
4. This excludes 14 papers that are primarily single-case studies or historical comparative case studies.
References
Becker, Howard S. 1996. “The Epistemology of Qualitative Research.” Pp. 53-71 in
Ethnography and Human Development: Context and Meaning in Social Inquiry,
edited by R. Jessor, A. Colby, and R. A. Shweder. Chicago, IL: University of
Chicago Press.
Becker, Howard S. 2009. “How to Find out How to Do Qualitative Research.”
International Journal of Communication 3:545-53.
Berk, Richard A. 1983. “Introduction to Sample Selection Bias in Sociological Data.”
American Sociological Review 48(3):386-98.
Boyd, Melody L. and Stefanie DeLuca. 2017. “Fieldwork with In-Depth Interviews:
How to Get Strangers in the City to Tell You Their Stories.” Pp. 239-53 in
Methods in Social Epidemiology, 2nd edition, edited by M. J. Oakes and
J. Kaufman. Hoboken, NJ: John Wiley & Sons.
Duneier, Mitchell. 2011. “How Not to Lie with Ethnography.” Sociological
Methodology 41(1):1-11.
Edin, Kathryn J., Corey D. Fields, Jonathan Fisher, David B. Grusky, Jure Leskovec,
Hazel R. Markus, Marybeth Mattingly, Kristen Olson, and Charles Varner. 2022.
“Who Should Own Data? The Case for Public Qualitative Datasets.” Working
paper.
Garboden, Philip M.E. and Eva Rosen. 2018. “Evaluation Tradecraft: Talking to
Landlords.” Cityscape 20(3):281-91.
Garboden, Philip M.E. and Eva Rosen. 2019. “Serial Filing: How Landlords Use the
Threat of Eviction.” City & Community 18(2):638-61.
Gerring, John. 2005. “What Standards Are (or Might be) Shared?” Pp. 107-123 in
Workshop on Interdisciplinary Standards for Systematic Qualitative Research,
edited by M. Lamont and P. White. Washington, DC: National Science
Foundation.
Greif, Meredith J. 2022. Collateral Damages: Landlords and the Urban Housing
Crisis. New York: Russell Sage Foundation (American Sociological Association’s
Rose Monograph Series).
Katz, Jack. 2004. “Commonsense Criteria.” Pp. 83-90 in Workshop on Scientific
Foundations of Qualitative Research, edited by C. C. Ragin, J. Nagel, and P.
White. Washington, DC: National Science Foundation.
King, Gary, Robert Keohane, and Sidney Verba. 1994. Designing Social Inquiry.
Princeton, NJ: Princeton University Press.
Lamont, Michèle and Patricia White. 2005. Workshop on Interdisciplinary Standards
for Systematic Qualitative Research. Washington, DC: National Science
Foundation.
Lareau, Annette. 2012. “Using the Terms Hypothesis and Variable for Qualitative
Work: A Critical Reflection.” Journal of Marriage and Family 74(4):671-7.
Lareau, Annette. 2022. Listening to People. Chicago, IL: University of Chicago Press.
Manski, Charles F. 1993. “Identification of Endogenous Social Effects: The Reflection
Problem.” The Review of Economic Studies 60(3):531-42.
Ragin, Charles C., Joane Nagel, and Patricia White. 2004. Workshop on Scientific
Foundations of Qualitative Research. Washington, DC: National Science
Foundation.
Rosen, Eva, Philip Garboden, and Jennifer Cossyleon. 2021. “Racial Discrimination
in Housing: How Landlords Use Algorithms and Home Visits to Screen
Tenants.” American Sociological Review 86(5):787-822.
Schaefer, Stephan M. and Mats Alvesson. 2020. “Epistemic Attitudes and Source
Critique in Qualitative Research.” Journal of Management Inquiry 29(1):
33-45.
Silbey, Susan. 2004. “Designing Qualitative Research Projects.” Pp. 121-6 in
Workshop on Scientific Foundations of Qualitative Research, edited by C. C.
Ragin, J. Nagel, and P. White. Washington, DC: National Science Foundation.
Small, Mario L. 2009. “‘How Many Cases Do I Need?’: On Science and the Logic of
Case Selection in Field-Based Research.” Ethnography 10(1):5-38.
Small, Mario L. and Jessica McCrory Calarco. 2022. Qualitative Literacy: A Guide to
Evaluating Ethnographic and Interview Research. Berkeley, CA: University of
California Press.
Author Biography
Stefanie DeLuca is the James Coleman Professor of Social Policy and Sociology at
the Johns Hopkins University, director of the Poverty and Inequality Research Lab,
and an affiliate of Opportunity Insights at Harvard University.