
Book Review Symposium: Qualitative Literacy

Sociological Methods & Research
2023, Vol. 52(2) 1073–1085
© The Author(s) 2022
Article reuse guidelines: sagepub.com/journals-permissions
journals.sagepub.com/home/smr

Sample Selection Matters: Moving Toward Empirically Sound Qualitative Research

Small, M.L. and Calarco, J.M. 2022. Qualitative Literacy: A Guide to Evaluating Ethnographic and Interview Research. Oakland, CA: University of California Press. ISBN 978-0-520-39066-9

Reviewed by: Stefanie DeLuca, Johns Hopkins University, Baltimore, MD, USA
DOI: 10.1177/00491241221140425

Abstract
Increasingly, the broader public, media and policymakers are looking to qualita-
tive research to provide answers to our most pressing social questions. While
an exciting and perhaps overdue moment for qualitative researchers, it is also a
time when the method is coming under increasing scrutiny for a lack of reliabil-
ity and transparency. The question of how to assess the quality of qualitative
research is therefore paramount, but the field still lacks clear standards to evalu-
ate qualitative work. In their new book, Qualitative Literacy, Mario Luis Small and
Jessica McCrory Calarco aim to fill this gap. I argue that Qualitative Literacy offers
a compelling set of standards for consumers to assess whether an in-depth inter-
view or participant observation was of sufficient quality and, to an extent,
whether sufficient time was spent in the field. However, by ignoring the vital
importance of employing systematic, well-justified, and transparent sampling
strategies, the book implies that such essential criteria can be skipped, under-
mining the potential contribution of qualitative research to a more cumulative
creation of scientific knowledge.

Keywords
Qualitative methods, sampling, interview methods, mixed methods,
transparency
While qualitative sociologists have long revealed the limitations of surveys, identified faulty policy assumptions, and upended what scientists thought they
knew about people’s lives, only recently has such research become prominent
in the media and in key policy debates (Edin et al. 2022). Increasingly, qualita-
tive researchers appear as experts in national news outlets, weigh in on the
nation’s op-ed pages, and give Congressional testimony. In sum, the broader
public is looking to qualitative research to find the answers that surveys, experi-
ments, and even big data often cannot provide. Even while yielding powerful
findings, some with unprecedented scope, quantitative research often fails to
identify the processes and mechanisms that underlie its results. Qualitative
researchers have been called upon to fill in the gap.
This is certainly a cause for celebration. But it is also a time when the
method is coming under increasing scrutiny (Edin et al. 2022), and has
been criticized for a lack of reliability and transparency. Just as the
promise of qualitative research is enormous, so is the peril of getting it
wrong. Thus, the question of how to assess the quality of qualitative research
is paramount. Recognizing this, between 2003 and 2005, the National
Science Foundation (NSF) commissioned two working groups which pro-
duced lengthy reports capturing a range of perspectives on how such research
should be evaluated (Ragin et al. 2004; Lamont and White 2005). Yet, practi-
tioners of qualitative methods still lack clear standards to evaluate qualitative
work. In their new book, Qualitative Literacy, Mario Luis Small and Jessica
McCrory Calarco (2022; hereafter S&C) aim to fill this gap.
I will argue that while S&C offer a useful set of standards for consumers to
assess whether an in-depth interview or participant observation was of suffi-
cient quality and, to a lesser extent, whether sufficient time was spent in the
field, the glass they offer is only half full. By ignoring the vital importance of
employing systematic, well-justified, and transparent sampling strategies, the book implies that such essential criteria can be blithely skipped over, under-
mining the potential contribution of qualitative research to a more cumulative
creation of scientific knowledge.
As Susan Silbey argued in her 2004 NSF workshop essay on designing
qualitative research:

The goal of research is to produce results that can be falsifiable and in some way
affirmable by rational processes of actors other than the author. Most important is
that the researcher provide an account of how the conclusions were reached, why
the reader should believe the claims and how one might go about trying to
produce a similar account. What makes science morally, and rationally, compel-
ling is that it is a public enterprise… distinguished by the claim to produce shared
understanding through modes that can be rationally and collectively apprehended.
In short, we have an obligation not to “hide the ball.” To the extent that we do
“hide the ball,” we transform our science into rhetorical performance.

A Glass Half Full


The central contribution of Qualitative Literacy is the vital clarification that
what makes qualitative research distinct from quantitative and archival
research is that “the researcher not only collects the data but also produces
the data,” and that interviews and participant observation create data
through “reactive interactions” (S&C 12). This is exactly what makes quali-
tative research so powerful: the fact that through the intimate revelations of
everyday experiences, research participants provide countless chances to
teach researchers new things that push them beyond their own often
limited imaginations and experiences. Precisely because qualitative data are
created in the interaction between the researcher and the participant, the stric-
tures of surveys with fixed choice answers are gone, the processes and
mechanisms that underlie coefficients and experimental estimates are
exposed, and the stereotypes—some couched in theory—that often persist
in the confines of academic offices evaporate in the face of ground truth.
The extent of this “co-production” goes unappreciated by those who have
never collected qualitative data.
After elevating this unique aspect of qualitative research, S&C outline
dimensions for assessing whether in fact qualitative research as it is presented
in finished work actually reflects the best of this co-creative opportunity, or
instead falls short. The majority of the book provides a description of five
compelling criteria to assist consumers in assessing this co-creative dimen-
sion of qualitative work: cognitive empathy, heterogeneity, palpability,
follow-up, and self-awareness. The criteria are instantly recognizable to
those who conduct qualitative research as essential to what constitutes a high-
quality in-depth interview or ethnographic study. The text offers rich explica-
tion of the criteria and concrete examples; Qualitative Literacy is essential
reading for anyone collecting or assessing qualitative data.
Cognitive empathy is what opens the doors and creates the bridges that bring together, through observation, people who typically never meet in person. Cognitive empathy is the invitation to “tell me the story of your life,” which confers expertise on the participant and puts the researcher in the student’s desk (see Boyd and DeLuca 2017; Lareau 2022). Cognitive
empathy allows the researcher to authentically “show” rather than “tell,” in the write-up of the results, where people are coming from. It reveals the world according to the participants rather than the researcher.
When a qualitative study demonstrates heterogeneity of outcomes or
mechanisms, it is a signal that the researcher did not succumb to the tempta-
tion of presenting only the findings consistent with their priors or extant
theory. Rather, respondents’ life stories, world views, motivations, and beha-
viors are presented with their messy complexity. Observing heterogeneity in
qualitative findings also suggests that interviews were systematically con-
ducted and coded (so as to reveal possible heterogeneity), and that there
was adequate “exposure”—the term S&C use to indicate time spent in the
field—to allow such heterogeneity to reveal itself.
The palpability standard is met when the data take a reader into the world
of another person, including visual and auditory descriptions of the context in
which the interview or observation took place. Palpability depends on provid-
ing sufficient concrete details—especially in a participant’s own words—for a
reader to imagine the participant’s reality, rather than relying on a researcher’s
abstract summaries (see also Katz 2004). Meeting this standard signals that
the researcher did not merely ask questions that could be answered with a
“yes” or “no,” but rather, pushed for richer responses by employing questions
such as, “Tell me the whole story about that last time that happened,” and
detailed follow-up probes. Satisfying this criterion requires cognitive
empathy. The payoff is a more authentic, emotionally rich account of
aspects of a participant’s story or circumstances.
Evidence of follow-up shows that an iterative approach was used, one that
deployed multiple tools in the sociologist’s toolbox—survey and administra-
tive data, archival materials, additional interviews, additional sites, and so on—
to confirm findings gleaned from interviews (sometimes involving follow-up
with the participants themselves). Meeting this criterion indicates that the
researcher resisted the temptation to be lazy or to jump to conclusions too
soon.
S&C define self-awareness as “the extent to which the researcher under-
stands the impact of who they are on those interviewed or observed” (119).
Their description of this criterion goes beyond empty statements about posi-
tionality and gives readers and researchers specific dimensions (access, dis-
closure, and interpretation) through which to identify and account for their
impact on the research and/or to minimize it when necessary.
I am especially appreciative of the reflection that demographic matching
between interviewers and participants cannot replace self-awareness (S&C,
129–131). In situations where interviewers differ from participants on socio-
economic or racial characteristics, participants may be less likely to assume
shared understandings, and more likely to explain their experiences and views
in detail.

A Glass Half Empty


The subtitle of Qualitative Literacy makes the claim that it is “a guide to
evaluating ethnographic and interview research,” and the preface and intro-
duction explain that the book is aimed at helping consumers—reviewers,
program officers, non-qualitative researchers—assess the strengths and weak-
nesses of such research. On the first page, the authors ask the reader if, given
two pieces of qualitative work, the reader could tell which is “empirically
unsound,” and “what criteria you would use to tell the difference” (ix). We
learn that the book is meant to instruct readers on how to determine
whether a given qualitative study is “scientifically believable” (xii).
While it admirably addresses a gap in the field, I argue that Qualitative Literacy falls short of filling it, because the gap is much broader than S&C claim. This
becomes evident on page 15, where S&C write, “we happen to disagree
with the propositions that either survey or experimental data collection prin-
ciples should inform qualitative data collection, that interviews must have
large sample sizes, or that ethnographies must have multiple sites” (emphasis added).
I agree with the authors that not all empirical claims are the same—nor
should they be. Nonetheless, how one judges the quality of research—any
research—should depend on whether, and to what extent, the research can support its claims. I see no a priori reason to eschew survey, experimental, or other
data collection principles that inform qualitative research design, if doing so
adds more tools to help answer a question or back up a claim. S&C take the
opposite view: “except for a high level of exposure, no design feature is neces-
sary for all interview studies to be empirically effective” (149).
Thus, Qualitative Literacy takes an extraordinarily strong stance in the
long-running debate over whether qualitative work should be held to the
same standards as quantitative research, or whether qualitative researchers
should create their own standards and language (Becker 1996, 2009; King
et al. 1994; Lamont and White 2005; Lareau 2012; Ragin et al. 2004;
Small 2009 and others). Qualitative research is indeed different from quanti-
tative research, in all of the ways S&C describe. But taken as a whole, social
science is a “public enterprise… distinguished by the claim to produce
shared understanding through modes that can be rationally and collectively
apprehended” (Silbey 2004). This necessarily involves engagement with
researchers deploying different methods than our own.
In defending the logic of qualitative research as distinct from that employed in quantitative research, we risk losing sight of basic standards
that all works of social science must share (Gerring 2005). If qualitative
researchers want their findings to add to scientific knowledge or to inform
public debates, then it is not enough just to conduct good in-depth interviews
or ethnographic observations; the work also needs to be able to provide sci-
entific contributions that can be translated, reproduced, and built on. For
example, to pick but one criterion to which both quantitative and qualitative
researchers should adhere, others in the scientific community need to know
how the research participants were selected and included in the study—
including from what groups, and how successful the researcher was in collect-
ing data from the intended participants. To neglect this principle is to abandon the aim of contributing to the creation of knowledge or, worse still, to create “mis-learning” by getting it wrong.
While the logic behind sampling in qualitative work need not always be
identical to that typically used in quantitative work (see Small 2009), a well-
justified sampling design, based on a well-defined population of interest, is
nonetheless necessary to ensure that a researcher has explored the range in
that population of interest, including both “known heterogeneity” (theoretic-
ally important differences based on previous research or logic, like gender,
age, ethnicity) and “unknown heterogeneity” (allowing for the unexpected).
It is simply not good enough to talk only to the people who respond first to
a study invitation—such people will no doubt be different from those who
are slow to volunteer or never opt in at all, and the “easy gets” alone
cannot speak to the experiences of the whole. In fact, it is often the “hard
to gets” who—by virtue of their disadvantage (or privilege)—have the
most vital stories to tell. Adhering to the principle of employing a well-
justified, systematic design helps ensure that we do not neglect the very par-
ticipants who could transform our findings or theoretical contributions.
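
To make the idea of a systematic design concrete, consider the following sketch. It is mine, not S&C’s or any study’s actual code, and it assumes Python with pandas and a hypothetical roster of eligible participants: interview slots are budgeted across strata defined by known heterogeneity, while random draws within each stratum leave room for unknown heterogeneity to surface.

```python
# A minimal sketch (mine, not any cited study's code): stratify invitations on
# "known heterogeneity" (here, gender and age group) and draw randomly within
# strata so "unknown heterogeneity" can still surface.
# The roster is synthetic and every column name is hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
roster = pd.DataFrame({
    "person_id": range(300),
    "gender": rng.choice(["woman", "man"], size=300),
    "age": rng.integers(18, 80, size=300),
})

# Known heterogeneity: dimensions prior research or logic says matter.
roster["age_group"] = pd.cut(roster["age"], bins=[17, 34, 54, 120],
                             labels=["18-34", "35-54", "55+"])
strata = ["gender", "age_group"]

# Random draws within each stratum (assumes each stratum has >= 10 members).
invited = (roster.groupby(strata, observed=True, group_keys=False)
                 .sample(n=10, random_state=42))

# Enumerate who was NOT invited, so the write-up can say who is missing.
not_invited = roster.drop(invited.index)
print(invited.groupby(strata, observed=True).size())
print(f"Invited {len(invited)} of {len(roster)}; {len(not_invited)} not invited")
```

The particular tool does not matter; what matters is that the draw is documented and reproducible, and that the researcher can enumerate exactly who was left out.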
Achieving generalizability in qualitative research is challenging, as it is in
any kind of empirical research (generalizability is a concept not bound to a
specific method). Most qualitative research aspires to have some relevance
beyond just the specifics of a given study. Those researchers who are able
to select their study participants from an available population, such as a
survey, policy intervention, or some other well-defined group (such as chil-
dren enrolled in a given grade), and also use random or stratified random sam-
pling designs to select samples from these populations can and should report
response rates. If the sample is randomly selected and response rates are suf-
ficiently high, such research is more generalizable than research collected
through venue or convenience sampling. Besides increasing the generalizability of the study, random sampling ensures heterogeneity. When sampling randomly is not possible, working to ensure that the sampling
design is nonetheless well-justified and systematic can accomplish the
same aim. The goal is to ensure that the researcher is listening to all of the
kinds of participants whose voices should be heard in order to draw reliable
conclusions, and to engender trust among other researchers that the research
can be relied upon.
Applying these standards also increases transparency, which is critical for
other researchers who wish to replicate one’s findings in whole or part. While
I do not believe a perfect standard of replication can always be met in quali-
tative research due to the co-creative aspect of the work, there is no a priori
reason to reject out of hand the aim of producing findings that can be repli-
cated by other researchers—whether qualitative or quantitative.[1]
As S&C note, one book cannot do it all. Yet, given the stated purpose of
Qualitative Literacy, and its potential reach, I worry that a new generation of
aspiring qualitative researchers might come away from the book believing
that if they implement these five criteria, they have met the standard for
empirically sound qualitative research. The problem is, no matter how
adroitly one deploys one’s craft with a given research participant, it is still crit-
ical to know how and why that participant was included in the study.
While reading this book, I am reminded of Charles Manski’s seminal
argument about identification problems in quantitative social science
research—if a research design or sampling method cannot account for
unobservables, it does not matter how big the sample is; causal inference
is weakened (1993; see also Berk 1983). Given the nature of the questions
qualitative researchers ask, which are often exploratory and evolve over
time, the analogy does not hold in full. But it can in part: if qualitative
researchers neglect to represent the full range of the population in their
research, their ability to support their claims weakens significantly
(Schaefer and Alvesson 2020).
Are we to assume that the shyest, the busiest, or the most socially isolated
people in our population of interest are simply observations missing at
random—that what they say would not affect our empirical claims? S&C
hold that variation as a feature of research design is not “necessary for a
study to be empirically sound” and that “exposure” is enough to ensure het-
erogeneity (49, 51) “unless the researcher interviewed very few people” (62).
Here, the obvious questions arise: what exactly is adequate exposure, and how does one know whether one has achieved it? This recalls the “saturation” debate that has rocked the subfield for decades.
In sum, in order to rise to the standards of “empirically sound qualitative research,” qualitative scholars must repeatedly (and systematically) ask themselves, “Whom did I not talk to?” Taking the extra steps to answer that question
to the best of one’s ability makes the work harder, but I hold that it is more
costly to ignore it, for the researcher, the scientific community, and other con-
sumers of qualitative work. As the ethnographer Mitchell Duneier argues,
when qualitative researchers fail to consider the respondents they neglected,
it is easier to “sidestep alternative perspectives or deceive themselves into
thinking that these alternative perspectives either do not exist or do not
have implications for their developing line of thinking” (2011:9). Duneier
argues that, “Ethnographers well into their studies could, as a matter of
course, ask a few simple questions: Are there people or perspectives or obser-
vations outside the sample whose existence is likely to have implications for
the argument I am making?” (2011:8).
In my two decades of mentoring future researchers, I have seen the high
payoff from a thoughtfully constructed sampling design. Then-graduate student Phil Garboden and postdoctoral researcher Eva Rosen observed
that while many qualitative researchers had explored the lives of low-income
tenants, few had included their landlords. There is no national registry of
owners of rental property. Nor do cities or other municipalities keep such
databases. So how might one sample landlords? Eva Rosen and Phil
Garboden scraped rental ads on multiple rental websites in three cities with
varying housing market characteristics, geocoded all scraped addresses, and
then generated a random stratified sample of units, ensuring that the tracts
chosen would represent diversity in SES and ethnicity/race (see Garboden
and Rosen 2019 for details). Due to this sampling design, the study could
claim that, while hardly representative of all landlords nationally, the findings
were generalizable to landlords renting properties in these three cities at the
time. Importantly, this design ensured that despite a relatively small sample
by quantitative standards (N = 127), it included the perspectives of a broad
range of property owners in these markets (Garboden and Rosen 2018).
Their participants included small-scale amateur landlords, highly profes-
sionalized owners of large developments, landlords who specialized in
renting to Housing Choice Voucher (Section 8) holders, and more, each
group employing a distinct set of business practices that varied across
markets. Some of these landlords engaged in practices that conformed with
stereotypes about slumlords, yet most did not (Garboden and Rosen 2018;
Rosen et al. 2021).
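
To illustrate the flavor of such a design, here is a minimal sketch in Python with pandas: stratify scraped, geocoded listings by city and tract characteristics, then draw units at random within each stratum. The listings and tract labels below are synthetic stand-ins; this is not Garboden and Rosen’s actual pipeline (see Garboden and Rosen 2019 for the real design).

```python
# A minimal sketch, loosely modeled on the design described above: a stratified
# random draw from scraped, geocoded rental listings. The listings and tract
# labels are synthetic stand-ins; this is not Garboden and Rosen's pipeline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 2000
listings = pd.DataFrame({
    "listing_id": range(n),
    "city": rng.choice(["city_a", "city_b", "city_c"], size=n),
    "tract_income": rng.choice(["low", "middle", "high"], size=n),
    "tract_majority": rng.choice(["Black", "white", "Hispanic", "mixed"], size=n),
})

# Stratify by city x tract income x tract racial/ethnic mix, so the interview
# sample spans the housing markets rather than only the easiest-to-reach ads.
strata = ["city", "tract_income", "tract_majority"]
sampled = pd.concat(
    group.sample(n=min(4, len(group)), random_state=7)
    for _, group in listings.groupby(strata)
)

print(sampled.groupby(strata).size())  # confirm every stratum is represented
print(f"Sampled {len(sampled)} of {len(listings)} listings")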
For a policy audience seeking to learn from this work, the implications of
a narrow and idiosyncratic versus a broad systematic sample are enormous;
if a policymaker presumes that most landlords operating in low-income communities are slumlords, but in fact many are over-leveraged working-class people who are themselves barely making it, with many at risk of foreclosure, then imposing greater landlord sanctions may simply push many of the providers of low-income housing out of the market, speeding the process of abandonment and disinvestment in low-income communities
(see also Greif 2022). As Garboden and Rosen argue,

“When possible, it is vastly preferable to select 100 respondents with stratified random selection than to introduce the bias associated with convenience, venue,
or snowball sampling. Like other industries, real estate consists of dozens of
niches and hundreds of supply-side networks. To sample based on location
or referrals, then, is to introduce inaccurate homogeneity into one’s sample
and potentially miss significant sections of the market.”

Perhaps I am preaching to the choir—many qualitative researchers are already employing careful sample selection or sampling designs in their
research, at least to some degree. Between 2018 and 2022, the American
Sociological Review, American Journal of Sociology, Social Forces, and
Social Problems published 856 articles, 299 (35 percent) of which used quali-
tative methods.[2,3] Of these, 164 (54.8 percent) employed in-depth interviews in some capacity,[4] with 98 using in-depth interviews as the primary method of
data collection (interviews were one of the data collection methods focused
upon in Qualitative Literacy).
Nearly all of the articles that used interviews included some information
about the participants they interviewed (such as broad demographic characteris-
tics or institutional group membership), and many described a sample selection
strategy of some kind, ranging from snowball recruitment techniques to sam-
pling techniques where research participants were selected at random from
defined sampling frames. Beyond that, however, there is a striking lack of con-
formity in the methods sections of these articles. Articles differed in the degree
to which they justified their sample selection approach, or were explicit about
who was and was not included in the study. Roughly half of the articles based in part or solely on interviews described their sample selection design with low or no transparency on these details (Table 1).
A wide variety of sample selection approaches were employed. This is not
necessarily a problem if the logic of the design can be defended and the
method of inclusion is transparent, but these standards were rarely met.
In sum, I concur with S&C (147) that one advantage of qualitative research is the ability to capture how people understand themselves. Yet without a well-justified sampling design, consumers of our work will not be able to judge whether it is empirically sound. As the great qualitative sociologist Howard S. Becker (1996) once wrote, “don’t make up what you could find out.”

Table 1. Relative Transparency of Sample Selection Procedures in Qualitative Articles, 2018–2022.

                                All Articles Using       Articles Using Interviews
                                Interview Methods        as Primary Method
                                Count      Percent       Count      Percent
Transparency
  Low/none                        88        53.7           47        48.0
  Partial                         57        34.8           36        36.7
  High                            19        11.6           15        15.3
Response rate provided            12         7.3            9         9.2
Observations                     164                       98

Notes: These data are sourced from American Sociological Review, American Journal of Sociology, Social Forces, and Social Problems over the past 5 years (January 2018–October 2022). Transparency refers to the degree to which sample selection techniques and justifications were explicitly described. For example, “low/none” typically means that the article contained no or few details describing the sample selection methods used, limited details on the sites, no mention of participants who were not reached, and scant recruitment details (e.g., participants responded to fliers posted in a non-specific geography or location); “partial” transparency typically means that the article included some description of how research participants were selected, possibly with a sampling method (purposive, snowball), but no sampling frame or well-defined pool of potential participants, or no clarity on which groups or participants were likely missed with this strategy; “high” transparency articles typically included a detailed sampling frame or well-defined population of interest, explicit justification for sample inclusion or systematic sampling approach, response rates when possible, and clarity on the limits of the sample selection strategy or attempts to mitigate such limits.
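
As a quick arithmetic check (my addition, not part of the original review), the percentage columns in Table 1 follow directly from the counts and the two column totals:

```python
# Recompute the percentage columns of Table 1 from its raw counts
# (counts transcribed from the table above; column totals are 164 and 98).
import pandas as pd

counts = pd.DataFrame(
    {"all_interview": [88, 57, 19, 12], "primary_interview": [47, 36, 15, 9]},
    index=["low/none", "partial", "high", "response rate provided"],
)
totals = pd.Series({"all_interview": 164, "primary_interview": 98})
percents = counts.div(totals, axis=1).mul(100).round(1)
print(percents)  # e.g., 88/164 = 53.7 and 47/98 = 48.0, matching Table 1
```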

Declaration of Conflicting Interests


The author declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.

Funding
The author disclosed receipt of the following financial support for the research, author-
ship, and/or publication of this article: This work was supported by the Smith
Richardson Foundation and the Bill and Melinda Gates Foundation.

ORCID iD
Stefanie DeLuca https://orcid.org/0000-0002-4122-1032

Notes
1. Importantly, perfect replication is an elusive goal in any kind of data analysis. For example, in large-scale survey research, it is not always practical to locate everyone in the same sample. Even if one could, time has passed and circumstances have likely changed. Thus, the goal is not exact replication of results, but instead replicability through a clear enumeration of the methods used to generate results.
2. At least one limitation of this exercise is that it leaves out books using qualitative
data and interviews, which typically allow for more explication and detail on dif-
ferent methodological dimensions.
3. I am grateful to Kendall Dorland, Thelonious Goerz, Matthew Gonzalez, Claire
Smith, and Margaret Tydings for their research assistance with these analyses.
4. This excludes 14 papers that are primarily single-case studies or historical comparative case studies.

References
Becker, Howard S. 1996. “The Epistemology of Qualitative Research.” Pp. 53-71 in
Ethnography and Human Development: Context and Meaning in Social Inquiry,
edited by R. Jessor, A. Colby, and R. A. Shweder. Chicago, IL: University of
Chicago Press.
Becker, Howard S. 2009. “How to Find out How to Do Qualitative Research.”
International Journal of Communication 3:545-53.
Berk, Richard A. 1983. “Introduction to Sample Selection Bias in Sociological Data.”
American Sociological Review 48(3):386-98.
Boyd, Melody L. and Stefanie DeLuca. 2017. “Fieldwork with In-Depth Interviews:
How to Get Strangers in the City to Tell You Their Stories.” Pp. 239-53 in
Methods in Social Epidemiology, 2nd edition edited by M. J. Oakes and
J. Kaufman. Hoboken, NJ: John Wiley & Sons.
Duneier, Mitchell. 2011. “How Not to Lie with Ethnography.” Sociological
Methodology 41(1):1-11.
Edin, Kathryn J., Corey D. Fields, Jonathan Fisher, David B. Grusky, Jure Leskovec,
Hazel R. Markus, Marybeth Mattingly, Kristen Olson, and Charles Varner. 2022.
“Who Should Own Data? The Case for Public Qualitative Datasets.” Working
paper.
Garboden, Philip M.E. and Eva Rosen. 2018. “Evaluation Tradecraft: Talking to
Landlords.” Cityscape 20(3):281-91.

Garboden, Philip M.E. and Eva Rosen. 2019. “Serial Filing: How Landlords Use the
Threat of Eviction.” City & Community 18(2):638-61.
Gerring, John. 2005. “What Standards Are (or Might be) Shared?” Pp. 107-123 in
Workshop on Interdisciplinary Standards for Systematic Qualitative Research,
edited by M. Lamont and P. White. Washington, DC: National Science
Foundation.
Greif, Meredith J. 2022. Collateral Damages: Landlords and the Urban Housing
Crisis. New York: Russell Sage Foundation (American Sociological Association’s
Rose Monograph Series).
Katz, Jack. 2004. “Commonsense Criteria.” Pp. 83-90 in Workshop on Scientific
Foundations of Qualitative Research, edited by C. C. Ragin, J. Nagel, and P.
White. Washington, DC: National Science Foundation.
King, Gary, Robert Keohane, and Sidney Verba. 1994. Designing Social Inquiry.
Princeton, NJ: Princeton University Press.
Lamont, Michèle and Patricia White. 2005. Workshop on Interdisciplinary Standards
for Systematic Qualitative Research. Washington, DC: National Science
Foundation.
Lareau, Annette. 2012. “Using the Terms Hypothesis and Variable for Qualitative
Work: A Critical Reflection.” Journal of Marriage and Family 74(4):671-7.
Lareau, Annette. 2022. Listening to People. Chicago, IL: University of Chicago Press.
Manski, Charles F. 1993. “Identification of Endogenous Social Effects: The Reflection
Problem.” The Review of Economic Studies 60(3):531-42.
Ragin, Charles C., Joane Nagel, and Patricia White. 2004. Workshop on Scientific
Foundations of Qualitative Research. Washington, DC: National Science
Foundation.
Rosen, Eva, Philip Garboden, and Jennifer Cossyleon. 2021. “Racial Discrimination
in Housing: How Landlords Use Algorithms and Home Visits to Screen
Tenants.” American Sociological Review 86(5):787-822.
Schaefer, Stephan M. and Mats Alvesson. 2020. “Epistemic Attitudes and Source
Critique in Qualitative Research.” Journal of Management Inquiry 29(1):
33-45.
Silbey, Susan. 2004. “Designing Qualitative Research Projects.” Pp. 121-6 in
Workshop on Scientific Foundations of Qualitative Research, edited by C. C.
Ragin, J. Nagel, and P. White. Washington, DC: National Science Foundation.
Small, Mario L. 2009. “‘How Many Cases do I Need?’: On Science and the Logic of
Case Selection in Field-Based Research.” Ethnography 10(1):5-38.
Small, Mario L. and Jessica McCrory Calarco. 2022. Qualitative Literacy: A Guide to
Evaluating Ethnographic and Interview Research. Oakland, CA: University of
California Press.

Author Biography
Stefanie DeLuca is the James Coleman Professor of Social Policy and Sociology at
the Johns Hopkins University, director of the Poverty and Inequality Research Lab,
and an affiliate of Opportunity Insights at Harvard University.
