Bryman's Social Research Methods, 6th Edition (Clark et al.)
BRYMAN'S SOCIAL RESEARCH METHODS
SIXTH EDITION
TOM CLARK
LIAM FOSTER
LUKE SLOAN
ALAN BRYMAN
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, without the prior permission in
writing of Oxford University Press, or as expressly permitted by law, by licence or under
terms agreed with the appropriate reprographics rights organization. Enquiries concerning
reproduction outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above
You must not circulate this work in any other form and you must impose this same
condition on any acquirer
Published in the United States of America by Oxford University Press 198 Madison Avenue,
New York, NY 10016, United States of America
Data available
ISBN 978-0-19-879605-3
ebook ISBN 978-0-19-263661-4
Links to third party websites are provided by Oxford in good faith and for information only.
Oxford disclaims any responsibility for the materials contained in any third party website
referenced in this work.
Screenshots and screen captures are taken from IBM SPSS® Statistics Version 26.0 for Mac;
NVivo® Version 12 for Mac; R Version 4.0.0 for Mac; Stata® Version 15 for Mac; and
Microsoft Excel® 2016 for Windows.
Hundreds of thousands of students across six continents have been fortunate enough to
learn from Alan’s publications. Few contemporary UK academics have had such a profound
effect on learning. At Oxford University Press we are incredibly proud of Alan’s significant
achievements over the many years we worked with him. We thank him for everything he has
done for research methods as a discipline, and for his tireless dedication to the pursuit of
shining the light of understanding into the dark corners of students’ minds. It was a real
pleasure to work with him.
ABOUT THIS BOOK
Alan Bryman originally wrote this book, and we have updated it, with
two main groups of readers in mind. The first are undergraduates in
social science subjects such as sociology, social policy, criminology,
human geography, politics, and education who take a research
methods module as part of their course. The second group comprises
undergraduates and postgraduates on these courses who carry out a
research project or dissertation as part of their studies, usually
towards the end of their programme of study.
You will notice that certain topics recur across Parts Two and Three:
sampling, interviewing, observation, documents, and data analysis.
However, it will become clear that quantitative and qualitative
researchers often approach these issues in different ways.
Our experience as lecturers suggests that students will use this book
in many different ways. For some, it will be a reference book that
aids them throughout their programme, some will dip in and out of it
as required for their research methods module and/or student
research project, while others might use specific discussions
associated with a particular method as a platform for their own
study.
How we recommend that you use this book will therefore depend on
what you are using it for, but the ‘Guide to using this book’, which
appears later on in this introductory section, provides a good starting
point for all readers. This will give you an overview of the main
features and resources that we have provided within and alongside
the text, and how they might help you achieve your aims.
However you use this book (and however many times you return to it
during your studies and/or career), we hope that you find it as
invaluable a guide to the essentials of social research as we found its
earlier editions. Whatever your focus or stage, we recommend
making full use of both the core chapter content and the online
resources, including the self-test questions, the flashcard glossary
provided for every chapter, and the ‘Research process in practice’
simulation. We also suggest that before you begin reading, you watch
our videos about the impact of Covid-19 on social research. The
pandemic is still underway at the time of writing but has already had
a significant effect on the social world and therefore also on social
research, so our reflections are intended to highlight some
considerations to keep in mind as you approach the book’s content
and accompanying resources.
ABOUT THE AUTHORS
This edition has been updated by a new author team: Tom Clark is
a Lecturer in Research Methods at the University of Sheffield, UK;
Liam Foster is a Senior Lecturer in Social Policy and Social Work
at the University of Sheffield; and Luke Sloan is Professor, Deputy
Director of the Social Data Science Lab, and Director of Learning and
Teaching at the School of Social Sciences at Cardiff University, UK.
Watch this video to hear more about the new authors, and then read
on to learn about this book’s original author, Professor Alan
Bryman.
Placed at the start of each chapter, the Chapter guide grounds you
in each topic, outlining the issues to be discussed and the knowledge
and skills you will gain. It helps you navigate the chapters and
provides benchmarks against which you can measure your progress
and understanding. As you read, Key concept boxes provide clear,
concise explanations of crucial terms and processes, and concluding
lists of Key points sum up the essential ideas to take away.
CONSOLIDATE YOUR
UNDERSTANDING
Tips and skills boxes outline key practical considerations and skills
for each method, and the Student researcher’s toolkit online
provides checklists, templates, and examples to help you put this
knowledge into practice in your student research project. We also
address common questions in the toolkit FAQs. The
‘Research process in practice’ simulation allows you to gain
experience of the full process—from choosing a research design to
disseminating your findings—without real-world consequences if
things don’t work out quite as they should.
At the time of going to press the pandemic is ongoing and it is far too
early for us to predict its longer-term impact on social research and
social research methods. Instead of providing written advice based
on guesswork, then, we will be sharing our reflections on this theme
via the video below. We will publish new videos regularly to respond
to the latest developments as we continue to live through what is likely
to be an ever-evolving 'new normal'.
Given the way Alan’s work had inspired us throughout our time at
university and beyond (Tom and Liam were fortunate enough to
have met and worked with him on several occasions), we were all
extremely saddened to hear about his passing—and in many ways,
we will always be indebted to him for helping us to develop the skills
necessary for us to do our jobs today. So, we all experienced a
strange combination of great regret and some honour when Oxford
University Press told us that he had recommended our involvement
in the new edition of this text. As such, our first acknowledgement
must go to Alan Bryman, who has been such a great influence on
many students and lecturers, ourselves included. In updating the
text, we have tried very hard to be true to its original aims and Alan’s
interests, and we very much hope that he would have been pleased
with the sixth edition of Bryman’s Social Research Methods—the
title of which has been changed to ensure that its association with his
name continues.
But more than anyone else, we owe a huge debt of gratitude to Livy
Watson, our Development Editor for the sixth edition. She has given
so much time, effort, enthusiasm, and good humour, and we very
much appreciate her involvement—without her, this book would
simply not have been possible. Indeed, the process of writing has
been particularly challenging as much of our work on the book took
place during the Covid-19 pandemic. But without any doubt, it is a
stronger book because of her hard work and assistance—thank you
Livy.
The authors and Oxford University Press would like to thank the
many members of the academic community whose generous and
insightful feedback has helped to shape this edition.
Dr Robert Ackrill
Nottingham Trent University
Dr Bernadine Brady
National University of Ireland, Galway
Dr Milena Büchs
University of Leeds
Dr Weifeng Chen
Brunel University London
Dr Yekaterina Chzhen
Trinity College Dublin
Dr Roxanne Connelly
University of Edinburgh
Dr Teresa Crew
Bangor University
Dr Neli Demireva
University of Essex
Dr Federico Farini
University of Northampton
Dr Sharif Haider
The Open University
Dr Katherine Hubbard
University of Surrey
Dr Lyn Johnstone
Royal Holloway, University of London
Dr Justin Kotzé
Teesside University
Dr Sung-Hee Lee
University of Derby
Dr Tim Mickler
Leiden University
Dr Gerben Moerman
University of Amsterdam
Dr Jasper Muis
Vrije Universiteit Amsterdam
Dr Stefanie Reissner
Newcastle University
Dr Chloe Shu-Hua Yeh
Bath Spa University
Dr Sebastiaan Steenman
Utrecht University
Dr Thomas Stubbs
Royal Holloway, University of London
Dr Sotirios Zartaloudis
University of Birmingham
Dr Anna Zoli
University of Brighton
ABBREVIATIONS
KWIC key-word-in-context
TB tuberculosis
UKDA UK Data Archive
WI Women’s Institute
In Part One of this book we lay the groundwork for the more
specialized chapters in Parts Two, Three, and Four. In Chapter 1, we
outline what social research is and how we go about doing it, as well
as discussing some of its main elements. Chapters 2 and 3 focus on
two ideas that we will come back to throughout this book—the idea of
research strategy and the idea of research design—and outline
considerations that affect how we practise social research. We will
identify two research strategies: quantitative and qualitative
research. Chapter 3 outlines the different research designs that are
used in social research. In Chapters 4 and 5 we provide advice on
some of the issues you will need to consider if you have to prepare a
dissertation based on a relatively small-scale research project.
Chapter 4 deals with planning and formulating research
questions, including the principles and considerations that need to
be taken into account when designing a small-scale research project,
while Chapter 5 is about how to conduct a literature review. In
Chapter 6 we discuss ethical and political aspects of social research
and address some of the key challenges presented by new forms of
data such as social media.
CHAPTER 1
THE NATURE AND PROCESS
OF SOCIAL RESEARCH
CHAPTER GUIDE
This chapter is an introduction to the fundamental
considerations we must work through when we conduct
social research. We will cover much of this material in
detail later in the book, so think of this as a simple outline
of what social research involves. In this chapter we will
discuss
First, this understanding should help you avoid some of the errors
and difficulties that can arise when conducting social research, such
as forgetting to match research methods to the research questions
being asked, asking ambiguous questions in interviews or surveys, or
engaging in ethically questionable practices. If you are expected to
conduct a research project as part of your course, an education in
research methods will not only help ensure that you follow the
correct procedures; it will also ensure that you are aware of the full
range of methods and approaches available to you and can choose
those that are best suited to your project.
Not only theory itself but also a researcher’s views about the nature
of the theory–research relationship can have implications for
research. Some people think that theory should be addressed at the
beginning of a research project. At this stage, a researcher might
engage in some theoretical reflections, through which, if they are
carrying out quantitative research, they formulate and then test a
hypothesis. In qualitative research, it is more common to refer
to exploring a ‘research question’ rather than using the language of
hypothesis and testing. Others view theory as an outcome of the
research process—as something that is formulated after research has
been carried out, and developed through or in response to it. This
difference has implications for research: the first approach implies
that a set of theoretical ideas drive the collection and analysis of data,
whereas the second suggests a more open-ended strategy in which
theoretical ideas emerge out of the data. In reality, views on the role
of theory in relation to research are rarely as simple and neatly
contrasting as this account suggests, but they do differ from
researcher to researcher.
Reviewing the literature is the subject of Chapter 5 and we also discuss it in other
chapters, including in Chapter 25, which covers the writing up of a literature review as
part of a research write-up.
Our views about the nature of the social world, and social
phenomena (meaning observed facts, events, or situations), are
known as ontological positions. Some writers believe that the social
world should be viewed as external to social actors and something
over which we have no control. It is simply there, acting upon and
influencing behaviour, beliefs, and values. They might view the
culture of an organization as a set of values and behavioural
expectations that exert a powerful influence over those who work in
the organization and into which new recruits have to be socialized.
But the alternative view is that the social world is in a constant
process of reformulation and reassessment. Considered through this
perspective, an organization’s culture is continually shaped and
reshaped by the practices and behaviour of members of the
organization. These considerations are essentially about whether
social phenomena are relatively inert and beyond our influence, or
are very much a product of social interaction.
We address ethical and political issues in Chapter 6 and touch on them in several other
chapters.
TABLE 1.1 Stages in the research process and where to find them in this book

Stage | Description of stage | Where to find in this book
Concepts and theories | The ideas that drive the research process and that help researchers interpret their findings. In the course of the study, the findings also contribute to the ideas that the researchers are examining. | We cover theories in Chapter 2, and we go into greater depth about concepts in Chapters 7 and 16.
Literature review
As we noted in Section 1.3, the existing literature is an important
element in any research project. When we have identified a topic or
issue that interests us, we need to explore what has already been
written about it in order to determine
• what is already known about the topic;
You then share this knowledge with your future readers (including
the person(s) who will be assessing and grading your work, if you are
doing a student project) by writing a literature review. The key
thing to note about this element of the research process—a point we
make multiple times in Chapter 5 because it is so important—is that
it is not simply a summary of the literature: it must be critical rather
than simply descriptive. This does not necessarily mean that you
need to be negative about the papers you read, but it does mean that
you need to assess the significance of each piece of work and show
how it fits into the narrative that you want to construct about the
literature. If you do this, then the literature review becomes a very
useful way to demonstrate the credibility of your research and the
contribution it makes.
We expand on the contrast between inductive and deductive approaches to theory and
research in Chapter 2.
Research questions
We have touched on research questions several times in this
chapter, and you may have already realized that they are a key
element of research. Research questions are explicit statements of
what you intend to find out about. Most people start with a general
idea of what they are interested in researching, and the exercise of
developing research questions forces us to narrow down our broad
area of interest to the thing(s) that we really want to find out and to
express this much more precisely and rigorously—as we can see in
Learn from experience 1.3.
LEARN FROM EXPERIENCE 1.3
Generating and changing research questions
The nature of the research question will determine how you proceed
with your investigation and what research design you use. If you are
looking at the impact of an intervention, then you might consider
conducting an experiment; if you are interested in social change
over time, then a longitudinal design might be appropriate.
Research questions that are concerned with particular communities,
organizations, or groups might use a case study design, while
describing current attitudes or behaviours at a single point in time
could use a cross-sectional design. Or maybe there is a
comparative element that is integral to your question? You will
need to become familiar with the many implications associated with
different research designs and how the different designs lend
themselves to different types of research question.
We discuss research designs in Chapter 3.
Sampling
In social research we are rarely able to interview, observe, or send
questionnaires to all the individuals who are appropriate to our
research. Equally, we are unlikely to be able to analyse the content of
all publications or social media posts relating to the topic that
interests us. Time and cost issues always limit the number of cases
we can include in our research, so we nearly always have to study a
sample of the wider group.
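The logic of drawing a sample from a wider group can be illustrated with a few lines of code. This is purely an editorial sketch, not part of the original text: the sampling frame of named respondents and the sample size are invented for illustration, and the example shows only the simplest case, a simple random sample in which every case has an equal chance of selection.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw a simple random sample of size n without replacement.

    Every member of the population has an equal chance of being
    selected, which is the defining property of probability sampling.
    """
    rng = random.Random(seed)  # seeding makes the draw reproducible
    return rng.sample(population, n)

# A hypothetical sampling frame of 1,000 potential respondents.
frame = [f"respondent_{i}" for i in range(1000)]

# Time and cost limit us to studying 50 cases from the wider group.
sample = simple_random_sample(frame, 50, seed=42)
print(len(sample))  # 50
```

Fixing the seed is useful in a student project because it lets a supervisor or examiner reproduce exactly the same draw.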
Data collection
To many people, data collection is the key point of any research
project, so it is not surprising that we probably spend more words
and pages discussing this stage in the research process than any
other. Some of the methods of data collection we cover, such as
interviewing and questionnaires, may be more familiar to you than
others.
Data analysis
At its most basic level, data analysis involves applying statistical
techniques to data that have been collected. However, many types of
data are not suitable for statistical analysis, and even when they are,
researchers may want to take alternative approaches.
Our basic definition also does not acknowledge the fact that there are
multiple aspects to data analysis: managing the raw data, making
sense of the data, and interpreting the data.
Writing up
Research is no use to anyone if it is not written up. We conduct
research so that it can be shared with others, allowing them to read
what we have done, respond to it, and maybe even build their own
work upon ours.
The ways in which social researchers write up their work will vary
according to the different styles of doing research, especially
depending on whether the research adopted a quantitative,
qualitative, or mixed methods approach. However, most
dissertations, theses, and research articles will include the following
elements.
• Introduction. This outlines the research area and its
significance. It may also introduce the research questions.
• Literature review. This sets out what is already known about
the research area and examines it critically.
You will notice that Table 1.1 doesn’t show a discrete stage for ethical
considerations, and that is because we must consider and reflect on
ethics, alongside our values and politics, throughout the research
process. We will cover ethical aspects of social research in Chapter 6.
(https://blogs.lse.ac.uk/impactofsocialsciences/2013/03/13/overly-honest-social-science/)
When writing up your own social research project you should reflect
on any challenges you faced and any limitations of your study. By
setting out what you planned to do, what actually happened, and any
limitations arising from the change in plan, you are demonstrating to
the reader that you understand the implications of changes of
direction to your own work. In our experience of supervising (and
marking) many student projects over the years, we have always been
pleased to see students reflecting on what they could have done
better or what they would do if developing their research further. No
one expects a student project to be perfect—but we do want to see
that you understand whether certain aspects could have been better.
There are two key points to take away from this chapter. The first is
that social research encompasses a variety of methodological
traditions that are not always in agreement with each other. This is
an important and healthy trait: it encourages us to justify our
decisions and think carefully about what it is we are trying to find
out. An additional consideration is that the dividing lines between
these traditions, such as between quantitative and qualitative
research strategies, are often less clear-cut than they seem (an issue
explored further in Chapter 24).
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
This view of the role that theory plays within research is what Merton
had in mind when he identified middle-range theories, arguing that
they are ‘principally used in sociology to guide empirical inquiry’
(Merton 1967: 39). As shown in Figure 2.1, which sets out the
sequence of events for a deductive research project, theory and the
hypothesis deduced from it come first, and they drive the process of
gathering data, as was the case in Röder and Mühlau’s (2014) study—
see Research in focus 2.1. The last step in this process, revision of
theory, involves a movement that goes in the opposite direction from
deduction—it involves induction, as the researcher reflects on the
implications of their findings for the theory that prompted the whole
exercise. The new findings are fed back into the existing body of
knowledge in the area being studied. For example, the findings of a
study involving Twitter data by Williams et al. (2017a) built upon the
knowledge base relating to the ‘broken windows’ theory (Wilson and
Kelling 1982), which suggests that visible evidence of low-level
crime, such as broken windows, encourages more crime. In the study
conducted by Williams and colleagues, tweets containing certain
terms relating to low-level disorder were found to be associated with
rates of recorded crime, offering an insight into how the theory
manifests in social media:
The association of the measure [tweets containing ‘broken windows’ indicators] with a
range [of] crime types can be explained in several ways. It is possible that tweeters
sense degradation in the local area, and this is associated with increased crime rates. If
this is the case, then it would suggest further support for the broken windows thesis,
that is, if we accept the proxy measure of broken windows via social media. This
argument certainly seems to hold for residents in low-crime areas in relation to certain
offences. But the argument does not hold for high-crime areas. This can be explained
in terms of differences in disposition to report local issues of crime and disorder, a
pattern found in offline settings.
The fact that this study was based on qualitative data is significant
because, in much the same way that the deductive strategy is
associated with quantitative research, an inductive strategy linking
data and theory is usually associated with qualitative research.
O’Reilly and colleagues used qualitative data in the form of the
researchers’ field notes from observations and the respondents’
detailed answers to the questions posed in in-depth, semi-
structured interviews. However, as we will see, it would be
simplistic to say that the inductive strategy is always associated with
qualitative research: some qualitative research does not generate
theory, and theory is often used as a background to qualitative
investigations.
There is a link between the five principles in Key concept 2.2 and
some of the points that we have considered about the relationship
between theory and research. For example, positivism involves
elements of both a deductive approach (principle 2) and an inductive
approach (principle 3). Also, positivism draws quite a sharp
distinction between theory and research, the role of research being to
test theories and to provide material for the development of ‘laws’.
But this implies that it is possible to collect observations in a way
that is not influenced by pre-existing theories. The fact that
theoretical terms are only considered scientific if they can be directly
observed implies that observation has greater epistemological status
than theory.
Interpretivism
Interpretivism (see Key concept 2.4) is an epistemology that
contrasts with positivism. It is a wide-ranging term that incorporates
a number of different perspectives and approaches. This includes
hermeneutics, phenomenology, Weber’s concept of Verstehen, and
symbolic interactionism. What these traditions share is the view that
the social world cannot be studied using a scientific model. This is
because the subject matter of the social sciences—people and their
institutions—is fundamentally different from that of the natural
sciences. The logic of social science research is, therefore, also
different, and the methods used in social science research need to
reflect the distinctiveness of human consciousness and experience.
KEY CONCEPT 2.4
What is interpretivism?
Phenomenology
Another key intellectual tradition that has been responsible for the
anti-positivist interpretivist position is phenomenology. This is a
philosophical approach that focuses on how individuals make sense
of the world around them and how, in particular, the philosopher is
able to overcome their own preconceptions to better understand the
phenomena that are associated with human consciousness.
Phenomenological ideas were first applied to the social sciences by
Alfred Schutz (1899–1959), whose work did not come to the notice of
most English-speaking social scientists until his major writings were
translated from German in the 1960s, over two decades after they
had been written. His work was heavily influenced by Weber’s
concept of Verstehen, as well as by phenomenological philosophers
such as Edmund Husserl (1859–1938). Schutz’s position is captured
in the following passage.
The world of nature as explored by the natural scientist does not ‘mean’ anything to
molecules, atoms and electrons. But the observational field of the social scientist—
social reality—has a specific meaning and relevance structure for the beings living,
acting, and thinking within it. By a series of common-sense constructs they have pre-
selected and pre-interpreted this world which they experience as the reality of their
daily lives. It is these thought objects of theirs which determine their behaviour by
motivating it. The thought objects constructed by the social scientist, in order to grasp
this social reality, have to be founded upon the thought objects constructed by the
common-sense thinking of [people], living their daily life within the social world.
Symbolic interactionism
Objectivism
Objectivism is an ontological position that implies that social
phenomena confront us as external facts that are beyond our reach
or influence (see Key concept 2.5).
KEY CONCEPT 2.5
What is objectivism?
Constructionism
We come to the alternative ontological position—constructionism
(Key concept 2.6). This position challenges the suggestion that
categories such as organizations and cultures are pre-given, external
realities that social actors have no way of influencing.
KEY CONCEPT 2.6
What is constructionism?
Like Strauss et al. (1973), Becker also recognizes that culture has
a reality that ‘persists and antedates [pre-exists] the participation of
particular people’ and shapes their perspectives, but it is not a fixed,
objective reality that possesses only a sense of constraint: it acts as a
point of reference but is always in the process of being formed.
While these lists and Table 2.1 provide a neat and, we hope, useful
overview, you will have noticed that we introduced them as ‘very
simple summaries’. This is because the quantitative/qualitative
distinction is not quite as straightforward as it might seem. We
outline the nature of quantitative and then qualitative research in
more detail in Chapters 7 and 16, but for now it is important to
appreciate that although quantitative and qualitative approaches can
be broadly distinguished in terms of their epistemological and
ontological foundations and tendencies, we need to be careful about
seeing them as exact opposites. For one thing, studies with the broad
characteristics of one research strategy often have some
characteristics of the other. For example, in the Williams et al.
(2017a) study we discussed previously, a quantitative strategy was
used, but the measurement of low-level disorder terms (‘broken
windows’ indicators) began with the authors taking textual data
(tweets) and coding it into a simple binary measure that reflected
whether a tweet contained a ‘broken windows’ indicator or not. The
process through which this happened was rigorous, but the fact
remains that the authors started with what would typically be
defined as qualitative data and transformed it into a quantifiable
measure.
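The coding step described above, in which textual data are transformed into a quantifiable measure, can be sketched in a few lines of code. This is an illustrative sketch only: the indicator terms and example tweets below are invented, and Williams et al.'s actual term list and coding procedure were far more rigorous than a simple keyword match.

```python
# Invented 'broken windows' indicator terms, for illustration only.
INDICATOR_TERMS = {"graffiti", "vandalism", "litter", "broken window"}

def code_tweet(text: str) -> int:
    """Code a tweet into a binary measure: 1 if it contains any
    low-level-disorder indicator term, 0 otherwise."""
    lowered = text.lower()
    return int(any(term in lowered for term in INDICATOR_TERMS))

# Invented example tweets standing in for the qualitative raw data.
tweets = [
    "So much graffiti on the underpass again",
    "Lovely sunny day in the park",
    "Another broken window on our street",
]
codes = [code_tweet(t) for t in tweets]
print(codes)  # [1, 0, 1]
```

The resulting column of 0s and 1s is what makes the qualitative raw material (tweets) usable in a quantitative analysis, such as testing for an association with recorded crime rates.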
Many writers also argue that the two strategies can be usefully
combined within a mixed methods research project. We explore
this type of research in Chapter 24, but the study described in
Research in focus 2.4 will give you a sense of what it involves, as well
as showing you why we should avoid drawing a hard line between
quantitative and qualitative research. The approaches of these two
strategies might at first seem incompatible, but this study
demonstrates that they can be usefully combined.
RESEARCH IN FOCUS 2.4
Mixed methods research
• 7 focus groups
Values
Values reflect the personal beliefs or the feelings of a researcher, and
there are different views about the extent to which they should
influence research—and whether we can control this at all. These
views can be summarized as
• choice of method;
• analysis of data;
• interpretation of data;
• conclusions.
The researcher’s values can also intrude into their work if they
develop an affection or sympathy for the people being studied. As we
will explore in Chapter 18, this is quite a common issue for
researchers working within a qualitative research strategy,
particularly when they use participant observation or very intensive
interviewing, and they can find it difficult to disentangle their stance
as social scientists from their subjects’ perspective(s). This possibility
might be exacerbated by the tendency that Becker (1967) identified
for sociologists in particular to be sympathetic to ‘underdog’ groups
(more marginalized groups). However, having a fixed set of values
and beliefs can also be a barrier to understanding the people we
study. Turnbull (1973) studied an African tribe known as the Ik and
perceived them as a loveless (and for him unlovable) tribe that left its
young and very old to die. Turnbull identified the conditions that had
led to this culture but he was very honest about his strong
disapproval of what he witnessed. This reaction was the result of his
Western values about the family, and it is likely, as he acknowledged,
that these values influenced his perception of what he witnessed. He
wrote (Turnbull 1973: 13): ‘the reader is entitled to know something
of the aims, expectations, hopes and attitudes that the writer brought
to the field with him, for these will surely influence not only how he
sees things but even what he sees.’
Practical considerations
We should not forget about the importance and significance of
practical issues in decisions about how social research should be
carried out. There are three key areas to consider.
The nature of the topic and/or of the people you intend to investigate
also matters when you are planning your research. If you need to
engage with individuals or groups involved in activities that are
illegal or not socially acceptable, such as football hooliganism
(Pearson 2009; Poulton 2012), drug dealing (Goffman 2014; P. A.
Adler 1985), or the trade in human organs (Scheper-Hughes 2004),
it is very unlikely that survey research (as described in Chapters 9
and 10) would allow you to gain the confidence of participants or
achieve the necessary rapport. It would also be extremely difficult to
envisage and plan such research. The same principle may apply if
you plan to engage with marginalized groups or individuals. It is not
surprising that researchers in these kinds of areas have tended to use
qualitative approaches, which usually offer more of an opportunity to
gain the confidence of the subjects of the investigation to the extent
that they feel comfortable disclosing sensitive data about themselves.
In some rare cases researchers have chosen to carry out covert
research and not reveal their identities, though this comes with
ethical dilemmas of the kind we will discuss in Chapter 6. In
contrast, it seems unlikely that the hypothesis described in Research
in focus 2.1—Röder and Mühlau’s (2014) research on gender-
egalitarian values among migrants—could have been tested with a
qualitative method such as participant observation.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Reliability
Reliability is concerned with whether we would get the same results
from a study if we repeated it under the same conditions. The term is
commonly used when considering whether the measures that we
devise for concepts in the social sciences (such as poverty, racial
prejudice, relationship quality, religious orthodoxy) are consistent.
In Chapter 7 we will look at the idea of reliability in greater detail,
particularly the different conceptualizations of it. Reliability is a
particular concern for quantitative research, as the quantitative
researcher is likely to be concerned with whether a measure is stable
or not. For example, if IQ test scores, which were designed as
measures of intelligence, were found to fluctuate within individuals,
so that a person’s IQ score was wildly different when the test was
administered on two or more occasions, we would consider the IQ
test an unreliable measure—we could not have faith in its
consistency.
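The logic of test-retest reliability can be sketched in a few lines of code. This is only an illustration: the IQ scores below are invented, and the correlation function simply computes the standard Pearson coefficient, the measure conventionally used to assess whether two administrations of a test produce consistent scores.

```python
# A minimal sketch of test-retest reliability. The scores are invented
# for illustration; the point is that a reliable measure should yield
# strongly correlated scores when given to the same people twice.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical IQ scores for five people tested on two occasions.
test_1 = [98, 105, 112, 91, 120]
test_2 = [101, 103, 115, 90, 118]  # similar scores: a consistent measure

r = pearson_r(test_1, test_2)
print(round(r, 2))  # close to 1, indicating high test-retest reliability
```

If the second set of scores fluctuated wildly relative to the first, the coefficient would fall towards zero and, as in the IQ example above, we could not have faith in the measure's consistency.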
Replication
The idea of reliability is very close to another criterion of research—
replication, and more especially replicability. Sometimes
researchers choose to replicate (reproduce) previous studies. There
are various reasons for doing so, such as a feeling that the original
results do not match other existing evidence, or to see whether
findings are consistent over time or between different groups. In
order for replication to take place, a study must be capable of
replication—it must be replicable. If we want to replicate a study,
then we need to know exactly how it was designed, who was
involved, what data was collected, and how it was analysed. So for a
study to be replicable, a researcher must pay close attention to
reporting their methodological choices when writing up.
Validity
A further criterion of quality, and in many ways the most important,
is validity. Validity is concerned with the integrity of the conclusions
generated from a piece of research. We will examine this concept in
greater detail in Section 7.6 and Section 16.6 for quantitative and
qualitative research, respectively, but in the meantime it is important
to be aware of the main facets of validity.
In the next sections we will look in detail at the five research designs:
experimental design, cross-sectional design, longitudinal design,
case study design, and comparative design. Some of these designs
have variants that you should be aware of, and these are discussed in
the relevant subsections. Your choice of design will depend on your
research question, but also on the resources and time you have
available, as discussed in Learn from experience 3.1.
LEARN FROM EXPERIENCE 3.1
Choosing a research design
Campbell (1957) and Cook and Campbell (1979) also consider threats
to the external validity and therefore the generalizability of an
investigation. They identify the following five major threats:
• interaction of selection and treatment;
• interaction of setting and treatment;
• interaction of history and treatment;
• interaction effects of pre-testing; and
• reactive effects of experimental arrangements.
Replicability of an experiment
What about replicability? The authors clearly lay out the procedures
and measures they used for their study and, if you were to carry out a
replication, theoretically you would be able to obtain any further
information you needed from the authors (although as time passes
this might become more difficult). So from an operational point of
view the research is replicable, although the ethical issues concerning
deception and treating the two groups of students differently based
on random assignment would need to be resolved. Claiborn (1969)
conducted one of the earliest replications and followed a procedure
that was very similar to Rosenthal and Jacobson’s, but the study was
carried out in three middle-class, suburban schools, and the timing
of the creation of teacher expectancies was different from that in the
original Rosenthal and Jacobson study. Claiborn failed to replicate
Rosenthal and Jacobson’s findings. This failure to replicate casts
doubt on the external validity of the original research and suggests
that the interactions between selection, setting, and history in the
treatment may have played a part in the differences between the two
sets of results.
In the case of the study described in Research in focus 3.2, there will
not have been a problem of interaction effects of pre-testing because,
as in many experiments, there was no pre-testing. However, it is
quite feasible that reactive effects may have been set in motion by
the experimental arrangements. The main potential threats to the
validity of this experiment include the following:
• the subjects were students, who are not representative of the
general population, so their responses to the experimental
treatment may have been distinctive;
• the students were recruited in a non-random way; and
• the students were given incentives to participate, which might
be another way in which they differ from others, since not
everyone is equally likely to accept inducements.
Quasi-experiments
A number of writers (such as Shadish et al. 2002) have highlighted
the possibilities offered by the many varieties of quasi-experiments—
that is, studies that have some characteristics of experimental
designs but do not fulfil all the internal validity requirements. Quasi-
experiments are often characterized by the lack of random
assignment of participants to the experimental and control groups
because of practical difficulties associated with implementing this—
as, for example, in Research in focus 3.3. Another form of quasi-
experiment occurs in the case of ‘natural experiments’, where there is
manipulation of a social setting but this occurs as part of a naturally-
occurring attempt to alter social arrangements. In these
circumstances, it is simply not possible to assign subjects randomly
to experimental and control groups.
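The feature whose absence defines many quasi-experiments, random assignment, can be sketched very simply. The participant labels and group sizes below are invented for illustration.

```python
# A small sketch of random assignment to experimental and control
# groups. Participant labels and the seed are invented for illustration.
import random

random.seed(7)  # fixed seed so the illustration is reproducible
participants = [f"participant_{i}" for i in range(1, 21)]

shuffled = participants[:]
random.shuffle(shuffled)
experimental_group = shuffled[:10]
control_group = shuffled[10:]

# With random assignment, known and unknown differences between people
# should balance out across the groups on average. In a quasi-experiment
# the groups are pre-existing, so this balance cannot be assumed.
print(len(experimental_group), len(control_group))  # prints 10 10
```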
RESEARCH IN FOCUS 3.3
A quasi-experiment
In this book, we will only use the term ‘survey’ for research that uses
a cross-sectional research design and that collects data by structured
interview or by questionnaire (covered in Chapters 9 and 10
respectively). This will allow us to keep a firm grip on the
conventional understanding of a survey and also to recognize that
the cross-sectional research design has a wider relevance—it is not
necessarily associated with collecting data by questionnaire or by
structured interview.
Non-manipulable variables
As we noted in ‘Features of the classical experimental design’, in
most social research it is not possible to manipulate the variables
that we are interested in: they are non-manipulable variables.
This is why quantitative researchers tend to use cross-sectional
research designs rather than experimental ones. Researchers need to
be cautious and refer to relationships rather than causality; they
might also argue on theoretical grounds that a variable is more likely
to be an independent than a dependent variable (see Research in
focus 3.5 for an example).
The very fact that we can regard certain variables as ‘givens’ provides
us with a clue as to how we can make causal inferences in cross-
sectional research. Many of the variables that we are interested in
can be assumed to be temporally prior to other variables: that is,
they are characteristics that are established and fixed before other
variables. For example, we can assume that, if we find a relationship
between ethnicity and alcohol consumption, the former is more
likely to be the independent variable because it is temporally prior to
alcohol consumption. In other words, while we cannot manipulate
ethnic status, we can draw causal inferences from cross-sectional
data about it.
It is also striking that the study was concerned with the factors that
influence couples’ management of household finances (such as the
development of health problems), because the very idea of an
‘influence’ suggests that qualitative researchers are interested in
investigating causes and effects, albeit not in the context of the
language of variables used in quantitative research, and with more of
an emphasis on revealing the experience of something like the
management of household finances than is usually the case in the
quantitative tradition. However, the main point of discussing this
example here is its similarities to the cross-sectional design in
quantitative research. It involved interviewing quite a large number
of people at a single point in time. Just as with many quantitative
studies using a cross-sectional design, examining early influences on
people’s past and current behaviour is based on their retrospective
accounts of factors that influenced them in the past.
Panel studies
• housing;
• health; and
• socio-economic values.
Cohort studies
Panel and cohort studies share a similar design structure: Figure
3.3a illustrates that data are collected in at least two waves, on the
same variables, for the same people. In Figure
3.3b we’ve provided a simple example of recording employment
status at three time points (in a real longitudinal study you would
have many more variables at each time point). Both panel and cohort
studies are concerned with exploring social change and with
improving the understanding of causal influences over time. The
latter means that longitudinal designs are better able to deal with the
problem of ‘ambiguity about the direction of causal influence’ than
cross-sectional designs. Because we can identify certain potentially
independent variables in an earlier survey (at T1), we are in a better
position to infer that alleged effects that we identify through later
surveys (at T2 or later) have occurred after those independent
variables. This does not deal with the entire problem of the
ambiguity of causal influence, but it at least addresses the problem of
knowing which variable came first. In all other respects, the points
we have discussed in relation to cross-sectional designs are the same
as those for longitudinal designs.
FIGURE 3.3 (a) The longitudinal design (b) An example of longitudinal data
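The kind of data shown in Figure 3.3b can be sketched in miniature: the same people, the same variable (employment status), recorded at three time points. All names and values below are invented for illustration.

```python
# Hypothetical wide-format longitudinal data: one entry per person, with
# employment status recorded at three waves (T1, T2, T3). All values
# are invented for illustration.
panel = {
    "person_01": ["employed", "employed", "unemployed"],
    "person_02": ["student", "employed", "employed"],
    "person_03": ["unemployed", "unemployed", "unemployed"],
}

# Because the same people are measured at each wave, we can examine
# change within individuals, which a single cross-sectional snapshot
# cannot show.
changed = [pid for pid, waves in panel.items() if len(set(waves)) > 1]
print(changed)  # prints ['person_01', 'person_02']
```

This within-person view is what puts longitudinal designs in a better position to establish that a potential cause at T1 precedes an alleged effect at T2 or later.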
The main problem with attrition is that those who leave the study
may differ in some important respects from those who remain (that
is, drop-out is not random), so that those who are left do not form a
representative group. There is some evidence from panel studies that
the problem of attrition declines with time (Berthoud 2000); in other
words, those who do not drop out after the first wave or two of data
collection tend to stay on the panel.
The basic case study design involves detailed and intensive analysis
of a single case. As Stake (1995) observes, case study research is
concerned with the complexity and particular nature of the case in
question. Some of the best-known studies in sociology are based on
this kind of design. Some examples follow.
For example, let’s say that the research is carried out in a single
location, which could be an organization or community. Sometimes,
research is carried out in a single location but the location itself is
not part of the object of analysis—it simply acts as a backdrop to the
data collection. When this occurs, the sample from which the data
were collected is the object of interest and the location is of little
significance. On other occasions, the location is either primarily or at
least to a significant extent the object of interest. In Research in
focus 3.10 we can compare two studies, one where the location
certainly provides background but is not central to the research, and
another where location is central.
RESEARCH IN FOCUS 3.10
Location and case studies
With a case study, the case is an object of interest in its own right,
and the researcher aims to provide an in-depth examination of it. If
we did not draw this kind of distinction, almost any kind of research
could be construed as a case study: research based on a national,
random sample of the population of Great Britain would have to be
considered a case study of Great Britain! What distinguishes a case
study is that the researcher is usually trying to reveal the unique
features of the case, whereas in the previous example Great Britain is
just the area where the population of interest is located. This is known as
an idiographic approach. Research designs like the cross-sectional
design are known as nomothetic, in that they are concerned with
generating statements that apply regardless of time and place.
However, in rare cases an investigation may have elements of both.
Types of case
It is useful to consider a distinction between different types of case
that is sometimes made by writers. Yin (2009) distinguishes five
types.
• The revelatory case. The basis for the revelatory case exists
‘when an investigator has an opportunity to observe and
analyse a phenomenon previously inaccessible to scientific
investigation’ (Yin 2009: 48). As examples, Yin cites Whyte’s
(1955) study of Cornerville, and Liebow’s (1967) research on
unemployed Black men.
• The longitudinal case. Yin suggests that a case might be
chosen because it gives the researcher the opportunity to
investigate on two or more occasions. However, many case
studies comprise a longitudinal element, so it is more likely
that a case will be chosen not only because it can be studied
over time but also because it is appropriate to the research
questions on one of the other four grounds.
It may be that it is only at a very late stage that the uniqueness and
significance of a case becomes apparent (Radley and Chamberlain
2001). Flyvbjerg (2003) provides an example of this. He shows how
he undertook a study of urban politics and planning in Aalborg,
Denmark, thinking it was a critical case. After conducting his
fieldwork for a while, he found that it was in fact an extreme case. He
writes as follows:
Initially, I conceived of Aalborg as a ‘most likely’ critical case in the following manner:
if rationality and urban planning were weak in the face of power in Aalborg, then,
most likely, they would be weak anywhere, at least in Denmark, because in Aalborg the
rational paradigm of planning stood stronger than anywhere else. Eventually, I
realized that this logic was flawed, because my research [on] local relations of power
showed that one of the most influential ‘faces of power’ in Aalborg, the Chamber of
Industry and Commerce, was substantially stronger than their equivalents elsewhere.
Therefore, instead of a critical case, unwittingly I ended up with an extreme case in the
sense that both rationality and power were unusually strong in Aalborg, and my case
study became a study of what happens when strong rationality meets strong power in
the area of urban politics and planning. But this selection of Aalborg as an extreme
case happened to me, I did not deliberately choose it.
The main concern here is the quality of the theoretical reasoning that
the case study researcher engages in. How well do the data support
the researcher’s theoretical arguments? Is the theoretical analysis
incisive? For example, does it demonstrate connections between
different conceptual ideas that the researcher has developed out of
the data? The crucial quality-related question is therefore not
whether the findings can be generalized to a wider universe but how
well the researcher generates theory out of the findings. This view of
generalization is called ‘analytic generalization’ by Yin (2009)
and ‘theoretical generalization’ by Mitchell (1983), and it places case
study research firmly in the inductive tradition (where theory is
generated out of research). However, a case study design is not
necessarily associated with an inductive approach. Case studies can
be associated with both theory generation and theory testing and, as
Williams (2000) has argued, case study researchers are often in a
position to generalize by drawing on findings from comparable cases
investigated by others. We will return to this topic in Chapter 17.
• ensuring, when new data are being collected, that the need to
translate data-collection instruments written in another
language (for example, questions in interview schedules) does
not undermine genuine comparability.
This last problem raises the further difficulty that, even when
translation is carried out competently, there could still be
insensitivity to specific national and cultural contexts.
Despite these potential issues, a major benefit of cross-cultural
research is that it helps mitigate the fact that social science findings
are often, if not always, culturally specific. For example, Crompton
and Birkelund (2000) conducted research using semi-structured
interviewing with comparable samples of male and female bank
managers in Norway and Britain. They found that, in spite of more
family-friendly policies in Norway, bank managers in both countries
struggle to manage career and domestic life. It might have been
assumed that the presence of family-friendly policies would ease
these pressures, but cross-cultural research of this kind shows how
easy it is to make such a mistaken inference. In the field of
criminology, Williams (2016) used the Special Eurobarometer 390 to
investigate incidences of online identity theft across Europe. While
he was interested in individual-level factors to see what the common
predictors of being a victim might be, he also modelled country-level
factors to control for the impact of national practices and security
frameworks. This allowed him to identify how country-level
differences moderated the effect of individual-level factors on the
incidence of victimization.
RESEARCH IN FOCUS 3.11
A multiple-case study based on difference between
cases
On the other hand, the type of causality that can be inferred from a
multiple-case study is more ‘generative’. Critical realists (see Key
concept 2.3) seek out the generative mechanisms responsible for the
patterns they observe in the social world and attempt to examine
how those mechanisms operate in particular contexts. In other words, this approach focuses on
the causes and effects produced by distinctly social structures, rather
than separating variables from the complexities of everyday reality.
Critical realist writers see multiple-case studies as having an
important role for research because the intensive nature of case
studies enhances the researcher’s sensitivity to factors that lie behind
observed patterns (Ackroyd 2009). The multiple-case study offers an
even greater opportunity to do this because the researcher can
examine how generative causal mechanisms operate across different
or similar contexts. Where findings do vary across specific contexts,
generative causal inferences can be made.
Not all writers are convinced about the merits of multiple-case study
research. Dyer and Wilkins (1991), for example, argue that it tends to
mean that the researcher pays less attention to the specific context
and more to the contrasts between the cases. The need to forge
comparisons can also mean that the researcher needs to develop an
explicit focus at the outset, whereas critics of the multiple-case study
argue that it may often be preferable to adopt a more open-ended
approach. These concerns about retaining contextual insight and a
rather more unstructured research approach are very much
associated with the goals of qualitative research (see Chapter 16).
TABLE 3.1 (extract)
Case study
Quantitative, typical form: survey research on a single case that
aims to reveal important features about its nature.
Qualitative, typical form: the intensive study by ethnography or
qualitative interviewing of a single case, which may be an
organization, life, family, or community.
Strictly speaking, Table 3.1 should have a third column for mixed
methods research (see Chapter 24), as an approach that combines
both quantitative and qualitative research; but the resulting table
would be too complicated, because mixed methods research can
involve the combined use of different research designs (for example,
a cross-sectional design and a multiple-case study) as well as
methods. However, the quantitative and qualitative components of
some of the mixed methods studies discussed in this book are
included in the table.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
You will probably have an idea of what topic you want to research
quite a while before you actually start your dissertation, but it is all
too easy to get carried away with an idea without first checking that it
is appropriate and researchable. Unfortunately, most of us do not
operate in a world of limitless resources with no word counts, no
time constraints, and extensive financial resources. These
restrictions impact upon what we can and can’t research, and we
need to take them into account before we begin any research project.
This is not to say that you can’t be ambitious, but you do need to be
realistic. Understanding what resources are available to you and how
much time you have is a key part of this, and this is where
supervisors are incredibly important. They will know what is and
isn’t possible and will guide their students through the research
process. If your supervisor tells you that a topic is not suitable or
feasible, then you really must not be upset by this. Their job is to
make sure that you can complete your project and make a success of
it.
In the following section we outline some of the main points that you
should consider as you begin to plan your project. We discuss what is
expected of you and how your supervisor can support you. We also
suggest some strategies for planning and managing your time and for
considering what resources are available to you.
What does your institution expect
from you?
Your institution or department will have specific requirements
relating to your dissertation’s contents and format. These are likely
to include such things as the form of binding; how it is to be
presented; whether an abstract is required; how big the page margins
should be; the format for referencing; the maximum number of
words; the structure of the dissertation; how much advice you can
get from your supervisor; how many research questions you
should have; whether or not a proposal is required; how to gain
ethical clearance; plagiarism; deadlines; how much (if any) financial
assistance you can expect; and so on.
It is also worth being aware that students who get stuck at the start
of their dissertations, or who get behind with their work, sometimes
respond to the situation by avoiding their supervisors. As a result,
they don’t get help to move forward and get caught in a vicious cycle
that results in their work being neglected and potentially rushed at
the end. Try to avoid this situation by confronting the fact that you
are experiencing difficulties in beginning your work or are getting
behind, and approach your supervisor for advice.
See Learn from experience 4.1 for our panel’s tips on making the best
use of your supervisor’s support.
LEARN FROM EXPERIENCE 4.1
How to work productively with your dissertation
supervisor
If you are to conduct effective research within the time available, you
need to begin by working out a timetable—preferably in conversation
with your supervisor—that sets out the different stages of your
research (including the literature review and writing up) and the
dates by which you should start and finish them (see Tips and skills
4.1). Some stages are likely to be ongoing—for example, searching
the literature for new references (a process that will be covered in
Chapter 5)—but that should not stop you from developing a
timetable. Be careful not to underestimate the time you will need for
certain stages. Sometimes, for example, your project will involve
securing access to an organization in order to study some aspect of it,
and students often underestimate the time it can take to arrange this.
For his research on commercial cleaning, Shaun Ryan (2009) spent
nearly two years trying to secure access to a suitable firm. Even if an
organization wants to give you access to data, delays can occur.
When Facebook wanted to make available a database of 38 million
links (URLs) on civic discourse, it took around 20 months for the
data to be properly anonymized and made available (Mervis 2020)!
Our panel share their thoughts on managing your time in Learn from
experience 4.2.
TIPS AND SKILLS 4.1
Drawing up a timetable
It is very important that you find out early on what, if any, resources
will be available for carrying out your research. For example, will you
receive help from your institution with such things as access to an
online survey platform, travel costs, photocopying, postage,
stationery, and so on? Will the institution be able to loan you any
equipment you need, for example recording and transcription
devices to help you with interviewing? Has your institution got the
software you will need for processing your data, such as SPSS or a
qualitative data analysis package such as NVivo? (Visit our
online resources for guidance on using these packages.) This kind
of information will help you to work out whether your research
design and methods are financially feasible and practical.
Throughout this section, including in the tips from our panel, you
will have seen a clear message emerging: you must allow sufficient
time for the various stages of the research process. Students often
underestimate the time it will take to secure access to participants,
analyse data, and write up findings. Another time-related issue, as
illustrated in Tips and skills 4.1, is that it can take a long time to
receive clearance from research ethics committees to conduct your
investigation. (We discuss ethical issues in detail in Chapter 6.)
However, one final point needs to be made: even with a really well-
planned project, unexpected problems can mess up your timetable.
For example, McDonald, Townsend, and Waterhouse (2009) report
that they successfully negotiated access to the Australian
organizations that were involved in a number of research projects in
which the researchers were engaged. However, changes to personnel
meant that those who had agreed to give them access (often called
‘gatekeepers’ in the research methods literature) left or moved
on, so that the researchers had to forge new relationships and
effectively had to renegotiate the terms of their investigations, which
considerably slowed down the progress of their research. Such
disruptions to research are impossible to predict. It is important not
only to realize that they can occur but also to introduce a little
flexibility into your research timetable so that you can absorb their
impact. Most importantly, if this happens to you, then you should
contact your supervisor as soon as possible for advice.
It is likely that your course leaders and supervisors will ask you to
start thinking about what you want to research a while before you are
actually due to start work on your dissertation. It is worth giving
yourself quite a lot of time to consider your options. As you are
studying various subjects, you should begin to think about whether
there are any topics that interest you and might provide you with a
researchable area. In this section, we return to the topic of research
questions and explore where they might come from and how to
assess their suitability. At this point it is important to acknowledge
that not all projects necessarily require a research question up-front,
and some supervisors might prefer to think in terms of research aims
and objectives. For the purposes of this chapter, though, we will
assume that you are required to pose research questions (this is often
the case when submitting an initial proposal, even if your ideas then
change as your project develops). We also discuss the importance of
the research proposal and identify what you need to consider when
preparing to conduct your research, as it is always beneficial to
anticipate what might happen later in your project.
While your supervisor will work with you to define your research
question(s), you should not solely rely on them to evaluate your
questions. You can assess your own research questions using
the list of criteria above, and this will save you a lot of time at
the start of your project by helping you fine-tune your ideas.
Developing these skills will also help you to evaluate how
effective others’ research questions are.
Let’s consider how the research questions posed in Luke’s
study of Twitter usage (Sloan 2017—see Research in focus
4.1) stand up to scrutiny by these standards. Before we
evaluate them using the criteria above it is important to note
that the study had two research questions, one of which we did
not list in Research in focus 4.1 because it is a repeat example
of a descriptive question. The two research questions were:
RQ1) To what extent are certain demographic characteristics
associated with Twitter use for GB users?
Your research proposal may also be part of the process for gaining
ethical clearance (see Chapter 6). Once ethical clearance has been
granted, there may be a limit to how much you can change your ideas
without seeking new approval from your institution’s ethics
committee.
You will also need to think about access and sampling issues, which
we discuss in depth in Chapters 8 and 17. If your research requires
you to gain access to or the cooperation of one or more closed
settings, such as an organization, you need to confirm at the earliest
opportunity that you have the necessary permission to conduct your
work. You also need to think about how you will go about gaining
access to people. These issues lead you into sampling considerations,
such as the following:
• Who do you need to study in order to investigate your
research questions?
• How easily can you gain access to a list of every
person/household/organization who is eligible to participate
in your study (a sampling frame)?
• What kind of sampling strategy will you employ (for example,
probability sampling, quota sampling, theoretical
sampling, convenience sampling—all of which are
covered in Chapters 8 and 17)?
• Can you justify your choice of sampling method?
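The contrast between a probability sampling strategy and a convenience one can be sketched in a few lines. The sampling frame of numbered households below is entirely invented, and taking the first fifty units is only a crude stand-in for "whoever is easiest to reach".

```python
# A minimal sketch contrasting two of the sampling strategies listed
# above. The sampling frame of households is invented for illustration.
import random

random.seed(42)  # fixed seed so the illustration is reproducible
frame = [f"household_{i:03d}" for i in range(1, 501)]  # hypothetical frame

# Probability (simple random) sampling: every unit in the frame has a
# known, equal chance of selection, which supports statistical
# generalization to the population the frame represents.
probability_sample = random.sample(frame, 50)

# Convenience sampling: take whoever is easiest to reach, crudely
# mimicked here by taking the first 50 units. Selection probabilities
# are unknown, so generalization is much weaker.
convenience_sample = frame[:50]

print(len(probability_sample), len(convenience_sample))  # prints 50 50
```

Whichever strategy you adopt, the final bullet point still applies: you need to be able to justify the choice in terms of your research questions and the access you realistically have.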
Before you move on, check your understanding of this section by
answering the following self-test questions:
Since doing your research and analysing your results are the main
focus of this book as a whole, we will not go into great detail here
about these stages of the process. However, experience has taught us
that the following hints and tips will help your project to run
smoothly.
In addition to this list, we asked our panellists for their top hints and
tips for a successful research project. You can read what they said in
Learn from experience 4.4.
LEARN FROM EXPERIENCE 4.4
Hints and tips for a successful research project
Our panellists have lots of hints and tips that will help
you succeed in your research project, ranging from the
importance of taking breaks to the value of seeking
feedback from your friends.
Find a balance that works for you—it’s your personal project, after
all—but make sure you are in control. Be strict with yourself about
when you work, but make time to have breaks too. I also found
creating a mind map really useful at the start of my undergraduate
dissertation, as I had never done a project of that size before. The
mind map really helped me see how everything all linked together
and where certain elements fitted best.
Jodie
Start early. Please. And don’t be worried if your research design
is radically different from the designs of your peers. All research
questions are different so they all require different research
methods.
Reni
Planning to speak with your supervisor after you submit each
draft can alleviate the worries you may have about the quality of
your work. And if you feel lost or overwhelmed, take a break, reach
out to a friend, go for a walk. I often gained perspective on tricky
elements of my research when brainstorming casually away from
the computer. By giving your brain a rest, you will be able to return
to your schedule refreshed and more productive.
Starr
I found that getting another student’s perspective on your work
can be really helpful, as it’s easy to get vacuumed in your own
thoughts and you can often overlook new ways of looking at your
research questions or problems.
Zvi
You can read about our panellists’ backgrounds
and research experiences in the ‘Learn from
experience’ section at the start of this book.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Once you have identified your research questions (see Chapter 4),
the next step in any research project is to search the existing
literature and write a literature review. In a research context, ‘the
literature’ means existing research on a topic and ‘review’ means a
critical evaluation of it. We conduct literature reviews to establish
what is already known about a topic and, usually, to provide a
background and justification for the investigation that we want to
undertake.
There are two kinds of literature review: the narrative review and
the systematic review. Narrative reviews are the traditional kind
of literature review and they generally lead directly into a research
project, whereas systematic reviews tend to be stand-alone (not a
preface to research)—although their results may act as a springboard
for research. We will discuss both types of review in the following
sections, but will focus mainly on narrative reviews since it will
nearly always be this type that people mean when they refer to ‘doing
your literature review’ in the context of a student research project or
dissertation. You might conduct a systematic review at some stage,
and it is worth being aware that some of the processes involved can
also be used within narrative reviews, but the narrative type will be
your main concern as a student researcher.
5.2 Narrative reviews
Here are our top tips for conducting a narrative literature review.
• Try to be reasonably comprehensive in your coverage. At
the very least, you should cover the most important
readings relating to your area (your supervisor can advise
on this).
• Consider dividing the review up into themes or sub-
themes that will help to provide structure (see Research
in focus 5.1 for an example).
• Try to be balanced in the way you present the literature.
Do not give some authors or research more attention
than others unless you want to make a particular point
about their work, for example that you are going to try to
reproduce their methods in a different context.
• Aim to comment on each item in your review and show
how it relates to other items—avoid just describing its
content and presenting the review as a series of points so
that it looks more like a set of notes (A says this, B notes
that, C says something else, D says something else
again, etc.). Develop an argument about the items you
include.
• Build up an argument by creating a ‘story’ about the
literature. This means asking yourself: What key point or
points do I want to get across about the literature as a
whole? You will do this most effectively if you use your
own words and avoid quoting too much. Put your own
imprint on the literature.
Steps in systematic review, with the corresponding practices in Shepherd et al. (2006):

1. Define the purpose and scope of the review. Review question: 'What is known about the barriers to, and the facilitators of, healthy eating among young people?' (Shepherd et al. 2006: 243).

2. Seek out studies relevant to the scope and purpose of the review. The authors used a combination of terms to do with healthy eating (e.g. 'nutrition'), health promotion or the causes of health or ill-health (e.g. 'at-risk populations'), and young people (e.g. 'teenager'). In order for a study to be included in the review, it had to be about 'the barriers to, and facilitators of, healthy eating among young people' and to have been conducted in the UK, in English. The authors included both intervention studies (also known as outcome evaluations, i.e. evaluating the outcome of an intervention) and non-intervention studies (e.g. cohort or case control studies, or interview studies). They formulated guidelines separately for these two types of study. Shepherd et al. searched several online bibliographical databases for the studies, including SSCI and PsycINFO. They also searched lists of references and other sources.

4. Assess the studies from Step 3. Two researchers entered data for each study into a database, summarized the findings from each one, and assessed its methodological quality. They used separate quality criteria for intervention and non-intervention studies. The application of the 8 criteria for intervention studies resulted in just 7 being regarded as 'methodologically sound', and the results of just these 7 studies are the focus of the authors' summary.
You can see from the information in the table that this set of
criteria, and a corresponding set for the intervention studies,
reduced their numbers considerably.
In their synthesis of their review findings, the authors report
that the non-intervention studies identified several barriers to,
and facilitators of, healthy eating. For example, they write:
‘Facilitating factors included information about nutritional
content of foods/better labeling, parents and family members
being supportive, healthy eating to improve or maintain one’s
personal appearance, will-power and better availability/lower
pricing of healthy snacks’ (Shepherd et al. 2006: 255). The
authors linked such findings with intervention studies, arguing
that ‘juxtaposing barriers and facilitators alongside
effectiveness studies allowed us to examine the extent to which
the needs of young people had been adequately addressed by
evaluated interventions’ (Shepherd et al. 2006: 255).
Online databases
Online databases allow you to search a vast repository of academic
sources by keyword, author, and often also by date of publication.
Many of these databases integrate with referencing software (see
Tips and skills 5.3) so that you can easily import long lists of
bibliographic information, which will save you a lot of time if your
literature review is extensive. Online databases will often give you
full electronic access to an article in PDF format, meaning that you
don’t have to physically visit the library. Having said that, you will
find that not all sources are available through this service,
particularly older work. If you’re unsure of what is and isn’t available
or you can’t find what you are looking for, then you should ask your
subject librarian for help—after all, that’s what they are there for (see
Learn from experience 5.3).
LEARN FROM EXPERIENCE 5.3
Ask a librarian
• After you have logged in you will see the ‘Basic Search’ tab.
You can use this to specify topics, titles, authors, and editors
that you’re interested in. By selecting ‘Add row’, you can refine
your search in several ways, for example by specifying that
you want to find articles containing the words ‘migration’ but
not the word ‘bird’. Alternatively, you could search for the
keyword ‘migration’ and the author ‘Bloggs J’.
• The ‘Timespan’ option allows you to specify how far back you
want the search to go. This may or may not be relevant
depending on how recent (or not) your area of interest is. You
can choose to search in a custom time range if none of the
other options are appropriate (note that WoS goes back as far
as 1900!).
To make the most of these tools you will need to become comfortable
using search functions such as AND, OR, and NOT. The technical
term for these functions is Boolean operators, but despite the formal-
sounding name (they are named after a nineteenth-century
mathematician called George Boole), the concept is quite simple and
you are likely to use them every day for searching online. As an
example, let’s assume that you are researching hate crime on Twitter.
Searching ‘hate crime on Twitter’ may return some results, but it is
likely that many studies concerned with this topic did not use that
exact phrasing. Here are three different ways that you might specify
a search using Boolean logic.
• You could search ‘hate crime’ AND ‘Twitter’ meaning that only
articles containing both those words, but not necessarily in
that order, would be returned.
• If you were interested in multiple social media platforms, you
could amend your search to include ‘Twitter’ OR ‘Facebook’
OR ‘YouTube’.
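The effect of these operators can be sketched in a few lines of code. The following is purely illustrative (the article titles and the `contains` helper are invented for this example; real bibliographic databases index and rank results in far more sophisticated ways):

```python
# A purely illustrative sketch of Boolean search logic.
# The article titles below are invented examples.
articles = [
    "Hate crime and social media: evidence from Twitter",
    "Bird migration patterns in northern Europe",
    "Measuring online hate speech on Facebook",
    "Hate crime reporting in local newspapers",
    "Labour migration and the European Union",
]

def contains(text, term):
    """Case-insensitive check for whether a search term appears in a text."""
    return term.lower() in text.lower()

# 'hate crime' AND 'Twitter': both terms must appear, in any order.
and_hits = [a for a in articles
            if contains(a, "hate crime") and contains(a, "Twitter")]

# 'Twitter' OR 'Facebook': either platform is enough for a match.
or_hits = [a for a in articles
           if contains(a, "Twitter") or contains(a, "Facebook")]

# 'migration' NOT 'bird': excludes the ornithology title.
not_hits = [a for a in articles
            if contains(a, "migration") and not contains(a, "bird")]
```

Note how the NOT search keeps the labour-migration title but drops the ornithology one, just as excluding 'bird' would in a database search.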
A searching tool
Google’s academic search engine, Google Scholar, can be
accessed from the Google home page and is a simple way to
search broadly for academic literature. Searches are focused on
peer-reviewed papers, theses, books, abstracts, and articles,
from academic publishers, professional societies, preprint
repositories, universities, and other scholarly organizations.
Google Scholar also tells you how often an item has been cited
by other people. This can be very useful in assessing the
importance of an idea or a particular scholarly writer. See
http://scholar.google.com.
Current affairs
For case study analyses and keeping up to date on current
social issues, the BBC News website is reasonably well
balanced and quite analytical: www.bbc.co.uk.
Statistics on social trends
Many national governments make a huge amount of data about
social trends available online, from reports on the educational
background of young offenders to the number of premises per
square kilometre licensed to sell alcohol: for example, the UK’s
www.statistics.gov.uk.
European statistics relating to specific countries, industries,
and sectors can be found on the Eurostat pages of Europa, the
portal to the European Union website:
https://ec.europa.eu/eurostat.
• The location—where is the site located? The URL can help you
here. Is it an academic site (.ac.uk in the UK or .edu in the US
—although not all research institutions across the globe use
these) or a government site (.gov), a non-commercial
organization (.org), or a commercial one (.com or .co)?
• Currency—how recently was the site updated? Many sites will
give you a ‘last updated’ date, but you can get clues as to
whether a page is being well maintained by whether the links
are up to date and by its general appearance.
Methods of referencing
Your institution will probably have its own guidelines as to which
style of referencing you should use in your dissertation, and if it does
you should definitely follow them. The punctuation and style of
references—such as where to place a comma, or whether to capitalize
a title in full or just the first word—varies considerably from source
to source. For example, in some books and journals using Harvard
referencing (discussed below), the surname of the author is
separated from the date in the text with a comma—for example
(Name, 1999)—but in others, such as this book, there is no comma.
The important thing is to follow the advice of your institution.
Technical issues aside, we will now look in detail at the two main
methods used for referencing: the Harvard system and the use of
footnotes and endnotes.
The Harvard system

Reference to a book
In the text: As Name and Other (2021) argue, the line between migration and tourism is becoming increasingly blurred.
In the bibliography or list of references: Name, A., and Other, S. (2021). Title of Book in Italics. Place of Publication: Publisher.

Note system

Reference to a book
In the text: On the other hand, research by Name and Other³ has drawn attention to the influence of intrinsic factors on employee motivation.
In the footnote/endnote: 3. A. Name and S. Other, Title of Book in Italics. Place of Publication, Publisher (2021), pp. 170–77.

Reference to online content
In the text: Scopus describes itself as providing a 'comprehensive, curated abstract and citation database with enriched data and linked scholarly content'.³⁹
In the footnote/endnote: 39. Scopus, www.elsevier.com/en-gb/solutions/scopus (2021) (accessed 9 April 2021).
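Referencing software handles these stylistic variations mechanically, which is one reason it is worth using (see Tips and skills 5.3). The following is a hypothetical sketch of the idea, using the comma variation in Harvard in-text citations discussed in this section (the author and date are invented examples):

```python
# A hypothetical sketch of how referencing software renders the same
# bibliographic data in different in-text styles. The fields are
# invented examples, not drawn from any real style guide.
def in_text_citation(surname, year, comma=True):
    """Render a Harvard-style in-text citation, with or without the
    comma between surname and year."""
    sep = ", " if comma else " "
    return f"({surname}{sep}{year})"

with_comma = in_text_citation("Name", 1999, comma=True)
without_comma = in_text_citation("Name", 1999, comma=False)
```

Storing references as structured data and rendering the style at the end is exactly why switching citation styles late in a project is painless with such software, and painful without it.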
What is plagiarism?
To plagiarize is defined in The Concise Oxford Dictionary as to ‘take
and use another person’s (thoughts, writings, inventions …) as one’s
own’. Plagiarism does not just relate to the literature you read in the
course of preparing an essay or report. Taking large amounts of
unattributed material from essays written by others or from websites
is also plagiarism. It is also possible to self-plagiarize—this is when a
person lifts material that they have previously written and passes it
off as new work.
The final reason you should avoid plagiarizing at all costs is that
when found in student work (and indeed the work of professional
academics) it is nearly always punished, which could involve having
a mark set to zero or, in extreme cases, being disqualified from all
future assessments. Universities are so concerned about the growth
in the number of plagiarism cases that come before examination
boards that they now make heavy use of plagiarism detection
software, which compares the content of student work with existing
information online and in its database (for example, Turnitin UK:
www.turnitin.com/, accessed 28 February 2019). This means
that, as several writers (e.g. McKeever 2006) have observed, the
internet is a facilitator of plagiarism but also provides a way to detect
it. Even well-known search engines like Google can be used to detect
student plagiarism by searching for unique strings of words.
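At its simplest, this kind of detection amounts to looking for unusually long runs of words shared between two texts. The sketch below is a deliberately naive illustration of that idea, not a description of how Turnitin or any other commercial tool actually works (the example texts are invented):

```python
# A deliberately naive sketch of string-matching plagiarism detection:
# flag any run of n consecutive words that two texts share.
def word_ngrams(text, n):
    """All runs of n consecutive words in a text, lower-cased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_phrases(text_a, text_b, n=5):
    """Return the n-word phrases that appear in both texts."""
    return word_ngrams(text_a, n) & word_ngrams(text_b, n)

# Invented example texts for illustration.
source = "the researcher must be dishonest to get honest data"
essay = ("some argue that the researcher must be dishonest "
         "to get honest data in fieldwork")

matches = shared_phrases(source, essay, n=5)
```

Because the essay reproduces the source sentence verbatim, every five-word run of the source shows up in the overlap; a properly quoted and referenced passage would be excluded from such a check by real detection software.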
First, do not ‘lift’ large sections of text from other sources without
making it clear that they are in fact quotations. This makes it clear
that the text in question is not your own work but that you are
making a point by quoting someone. It is easy to get this wrong. In
June 2006 it was reported that a plagiarism expert at the London
School of Economics had been accused of plagiarism contained in a
paper he published on plagiarism! A paragraph was found that
copied a published source by someone else word for word, and had
not been acknowledged properly as being from another source. The
accused person claimed that this was due to a formatting error. It is
common practice in academic publications to indent a large section
of material that is being quoted, like this:
First, do not ‘lift’ large sections of text from other sources without making it clear that
they are in fact quotations. This makes it clear that the text in question is not your own
work but that you are making a point by quoting someone. It is easy to get this wrong.
In June 2006 it was reported that a plagiarism expert at the London School of
Economics had been accused of plagiarism contained in a paper he published on
plagiarism! A paragraph was found that copied a published source by someone else
word for word, and had not been acknowledged properly as being from another
source. The accused person claimed that this was due to a formatting error.
The second way that you can avoid plagiarizing is to ensure that you
do not present other people’s ideas as though they are your own. This
means always acknowledging the source of any ideas you include
that are not your own. It was this aspect of plagiarism that led to the
author of The Da Vinci Code (2003), Dan Brown, being accused of
plagiarism. His accusers did not suggest that he had taken large
chunks of text from other works and presented them as his own. Instead,
they accused him of lifting ideas from a non-fiction book they had
written (The Holy Blood and the Holy Grail). However, Dan Brown
did acknowledge his use of their historical work on the grail myth,
though only in a general way in a list of acknowledgements, and
Brown’s accusers lost their case (fortunately for their readers,
novelists do not need to continuously reference ideas that they use in
their fictional work).
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
In this chapter we introduce the issues and debates about ethics with
which all social researchers should engage. As we noted in the
Chapter guide, ethical concerns are an essential part of the research
process: if you are writing a proposal for a final-year project, then
you will almost certainly have to highlight the ethical issues that your
investigation may encounter. The increasing use of online data, such
as social media, in social-scientific research has particular ethical
implications that we will discuss.
We are not going to try to resolve the ethical issues that we present
here because ethics are complicated, and some of the stances you
could take (such as situation ethics, discussed in Section 6.3) do not
provide researchers with clear direction. However, we will provide
enough of an outline of the ethical principles involved in social
research to give you the awareness and knowledge that you will need
in order to make informed decisions about your own work. In social
research, ethical considerations revolve around issues like these:
1. Writers often differ quite widely from each other over ethical
issues and questions. In other words, they differ over what is
and is not ethically acceptable and take different stances (see
Section 6.3 under ‘Stances on ethics’).
4. Related to this last point is the fact that these extreme cases
of ethical violation tend to be associated with particular
research methods—notably covert observation and the
use of deception in experiments. The problem with
associating particular studies and methods with ethical
issues is that it implies that some methods are unlikely to
bring up ethical concerns. However, it would be wrong to
assume that methods such as questionnaires or overt
observation are immune from ethical problems, especially
because ethical problems often arise from the questions that
are being asked. For example, conducting questionnaire or
overt observation research with children will raise a lot
of ethical issues that may not arise when the research is on
adults.
RESEARCH IN FOCUS 6.1
An example of ethical transgression
Universalism
A universalist stance takes the view that ethical rules should never be
broken. Violations of ethical principles are wrong in a moral sense
and are damaging to social research. This kind of stance can be seen
in the writings of Erikson (1967), Dingwall (1980), and Bulmer
(1982). Bulmer does, however, point to some forms of what appears
to be disguised observation that he suggests may be acceptable. One
is retrospective covert observation, which is when a researcher writes
up their experiences in social settings in which they participated but
not as a researcher. An example would be Van Maanen (1991b), who
wrote up his experiences as a ride operator in Disneyland many years
after he had been employed there in vacation jobs. Even a
universalist such as Erikson (1967: 372) recognizes that it ‘would be
absurd … to insist as a point of ethics that sociologists should always
introduce themselves as investigators everywhere they go and should
inform every person who figures in their thinking exactly what their
research is all about’.
Situation ethics
Those who take this stance point out that almost all research involves
elements that are in some way ethically questionable. This could be
seen to occur whenever participants are not given all the details on a
piece of research, or when there is variation in the amount of
knowledge people have about the research. Punch (1994: 91), for
example, observes that ‘some dissimulation [hiding information] is
intrinsic to social life and, therefore, to fieldwork’. He quotes Gans
(1962: 44) in support of this point: ‘If the researcher is completely
honest with people about [the researcher’s] activities, they will try to
hide actions and attitudes they consider undesirable, and so will be
dishonest. Consequently, the researcher must be dishonest to get
honest data.’
We will look at each of these in turn, but we must keep in mind that
the four principles often overlap. For example, it is difficult to
imagine how the principle of informed consent could be built into an
investigation in which research participants were deceived. However,
there is no doubt that these four areas provide a useful classification
of ethical principles for social research.
Avoidance of harm for participants
Research that is likely to harm participants is seen by most people as
unacceptable. But what is harm? Harm can include physical harm;
harm to participants’ development; loss of self-esteem; stress; and
‘inducing subjects to perform reprehensible acts’, as Diener and
Crandall (1978: 19) put it. In several of the studies discussed in this
book, there has been real or potential harm to participants.
Informed consent
The issue of informed consent is in many respects the area within
social research ethics that is most hotly debated. Much of the
discussion tends to focus on what is called disguised or covert
observation (in this book we use the latter term), which can involve
either covert participant observation (see Chapter 18, Section
18.3), or simple or contrived observation (see Key concept 14.2)
in which the researcher’s true identity is unknown. The principle of
informed consent means that prospective research participants
should be given as much information as might be needed in order to
make an informed decision about whether or not they want to
participate in a study. Covert observation violates this principle
because participants are not given the opportunity to refuse to
cooperate. They are involved, whether they like it or not.
The principle of informed consent also means that, even when people
know they are being asked to participate in research, they should be
fully informed about the research process. The SRA Ethical
Guidelines suggest:
Inquiries involving human subjects should be based as far as practicable on the freely
given informed consent of subjects. Even if participation is required by law, it should
still be as informed as possible. In voluntary inquiries, subjects should not be under
the impression that they are required to participate. They should be aware of their
entitlement to refuse at any stage for whatever reason and to withdraw data just
supplied. Information that would be likely to affect a subject’s willingness to
participate should not be deliberately withheld, since this would remove from subjects
an important means of protecting their own interests.
Second, some researchers are likely to come into contact with a wide
variety of people, and ensuring that absolutely everyone has the
opportunity to give informed consent is not practicable because it
would be extremely disruptive in everyday contexts. This is a
common problem for ethnographers, who are likely to come across
people in the course of their research who are part of the social
setting but will be involved only briefly and peripherally in the
research, and so are not given the opportunity to provide informed consent.
While this statement does not support the lack of informed consent
that is automatically associated with covert research, it is not
explicitly disapproving either. It acknowledges that covert methods
jeopardize the principle of informed consent as well as the privacy
principle (see the next subsection, ‘Privacy’), but suggests that covert
research can be used ‘where it is impossible to use other methods to
obtain essential data’—an example might be if the researcher could
not otherwise gain access to the setting. Clearly, the difficulty here is
how a researcher decides whether it is impossible to obtain the data
other than by covert work. The guidance also highlights the
importance of anonymity when informed consent cannot be gained
and suggests that ‘ideally’ researchers should try to gain consent
after the research has been conducted.
For online research the picture is even more complex. In the ESRC’s
Framework for Research Ethics it is acknowledged that
Internet research can take place in a range of settings, for example email, chat rooms,
web pages, social media and various forms of ‘instant messaging’. These can pose
specific ethical dilemmas. For example, what constitutes ‘privacy’ in an online
environment? How easy is it to get informed consent from the participants in the
community being researched? What does informed consent entail in that context?
How certain is the researcher that they can establish the ‘real’ identity of the
participants? When is deception or covert observation justifiable? How are issues of
identifiability addressed?
Privacy
This third area of ethical concern relates to our duty to protect the
privacy of participants. The right to privacy is an important principle
for many of us, and violations of that right in the name of research
are not seen as acceptable. Privacy is very much linked to the idea of
informed consent because—to the degree that participants give
informed consent on the basis of a detailed understanding of what
participation is likely to involve—the participant in a sense
acknowledges that they have surrendered their right to privacy for
that limited time. The BSA Statement makes a direct link between
informed consent and privacy in the passage we quoted in ‘Informed
consent’: ‘covert methods violate the principles of informed consent
and may invade the privacy of those being studied.’ Of course, the
research participant does not completely give up their right to
privacy by providing informed consent. For example, when people
agree to be interviewed, they will often refuse to answer certain
questions for a variety of reasons. Often, these refusals will be
because they feel the questions reach into private areas that
respondents do not want to make public, regardless of the fact that
the interview will be conducted in private. Examples might be
questions about income, religious beliefs, or sexual activities.
Covert methods are usually considered to be violations of the privacy
principle because participants are not being given the opportunity to
refuse invasions of their privacy. This kind of method also means
that participants might reveal beliefs or information that they would
not have revealed if they had known they were speaking to, or in
front of, a researcher. The issue of privacy is linked to issues of
anonymity and confidentiality in the research process (see Learn
from experience 6.4), an area that we have touched on in ‘Avoidance
of harm for participants’.
LEARN FROM EXPERIENCE 6.4
Anonymity and confidentiality
Deception
Deception occurs when researchers represent their work as
something other than what it is. Milgram’s experiment (see Research
in focus 6.2) undoubtedly involved deception. Participants were led
to believe that they were administering real electric shocks.
Deception in various degrees is probably quite widespread in
experimental research, because researchers often want to limit
participants’ understanding of what the research is about so that they
respond more naturally to the experimental treatment.
Similarly, Erikson (1967: 369) has argued that covert observation ‘is
liable to damage the reputation of sociology in the larger society and
close off promising areas of research for future investigators’.
One of the main problems with adhering to this ethical principle is
that deception is, as some writers observe, widespread in social
research (see Section 6.3 under ‘Ethical transgression is
widespread’). It is rarely possible to give participants a totally
complete account of what your research is about. As we explore in
Thinking deeply 6.1, sometimes research requires an element of
deception. As Punch (1979) found in an incident we will refer to in
Chapter 18 (see Section 18.3 under ‘Active or passive?’), when a
police officer he was working with told a suspect that Punch was a
detective, Punch could hardly announce to the suspect that he was
not in fact a police officer and give a full account of his research.
Despite taking a stance that is predominantly universalist in terms of
ethics, Bulmer (1982) recognizes that there are bound to be instances
like this and considers them justifiable. However, it is very difficult
to know where the line should be drawn.
THINKING DEEPLY 6.1
The role of deception in research
Ethical dilemmas
We have outlined four key ethical principles, but it will be clear by
now that adhering to these principles in practice is not as simple as it
might at first seem (for example, see Learn from experience 6.5).
This comes back to the wide variety of stances on what it means to be
ethical or unethical in a research context.
LEARN FROM EXPERIENCE 6.5
Dealing with ethical dilemmas
Ethics are not the only way in which wider issues impact upon social
research. In this final section we will consider the ways in which
politics (in the sense of status, power dynamics, and the use of power
rather than of political parties and governance) plays an important
role in all stages of social research—sometimes interlinking with
discussions of ethics and bringing up potential ethical concerns.
Being an insider/outsider
Funding of research
Research funding is unlikely to be an issue for undergraduates
conducting a research project, but for postgraduates it may be
applicable. Either way, funding is important to consider: journals
often require researchers to declare who funded their work, and you
may be reading papers and/or reports that have been funded by
external organizations. Much social research is funded by such
organizations as firms and government departments, which often
have a vested interest in the outcomes of the research. The very fact
that some research is funded while other research is not suggests
that political issues may be involved, in that organizations are likely
to want to invest in studies that will be useful to them and supportive
of their operations and worldviews. Such organizations are often
proactive, in that they may contact researchers about carrying out an
investigation or call for researchers to tender bids for an
investigation in a certain area. When social researchers participate in
these kinds of exercises, they are entering a political arena because
they are having to tailor their research concerns and even research
questions to a body that defines, or at least influences, those research
concerns and research questions.
Gaining access
Gaining access to a research setting, for example an organization, is
also a political process, as we discuss further in Chapter 18 (Section
18.3, under ‘Gaining access to a research setting’). Access is usually
mediated by gatekeepers, who are concerned about the
researcher’s motives: what the organization can gain from the
investigation, and what it will lose by participating in the research in
terms of staff time and other costs, and potential risks to its image.
Often, gatekeepers will try to influence how the investigation takes
place in terms of the kinds of questions the researcher can ask, who
can and cannot be the focus of study, the amount of time to be spent
with each research participant, the interpretation of findings, and the
form of any report to the organization itself. Reiner (2000) suggests
that the police, for example, are usually concerned about how they
are going to be represented in publications in case they are portrayed
unfavourably to agencies to which they are accountable. Singh and
Wassenaar observe that ‘ethical dilemmas can occur if the gatekeeper
is coercive in influencing participant involvement in the research’
(2016: 43). Companies are also concerned about how they are going
to be represented. All of this means that gaining access is almost
always a matter of negotiation, so it inevitably turns into a political
process. The results of this negotiation are often referred to as ‘the
research bargain’.
Working in a team
When researchers work in teams, politics may be an influencing
factor, owing to team members’ different career (and other)
objectives and different perceptions of their contributions. These
kinds of issues are unlikely to affect most undergraduate or
postgraduate research projects, but they are worth bearing in mind
given that team-based assignments are becoming increasingly
popular. Another way in which working with other researchers
connected to your institution might influence social research is that
supervisors of postgraduate research and undergraduate
dissertations may be evaluated in terms of the number of
postgraduate students they guide through to completion of a project,
or in terms of the quality of the undergraduate dissertations for
which they were responsible. These kinds of wider political processes
could be relevant to many of this book’s readers.
Publishing findings
Research can be affected by pressure to restrict the publication of
findings. Steele et al. (2019) investigated agreements between
researchers and the Coca-Cola Company. Although the content of the
contracts varied, they generally found that Coca-Cola reserved the
right to review and comment on research before it was submitted for
publication, but that researchers had the right to reject any
suggested changes.
presence of early termination clauses, meaning that Coca-Cola could
stop funding a project before it had ended:
Although not all agreements we reviewed allow for full recall of research documents
and materials, we identified several agreements that in effect allow Coca-Cola to
terminate a study, if the findings are unfavourable to Coca-Cola.
The authors observe that they did not find any evidence of Coca-Cola
blocking the publication of negative research findings, but that these
provisions might influence the conclusions that researchers draw
from their work.
Savage argues (see also Savage and Burrows 2007) that this
jurisdiction is under threat because others now use the methods over
which sociologists claimed special expertise, and new kinds of data
about social issues (such as social media) are emerging that
sociologists play little or no role in curating. The field of research
methods has become an arena in which many groups and individuals
claim to have methodological expertise that allows them to
understand the social world. It is therefore important that, as social
researchers, we both continue the tradition of rigorous empirical
enquiry and engage in new and innovative methodological debates.
This may mean rethinking what skills and competencies should be
part of the social scientist’s toolkit in the twenty-first century and
committing to developing new expertise in emerging areas, such as
the use of social media data in social research.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
While these are the core data-collection methods, so they are the
ones that we focus on here, you should be aware that quantitative
methods are continually evolving with new methods being regularly
developed. It is also worth noting that researchers are not limited to
one approach, and that studies often use more than one method—
this is called multi-method research. Technological innovations
have affected the kinds of quantitative analysis that we can conduct
and have led to the development of software that enables more
sophisticated forms of statistical analysis (MacInnes 2017). We
discuss some of these software packages in Chapter 15. The use of
self-completion questionnaires has also been affected by
increased technological advances (see Chapter 10). For example,
there are now a large number of software packages that can help you
produce surveys, collect data, and analyse the data collected. While
these tools still need the researcher’s input and expertise, they can
certainly simplify the quantitative research process.
Measurement
The most obvious preoccupation in quantitative research is with
measurement, which is seen to be beneficial for a number of reasons.
We will come back to measurement, and the reasons it is so
important to quantitative researchers, in detail in Section 7.5.
Causality
In most quantitative research, explanation is a key focus.
Quantitative researchers are rarely focused on just describing how
things are; they want to say why things are the way they are. For
example, if a researcher was studying racial prejudice, they would
generally want to go further than describing how much prejudice
exists in a certain group of individuals, or what proportion of people
in a sample are prejudiced. They would probably want to examine
the causes of variation in racial prejudice, perhaps explaining it in
terms of personal characteristics (such as levels of authoritarianism)
or social characteristics (such as education or social mobility). The
idea of ‘independent’ and ‘dependent’ variables reflects this
tendency to think in terms of causes and effects. Staying with our
example of racial prejudice, we could consider the concept of racial
prejudice as the dependent variable, which can be explained by
authoritarianism, an independent variable that has a causal
influence upon prejudice.
Generalization
A key concern of quantitative researchers is whether they can
generalize their findings beyond the context of their study—in other
words, whether their research has sufficient external validity
(another term we discussed in Chapter 3). As we touched on in
Chapter 1 and will discuss in more detail in Chapter 8, it is rarely
possible to assess, survey, or interview whole populations,
organizations, or sets of documentation for a study, so we usually
have to use a smaller sample taken from the larger group (the
population). If we want to be able to say that our results can be
generalized beyond the cases (for example, the people) that make up
the sample, then it needs to be as representative of the larger group
as possible. The main way researchers seek to generate a
representative group is through probability sampling, which uses
random selection to create the sample. This largely eliminates bias
from the process of selection, but even this method does not
guarantee sample representativeness because, as we will see in
Chapter 8, it can be affected by other factors.
Replication
The final quantitative research preoccupation we need to consider is
that of replication. Again, we can draw parallels with the natural
sciences, which are often portrayed as aiming to reduce the
contaminating influence of the scientist’s biases and values to a bare
minimum. If biases and lack of objectivity were widespread, the
natural sciences’ claim to provide a definitive picture of the world
would be undermined. In order to minimize the influence of these
potential problems, scientists may seek to replicate (to reproduce)
each other’s experiments. If the findings repeatedly couldn’t be
replicated, serious questions would be raised about their validity. As
a result, scientists are often very explicit about their procedures so
that their experiments can be replicated. Quantitative researchers in
the social sciences also regard the ability to replicate as an important
part of their activity. It is therefore often seen as important that
researchers clearly present their procedures so they can be replicated
by others, even if the research does not end up being replicated.
Figure 7.1 shows the main steps in quantitative research. While the
process is rarely this linear, the diagram represents a useful starting
point for understanding the main steps in quantitative research and
the links between them. We covered some of these steps in Chapters
1, 2, and 3.
FIGURE 7.1 The process of quantitative research
The fact that we start by using existing literature and theory (Step 1)
signifies that there is a largely deductive approach to the relationship
between theory and research. Outlines of the main steps of
quantitative research, including Figure 7.1, commonly suggest that
research questions and a hypothesis (see Key concept 7.1) are
deduced from the theory and tested (Steps 2 and 3)—a process
known as deductivism. However, in quantitative research, including
some published research articles, it is worth noting that research
questions and hypotheses are not always specified and it may be that
researchers only loosely use theory to inform data collection.
Hypotheses are most likely to be presented in experimental research,
but are also often found in survey research, which is usually based on
a cross-sectional design (see Research in focus 3.5). When research
questions are formulated (Step 2) it is worth noting that these are
influenced by the values a researcher possesses and the type of
method employed. For example, in quantitative research it is more
common to refer to terms such as ‘predict’ or ‘cause’ in the research
questions in accordance with the preoccupations of quantitative
research.
KEY CONCEPT 7.1
What is a hypothesis?
Step 9 simply refers to the fact that, once information has been
collected, it needs transforming into ‘data’. In quantitative research,
this is likely to mean that it is prepared so that it can be quantified.
Sometimes this can be done in a relatively straightforward way—for
example, information relating to such things as people’s ages,
incomes, number of years spent at school, and so on. For other
variables, quantification will require coding of the information—
transforming it into numbers to enable the quantitative analysis of
the data. For example, leisure activities or type of accommodation
would require you to code the different responses, turning the
categories of the variables into numeric form. If you plan to carry out
the data analysis using a software package, you will need to code the
data according to the software’s specifications.
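The coding of responses described in Step 9 can be illustrated with a short sketch. The accommodation categories, the numeric codes, and the missing-value code of -9 are all invented for the example; in practice the codebook would follow your own questionnaire and your software's conventions.

```python
# Illustrative coding scheme for a categorical variable: each response
# category is mapped to a numeric code before analysis, mirroring the
# quantification of information described in Step 9 above.

ACCOMMODATION_CODES = {
    "owner-occupied": 1,
    "private rented": 2,
    "social housing": 3,
    "other": 4,
}

def code_responses(responses, codebook, missing=-9):
    """Turn raw text responses into numeric codes; unknowns get a missing code."""
    return [codebook.get(r, missing) for r in responses]

raw = ["private rented", "owner-occupied", "houseboat"]
print(code_responses(raw, ACCOMMODATION_CODES))  # [2, 1, -9]
```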
Finally, the research must be written up—Step 12. This involves more
than relaying to others what has been found. The writing must
convince readers that the conclusions are important and the findings
robust. Once the findings have been published, they become part of
the body of knowledge (or ‘theory’). This means there is a feedback
loop from Step 12 back up to Step 1.
What is a concept?
Concepts are the building blocks of theory and the points around
which social research is conducted. Just think of the numerous
concepts that we touch on in relation to the research examples we
cite in this book:
social capital, ethnic discrimination, gender values, ideological orientation, poverty,
social class, job search methods, emotional labour, negotiated order, culture, academic
achievement, teacher expectations, abusive supervision, cultural capital.
Why measure?
There are three main reasons for the focus on measurement in
quantitative research. These can be summarized as follows.
1. Measurement allows us to identify fine differences between
people's characteristics.
How do we measure?
Devising indicators
The fact that a single question may need to be very general could
mean that it is not able to accurately reflect participants’ responses.
Alternatively, the indicator may be able to cover only one aspect of
the concept, meaning that the researcher misses out on a lot of
relevant information. For example, if you were interested in job
satisfaction, would it be sufficient to ask people how satisfied they
were with their pay? Most people would argue that there is more to
job satisfaction than satisfaction with pay. Multiple questions would
allow you to explore a wider range of aspects of this concept, such as
people’s satisfaction with conditions and with the work itself.
The indices for the scale are formed by scoring the leftmost,
most pro-welfare position as 1, and the rightmost, most anti-
welfare position as 5. The ‘neither agree nor disagree’ option is
scored as 3. The scores to all the questions in each scale are
added and then divided by the number of items in the scale,
giving indices ranging from 1 (most pro-welfare) to 5 (most anti-
welfare).
See http://www.bsa.natcen.ac.uk/media/39290/7_bsa36_technical-details.pdf (accessed 20 March 2021).
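The scoring procedure described in the box can be sketched in a few lines. The book's analyses use packages such as SPSS, R, or Stata, but the arithmetic is the same in any language; this Python sketch simply averages a respondent's item scores into the 1-to-5 index, with the item scores themselves invented for the illustration.

```python
# Sketch of the scale scoring described above: each item is scored from
# 1 (most pro-welfare) to 5 (most anti-welfare), with 'neither agree nor
# disagree' scored as 3; the index is the mean of the item scores.

def scale_index(item_scores):
    """Average a respondent's item scores into an index from 1 to 5."""
    if not item_scores:
        raise ValueError("no items supplied")
    for s in item_scores:
        if not 1 <= s <= 5:
            raise ValueError(f"item score {s} is outside the 1-5 range")
    return sum(item_scores) / len(item_scores)

# A hypothetical respondent answering four scale items:
print(scale_index([1, 2, 3, 2]))  # 2.0, towards the pro-welfare end
```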
Dimensions of concepts
One further approach to measurement is to consider the possibility
that the concept in which you are interested is made up of different
dimensions. This view is particularly associated with Lazarsfeld
(1958). The idea is that when the researcher seeks to develop a
measure of a concept, the different aspects or components of that
concept should be considered. The dimensions of a concept are
identified with reference to the theory and research associated with
it. We can find examples of this kind of approach in Seeman’s (1959)
assertion that there are five dimensions of alienation: powerlessness,
meaninglessness, normlessness, isolation, and self-estrangement.
People scoring high on one dimension may not necessarily score high
on other dimensions, so for each respondent you end up with a
multidimensional ‘profile’. Research in focus 7.5 demonstrates the
use of dimensions in connection with the concept of
counterproductive work behaviour.
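A respondent's multidimensional 'profile' can be pictured as a set of separate scores, one per dimension, rather than a single total. The sketch below uses Seeman's five dimensions of alienation; the scores themselves are invented for the illustration.

```python
# A multidimensional 'profile': one score per dimension of a concept
# (here Seeman's five dimensions of alienation, scored 1-5; invented data).
profile = {
    "powerlessness": 4,
    "meaninglessness": 2,
    "normlessness": 5,
    "isolation": 1,
    "self-estrangement": 3,
}

# A respondent can score high on one dimension but low on another,
# so the dimensions are reported separately rather than summed.
high = [d for d, score in profile.items() if score >= 4]
print(high)  # ['powerlessness', 'normlessness']
```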
RESEARCH IN FOCUS 7.5
Specifying dimensions of a concept
Reliability
As Key concept 7.3 suggests, reliability is concerned with the
consistency of measures. In this way, it can mean much the same as
it does in a non-research-methods context. If you have not put on any
weight and your weighing scales show exactly the same weight each
time you step on them, you would say that your scales are
reliable: their performance does not fluctuate over time and is
consistent in its measurement. We use the term in a similar way in
social research. Key concept 7.3 outlines three different factors
associated with the term—stability, internal reliability, and inter-rater
reliability—which we will now elaborate on in turn.
Stability
Reliability is about the stability of measurement over a variety of
conditions in which the same results should be obtained. The most
obvious way of testing for the stability of a measure is the test–retest
method. This involves administering the test or measure on one
occasion and then re-administering it to the same sample on another
occasion.
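Stability is typically assessed by correlating the scores from the two administrations: a correlation close to 1 suggests a stable measure. The sketch below computes Pearson's r by hand for two invented sets of scores from the same five respondents.

```python
# Test-retest reliability sketch: correlate scores from two administrations
# of the same measure to the same sample (Pearson's r, computed by hand).
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14]  # hypothetical scores at first administration
time2 = [13, 14, 11, 17, 15]  # the same respondents re-tested later
print(round(pearson_r(time1, time2), 2))  # a value near 1 indicates stability
```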
Internal reliability
Validity
Measurement validity is concerned with whether a measure of a
concept really measures that concept (see Key concept 7.4). Consider
the weighing scales whose reliability we discussed earlier: they could
be consistent in their measurement yet always over-report weight by
2 kilograms. So these particular
scales might be reliable in that they are consistent, but they are not
valid because they do not correspond to the accepted conventions of
measuring weight. When people argue about whether a person’s IQ
score really measures or reflects that person’s level of intelligence,
they are raising questions about the measurement validity of the IQ
test in relation to the concept of intelligence. Whenever we debate
whether formal exams provide an accurate measure of academic
ability, we are also raising questions about measurement validity.
KEY CONCEPT 7.4
What is validity?
Face validity
Construct validity
Convergent validity
Convergent validity involves comparing a measure to other
measures of the same concept that have been developed through
different methods—the fact that converging means different things
coming together might help you to remember this term. To establish
convergent validity, you need to show that the measures employed
are related to each other. In addition to using a test of concurrent
validity for their research on gambling expenditure (see Research in
focus 7.6), Wood and Williams (2007) used a diary to estimate
gambling expenditure for a subsample (a sample within a sample) of
their respondents that could then be compared to questionnaire
estimates. Respondents began the diary shortly after answering the
survey question and continued completing it for a 30-day period.
This allowed the researchers to compare what was actually spent in
the month after the question was asked (assuming the diary
estimates were correct!) with what respondents thought they spent
on gambling.
Reverse operationism
One example of the gap between the ideal type and actual research
practice is something called ‘reverse operationism’ (Bryman 1988a:
28). The model of the process of quantitative research in Figure 7.1
implies that concepts are specified and that measures and indicators
are then devised for them. This is the basis of the idea of
operationism or operationalism (introduced in Section 7.4),
which implies a deductive view of research. However, the view of
research presented in Figure 7.1 neglects the more inductive
elements that are often involved in measurement. For example,
sometimes measures are developed that in turn lead to
conceptualization.
The fact that some researchers do not thoroughly assess the quality
of their measurements does not mean that you should or can neglect
this phase in your own work. We note this tendency simply in order
to draw your attention to some of the ways in which practices we
might describe are not always followed, and to suggest why this
might be the case.
Sampling
We can make a similar observation about sampling, which we cover
in Chapter 8. As we will see, good practice here is strongly associated
with random or probability sampling. However, much research (by
necessity) is based on non-probability samples—that is, samples
that have not been selected in terms of the principles of probability
sampling. Sometimes this is due to the difficulty of obtaining
probability samples, or because of the time and cost involved.
And sometimes the opportunity to study a certain group presents
itself and is too good an opportunity to miss. Again, it is important to
be aware of these facts but they do not give us licence to ignore the
principles of sampling that we examine in Chapter 8, especially since
they have implications for the kind of statistical analysis that we can
employ (see Chapter 15).
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
But will any sample suffice? Could you simply locate yourself in a
central position on the campus and then interview the students who
come past you? Alternatively, would it be sufficient to go around the
student union asking people to be interviewed? Or to send
questionnaires to everyone on a particular course? The answer
depends on whether you want to be able to generalize (that is, to
make predictions about the broader population from your sample)
your findings to the entire student body in the university. In order to
be able to generalize your findings from your sample to the
population from which it was selected, the sample must be
representative. See Key concept 8.1 for an explanation of this and of
the other key terms concerning sampling.
KEY CONCEPT 8.1
Basic terms and concepts in sampling
There are two potential sources of bias (see Key concept 8.1 for
explanations of the terms mentioned here).
1. If a non-probability or non-random sampling method is
used. If the method used to select the sample is not random,
it is likely that human judgement will affect the selection
process, making some members of the population more
likely to be selected. This source of bias can be eliminated
through the use of probability/random sampling.
2. If the sampling frame is inadequate. If the sampling frame
is not comprehensive or is inaccurate, the sample that is
used cannot represent the population, even if researchers
use a random/probability sampling method.
Non-response, where some sample members refuse to participate or
cannot be contacted, can also lead to a sample that is
unrepresentative. The problem with this is that those who agree to
participate may differ in various ways from those who do not agree to
participate, and some of the differences may be significant to the
research question. If the data are available, it may be possible to
check how far, when there is non-response, the resulting sample
differs from the population. It is often possible to do this in terms of
characteristics such as gender or age.
There are four main types of probability sample that we could use to
select participants:
3. Decide your sample size (n). We have the means and funds
to include 450 students.
There are two points worth noticing about this process. First, there is
almost no opportunity for human bias, such as selecting people who
look friendly. The selection of whom to interview is mechanical.
Second, the process is not dependent on the students’ availability.
Systematic sample
Because the university will hold information about what faculty the
student is a member of, it will be possible to ensure that students are
accurately represented in terms of their faculty membership. This
means stratifying the population by a criterion (in this case, faculty
membership) and selecting units from within each stratum using
either simple random sampling or systematic sampling. If there are
five faculties, we would have five strata, and we would select one-
twentieth (as we know that a 450-student sample means we are
selecting 1 in every 20 students) of the numbers of students in each
stratum, as shown in the right-most column of Table 8.1. The middle
column of Table 8.1 shows what the outcome of using a simple
random sample in this scenario could be—it could result in a
distribution of students across faculties that does not accurately
reflect the population, for example with higher numbers of pure
science than engineering students included in the sample, despite
the engineering faculty representing a bigger proportion of the
overall population.
Faculty            Population   Possible simple random sample   Stratified sample (1 in 20)
Humanities         1,800        85                              90
Social sciences    1,200        70                              60
Applied sciences   1,800        84                              90
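The stratified procedure just described can be sketched programmatically: draw the same fraction (here 1 in 20) from each faculty stratum using simple random sampling within the stratum. The sketch uses the three faculties whose figures appear in the table extract above, and stands in a numbered range for each faculty's (hypothetical) sampling frame.

```python
# Proportional stratified sampling sketch: sample 1 in 20 students from
# each faculty stratum, using simple random sampling within each stratum.
import random

strata = {"Humanities": 1800, "Social sciences": 1200, "Applied sciences": 1800}
fraction = 1 / 20

rng = random.Random(42)  # fixed seed so the sketch is reproducible
sample = {}
for faculty, size in strata.items():
    n = round(size * fraction)                 # 90, 60, and 90 respectively
    students = range(size)                     # placeholder sampling frame
    sample[faculty] = rng.sample(students, n)  # random selection, no replacement

print({f: len(s) for f, s in sample.items()})
# {'Humanities': 90, 'Social sciences': 60, 'Applied sciences': 90}
```

Because the sampling fraction is fixed, each stratum is represented in exact proportion to its share of the population, which is what the rightmost column of the table shows.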
• Glasgow Caledonian
• Edinburgh
• Teesside
• Sheffield
• Swansea
• Leeds Beckett
• University of Ulster
• University College London
• Southampton
• Loughborough
Notes: 95 per cent of sample means will lie within the shaded area. SE = standard
error of the mean.
Convenience sampling
A convenience sample is, as the name suggests, one that is available
to the researcher because of its accessibility. Imagine that a
researcher who lectures for the education faculty at a university is
interested in the kinds of qualities that teachers look for in their head
teachers. The researcher might administer a questionnaire to several
classes of students, all of whom are teachers taking a part-time
master’s degree in education. It is likely that the researcher will
receive almost all of the questionnaires back, so there will be a good
response rate. The problem with such a sampling strategy is that it is
impossible to generalize the findings because we do not know of
what population this sample is representative. They are simply a
group of teachers available to the researcher. They are almost
certainly not representative of teachers as a whole, especially given
that they are taking this particular degree programme (which may
differ from others in terms of, for example, the students it attracts
and accepts).
Snowball sampling
In certain respects, snowball sampling is a form of convenience
sample, but with this approach to sampling, the researcher makes
initial contact with a small group of people who are relevant to the
research topic and then uses them to establish contacts with others.
The problem with snowball sampling is that it is unlikely that the
sample will be representative of the population. Usually, snowball
sampling is not used within a quantitative research strategy, but
within a qualitative one. There is less concern about external
validity and the ability to generalize within a qualitative research
strategy than there is in a quantitative one (see
Chapter 17). In qualitative research, the approach to sampling is
more likely to be guided by a preference for what is known as
purposive sampling (this involves selecting people who ‘best fit’
the requirements of the study, according to predefined
characteristics) than by the kind of statistical sampling that we have
focused on within this chapter (purposive sampling is discussed in
detail in Section 17.3). There is a much better ‘fit’ between snowball
sampling and the purposive sampling strategy of qualitative research
than with the statistical sampling approach of quantitative research.
This does not mean that snowball sampling is irrelevant to
quantitative research: when the researcher needs to focus upon or
reflect relationships between people, tracing connections through
snowball sampling may be a better approach than conventional
probability sampling (Coleman 1958). In Learn from experience 8.1,
Scarlett explains how she used purposive sampling in a student
project.
LEARN FROM EXPERIENCE 8.1
Purposive sampling
Quota sampling
Quota sampling is fairly rare in academic social research but is used
intensively in market research and political opinion polling. Quota
sampling aims to produce a sample that reflects a population in
terms of the relative proportions of people in different categories,
such as gender, ethnicity, age groups, socio-economic groups, and
region of residence, and combinations of these categories. This might
sound similar to a stratified sample, but in quota sampling, the
sampling of individuals is not carried out randomly: the final
selection of people is left to the interviewer. Quota sampling is
especially useful if it is not possible for you to obtain a probability
sample, but you are still trying to create a sample that is as
representative as possible of the population you are studying
(Sharma 2017). It is a form of purposive sampling procedure where
cases/participants are sampled in a strategic way (see Key concept
17.1).
Once the researcher has decided on the categories they want to use
and the number of people they need from each category (known as
quotas or quota controls), they select people for inclusion in the
sample who fit these categories. As in stratified sampling, the
population might be divided into categories of, for example, gender,
social class, age, and ethnicity. Researchers might use census data to
establish the size of each of these categories in the overall
population, so that they can ensure their sample reflects these
proportions—for example, if Afro-Caribbean people make up 20 per
cent of the population, 20 per cent of the people the researchers
interview should be from this category. Researchers are often looking
for individuals who fit several categories. They might know, for
example, that if their sample is to reflect the population, they need to
find and interview five Asian, 25- to 34-year-old, lower-middle-class
females living in a particular location. The researcher usually asks
people who are available to them about their characteristics in order
to determine whether they fit a particular category. Once they have
enough people from a category (or a combination of categories) to fill
the necessary quota, they no longer need to find participants from
this group. Foster and Heneghan (2018) used quota sampling when
exploring young women’s pension planning. They did this to ensure
that their sample represented a mixture of ages and incomes,
occupations, public/private sector employment, ethnicities, marital
statuses, and whether women had children.
You will have gathered that with this type of sampling, the choice of
respondents is left to the researcher as long as all quotas are filled,
usually within a certain time period. If you have ever been
approached on the street by someone with a clipboard or tablet and
been questioned about your age, occupation, etc., before being asked
a series of questions about something, you have almost certainly
come across an interviewer with a quota sample to fill. Sometimes,
they will decide not to interview you because you do not meet the
criteria needed to fill a quota.
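The street-interviewer's decision rule can be sketched as code: screen each available passer-by and accept them only if their category cell still has quota left. The category labels, quota sizes, and names below are all invented for the illustration.

```python
# Quota-filling sketch: accept passers-by only while their gender/age-group
# quota cell is still unfilled (all categories and quotas are invented).

quotas = {("female", "25-34"): 2, ("male", "25-34"): 1}
filled = {cell: 0 for cell in quotas}
accepted = []

def screen(person):
    """Accept a passer-by if their quota cell still has room; else turn them away."""
    cell = (person["gender"], person["age_group"])
    if quotas.get(cell, 0) > filled.get(cell, 0):
        filled[cell] += 1
        accepted.append(person["name"])
        return True
    return False  # quota already full, or cell not sought at all

passers_by = [
    {"name": "A", "gender": "female", "age_group": "25-34"},
    {"name": "B", "gender": "female", "age_group": "25-34"},
    {"name": "C", "gender": "female", "age_group": "25-34"},  # cell now full
    {"name": "D", "gender": "male", "age_group": "25-34"},
]
print([screen(p) for p in passers_by])  # [True, True, False, True]
```

Note that, unlike probability sampling, nothing here is random: which individuals end up in the sample depends entirely on who happens to pass by and on the interviewer's choices, which is the source of the biases discussed below.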
Quota samples do, however, result in biases more often than random
samples (Yang and Banamah 2013; Tyrer and Heyman 2016). They
under-represent people in lower social strata, those employed in the
private sector and manufacturing, and those at the extremes of
income, and they over-represent women in households with children
and from larger households. But probability samples are often biased
too—under-representing men and those in employment (Marsh and
Scarbrough 1990; Butcher 1994; Rada and Martín 2014).
An issue that has not been given much attention is whether quota
samples produce different findings from probability samples. Yang
and Banamah (2013) selected a probability sample and a quota
sample from the membership of a university student society. For the
quota sample, gender and educational level (master’s degree, PhD,
etc.) were used as quotas. In the case of the probability sample, 22.5
per cent responded more or less immediately, increasing to 42.5 per
cent after a first reminder, and to 67.5 per cent after a second and
final reminder. The questionnaire had only a small number of
questions, which covered issues such as religious participation,
personal friendships, and mutual trust among members. The
researchers found that there were only statistically significant
differences between the findings from the two samples when findings
from the quota sample were compared to the findings after the
second reminder. There were few, if any, statistically significant
differences between the findings from the two samples when the
quota sample was compared with the findings from those who had
replied immediately or those who replied after one reminder. Yang
and Banamah propose that this suggests that when a probability
sample generates a low response rate, it is likely that it does not have
a great advantage over an equivalent quota sample.
Non-response
The problem of non-response, which we briefly considered in Section
8.2, is also important in relation to sample size. Most sample surveys
attract a certain amount of non-response. It is likely that only some
members of a sample will be contactable and that out of those who
are contactable, some will refuse to participate. We saw this in the
HESA example regarding the collection of student employment data
(discussed in Section 8.2). This lack of response from people (units)
is actually called ‘unit non-response’ to distinguish it from item
non-response, which is when people (units) agree to participate
but fail, either deliberately or accidentally, to answer specific items
(questions) on a questionnaire. Here we are using the term
'non-response' to refer to unit non-response. If our aim is to ensure
as far as possible that 450 students are interviewed and we think
that there may be a 20 per cent rate of non-response, then it might
be sensible to sample 550–60 individuals, as approximately 110 of
them will be non-respondents.
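The arithmetic here can be set out as a short calculation. This is an illustrative sketch in Python (not part of the original text), using the figures from the example above:

```python
import math

def required_sample(target_respondents: int, nonresponse_rate: float) -> int:
    """Number of people to approach so that, after the expected rate of
    unit non-response, roughly target_respondents completed interviews remain."""
    return math.ceil(target_respondents / (1 - nonresponse_rate))

# The example from the text: 450 completed interviews wanted,
# with a 20 per cent rate of non-response expected.
print(required_sample(450, 0.20))  # 563 people to approach
```

Approaching about 560 people, of whom roughly 20 per cent do not respond, leaves close to the 450 completed interviews the example aims for.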
KEY CONCEPT 8.2
What is a response rate?
Using pre-recruited panels to complete more than one survey has the
speed advantages of online surveys but eliminates the often lengthy
recruitment process involved in surveys where people have not
previously been recruited. However, pre-recruited panels are not
without potential limitations. In particular, Fricker (2008: 2004)
notes that ‘researchers should be aware that long-term panel
participants may respond differently to surveys and survey questions
than first-time participants (called “panel conditioning” or “time-in-
sample bias”)’. In addition, if there is significant loss of potential
respondents during the recruitment and participation stages, this
can lead to non-response issues.
Another factor that may affect response rates is how far the topic is
interesting or relevant to the sample. During the 2008 US
presidential campaign, Baumgartner and Morris (2010) conducted
an online survey of students, examining the influence of social
networking sites as sources of news upon students’ engagement with
the democratic process. The researchers achieved a respectable
response rate of 37.9 per cent. Although they found little evidence of
social networking sites having an impact on the students’ political
engagement, these platforms play a significant role in young people’s
lives, and the fact that the survey was about them may have helped
the response rate.
The length of time needed to complete an online survey (or any
survey for that matter) can influence response and completion rates
(Clark et al. 2019). Crawford et al. (2001) report the results of a
survey of students at the University of Michigan that experimented
with a number of possible influences on the response rate. Students
in the sample were initially sent an email inviting them to visit the
website, which allowed access, via a password, to the questionnaire.
Some of those emailed were led to expect that the questionnaire
would take 8–10 minutes to complete (in fact, it would take
considerably longer); others were led to expect that it would take 20
minutes. Those led to believe it would take longer were less likely to
accept the invitation, resulting in a lower response rate for this
group. However, Crawford et al. also found that those respondents
who were led to believe that it would take only 8–10 minutes were
more likely to give up on the questionnaire part of the way through,
resulting in unusable partially completed questionnaires. We
recommend piloting your questionnaire to get a clear sense of how
long it will take and being honest about the likely time it will take to
complete. Crawford et al. also found that respondents were most
likely to abandon their questionnaires part way through when in the
middle of completing a series of open-ended questions,
suggesting that it is best to minimize the number of these questions
in self-completion questionnaires.
We have seen that sampling is a key area where ‘error’ can occur in
survey research. We can think of ‘error’ as being made up of three
main factors (Figure 8.9), one of which relates to sampling
procedures.
The second and third sources of error relate to factors that are not
associated with sampling and instead relate much more closely to
concerns about the validity of measurement, which we considered in
Chapter 7. In Chapters 9–11, we will consider the steps that you can
take to keep these sources of error to a minimum in survey research.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Interviews are used regularly in many aspects of social life. They can
take the form of job interviews, university entrance interviews, media
interviews, or appraisal interviews. There are also research
interviews, which are the kind of interview that we will cover in this
and other chapters (including Chapters 19 and 20).
There are other ways in which coding can introduce error. The
variation that the researcher observes will not reflect the true
variation in responses if the rules for assigning answers to categories,
collectively known as the coding frame, are flawed, and even if
these rules are sound, there is a risk of variability in the ways in
which answers are categorized. As with interviewing, this variability
comes in two forms: intra-rater variability, whereby the researcher
applies the coding inconsistently in terms of the rules for assigning
answers to categories, and inter-rater variability, where if more
than one person is conducting the coding, they differ from each other
in the way they apply the rules for assigning answers to categories.
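Inter-rater variability of this kind is often checked by having two coders code the same answers and computing a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a minimal illustration only; the coding categories and data are invented:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders who assigned
    categories to the same set of answers."""
    n = len(coder_a)
    # Observed agreement: proportion of answers coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal distribution.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(counts_a[c] * counts_b[c]
              for c in set(coder_a) | set(coder_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two coders assigning invented categories to the same ten open answers.
coder_a = ["pay", "pay", "security", "pay", "conditions",
           "security", "pay", "conditions", "security", "pay"]
coder_b = ["pay", "pay", "security", "security", "conditions",
           "security", "pay", "conditions", "security", "conditions"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.7
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, so low values signal that the coding frame or its application needs attention.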
Computer-assisted interviewing
Researchers often use interviewing software to administer one of the
interview methods we have discussed, especially commercial survey
research organizations conducting market research and opinion
polls. There are two main formats for computer-assisted
interviewing, both of which we have touched on already: computer-
assisted personal interviewing (CAPI) and computer-assisted
telephone interviewing (CATI). A large percentage of telephone
interviews are conducted with the help of software. Among
commercial survey organizations, almost all telephone interviewing
is of the CATI kind, but CAPI is increasingly widely used. The main
reason for this increase is the growth in the number and quality of
software packages that provide a platform for devising interview
schedules; this makes them more suitable for use in face-to-face
interviews. Many of the large data sets used for secondary
analysis (we provide examples of these in Chapter 14) derive from
computer-assisted interviewing studies carried out by commercial or
large social research organizations.
Approaching interviewees
Approaching potential interviewees is a key part of the research
process. If your approach lacks clarity about who you are, the
research you are conducting, and why you are conducting it, you are
likely to put off participants from being involved. You have already
seen, in Chapter 8, some of the challenges involved in identifying an
appropriate sample and difficulties regarding response rates. You
will need to be polite, organized, and appreciative when you
approach potential interviewees. We recommend that you consider
Tips and skills 9.1.
TIPS AND SKILLS 9.1
Approaching potential interviewees
The following tips will help you recruit interviewees for your
study.
• Participation is voluntary.
Question wording
It is crucial that each respondent is asked exactly the same questions
in structured interviews because (as we saw in Section 9.2) variation
in the ways a question is asked is a potential source of error in survey
research. While structured interviews reduce the likelihood of this
occurring, they cannot guarantee that it will not occur because there
is always the possibility that interviewers will embellish or adapt a
question when asking it. If you carefully consider question wording
when preparing the interview, you can limit the chance of this
happening.
Question order
Keeping to the set order when asking questions is also important in
surveys and requires careful planning. Varying the question order
can result in questions being accidentally omitted, because the
interviewer may forget to ask those that they have skipped during the
interview. Varying the question order may have an impact on replies:
if some respondents have been previously asked a certain question
whereas others have not, this will have introduced variability in the
asking of questions, which is a potential source of error. It is also
worth noting that response quality tends to decrease as the survey
progresses (Barge and Gehlbach 2012). You should bear this in mind
when constructing your interview schedule, to ensure that you do not
leave key questions until too late.
Quite a lot of research has been carried out on the general subject of
question order, but with limited evidence of consistent effects on
people’s responses linked to asking questions at different points in a
survey. Different effects have been shown on various occasions. A
classic study in the USA found that people were less likely to say that
their taxes were too high when they had been previously asked
whether government spending ought to be increased in a number of
areas (Schuman and Presser 1981: 32). The researchers felt that
some people perceived an inconsistency between wanting more
spending and lower taxes, and this led to them adjusting their
answers. The same researchers’ study on crime victimization in the
USA also suggested that earlier questions may affect later ones
(Schuman and Presser 1981: 45).
There are two general lessons to keep in mind when thinking about
question order in a survey:
1. Interviewers should not vary question order (unless this is
the subject of the study!);
2. You should be sensitive to the possible implications of the
effect of early questions on answers to subsequent questions.
It is worth bearing the following rules in mind when deciding on
question order.
• Early questions should be directly related to the topic of the
research, which the respondent has already been informed
about. Personal questions about age, social background, and
so on should ideally not be asked at the beginning of an
interview (although there are some variations in views on this,
with some researchers believing that straightforward
questions are a good way of ‘warming up’ participants).
• Where possible, questions that are more likely to be of most
relevance to respondents should be asked early in the
schedule, in order to attract respondents’ interest and
attention.
• Sensitive questions or ones that might be a source of anxiety
should be left till later in the survey.
These rules may seem clear and easy to apply, but question order
remains one of the more frustrating areas of structured interview
and questionnaire design because the evidence regarding its
implications is inconsistent.
presumes that the respondent voted. The possibility that this is not
in fact the case can be reflected in the fixed-choice answers by
including a 'did not vote' option. A better solution is not to presume
anything about voting behaviour but, instead, first to ask
respondents whether they voted in the last general election and then
to filter out those who did not vote. Questions about party choice
can then be asked only of those who voted. Tips and skills 9.4
provides a simple example in connection with an imaginary study of
alcohol consumption. The key point here is that the interviewer must
have clear instructions. Without such instructions, there is the risk
that either the interviewer will ask respondents inappropriate
questions (which can be irritating for them) or the interviewer will
inadvertently fail to ask a question (which results in missing
information). In Tips and skills 9.4, the contingent questions (1a and
1b) are indented and there is an arrow to indicate that a ‘Yes’ answer
should be followed by question 1a and not question 2. These visual
aids can help to reduce the likelihood of interviewers making errors.
TIPS AND SKILLS 9.4
Instructions for interviewers in the use of a filter question
Building rapport
Building rapport with the respondent is important when
interviewing, as it encourages the other person to want (or at least be
prepared) to participate in and continue with the interview.
However, while we recommend that interviewers are friendly with
respondents and put them at ease, it is important not to go too far.
Too much rapport could result in the interview going on too long and
the respondent suddenly deciding that they are spending too much
time on it. A particularly friendly environment could also lead
to the respondent answering questions in a way that is designed to
please the interviewer. It could also result in the interviewer being
more tempted to stray from the more formal structured schedule.
Asking questions
Probing
Prompting
Card 6
Strongly agree
Agree
Undecided
Disagree
Strongly disagree
Card 11
1. Below 20
2. 20–29
3. 30–39
4. 40–49
5. 50–59
6. 60–69
7. 70 and over
Recording answers
We have discussed concerns about consistency in asking questions,
and interviewers also need to record answers carefully, with
respondents’ replies written down as exactly as possible. If
interviewers do not do this, they will introduce error. These errors
are less likely to take place when the interviewer simply has to
allocate respondents’ replies to a category, as in a closed-ended
question, than when they are writing down answers to open-ended
questions (Fowler and Mangione 1990). Where more than one
interviewer is recording the answers to open-ended questions, this
increases the likelihood of inter-interviewer variability, whereby
interviewers are inconsistent with each other in the ways in which
they record answers. If you record the structured interview, this
makes it easier to revisit the respondent’s replies and to ensure that
you have transcribed them accurately. While you might not consider
this necessary, especially if your structured interview only uses
closed questions, it is certainly worth investigating where the
responses deviate from this closed form (see Chapter 19 for further
details on transcription). In Chapter 11 we explore in detail the
process of recording answers and coding responses. We will focus on
strategies to limit errors in this process and consider whether you
should design codes before you begin collecting data, or after you
have collected it.
Before you move on, check your understanding of this section by
answering the following self-test questions:
The interviewer’s work does not stop as soon as they have asked the
last question. It is very important to thank respondents for giving up
their time at the end of the interview process. The interviewer also
needs to act and speak carefully following the interview in order to
avoid introducing bias. This can occur because after an interview,
respondents sometimes try to engage the interviewer in a discussion
about the purpose of the interview and might communicate anything
they are told to future respondents, which could bias the findings.
Interviewers should be friendly and appreciative of the respondent’s
time but still firm and professional, and they should resist
elaborating on the research beyond their standard statement.
Characteristics of interviewers
As we touched on in Section 9.3 under ‘In person, video interview or
telephone?’, there is evidence that interviewers’ attributes can have
an impact on respondents’ replies, although the literature makes it
difficult to generalize on this. This is partly because of the problem of
disentangling the effects of interviewers’ different attributes from
each other (race, gender, socio-economic status); the interaction
between the characteristics of interviewers and the characteristics of
respondents; and the interaction between any effects observed and
the topic of the interview. However, there is certainly some evidence
that the characteristics of interviewers can affect respondents'
replies.
Response sets
It has been suggested that structured interviews are particularly
prone to the operation among respondents of what Webb et al.
(1966: 19) call ‘response sets’, which they define as ‘irrelevant but
lawful sources of variance’. The idea is that people respond to the
series of questions in a way that is consistent but that is irrelevant to
the concept being measured. This form of response bias is
especially relevant to multiple-indicator measures (see Chapter
7), where respondents reply to a series of related questions or items,
of the kind found in a Likert scale (see Key concept 7.2).
Acquiescence
In this context, acquiescence is the tendency shown by some
respondents to consistently agree or disagree with a set of questions
or items. For instance, if a study used a series of five Likert items
with respondents being asked to indicate the degree to which the
statement is true or false, and a respondent replied ‘Definitely true’
to all five very similar items, despite the fact that four of the five are
in a positive direction and one is in a negative direction, this implies
a form of acquiescence. One of the replies is inconsistent with the
four other answers. One of the reasons why researchers who employ
this kind of multiple-item measure use wordings that imply
opposite stances (that is, some items implying a high level of clarity
and others implying a low level of clarity) is to weed out respondents
who appear to be replying within the framework of an acquiescence
response set. Acquiescence is a form of satisficing behaviour in
surveys (see Key concept 9.2).
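The use of reverse-worded items can be made concrete in code. The sketch below is an invented illustration: it reverse-scores one negatively worded item on a five-point scale and flags respondents who give identical raw answers to every item, a pattern consistent with an acquiescence response set:

```python
# Five-point Likert responses (1 = strongly disagree ... 5 = strongly agree).
# The item at zero-based index 3 is negatively worded, so it must be
# reverse-scored before the items can be summed into a scale score.
REVERSED_ITEMS = {3}

def reverse(score: int, points: int = 5) -> int:
    """Map 1<->5 and 2<->4 on a five-point scale."""
    return points + 1 - score

def scale_score(responses):
    """Sum the items, reverse-scoring the negatively worded one."""
    return sum(reverse(r) if i in REVERSED_ITEMS else r
               for i, r in enumerate(responses))

def looks_acquiescent(responses):
    """Identical raw answers to every item, despite one item being worded
    in the opposite direction, suggests an acquiescence response set."""
    return len(set(responses)) == 1

respondent = [5, 5, 5, 5, 5]          # agrees strongly with everything
print(scale_score(respondent))        # 21: the reversed item contributes 1, not 5
print(looks_acquiescent(respondent))  # True
```

A genuinely consistent respondent would give the opposite answer to the reversed item, so uniform agreement across the whole set is a useful warning sign rather than proof of acquiescence.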
There are several strategies for checking and reducing the risk of
socially desirable responding. One is not to use interviewers, as there
is some evidence that self-completion forms of answering are less
prone to the problem (Tourangeau and Yan 2007; Yeager et al. 2011;
Gnambs and Kaspar 2015). It is also possible to soften questions that
are likely to produce social desirability bias, for example using
‘forgiving’ wording and phrases such as ‘Everybody does it’ (Näher
and Krumpal 2012). Minson et al. (2018) conducted a study looking
at undesirable work-related behaviours in the USA and found that
‘negative assumption’ questions that, in effect, presuppose a problem
led to greater disclosure of undesirable work-related behaviours than
when they asked ‘positive assumption’ questions that presuppose the
absence of a problem, and general questions that do not reference a
problem.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
We have already pointed out that there are similarities between self-
completion questionnaires and structured interviews, but the
obvious difference between these methods is that an interviewer is
present in structured interviews, whereas self-completion
questionnaire respondents must read and answer the questions
themselves. Because there is no interviewer administering the self-
completion questionnaire, this research instrument has to be
particularly easy to follow, with questions that are extremely clear
and easy to answer. This means that, in comparison to interviews,
self-completion questionnaires tend to have fewer open questions, be
easier to follow, and be shorter.
Advantages of self-completion
questionnaires compared with
structured interviews
Cheaper to administer
Quicker to administer
Disadvantages of self-completion
questionnaires compared with
structured interviews
The lower the response rate, the more questions are likely to be
raised about the representativeness of your sample and therefore the
external validity of your findings. In a sense, this is likely to be an
issue only with randomly selected samples; response rates are less of
an issue for samples that are not selected on the basis of
probability sampling, because these are not expected to be
representative of the population.
There is considerable variation in what social scientists consider to
be an acceptable response rate. Mellahi and Harris (2016) undertook
a meta-analysis of academic articles published in management
journals between 2009 and 2013. They found that among the 1,000
articles they reviewed there was considerable variation in response
rates, which ranged from 1 per cent to 100 per cent with a mean of
just under 45 per cent, despite much of the literature arguing for a
response rate of at least 50 per cent: in other words, journals were
usually willing to publish studies even when their response rates
were considerably lower than 50 per cent. We are certainly not
suggesting that response rates do not matter, but it is worth being
aware that much academic research has a response rate below the
recommended figures. The key point is to recognize and
acknowledge the implications and possible limitations of a low
response rate, and to do what you can to mitigate the issue—see Tips
and skills 10.1 and Learn from experience 10.1.
TIPS AND SKILLS 10.1
Steps to improve response rates in self-completion
questionnaires
As you can see from Tips and skills 10.1, there are a
number of strategies researchers can employ to try
to increase the response rate in self-completion
questionnaires. Here, our panellists outline some of
the different approaches they used that they believe
worked well. For instance, Ben, whose research
explored the challenges and opportunities for data
visualization to improve understanding of datafication,
discusses different ways he promoted his research
and increased its visibility. Scarlett, who focused on mental
wellbeing and social media, emphasizes the idea of
giving back to those you have researched, while
Grace, who had used self-completion questionnaires in
other work, discusses the length of the questionnaire
as well as the possible use of incentives.
I invested significant time on social media to engage mostly with
two digital networks (Twitter generally, and a large specific online
community related to my topic). With regard to the topic-related
community, I was active in other discussions and visible in other
ways so that I didn’t seem to be just trying to generate survey
engagement. Furthermore, I wrote a quite lengthy article on
Medium that explained my research and the survey, as a means of
enabling participants to fully understand the research before taking
part. Both of these factors seemed to help the success of the
survey.
Ben
When using self-completion questionnaires for my dissertation
project, I received a high response rate. I believe this was because
the questions were short and to the point. Participants are more
inclined to take part in research if there is an incentive, but as our
undergraduate dissertation projects are not funded, monetary
incentives are not possible. For my project, I emphasized to
participants that their responses were going to have an impact and
make a difference. I explained to them that the project was
partnered with local organizations who were interested in hearing
what they had to say about social media and its impact upon their
mental wellbeing. I think reiterating this to participants encouraged
them to include as much information as they could.
Scarlett
It is common that people will not fill out questionnaires that are
too time-consuming or offer no reward. Therefore, it is important to
keep your questionnaire as short as you possibly can without
compromising the data and, if possible, offer incentives for
completing the questionnaire. This is difficult in the context of
undergraduate research as you may not have access to money, but
it is worth discussing possible incentives with your supervisor.
Grace
Researchers’ fear of low response rates can mean that they try to
make their questionnaire appear as short as possible, to avoid
putting off prospective respondents, but this can make the
questionnaire more cramped and less attractive. As Dillman et al.
(2014) observe, an attractive layout is likely to enhance response
rates, whereas the kinds of tactics that are sometimes used to make a
questionnaire appear shorter than it really is—such as reducing
margins and the space between questions—can do the opposite. Also,
if questions are too close together, there is a risk that they will be
accidentally omitted, creating the problem of item non-response
resulting in missing data. This is not to say that you should leave lots
of space, as this does not necessarily result in an attractive format
and risks making the questionnaire look bulky, but there is a balance
to be struck.
For a positively worded item, the response categories are scored:
Strongly agree = 5
Agree = 4
Undecided = 3
Disagree = 2
Strongly disagree = 1
For a negatively worded item, the scoring is reversed:
Strongly agree = 1
Agree = 2
Undecided = 3
Disagree = 4
Strongly disagree = 5
FIGURE 10.1 A Likert item exploring the concept of 'learning community' from the UK's
National Student Survey
Source: Example NSS question, The National Student Survey: Consistency, controversy and
change, Office for Students, 2020. © The Office for Students copyright 2020.
Provide clear instructions about how
to respond
You always need to be clear about how you want respondents to
indicate their replies when answering closed-ended questions. Are
they supposed to place a tick next to, circle around, or line
underneath the appropriate answer, or are they supposed to delete
inappropriate answers? Also, in many cases the respondent may
want to choose more than one answer—is this acceptable to you, and
are your instructions clear in relation to this?
If you want the respondent to choose only one answer, you might
want to include an instruction such as:
(Please choose the one answer that best represents your views by placing a tick in
the appropriate box.)
Read Learn from experience 10.2 to hear how our panellists went
about designing self-completion questionnaires.
LEARN FROM EXPERIENCE 10.2
Designing a self-completion questionnaire
Most of the issues we have discussed so far relate to both online and
paper-based surveys. However, we now need to turn to a number of
issues that relate particularly to digital surveys. As previously noted,
there is a crucial distinction between surveys completed by email and
surveys completed online. In the case of the former, the
questionnaire is sent via email to a respondent, whereas with an
online survey, the respondent is directed to a website in order to
answer a questionnaire. Online surveys have increasingly become the
preferred choice for researchers of all kinds, largely because of the
growing number of useful online survey sites for designing these
kinds of questionnaires.
Email surveys
It is important to distinguish between embedded email surveys
and attached email surveys. With embedded questionnaires,
questions are set out in the body of an email, although there may be
an introduction to the questionnaire followed by some marking (for
example a horizontal line) that separates the introduction from the
questionnaire itself. Respondents have to click ‘reply’ and add their
answers into the email below. They are often asked to indicate their
replies using simple notations below each question, such as an ‘×’; or
they may be asked to delete alternatives that do not apply; or when
questions are open-ended, they may be asked to type in their
answers. They then simply need to click ‘send’ to return their
completed questionnaires. An attached questionnaire is similar in
many respects, but the questionnaire arrives as an attachment to an
email that introduces it. As with embedded questionnaires,
respondents select their answers and/or type them into the
document before returning it to the researcher, normally as an email
attachment again.
Online surveys
You will almost certainly have had some personal experience with
this type of self-completion questionnaire. With surveys in this
format, prospective respondents are simply sent a URL through
which they can access and complete the questionnaire online.
Online surveys have an important advantage over email surveys in
that there is much more scope to customize their appearance, and
many useful tools and types of functionality that can enhance their
usability, not only for respondents but also for researchers when they
come to collect and analyse the data. For example, to help
respondents navigate the questionnaire, you can design it to skip
automatically to the next appropriate question when there is a filter
question (for example, ‘if Yes go to question 12, if No go to question
14’), and you can choose to only display one question at a time to the
respondent. It is also possible to dictate which questions must be
answered and which are optional.
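The skip logic described here can be thought of as a mapping from each question and answer to the next question to display. The sketch below is purely illustrative (the question numbers and wording are invented, and this is not the interface of any particular survey platform):

```python
# Each question holds its text and a routing rule: which question to
# show next, possibly depending on the answer given. A None key means
# the route is unconditional; a None destination ends the survey.
QUESTIONS = {
    11: {"text": "Did you vote in the last general election? (Yes/No)",
         "route": {"Yes": 12, "No": 14}},
    12: {"text": "Which party did you vote for?",
         "route": {None: 13}},
    13: {"text": "How did you decide which party to vote for?",
         "route": {None: 14}},
    14: {"text": "What is your age group?",
         "route": {None: None}},
}

def next_question(current: int, answer: str):
    """Return the number of the next question to display, applying the
    filter/skip rule attached to the current question."""
    route = QUESTIONS[current]["route"]
    return route.get(answer, route.get(None))

print(next_question(11, "Yes"))  # 12: voters get the contingent question
print(next_question(11, "No"))   # 14: non-voters skip straight past it
```

Routing the questionnaire this way removes the risk that an interviewer (or respondent) applies a filter instruction incorrectly, which is one of the main attractions of online administration.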
Figure 10.2 shows part of the questionnaire from a survey about gym
use that we will examine in Chapter 15. Here, it is formatted as an
online survey, but it has been answered in the same way as in
Chapter 15, which presents it as a postal survey. As you can see, it
includes some open-ended questions (for example question 2),
where the respondent is invited to type directly into a boxed area.
The researcher can set a character limit for these answers if they
choose.
It is also worth noting that there can be technical issues with these
forms of self-completion questionnaires. For instance, if respondents
choose to complete the questionnaire in more than one sitting,
previous answers may be lost, resulting in the questionnaire being
incomplete or the respondent having to start again. These types of
technical issues can mean that researchers receive, and have to
respond to, a lot of queries.
Resource issues
Sampling-related issues
Questionnaire issues
Issues to consider                     Mode of survey administration
                                       Structured    Structured    Postal           Email    Online
                                       in-person     telephone     questionnaire
                                       interview     interview
Does it give respondents the
opportunity to consult others          ✓✓            ✓             ✓✓✓              ✓✓✓      ✓✓✓
for information?
Before you move on, check your understanding of this section by answering
the following self-test questions:
There are three main ways in which we use the term ‘diary’ in
the context of social research:
Source: Adult example, Eurostat (2019), Harmonised European Time Use Surveys
(HETUS) 2018 Guidelines, 2019 edition, page 95. Reprinted with permission.
On the other hand, diary studies can suffer from the following
problems.
Despite these problems, the data resulting from diaries are likely to
be more accurate than the equivalent data from interviews or
questionnaires. See Tips and skills 10.5 for guidance on preparing a
diary for use in data collection.
TIPS AND SKILLS 10.5
Preparing a diary as a mode of data collection
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Open-ended questions
While open-ended questions have both advantages and
disadvantages in survey research, the problems associated with
processing their answers mean that researchers tend to favour
closed-ended questions. Open-ended questions are more extensively
used in qualitative forms of interviewing, as we show in Chapter 19,
where researchers find them to be a particularly valuable approach
to collecting data. Let’s consider the pros and cons of this format.
Advantages
• They are useful for exploring new areas or ones where the
researcher has limited knowledge, as they enable respondents
to highlight areas that they think are significant, rather than
the researcher having to undertake the potentially time-
consuming process of identifying fixed-choice alternatives.
Disadvantages
[Example of pre-coded fixed-choice answers, with 'Good' ticked by the respondent:
Good ____ ✓
Fair ____ 3
Poor ____ 2]
Closed-ended questions
Having reflected on the advantages and disadvantages of open-ended
questions, you can probably anticipate some of the pros and cons of
using a closed-ended format. We consider them here.
Advantages
Disadvantages
Closed-ended answer                             %         Open-ended answer involving …    %
Work where there is not too much                11.7      Control of work                  4.6
supervision and you make most
decisions yourself
Work that is pleasant and people                19.8      Pleasant work                    14.5
are nice to work with
                                                96% of                                     57.9% of
                                                sample                                     sample
                                                          Working conditions               3.1
                                                          Benefits                         2.3
                                                          Satisfaction/liking a job        15.6
                                                4.0% of                                    41.9% of
                                                sample                                     sample
• Don’t know
Don’t know
(b) In fact, Jim and Margaret are prepared to move and live near Jim’s
parents, but teachers at their children’s school say that moving might
have a bad effect on their children’s education. Both children will soon
be taking O-levels [predecessors to the UK’s current GCSE
examinations, usually taken by 14- to 16-year-olds].
Stay
(c) Why do you think they should move/stay?
Probe fully verbatim
(d) Jim and Margaret do decide to go and live near Jim’s parents. A year
later Jim’s mother dies and his father’s condition gets worse so that he
needs full-time care.
Should Jim or Margaret give up their jobs to take care of
Jim’s father? IF YES: Who should give up their job, Jim or
Margaret?
Yes, Jim should give up his job
Yes, Margaret should give up her job
Agree ____
Undecided ____
Disagree ____
Over the years, numerous rules have been devised for ‘dos’ and
‘don’ts’ of asking questions in social science research. Despite the
quantity of advice available, this aspect of research is one of the most
common areas for making mistakes. There are three simple general
principles we recommend thinking about as a starting point: bearing
your research questions in mind at all times; thinking about exactly
what you want to know; and thinking about how you would answer
each question. Beyond those principles, the more specific rules we
set out in this section will help you to avoid pitfalls.
General principles
You also need to think about what forms of analysis you will use to
address your research questions and what measurement levels you
might need to use: we will discuss these issues further in Chapter 15.
It may sound obvious, but it’s crucial that you decide exactly what it
is you want to know and make sure your questions address this.
Taking account of these general principles, and the following dos and
don’ts about asking questions, should help you to avoid the more
obvious pitfalls.
When developing questions, you need to make sure that their fixed-
choice answers align with and accurately represent them. Thinking
deeply 11.1 provides an example where the question and answers do
not match. Make sure the possible answers cover the likely responses
to each question, so that the data you collect address your research
questions appropriately.
THINKING DEEPLY 11.1
Matching question and answers in closed-ended (and
double-barrelled) questions
Poor ____
Acceptable ____
Average ____
Good ____
Excellent ____
Of course, we have changed the sense slightly here because
a further problem with the question is that it is double-barrelled.
In fact, it is ‘treble-barrelled’, because it is actually asking about
three attributes of the writing in one question. This is
problematic because the reader may have very different views
about the three qualities—the writing may have been very
imaginative but not, in their view, elegant.
The important point here is to think carefully about the
connection between the question and the fixed-choice answers
you provide, because a disjunction between the two could
negatively affect the data you gain.
Excellent ____
Good ____
Acceptable ____
Poor ____
In this case, the response choices are weighted towards a favourable
response. Excellent and Good are both positive; Acceptable is a
neutral or middle position; and Poor is the only negative response, so
the answers are loaded in favour of a positive rather than a negative
reply. Another negative response choice (perhaps Very poor) is
needed here.
RESEARCH IN FOCUS 11.5
Providing ‘don’t know’ options
Instead, you should try to ask about actual frequency, such as:
How frequently do you usually visit the cinema?
(Please tick whichever category comes closest to the number
of times you visit the cinema)
More than once a week ____
suffers from the same problem. A child may be very helpful when it
comes to cooking but totally uninvolved in cleaning, so any answer
about how frequently they help with both activities will be
ambiguous and create uncertainty for respondents and researchers.
No ____
If Yes, which political party did you vote for?
Do you agree with the view that students should not have to
take out loans to finance higher education?
It would be easy to misread this question as asking whether students
should take out loans to finance higher education. Asking the
question in a positive format (the question above with the word
‘not’ removed) makes it much easier to respond to. Questions
containing double negatives should be avoided altogether, because it
is difficult to know how to respond to them. An example of this kind
of question might be:
The problem here is that many respondents will not know what is
meant by ‘alienated’, and even if they do understand it, they are
likely to have different views of its meaning.
Agree ____
Undecided ____
Disagree ____
Strongly disagree ____
Here, the problem is the use of the acronym TUC (the Trades Union
Congress in the UK). You should avoid acronyms where possible
because some people may not know what they stand for.
Avoid stretching respondents’ memories too far
Going to a gym □
Sport □
Jogging □
Long walks □
Yes No
Going to a gym □ □
Sport □ □
Jogging □ □
Long walks □ □
Many people would assume that these two ways of asking questions
are equivalent, but Smyth et al. (2006) have shown that the forced-
choice format (the second example) results in more options being
selected. As a result, Dillman et al. (2014) advocate using the
forced-choice format for this kind of question (see also Krosnick
2018).
Read Tips and skills 11.2 to make sure you avoid the most common
mistakes when asking questions in your research project.
TIPS AND SKILLS 11.2
Avoiding common mistakes when asking questions
Yes____
No____
• This ignores the possibility that respondents’ feelings are
not a simple matter of being satisfied or not satisfied. As
people vary in the intensity of their feelings about such
things, it would be worth rephrasing the question like this:
How satisfied are you with opportunities for
promotion in the firm?
Very satisfied____
Satisfied____
Neither satisfied nor dissatisfied____
Dissatisfied____
Very dissatisfied____
• On self-completion questionnaires, provide clear
instructions about how the questions should be
answered, including specifying whether a tick, circle, or
deletion (etc.) is required. If only one response is
required, make sure you say so—for example, ‘tick the
one answer that comes closest to your view’.
• Be careful about letting respondents choose more than
one answer. Although this can sometimes be
unavoidable, questions that allow more than one reply
are often more challenging to analyse.
• Formulate closed answers that are mutually exclusive
and that include all categories. Avoid using questions like
this:
How many times per week do you typically use
public transport?
1–3 times ____
3–6 times ____
6–9 times ____
Not only does the respondent not know which answer
to choose if their answer is 3 or 6 times; there is also
no answer for someone who would want to answer 10.
• Provide an appropriate time frame with questions. The
question ‘How much do you earn?’ is problematic
because it does not provide the respondent with a time
frame. Is it per week, per month, or per year? Another
problem is that respondents need to be told whether the
figure required should be gross (that is, before
deductions for tax, national insurance, etc.) or net (after
deductions). Given the sensitivities surrounding a
person’s salary, it would be best not to ask the question
this way anyway and instead to provide a set of income
bands on a show card (for example, below £10,000;
£10,000–£19,999; £20,000–£29,999; etc.).
• Use a format that makes it easy for respondents to
answer and that reduces the likelihood of them making
mistakes in answering.
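The points above about mutually exclusive and exhaustive answer categories, and about income bands, can be pulled together in a short sketch. This is a hypothetical illustration, not part of the book's materials: the bands follow the show-card example in the text, and the top band is an invented addition that keeps the set exhaustive.

```python
# Hypothetical sketch: assigning a reported income to one of the show-card
# bands suggested above. Fixed-choice bands must be mutually exclusive
# (no value falls in two bands) and exhaustive (every value falls somewhere).
INCOME_BANDS = [
    (0, 10_000, "below £10,000"),
    (10_000, 20_000, "£10,000–£19,999"),
    (20_000, 30_000, "£20,000–£29,999"),
    (30_000, float("inf"), "£30,000 and above"),  # invented top band: keeps the set exhaustive
]

def band_for(income):
    """Return the single band label that contains `income`."""
    for lower, upper, label in INCOME_BANDS:
        if lower <= income < upper:  # half-open intervals cannot overlap
            return label
    raise ValueError("bands are not exhaustive")
```

Because each band is a half-open interval, an answer such as £10,000 falls into exactly one band; this is precisely what the flawed public-transport question fails to guarantee for an answer of 3 or 6.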
You should not carry out piloting on people who might be members
of the sample that will be employed in your main study. If you are
intending to use probability sampling, selecting a number of
members of the population or sample in your pilot may affect the
representativeness of any subsequent sample. If possible, it is best to
find a small set of respondents who are comparable to members of
the population from which you will take the sample for your main
study. For example, if you were interested in doing a study of
students from one university, you could use a small number of
students from another university for your pilot. Hear about our
panellists’ experience of piloting questions for their projects in Learn
from experience 11.1.
LEARN FROM EXPERIENCE 11.1
Piloting questions in student projects
TIPS AND SKILLS 11.3
Getting help in designing questions
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Tutor
5. discussing topic;
6. making arrangements;
7. silence.
Student(s)
8. asking question;
Note: Each cell represents a five-second interval and each row is one minute. The number in
each cell is the code that represents a category of observed behaviour.
Sampling people
The idea that people may change their behaviour because they know
they are being observed is known as the reactive effect (Key
concept 12.3). If people adjust the way they behave because they
know they are being observed (perhaps because they want to be
viewed in a favourable way by the observer), their behaviour would
be atypical. This means we could not regard the results of structured
observation research as necessarily indicative of what happens with
no observer present. Of course, not all structured observation is
known to the participants being observed; this was the case for the
studies in Research in focus 12.1, 12.2, 12.3, and 12.4 (though covert
observation comes with ethical considerations). McCall (1984) points to evidence
that a reactive effect occurs in structured observation, but also that,
by and large, research participants become accustomed to being
observed, so the researcher essentially becomes less intrusive the
longer they are present.
KEY CONCEPT 12.3
The reactive effect
Our aim here is not to put you off from considering structured
observation as an approach to research, but to make you aware of
some of the challenging ethical considerations that you will need to
explore.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Content analysis is a research technique for the objective, systematic and quantitative
description of the manifest content of communication.
Selecting a sample
There are several phases involved in selecting a sample for content
analysis: you need to sample both the media to be analysed and the
dates or time period(s) in which you are interested. Here, we will
focus on using sampling to analyse the mass media, as the basic
principles of this process are relevant to content analysis more
generally. However, remember that content analysis can be applied
to many different kinds of document. In Chapter 8 we considered the
processes of identifying a sample and collecting data from a
representative one. These considerations are also worth bearing in
mind when undertaking a quantitative content analysis.
Sampling media
And then, at a more granular level: Will you analyse all news items
within your chosen papers, or only some? For example, would you
include feature articles and letters to the editor, or only news items?
Sampling dates
Significant actors
The significant actors—in other words, the main figures—in any news
item and their characteristics are often important items to code,
particularly in the context of mass-media news reporting. You might
record the following kinds of things.
• What was the context for the item? (for example, an interview,
the release of a report, or an event such as an outbreak of
hostilities or a minister’s visit to a hospital)
The main reason researchers record such details is so that they can
map the main figures involved in news reporting in an area, which
could help reveal some of the mechanics involved in producing
information for the public.
Words
Despite their limitations, CACA tools are increasingly used and widely
available in different forms. You may have already used this software
without realizing, by searching newspapers’ websites or archives—we
discuss this in Tips and skills 13.1. Researchers often use specialist
CACA software for larger scale projects. Research in focus 13.3
provides an example of a study conducted using software called
Wordsmith, and the Bligh et al. (2004) study we considered in the
section ‘Sampling dates’ used a program called DICTION to examine
whether there was a shift in the charismatic tone of President George
W. Bush’s speeches before and after the terrorist attacks of 9/11. This
software can make use of pre-existing dictionaries that have already
been set up, and it also allows users to create dictionaries for their
own needs. You can find more information about DICTION at
www.dictionsoftware.com/diction-overview/ (accessed 5
June 2019).
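The dictionary-based approach described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not how DICTION or Wordsmith actually work internally; the 'charisma' dictionary and the sentence scored here are invented.

```python
# A minimal sketch of dictionary-based computer-assisted content analysis
# (CACA), in the spirit of programs such as DICTION. The dictionary and the
# text below are invented for illustration.
from collections import Counter
import re

# A user-defined 'dictionary': words taken to signal a charismatic tone.
CHARISMA_DICTIONARY = {"courage", "destiny", "freedom", "together", "triumph"}

def dictionary_score(text, dictionary):
    """Count occurrences of dictionary words in `text`, normalized per
    100 words so that texts of different lengths can be compared."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in dictionary)
    total_hits = sum(counts.values())
    return 100 * total_hits / len(words), counts

score, hits = dictionary_score(
    "Together we face our destiny; together we choose freedom.",
    CHARISMA_DICTIONARY,
)
```

Normalizing per 100 words is what allows a study like Bligh et al.'s to compare the tone of speeches of very different lengths before and after an event.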
TIPS AND SKILLS 13.1
Counting words in digital news reports
Dispositions
Researchers are likely to use another level of interpretation when
they want to demonstrate a disposition (a particular value position or
stance, whether positive or negative) in the texts being analysed. For
example, a researcher might focus on establishing whether
journalists reporting on an issue in the news media are positive
about or hostile towards an aspect of it, such as their stances on the
government’s handling of a food scare crisis. In the case of the
Buckton et al. (2018) study on the reporting of sugar taxation, each
item was coded not only in terms of subjects and themes, but also
whether it was positive or negative about sugar taxation. Similarly,
when examining climate change reporting in US and Spanish
newspapers (see Section 13.3 under ‘Words’), Bailey et al. (2014)
categorized the tone of epistemic markers as negative or as neutral.
Where there is no clear indication of this kind of value position, a
content analyst may need to infer whether the item being coded takes
a judgemental stance and, if so, the nature of that judgement.
Sink and Mastro (2017) also coded in terms of beliefs in their study
of depictions of gender stereotyping in primetime television. They
sampled from primetime programmes (shown between 8 and 11 p.m.
PST) airing during a 10-week period across nine major US TV
networks. This resulted in a final sample of 89 programmes that
depicted 1,254 unique characters. Those undertaking the coding were
instructed that coding decisions should be based on how the average
American television viewer would perceive the characters’ behaviour
in terms of the conceptual definitions that the researchers provided
in the coding manual (sometimes referred to as a codebook). They
found that although some gender stereotypes seem to be declining,
others—including dominant men and sexually provocative women—
have persisted.
2. gender of perpetrator;
3. social class of perpetrator;
4. age of perpetrator;
5. gender of victim;
Coding schedule
A coding schedule is a form where all the data relating to an item
being coded will be entered. Figure 13.1 shows what this might look
like—in very simplified terms—for our example of analysing the
reporting of crime. Each column in Figure 13.1 is a dimension that
we are coding, with the column headings indicating the dimension
names. The blank cells on the coding form are the places in which
you would write codes. You would use one coding form for each
media item that you coded. The codes could then be transferred to a
digital data file and analysed with a software package such as SPSS, R,
or Stata (our resources for quantitative data analysis software
contain written and video tutorials and a quick reference guide to
support you in using each of these programs).
FIGURE 13.1 Coding schedule
Coding manual
The coding schedule in Figure 13.1 is very bare and may not seem to
provide much information about what should be entered or where.
This is where the coding manual comes in, as it provides a statement
of instructions to coders including all the possible categories for each
dimension being coded. It will include the following elements:
• a list of all the dimensions;
• the different categories for each dimension; and
• the numbers (that is, codes) that correspond to each category.
You can see that the coding manual includes the occupation of both
the perpetrator and the victim. This list of categories uses
Goldthorpe et al.’s UK social class categorization (1968) and is based
on the important summary by Marshall et al. (1988: 22), but
‘unemployed’, ‘retired’, and ‘housewife’ have been added to reflect
terms that might be used in newspapers, as well as the category of
‘other’. These considerations were informed by the Office for
National Statistics (2019a), which is a UK government department
responsible for statistics related to the UK’s economy, population,
and society. The offences are categorized in terms of those used by
the police, in accordance with the rules of the UK government’s
Home Office (a ministerial department responsible for immigration,
security, and law and order), to record crimes that have been brought
to their attention. We could use much finer distinctions, but if you
were conducting this analysis for a small-scale student project and
were not planning to examine a large sample of news items, it might
be best to use these kinds of broad categories. (They also have the
advantage of being comparable to the Home Office data, which could
be illuminating.) The coding schedule and manual in Figures 13.1
and 13.2 would allow you to record two offences, as a single incident
might involve more than one offence. If there were more than two,
you would treat the main offence mentioned in the article as the first
one and then decide which of the additional offences was the most
significant.
Let’s apply this coding scheme and manual to two news items that
you might be analysing: see Figures 13.3 and 13.4. Figure 13.3 is from
a UK broadsheet newspaper, the Sunday Times, whereas Figure 13.4
is from a UK tabloid newspaper, the Daily Star. (Remember that
when we discussed sampling—see Section 13.3 under ‘Selecting a
sample’—we discussed the importance of being clear on not only
what media form(s) you will be including in your analysis,
newspapers in this example, but also what types of that media form
you will look at, tabloid and broadsheet in this example.)
FIGURE 13.3 Reporting a crime in national newspapers I
Source: Sunday Times. ‘Author’s Partner on Murder Charge’, 17 July 2016, p. 11. Reprinted
with kind permission of the publisher: The Sun / News Licensing.
FIGURE 13.4 Reporting a crime in national newspapers II
Source: Daily Star. ‘Footie Ace Nutted in Store by Love Rival’, 8 November 2018, p. 10.
Reprinted with kind permission of the publisher: Daily Star, 2018/Reach Licensing.
You would fill in the coding of the incidents on coding schedules like
the one shown in Figure 13.1, and the data from each form would
then be entered as a row of data in a computer program such as IBM
SPSS.
The coding of the incident in Figure 13.3 would appear as in Figure
13.5; the data entered on the computer would appear as follows:
123 17 07 16 1 1 17 55 2 6 51 3 16 2
(Note that the news item in Figure 13.3 contains a second offence,
which has been coded as 16 under ‘Nature of offence II’.)
Figure 13.6 contains the form that would be completed for the item
in Figure 13.4. The following row of data would be created:
301 08 11 18 1 1 1 −1 1 17 52 2 0 2
You would complete a form like this for each news item within the
chosen period or periods of study.
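A row of codes like those above has to be split up and labelled before analysis in SPSS, R, or Stata. The sketch below is hypothetical: the first four dimension names are inferred from the rows themselves (the dates match the two news items' publication dates), while the remaining dimensions would come from the coding manual in Figure 13.2, so they are left unnamed here.

```python
# A sketch of turning one coding-schedule row into a labelled record.
# Only the first four dimension names are assumed; the rest belong to the
# coding manual and are deliberately not invented here.
DIMENSIONS = [
    "item_id", "day", "month", "year",
    # ... remaining dimensions as defined in the coding manual ...
]

def parse_row(raw):
    """Split a space-separated row of codes into ints (−1 = not applicable)."""
    return [int(code.replace("−", "-")) for code in raw.split()]

row = parse_row("123 17 07 16 1 1 17 55 2 6 51 3 16 2")
record = dict(zip(DIMENSIONS, row))  # labels only the dimensions named above
```

Note the handling of −1, which the second example row uses to mark a dimension that does not apply to that item.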
You also need to consider the reliability of the coding when doing
content analysis. We noted in Section 13.2 that content analysis involves
devising clear rules and then applying them in a systematic way. This
means that coding must be done consistently between coders (inter-
rater reliability), and each coder must be consistent over time (intra-
rater reliability). An important reason to pilot your coding scheme is
to test for consistency between coders and, if time permits,
consistency of coding over time. The process of assessing reliability is
very similar to the process we briefly covered in the context of
structured observation—see Section 12.6.
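Two common checks in a coding pilot can be sketched as follows. Percent agreement is the simplest measure of inter-rater reliability; Cohen's kappa additionally corrects for the agreement two coders would reach by chance. The ten paired codes below are invented pilot data, not from any study in the text.

```python
# A minimal sketch of checking inter-rater reliability in a coding pilot.
# The codes are invented: two coders have coded the same ten items.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of items on which the two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for what would be expected by chance."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # chance agreement: probability both coders pick the same category at random
    p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]   # coder A's codes for ten items
b = [1, 2, 2, 3, 1, 2, 3, 1, 1, 3]   # coder B's codes for the same items
```

The same functions could be used for intra-rater reliability by comparing one coder's codes from two separate coding sessions.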
Websites
Blogs
Online forums
Social media
In light of these concerns, there are several things that you should
keep in mind when conducting a content analysis of social media
data.
It is worth noting that this last claim should be treated with a little
caution. When we conduct content analysis with things like
newspaper articles or television programmes, there is no reactive
effect because newspaper articles are obviously not written in the
knowledge that a content analysis may be carried out on them.
However, if we are carrying out a content analysis on documents
such as interview transcripts or ethnographies, then even though
the process of performing the content analysis does not itself
introduce a reactive effect, the documents may have been at least
partly influenced by such an effect.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
High-quality data
Many of the data sets that tend to be used for secondary analysis are
of very high quality. This is due to several factors.
A further benefit of using secondary data sets is that they can enable
cross-cultural research. This type of research often appeals to social
scientists because we are becoming increasingly aware of the
processes associated with globalization and cultural differences. We
are conscious that findings produced by research conducted in a
particular country are not necessarily applicable to other countries.
However, doing research in a different country presents considerable
financial and practical issues, including the need to navigate
potentially significant language and cultural differences.
In addition, statistics are available from the Global South that can be
used in cross-cultural analysis, such as the Demographic and Health
Surveys Program, run by the US Agency for International
Development, which provides representative data on population,
health, HIV, and nutrition through more than 400 surveys in over 90
countries. The Multiple Indicator Cluster Surveys are household
surveys implemented by countries under the programme developed
by the United Nations Children’s Fund to provide internationally
comparable data on the situation of children and women. This
includes countries as diverse as Argentina, Bhutan, the Democratic
Republic of the Congo, and Iraq.
• conduct further analysis of the data that could not have been
considered by the original researchers, on the basis of new
theoretical ideas;
When collecting your own data you are likely to be very familiar with
its content, but with data collected by others, you have to allow for a
period of familiarization. This involves getting to know the range of
variables, the ways in which the variables have been coded, and
various aspects of the organization of the data. While you have no
control over the collection of the data, you still need to understand
how it was collected and what it means. This process can be quite
time-consuming and tricky with large, complex data sets, so you
should not underestimate this.
Some of the best-known data sets that you can use for secondary
analysis, such as USoc, are very big, with large numbers of both
respondents and variables, which potentially presents problems with
managing the information. Also, some of the most popular data sets
for secondary analysis are known as hierarchical data sets (these
include USoc), which means that the data are collected and
presented at the level of both the household and the individual, as
well as at other levels. Researchers therefore have to decide which
level of analysis to employ. If you decide that individual-level data is
most appropriate for your study, you will need to extract the
individual-level data from the data set. It is worth noting that
different data will apply to each level: at the household level, the
USoc survey provides data on such variables as number of cars and
number of children living at home, while at the individual level, it
provides data on income and employment.
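The household/individual structure described above can be illustrated with a small sketch. The variable names and values are invented, but they mirror the USoc examples in the text (cars and children at the household level; income and employment at the individual level), and the join on a household identifier is the general technique, not USoc's actual file format.

```python
# A sketch of a hierarchical data set: household-level and individual-level
# records are held separately and joined on a household identifier to
# produce an individual-level file for analysis. All values are invented.
households = {
    "H1": {"n_cars": 1, "n_children": 2},
    "H2": {"n_cars": 0, "n_children": 0},
}
individuals = [
    {"person": "P1", "household": "H1", "income": 24_000, "employed": True},
    {"person": "P2", "household": "H1", "income": 31_000, "employed": True},
    {"person": "P3", "household": "H2", "income": 0, "employed": False},
]

def individual_level(individuals, households):
    """Extract an individual-level file, attaching each person's
    household variables to their own record."""
    return [{**person, **households[person["household"]]} for person in individuals]

flat = individual_level(individuals, households)
```

Choosing the individual as the level of analysis, as here, means household variables are repeated on every member's record; analysing at the household level would instead mean aggregating or selecting among the individuals in each household.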
Let’s focus on the UK Data Archive in a little more detail to give you
an idea of how it works. Everything in this archive is accessible to all
academic researchers, including students, unless any specific data
sets have had restrictions placed on them. The archives hold data on
the UK census, from the Office for National Statistics (ONS), and
they can be used to access data from other archives within Europe
and beyond. The best way to find out about the data held in the
archive is to examine its online catalogue, accessible via the UK
Data Service catalogue search page (accessed 11 December 2020).
Table 14.1 lists several large data sets that are frequently used in
social research, together with notes on the type of data they include
and the topic(s) they cover. Further information about these data
sets can be found via the UK Data Service.
TABLE 14.1 Large data sets that are suitable for secondary analysis

British Social Attitudes (BSA) Survey
Data set details: A more or less annual survey conducted since 1983 using a multi-stage stratified random sample of over 3,000 respondents aged 18 and over. Each survey comprises an hour-long interview and a self-completion questionnaire. See www.natcen.ac.uk/our-research/research/british-social-attitudes/ (accessed 5 February 2020).
Topics covered: Wide range of areas of social attitudes and behaviour. Some areas are core ones asked about annually; others are asked about irregularly.

European Social Survey (ESS)
Data set details: A survey conducted every other year since 2001 across Europe, involving over 30 countries. Samples are selected randomly and face-to-face interviews are administered. See www.europeansocialsurvey.org/about/index.html (accessed 5 February 2020).
Topics covered: Attitudes, beliefs, and behaviour.

Labour Force Survey (LFS)
Data set details: Biennial interviews, 1973–83, and annual interviews, 1984–91, comprising a quarterly survey of around 15,000 addresses per quarter and an additional survey in March–May. Since 1991 it has been a quarterly survey of around 40,000 addresses. Since 1998, core questions have also been administered in member states of the European Union. See www.hse.gov.uk/statistics/lfs/about.htm (accessed 5 February 2020).
Topics covered: Hours worked; job search methods; training; personal details such as nationality and gender.

Millennium Cohort Study
Data set details: A study of 19,000 babies, and their families, born between 1 September 2000 and 31 August 2001 in England and Wales, and between 22 November 2000 and 11 January 2002 in Scotland and Northern Ireland. Data were collected by interview with parents when babies were 9 months and around 3 years old. Since then, surveys have been conducted at ages 5, 7, 11, 14, and 17 years old. See www.cls.ioe.ac.uk (accessed 5 February 2020).
Topics covered: Continuity and change in each child’s family and its parenting environment; important aspects of the child’s development.

National Child Development Study
Data set details: Irregular but ongoing study of all 17,000 children born in Great Britain in the week of 3–9 March 1958. Since 1981 it has comprised both an interview and a questionnaire. There have been 10 waves of data collection: in 1965 (when members were aged 7 years), in 1969 (age 11), in 1974 (age 16), in 1981 (age 23), in 1991 (age 33), in 1999–2000 (age 41–2), in 2004 (age 46), in 2008–9 (age 50–1), in 2013 (age 55), and in 2020 (age 62). See www.cls.ioe.ac.uk (accessed 5 February 2020).
Topics covered: Physical and mental health; family; parenting; occupation and income; housing and environment.

Opinions and Lifestyle Survey (formerly the ONS Omnibus Survey and ONS Opinions Survey)
Data set details: This survey, run by the UK’s ONS, involves interview data being collected 8 times a year, following a period when data were collected monthly, and before that 8 times per year. The Opinions and Lifestyle Survey merges the ONS Opinions Survey and the LFS. See https://beta.ukdataservice.ac.uk/datacatalogue/series/series?id=2000043 (accessed 5 February 2020).
Topics covered: Core questions each year about respondents, plus modules (asked on behalf of participating organizations) on topics that change annually concerning e.g. food safety; eating behaviour; personal finance; sports participation; management organization; employee representation.
• Reduced time and cost. The data have already been collected,
which is likely to save a researcher considerable time and
expense.
• Lower risk of reactivity. The people who are the source of the
data are not being asked questions that are part of a research
project, so there is very little risk of data being affected by
reactivity (see Key concept 12.3).
This final point, that official statistics generate data ‘without alerting
the people under study’ (Chambliss and Schutt 2019: 285), is one of
the most compelling and commonly cited reasons for the continued
use of official statistics. They can be considered a form of
unobtrusive measure, although many writers prefer to use the term
unobtrusive method (Lee 2000). We discuss this term in Key
concept 14.2.
KEY CONCEPT 14.2
What are unobtrusive methods?
These limitations should not lead you to rule out using official figures
in your research project, given the many potential advantages
associated with doing so, but you need to be aware of these potential
issues with the reliability and validity of the data. While we have
shown that these concerns are especially prominent in relation to
official crime statistics, they are not limited to this area, as we saw in
relation to official statistics regarding levels of employment. At the
same time, it is important to recognize that issues of reliability and
validity are not only relevant when using official statistics; they
should be considered in any research project. When you come to
write up your research (a stage we cover in Chapter 25), you should
note any potential limitations of your data in your conclusion, and
you should take them into account when making claims on the basis
of your findings.
We note in Key concept 14.4 that social media is the main Big Data
focus for social scientists, and Luke, one of our authors, has argued
that the ‘big’ nature of the data that platforms like Twitter generate
presents certain challenges for researchers (see Kitchin and McArdle
2016). The factors contributing to these challenges include the huge
size of the data sets; the speed at which data are produced; and the
variety of forms they can take (text, images, audio, videos, and
hyperlinks) (Sloan 2017). Our student panellists discuss some of
these challenges in Learn from experience 14.3.
LEARN FROM EXPERIENCE 14.3
Using Big Data in the form of Twitter
Tinati et al. (2014) suggest that there has been a tendency for
researchers using Big Data to reduce the data available in order to
deal with its volume, resulting in smaller-scale analyses that do not
take advantage of the full potential of the data. This reduction is
usually achieved either by concentrating on a reduced sample of
users, as in a study by Lieberman et al. (2013) of police departments’
use of Facebook that focused on only the largest US police
departments; or by taking a sample of tweets or posts, usually for
content analysis, as in the study by Humphreys et al. (2014).
Scourfield et al. (2019) emphasized the need for a clear time period
in their study of media reporting of suicides, which compared the
number and characteristics of reports on suicides and on road traffic
accidents in young people during a six-month period (see Research
in focus 14.4). Tinati et al. (2014: 665) argue that, to a certain extent,
much Big Data research is ‘methodologically limited because social
scientists have approached Big Data with methods that cannot
explore many of the particular qualities that make it so appealing to
use: that is, the scale, proportionality, dynamism and relationality’.
RESEARCH IN FOCUS 14.4
Comparative approaches using Big Data
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
In this chapter, we will take you through some very basic techniques
for analysing quantitative data. We will illustrate these methods
using a small imaginary set of data based on attendance at a gym.
This is approximately the amount of data that most students will
analyse for their undergraduate dissertation or research project, but
if you had access to suitable secondary data (see Chapter 14) you
could analyse a larger sample. We recommend that, in addition to
reading this chapter, you also visit our statistical software
resources, which contain video tutorials and guidance documents to
help you implement these techniques using SPSS, R, or Stata, three
of the most popular statistical software packages. We also provide
guidance on conducting quantitative data analysis using
Microsoft Excel and example quantitative data sets in SPSS
format and Excel format that you can use to practise. While SPSS
is commonly used and taught on many undergraduate courses, R is
gaining prominence, so it is worth thinking carefully about which
software package would be most suitable for your study—we discuss
this in Thinking deeply 15.1.
THINKING DEEPLY 15.1
Choosing a software program
• STATA (www.stata.com)
• R (www.r-project.org/)
• SAS (www.sas.com/en_gb/home.html)
(accessed 30 May 2019)
SPSS Tutorial 1
The first point is the importance of thinking about how you will
analyse your data early on in the research process. One of the biggest
mistakes people tend to make here is that they do not give any
thought to how they will analyse their data until they have collected
it, because they do not think the data analysis method will affect the
data-collection process. This assumption is to an extent
understandable, because quantitative data analysis can look like a
distinct phase that occurs quite late in the process, after the data has
been collected (see, for example, Figure 7.1, in which the analysis of
quantitative data is shown as a late step—number 10—in
quantitative research). However, it is important that you decide
early in the process what techniques you will be applying—for
example, when you are designing your questionnaire,
observation schedule, or coding frame. There are two main
reasons for this.
So, you need to be aware that decisions that you make at quite an
early stage in the research process, such as the kinds of data you plan
to collect and the size of your sample, will have implications for the
sorts of analysis that you will be able to conduct. The kinds of
variables you employ and the types of analysis you conduct will also
depend on the research questions you are trying to address in
your project. So it is important to reflect on the research questions
your project is addressing and to ensure that your analysis will be
able to address these. The link between the research questions, the
data collection, and the subsequent analysis is crucial to successfully
undertaking social research. It is also important to check the level of
analysis that your institution will expect of you in any research
project you conduct.
In some cases, you may want to describe the data using univariate
analysis (exploring one variable), or bivariate analysis (looking at
the interactions between two variables), or even multivariate
analysis (analysing three or more variables simultaneously). You
may want to use statistical tests such as those associated with
associations and linear relationships. We will touch on these
concepts later in the chapter. In Learn from experience 15.1 Jodie
explains how she approaches these types of analysis.
LEARN FROM EXPERIENCE 15.1
Conducting univariate, bivariate, or multivariate
analysis
The student secures the agreement of a gym close to her home and
contacts a sample of its members by post (those who had agreed to
be contacted). The gym has 1,200 members and she decides to take a
simple random sample of 10 per cent of the membership (120
members). She sends out postal self-completion questionnaires
to members of the sample, with a covering letter explaining that the
gym supports her research and that it has been ethically approved
through her university ethics procedure (see Section 6.2 under
‘Guidance from your institution’). She would have preferred to
contact the members of her sample online so that they could
complete the questionnaire this way, but the gym was not willing to
pass on members’ email addresses. (It could easily have been the
other way round and they might not have wanted to provide postal
addresses, or they might have wanted to send out the questionnaire
on the student’s behalf.) However, she does offer participants the
option of completing the questionnaire online, so that this is in effect
a mixed-mode survey (postal and online), although most of those
replying opt to do so via post.
One thing she wants to know is how much time people are spending
on each of three main types of activity in the gym: cardiovascular
equipment, weights equipment, and exercises. She defines each of
these carefully in the covering letter, asking members of the sample
to keep a note of how long they are spending on each of the three
activities on their next visit. They are then requested to return the
questionnaires to her in a prepaid reply envelope or online. She ends
up with a sample of 90 questionnaires—a response rate of 75 per
cent. This is an impressive response rate, and above what you might
usually expect to receive (see Section 8.45).
You can see the data for all 90 respondents in Figure 15.2, which
presents the gym survey data. Each row represents a different
respondent, and for the moment, each of the 12 questions
(remember that question 8 has two parts to it) is labelled as a
variable number (var00001, etc.), which is a default number
assigned by SPSS. Each variable number corresponds to a question
in the gym survey data (that is, var00001 is question 1, var00002 is
question 2, etc.).
FIGURE 15.2 Data from an imaginary student survey on gym-going
If you are entering your own data you need to assign a particular
code for missing data (which can be a different number for different
questions) to make sure that the software registers the absence of a
response so that you can take this into account during the analysis.
(Where data has already been collected, as in a secondary data set,
the missing values, like other variables, will already have been
assigned a number—a code.) There is some missing data among the
responses shown in Figure 15.2.
TABLE 15.1 Types of variable
var00009 exercise
var00011 weimins
var00012 othmins
Frequency tables
A frequency table shows how frequently particular values occur in a
data set. For example, a frequency table can set out the number of
people, and the percentage of people, belonging to each of the
categories of a variable. As you can see from Table 15.2, a frequency
table for var00003 presents the values associated with our variable
simply and clearly. No one selected the options ‘meet others’
and ‘other’, so we have not included these in the table. The values for
the remaining options are shown under an n column, indicating the
number of people who selected the option, and a % column,
indicating the percentage of the sample that this figure represents.
This format allows us to easily ‘read’ the data. We can see, for
example, that 33 members of the sample are going to the gym to lose
weight and that they represent 37 per cent (percentages are often
rounded up and down in frequency tables) of the entire sample. If
the sample is small enough, you could create a frequency table
manually. However, you can access the statistical software
tutorials on our online resources to learn how to use them to
generate a frequency table.
Reason            n    %
Relaxation        9   10
Fitness          31   34
Lose weight      33   37
Build strength   17   19
TOTAL            90  100
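The logic behind a frequency table can be sketched in a few lines of code. The following minimal Python illustration reproduces the counts above from hypothetical raw responses (the function name and whole-number rounding are our own choices, not the output of any particular package):

```python
from collections import Counter

def frequency_table(values):
    """Return (category, n, %) rows for a nominal variable,
    with percentages rounded to whole numbers."""
    counts = Counter(values)
    total = len(values)
    return [(cat, n, round(100 * n / total)) for cat, n in counts.most_common()]

# Hypothetical responses echoing the gym survey's 'reason' variable
reasons = (["Relaxation"] * 9 + ["Fitness"] * 31
           + ["Lose weight"] * 33 + ["Build strength"] * 17)
for cat, n, pct in frequency_table(reasons):
    print(f"{cat:<15} {n:>3} {pct:>3}%")
```

Note that the rounded percentages need not sum to exactly 100, which is why frequency tables often carry a note about rounding.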
Age              n    %
20 and under     3    3
21–30           39   44
31–40           23   26
41–50           21   24
51 and over      3    3
TOTAL           89  100
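Banded age categories like those above are produced by recoding a raw interval/ratio variable into ordinal groups. A minimal sketch in Python (the band labels follow the table above; the function name is our own):

```python
def recode_age(age):
    """Recode a raw age (interval/ratio) into ordinal bands."""
    if age <= 20:
        return "20 and under"
    elif age <= 30:
        return "21-30"
    elif age <= 40:
        return "31-40"
    elif age <= 50:
        return "41-50"
    return "51 and over"

# Hypothetical raw ages recoded into the bands used in the table
ages = [19, 25, 34, 47, 62]
print([recode_age(a) for a in ages])
```

The key design decision is that the bands are mutually exclusive and exhaustive, so every respondent falls into exactly one category.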
LEARN FROM EXPERIENCE 15.2
Recoding variables
Bar charts
The bar chart in Figure 15.4 uses the data shown in Table 15.2. Each
bar represents the number of people within each category, and the
relative height of the bars helps us see how the counts compare with
each other. We noted above that bar charts are used for nominal and
ordinal variables because they involve non-continuous, mutually
exclusive data, and this characteristic is represented by the gaps
between the bars of a bar chart.
FIGURE 15.4 Bar chart showing the main reasons for visiting the gym (SPSS output)
Pie charts
Another way of displaying nominal and ordinal data is through a pie
chart, like the one in Figure 15.5 (which again uses the data from
Table 15.2). Like the height of bars in a bar chart, this visual
representation of the data shows the relative size of the different
categories, bringing out the size of each ‘slice’ relative to the total
sample. This diagram also provides the percentage that each slice
represents of the whole sample. Pie charts are useful in helping an
audience get a sense of the distribution of your data quickly, but
they also have some limitations, which lead some researchers to
prefer not to use them. It is harder to compare the size of slices in a
pie chart than to
compare the heights of bars, partly because of the way that the
components are arranged and partly because angles are harder to
compare than lengths. That is why we recommend providing the
numbers too. You should do this either by clearly labelling each
‘slice’ of the pie or by providing a separate key to show what the
different colours/shadings in each ‘slice’ represent. This is
particularly useful if you are dealing with lots of slices, or have some
small frequencies (some slices might be too narrow to contain a
number). If you are drawing more than one pie chart on the same
subject (such as types of crimes in different regions), always use the
same categories, presented in the same order, to help to avoid
confusion. It is also best to use pie charts where there are not too
many categories, as too many slices can make the chart difficult to
read.
FIGURE 15.5 Pie chart showing the main reasons for visiting the gym (SPSS output)
Histograms
Infographics
The following tips build on our authors Liam and Tom’s work in
Clark et al. (2019) and are useful when presenting your data.
• Use the correct way of presenting your data. Consider
whether the data are categorical or continuous and then
select a suitable way to present it, taking into account
your aims and objectives.
• Label the diagram clearly. Give each diagram a clear title
explaining what or who the data refer to. In any graph,
make sure both the X (horizontal) and Y (vertical) axes
have labels and also provide the units of measurement.
• The diagram should be clear. This means thinking
carefully about which bars and lines are required. Always
use the same colours for the same categories when you
have more than one pie chart or bar chart, and try to limit
the number of different shadings in a single graphic.
Include a key (legend) where necessary.
• Include the source. You will usually know where the data
came from (such as a secondary data set or your own
survey). If you did not collect the data yourself, make
sure you put a note below the diagram to this effect (for
example, ‘Source: British Social Attitudes Survey 2019’)
or include the source in the title of the diagram.
• Be consistent. This includes using the same number of
decimal places throughout for the same type of data. One
or two decimal places should be sufficient, but some
disciplines may prefer no decimal places to be used at
all. You should check any guidance available to you in
relation to this.
• Totals and subtotals. Include relevant totals and subtotals
in the diagram, and check that they add up correctly.
THINKING DEEPLY 15.3
Principles of graphical integrity
Median
Measures of dispersion
The amount of variation in a sample can be just as interesting as its
typical value. It allows us to draw contrasts between comparable
distributions of values. For example, is there more or less variability
in the amount of time people are spending on cardiovascular
equipment compared to weights machines? There are two main ways
of measuring dispersion.
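The logic of the two most common measures can be sketched briefly. A minimal Python illustration using hypothetical minutes data (the figures are invented for demonstration; the standard deviation here is the sample standard deviation):

```python
from statistics import stdev

def dispersion(values):
    """Return the range (max - min) and the sample standard deviation."""
    return max(values) - min(values), stdev(values)

# Hypothetical minutes spent on cardiovascular equipment
cardio = [20, 25, 30, 33, 27, 45, 10, 35, 28, 22]
rng, sd = dispersion(cardio)
print(rng, round(sd, 2))
```

Comparing the two figures for, say, cardiovascular versus weights data would show whether one activity attracts more variable amounts of time than the other.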
Range
FIGURE 15.8 A boxplot for the number of minutes spent on the last visit to the gym
Contingency tables
Contingency tables are widely used in social research and are
sometimes referred to as cross-tabulations or cross-tabs, especially
when using a software package. A contingency table is like a
frequency table, but whereas a frequency table shows the
distribution of a single variable, a contingency table allows us to
analyse two variables at the same time, so that we
can see relationships between them. (The name comes from the fact
that this method reveals how one variable is contingent on another.)
Contingency tables are probably the most flexible of all of the
methods of analysing relationships in that they can be used in
relation to any pair of variables, though they are not the most
efficient method for some pairs, which is why the method is not
recommended in every cell in Table 15.4. Table 15.5 shows a
contingency table for the relationship between gender and reasons
for visiting the gym; the approach works well for
dichotomous/nominal variables (though, as we discussed in
Thinking deeply 15.2, gender is increasingly seen as non-
dichotomous). If we wanted to present an interval/ratio variable in a
contingency table format, we would need to group or recode the
categories (as in our earlier example of the ages of people attending
the gym).
Reasons              Gender
                 Male          Female
                 No.    %     No.    %
Relaxation         3    7       6   13
Fitness           15   36      16   33
Lose weight        8   19      25   52
Build strength    16   38       1    2
TOTAL             42           48
The fact that gender is the column variable and the reason is the row
variable in Table 15.5 tells us the student has presumed that gender
influences reasons for going to the gym (which is logical, given that
going to the gym seems unlikely to influence gender). We can see
that her assumption was correct: the table reveals clear gender
differences in reasons for visiting the gym, with women much more
likely than men to be going to the gym to lose weight. Women are
also somewhat more likely to be going to the gym for relaxation. In
contrast, men are much more likely to be visiting the gym to build
strength. There is little difference between men and women in terms
of fitness as a reason.
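The mechanics of a contingency table with column percentages can be sketched as follows. This is a minimal Python illustration with hypothetical paired observations (real analyses would use the cross-tabs facilities of SPSS, R, or Stata, as described in our tutorials):

```python
from collections import Counter

def crosstab(rows, cols):
    """Cross-tabulate two nominal variables, reporting for each cell
    the count and the percentage of its column total."""
    counts = Counter(zip(rows, cols))
    col_totals = Counter(cols)
    table = {}
    for (r, c), n in counts.items():
        table[(r, c)] = (n, round(100 * n / col_totals[c]))
    return table

# Hypothetical paired observations: reason for attending by gender
reasons = (["Lose weight"] * 8 + ["Build strength"] * 16
           + ["Lose weight"] * 25 + ["Build strength"] * 1)
genders = ["Male"] * 24 + ["Female"] * 26
print(crosstab(reasons, genders))
```

Using column percentages (rather than raw counts) is what makes the comparison across the presumed independent variable meaningful when the groups differ in size.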
Line graphs
A line graph is another way of presenting bivariate data. It is
similar to a scatter diagram (which we explore shortly) but,
importantly, the consecutive points are linked by a line that ‘joins the
dots’. A line graph is a useful way of visually representing bivariate
data that is associated with interval/ratio variables. When
constructing a line graph, you should usually place the independent
variable on the horizontal axis and the dependent variable on the
vertical axis. It is best to start at zero on the Y axis or, if this is not
appropriate, at least make it clear to the reader that it does not start
at zero by including a break at the bottom of the axis (see De Vries
2018 for further discussion).
Pearson’s r
Pearson’s r is a method for examining relationships between
interval/ratio variables that focuses on the coefficient, a figure
indicating the degree of correlation between variables (we do not
discuss how to produce coefficients here, but these can be produced
easily using software such as SPSS). This method works on the
assumption that the relationship between the two variables is
broadly linear, so before using it researchers need to plot out the
values of their variables on a scatter diagram to check that they
form something like a straight line (even if they are scattered, as in
Figure 15.9). If they curve, the researcher will need to analyse the
relationships using another method.
FIGURE 15.9 Scatter diagram showing a perfect positive relationship
Variables
 1     2     3     4     5
 1    10    50     7     9
 2    12    45    13    23
 3    14    40    18     7
 4    16    35    14    15
 5    18    30    16     6
 6    20    25    23    22
 7    22    20    19    12
 8    24    15    24     8
 9    26    10    22    18
10    28     5    24    10
So what does Pearson’s r tell us about the student’s findings from the
gym survey? The correlation between age (var00002) and the
amount of time spent on weights equipment (var00011) is −0.27,
implying a weak negative relationship. This suggests that the older a
person is, the less likely they are to be spending much time on such
equipment, but also that other variables clearly influence the amount
of time people are spending on this activity.
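Although we recommend generating coefficients with statistical software, the calculation itself is straightforward. A minimal Python sketch, applied to variables 1 and 2 of the illustrative data above (which form a perfect positive relationship):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for two interval/ratio variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Variables 1 and 2 from the illustrative table: a perfect positive relationship
var1 = list(range(1, 11))       # 1, 2, ..., 10
var2 = list(range(10, 29, 2))   # 10, 12, ..., 28
print(round(pearson_r(var1, var2), 2))   # prints 1.0
```

Running the same function on variables 1 and 3 would return −1.0, the perfect negative case.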
We describe the procedures for using SPSS, and the other main
software packages, to generate Pearson’s r and scatter diagrams in
our statistical software tutorials.
Spearman’s rho
Spearman’s rho, which is sometimes represented with the Greek
letter ρ, is designed for use with pairs of ordinal variables, but as
Table 15.4 suggests, it is also used when one variable is ordinal and
the other is interval/ratio. It is exactly the same as Pearson’s r in
terms of the outcome of calculating it, in that the value of rho will be
either positive or negative, varying between 0 and + or −1. If we look
at the gym study, there are three ordinal variables: var00004,
var00005, and var00006 (see Table 15.1). If we calculate the
correlation between the first two variables using Spearman’s rho, we
find that the correlation between var00004 and var00005—
frequency of use of the cardiovascular and weights equipment—is
low, at 0.2. There is a slightly stronger relationship between
var00006 (frequency of going to the gym) and var00010 (amount of
time spent on the cardiovascular equipment), with rho at 0.4. In the
latter pair, the second variable is an interval/ratio variable. When we
want to calculate the correlation between an ordinal and an
interval/ratio variable we cannot use Pearson’s r, because for that
method both must be interval/ratio levels of measurement, but we
can use Spearman’s rho (see Table 15.4). We describe the procedures
for generating Spearman’s rho, using SPSS and the other main
software packages, in our statistical software tutorials.
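The logic of Spearman's rho (ranking each variable, then correlating the ranks) can be sketched as follows. The data here are hypothetical ordinal codes, and tied values are given the mean of the ranks they span:

```python
from math import sqrt

def ranks(values):
    """Rank values from 1, giving tied values the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1   # mean of ranks i+1 .. j+1
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson's r applied to the ranks of the data."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical ordinal responses (e.g. frequency-of-use categories coded 1-5)
x = [1, 2, 2, 3, 4, 5]
y = [2, 3, 3, 3, 4, 5]
print(round(spearman_rho(x, y), 2))
```

Because only the ranks enter the calculation, rho is unaffected by the distances between ordinal categories, which is why it suits ordinal data where Pearson's r does not.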
The phi coefficient is used for analysing the relationship between two
dichotomous variables and, like Pearson’s r, it results in a statistic
that varies between 0 and + or −1. If we use it to analyse the
correlation between var00001 (gender) and var00008 (whether
people have other sources of regular exercise), the result is 0.24,
implying that males are somewhat more likely than females to have
other sources of regular exercise, though the relationship is weak.
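The phi calculation for a 2x2 table can be sketched in a few lines. The counts below are hypothetical, not the gym survey's actual figures:

```python
from math import sqrt

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table of counts:
               var2=yes  var2=no
    var1=yes      a         b
    var1=no       c         d
    """
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Hypothetical 2x2 counts (e.g. gender by 'other source of regular exercise')
print(round(phi_coefficient(10, 5, 5, 10), 2))   # prints 0.33
```

As with Pearson's r, values near 0 indicate little association and values near +1 or −1 a strong one.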
Reasons        Relaxation   Fitness   Lose weight   Build strength   TOTAL
n                   9          31          33              17           90
This procedure is often used alongside a test of association between
variables called eta (η). This statistic expresses the level of
association between the two variables and, like Cramér’s V, it will
always be positive. The level of eta for the data in Table 15.7 is 0.48,
suggesting a moderate relationship between the two variables. Eta-
squared (η2) expresses the amount of variation in the interval/ratio
variable that is due to the nominal variable, and in the case of this
example, eta-squared is 22 per cent. Eta is a very flexible method for
exploring the relationship between two variables, first because it can
be employed when one variable is nominal and the other
interval/ratio, and second because it does not assume that the
relationship between variables is linear.
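The calculation behind eta-squared (the between-group sum of squares as a proportion of the total sum of squares) can be sketched as follows; the groups here are hypothetical:

```python
def eta_squared(groups):
    """Eta-squared: between-group sum of squares over total sum of
    squares, for an interval/ratio variable split by a nominal variable.
    'groups' is a list of lists, one per nominal category."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    ss_total = sum((v - grand_mean) ** 2 for v in all_values)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    return ss_between / ss_total

# Hypothetical minutes-by-reason groups
groups = [[1, 2, 3], [4, 5, 6]]
e2 = eta_squared(groups)
print(round(e2, 3), round(e2 ** 0.5, 3))   # eta-squared, then eta
```

Because the calculation only compares group means with the grand mean, it makes no assumption of linearity, which is what gives eta its flexibility.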
FIGURE 15.13 A spurious relationship
Table 15.8
                  30 and under   31–40   41 and over
Other source           64          43        58
No other source        36          57        42
n                      42          23        24

Table 15.9
                        Male                    Female
                  30 and  31–  41 and    30 and  31–  41 and
                  under   40   over      under   40   over
Other source        70    33    75         59    50    42
No other source     30    67    25         41    50    58
n                   20     9    12         22    14    12
Tables 15.8 and 15.9 illustrate how contingency tables can be used for
multivariate analysis.
Multiple regression
A regression analysis examines whether one variable is related to
another. For example, it could allow us to see whether ‘years spent in
education’ are related to ‘wealth’ in later life. The statistic that is
produced by the analysis essentially gives us a number between −1
and 1 that allows us to assess the direction and strength of that
relationship. A multiple regression analysis—a form of multivariate
analysis—is just an extension of this principle. In the social world, as
we have shown, it is rare that just one thing is related to another. For
example, there are many things in later life that may influence
‘wealth’, not just ‘years in education’. ‘General health’, ‘inheritance’,
and ‘region’ could all be factors that help to account for how wealthy
someone is. So, by including other variables in our analysis, it is
possible to account for more and more of the variation in our
dependent variable, wealth. This helps us to produce more
comprehensive descriptions of social patterns. Multiple regression
also allows us to assess the relative influence of a group of variables
(the independent variables) on our chosen target variable (the
dependent variable). In essence, it allows us to see which variables
have more influence than others and, crucially, which make the
largest contribution. As a result of all of this, the
technique is really useful for building ‘models’ of social behaviour.
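The estimation behind multiple regression (ordinary least squares) can be sketched as follows. This is a bare-bones illustration with invented data, solving the normal equations directly; it is not a substitute for the regression routines in SPSS, R, or Stata:

```python
def ols(xs, y):
    """Ordinary least squares with an intercept, solving the normal
    equations (X'X)b = X'y by Gauss-Jordan elimination. xs is a list
    of predictor columns; returns [intercept, b1, b2, ...]."""
    n = len(y)
    X = [[1.0] + [col[i] for col in xs] for i in range(n)]
    k = len(X[0])
    # Build the augmented matrix [X'X | X'y]
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Synthetic data constructed so that wealth = 2 + 3*education + 0.5*health
education = [1, 2, 3, 4, 5]
health = [5, 3, 8, 1, 7]
wealth = [2 + 3 * e + 0.5 * h for e, h in zip(education, health)]
print([round(b, 6) for b in ols([education, health], wealth)])
```

Because the synthetic outcome is an exact linear combination of the predictors, the fitted coefficients recover the constants used to build it; with real data, the coefficients instead summarize each variable's contribution net of the others.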
In the rest of this section, we will look at the tests that can be used to
determine the degree of confidence we can have in our findings when
we explore relationships between variables. All of the tests involve
the same three stages.
Two types of error can occur when you infer statistical significance.
These are known as Type I and Type II errors. A Type I error is when
you reject the null hypothesis when it should in fact be confirmed,
meaning that your results have arisen by chance and you are falsely
concluding that there is a relationship in the population when there
is not one. A Type II error is when you accept the null hypothesis but
you should reject it, meaning that you falsely conclude that there is
not a relationship in the population. If we use a p < 0.05 level of
significance we are more likely to make a Type I error than when
using a p < 0.01 level of significance, because with 0.01 there is less
chance of falsely rejecting the null hypothesis. However, in using a
lower level of significance you increase the chance of making a Type
II error because you are more likely to confirm the null hypothesis
when the significance level is 0.01 (the chance is 1 in 100 when the
significance level is 0.01 and 1 in 20 when it is 0.05). The risks of
these errors occurring are shown in Figure 15.14.
FIGURE 15.14 Type I and Type II errors
In the case of Table 15.5, gender has 2 columns and reasons for
visiting the gym has 4 rows, so the calculation will be (2 − 1) × (4 −
1), which equals 3 (1 multiplied by 3). So we know that the degrees of
freedom are 3. We can then use our chi-square figure (22.726) and
degrees of freedom (3) to determine whether our chi-square figure is
statistically significant at p < 0.0001, by using a predefined table
developed by Karl Pearson (this process is explained in Foster et al.
2014). So the chi-square value that we arrive at is affected by the size
of the table, and this is taken into account when deciding whether
the chi-square value is statistically significant or not.
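The chi-square computation for a contingency table can be sketched as follows; the observed counts here are those from Table 15.5 (rows: relaxation, fitness, lose weight, build strength; columns: male, female):

```python
def chi_square(observed):
    """Chi-square statistic and degrees of freedom for a contingency
    table given as a list of rows of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    chi2 = sum((observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
               / (row_totals[i] * col_totals[j] / grand)
               for i in range(len(observed)) for j in range(len(observed[0])))
    df = (len(observed) - 1) * (len(observed[0]) - 1)
    return chi2, df

# Observed counts from Table 15.5
table = [[3, 6], [15, 16], [8, 25], [16, 1]]
chi2, df = chi_square(table)
print(round(chi2, 2), df)   # roughly 22.73 with 3 degrees of freedom
```

Each cell contributes the squared gap between its observed and expected count, scaled by the expected count, which is why larger tables tend to produce larger chi-square values and need more degrees of freedom.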
Early 1900s to 1945 (the end of the Second World War). The
traditional period: in-depth studies of ‘slices of life’ that portrayed
those under investigation as strange or alien. Heavily positivist.

2004 onwards. The fractured future: Lincoln and Denzin (2005: 1123)
speculated that the future would hold ‘randomized field trials’ for one
group of researchers and ‘the pursuit of a socially and
culturally responsive, communitarian, justice-oriented
set of studies’ for the other.
Perhaps the most obvious issue with Denzin and Lincoln’s list is that
it ends in the early 2000s and does not acknowledge the many new
forms of and perspectives within modern qualitative research that
have emerged since. We might summarize the more recent themes
and ‘movements’ associated with qualitative research as follows.
• An emphasis on the sensory. While the sensory has been a
part of the social sciences for a long time—as evident in Georg
Simmel’s (1908) work on odour, for example, or H. S. Becker’s
(1974) observation that many nineteenth-century issues of the
American Journal of Sociology contained visual material—
there has been a recent refocusing on the importance of the
sensory in accessing and interpreting reality. This can be seen
in Pink’s work on sensory ethnography (Pink 2015) and
research that has followed Kress and van Leeuwen’s exploration of
multimodality (2001).
• focus groups;
• the collection of texts and documents.
• prioritizing flexibility;
The preference for seeing through the eyes of the people studied
often goes along with the closely related goal of probing beneath
surface appearances. In attempting to take the position of the people
you are studying, you may find that they view things in a different
way from what you might have expected. This stance is clear in
Mason’s (2018) study of Somali teenagers in the north of England,
showing how ‘swagger’ plays an important role in affirming
boundaries associated with place, race, and class. ‘Swagger’ is a
particularly embodied subcultural style that incorporates clothing
and bodily movement. It is often ascribed to teenagers or youth
culture, particularly in the lower social classes, who may appear
dressed in a somewhat showy way and to be exaggeratedly confident
in their manner. It is also sometimes regarded as a negative trait
associated with criminality.
Emphasis on process
Qualitative research tends to view social life in terms of processes.
This reveals itself in a number of different ways, including the way
that qualitative researchers try to show how events and patterns
unfold over time. As Pettigrew usefully puts it, process is ‘a sequence
of individual and collective events, actions, and activities unfolding
over time in context’ (1997: 338).
Ethnographic research is particularly associated with this emphasis
(see Chapter 18), and ethnographers are typically immersed in a
social setting for a long time—often years. This means that they are
able to observe the ways in which events develop and/or the ways in
which the different elements of a social system interconnect. These
might be values, beliefs, behaviours, or collective affinities. Such
findings can help us to see social life in terms of streams of
interdependent events and elements (see Research in focus 16.2 for
an example).
RESEARCH IN FOCUS 16.2
An emphasis on process and flexibility
Flexibility
Many qualitative researchers are critical of approaches to research
that involve imposing predetermined formats on the social world.
This position is largely to do with their preference for seeing through
the eyes of the people being studied. After all, if researchers use a
structured method of data collection (such as a structured
interview with closed-ended questions), they must have made
decisions about what they expect to find, and about the nature of the
social reality that they will encounter. This necessarily limits the
degree to which the researcher can try to adopt the worldview of the
people being studied. For example, Dacin, Munir, and Tracey explain
that their investigation of dining rituals at Cambridge University,
aiming to examine whether they help to perpetuate the British class
system, ‘allowed us to build our understanding of the properly
contextualized experiences of those involved in the dining ritual,
rather than imposing a particular framework upon them’ (2010:
1399).
Step 6. Writing up
findings/conclusions
There is no real difference between the significance of writing up in
quantitative research and qualitative research, so we can make
exactly the same points here as we made in relation to Step 12 in
Figure 7.1. The writer has to convince an audience about the
credibility and significance of the interpretations they offer.
Researchers are not simple messengers for the things they see and
the words they hear. The researcher has to make clear to the
audience the importance and relevance of what they have seen and
heard. Oncini does this by highlighting that his findings and chosen
methods have implications for policies on school mealtimes:
The advantage of ethnography, a method that entails long-term listening to the ways
subjects make sense of their world, has offered me insight into the perspectives of
actors at the intersection with food education policy. This study can hence suggest that
the scientific eye that guides the implementation of school meal policies might benefit
from alternative approaches involving children, cooks, teachers, and parents in the
construction of the menu.
Lincoln and Guba (1985) and Guba and Lincoln (1994) propose two
main criteria for assessing a qualitative study: trustworthiness
and authenticity.
Trustworthiness
A major reason for Guba and Lincoln’s unease about simply applying
reliability and validity to qualitative research is that these criteria
assume that it is possible to have a single, absolute account of social
reality (an approach described in Chapter 2 as realism). Instead,
Guba and Lincoln argue that there can be more than one account.
Authenticity
Tracy’s criteria
Hammersley on validity
Hammersley on relevance
The differences between the three positions mainly reflect the extent
to which the researcher accepts or rejects the realist position. Writers
on qualitative research who apply the ideas of reliability and validity
with little, if any, adaptation broadly position themselves as realists
because their approach implies that qualitative researchers can
capture social reality through their concepts and theories. Lincoln
and Guba reject this view, arguing that qualitative researchers’
concepts and theories are only representations and so there may be
other equally credible representations of the same phenomena. If we
imagine an axis with realism at one end and anti-realism at the
other, Hammersley’s position occupies a middle ground in that he
acknowledges the existence of social phenomena that are part of an
external reality but argues that it is impossible to reproduce that
reality. Most qualitative researchers nowadays probably operate
around this mid-point, though without necessarily endorsing
Hammersley’s views. They usually treat their accounts as one of a
number of possible representations rather than as definitive versions
of social reality. They also strengthen their accounts through some of
the strategies advocated by Lincoln and Guba, such as thick
descriptions, respondent validation exercises, and triangulation.
2. difficult to replicate;
3. difficult to generalize;
Too subjective
Quantitative researchers sometimes criticize qualitative research for
being impressionistic and subjective. This argument suggests that
qualitative findings rely too much on the researcher’s own, often
unsystematic, views about what is significant and important, and
also on the close personal relationships that the researcher develops
with the people studied. As qualitative research tends to begin in a
relatively open-ended way, with a gradual narrowing of focus, it is
true that there are often few clues as to why one area was given
further attention rather than another. In response, quantitative
researchers may highlight how, in the stage of research when they
are formulating questions or problems to be solved, they state their
research focus explicitly in terms of the existing literature and key
theoretical ideas.
Difficult to replicate
Although replication in the social sciences is not straightforward
regardless of the research strategy (see Chapter 7), quantitative
researchers often argue that the subjective tendencies of qualitative
research are even more problematic because they pose difficulties for
attempting to replicate this kind of study. The fact that the
qualitative approach is usually flexible and reliant upon the
researcher’s ingenuity means that it is almost impossible to conduct
a true replication. In qualitative research, the investigator is often the
main instrument of data collection, so what is observed and heard,
not to mention the focus of the data collection, is very much the
product of their preferences.
Problems of generalization
It is often suggested that the findings of qualitative investigations
have limited scope. When qualitative research is carried out with a
small number of individuals and/or in a certain organization or
locality, critics argue that it is not possible to know how the findings
can be generalized to other settings. How can just one or two cases
be representative of all cases? Can we treat Mason’s research on the
Somali teenagers who attended the youth clubs he studied as
representative of all Somali teenagers or all youth clubs? Can
Oncini’s research on Italian school canteens be representative of
schools and school meals in other countries, and is Demetry’s
(2013) study of a high-end restaurant in the USA generalizable to
all such establishments? Can we treat interviewees who have not
been selected through probability sampling as representative?
The answer in all these cases is, of course, emphatically ‘no’. A case
study is not a sample of one drawn from a known population.
Similarly, the people who are interviewed in qualitative research are
not meant to be representative of a population, and in some cases it
may be more or less impossible to calculate the population with any
accuracy. Instead, the aim of much qualitative research is to
generalize to theory rather than to populations. It is ‘the cogency of
the theoretical reasoning’ (Mitchell 1983: 207)—in other words, the
quality of the theoretical inferences drawn from the qualitative
data—rather than statistical criteria that helps us decide whether
qualitative research findings are generalizable. As we noted in
Chapter 3, this view of generalization is called ‘analytic
generalization’ by Yin (2009) and ‘theoretical generalization’ by
Mitchell (1983).
Lack of transparency
The final common criticism made of qualitative research is that it can
lack transparency, in that it is sometimes difficult to establish what
the researcher actually did and how they arrived at the study’s
conclusions. A lack of transparency is symptomatic of problematic
practice regardless of the research strategy, but qualitative research
reports are sometimes unclear about how people were chosen for
observation or interview—this can be in sharp contrast to the
sometimes overly detailed accounts of sampling procedures in
reports of quantitative research. But it does not seem fair to suggest
that outlining the ways in which research participants are selected
goes too far towards quantitative research criteria, since readers have
a right to know how research participants were selected and why
they were sampled in a particular way (see Chapter 18).
When getting to grips with research methods, you may find it useful
to draw comparisons between quantitative and qualitative research.
This can help you to understand their broad tendencies and features.
We will outline some of the main differences and similarities in this
section, but like any typology they are broad generalizations and
should not be viewed as clear, fixed distinctions, or as the only things
you need to know about the two research strategies. There are many
variations on these themes, and studies within a particular research
strategy can differ widely from each other. It is also important to
remember that quantitative and qualitative research are not so far
apart that they cannot be combined (an idea we explore further in
Chapter 24). But despite these caveats, there is enough consistency
in the contrasts to allow us to highlight some useful distinctions.
Quantitative                                   Qualitative
Numbers                                        Words
Theory and concepts are tested in research     Theory and concepts emerge from data
Structured                                     Flexible
Macro                                          Micro
Behaviour                                      Meaning
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
The first three purposive sampling approaches in the list are ones
that are particularly likely to be employed in connection with the
selection of cases or contexts. The others are likely to be used in
connection with both the sampling of individuals and that of cases or
contexts. Arguably the most commonly used types from this list are
theoretical sampling and snowball sampling, and we can add to these
an approach that Alan Bryman has called ‘generic purposive
sampling’. This approach is relatively open-ended and emphasizes
the generation of concepts and theories but does not have certain
characteristics of theoretical sampling, such as its cyclical, iterative
nature, and it can be used in conjunction with some of the other
types. We consider all three types in Section 17.4.
Theoretical sampling
Theoretical sampling was advocated by Glaser and Strauss (1967)
and Strauss and Corbin (1998) in the context of an approach to
qualitative data analysis that they developed known as grounded
theory (see Key concept 16.2), but it can be adapted for use with
other forms of analysis (see Chapter 23).
Please note: some readers may find this content, which relates to a
study of bereaved parents, upsetting.
Butler et al. (2018) have noted that although theoretical
sampling is a key part of grounded theory, it is often difficult to
find examples of research that detail how the approach works
in practice. Using their study of parental experiences of the
death of their child in intensive care, they demonstrate how
theoretical sampling can be directed toward three different
data-collection techniques: sampling new research sites,
adding new interview questions, and sampling for specific
characteristics.
Their study began by interviewing five parents from an initial
site. They analysed the interview transcripts using open,
focused, and theoretical coding, constant comparison, and
memos (notes of their ideas about and reactions to the data;
see Research in focus 23.1). They then applied theoretical
sampling to further develop the categories they had
discovered. In the first instance they sought out two further
research sites, and they then added a fourth toward the end of
the project. Given that the concept of ‘support’ (the help offered
to the parents by the hospital and other professionals) had
emerged as a key consideration during the early phases of
analysis, they specifically chose hospitals that provided the
same level of care as the original site but that also offered
support in the form of a family social worker. Interviewing
participants from these two new sites, they continued to
interview parents from the original site in order to use ‘constant
comparison and memoing to explore the concept of support
across all sites’ (Butler et al. 2018: 564).
At the same time as adding new research sites, their
analysis was revealing that judging healthcare providers
was also important to participants. This was an unexpected
finding. As a result, the researchers incorporated this into the
interview schedule by noting that where a participant
mentioned ‘fantastic’ staff, they would probe for examples of
‘poor’ staff too. This enabled them to develop the categories of
‘good’ and ‘poor’ healthcare professionals.
While there is no specific requirement within grounded
theory for population representativeness, Butler et al.’s third
type of theoretical sampling was used to identify new
participant characteristics. During the process of data
collection, they noticed that there were differences in
responses from those parents where a child had died under the
age of five, or where there was significant developmental delay,
and from those where a previously healthy teenager had died.
To explore these differences further, they deliberately sought
out parents of primary- and secondary-school-aged children in
later interviews.
Butler et al. (2018) highlight that there is no single correct
way to approach theoretical sampling, but their multi-faceted
approach does provide a helpful illustration of how researchers
can make sampling decisions in relation to their emerging
theory.
Generic purposive sampling
When using a generic purposive sampling approach, the researcher
draws up criteria for the kinds of cases they will need to address the
research questions. They then identify appropriate cases and sample
units from within those cases. Generic purposive sampling
can be used in conjunction with several of the sampling strategies
identified in Key concept 17.1. It can be used in a sequential or a fixed
way, and the criteria for selecting cases or individuals can be formed
a priori or be contingent—or even a mixture of both. It would be
possible, for example, to plan a priori criteria based on sampling
from some initial socio-demographic characteristics such as gender,
ethnicity, and age, and then develop more thematic criteria based on
the emerging properties of the data.
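The mix of a priori and contingent criteria described above can be sketched as a sequence of filters over a pool of potential cases. This is a purely illustrative sketch: the participant pool, field names, and criteria below are invented for the example, not drawn from any study in the chapter.

```python
# Hypothetical pool of potential participants; all fields are invented
pool = [
    {"name": "P1", "gender": "female", "age": 24, "ethnicity": "X"},
    {"name": "P2", "gender": "male",   "age": 67, "ethnicity": "Y"},
    {"name": "P3", "gender": "female", "age": 45, "ethnicity": "Y"},
    {"name": "P4", "gender": "male",   "age": 31, "ethnicity": "X"},
]

# An a priori criterion, fixed before data collection begins
a_priori = lambda p: p["age"] >= 30

# A contingent criterion, added later in response to emerging themes
contingent = lambda p: p["ethnicity"] == "Y"

first_wave = [p for p in pool if a_priori(p)]
second_wave = [p for p in first_wave if contingent(p)]

print([p["name"] for p in first_wave])   # selected on the a priori criterion
print([p["name"] for p in second_wave])  # narrowed by the contingent criterion
```

The point of the sketch is only that the two kinds of criteria operate at different stages: the first wave is fixed in advance, while the second reflects properties of the data that could not have been anticipated.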
Snowball sampling
Snowball sampling is a technique in which the researcher initially
samples a small group of people who are relevant to the research
questions, and those sampled then recommend other participants
who have experiences or characteristics that are relevant to the
research. These participants will then suggest others, and so on. Just
like a snowball, the sample gradually increases in size as the research
rolls along. Becker’s (1963) study of marijuana users, detailed in
Research in focus 17.2, used snowball sampling to very good effect,
and Learn from experience 17.2 provides an example of its effective
use within a student project.
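The referral process that gives snowball sampling its name can be sketched as a simple queue-based traversal of a referral network. The network and participant labels below are invented for illustration; in real fieldwork the "graph" is of course not known in advance but revealed one referral at a time.

```python
from collections import deque

def snowball_sample(referrals, seeds, max_size):
    """Grow a sample by following participant referrals.

    referrals: dict mapping each participant to the people they suggest
    seeds: the small initial group relevant to the research questions
    max_size: stop recruiting once the sample reaches this size
    """
    sample = []
    seen = set()
    queue = deque(seeds)
    while queue and len(sample) < max_size:
        person = queue.popleft()
        if person in seen:
            continue  # already interviewed
        seen.add(person)
        sample.append(person)
        # Each participant recommends others with relevant experience
        queue.extend(referrals.get(person, []))
    return sample

# Hypothetical referral network for illustration
network = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": ["F"],
}
print(snowball_sample(network, ["A"], max_size=5))
```

Note how the sample "rolls" outward from the seeds, and how its final composition depends entirely on who the early participants choose to recommend, which is precisely why snowball samples cannot be treated as representative of a wider population.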
LEARN FROM EXPERIENCE 17.2
Snowball sampling for a student project
Sample size
One of the problems commonly associated with qualitative sampling
is knowing how extensive the sample needs to be at the start of the
project. This is especially true if theoretical considerations are
guiding selection, but it is also often the case that as an investigation
proceeds, it becomes clear that more people and groups will need to
be interviewed than were initially anticipated.
As a guiding principle, the broader the scope of the study and the
more comparisons being made between groups, the larger the
sample size should be (Warren 2002; Morse 2004). However,
Warren (2002: 99) observes that, for a qualitative study based on
interview data to be published, the minimum number of interviews
required seems to be between 20 and 30, although some qualitative
approaches can use fewer than this. This suggests that
although there is an emphasis on the importance of sampling
purposively in qualitative research, there are still minimum levels of
acceptability of sample size. Unfortunately, the issue is not clear-cut:
there are certainly exceptions to Warren’s rule. Life story research,
for example, is often based on very intensive interviews where there
may be just one or two interviewees. Also, not all researchers agree
with Warren’s figure. Gerson and Horowitz (2002: 223) write that
‘fewer than 60 interviews cannot support convincing conclusions and
more than 150 produce too much material to analyse effectively and
expeditiously’. Adler and Adler (2012), on the other hand, advise a
range of between 12 and 60, with a mean of 30.
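Because the figures quoted above disagree, it can help to see them side by side. The function below is nothing more than an arithmetic summary of the contested rules of thumb cited in the text (Warren 2002; Gerson and Horowitz 2002; Adler and Adler 2012); it is illustrative only, and none of these ranges is a requirement.

```python
def sample_size_guidance(n):
    """Compare a proposed number of interviews with the rule-of-thumb
    ranges quoted in the text. These are contested heuristics from the
    literature, not methodological requirements."""
    notes = []
    if 20 <= n <= 30:
        notes.append("meets Warren's suggested publication minimum (20-30)")
    if n < 60:
        notes.append("below Gerson and Horowitz's 60-interview threshold")
    elif n > 150:
        notes.append("above Gerson and Horowitz's 150-interview ceiling")
    if 12 <= n <= 60:
        notes.append("within Adler and Adler's advised range (12-60)")
    return notes

print(sample_size_guidance(25))
```

A proposed study of 25 interviews, for instance, satisfies two of the guidelines while falling short of a third, which illustrates the chapter's point that there is no single agreed figure.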
Attending to time means that the ethnographer must make sure that
people or events are observed at different times of the day and
different days of the week. To do otherwise risks drawing inferences
about behaviour or events that are valid only for particular times or
days. For several reasons it is impossible to be an ethnographer at all
times of the day, but it is important that researchers make an effort
to schedule ethnographic encounters at different times to try to
ensure fuller coverage of the field.
It can also be important to sample in terms of context. People’s
behaviour is influenced by situational and environmental factors, so
they need to be observed in a variety of locations. For example, one
of the important features of football fandom is that it is not solely
located within stadiums. In order to understand the culture and lives
of football fans, researchers such as Millward (2009) and Pearson
(2009; 2012) had to ensure that they interacted with supporters not
just around the time of football matches, but also in a variety of
contexts (such as pubs and general socializing). This also meant
interacting with them at different times. Pearson (2012) contrasts his
experiences as a participant observer of supporters of both Blackpool
and Manchester United football clubs and notes that one significant
difference was that he was not a Blackpool resident whereas he did
live in the Manchester area. He writes:
I was now able to gain data about their behaviour outside the immediacy of the
football match. Living locally gave me access to a wider and more varied life-world of
some of the individuals … This gave me a much better idea of how their behaviour
around football changed from their behaviour in other contexts.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
THEORY INTO PRACTICE
• writing ethnography;
There are similarly blurred lines between overt and covert strategies.
For example, an ethnographer may seek access through an overt
route but might come into contact with many people who are not
aware of the ethnographer’s status as a researcher. Cassidy’s (2014)
ethnographic research in London betting shops was not covert, but
she notes that ‘it is possible that some of the people I observed in
shops were unaware of the reason for my presence’ (2014: 172).
Another interesting case is a study by Glucksmann (1994), who in the
1970s left her academic post to work on a factory assembly line in
order to explore the reasons why feminism appeared not to be
relevant to working-class women. In a sense, she was a covert
observer, but her motives for the research were primarily political,
and she says that at the time she was undertaking the research she
had no intention of writing the book that subsequently appeared and
that was published under a pseudonym (Cavendish 1982). After the
book’s publication, it was treated as an example of ethnographic
research. Was she an overt or a covert observer (or neither, or both)?
We have said that studies of Types 1 and 2 tend to be more numerous
than Types 3 and 4. This reflects the fact that ethnographers are far
more likely to adopt an overt role than a covert one. For this reason,
most of our discussion of access issues will focus on ethnographers
aiming to employ an overt role, but it is worth our briefly reflecting
on the value of the covert role in ethnography. The advantages of this
strategy include the following.
One potential issue is that people might have suspicions about you
and your motives, and have concerns about what will happen to the
information they give you. They may, for example, see you as an
instrument of management or a local or government official. When
Baird (2018) approached potential participants for his research on
how Colombian gang members negotiate violence, he was quick to
say ‘Hey! I work at a University, not for the police or nothin’. You
don’t have to talk to me or if you don’t want to talk to me, no
sweat’ (2018: 343). In her research on the British on the Costa del Sol,
O’Reilly (2000) was similarly suspected of being from the UK’s
Department of Social Security and of being a tax inspector. People
may also worry that you will report things they say or do to bosses, to
colleagues, or to peers in other kinds of environment. Van Maanen
(1991a) notes from his research on the police that if you conduct
ethnographic research among officers, you are likely to observe
activities that may be deeply discrediting and even illegal. Your
credibility among police officers will be determined by your reactions
to these situations and events.
If people have these worries, they may go along with your research to
influence your findings, or in some cases actively work to sabotage
your project by engaging in deception or misinformation, or not
allowing you access to ‘back regions’ (Goffman 1956).
Key informants
An ethnographer relies a lot on informants, but certain informants
may become particularly important during the research process.
These key informants often develop an appreciation of the
research and direct the ethnographer to situations, events, or people
likely to be helpful to the progress of the investigation. Whyte (1955),
for example, reports that his key informant ‘Doc’ said to him: ‘You
tell me what you want to see, and we’ll arrange it. When you want
some information, I’ll ask for it, and you listen. When you want to
find out their philosophy of life, I’ll start an argument and get it for
you. If there’s something else you want to get, I’ll stage an act for you’
(Whyte 1955: 292). Doc was also helpful in warning Whyte that he
was asking too many questions, when he told him to ‘go easy on that
“who,” “what,” “why,” “when,” “where,” stuff’ (Whyte 1955: 303). Key
informants can clearly be a big help to the ethnographer, but it is
important to be aware of the risks of working with them. The
ethnographer may develop an excessive reliance on the key
informant, and might begin to see social reality through the eyes of
the informant rather than through the eyes of members of the social
setting.
Field roles
Covert full member: Full membership of the group, but the researcher’s
status as a researcher is unknown. In closed settings such as
organizations, the researcher works as a paid employee for the group.
The researcher may already be employed in the organization or may get
a job there in order to conduct the research. In open settings, such as
communities, the researcher moves to the area for a significant length
of time or, like Hardie-Bick (see Research in focus 18.2), uses a
pre-existing identity as a means of becoming a full member for the
purposes of research.

Overt full member: Full membership of the group, but the researcher’s
status as a researcher is known. In other respects, this is the same as
a covert full member.
Each role carries its own advantages and risks. The roles of full
member (covert and overt) and participating observer carry the risk
that the researcher may over-identify with the group and ‘go native’
(see Key concept 18.3), but they offer the researcher the opportunity
to get close to people and gain a more complete understanding of
their culture and values. However, the role an ethnographer takes is
not only a matter of choice—it might, for example, be determined by
their own characteristics, knowledge, or abilities. Not everyone has
the physical credentials to be a full member as a bouncer (Calvey
2019), a basketball player (Rogers 2019), or a fashion model (Mears
2011). Equally, it would have been very difficult for someone like
Gusterson (1996; see Thinking deeply 18.4) to gain full-member
access for his study of a nuclear weapons laboratory.
KEY CONCEPT 18.3
What is ‘going native’?
Active or passive?
(Punch 1979: 8)
RESEARCH IN FOCUS 18.3
Active ethnography
One final point to make is that the roles that ethnographers take on
will probably be influenced by emotions—both their own and those
of their participants—because emotions are implicitly intertwined
with the research process. We discuss this idea further in Thinking
deeply 18.1.
THINKING DEEPLY 18.1
Emotion and fieldwork
Fieldwork sites are not just places for data collection; they are
emotional landscapes that both researcher and researched
experience in a variety of ways. Researchers may experience
enthusiasm at the start of the project; they may feel happiness
when they successfully engage with the field, and frustration
when they don’t; they may develop affection for some of the
actors in the field, and dislike for others; they may feel mentally
and physically exhausted toward the end of the engagement;
they may feel sadness (or happiness) at leaving it and its
people. Conducting her research in a large London cemetery,
Woodthorpe (2011) notes how her emotional reaction to the
people she met there became a problem:
Over time my reluctance to approach certain people became a great source of
anxiety. Was I being methodical enough in how I asked visitors to participate?
Was I skewing my sample with the people I felt confident in approaching?
These uncertainties started to impact on fieldwork more generally, as I found
myself gradually becoming more and more uneasy approaching and talking to
visitors … On some days observing children’s graves or witnessing parents
around my age attend a grave would move me to tears and render me ‘out of
action’ for some time (at best half an hour, at worst the rest of the day). On one
occasion, I was so touched by a man’s careful and concentrated nurturing of a
flowerpot by a grave that I had to leave the cemetery for a short while.
(Woodthorpe 2011: 104)
Taking field notes
Because of the frailties of human memory, ethnographers have to
take notes based on their observations. These should be fairly
detailed summaries of events and behaviour and the researcher’s
initial reflections on them. The notes need to specify key
dimensions of whatever is observed or heard.
Strategies for taking field notes are affected by the degree to which
the ethnographer enters the field with clear research questions—and
whatever you do write down needs to reflect your research focus in
some way. As we discussed in Chapter 16, most qualitative research
begins with general research questions, but there is considerable
variation in how specific those questions are. Obviously, where the
question is clearly defined from the outset, ethnographers have to
align their observations with that research focus, but at the same
time they need to maintain a fairly open mind so as to maintain the
element of flexibility that is a strength of qualitative research (see
Thinking deeply 18.2).
THINKING DEEPLY 18.2
Research questions in ethnographic research
For most ethnographers, the main equipment they will need in the
course of observation will be a notepad and a pen. However,
improvements in voice recognition software and mobile technologies
have made text and speech recording devices much more popular for
producing field notes (see Learn from experience 18.2 and Section
19.4).
LEARN FROM EXPERIENCE 18.2
Strategies for taking field notes
Some writers have found it useful to classify the types of field notes
that are generated in the process of conducting an ethnography. The
following classification is based on the similar categories suggested
by Adler and Adler (2009), Sanjek (1990), and Lofland and Lofland
(1995).
RESEARCH IN FOCUS 18.5
Types of field note
As Demetry points out, the field note brings out the informal
kitchen ambience created by the head chef and the use that he
made of music as a means of reinforcing that atmosphere.
• The ethnographer may simply feel that they have had enough.
Ethnographic research can be highly stressful for many
reasons: the nature of the topic, which may place the
fieldworker in stressful situations (as in research on crime);
the marginality of the researcher in the social setting and the
need to constantly manage a front; and the prolonged absence
from their normal life that is often necessary.
• The ethnographer may feel that the research questions on
which they have decided to concentrate have been answered,
so that there are no new data worth generating. Altheide’s
(1980: 310) decision to leave the various news organizations
in which he had conducted ethnographic research was
motivated by ‘the recurrence of familiar situations and the
feeling that little worthwhile was being revealed’. This is a
form of data saturation (see Chapter 17).
KEY CONCEPT 18.4
What is auto-ethnography?
Online ethnography
Of course, the line between online and offline worlds is not clear-cut
—and many studies involve both elements. Practitioners of what
might be thought of as conventional ethnography increasingly have
to take their participants’ online interactions into account, making
use of methods and sources of data associated with online
ethnography. For example, Pearson (2012) found that numerous
online football forums and message boards were relevant to his
research, as they were places in which supporters discussed
footballing issues and arranged to meet up. Similarly, in two separate
conventional ethnographies of ‘physical spaces’ Hallett and Barber
(2014: 314) found themselves increasingly ‘pulled into online spaces
because that was where our participants were’ (emphasis in original).
In one of the studies—an ethnography of two men’s hair salons—the
authors had planned a conventional ethnography based on
observations and interviews but had to turn to online reviews when
one of the salons refused to allow clients to be interviewed. Similarly,
researchers focusing on digital worlds also need to acknowledge
these blurred lines: as both Hine (2008) and Garcia et al. (2009)
observe, even the most committed internet user has a life beyond
their device(s).
RESEARCH IN FOCUS 18.6
Covert participant observation in cyberspace
Type 1. Study of online communities only, with no participation
Type 2. Study of online communities only, with some participation
Type 3. Study of online communities plus online or offline interviews
Same as Type 2, but in addition the researcher interviews some
of the people involved in the online interaction. The interviews
may be online or offline.
Type 4. Study of online communities plus offline research methods (in
addition to online or offline interviews)
The nature of the community being studied can also influence the
approach taken. Kozinets, who developed the approach and term of
netnography (see Key concept 18.5), draws a distinction between
ethnographies of online communities and ethnographies of
communities online (Kozinets 2010). The former involve the study of
communities that have a largely online existence, such as his
research on online discussion forums of knowledgeable coffee
enthusiasts (Kozinets 2002). Another example of a community with
an exclusively online existence is Banks’s (2012, 2014) covert online
ethnography of the ‘advantage play’ subculture, whose participants
use mathematical techniques to try to reduce the risks of gambling
by taking advantage of technical weaknesses in gambling products.
Banks became an advantage player and was a covert participant
observer of an online forum for 18 months. The study shows how
participants seek to manage risk, not just of losing money, but of
being fleeced by gambling sites that take the gambler’s stake but
close down their operations before paying out.
KEY CONCEPT 18.5
What is netnography?
Visual ethnography
These men perform the identical tasks shown above all day long and may
fasten from eight hundred to one thousand wheels in eight hours. The
movement of the cars along the conveyor belt determines the pace of their
work and kept them close to their stations, virtually ‘chained’ to the assembly
line.
(Blauner 1964: 112)
(Fish 2017: 9)
Stacey also argues that, when the research is written up, it is the
feminist ethnographer’s interpretations and judgements that come
through and that have authority. Skeggs responds to this criticism by
acknowledging that in the case of her study, her academic career was
undoubtedly enhanced by the research, but her participants were not
in any way victims. Instead, she argues:
The young women were not prepared to be exploited; just as they were able to resist
most things which did not promise economic or cultural reward, they were able to
resist me. … They enjoyed the research. It provided resources for developing a sense of
their self-worth. More importantly, the feminism of the research has provided a
framework which they use to explain that their individual problems are part of a wider
structure and not their personal fault.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
RESEARCH IN FOCUS 19.1
Semi-structured interviewing
RESEARCH IN FOCUS 19.2
Unstructured interviewing
Semi-structured or unstructured
interview?
Choosing whether to take a semi-structured or unstructured
approach will depend on a number of things. These include your
methodological framework, research questions, and any associated
techniques of analysis.
• Researchers who are concerned that even the most basic
interview guide will prevent them from gaining genuine access
to the worldviews of their interviewees are likely to favour an
unstructured interview.
• If the researcher is beginning the investigation with a fairly
clear focus, rather than a very general idea of wanting to do
research on a topic, they are likely to use semi-structured
interviews because this type allows them to address more
specific issues.
• If more than one person is going to carry out the fieldwork,
semi-structured interviewing is likely to be preferable because
it helps to ensure some consistency in interviewing style. (See
Research in focus 19.2 and 19.3 for examples.)
• encounters;
• stories.
After you have developed your interview guide, you should reflect on
the questions to make sure they really do cover the range of issues
that you need to address. Indeed, although flexibility is a key
characteristic of qualitative interviews, you still need to give some
thought to how you will gather information about the phenomena
you are interested in—in other words, the types of questions you will
ask.
r Not at the moment, well I suppose it depended what was on offer, the big
problem is, I did actually consider, or I considered and was considered for a
directorship at Lloyds Insurance Company, so I went down and spoke to
them, and I said to Diane [wife] before I went, it’s like two days every
month, you know you get paid thirty thousand a year, which is very nice,
but it’s two days every month and you’ve got to be there, which means if we
went away for five weeks, we’re sort of knackered and you’ve got to build
all your holidays around it, so anyway went for the interview, I didn’t get it
but on the other hand I wasn’t that enthusiastic about it.
int No.
int Uh huh. But I mean, you have been retired for ten years, haven’t you?
r Ten years, yeah but we still don’t know what we want to do. We’re drifting, I
suppose—nicely, no problems on that, but we haven’t got anything … we
keep on saying, we’ve got the money, what do we want to spend it on? We
don’t know. It’s always been that we don’t know what we want to do; we
don’t know whether we want to buy a house. We do look at them and say
we don’t want another house. We don’t really want another car—can’t be
bothered about that. I should give my car away! And things like that, so …
no, we don’t know what we want to do.
Practical preparations
As well as preparing an interview guide, you will also need to
consider some practical issues that can influence the direction and
flow of the interview.
• Make sure you are familiar with the setting in which the
interviewee works or lives. This will help you to understand
what the interviewee is saying in their own terms.
• Try to show the interviewee that you have a compelling reason
for wanting to examine the topic that you are addressing: why
it is important, why you selected them to be interviewed, and
so on.
• Get hold of a good-quality recording device and microphone.
Qualitative researchers nearly always record and then
transcribe their interviews (see ‘Recording and transcription’).
A good microphone is important, because many interviews are
let down by poor recording. Make sure you are familiar with
how to use the equipment before beginning your interviews.
• Try to ensure that the interview takes place somewhere quiet,
so that there is little chance of background noise affecting the
quality of the recording. The setting should also be private so
interviewees do not have to worry about being overheard.
• Balanced: does not talk too much, which may make the
interviewee passive, and does not talk too little, which
may result in the interviewee feeling that they are not
talking along the right lines.
The balance between being active but not too intrusive is difficult to
strike, and an interviewer must be very responsive to not only what
the interviewee is saying (or not saying), but also what they are
doing. Something like body language may indicate that the
interviewee is becoming anxious about a line of questioning. An
ethically sensitive interviewer will not want to place undue pressure
on the person they are talking to and will need to be prepared to cut
short a line of questioning if it is clearly a source of concern.
wife Well, I thought the different countries at Epcot were wonderful, but I need
to say more than that, don’t I?
husband They were very good and some were better than others, but that was
down to the host countries themselves really, as I suppose each of the
countries represented would have been responsible for their own part, so
that’s nothing to do with Disney, I wouldn’t have thought. I mean some of
the landmarks were hard to recognize for what they were supposed to be,
but some were very well done. Britain was OK, but there was only a pub
and a Welsh shop there really, whereas some of the other pavilions, as I
think they were called, were good ambassadors for the countries they
represented. China, for example, had an excellent 360-degree film showing
parts of China and I found that very interesting.
interviewer Did you think there was anything lacking about the content?
husband Well I did notice that there weren’t many black people at World
Showcase, particularly the American Adventure. Now whether we were
there on an unusual day in that respect I don’t know, but we saw plenty of
black Americans in the Magic Kingdom and other places, but very few if
any in that World Showcase. And there was certainly little mention of black
history in the American Adventure presentation, so maybe they felt
alienated by that, I don’t know, but they were noticeable by their absence.
interviewer So did you think there were any special emphases?
husband Well thinking about it now, because I hadn’t really given this any
consideration before you started asking about it, but thinking about it now, it
was only really representative of the developed world, you know, Britain,
America, Japan, world leaders many of them in technology, and there was
nothing of the Third World there. Maybe that’s their own fault, maybe they
were asked to participate and didn’t, but now that I think about it, that does
come to me. What do you think, love?
wife Well, like you, I hadn’t thought of it like that before, but I agree with you.
The sequence begins with the interviewer asking what would be
considered a ‘direct question’ in terms of Kvale’s (1996) nine types.
The initial reply is very bland and does little more than reflect the
interviewees’ positive feelings about their visit to Disney World. The
wife acknowledges this when she says ‘but I need to say more than
that, don’t I?’ Interviewees often know that they are expected to be
expansive in their answers, and as this sequence occurred around
halfway through the interview, the interviewees had realized by this
point that more details were expected. There is a slight hint of
embarrassment that the answer has been so brief and not very
illuminating. The husband’s answer is more expansive but not
particularly enlightening, so the interviewer uses probing questions
to gather more information.
The interviewer’s first prompt (‘Did you think there was anything
lacking about the content?’) produces a more interesting response
from the husband: he begins to suggest that black people might be
under-represented in attractions such as the American Adventure, a
dramatic production that tells the story of America through a debate
between two audio-animatronic figures—Mark Twain and Benjamin
Franklin. The second prompt (‘So did you think there were any
special emphases?’) produces further useful reflection, this time
carrying the implication that ‘Third World’ (developing) countries
are under-represented in World Showcase in the Epcot Centre (an
area of the theme park that claims to provide a country-by-country
world tour, showcasing national culture, history, architecture,
cuisine etc.). The couple are clearly aware that it is the prompting
that has led to these reflections when they say: ‘Well thinking about
it now, because I hadn’t really given this any consideration before
you started asking about it’ and ‘Well, like you, I hadn’t thought of it
like that before.’ This is the whole point of prompting—to get the
interviewee to think more about the topic and to give them the
opportunity to provide a more detailed response. It is not a leading
question, because the interviewer didn’t ask ‘Do you think that the
Disney company fails to recognize the significance of black history
(or ignores the developing world) in its presentation of different
cultures?’
Methods of interviewing
A ‘standard’ qualitative interview could be described as involving
spoken questions and being conducted face-to-face, at a single
location. Here, we discuss some alternative methods of conducting
this kind of interview: using ‘props’ such as vignettes and visual
media to provide a basis for discussions; using the mobile interview
approach (conducting interviews on the move); and conducting the
interview remotely, via telephone, email or online messaging, or
video call.
Vignettes and photo-elicitation
Although there may be times when you want to ask fairly general
questions, these are usually best avoided. Indeed, Mason (2002)
suggests that using them often only leads interviewees to ask the
interviewer to clarify what is meant. For example, the
question ‘How do you think you are affected by advertising?’ is likely
to be met with the response ‘In what way?’ Vignette questions can be
a useful alternative, outlining a specific situation that can lead to a
more focused discussion. Vignettes (which we cover in Section 11.4)
are brief descriptions of an event or episode, and you can use them to
ground interviewees’ views and accounts of behaviour in particular
situations (Barter and Renold 1999). Researchers usually devise
them before the interview, and they can be read aloud to
interviewees, or given to them in written format. In either case, the
point is to present interviewees with concrete and realistic scenarios
so that researchers can assess how certain contexts might affect
behaviour. Research in focus 19.6 provides two examples of studies
that made effective use of vignettes to facilitate questioning.
RESEARCH I N F O CUS 1 9 . 6
Studies using vignettes
Telephone interviewing
• They are unlikely to work well with interviews that will last a
long time, as it is much easier for the interviewee to end a
telephone interview than one conducted in person. (This is
more significant for qualitative than for quantitative interviews,
as qualitative interviews tend to be more time-consuming for
interviewees.)
On the other hand, Curasi also found that the interviews that
produced the least detailed data were online interviews. This and the
other differences could be to do with the fact that, whereas a
qualitative face-to-face interview is spoken, the parallel online
interview is typed. The significance of this difference in the nature of
the respondent’s mode of answering has not been fully appreciated.
However, it could be that the immediacy of face-to-face interviews
provides greater opportunity for the interviewer and interviewee to
offer verbal and non-verbal cues that facilitate greater interaction
and further information.
• There are obvious time and cost savings as there is no need for
interviewer or interviewee to travel, which is a major
advantage with geographically dispersed samples.
• Digital audio files, such as .wav files, are large, so they
require a lot of disk space for storage. It is also important to
think about storage in relation to the ethical requirements of
the research. This might mean paying particular attention to
the anonymity of the transcripts, but also the security of any
recording/transcription. Many word-processing packages
have password protection capacity, as do many storage
platforms, and you should use these where possible to provide
multiple layers of encryption.
• There are competing formats for both digital files and voice-
to-text software, which can cause compatibility problems.
T HI NKI NG DEEPLY 1 9 . 2
The usefulness of interview transcripts
Once you have gone through all of your interview topics and
exhausted any other avenues of exploration, it is time to think about
ending the interview. There are a number of ways to do this, but it is
often a good idea to say something along the lines of ‘Well, I think
that’s everything I wanted to talk about, is there anything that you’d
like to ask me?’ This allows the interviewee some space to clarify
anything they might have said, as well as giving them the
opportunity to ask about any part of the research that interests them.
Then it is a good idea to remind them what will happen to their data
and confirm whether they are happy to be contacted about the
project in the future. Finally, thank them for their time.
It’s also worth remembering that an interview transcript will not tell
you everything about an interview. So after an interview, try to make
notes about:
• how the interview went (whether the interviewee was
talkative, cooperative, nervous, etc.);
• body language;
• any other feelings about the interview (did it open up new
avenues of interest?);
This will help you remember the contextual detail of the interview
when you conduct your analysis.
It is also worth reflecting on how issues of your own identity might
have influenced the direction and flow of the interview. What was the
rapport like during the interview? What sort of role did you take
during the interview—for example, a naïve student, expert, or friend
—and how did this seem to impact on the discussion? How might
characteristics of your identity such as gender, race, or class have
enhanced or inhibited the conversation and what participants may
have been willing to reveal? These things do not always matter, but
in some contexts they can and they are worth reflecting upon
because they can help you evaluate the research process.
There are two special forms of interviewing known as life history and
oral history interviews. Both tend to be much more unstructured
than semi-structured interviews. Indeed, when taking either of these
approaches the interviewer purposely takes less control over the
direction and focus of the interview. In some instances the
researcher might encourage the interviewee to talk around some
loose themes and to change direction as and when necessary; in
others the content can be completely dependent on the interviewee.
Both life history and oral history approaches are worth considering
further, as they have been particularly popular with qualitative
researchers.
• Love and work. For example: ‘How did you end up in the type
of work you do or did?’
• Vision of the future. For example: ‘Is your life fulfilled yet?’
• Closure questions. For example: ‘Do you feel that you have
given a fair picture of yourself?’
(Atkinson 1998: 43–53)
Becker and Geer (1957: 28) state that the ‘most complete form of the
sociological datum [the singular form of “data”] … is the form in
which the participant observer gathers it’. Trow (1957: 33), however,
argued that ‘the problem under investigation properly dictates the
methods of investigation’. In this book we take the latter view. We
would suggest that every research method we discuss is appropriate
for researching some issues and questions but not others. As we
noted earlier, comparing interviewing and participant observation is
in a sense an artificial exercise because the latter is usually carried
out as part of ethnographic research, meaning that it is usually
accompanied by interviewing as well as other methods. In other
words, participant observers often support their observations with
methods of data collection that allow them to access important areas
that cannot easily be observed. However, the aim of this comparison
is to give you an idea of the advantages and disadvantages of the two
methods alone, as these are factors that you might want to take into
account when deciding how to plan a study and even how to evaluate
existing research.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
The focus group method has been gaining popularity in the social
sciences since the 1980s, but it is by no means a new technique
(Tadajewski 2016). It has been used for years in market research,
where it is employed for purposes like testing responses to new
products and advertising initiatives. In fact, there is a significant
body of literature within market research which looks at the practices
associated with focus group research and their implementation (e.g.
Calder 1977). Most social science focus group researchers undertake
their work within the traditions of qualitative research, meaning
that they are explicitly concerned with revealing how the group
participants view the social world in relation to their own life
experiences. The researcher will therefore aim to provide a setting
that allows participants to express their beliefs and perspectives.
With this aim, the person who runs the focus group session—usually
called the moderator or facilitator—is expected to guide the
discussion but not to be too intrusive.
20.2 What is a focus group?
As we noted in Section 20.1, the focus group (see Key concept 20.1)
has become a popular method within social science. Although it
might initially sound like a group interview it differs from this
method in a number of ways, including the following.
KEY CO NCEPT 2 0 . 1
What is the focus group method?
However, the distinction between the focus group method and the
group interview is by no means clear-cut, and the terms are often
used interchangeably. Nonetheless, the definition proposed in Key
concept 20.1 provides a useful starting point.
There are many different reasons why researchers use focus groups.
These reasons fall under three main categories:
The fact that focus groups give participants the capacity to shape the
direction of the discussion is a key reason for the method’s appeal to,
and increasing use by, feminist researchers. Indeed, Walters (2020)
argues that because the researcher has to relinquish control—and
their position as ‘expert’—to participants, the method can help to
redress the hierarchical nature of the research relationship.
Evidently, this resonates with the goal of reciprocity within feminist
research. Other researchers also point toward the dialogic nature of
focus group discussion, where participants are able to transform
social knowledge when they talk and think together (Caretta and
Vacchelli 2015). The contextual and non-hierarchical capacity of the
focus group can facilitate diversity of opinion and reveal how
collective sense is made (Merryweather 2010). Nevertheless, we
should not imply that focus groups are inherently participatory.
Gender, social status, and other intersectional dimensions of
identity could mean that some people do not want to share their
thoughts. There is always a danger that some voices might prevail
over others. On the other hand, it is also worth highlighting that
the general tendency for consensus within human interaction can
also inhibit a healthy and constructive multivocality where diversity
of opinions is stimulated (Merryweather 2010). So while focus group
discussion can provide unique insight, and be very useful in
addressing issues of power in the research relationship, we cannot
take these things for granted.
• why and how you should record and transcribe focus group
sessions.
Number of groups
As Guest et al. (2017a) highlight, there is little clear guidance about
how many groups will be needed for a research study. It is unlikely
that just one group will meet a researcher’s needs because there is
always the possibility that the responses are particular to that one
group. This would make any qualitative assessment of
generalizability impossible (see Section 16.7). Equally, there are
strong arguments for saying that too many groups will be a waste of
time. Calder (1977) suggests that when the moderator reaches the
point when they can fairly accurately anticipate what the next group
is going to say, there are probably enough groups. This is very similar
to the idea of data saturation that we introduced in Chapter 17 (see
Key concept 17.2), and can be seen in Research in focus 20.2. In
other words, once new themes are no longer emerging there is not
much point continuing with data collection. The problem, of course,
is that you cannot be sure from the outset when this will happen. The
number of focus groups you require will depend on the needs of the
research questions and the data you collect, so it is probably best
not to arrange a long series of focus groups in case not all of them are
necessary. At the same time, you need to make sure that you have
arranged enough groups for important themes to emerge.
RESEARCH I N F O CUS 2 0 . 2
How many focus groups are enough?
It is also worth noting that the more groups you have, the more
complicated the analysis will be. For example, Skoglund (2019)
reports that in her study of how kindergarten workers intervene in
physical conflicts between children, 12 focus groups that were an
hour in length with a total of 47 practitioners produced 240 pages of
transcription.
Size of groups
How big should focus groups be? There is some variation in the size
of focus groups. The general consensus is that there must be at least
four members, with Morgan (1998a) suggesting that
the typical group size is 6 to 10 members. One major problem for
focus group practitioners is that sometimes people agree to
participate but do not turn up on the day. It is almost impossible to
control this other than by deliberately over-recruiting, a strategy that
some writers do recommend (e.g. Wilkinson 1999: 188).
r1 It was actually reading about what had happened to people. It wasn’t all
facts and figures. I know it was, but it has in the first sentence, where it
says ‘I turned the key and experienced a sinking feeling’. You can relate to
that straight away. It’s how you’d feel.
r2 I suppose for me the pure sciences seem to have more control of what they
are looking at because they keep control of more. Because with social
sciences there are many different aspects that could have an impact and
you can’t necessarily control them. So it seems more difficult to pin down
and therefore to some extent controversial.
r3 Pure science is more credible because you’ve got control over test
environments, you’ve got an ability to test and control factually the outcome
and then establish relationships between different agents or whatever. I
think in social science it’s always subject to interpretation. … I think if you
want to create an easy life and be unaccountable to anybody, to obtain
funding and spend your time in a stress-free way then one of the best
things to do is to work in funded research and one of the best areas to do it
in is in social science.
There are a number of things that you can do within the focus
groups to help with the process of recording and transcribing.
• Make sure the room in which the group takes place is
relatively free from external noise. This will help to ensure
that the recording is clear.
[general laughter]
stephanie All right then but no running in the corridors and make sure you
wash your hands afterwards.
[general laughter]
Selecting participants
Who should participate in a focus group? This will depend on the
research question. Some projects will not require participants of a
particular kind, so there is little if any restriction on who might be
appropriate. However, this is a fairly unusual situation and normally
some restriction is required. For example, for their research on
people’s knowledge about heart attacks, Morgan and Spanish (1985:
257) recruited participants in the 35–50 age range, because they
‘would be likely to have more experience with informal discussions of
our chosen topic’, but excluded anyone who had had a heart attack or
who was uneasy about discussing the topic. Research in focus 20.5
describes various recruitment strategies used by researchers.
RESEARCH I N F O CUS 2 0 . 5
Recruiting focus group participants
Asking questions
Although participants have quite a lot of control over how focus
group discussions unfold, the researcher needs to consider what
sorts of topics they want addressed during a session, and how to ask
the questions on those topics. These issues are similar to the
considerations in qualitative interviewing about how unstructured an
interview should be (see Chapter 19).
Some researchers prefer to use just one or two very general questions
to stimulate discussion, with the moderator intervening as necessary.
For example, in their research on knowledge about heart attacks,
Morgan and Spanish (1985) asked participants to discuss just two
topics. One topic was ‘Who has heart attacks and why?’; here the
moderator encouraged participants to talk about people they knew
who had had heart attacks. The second topic was ‘What causes and
what prevents heart attacks?’
mod So if someone provides an indicator which says the economy is improving you
won’t believe it?
f They’ve been saying it for about ten years, but where? I can’t see anything!
• the fact that the session will be recorded and the reasons for
doing so;
At the end, moderators should thank the group members for their
participation and explain very briefly what will happen to the data
they have supplied. If they want to arrange a further session with the
same group, they will need to take steps to coordinate this.
geoff walsh: He hasn’t lost a Prime Minister yet [laughs]. Anyway, Dr Killer went to
inspect the kitchen before the state dinner. … He came back ashen-faced and said,
‘There’s a toilet in the middle of the kitchen. My advice is don’t eat anything.’ So
Paul spent the night with the menu in front of him and basically dodging, because
he had a view that you could pick up hepatitis or something. That would be the end
of your career.
grahame morris: It’s still the standard advice of Graham Killer now: anything that
might have been near water, lettuce or anything, don’t eat it. Brush your teeth out
of bottled water or whiskey. So, he’s still giving the same advice and he’s still
keeping PMs alive.
(Rhodes and Tiernan 2015: 128)
Munday (2006) suggests that the way in which focus groups can
highlight a consensus, as well as the mechanics of that consensus
(how it was reached), makes this method a powerful tool for research
into collective identity. She gives the example of her research on
social movements and, in particular, a focus group with members of
a Women’s Institute (WI) in which she asked about the movie
Calendar Girls, based on the nude calendar made by the members of
the WI in Rylestone (a village in Yorkshire, England) some years
previously. Munday writes that she asked the question because she
felt it might encourage them to discuss the traditional image of WIs
as old-fashioned and ‘out of touch’. Instead, the women chose to
discuss Rylestone WI and the impact that the calendar’s notoriety
had on its members. Later, the participants had the following
interaction:
alice It might appeal more to the younger ones than perhaps the older members don’t
you think? … Although I suppose they were middle-aged ladies themselves.
jane Oh yes.
mary Oh no no.
jane ()
june I mean it was very well done because you never saw anything you wouldn’t want
to.
Munday notes that the discussion of the movie did not revolve
around dispelling the traditional image of WIs, but instead on
dispelling a traditional image of older women, while at the same time
recognizing that the women’s respectability was not compromised. A
sense of collective identity surrounding gender emerged that was
different from how the researcher had anticipated the discussion
would develop. This particular example demonstrates the potential
of the focus group to reveal how collective ideas emerge within social
interaction and how they are then subsequently reinforced by the
group.
p6: I don’t think it’s realistic to become an established player after the first season.
That would be really hard.
p7: If you have the skills, I think one should be able to be there.
p8: I’m thinking that all of us come from the major clubs and to get established in one
of those clubs after the first season will be very hard.
Online focus groups that do not use video also do not allow
researchers to gain non-verbal data, such as facial expression.
Underhill and Olmstead (2003) compared data from synchronous
online focus groups with parallel data from face-to-face groups and
found little difference in terms of data quantity or quality. In
contrast, Brüggen and Willems (2009) found that when they
compared synchronous online focus group findings with those
deriving from face-to-face ones in a market research area, the latter
produced richer data. Abrams et al. (2015) compared the quality of
data deriving from two face-to-face, two online audiovisual (using
video calling applications), and two online text-only focus groups.
They found that face-to-face focus groups produced considerably
more topic-related data. Coders rated the data from the face-to-face
and the online audiovisual groups as superior to online text-only
ones in terms of depth, with the latter groups also found to
be inferior in terms of the numbers of words they produced.
Heritage (1987: 258) has written: ‘it is assumed that social actions
work in detail and hence that the specific details of interaction
cannot simply be ignored as insignificant without damaging the
prospects for coherent and effective analyses.’ This assumption
explains CA’s emphasis on the very specific details that actually help
to constitute conversation, including what people say and how they
say it, but also things like the lengths of pauses and the sounds that
occur in all conversations.
1 P: ↑.hhhh ↑hee:::
2 E: speak out ↑WHO:: ARE ↑YOU: [IN THIS BODY?]
10 HER LIFE,
11 P: .hhh I ↑have ↑para↑↑lysed ↑he:r not to a↑chieve
anything, .hhh
12 ↑HHHH ↑.hhhhh HH
Rowan argues that the evangelist’s initial question (2) for
the spirit/demon to identify itself is fundamental to the
successful performance of the exorcism. Even though the
participant’s response does not specify the identity of the
spirit/demon and instead articulates intended action (3–4), that
is enough for the evangelist to determine that they are
addressing something demonic. This ‘reality’ is then twice
reinforced by the suggestion that the spirit/demon has
paralysed the participant from succeeding in life (6–8 and 11).
Rowan draws two main inferences from this interaction.
First, giving an identity to a ‘demon’ is an integral part of
performing a deliverance. Without it, the ceremony cannot
proceed. The emphasis that the evangelist places on ‘who’ and
‘you’ in the initial ‘who are you?’ question highlights the implicit
importance of identification.
Second, the successful categorization of ‘demon’ within the
interaction provides an external explanation for negative life
events. The link between identity and determination uncovered
by the question-and-answer structure (an example of what is
called an adjacency pair—see the next section on ‘Basic tools
of conversation analysis’) then allows all those present at the
ceremony to separate the participant from any negative
experiences of the self and give outside agency to that
adversity.
In other words, Rowan uses CA to uncover a conversational
strategy that allows this group to construct a social reality.
Through the question and answer pattern of identifying a
‘demon’ residing within a participant, the group are able to see
the participant as separate from and not responsible for their
actions—their behaviour is determined by something external
with a will of its own. All of this particular reality is achieved
through conversation.
T I PS AND SKI L L S 2 1 . 1
Basic notational symbols in conversation analysis
We:ll A colon indicates that the sound that occurs directly before the colon is
prolonged. More than one colon means further prolongation, for
example : : : :
you and knowing An underline indicates an emphasis in the speaker’s talk (in the extract
in Research in focus 21.1 this is shown with capital letters).
Turn-taking
Adjacency pairs
Preference organization
The second phase in an adjacency pair does not always follow the
first. It is always anticipated, but some responses are clearly
preferred over others, and sometimes people might fail to provide
the expected response. Conversation analysts call this process
preference organization. A simple example of this in action
occurs when someone issues an invitation. Accepting the invitation
does not usually require justification, but it is generally expected that
a refusal will need to be justified simply because an invitation has been
made. This means that acceptance is the preferred response and the
refusal with justification is the dispreferred response (otherwise why
make the invitation at all!). Conversation analysts try to discover the
preference structure of an adjacency pair by examining the responses
to an initial statement.
Repair mechanisms
59 R: athletics
60 H: I knew you’d say that if I’d Jeezuz I should have put some
62 J: huh- huh-huh
64 be weird?
65 R: yeh
O’Grady et al. point out that at turns 364 and 367, the
doctor dictates the same words the patient and her niece had
used. This, combined with the element of humour that he
injects, reinforces that he has been listening and wants to
reduce the patient’s embarrassment about gaining weight. At
the same time, the reference to looking ‘despairing’ introduces
an element of empathy that reinforces that he takes the
patient’s concerns very seriously. As the authors argue,
although trust was developed gradually in this encounter, the
‘co-constructed consultation letter’ is a ‘final means to
strengthen trust’ (O’Grady et al. 2014: 79).
This is meant as a very rough guide for the general research process
and there will be variations on this theme. But it is true enough to
say that there is a general process that again starts with the
identification of an appropriate research question; then you make a
selection of sources that will help you respond to those questions;
and then you analyse the discourse within that data—which you then
write up.
Producing ‘facts’
Rhetorical devices
Quantification rhetoric
One phrase that the authors highlight is: ‘those three curable types
are amongst the rarest cancers—they represent around 1 per cent of a
quarter of a million cases of cancers diagnosed each year. Most
deaths are caused by a small number of very common cancers’
(Potter and Wetherell 1994: 53). They particularly note the ‘1 per
cent of a quarter of a million’ used to describe the advances that have
been made in three relatively rare forms of cancer. The authors note
that this phrase incorporates two quantitative expressions that are
designed to legitimate the argument that cancer research is relatively
ineffective: the first is a relative expression (a percentage) and the
second is an absolute frequency (quarter of a million). This change in
the register of quantification is important because it allows the
programme-makers to emphasize the low cure levels (just 1 per cent)
compared with the large number of new cases of cancer. They could
have pointed to the absolute number of people who are cured, but
the impact would have been less. Also, the 1 per cent is not being
contrasted with the actual figure of 243,000 but with the more
approximate phrase ‘a quarter of a million’. Not only does the latter
citation allow the figure to increase by 7,000 in people’s minds; a
quarter of a million also sounds larger. This is a powerful use of
quantification rhetoric. However, it is worth noting that the
programme-makers were aware that they needed to be accountable
for the position they took. Potter and Wetherell’s (1994: 61)
transcript of notes from an editing session suggests that the producers
were keen to ensure that they provided a credible account and could
defend their inference about the 1 per cent. Discourse analysts are
often interested in this element of accountability, and finding it often
involves examining the details through which accounts are
constructed.
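The rhetorical force of switching between registers of quantification can be made concrete with a little arithmetic. The sketch below uses only the figures quoted above (243,000 actual cases, 'a quarter of a million', and 'around 1 per cent'); it simply computes the two framings side by side.

```python
# Two framings of the same cure statistics (figures as quoted by the
# programme discussed by Potter and Wetherell).
total_cases_actual = 243_000    # actual annual diagnoses cited
total_cases_rounded = 250_000   # 'a quarter of a million'
cure_share = 0.01               # 'around 1 per cent'

# Relative framing: a percentage, which sounds small.
print(f"{cure_share:.0%} of cases")

# Absolute framing: the same quantity expressed as people cured per year.
cured_rounded = int(total_cases_rounded * cure_share)
cured_actual = int(total_cases_actual * cure_share)
print(cured_rounded, cured_actual)  # the programme never gives these numbers

# Rounding up to 'a quarter of a million' inflates the base by 7,000 cases.
print(total_cases_rounded - total_cases_actual)
```

The point of the exercise is that '1 per cent' and 'roughly 2,500 people cured each year' describe the same quantity, but the programme-makers' choice of register makes the cure rate appear as small as possible against the largest plausible base.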
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
• mass-media documents;
This chapter will explore a fairly diverse set of data sources. This
includes letters, diaries, autobiographies, newspapers, magazines,
websites, blogs, and photos. The common factor is that these types of
documents have not been produced at the request of a social
researcher; they already exist ‘out there’, waiting to be assembled
and analysed. This type of data might sound very appealing,
especially if you are conducting student research, but it is important
to be aware that using documents for the purpose of research is not
necessarily any less time-consuming than using other forms of
qualitative data. Documents are also not necessarily any easier to
process or analyse for the purpose of data collection. In fact, the
opposite can be true: the search for documents relevant to your
research can be a frustrating and lengthy process, and once they have
been collated it often takes considerable interpretative skill to
analyse their meaning. Documents often exist and make sense in
relation to other documents, and these chains of meaning often need
attention in their own right.
22.2 The nature of ‘documents’
• Analysis. How can you record your data, and what kinds
of analysis will it allow you to conduct?
Whether you find relevant documents that are available to you can
be a question of how hard you look (see Learn from experience 22.1).
LEARN FROM EXPERIENCE 22.1
Discovering documents
Visual objects
There is a growing interest in the visual within social research, as we
highlighted in Chapter 16 and discussed in more depth in Chapter 18
(see Section 18.5 under ‘Visual ethnography’). The photograph is the
most obvious manifestation of this trend, in that photos are
becoming objects of interest in their own right, rather than being
thought of as incidental to the research process. But, as with the
personal documents we discussed above, we need to make a
distinction between visual objects that are created as part of
fieldwork and those that already exist (see Chapter 18). Our focus
here is on those visual objects, like photographs, that exist before the
researcher uses them for the purposes of social research. (We will
discuss TV and film as ‘documents’ in Section 22.5, ‘Mass-media
documents’.)
As Scott argues, this means not only that the photo must not be
taken at face value when used as a research source, but also that we
need to have considerable additional knowledge of the social context
in order to probe beneath the surface. Given this, are photos any use
to a researcher at all? We consider the role of photos within social
research in Thinking deeply 22.1.
THINKING DEEPLY 22.1
What roles can photos play in social research?
Aside from the fact that the two boys in top hats were from Harrow,
not Eton, they had dressed for a special party that the parents of one
of them were organizing following the cricket match. This was not
their standard school uniform. The boys were waiting for a car to
arrive to take them to the party and it was late, possibly accounting
for them apparently ignoring the ‘toughs’ and staring into the
distance—they were looking out for their transport. The smartly
dressed boys were not in fact ‘toffs’ (aristocrats)—the father of one of
them was a professional soldier. Nor were the three boys ‘toughs’.
They attended a local Church of England school and had been
hanging around at Lord’s trying to make some money by carrying
bags or opening car doors (and had been successful in that respect).
Also, as Jack notes, the trio are not unkempt—they are simply
wearing open-necked shirts and informal clothes that were typical of
working-class boys of their day. In contrast, the two Harrow pupils
were in special clothes rather than what was typical uniform at the
fee-paying schools of their day.
Websites
Nearly every organization now has a website through which it strives
to present a particular image and share a considerable amount of
information. This can make websites valuable sources of data for
social researchers. Sillince and Brown (2009) examined the websites
of all English and Welsh police constabularies between October 2005
and March 2006 to explore how the constabularies constructed
particular organizational identities. Their analysis revealed that
organizational identity was constructed through three core themes:
Social media
Social media are another type of virtual document that can be used
for the purposes of social research. Again, it is possible for social
media accounts to be either personal or official, depending on the
account that is posting the information. Unfortunately, one of the
major advantages of this data is also one of its main drawbacks. The
content can provide very rich data, but the vast quantity of material
available can also make it difficult to construct an appropriate and
manageable sample. However, we have already seen from the
examples of quantitative content analysis we discussed in Chapter 13
(for example Research in focus 13.2) that this kind of investigation is
certainly feasible. Another study we considered in Chapter 13 was
that conducted by Greaves et al. (2014), which is of interest here too
because the researchers used both quantitative and qualitative
content analysis. Greaves et al. were interested in the frequency of
tweets relating to acute NHS hospitals in England and also in their
content, especially in relation to care quality. They write:
We prospectively collected tweets aimed at NHS hospitals from the Twitter streaming
application-programming interface (API) for a year. We identified tweets aimed at
NHS hospitals by using ‘mentions’, where a tweet includes the ‘@username’ of a
Twitter user.
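The 'mentions' criterion that Greaves et al. describe can be sketched in a few lines. This is not their pipeline or the real Twitter streaming API: the handles and tweets below are invented for illustration, and the sketch simply shows how a tweet can be classified as 'aimed at' a tracked account by extracting its '@username' mentions.

```python
import re

# Hypothetical handles for two NHS hospital accounts (illustrative names,
# not the accounts actually tracked in the study).
hospital_handles = {"RoyalExampleNHS", "StElsewhereNHS"}

MENTION = re.compile(r"@(\w+)")

def aimed_at_hospital(tweet: str) -> bool:
    """True if the tweet 'mentions' (@username) any tracked hospital account."""
    return any(h in MENTION.findall(tweet) for h in hospital_handles)

tweets = [
    "Great care from @RoyalExampleNHS A&E last night",
    "Waiting times at @SomeOtherAccount are awful",
    "Thanks @StElsewhereNHS maternity team!",
]
relevant = [t for t in tweets if aimed_at_hospital(t)]
print(len(relevant))  # two of the three tweets mention a tracked hospital
```

In the actual study this filtering was applied prospectively to a year's worth of streamed tweets, after which the relevant subset was subjected to both quantitative and qualitative content analysis.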
All of this means that we need to recognize documents for what they
are—namely, texts or images created with distinctive purposes in
mind—and not as simply reflections of reality. So, if a researcher
wants to use documents to help them understand aspects of an
organization and its operations—in other words, to tell them
something about an underlying reality—they are likely to need to
strengthen their analysis by employing other sources of data
regarding that reality and the contexts within which the documents
are generated. These other sources will help the researcher develop a
contextual understanding of the documents and their significance.
We can see this with Tom’s study of Myra Hindley’s prison files
(Clark 2019—see Research in focus 22.2). As Tom notes: ‘Hindley’s
version of her life-history is not definitive, inevitable, or necessarily
falsifiable’ (Clark 2019: 13). The autobiographical letter had a life
history of its own, and that history needs to be understood in relation
to other documents that were generated by significant individuals,
the general administration of her sentence, and the wider penal
policy of the time.
Approaches to interpreting
documents
Two common approaches for analysing documents within a
qualitative or mixed methods strategy are qualitative content
analysis and semiotics, although it is worth highlighting that
discourse analysis is also often used in conjunction with
documents. We cover this approach in depth in Chapter 21, but
Research in focus 22.12 provides an example of how researchers can
use critical discourse analysis to interpret the kinds of document
we have considered in this chapter.
RESEARCH IN FOCUS 22.12
Using critical discourse analysis to interpret digital
media
Semiotics
Semiotics is generally known as the ‘science of signs’. It is an
approach to analysing symbols in everyday life and treats
phenomena as texts, so it can be used in relation to not only
documentary sources but all kinds of other data. Research in focus
22.15 is an example of a study from a semiotic perspective.
RESEARCH IN FOCUS 22.15
A semiotic analysis
Semiotics often aims to uncover the hidden meanings that lie within
texts (‘texts’ being very broadly defined). Let’s take as an example the
resumé or curriculum vitae (CV) that you might use to gain
employment. The typical CV contains features such as personal
details; education; previous and current jobs; and administrative
responsibilities and experience. We can treat the CV as a system of
interlocking signifiers that signify, at the level of denotative meaning,
a summary of the individual’s experience. This is its sign-function. At
the connotative level, however, it also serves as an indication of an
individual’s value, particularly in connection with their potential
employability. Each CV can be interpreted in many different ways
and is therefore polysemic, but there is a code that means that
certain attributes of CVs are seen as especially desirable, so these
elements have more universally attributed meanings. Job seekers
are well aware of this latter point, so will usually devise their CVs to
try to emphasize and amplify the desired qualities in accordance with
the particular job they are applying for.
The main strength of semiotics is the way that it invites the analyst to
try to see beyond and beneath the apparent ordinariness of everyday
life and its manifestations. However, although writers will aim to
provide a compelling explanation of the semiotics of the texts they
analysed, the analysis provided can sometimes feel a little arbitrary.
This impression is probably unfair to the approach, because the results
of a semiotic analysis are probably no more arbitrary than any
interpretation of documentary materials or other data. In fact,
proponents of semiotics could argue that a sense of arbitrariness in
interpretation is inevitable, given that the principle of polysemy lies
at the heart of semiotics.
It is always useful to make sure that you explicitly reflect upon and,
where appropriate, record your reflections on these issues when you
write up your research.
So, while documents are certainly not without limitations for the
purposes of qualitative research, and working with them requires
caution and some consideration of potential ethical concerns, they
provide the social researcher with some fantastic opportunities to
think creatively about data collection and analysis.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
Following the publication of their initial text, Glaser and Strauss had
something of a disagreement and began to develop different versions
of grounded theory. Glaser felt that the approach to grounded theory
that Strauss was promoting (most notably in Strauss 1987, and
Strauss and Corbin 1990) was too prescriptive and put too much
emphasis on the development of concepts rather than of theories
(Glaser 1992). However, Strauss’s writings became more prominent,
and his version is largely the one that is followed today. There is,
however, considerable controversy about what grounded theory
should consist of and the processes that it should follow, and other
approaches have subsequently been developed (see Bryant and
Charmaz 2019, for example).
Open coding: ‘the process of breaking Initial coding: ‘When researchers conduct
down, examining, comparing, initial coding …, they compare data with
conceptualizing and categorizing data’ data; stay close to and remain open to
(Strauss and Corbin 1990: 61). This exploring what they interpret is happening
process of coding produces concepts, in the data; construct and keep their codes
which are later grouped and turned into short, simple, precise and active; and move
categories. quickly but carefully through the data’
(Thornberg and Charmaz 2014: 156). This
often involves assigning codes to each line of
text.
Axial coding: ‘a set of procedures Focused coding: ‘As a result of doing initial
whereby data are put back together in coding, the researcher will eventually
new ways after open coding, by making “discover” the most significant or frequent
connections between categories’ initial codes that make the most analytical
(Strauss and Corbin 1990: 96). This is sense. In focused coding …, the researcher
done by linking codes to contexts, to uses these codes, identified or constructed as
consequences, to patterns of interaction, focused codes, to sift through large amounts
and to causes. of data’ (Thornberg and Charmaz 2014: 158).
‘Focused coding requires decisions about
which initial codes make the most analytic
sense to categorize your data incisively and
completely’ (Charmaz 2006: 57–8).
The Strauss and Corbin approach The Charmaz approach
Let’s consider the two approaches in turn. We can see that Strauss
and Corbin (1990) distinguish between three types of coding
practice: open coding, axial coding, and selective coding. Each
relates to a different point in the development of categories in
grounded theory. While the phase of open coding is relatively
exploratory, the idea of axial coding is sometimes seen as closing
off the dynamic nature of qualitative data analysis and has been
controversial because of this. Linvill and Warren (2020) provide a
useful example of the move from open coding to axial coding in their
analysis of tweets that appear to emanate from ‘troll factories’: large-
scale and organized attempts to spread disinformation through
bogus social media accounts. They looked at nearly 3 million tweets
associated with 1,858 Twitter handles (accounts) and write:
First, we engaged in a process of unrestricted open coding, examining, comparing, and
conceptualizing the content. We considered elements of tweets including the hashtags
employed by a handle, cultural references within tweets, as well as issues and
candidates for which a handle advocated … We conducted axial coding to identify
patterns and interpret emergent themes. Through axial coding, we identified links and
relationships between codes and, through both inductive and deductive reasoning,
built a frame to better understand the data. To verify the validity of results, near the
end of axial coding, peer debriefing was conducted.
So, although there are differences in the way these researchers advise
conducting the coding process in grounded theory, in both
approaches there is agreement that it involves a movement from
generating codes that stay close to the data to more selective and
theoretically elaborate ways of conceptualizing the phenomena being
studied.
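The movement from line-by-line initial codes towards a smaller set of focused codes can be illustrated with a minimal sketch. The interview fragments and codes below are invented, and promoting codes purely by frequency is a crude stand-in for the analytical judgement that Charmaz describes; the sketch only shows the mechanical shape of the move.

```python
from collections import Counter

# Hypothetical line-by-line initial codes assigned to interview fragments.
initial_codes = [
    ("I check the app every morning", "monitoring routine"),
    ("I can't stop refreshing the feed", "compulsive checking"),
    ("first thing I do is look at my phone", "monitoring routine"),
    ("I feel anxious if I haven't looked", "checking anxiety"),
    ("I look again before bed", "monitoring routine"),
]

# Focused coding: promote the most frequent initial codes (a frequency
# heuristic standing in for the researcher's analytic judgement) and use
# them to sift the data.
counts = Counter(code for _, code in initial_codes)
focused = {code for code, n in counts.items() if n >= 2}

retrieved = [frag for frag, code in initial_codes if code in focused]
print(focused)
print(len(retrieved))
```

In practice, of course, a focused code is chosen because it makes the most analytic sense, not simply because it recurs, but the sketch captures the shared logic: codes that stay close to the data are generated first, and a selective subset is then used to organize the material more theoretically.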
Related to this criticism is the fact that researchers are often required
to articulate the implications of their planned investigation before
data collection begins. For example, an academic making a bid for
research funding or a student writing a dissertation proposal is
usually required to demonstrate how their research will build upon
what is already known. Researchers are also likely to have to
demonstrate that they have a reasonably well defined research
question. Both of these things are generally discouraged in grounded
theory.
There are also some more practical difficulties with grounded theory.
The time it takes to transcribe recordings of interviews, for example,
can make it difficult for researchers to fulfil the requirements of both
constant comparison and saturation. The time and effort it would
take to achieve theoretical saturation would also make it unsuitable
for most time-limited student projects. It is also debatable whether
grounded theory really results in theory. As we have previously
suggested, it provides a rigorous approach for generating concepts,
but it is often difficult to see what theory, in the sense of an
explanation of something, a researcher is putting forward. In many
instances, researchers simply organize data into a series of related
themes rather than examining them to reveal their underlying
properties. Indeed, while the goal of grounded theory is the
generation of formal theory, most applications of the approach are
substantive in character. In other words, they are about the specific
social phenomenon being researched and not the broader range of
phenomena of which they are symptomatic (though, of course, they
may have such broader applicability).
The main steps to follow when coding qualitative data are as follows.
Husband: They were very good and some were better than others, but
that was down to the host of countries themselves really, as I suppose
each of the countries represented would have been responsible for
their own part, so that's nothing to do with Disney, I wouldn't have
thought. I mean some of the landmarks were hard to recognize for what
they were supposed to be, but some were very well done. Britain was
OK, but there was only a pub and a Welsh shop there really, whereas
some of the other pavilions, as I think they were called, were good
ambassadors for the countries they represented. China, for example,
had an excellent 360 degree film showing parts of China and I found
that very interesting.
[Codes: not critical; aesthetic critique; Disney content critique]

Husband: Well I did notice that there weren't many black people at
World Showcase, particularly the American Adventure. Now whether we
were there on an unusual day in that respect I don't know, but we saw
plenty of black Americans in the Magic Kingdom and other places, but
very few if any in that World Showcase. And there was certainly little
mention of black history in the American Adventure presentation, so
maybe they felt alienated by that, I don't know, but they were
noticeable by their absence.
[Codes: visitors' ethnicity; ethnicity critique]
You should remember when coding that any incident of data can,
and sometimes should, be coded in more than one way. Don’t worry
about generating what seem to be too many codes—at least, not in
the early stages of your analysis. Some codes will become and remain
useful and others will not. The important thing is to be as inventive
and imaginative as possible; you can focus on tidying things up later.
• fragmenting the data can mean that you lose valuable context;
• it can be difficult to reduce the codes you initially generate
to a manageable number.
TIPS AND SKILLS 23.2
Getting to a manageable number of codes
Figure 23.3 is a matrix that draws on the coded text in Table 23.2
and that would be used for representing the data on the theme
‘Ideological critique’. The four subthemes are presented, and the
researcher would copy brief excerpts from the data into the
appropriate cell. The data shown in two of the cells comes from
Interviewee 4, and you can see that it also specifies where these
quotes are located within the transcript (in response to Question 14).
Ritchie et al. (2003) advise that, when inserting material into cells,
the researcher should
FIGURE 23.3 The Framework approach to thematic analysis
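The structure of a Framework-style matrix can be sketched as a simple nested mapping: cases as rows, subthemes as columns, and cells holding brief excerpts together with their location in the transcript. The subtheme labels and excerpts below are invented for illustration, not taken from the study.

```python
# A minimal sketch of a Framework matrix for one theme. Rows are
# interviewees, columns are (hypothetical) subthemes, and each cell holds
# brief charted excerpts with their transcript location.
matrix = {
    "Interviewee 4": {
        "Class": ("'it felt like a place for the well-off' (Q14)",),
        "Ethnicity": ("'very few black visitors at the Showcase' (Q14)",),
        "Gender": (),
        "Nationality": (),
    },
}

# Charting makes cross-case retrieval straightforward: collect everything
# recorded under one subtheme across all interviewees.
excerpts = [
    cell
    for subthemes in matrix.values()
    for cell in subthemes["Ethnicity"]
]
print(excerpts)
```

The value of the matrix lies less in the data structure itself than in the discipline it imposes: every case is reviewed against every subtheme, and empty cells are as informative as full ones.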
Meta-ethnography
Meta-ethnography, introduced by Noblit and Hare in 1988, was one
of the first types of evidence synthesis developed specifically for
qualitative research, and it remains popular in healthcare research.
Its aim is to achieve an interpretative synthesis of qualitative
research (see Research in focus 23.8) and other secondary sources.
Like meta-analysis in quantitative research, we can use it to
synthesize and analyse information about a phenomenon that has
already been extensively studied. But this is where the similarity with
quantitative analysis ends, because meta-ethnography ‘refers not to
developing overarching generalizations but, rather, translations of
qualitative studies into one another’ (Noblit and Hare 1988: 25).
Noblit and Hare base their approach on the idea that all social
science explanation is comparative, involving the researcher in a
process of translating existing studies into their own worldview.
RESEARCH IN FOCUS 23.8
A meta-ethnography
Thematic synthesis
Thematic synthesis essentially applies thematic analysis, which we
discussed in Section 23.6, to existing studies in a particular domain.
Although thematic synthesis tends to be used for synthesizing
qualitative studies, it can include quantitative studies too (Kavanagh
et al. 2012). However, to simplify this discussion, we will focus here
on its use in qualitative synthesis.
• linking tools
• coding tools
• query tools
Evaluating CAQDAS
CAQDAS differs from quantitative data analysis software mainly in
terms of the types of data it can help to process and the forms of
analysis it can support. Some writers have commented that CAQDAS
does not and cannot help with decisions about the coding of textual
materials or about the interpretation of findings (Woods et al. 2016),
but this is not unique to qualitative data analysis software. In
quantitative research, the investigator sets out the crucial concepts
and ideas in advance rather than generating them out of their data.
Also, this comment seems to imply that the use of quantitative data
analysis software such as SPSS is purely mechanical, which is not
true: once the analyses have been conducted, the researcher still has
to interpret them. In fact, both the choice of variables and the
techniques of analysis are areas that require a considerable amount
of interpretative expertise. Both forms of software require creativity.
A number of issues with the use of CAQDAS have been raised over
the years, which we can summarize as follows.
With respect to the second point, it has also long been suggested that
the code-and-retrieve process that underpins many CAQDAS
platforms can result in the fragmentation of data (Weaver and
Atkinson 1994). As a result, the narrative flow of interview
transcripts and events recorded in field notes may be lost. This is
something that is a standard difficulty for qualitative researchers,
regardless of the software they use, but the problem when using
CAQDAS packages is that researchers might not be fully aware of just
how much the narratives have become fragmented in the code-and-
retrieve process. It is certainly the case that the process of coding text
into chunks that are then retrieved and put together into groups of
related fragments does risk decontextualizing data. It is very easy for
the coding process to produce collections of reorganized fragments.
An awareness of context is crucial for many qualitative researchers
so the prospect of losing this detail is likely to be problematic if left
unchecked.
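The fragmentation risk, and one way of softening it, can be sketched in a few lines. The transcript and codes below are invented; the point is simply that a retrieval function can return each coded segment together with its neighbouring lines, rather than as an isolated fragment.

```python
# Code-and-retrieve with a context window: a sketch of one way to keep
# some narrative context when retrieving coded segments (transcript lines
# and codes are invented).
transcript = [
    "We moved here in 2003.",
    "The neighbours never spoke to us at first.",
    "It took years before we felt accepted.",
    "Now my kids play with theirs.",
]
codes = {1: {"exclusion"}, 2: {"belonging"}, 3: {"belonging"}}  # line index -> codes

def retrieve(code, window=1):
    """Return each coded line with up to `window` lines either side."""
    hits = []
    for i, assigned in codes.items():
        if code in assigned:
            lo, hi = max(0, i - window), min(len(transcript), i + window + 1)
            hits.append(transcript[lo:hi])
    return hits

for chunk in retrieve("belonging"):
    print(chunk)
```

Most CAQDAS packages offer something analogous (retrieving a coded passage with its surrounding text), but the researcher still has to choose to look at that context; the software cannot decide how much of the narrative matters.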
Benefits of CAQDAS
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
In this chapter we use the term in the latter way—to refer to research
that involves both quantitative and qualitative methods. We explore
some of the reasons for and against using multiple methods together,
the theoretical and philosophical issues that this approach prompts
us to consider, and the forms that mixed methods research can take.
We conclude by looking at the characteristics of good mixed methods
research, which are important for understanding and evaluating
existing research but which we should also bear in mind when using,
or considering using, this strategy.
24.2 Why mix methods?
17. Other/unclear.
Triangulation
We first came across the idea of triangulation in Key concept 16.3.
When applied to mixed methods research, triangulation refers to the
practice of cross-checking results that we have gained using a
method associated with one research strategy against the results
we have gained using a method associated with the other research
strategy. It is an adaptation of the argument by such writers as Webb
et al. (1966) that we can increase confidence in the findings from a
study using a quantitative research strategy by using more than one
way of measuring a concept. Usually, researchers conducting a
triangulation exercise compare quantitative and qualitative findings.
McCrudden and McTigue (2019) did this in their study of high school
students’ belief bias (the tendency to believe things that fit with our
own values, prior beliefs, and prior knowledge) relating to climate
change. The authors present an integrated results matrix in which
they displayed quantitative and qualitative results side by side. They
summarize the separate strands and show how the quantitative and
qualitative findings support each other. For example, they show that
students in a survey who did not rate the strength of arguments
differently based on their own beliefs also exhibited this tendency in
the qualitative phase of the research, in that they were able to
independently assess the quality of the arguments presented.
Research in focus 24.1 provides two further examples of studies that
used mixed methods in order to employ a triangulation approach, as
well as for completeness (see the next section). A researcher might
plan a triangulation exercise from the start of a project, or they might
decide to conduct it only when the data have been collected.
RESEARCH IN FOCUS 24.1
Mixed methods for triangulation and completeness
Completeness
Often, a researcher undertakes mixed methods research because they
believe that using both quantitative and qualitative methods will give
them a more complete answer to their research question or set of
research questions. This rationale implies that the gaps left by
methods associated with one research strategy can be filled by
methods associated with the other strategy. A common example is
when researchers conducting an ethnography use structured
interviewing or a self-completion questionnaire to gather
more data because they cannot access everything they need to know
about through participant observation. So, they employ different
methods in order to gain a more complete empirical description of
what they are researching. Similarly, researchers might need to use
alternative methods because qualitative interviewing can leave some
questions unanswered (for example, systematic information about
the social backgrounds of people in a particular setting), or because
of the difficulty of gaining access to certain groups of people.
Sampling
Another common use of mixed methods is when researchers employ
quantitative strategies to select people who they then study
qualitatively (for example through unstructured or semi-
structured interviews). In Fenton et al.’s (1998) interesting study
of how social science research is reported in the British mass media,
the researchers used a (quantitative) content analysis of media
content as a source of data in its own right but also as a way for the
researchers to identify journalists who had reported relevant
research, who they could interview using a semi-structured approach
—in other words, they used their content analysis to help with
purposive sampling for qualitative interviewing. Another example
of mixed methods within the study is the way that the researchers
first conducted a survey of social scientists’ views about media
coverage and their practices, and then used the survey replies to
identify two groups of social scientists—those with particularly high
and those with low levels of media coverage of their research—who
they then interviewed using a semi-structured approach.
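The selection logic in the Fenton et al. design (using quantitative results to drive purposive sampling) can be sketched very simply. The coverage counts below are invented, and ranking by a single count is a deliberately crude stand-in for their survey-based grouping; the sketch only shows the mechanical step of turning quantitative data into interview groups.

```python
# Hypothetical media-coverage counts per social scientist, used to select
# extreme groups for semi-structured interviews (all data invented).
coverage = {"A": 14, "B": 0, "C": 7, "D": 1, "E": 22}

ranked = sorted(coverage, key=coverage.get)   # least to most coverage
low_group, high_group = ranked[:2], ranked[-2:]
print(low_group, high_group)
```

The quantitative strand here is not an end in itself: it identifies who sits at the extremes of the distribution, and the qualitative strand then explores why their experiences of media coverage differ.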
You will have noticed that up to this chapter, this book has been
structured around the distinction between quantitative and
qualitative research. This distinction is there mainly for historical
reasons, which are largely rooted in epistemological assumptions
(that is, ideas about what constitutes valid knowledge) of what is
involved in the different methodological traditions. As we have seen,
there are a huge number of research methods, and thinking of them
as belonging to two broad groups can also be a useful way of
organizing the different approaches, even though it is a crude way of
grouping them. However, the distinction between quantitative and
qualitative methods is not as robust as it may at first appear. More
worryingly, this categorization tends to set one group ‘against’ the
other, and this can be highly problematic. Research methods, and
approaches to social research in general, are much more flexible than
the qualitative/quantitative distinction suggests. Indeed, while there
are many differences between the qualitative and quantitative
research strategies, research can be—and increasingly is—conducted
that goes beyond this distinction. Jodie reflects on the
quantitative/qualitative ‘divide’ in Learn from experience 24.2.
LEARN FROM EXPERIENCE 24.2
Comparing and contrasting quantitative and
qualitative methods
• numbers vs words;
• the qualitative characteristics in quantitative research;
• the quantitative characteristics of qualitative research, that is,
quasi-quantification; and
• the ways in which some digital data, such as social media and
other related Big Data, are neither purely quantitative nor
qualitative and are increasingly seen as ‘hybrid’.
Numbers vs words
Quasi-quantification
Another angle from which to consider the ‘divide’ is the fact that
numbers are not completely absent from qualitative research.
Qualitative researchers engage in what we could call ‘quasi-
quantification’ through the use of terms like ‘many’, ‘frequently’,
‘rarely’, ‘often’, and ‘some’. If they are making these kinds of
references to quantity, the qualitative researcher will have some idea
of the relative frequency of the phenomena being referred to.
However, as expressions of quantities they are imprecise, and it is
often difficult to work out why they are used at all. One alternative is
for qualitative researchers to engage in a limited amount of
quantification when it is appropriate, for example when an
expression of quantity would strengthen their argument. Silverman
(1984; 1985) quantified some of the data he gained from observing
doctor–patient interactions in the UK National Health Service and
private oncology clinics in order to bring out the differences between
the two types of clinic. This quantification allowed him to show that
patients in private clinics could exert more influence over what went
on in the consultations, and he argues that some quantification of
findings from qualitative research can help to uncover the generality
of the phenomena being described. However, it is important to note
Silverman’s warning that such quantification should reflect research
participants’ own ways of understanding their social world (in
keeping with the preoccupations of qualitative research—see Section
16.3).
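Silverman-style limited quantification amounts to replacing a quasi-quantifier ('patients in private clinics often steered the consultation') with a count. The consultation data below are invented, not his findings; the sketch just shows the shape of the exercise.

```python
from collections import Counter

# Hypothetical coded consultations: (setting, whether the patient steered
# the agenda). Invented data, illustrating Silverman-style quantification.
consultations = [
    ("NHS", False), ("NHS", False), ("NHS", True), ("NHS", False),
    ("private", True), ("private", True), ("private", False),
]

steered = Counter(s for s, patient_led in consultations if patient_led)
totals = Counter(s for s, _ in consultations)

for setting in totals:
    share = steered[setting] / totals[setting]
    print(f"{setting}: {steered[setting]}/{totals[setting]} ({share:.0%})")
```

A statement like 'private patients steered 2 of 7 observed consultations, NHS patients 1 of 4' is more defensible than 'often', provided (following Silverman's warning) that the category being counted reflects participants' own understandings rather than an imposed one.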
When you think about what data actually are, it becomes even more
difficult to always divide them neatly according to whether they are
only ‘qualitative’ or only ‘quantitative’. To an archaeologist, pieces of
pottery become data—items to be classified, measured, and recorded
carefully for further exploration. For us as social scientists, the
digitization of information has radically transformed the kinds of
data we can use to better understand the social world. Increasingly,
in a world where Big Data and algorithms pervade all areas of
everyday life, social science itself is changing. Instead of thinking of
data as ‘quantitative’ or ‘qualitative’, in the current digital era we can
think of data—and Big Data in particular—in terms of the 'four Vs':
their volume, variety, veracity, and velocity. YouTube and social media
data are streamed in ‘real time’, involve numbers that are
automatically logged in terms of their metrics (for example the
number of ‘likes’), and also involve words (for example comments, or
the number of words in a tweet on Twitter). As the world changes, so
too do the data that constitute it. On the one hand, we can think of
these new kinds of digital data as ‘hybrid’ forms of data—data that
are neither purely qualitative nor purely quantitative. On the other
hand, the methods that people use to collect them and make sense of
them are increasingly mathematical, computational, and
quantitative. These new kinds of ‘hybrid’ data arguably mess up the
quant/qual divide all over again, albeit in a contemporary context.
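The 'hybrid' character of such data is easy to see if we sketch what a single social media post looks like as a data record. This is a hypothetical example (the field names and values are invented for illustration, not taken from any platform's actual API): the same data point carries automatically logged numbers and interpretable text side by side.

```python
# A hypothetical social media post record: quantitative metrics and
# qualitative text coexist in one data point.
post = {
    "text": "Methods textbooks are surprisingly readable",  # qualitative
    "likes": 42,            # quantitative metric, logged automatically
    "retweets": 7,          # quantitative metric
    "comments": [           # qualitative content, but also countable
        "Agreed!",
        "Which one are you reading?",
    ],
}

# Quantitative angle: combine the automatically logged metrics.
engagement = post["likes"] + post["retweets"] + len(post["comments"])

# Qualitative angle: the words invite interpretation, yet they too
# yield numbers (for example, a word count).
word_count = len(post["text"].split())

print(engagement)   # 51
print(word_count)   # 5
```

Whether we treat this record quantitatively or qualitatively is a choice made in the analysis, not a property of the data themselves, which is precisely why such data sit awkwardly on either side of the divide.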
Source: Reproduced from Creswell, J. and Plano Clark, V. (2018). Designing and
Conducting Mixed Methods Research, 3rd edn. London: SAGE.
We will now list five factors you will want to take into account when
evaluating or considering mixed methods research. The reflections
we list are influenced by writings about indicators of quality in mixed
methods research (for example Bryman et al. 2008; Bryman 2014;
O’Cathain et al. 2008), but they can also act as practical guidance to
students and researchers thinking about doing mixed methods
research for the first time. This is not an exhaustive list of all possible
quality criteria or considerations that can or have been applied to
mixed methods research (such as the list in O’Cathain 2010), but
rather a summary of the main considerations that come up in
discussions of this approach. It is important to note that each item
on the list interrelates with the others. Overall, the decision to
conduct a mixed methods project and your ability to deliver high
quality research depends on all of these issues and how they work
together as a whole. For this reason, we have not ranked the five
items in order of importance; they are simply a set of interdependent
issues that you should consider if you are thinking about doing a
mixed methods project. If you do decide to use mixed methods, then
you will need to explicitly address each of these issues when you
write up your research. The issues are:
Further considerations
In addition to the five considerations we have just looked at, you
might find it useful to refer to the A–Z list of questions in Tips and
skills 24.1 as a checklist for evaluating existing mixed methods
research and/or devising your own mixed methods study (Sammons
and Davis 2017). None of the points we have raised so far are
intended to put you off using mixed methods. Instead, they are there
to help you think carefully about whether the approach is right for
you and your project. Each project is unique and will require a
bespoke approach. This is the case for all research, not just mixed
methods projects. The trick is to weigh up the pros and cons and
make careful, realistic assessments, rather than becoming set on a
particular approach and forging ahead without thinking through the
potential implications. The other important thing—and again, this
applies to all research—is to make sure that all this careful thought is
evident in your write-up. This means stating the type of mixed
methods design you used (see Section 24.4 and Figure 24.2) and
providing a sufficiently detailed account of aspects such as sampling,
the design and administration of your research instruments, and the
analysis of data for both the quantitative and the qualitative
components of the study. (Often researchers provide more detail on
one component than the other, or too little detail on both.) You should explain and
clearly justify the decisions you made in connection with these
issues. We discuss the particular considerations for writing up a
mixed methods study—including making sure that your overall
rationale for mixing methods is clear—in Chapter 25, where we will
walk through an example.
TIPS AND SKILLS 24.1
A–Z for evaluating mixed methods studies
• Start early
• Be persuasive
• Maintain an argument
One very general point that we want to emphasize is that if you enjoy
your research, this can have a big impact on the quality and
persuasiveness of your write-up, as well as helping you stay
motivated during this phase of the project. Our panellists discuss this
in Learn from experience 25.1.
LEARN FROM EXPERIENCE 25.1
Enjoying your research
Start early
Students sometimes underestimate the amount of time it will take to
write up their research and delay starting this phase until after data
collection. Clearly, you cannot write up your findings until you know
what they are, but there are good reasons for starting to think about
writing up early on, and even drafting parts of the write-up. For
example, although your literature review can be iterative and you
will probably revise it throughout your project, it is a really good idea
to produce an early draft while you are doing your initial reading.
You might also be able to produce a draft of the methodology section,
as you will have decided what methods you are going to use quite
early in the project, before you begin to gather data.
Be persuasive
This point is crucial. Writing up your research is not simply a matter
of reporting your findings and drawing some conclusions. Your
write-up will contain many other features, such as references to the
literature on which you drew, and explanations of how you did your
research and conducted your analysis. But above all, you must be
persuasive. Simply saying ‘This is what I found; isn’t it interesting?’
is not enough. You must convince your readers that your findings
and conclusion are significant and that they are plausible, and this
cannot be achieved with lifeless, uncertain-sounding writing. You
can use some of the same rhetorical strategies (see Key concept
25.1) employed by social researchers to persuade readers of the value
of their work, and if you start writing early, you can develop your
style throughout your project.
KEY CONCEPT 25.1
What is rhetoric?
Maintain an argument
A research write-up should be organized around an argument that
links all aspects of the research process and runs through all its
stages: problem formulation; literature review; choice and
implementation of research methods; discussion; and conclusion
(see Figure 25.1 for an illustration of this). Too often, students make
a series of points without explaining what each point contributes to
the overall argument that they are trying to present. If you do this,
you are very vulnerable to the ‘So what?’ question.
FIGURE 25.1 The role of an argument in a dissertation
An argument will naturally emerge if you think about what your
claim to knowledge is and try to organize your writing to support and
enhance that claim. Consider the story you want to tell about your
research and your findings, and the key point or message that you
want your readers to take away when they finish reading your work.
Try to avoid tangents and irrelevant material that may mean that
your readers lose track of your argument. One way to give readers a
clearer sense of an argument is to provide some signposts of where
you intend to go in your dissertation and why. (See ‘Signpost your
work’ earlier in this section.)
• ‘it has been shown that’ rather than ‘I have shown that’
• ‘a thematic analysis was carried out’ rather than ‘I carried out
a thematic analysis’
Get feedback
In Chapter 4 we recommended that you make the most of your
supervisor when planning your research project (see Section 4.2
under ‘Working with your supervisor’). The same ideas apply in this
closing phase of the project: try to get as much feedback on your
writing as possible and make sure that you respond positively to the
points people make about your drafts. Your supervisor is likely to be
your main source of feedback, but what supervisors are allowed to
comment on varies between institutions. Give your supervisor the
maximum amount of draft content that regulations will allow. Give
them plenty of time to provide feedback: other students will want
your supervisor to comment on their work too, and if the supervisor
feels rushed, their comments may be less helpful. You could also ask
others on your degree programme to read your drafts and comment
on them, including to help you with proofreading (see Tips and skills
25.1). They may ask you to do the same. Your friends’ comments may
be very useful, but your supervisor’s comments are the main ones
you should seek out.
TIPS AND SKILLS 25.1
Proofreading your dissertation
Title page
Your write-up should always begin with a title page, and you should
check your institution’s rules about what information to enter here.
Acknowledgements
The acknowledgements give you the opportunity to thank various
people for the help they gave you in the course of your research
project. For example, you might want to acknowledge gatekeepers
who gave you access to an organization (see Chapter 6), people who
have read your drafts and provided you with feedback, and/or your
supervisor for their advice. The acknowledgements section is
generally quite short, but it serves an important purpose in thanking
people for their support.
List of contents
Your institution may have recommendations or rules about the form
that the list of contents should take—for example, its level of detail
regarding chapter headings and titles for subsections. This guidance
might also tell you how to present lists of tables and figures. If you
look at the start of this book you’ll see that we present two lists of
contents—one brief and one detailed. You will not have to present
two versions in your work, but you can see that when all the
subheadings are included in the detailed version the list of contents
can become very long. If you are not sure how to proceed, then check
with your supervisor.
Abstract
An abstract is a brief summary of your dissertation. Not all
institutions require this component, so do check whether it is
definitely required for your write-up. Journal articles usually have
abstracts, so if you do need to provide one there are many examples
that you can draw on for guidance.
Introduction
The introduction is where you set out the context of your research
and get the reader’s attention. A good introduction will set the tone
for the rest of the work. You should explain what you are writing
about and why it is important, and you should outline your research
questions, aims, and/or objectives. In explaining these, you can
express them in a fairly open-ended way if your dissertation is based
on qualitative research, but you will need to identify them clearly.
Describing your research aims as totally open-ended can mean that
your write-up lacks focus. You might also indicate—in quite general
terms—the theoretical approach or perspective you will be taking and
why.
When explaining what you are writing about and why, it is not
enough to say that you have a long-standing personal interest in the
topic: you need to show its importance within the field. Quoting
other sources can help to demonstrate the importance of an issue.
For example, a project on the gender pay gap might quote recent
studies that demonstrate the difference in hourly wages, or a project
on hate crime might quote police figures showing a recent rise in this
type of offence.
Literature review
We provide detailed advice on how to go about writing your
literature review in Chapter 5. While you will have listed your
research questions or aims in your introduction, this chapter of your
dissertation is where you demonstrate their relationship with the
existing literature. One important thing to consider is that you
should avoid using long quotes from other sources where possible.
Try to paraphrase the work of others or, if you’re referencing a more
general point, use multiple references to support your argument. If
you’re unsure of how to do this, have a look at the literature reviews
we discuss later, in Section 25.4, to see how other authors make the
literature work for them.
Research methods
We use the term ‘research methods’ to refer to a chapter of the write-
up in which you need to outline a number of issues and elements:
• your research design
• problems of non-response
• note-taking
When discussing each of these issues, you should describe and justify
the choices you made, such as why you used an online questionnaire
rather than a structured interview approach, why you focused
upon a particular population for sampling purposes, or why a
particular case was appropriate for your case study. You will
usually need to provide a copy of your data-collection tool in an
appendix, but in the research methods section of your write-up you
should comment on things such as your style of questioning or
observation and why you asked the things you did.
This chapter is also where you will discuss the ethical issues around
your project. When discussing ethics you should do more than just
report that your project received ethical clearance. You should
highlight any ethical difficulties or complexities, such as whether
your research participants are from a vulnerable population or
whether there is any risk of harm to either your participants or
yourself. If your study required informed consent from
participants, then you should explain how you obtained this—you
may even include your informed consent protocol in an appendix
(discussed later in this section). You may also discuss any
appropriate data security measures that you took, such as using
encrypted devices and/or anonymizing participants’ responses.
You should also remember that methodological choices are informed
decisions that you need to justify through reference to the academic
literature. We have read countless student projects in which the
literature reviews are very effective and well referenced, but the
methods chapters barely reference any literature. This can be a
significant weakness in student work because it demonstrates a lack
of consideration when designing a study. In this textbook we cite
many writers on the advantages and disadvantages of particular
research designs and data-collection approaches. Remember to look
up their work and reference it!
Results
It is in the results chapter of your write-up that you will present most
of your findings. If you intend to have a separate ‘Discussion’
chapter, you will probably want to present the findings in the results
section quite simply, with little commentary on their implications or
reference to literature. However, if you will not have a separate
‘Discussion’ chapter, you will need to use the results section not only
to set out your findings but also to reflect on their significance: for
your own research questions/aims, and in relation to the existing
literature in your area of study. Our panellists reflect on their
decisions to have separate or combined results and discussions
chapters in Learn from experience 25.4.
LEARN FROM EXPERIENCE 25.4
Results and discussion—integrated or separate?
It is worth noting that the first point above (the importance of not
including all your results) is particularly relevant for qualitative
research. As one experienced qualitative researcher has put it: ‘The
major problem we face in qualitative inquiry is not to get data, but to
get rid of it!’ (Wolcott 1990a: 18). Wolcott goes on to say that the
‘critical task in qualitative research is not to accumulate all the data
you can, but to “can” [get rid of] most of the data you accumulate’
(Wolcott 1990a: 35). You have to accept that a lot of the rich data you
obtain will have to be discarded. If you do not do this, you will
probably lose the sense of an argument in your work and your
account of your findings might seem too descriptive and not
sufficiently analytical. This is why it is important to use research
questions or aims as a focus and to relate your findings back to them.
For the same reason, it is also important to keep in mind the
theoretical ideas and the literature that have influenced your
thinking and will also have shaped your research questions.
Conclusion
Like the conclusion of an essay, the conclusion of your write-up is
where you should draw together the main ideas from your findings.
This section should cover the following elements.
Appendices
In your appendices you might want to include such things as your
questionnaire, coding frame, observation schedule, letters you sent
to members of the sample, and letters you sent to and received from
gatekeepers if you had to secure the cooperation of an organization.
You might also include data that you have summarized in your
results chapter so that a reader can review it if they want to.
References
The references section should include all references cited in the text.
For the format of this section you should follow the guidance
provided by your department. The most common format is usually a
variation of the Harvard method (see Chapter 5), such as the one
used in this book.
Like all journal articles, this one has an abstract. Its main content is
structured as follows.
1. Introduction
2. Context
5. Findings
6. Discussion
7. Conclusion
Introduction
The opening three sentences of the introduction attempt to grab our
attention, to give a clear indication of the article’s focus, and to
highlight the gap in knowledge that it is going to address. This is
what the authors write:
The recent surge in social media uptake and the programmatic availability of vast
amounts of ‘public’ online interactional data to researchers have created fundamental
methodological and technical challenges and opportunities for social science. These
challenges have been discussed methodologically, conceptually and technically (see
Burnap et al., 2015b; Edwards et al., 2013; Ruppert et al., 2013; Savage and Burrows,
2009; Sloan et al., 2013; Williams et al., 2013, 2017). However, there is an additional
dimension that has received limited engagement in the sociology literature: the
challenge of ethics (see Beninger et al., 2014; Metcalf and Crawford, 2016; Townsend
and Wallace, 2016; Williams, 2015).
In just over 100 words, the authors have set out what the article is
about and its significance. Let’s look at what each sentence achieves.
So, by the end of the third sentence, the reader knows what the
article is about and why the authors feel that their work will make a
valuable contribution to the literature on this subject. The
introduction closes by telling us exactly what the paper is going to do
(signposting) and the argument that is being made:
This article presents an analysis of Twitter users’ perceptions of research conducted in
three settings (university, government and commercial), focusing on expectations of
informed consent and the provision [of] anonymity in publishing user content. The
central arguments of the article are that ethical considerations in social media
research must take account of users’ expectations, the effect of context collapse and
online disinhibition on the behaviours of users, and the functioning of algorithms in
generating potentially sensitive personal information.
The clarity of purpose is essential here because the paper does not
set out specific research questions; it just states its intentions (aims)
and what it is going to investigate. In our walk-through of the main
components of a social science write-up we noted that the
introduction should outline the research questions or aims (see
under ‘Introduction’), and you will often come across articles like
this one in which no specific research question is stated and the
authors instead set out what the paper is going to do. As we have said, the
Williams et al. study and the other examples we consider in this
section are not presented as ideals, and it is worth reflecting on
whether a specific research question (rather than what could be
called a research ‘aim’ or ‘objective’) would have made the article
clearer. Thinking deeply 25.1 explores this in more depth.
THINKING DEEPLY 25.1
Where are the research questions?
RQ1) How concerned are Twitter users about their data being
used by researchers?
Context
One thing to note particularly in this section is the way in which the
authors use the literature to support their argument. Sometimes they
make an important point that is supported by multiple sources, for
example ‘The impact of the increase in social interaction via the
machine interface on sociality has been discussed for some time
(Beer, 2009; Lash, 2001, 2007)’ (Williams et al. 2017b: 1151). At
other times they focus on the work of a single study, for example
‘Tinati et al. (2014) propose that social media in general, and
specifically Twitter, offer potential for exploring empirical work that
begins to unpack new social relations that are orientated around
digital subjects and objects’ (Williams et al. 2017b: 1151). Whichever
approach they use, you will notice that they are making the
literature work for them rather than being led by it. In other words,
they are not simply summarizing what other people have written—
they are using the work of others to advance their argument in a
clear and persuasive way.
When you look at this section you can see that its purpose is to
summarize the current ethical guidance in the area of social media
research, consider the legal implications, and review the academic
literature in this area. In this section the authors identify four key
ethical areas that are relevant for this study:
• the ethical standards of learned societies;
• the legal considerations in relation to data protection (now
GDPR);
The authors are setting the scene and establishing the current
thinking in each of these domains. Where appropriate, they are also
identifying gaps in current knowledge—notably in the final section,
where they state ‘To date no academic research has statistically
modelled the predictors of the views of users towards the use of their
Twitter posts in various settings’ (Williams et al. 2017b: 1154).
The authors choose to split this section into three parts. The first
part, ‘Data and Modelling’, discusses how the researchers collected
their data, how they identified their sample, and what the impact of
these methods might be. It discusses the use of non-probability
sampling and justifies this choice with reference to the
methodological literature. It also contains some technical details of
the statistical modelling the authors used. The key thing to note is
that the methodological considerations are all transparent and
justified. The sections on ‘Dependent Measures’ and ‘Independent
Measures’ list the questions that were put to respondents and what
the response options were (the variables). The authors use a table
(Table 1) to clearly summarize this information, which is a common
way of presenting quantitative data in articles. It is also very efficient
and easy to read—imagine if you had to include all of that
information in written form and how much space it would take up.
Findings
Although the paper might seem quite technical and there are a lot of
results, this is where the authors pull out the most interesting
findings. Rather than simply reporting what is in the table, the
authors interpret the data for readers in line with their argument.
For example, in Table 2 they look at how certain characteristics
predict levels of concern about using Twitter data, comparing
responses for university, government, and commercial settings. This
is a complicated table that includes multiple statistical models, but
the authors make the content accessible to non-technical readers by
interpreting the results, for example: ‘Lesbian, gay and bisexual
(LGB) respondents were more likely to express concern over their
Twitter posts being used in government … and commercial settings
… compared to heterosexual respondents’ (Williams et al. 2017b:
1157).
Discussion
The authors then link this to a similar finding using data from a
Eurobarometer survey (a series of public opinion surveys that are
regularly conducted in European Union member states) published in
2011. The authors use this to strengthen the findings of their study.
In the second paragraph the authors point out that, although Twitter
data is considered to be in the public sphere and therefore available
as a resource for research, they have found that users of Twitter have
a very different view of how their data should be treated. Once this
problem has been stated, the authors move on to link their findings
with existing literature and to explore the issue in more depth. The
final part of the discussion provides some recommendations for
researchers and ethics committees regarding the use of Twitter data,
as well as a very clear framework (presented as a flowchart, which we
include in Chapter 6—see Figure 6.1) to help with decisions around
reproducing tweets in published work.
Conclusion
Lessons
3. Methodology
4. Findings
5. Discussion
6. Conclusion
The structure is not dissimilar to the one used for the Williams et al.
(2017b) study and it is one that is fairly typical for an empirical
paper:
Introduction → Literature review → Research design/methods → Results →
Discussion → Conclusions
Introduction
Pickup Basketball
Findings
For all of the three themes there is considerable use of passages from
both field notes and the interview transcripts. For example, the
subsection on community spirit uses the words of Lamar, ‘an African
American freshman [first-year student] at University B’:
You just have to realize where you are, at the end of the day. Nobody wants to lose,
[but] some people out here, they’re on scholarships and you don’t want to lose those
types of things over a pickup game. It doesn’t really mean anything.
Thinking deeply 25.2 looks in greater detail into the use of verbatim
quotations in social science publications.
THINKING DEEPLY 25.2
How verbatim quotations are used in write-ups
Discussion
Conclusion
In this section, the author returns to many of the ideas and themes
that informed their research rationale. For example, at one point
they assess the implications of their findings for some of the main
concepts that drove the investigation, such as the notion of a
reflexive quasi-subject:
In university pickup basketball, my research reveals a space that is racially diverse,
almost exclusively male, unsupervised, and adversarial by design, yet ordered by an
inclusive brand of masculinity. It is unclear whether the players are conscious of their
gender expression, or if instead they are regulating their conduct in service of other
concerns like school spirit or the desire to avoid costly discipline. It is also unclear
whether these norms might be found in pickup basketball settings that lack the formal
structure and community of a university. Regardless, to scholars advocating a more
nuanced and tolerant version of American masculinity, these findings should be
encouraging.
In their final paragraph the author makes clear what they regard as
the principal contribution of their research, which revolves around
the notion that ‘dominance-based machismo may eventually lose its
grip on gender hegemony’ (Rogers 2019: 745)—and that the study
provides further evidence of that trend.
Lessons
Let’s consider the lessons we can learn from this article, and other
existing articles, about writing up qualitative research.
The second feature implies that a good mixed methods article will be
greater than the sum of its parts—in other words, that the elements
will be more effective when combined and interacting with each
other than they would be if used separately. This addresses a
tendency, identified by some writers (e.g. Bryman 2007;
O’Cathain et al. 2007), for mixed methods researchers not to
link their two sets of findings, which means that they cannot get the
maximum value from the research strategy. As Creswell and
Tashakkori put it:
The expectation is that, by the end of the manuscript, conclusions gleaned from the
two strands are integrated to provide a fuller understanding of the phenomenon under
study. Integration might be in the form of comparing, contrasting, building on, or
embedding one type of conclusion with the other.
(2007: 108)
Creswell and Plano Clark (2018: 275–6) provide some more useful
advice for writing up mixed methods research, especially in relation
to its structure. They suggest that a mixed methods journal article
should be structured along the following lines.
• Introduction. This could include a statement of the research
problem or issue; an examination of the literature on the
problem/issue; an examination of the problems with the
existing literature, which might include indicating why a
mixed methods approach would be beneficial (perhaps
because much of the previous research is based mainly on just
quantitative or qualitative research); and the specific research
questions.
3. Theoretical Framework
4. Aim
5. Method and Materials
6. Results
7. Discussion
8. Conclusion
Introduction
By the end of this section we know quite a lot about the research, and
this has been communicated in a very succinct and clear way.
Previous Research
In the ‘Previous Research’ section the authors review the existing
literature and locate their study within a wider context. They begin
by outlining the current understanding around the prevalence and
nature of cancer in young adults, moving swiftly on to the issue faced
by survivors. In the second paragraph the authors review the state of
research on cancer rehabilitation, and in the third paragraph they
criticize previous evaluation programmes (for example those using
randomized control trials) and argue that mixed methods studies
could be valuable in this area. Justifying the need for their study,
they first identify potential for mixed methods studies in this area:
the mixed methods approach is viewed as an upcoming approach for evaluating
complex interventions, including evaluation of both processes and several outcome
responses for capturing effects (Curry, Nembhard, & Bradley, 2009; Kroll & Morris,
2009).
They then point out that few studies of this kind have looked at
cancer rehabilitation (justifying this statement with reference to two
other studies). This highlights the need for more work in this area.
Rather cleverly, the authors then go on to discuss three mixed
methods studies in a closely related area to highlight the potential for
this method. They draw this conclusion:
Supported by these studies, mixed methods seem to corroborate results, capture the
complexity of the phenomenon studied, and [enrich] the interpretation of one type of
result by using another … type (Creswell & Plano Clark, 2011; Kroll & Morris, 2009;
Plano Clark et al., 2014).
They then point out that they could not find any mixed methods
studies that evaluated complex cancer rehabilitation interventions.
In this way, they have demonstrated the power and potential of
mixed methods in this area while still maintaining that there is a gap
to be filled by their research.
Theoretical Framework
This is a short section in which the authors clarify how they are
defining quality of life (QOL). It is very important to make sure that
readers understand what the researcher means by the conceptual
terms they use. Even if the reader disagrees with the definition, at
least the concept has been properly defined.
Aim
Short subsections can be quite effective when you want to grab the
attention of the reader. Here, the authors clearly state the aim of the
study and set out their research question:
The overall aim of this study is to develop an enriched understanding of both the
rehabilitation process and outcomes in evaluating a complex rehabilitation program
for YACS using a longitudinal mixed methods approach. The study addresses the
following research question: Will participation in a complex rehabilitation program
tailored for YACS improve their QOL outcomes and how do they describe their
rehabilitation process over time?
Results
The quantitative results in Table 4 are very detailed, but you don’t
need to understand the intricacies of the analysis to appreciate the
clear way in which the authors have written this part up. The first
subsection, on ‘Baseline Results’, begins with a summary of the
quantitative analysis, highlighting the findings that the
authors want to draw our attention to; then they link these to the
relevant qualitative data using the four themes.
Discussion
The authors situate their study within the wider academic literature,
highlighting where their findings are aligned to previous work and
what their study adds. The final section makes some important
observations about the difficulty of evaluating complex interventions
using a longitudinal mixed methods approach and the issue of
funding. The authors reflect on key issues, such as the implications of
sample size, how generalizable their findings might be, and how to
balance objectivity and active involvement. On this latter point the
authors discuss the strategies they used to try to achieve this balance.
They finish with a list of factors that demonstrate the feasibility and
advantages of studying this topic using their chosen mixed methods
approach.
Conclusion
This paragraph restates the mixed methods nature of the study and
the insights that this strategy offered, using the
article as a platform to call for further research in this area.
Lessons
Let’s consider the key lessons to be learned from this article and
other articles reporting on mixed methods research.
Before you move on, use the flashcard glossary to help you
recall the key terms we have introduced in this chapter.
References
Abawi, O., Welling, M. S., van den Eynde, E., van Rossum, E. F. C.,
Halberstadt, J., van den Akker, E. L. T., and van der Voorn, B.
(2020). ‘COVID-19 Related Anxiety in Children and Adolescents with
Severe Obesity: A Mixed-Methods Study’, Clinical Obesity, 10(6).
Abbas, A., Ashwin, P., and McLean, M. (2016). ‘The Influence of
Curricula Content on English Sociology Students’ Transformations:
The Case of Feminist Knowledge’, Teaching in Higher Education,
21(4): 442–56.
Abramova, O., Baumann, A., Krasnova, H., and Lessmann, S. (2017).
‘To Phub or Not to Phub: Understanding Off-Task Smartphone
Usage and Its Consequences in the Academic Environment’,
Proceedings of the 25th European Conference on Information
Systems (ECIS), Guimarães, Portugal, 5–10 June.
Abrams, K. M., Wang, Z., Song, Y. J., and Galindo-Gonzalez, S.
(2015). ‘Data Richness Trade-Offs Between Face-to-Face, Online
Audiovisual, and Online Text-Only Focus Groups’, Social Science
Computer Review, 33(1): 80–96.
Ackroyd, S. (2009). ‘Research Designs for Realist Research’, in D.
Buchanan and A. Bryman (eds), Handbook of Organizational
Research Methods. London: Sage.
Adler, P. A. (1985). Wheeling and Dealing: An Ethnography of an
Upper-Level Drug Dealing and Smuggling Community. New York:
Columbia University Press.
Adler, P. A., and Adler, P. (1987). Membership Roles in Field
Research. Sage University Paper Series on Qualitative Research
Methods, 6. Newbury Park, CA: Sage.
Adler, P. A., and Adler, P. (2008). ‘Of Rhetoric and Representation:
The Four Faces of Ethnography’, Sociological Quarterly, 49: 1–30.
Adler, P. A., and Adler, P. (2009). ‘Using a Gestalt Perspective to
Analyze Children’s Worlds’, in A. J. Puddephatt, W. Shaffir, and S.
W. Kleinknecht (eds), Ethnographies Revisited: Constructing
Theory in the Field. London: Routledge.
Adler, P. A., and Adler, P. (2012). Contribution to S. E. Baker and R.
Edwards (eds), How Many Qualitative Interviews is Enough?
Expert Voices and Early Career Reflections on Sampling and Cases
in Qualitative Research. National Centre for Research Methods
Review Paper, http://eprints.ncrm.ac.uk/2273/ (accessed 27
February 2021).
Ahuvia, A. (2001). ‘Traditional, Interpretive, and Reception Based
Content Analyses: Improving the Ability of Content Analysis to
Address Issues of Pragmatic and Theoretical Concern’, Social
Indicators Research, 54(2): 139–72.
Akard, T., Wray, S., and Gilmer, M. (2015). ‘Facebook Ads Recruit
Parents of Children with Cancer for an Online Survey of Web-Based
Research Preferences’, Cancer Nursing, 38(2): 155–61.
Akerlof, K., Maibach, E. W., Fitzgerald, D., and Cedeno, A. Y. (2013).
‘Do People “Personally Experience” Global Warming, and if so How,
and Does it Matter?’, Global Environmental Change, 23: 81–91.
Alaszewski, A. (2018). ‘Diaries’, in L. M. Given (ed.), The SAGE
Encyclopaedia of Research Methods. London: Sage.
Alatinga, K. A., and Williams, J. J. (2019). ‘Mixed Methods Research
for Health Policy Development in Africa: The Case of Identifying
Very Poor Households for Health Insurance Premium Exemptions in
Ghana’, Journal of Mixed Methods Research, 13(1): 69–84.
Al-Hindi, K. F., and Kawabata, H. (2002). ‘Toward a More Fully
Reflexive Feminist Geography’, in P. Moss (ed.), Feminist
Geography in Practice: Research and Methods. Oxford: Blackwell,
103–15.
Allison, G. T. (1971). Essence of Decision: Explaining the Cuban
Missile Crisis. Boston: Little, Brown.
Altheide, D. L. (1980). ‘Leaving the Newsroom’, in W. Shaffir, R. A.
Stebbins, and A. Turowetz (eds), Fieldwork Experience: Qualitative
Approaches to Social Research. New York: St Martin’s Press.
Altheide, D. L., and Schneider, C. J. (2013). Qualitative Media
Analysis, 2nd edn. Los Angeles: Sage.
Alvesson, M., and Kärreman, D. (2000). ‘Varieties of Discourse: On
the Study of Organization through Discourse Analysis’, Human
Relations, 53(9): 1125–49.
Alwin, D., Baumgartner, E., and Beattie, B. (2018). ‘Number of
Response Categories and Reliability in Attitude Measurement’,
Journal of Survey Statistics and Methodology, 6(2): 212–39.
Amrhein, V., Greenland, S., and McShane, B. (2019). ‘Scientists Rise
Up Against Statistical Significance’, Nature, 567: 305–9.
Anderson, A., Baker, F., and Robinson, D. (2017). ‘Precautionary
Savings, Retirement Planning and Misperceptions of Financial
Literacy’, Journal of Financial Economics, 126(2): 383–98.
Anderson, B. (2016). ‘Laundry, Energy and Time: Insights from 20
Years of Time-Use Diary Data in the United Kingdom’, Energy
Research and Social Science, 22: 125–36.
Anderson, E. (1978). A Place on the Corner. Chicago: University of
Chicago Press.
Anderson, E. (2006). ‘Jelly’s Place: An Ethnographic Memoir’, in D.
Hobbs and R. Wright (eds), The SAGE Handbook of Fieldwork.
London: Sage.
Andrade, L. L. de (2000). ‘Negotiating from the Inside: Constructing
Racial and Ethnic Identity in Qualitative Research’, Journal of
Contemporary Ethnography, 29(3): 268–90.
Antonucci, L. (2016). Student Lives in Crisis: Deepening Inequality
in Times of Austerity. Bristol, UK: Policy Press.
Anzaldúa, G. (1987). Borderlands/La Frontera, vol. 3. San Francisco:
Aunt Lute Books.
Aresi, G., Cleveland, M. J., Vieno, A., Beccaria, F., Turrisi, R., and
Marta, E. (2020). ‘A Mixed Methods Cross-Cultural Study to
Compare Youth Drinking Cultures in Italy and the USA’, Applied
Psychology: Health and Well-Being, 12(1): 231–55.
Armstrong, D., Gosling, A., Weinman, J., and Marteau, T. (1997).
‘The Place of Inter-Rater Reliability in Qualitative Research: An
Empirical Study’, Sociology, 31: 597–606.
Armstrong, G. (1993). ‘Like that Desmond Morris?’, in D. Hobbs and
T. May (eds), Interpreting the Field: Accounts of Ethnography.
Oxford: Clarendon Press.
Aronson, E., and Carlsmith, J. M. (1968). ‘Experimentation in Social
Psychology’, in G. Lindzey and E. Aronson (eds), The Handbook of
Social Psychology. Reading, MA: Addison-Wesley.
Asch, S. E. (1951). ‘Effect of Group Pressure upon the Modification
and Distortion of Judgments’, in H. Guetzkow (ed.), Groups,
Leadership and Men. Pittsburgh, PA: Carnegie Press.
Askins, K. (2016). ‘Emotional Citizenry: Everyday Geographies of
Befriending, Belonging and Intercultural Encounter’, Transactions
of the Institute of British Geographers, 41(4): 515–27.
Athey, N., Boyd, N., and Cohen, E. (2017). ‘Becoming a Medical
Marijuana User: Reflections on Becker’s Trilogy—Learning
Techniques, Experiencing Effects, and Perceiving Those Effects as
Enjoyable’, Contemporary Drug Problems, 44(3): 212–31.
Atkinson, P. (1981). The Clinical Experience. Farnborough, UK:
Gower.
Atkinson, P. (1990). The Ethnographic Imagination: Textual
Constructions of Society. London: Routledge.
Atkinson, P. (2006). Everyday Arias: An Operatic Ethnography.
Oxford: AltaMira.
Atkinson, P., and Coffey, A. (2011). ‘Analysing Documentary
Realities’, in D. Silverman (ed.), Qualitative Research: Issues of
Theory, Method and Practice, 3rd edn. London: Sage.
Atkinson, P., and Delamont, S. (2006). ‘Rescuing Narrative from
Qualitative Research’, Narrative Inquiry, 16(1): 164–72.
Atkinson, P., Coffey, A., and Delamont, S. (2003). Key Themes in
Qualitative Research: Continuities and Changes. Walnut Creek, CA:
AltaMira.
Atkinson, R. (1998). The Life Story Interview. Beverly Hills, CA:
Sage.
Atkinson, R. (2004). ‘Life Story Interview’, in M. S. Lewis-Beck, A.
Bryman, and T. F. Liao (eds), The Sage Encyclopedia of Social
Science Research Methods. Thousand Oaks, CA: Sage.
Atrey, S. (2019). Intersectional Discrimination. Oxford: Oxford
University Press.
Attride-Stirling, J. (2001). ‘Thematic Networks: An Analytic Tool for
Qualitative Research’, Qualitative Research, 1: 385–404.
Badcock, P., Kent, P., Smith, A., Simpson, J., Pennay, D., Rissel, C.,
de Visser, R., Grulich, A., and Richters, J. (2017). ‘Differences
between Landline and Mobile Phone Users in Sexual Behaviour
Research’, Archives of Sexual Behavior, 46(6): 1711–21.
Badenes-Ribera, L., Frias-Navarro, D., Bonilla-Campos, A., Pons-
Salvador, G., and Monterde-i-Bort, H. (2015). ‘Intimate Partner
Violence in Self-Identified Lesbians: A Meta-analysis of its
Prevalence’, Sexuality Research and Social Policy, 12(1): 47–59.
Bahr, H. M., Caplow, T., and Chadwick, B. A. (1983). ‘Middletown
III: Problems of Replication, Longitudinal Measurement, and
Triangulation’, Annual Review of Sociology, 9: 243–64.
Bailey, A., Giangola, L., and Boykoff, M. T. (2014). ‘How
Grammatical Choice Shapes Media Representations of Climate
(Un)certainty’, Environmental Communication, 8: 197–215.
Baird, A. (2018). ‘Becoming the Baddest: Masculine Trajectories of
Gang Violence in Medellín’, Estudios Socio-Jurídicos, 20(2): 9–48.
Ball, S. J. (1981). Beachside Comprehensive: A Case Study of
Secondary Schooling. Cambridge: Cambridge University Press.
Bampton, R., and Cowton, C. J. (2002). ‘The E-Interview’, Forum:
Qualitative Social Research, 3(2), www.qualitative-
research.net/index.php/fqs/article/view/848/1842 (accessed 27
February 2021).
Banks, J. (2012). ‘Edging Your Bets: Advantage Play, Gambling,
Crime and Victimisation’, Crime Media Culture, 9: 171–87.
Banks, J. (2014). ‘Online Gambling, Advantage Play, Reflexivity and
Virtual Ethnography’, in K. Lumsden and A. Winter (eds), Reflexivity
in Criminological Research: Experiences with the Powerful and the
Powerless. Basingstoke, UK: Palgrave Macmillan.
Banks, M., and Zeitlyn, D. (2015). Visual Methods in Social
Research, 2nd edn. London: Sage.
Bansak, K., Hainmueller, J., and Hangartner, D. (2016). ‘How
Economic, Humanitarian, and Religious Concerns Shape European
Attitudes toward Asylum Seekers’, Science, 354(6309): 217–22.
Barbour, R. (2007). Doing Focus Groups. London: Sage.
Barbour, R. S. (2014). ‘Analysing Focus Groups’, in U. Flick (ed.),
The SAGE Handbook of Qualitative Data Analysis. London: Sage,
313–26.
Barge, S., and Gehlbach, H. (2012). ‘Using the Theory of Satisficing
to Evaluate the Quality of Survey Data’, Research in Higher
Education, 53(2): 182–200.
Barnard-Wills, D. (2016). Surveillance and Identity: Discourse,
Subjectivity and the State. London: Routledge.
Barter, C., and Renold, E. (1999). ‘The Use of Vignettes in Qualitative
Research’, Social Research Update, 25,
https://sru.soc.surrey.ac.uk/SRU25.html.
Bauman, Z. (1978). Hermeneutics and Social Science: Approaches to
Understanding. London: Hutchison.
Baumgartner, J. C., and Morris, J. S. (2010). ‘MyFaceTube Politics:
Social Networking Web Sites and Political Engagement of Young
Adults’, Social Science Computer Review, 28: 24–44.
Bazeley, P. (2013). Qualitative Data Analysis: Practical Strategies.
London: Sage.
Beardsworth, A. (1980). ‘Analysing Press Content: Some Technical
and Methodological Issues’, in H. Christian (ed.), Sociology of
Journalism and the Press. Keele, UK: Keele University Press.
Beardsworth, A., Bryman, A., Ford, J., and Keil, T. (n.d.). ‘“The Dark
Figure” in Statistics of Unemployment and Vacancies: Some
Sociological Implications’, discussion paper, Department of Social
Sciences, Loughborough University.
Becker, H. S. (1958). ‘Problems of Inference and Proof in Participant
Observation’, American Sociological Review, 23: 652–60.
Becker, H. S. (1963). Outsiders: Studies in the Sociology of
Deviance. New York: Free Press.
Becker, H. S. (1967). ‘Whose Side are We On?’, Social Problems, 14:
239–47.
Becker, H. S. (1974). ‘Photography and Sociology’, Studies in Visual
Communication, 1(1): 3–26.
Becker, H. S. (1982). ‘Culture: A Sociological View’, Yale Review, 71:
513–27.
Becker, H. S. (1986). Writing for Social Scientists: How to Start and
Finish Your Thesis, Book, or Article. Chicago: University of Chicago
Press.
Becker, H. S., and Geer, B. (1957). ‘Participant Observation and
Interviewing: A Comparison’, Human Organization, 16: 28–32.
Becker, L. A. (n.d.). Effect Size Calculators,
https://lbecker.uccs.edu/ (accessed 27 November 2020).
Becker, S., Bryman, A., and Sempik, J. (2006). Defining ‘Quality’ in
Social Policy Research: Views, Perceptions and a Framework for
Discussion. Lavenham, UK: Social Policy Association,
www.social-
policy.org.uk/downloads/defining%20quality%20in%20social%20p
olicy%20research.pdf (accessed 27 February 2021).
Becker, S., Sempik, J., and Bryman, A. (2010). ‘Advocates, Agnostics
and Adversaries: Researchers’ Perceptions of Service User
Involvement in Social Policy Research’, Social Policy and Society, 9:
355–66.
Belk, R. W., Sherry, J. F., and Wallendorf, M. (1988). ‘A Naturalistic
Inquiry into Buyer and Seller Behavior at a Swap Meet’, Journal of
Consumer Research, 14: 449–70.
Bell, C. (1969). ‘A Note on Participant Observation’, Sociology, 3:
417–18.
Bell, C., and Newby, H. (1977). Doing Sociological Research.
London: George Allen & Unwin.
bell hooks (1989). Talking Back: Thinking Feminist, Thinking Black.
Boston: South End Press.
Bengtsson, M., Berglund, T., and Oskarson, M. (2013). ‘Class and
Ideological Orientations Revisited: An Exploration of Class-Based
Mechanisms’, British Journal of Sociology, 64: 691–716.
Beninger, K., Fry, A., Jago, N., Lepps, H., Nass, L., and Silvester, H.
(2014). Research Using Social Media: Users’ Views. London:
NatCen, www.natcen.ac.uk/media/282288/p0639-research-
using-social-media-report-final-190214.pdf (accessed 1 March 2021).
Benson, M. (2011). The British in Rural France: Lifestyle Migration
and the Ongoing Quest for a Better Way of Life. Manchester, UK:
Manchester University Press.
Benson, M., and Jackson, E. (2013). ‘Place-Making and Place
Maintenance: Performativity, Place and Belonging among the Middle
Classes’, Sociology, 47: 793–809.
Benson, M., and Lewis, C. (2019). ‘Brexit, British People of Colour in
the EU-27 and Everyday Racism in Britain and Europe’, Ethnic and
Racial Studies, 42(13): 2211–28.
Berelson, B. (1952). Content Analysis in Communication Research.
New York: Free Press.
Beresford, P. (2019). ‘Public Participation in Health and Social Care:
Exploring the Co-production of Knowledge’, Frontiers in Sociology,
3: 41.
Bernhard, L., and Kriesi, H. (2019). ‘Populism in Election Times: A
Comparative Analysis of 11 Countries in Western Europe’, West
European Politics, 49(6): 1188–208.
Berthoud, R. (2000). ‘Introduction: The Dynamics of Social Change’,
in R. Berthoud and J. Gershuny (eds), Seven Years in the Lives of
British Families: Evidence on the Dynamics of Social Change from
the British Household Panel Survey. Bristol, UK: Policy Press.
Beullens, K., and Schepers, A. (2013). ‘Display of Alcohol Use on
Facebook: A Content Analysis’, Cyberpsychology, Behavior, and
Social Networking, 16: 497–503.
Bharti, S., Vachha, B., Pradhan, R., Babu, K., and Jena, S. (2016).
‘Sarcastic Sentiment Detection in Tweets Streamed in Real Time: A
Big Data Approach’, Digital Communications and Networks, 2(3):
108–21.
Bhaskar, R. (1989). Reclaiming Reality: A Critical Introduction to
Contemporary Philosophy. London: Verso.
Bianchi, A., Biffignandi, S., and Lynn, P. (2016). ‘Web-CAPI
Sequential Mixed-Mode Design in a Longitudinal Survey: Effects on
Participation Rates, Sample Composition and Costs’. Economic and
Social Research Council Understanding Society Working Paper
Series no. 2016–08,
www.understandingsociety.ac.uk/sites/default/files/downloads/wor
king-papers/2016-08.pdf (accessed 27 February 2021).
Bickerstaff, K., Lorenzoni, I., Pidgeon, N., Poortinga, W., and
Simmons, P. (2008). ‘Framing the Energy Debate in the UK: Nuclear
Power, Radioactive Waste and Climate Change Mitigation’, Public
Understanding of Science, 17: 145–69.
Billig, M. (1992). Talking of the Royal Family. London: Routledge.
Billig, M. (1996). Arguing and Thinking: A Rhetorical Approach to
Social Psychology. Cambridge: Cambridge University Press.
Billig, M. (2013). Learn to Write Badly: How to Succeed in the
Social Sciences. Cambridge: Cambridge University Press.
Birt, L., Scott, S., Cavers, D., Campbell, C., and Walter, F. (2016).
‘Member Checking: A Tool to Enhance Trustworthiness or Merely a
Nod to Validation?’, Qualitative Health Research, 26(13): 1802–11.
Bisdee, D., Daly, T., and Price, D. (2013). ‘Behind Closed Doors:
Older Couples and the Gendered Management of Household Money’,
Social Policy and Society, 12: 163–74.
Black, A., Burns, N., Mair, F., and O’Donnell, K. (2018). ‘Migration
and the Media in the UK: The Effect on Healthcare Access for
Asylum Seekers and Refugees’, European Journal of Public Health,
28 (suppl. 1, May), https://doi.org/10.1093/eurpub/cky047.081.
Blaikie, A. (2001). ‘Photographs in the Cultural Account: Contested
Narratives and Collective Memory in the Scottish Islands’,
Sociological Review, 49: 345–67.
Blaikie, N. (2004). ‘Abduction’, in M. S. Lewis-Beck, A. Bryman, and
T. F. Liao (eds), The Sage Encyclopedia of Social Science Research
Methods. Thousand Oaks, CA: Sage.
Blaikie, N., and Priest, J. (2019). Designing Social Research: The
Logic of Anticipation. Cambridge: Polity Press.
Blair, J., Czaja, R., and Blair, E. (2014). Designing Surveys: A Guide
to Decisions and Procedures. London: Sage.
Blatchford, P. (2005). ‘A Multi-Method Approach to the Study of
School Size Differences’, International Journal of Social Research
Methodology, 8: 195–205.
Blatchford, P., Bassett, P., Brown, P., and Webster, R. (2009). ‘The
Effect of Support Staff on Pupil Engagement and Individual
Attention’, British Educational Research Journal, 36: 661–86.
Blatchford, P., Chan, K. W., Galton, M., Lai, K. C., and Lee, J. C.
(2016). Class Size: Eastern and Western Perspectives. Abingdon,
UK: Routledge.
Blatchford, P., Edmonds, S., and Martin, C. (2003). ‘Class Size, Pupil
Attentiveness and Peer Relations’, British Journal of Educational
Psychology, 73: 15–36.
Blauner, R. (1964). Alienation and Freedom. Chicago: University of
Chicago Press.
Bligh, M. C., and Kohles, J. C. (2014). ‘Comparing Leaders Across
Contexts, Culture, and Time: Computerized Content Analysis of
Leader–Follower Communications’, Leadership, 10: 142–59.
Bligh, M. C., Kohles, J. C., and Meindl, J. R. (2004). ‘Charisma under
Crisis: Presidential Leadership, Rhetoric, and Media Responses
Before and After the September 11th Terrorist Attacks’, Leadership
Quarterly, 15: 211–39.
Blommaert, L., Coenders, M., and van Tubergen, F. (2014). ‘Ethnic
Discrimination in Recruitment and Decision Makers’ Features:
Evidence from Laboratory Experiment and Survey Data using a
Student Sample’, Social Indicators Research, 116: 731–54.
Bloor, M. (1997). ‘Addressing Social Problems through Qualitative
Research’, in D. Silverman (ed.), Qualitative Research: Theory,
Method and Practice. London: Sage.
Bloor, M. (2002). ‘No Longer Dying for a Living: Collective
Responses to Injury Risks in South Wales Mining Communities,
1900–47’, Sociology, 36: 89–105.
Bloor, M., Frankland, S., Thomas, M., and Robson, K. (2001). Focus
Groups in Social Research. London: Sage.
Blumer, H. (1954). ‘What is Wrong with Social Theory?’, American
Sociological Review, 19: 3–10.
Blumer, H. (1956). ‘Sociological Analysis and the “Variable”’,
American Sociological Review, 21: 683–90.
Blumer, H. (1962). ‘Society as Symbolic Interaction’, in A. M. Rose
(ed.), Human Behavior and Social Processes. London: Routledge &
Kegan Paul.
Bobbitt-Zeher, D. (2011). ‘Connecting Gender Stereotypes,
Institutional Policies, and Gender Composition of Workplace’,
Gender and Society, 25: 764–86.
Bock, A., Isermann, H., and Knieper, T. (2011). ‘Quantitative Content
Analysis of the Visual’, in E. Margolis and L. Pauwels (eds), The
SAGE Handbook of Visual Research Methods. London: Sage, 265–82.
Bock, J. J. (2018). ‘Migrants in the Mountains: Shifting Borders and
Contested Crisis Experiences in Rural Germany’, Sociology, 52(3):
569–86.
Boepple, L., and Thompson, J. K. (2014). ‘A Content Analysis of
Healthy Living Blogs: Evidence of Content Thematically Consistent
with Dysfunctional Eating Attitudes and Behaviors’, International
Journal of Eating Disorders, 47: 362–7.
Bogdan, R., and Taylor, S. J. (1975). Introduction to Qualitative
Research Methods: A Phenomenological Approach to the Social
Sciences. New York: Wiley.
Booker, C., Kelly, Y., and Sacker, A. (2018). ‘Gender Differences in
the Associations Between Age Trends of Social Media Interaction
and Well-Being Among 10–15 Year Olds in the UK’. BMC Public
Health, 18 (article no. 321), https://doi.org/10.1186/s12889-018-
5220-4.
Bors, D. (2018). Data Analysis for the Social Sciences: Integrating
Theory and Practice. London: Sage.
Bosnjak, M., Neubarth, W., Couper, M. P., Bandilla, W., and
Kaczmirek, L. (2008). ‘Prenotification in Web-Based Access Panel
Surveys: The Influence of Mobile Text Messaging Versus E-mail on
Response Rates and Sample Composition’, Social Science Computer
Review, 26: 213–23.
boyd, d. (2014). It’s Complicated: The Social Lives of Networked
Teens. New Haven, CT: Yale University Press.
Braekman, E., Berete, F., Charafeddine, R., Demarest, S., Drieskens,
S., Gisle, L., Molenberghs, G., Tafforeau, J., Van der Heyden, J., and
Van Hal, G. (2018). ‘Measurement Agreement of the Self-
Administered Questionnaire of the Belgian Health Interview Survey:
Paper-and-Pencil versus Web-Based Mode’, PLoS ONE, 13(5):
e0197434.
Brandtzaeg, P. (2017). ‘Facebook is No “Great Equalizer”: A Big Data
Approach to Gender Differences in Civic Engagement Across
Countries’, Social Science Computer Review, 35(1): 103–25.
Brannen, J., and Nilsen, A. (2006). ‘From Fatherhood to Fathering:
Tradition and Change Among British Fathers in Four-Generation
Families’, Sociology, 40: 335–52.
Brannen, J., Lewis, S., Nilsen, A., and Smithson, J. (eds) (2002).
Young Europeans, Work and Family: Futures in Transition.
London: Routledge.
Brannen, J., O’Connell, R., and Mooney, A. (2013). ‘Families, Meals,
and Synchronicity: Eating Together in British Dual Earner Families’,
Community, Work and Family, 16: 417–34.
Braun, V., and Clarke, V. (2006). ‘Using Thematic Analysis in
Psychology’, Qualitative Research in Psychology, 3: 77–101.
Braun, V., and Clarke, V. (2013). Successful Qualitative Research: A
Practical Guide for Beginners. London: Sage.
Brayfield, A., and Rothe, H. (1951). ‘An Index of Job Satisfaction’,
Journal of Applied Psychology, 35: 307–11.
Bregoli, I. (2013). ‘Effects of DMO Coordination on Destination
Brand Identity: A Mixed-Method Study of the City of Edinburgh’,
Journal of Travel Research, 52: 212–24.
Brewster, Z. W., and Lynn, M. (2014). ‘Black–White Earnings Gap
among Restaurant Servers: A Replication, Extension, and
Exploration of Consumer Racial Discrimination in Tipping’,
Sociological Inquiry, 84: 545–69.
Brick, J., and Williams, D. (2013). ‘Explaining Rising Nonresponse
Rates in Cross-Sectional Surveys’, Annals of the American Academy
of Political and Social Science, 645(1): 36–59.
Bridgman, P. W. (1927). The Logic of Modern Physics. New York:
Macmillan.
Briggs, C. L. (1986). Learning How to Ask: A Sociolinguistic
Appraisal of the Role of the Interview in Social Science Research.
Cambridge: Cambridge University Press.
British Academy (2015). Count Us In: Quantitative Skills for a New
Generation. London: British Academy.
Britton, J. (2018). ‘Muslim Men, Racialised Masculinities and
Personal Life’, Sociology, 53(1): 36–51.
Brooks, R., and Waters, J. (2015). ‘The Hidden Internationalism of
Elite English Schools’, Sociology, 49: 212–28.
Brotsky, S. R., and Giles, D. (2007). ‘Inside the “Pro-Ana”
Community: A Covert Online Participant Observation’, Eating
Disorders, 15: 93–109.
Brown, D. (2003). The Da Vinci Code. New York: Doubleday.
Bruce, C. S. (1994). ‘Research Students’ Early Experiences of the
Dissertation Literature Review’, Studies in Higher Education, 19:
217–29.
Brüggen, E., and Willems, P. (2009). ‘A Critical Comparison of
Offline Focus Groups, Online Focus Groups and E-Delphi’,
International Journal of Market Research, 51: 363–81.
Bryant, A., and Charmaz, K. (eds) (2019). The Sage Handbook of
Current Developments in Grounded Theory. London: Sage.
Bryman, A. (1974). ‘Sociology of Religion and Sociology of Elites’,
Archives de sciences sociales des religions, 38: 109–21.
Bryman, A. (1988a). Quantity and Quality in Social Research.
London: Routledge.
Bryman, A. (1988b). Doing Research in Organizations. London:
Routledge.
Bryman, A. (1994). ‘The Mead/Freeman Controversy: Some
Implications for Qualitative Researchers’, in R. G. Burgess and C.
Pole (eds), Issues in Quantitative Research, Studies in Qualitative
Methodology, vol. 4. Greenwich, CT: JAI Press.
Bryman, A. (1995). Disney and his Worlds. London: Routledge.
Bryman, A. (1999). ‘Global Disney’, in P. Taylor and D. Slater (eds),
The American Century. Oxford: Blackwell.
Bryman, A. (2003). ‘McDonald’s as a Disneyized Institution: Global
Implications’, American Behavioral Scientist, 47: 154–67.
Bryman, A. (2004). The Disneyization of Society. London: Sage.
Bryman, A. (2006a). ‘Integrating Quantitative and Qualitative
Research: How is it Done?’, Qualitative Research, 6(1): 97–113.
Bryman, A. (2006b). ‘Paradigm Peace and the Implications for
Quality’, International Journal of Social Research Methodology, 9:
111–26.
Bryman, A. (2007). ‘Effective Leadership in Higher Education: A
Literature Review’, Studies in Higher Education, 32: 693–710.
Bryman, A. (2008). ‘Why Do Researchers
Integrate/Combine/Mesh/Blend/Mix/Merge/Fuse Quantitative and
Qualitative Research?’, in M. M. Bergman (ed.), Advances in Mixed
Methods Research. London: Sage.
Bryman, A. (2012). Contribution to S. E. Baker and R. Edwards
(eds), How Many Qualitative Interviews is Enough? Expert Voices
and Early Career Reflections on Sampling and Cases in Qualitative
Research. National Centre for Research Methods Review Paper,
http://eprints.ncrm.ac.uk/2273/ (accessed 27 February 2021).
Bryman, A. (2014). ‘June 1989 and Beyond: Julia Brannen’s
Contribution to Mixed Methods Research’, International Journal of
Social Research Methodology, 17: 121–31.
Bryman, A., and Burgess, R. G. (1994a). ‘Developments in
Qualitative Data Analysis: An Introduction’, in A. Bryman and R. G.
Burgess (eds), Analyzing Qualitative Data. London: Routledge.
Bryman, A., and Burgess, R. G. (1994b). ‘Reflections on Qualitative
Data Analysis’, in A. Bryman and R. G. Burgess (eds), Analyzing
Qualitative Data. London: Routledge.
Bryman, A., and Burgess, R. G. (1999). ‘Introduction: Qualitative
Research Methodology—A Review’, in A. Bryman and R. G. Burgess
(eds), Qualitative Research. London: Sage.
Bryman, A., and Cramer, D. (2011). Quantitative Data Analysis with
IBM SPSS 17, 18 and 19: A Guide for Social Scientists. London:
Routledge.
Bryman, A., Becker, S., and Sempik, J. (2008). ‘Quality Criteria for
Quantitative, Qualitative and Mixed Methods Research: The View
from Social Policy’, International Journal of Social Research
Methodology, 11: 261–76.
Buckle, A., and Farrington, D. P. (1984). ‘An Observational Study of
Shoplifting’, British Journal of Criminology, 24: 63–73.
Buckton, C., Patterson, C., Hyseni, L., Katikireddi, S., Lloyd-
Williams, F., Elliott-Green, A., Capewell, S., and Hilton, S. (2018).
‘The Palatability of Sugar-Sweetened Beverage Taxation: A Content
Analysis of Newspaper Coverage of the UK Sugar Debate’, PLoS
ONE, 13(12): e0207576.
Bulmer, M. (1979). ‘Concepts in the Analysis of Qualitative Data’,
Sociological Review, 27: 651–77.
Bulmer, M. (1982). ‘The Merits and Demerits of Covert Participant
Observation’, in M. Bulmer (ed.), Social Research Ethics. London:
Macmillan.
Burawoy, M. (1979). Manufacturing Consent. Chicago: University of
Chicago Press.
Burawoy, M. (2003). ‘Revisits: An Outline of a Theory of Reflexive
Ethnography’, American Sociological Review, 68: 645–79.
Burger, J. M. (2009). ‘Replicating Milgram: Would People Still Obey
Today?’, American Psychologist, 64: 1–11.
Burgess, R. G. (1983). Inside Comprehensive Education: A Study of
Bishop McGregor School. London: Methuen.
Burgess, R. G. (1984). In the Field. London: Allen & Unwin.
Búriková, Z., and Miller, D. (2010). Au Pair. Cambridge: Polity.
Burnap, P., Gibson, R., Sloan, L., Southern, R., and Williams, M.
(2016). ‘140 Characters to Victory? Using Twitter to Predict the UK
2015 General Election’, Electoral Studies, 41: 230–3.
Butcher, B. (1994). ‘Sampling Methods: An Overview and Review’,
Survey Methods Centre Newsletter, 15: 4–8.
Butler, A. E., Copnell, B., and Hall, H. (2018). ‘The Development of
Theoretical Sampling in Practice’, Collegian, 25(5): 561–6.
Butler, T., and Robson, G. (2001). ‘Social Capital, Gentrification and
Neighbourhood Change in London: A Comparison of Three South
London Neighbourhoods’, Urban Studies, 38: 2145–62.
Cahill, M., Robinson, K., Pettigrew, J., Galvin, R., and Stanley, M.
(2018). ‘Qualitative Synthesis: A Guide to Conducting a Meta-
ethnography’, British Journal of Occupational Therapy, 81(3): 129–
37.
Calder, B. J. (1977). ‘Focus Groups and the Nature of Qualitative
Marketing Research’, Journal of Marketing Research, 14: 353–64.
Callaghan, G., and Thompson, P. (2002). ‘“We Recruit Attitude”: The
Selection and Shaping of Routine Call Centre Labour’, Journal of
Management Studies, 39: 233–54.
Calvey, D. (2019). ‘The Everyday World of Bouncers: A Rehabilitated
Role for Covert Ethnography’, Qualitative Research, 19(3): 247–62.
Camerer, C. F. (1997). ‘Taxi Drivers and Beauty Contests’,
Engineering and Science, 60: 11–19.
Campbell, D. T. (1957). ‘Factors Relevant to the Validity of
Experiments in Social Settings’, Psychological Bulletin, 54: 297–312.
Campbell, N., Ali, F., Finlay, A., and Salek, S. (2015). ‘Equivalence of
Electronic and Paper-Based Patient-Reported Outcome Measures’,
Quality of Life Research, 24(8): 1949–61.
Caretta, M. A., and Vacchelli, E. (2015). ‘Re-thinking the Boundaries
of the Focus Group: A Reflexive Analysis on the Use and Legitimacy
of Group Methodologies in Qualitative Research’, Sociological
Research Online, 20(4): 58–70.
Carson, A., Chabot, C., Greyson, D., Shannon, K., Duff, P., and
Shoveller, J. (2017). ‘A Narrative Analysis of the Birth Stories of
Early‐Age Mothers’, Sociology of Health & Illness, 39(6): 816–31.
Carter-Harris, L., Bartlett Ellis, R., Warrick, A., and Rawl, S. (2016).
‘Beyond Traditional Newspaper Advertisement: Leveraging
Facebook-Targeted Advertisement to Recruit Long-Term Smokers
for Research’, Journal of Medical Internet Research, 18(6), e117.
Cassidy, R. (2014). ‘“A Place for Men to Come and Do Their Thing”:
Constructing Masculinities in Betting Shops in London’, British
Journal of Sociology, 65: 170–91.
Catterall, M., and Maclaran, P. (1997). ‘Focus Group Data and
Qualitative Analysis Programs: Coding the Moving Picture as well as
Snapshots’, Sociological Research Online, 2,
www.socresonline.org.uk/2/1/6 (accessed 27 February 2021).
Cavendish, R. (1982). Women on the Line. London: Routledge &
Kegan Paul.
Cerulo, K. A. (2014). ‘Reassessing the Problem: Response to
Jerolmack and Khan’, Sociological Methods and Research, 43: 219–
26.
Chambliss, D., and Schutt, R. (2019). Making Sense of the Social
World: Methods of Investigation. London: Sage.
Charlton, T., Coles, D., Panting, C., and Hannan, A. (1999).
‘Behaviour of Nursery Class Children before and after the Availability
of Broadcast Television: A Naturalistic Study of Two Cohorts in a
Remote Community’, Journal of Social Behavior and Personality,
14: 315–24.
Charlton, T., Gunter, B., and Coles, D. (1998). ‘Broadcast Television
as a Cause of Aggression? Recent Findings from a Naturalistic
Study’, Emotional and Behavioural Difficulties, 3: 5–13.
Charmaz, K. (1983). ‘The Grounded Theory Method: An Explication
and Interpretation’, in R. M. Emerson (ed.), Contemporary Field
Research: A Collection of Readings. Boston: Little, Brown.
Charmaz, K. (2000). ‘Grounded Theory: Objectivist and
Constructivist Methods’, in N. K. Denzin and Y. S. Lincoln (eds),
Handbook of Qualitative Research, 2nd edn. Thousand Oaks, CA:
Sage.
Charmaz, K. (2002). ‘Qualitative Interviewing and Grounded Theory
Analysis’, in J. F. Gubrium and J. A. Holstein (eds), Handbook of
Interview Research: Context and Method. Thousand Oaks, CA:
Sage.
Charmaz, K. (2004). ‘Grounded Theory’, in M. S. Lewis-Beck, A.
Bryman, and T. F. Liao (eds), The Sage Encyclopedia of Social
Science Research Methods. Thousand Oaks, CA: Sage.
Charmaz, K. (2006). Constructing Grounded Theory: A Practical
Guide through Qualitative Analysis. London: Sage.
Chavez, C. (2008). 'Conceptualising from the Inside:
Advantages, Implications and Demands on Insider Positionality’,
Qualitative Report, 13(3): 474–94.
Chin, M. G., Fisak, B., Jr, and Sims, V. K. (2002). ‘Development of
the Attitudes toward Vegetarianism Scale’, Anthrozoös, 15: 333–42.
Christensen, D. H., and Dahl, C. M. (1997). ‘Rethinking Research
Dichotomies’, Family and Consumer Sciences Research Journal,
25(3): 269–85.
Cicourel, A. V. (1964). Method and Measurement in Sociology. New
York: Free Press.
Cicourel, A. V. (1968). The Social Organization of Juvenile Justice.
New York: Wiley.
Cicourel, A. V. (1982). ‘Interviews, Surveys, and the Problem of
Ecological Validity’, American Sociologist, 17: 11–20.
Claiborn, W. L. (1969). 'Expectancy Effects in the Classroom: A
Failure to Replicate’, Journal of Educational Psychology, 60: 377–
83.
Clark, A. (2013). ‘Haunted by Images? Ethical Moments and
Anxieties in Visual Research’, Methodological Innovations Online,
8(2): 68–81,
https://journals.sagepub.com/doi/10.4256/mio.2013.014 (accessed
28 February 2021).
Clark, A., and Emmel, N. (2010). ‘Realities Toolkit #13: Using
Walking Interviews’,
http://eprints.ncrm.ac.uk/800/1/2009_connected_lives_methods_
emmel_clark.pdf (accessed 27 February 2021).
Clark, T. (2008). ‘“We’re Over-Researched Here!” Exploring
Accounts of Research Fatigue within Qualitative Research
Engagements’, Sociology, 42(5): 953–70.
Clark, T. (2019). ‘“Normal Happy Girl” Interrupted: An
Auto/biographical Analysis of Myra Hindley’s Public Confession’,
Deviant Behavior,
https://doi.org/10.1080/01639625.2019.1689047.
Clark, T., and Foster, L. (2017). ‘“I’m Not a Natural Mathematician”:
Inquiry-Based Learning, Constructive Alignment and Introductory
Quantitative Social Science’, Teaching Public Administration, 35(3):
260–79.
Clark, T., Foster, L., and Bryman, A. (2019). How to Do Your
Dissertation or Social Science Research Project. Oxford: Oxford
University Press.
Clarke, V., and Braun, V. (2013). ‘Teaching Thematic Analysis’, The
Psychologist, 26: 120–3.
Clayman, S., and Gill, V. T. (2004). ‘Conversation Analysis’, in M.
Hardy and A. Bryman (eds), Handbook of Data Analysis. London:
Sage.
Clifford, J., and Marcus, G. E. (eds) (1986). Writing Culture: The
Poetics and Politics of Ethnography. School of American Research
Advanced Seminar. Oakland, CA: University of California Press.
Cloward, R. A., and Ohlin, L. E. (1960). Delinquency and
Opportunity: A Theory of Delinquent Gangs. New York: Free Press.
Cobanoglu, C., Ward, B., and Moreo, P. J. (2001). ‘A Comparison of
Mail, Fax and Web-Based Survey Methods’, International Journal of
Market Research, 43: 441–52.
Coenen, M., Stamm, T. A., Stucki, G., and Cieza, A. (2012).
‘Individual Interviews and Focus Groups in Patients with
Rheumatoid Arthritis: A Comparison of Two Qualitative Methods’,
Quality of Life Research, 21(2): 359–70.
Coffey, A. (1999). The Ethnographic Self: Fieldwork and the
Representation of Reality. London: Sage.
Coffey, A. (2014). ‘Analysing Documents’, in U. Flick (ed.), The SAGE
Handbook of Qualitative Data Analysis. London: Sage, 367–80.
Coffey, A., and Atkinson, P. (1996). Making Sense of Qualitative
Data: Complementary Research Strategies. Thousand Oaks, CA:
Sage.
Coffey, A., Holbrook, B., and Atkinson, P. (1994). ‘Qualitative Data
Analysis: Technologies and Representations’, Sociological Research
Online, 1(1), www.socresonline.org.uk/1/1/4.html (accessed 28
February 2021).
Cohen, L., Manion, L., and Morrison, K. (2017). Research Methods
in Education, 8th edn. Abingdon, UK: Routledge.
Cohen, R. S. (2010). ‘When It Pays to be Friendly: Employment
Relationships and Emotional Labour in Hairstyling’, Sociological
Review, 58: 197–218.
Cohen, S. (1972). Folk Devils and Moral Panics. London:
MacGibbon and Kee.
Colditz, J. B., Welling, J., Smith, N. A., James, A. E., and Primack, B.
A. (2019). ‘World Vaping Day: Contextualizing Vaping Culture in
Online Social Media Using a Mixed Methods Approach’, Journal of
Mixed Methods Research, 13(2): 196–215,
https://doi.org/10.1177/1558689817702753.
Coleman, C., and Moynihan, J. (1996). Understanding Crime Data:
Haunted by the Dark Figure. Buckingham, UK: Open University
Press.
Coleman, J. S. (1958). ‘Relational Analysis: The Study of Social
Organization with Survey Methods’, Human Organization, 16: 28–
36.
Collins, M. (1997). ‘Interviewer Variability: A Review of the Problem’,
Journal of the Market Research Society, 39: 67–84.
Collins, P. (2015). ‘Osborne’s Brave New Britain is a Pipe Dream’,
The Times, 16 January,
www.thetimes.co.uk/tto/opinion/columnists/article4324673.ece
(accessed 12 October 2020).
Collins, P. H. (1990). ‘Black Feminist Thought in the Matrix of
Domination’, in P. H. Collins (ed.), Black Feminist Thought:
Knowledge, Consciousness, and the Politics of Empowerment.
Boston: Unwin Hyman, 221–38.
Collins, R. (1994). Four Sociological Traditions, rev. edn. New York:
Oxford University Press.
Connelly, R., Gayle, V., and Lambert, P. S. (2016). ‘Ethnicity and
Ethnic Group Measures in Social Survey Research’, Methodological
Innovations 9, https://doi.org/10.1177/2059799116642885.
Converse, P. D., Wolfe, E. W., Huang, X., and Oswald, F. L. (2008).
‘Response Rates for Mixed-Mode Surveys Using Mail and E-
Mail/Web’, American Journal of Evaluation, 29: 99–107.
Cook, T. D., and Campbell, D. T. (1979). Quasi-Experimentation:
Design and Analysis for Field Settings. Boston: Houghton Mifflin.
Cooper, H. (2016). Research Synthesis and Meta-Analysis: A Step-
by-Step Approach. London: Sage.
Corbin, J., and Strauss, A. (2015). Basics of Qualitative Research:
Techniques and Procedures for Developing Grounded Theory, 4th
edn. Los Angeles: Sage.
Corden, A., and Sainsbury, R. (2006). Using Verbatim Quotations in
Reporting Qualitative Social Research: Researchers’ Views. York,
UK: Social Policy Research Unit, University of York,
www.york.ac.uk/inst/spru/pubs/pdf/verbquotresearch.pdf
(accessed 28 February 2021).
Cortazzi, M. (2001). ‘Narrative Analysis in Ethnography’, in P.
Atkinson, A. Coffey, S. Delamont, J. Lofland, and L. Lofland (eds),
Handbook of Ethnography. London: Sage.
Corti, L. (1993). ‘Using Diaries in Social Research’, Social Research
Update, 2, http://sru.soc.surrey.ac.uk/SRU2.html (accessed 28
February 2021).
Couper, M. P. (2000). ‘Web Surveys: A Review of Issues and
Approaches’, Public Opinion Quarterly, 64: 464–94.
Couper, M. P. (2008). Designing Effective Web Surveys. Cambridge:
Cambridge University Press.
Couper, M. P., Traugott, M. W., and Lamias, M. J. (2001). ‘Web
Survey Design and Administration’, Public Opinion Quarterly, 65:
230–53.
Cowls, J., and Bright, J. (2017). ‘International Hyperlinks in Online
News Media’, in N. Brügger and R. Schroeder (eds), The Web as
History. London: UCL Press, 101–16.
Crawford, S. D., Couper, M. P., and Lamias, M. J. (2001). ‘Web
Surveys: Perceptions of Burden’, Social Science Computer Review,
19: 146–62.
Crenshaw, K. (1989). ‘Demarginalizing the Intersection of Race and
Sex’, University of Chicago Legal Forum, 1989(1/Article 8): 139–67.
Crenshaw, K. (1991). ‘Mapping the Margins: Intersectionality,
Identity Politics, and Violence Against Women of Color’, Stanford
Law Review, 43(6): 1241–99.
Creswell, J. W., and Plano Clark, V. L. (2011). Designing and
Conducting Mixed Methods Research, 2nd edn. Thousand Oaks, CA:
Sage.
Creswell, J. W., and Plano Clark, V. L. (2018). Designing and
Conducting Mixed Methods Research—International Student
Edition, 3rd edn. Thousand Oaks, CA: Sage.
Creswell, J. W., and Tashakkori, A. (2007). ‘Developing Publishable
Mixed Methods Manuscripts’, Journal of Mixed Methods Research,
1: 107–11.
Croll, P. (1986). Systematic Classroom Observation. London:
Falmer.
Crompton, R., and Birkelund, G. (2000). ‘Employment and Caring in
British and Norwegian Banking’, Work, Employment and Society,
14: 331–52.
Crook, C., and Light, P. (2002). ‘Virtual Society and the Cultural
Practice of Study’, in S. Woolgar (ed.), Virtual Society? Technology,
Cyberbole, Reality. Oxford: Oxford University Press.
Curasi, C. F. (2001). ‘A Critical Exploration of Face-to-Face
Interviewing vs Computer-Mediated Interviewing’, International
Journal of Market Research, 43: 361–75.
Curtice, J. (2016). ‘A Question of Culture or Economics? Public
Attitudes to the European Union in Britain’, Political Quarterly, 87:
209–18.
Cyr, J. (2016). ‘The Pitfalls and Promise of Focus Groups as a Data
Collection Method’, Sociological Methods & Research, 45(2): 231–
59.
Dacin, M. T., Munir, K., and Tracey, P. (2010). ‘Formal Dining at
Cambridge Colleges: Linking Ritual Performance and Institutional
Maintenance’, Academy of Management Journal, 53: 1393–418.
Dai, N. T., Free, C., and Gendron, Y. (2019). ‘Interview-Based
Research in Accounting 2000–2014: Informal Norms, Translation
and Vibrancy’, Management Accounting Research, 42: 26–38.
Dale, A., Arber, S., and Proctor, M. (1988). Doing Secondary
Analysis. London: Unwin Hyman.
Davidson, J., Paulus, T., and Jackson, K. (2016). ‘Speculating on the
Future of Digital Tools for Qualitative Research’, Qualitative
Inquiry, 22(7): 606–10.
Davis, K., Randall, D. P., Ambrose, A., and Orand, M. (2015). ‘“I Was
Bullied Too”: Stories of Bullying and Coping in an Online
Community’, Information, Communication and Society, 18: 357–75.
de Bruijne, M., and Wijnant, A. (2014). ‘Improving Response Rates
and Questionnaire Design for Mobile Web Surveys’, Public Opinion
Quarterly, 78(4): 951–62.
de Certeau, M. (1984). The Practice of Everyday Life. Berkeley:
University of California Press.
De Leeuw, E. D., and Hox, J. P. (2015). ‘Survey Mode and Mode
Effects’, in U. Engel, B. Jann, P. Lynch, A. Scherpenzeel, and P.
Sturgis (eds), Improving Survey Methods: Lessons from Recent
Research. New York: Routledge.
de Rada, V., and Domínguez, J. (2015). ‘The Quality of Responses to
Grid Questions as used in Web Questionnaires (Compared with
Paper Questionnaires)’, International Journal of Social Research
Methodology, 18(4): 337–48.
De Vaus, D. (2013). Surveys in Social Research. London: Routledge.
De Vries, R. (2018). Critical Statistics: Seeing Beyond the Headlines.
Basingstoke, UK: Palgrave.
Deacon, D. (2007). ‘Yesterday’s Papers and Today’s Technology:
Digital Newspaper Archives and “Push Button” Content Analysis’,
European Journal of Communication, 22: 5–25.
Deacon, D., Fenton, N., and Bryman, A. (1999). ‘From Inception to
Reception: The Natural History of a News Item’, Media, Culture and
Society, 21: 5–31.
Deakin, H., and Wakefield, K. (2014). ‘Skype Interviewing:
Reflections of Two PhD Researchers’, Qualitative Research, 14(5):
603–16.
Delamont, S., and Atkinson, P. (2004). ‘Qualitative Research and the
Postmodern Turn’, in C. Hardy and A. Bryman (eds), Handbook of
Data Analysis. London: Sage.
Delamont, S., and Hamilton, D. (1984). ‘Revisiting Classroom
Research: A Continuing Cautionary Tale’, in S. Delamont (ed.),
Readings on Interaction in the Classroom. London: Methuen.
Demasi, M. A. (2019). ‘Facts as Social Action in Political Debates
about the European Union’, Political Psychology, 40(1): 3–20.
Demetry, D. (2013). ‘Regimes of Meaning: The Intersection of Space
and Time in Kitchen Cultures’, Journal of Contemporary
Ethnography, 42: 576–607.
Denscombe, M. (2006). ‘Web-Based Questionnaires and the Mode
Effect: An Evaluation Based on Completion Rates and Data Contents
of Near-Identical Questionnaires Delivered in Different Modes’,
Social Science Computer Review, 24: 246–54.
Denzin, N. K. (1968). ‘On the Ethics of Disguised Observation’,
Social Problems, 15: 502–4.
Denzin, N. K. (1970). The Research Act in Sociology. Chicago:
Aldine.
Denzin, N. K., and Lincoln, Y. S. (1994). ‘Introduction: Entering the
Field of Qualitative Research’, in N. K. Denzin and Y. S. Lincoln
(eds), Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Denzin, N. K., and Lincoln, Y. S. (2000). Handbook of Qualitative
Research, 2nd edn. Thousand Oaks, CA: Sage.
Denzin, N. K., and Lincoln, Y. S. (2005a). Handbook of Qualitative
Research, 3rd edn. Thousand Oaks, CA: Sage.
Denzin, N. K., and Lincoln, Y. S. (2005b). ‘Introduction: The
Discipline and Practice of Qualitative Research’, in N. K. Denzin and
Y. S. Lincoln (eds), Handbook of Qualitative Research, 3rd edn.
Thousand Oaks, CA: Sage.
Denzin, N. K., and Lincoln, Y. S. (2011). ‘Preface’, in N. K. Denzin
and Y. S. Lincoln (eds), Handbook of Qualitative Research, 4th edn.
Los Angeles: Sage.
Department of Health [UK] (2005). Research Governance
Framework for Health and Social Care. London: Department of
Health.
Derrick, B., and White, P. (2017). ‘Comparing Two Samples from an
Individual Likert Question’, International Journal of Mathematics
and Statistics, 18(3): 1–13.
DeVault, M. L., and Gross, G. (2012). ‘Feminist Qualitative
Interviewing: Experience, Talk, and Knowledge’, in S. N. Hesse-Biber
(ed.), Handbook of Feminist Research: Theory and Praxis, 2nd edn.
Thousand Oaks, CA: Sage, 173–97.
Díaz de Rada, V. D., and Domínguez-Álvarez, J. A. (2013). ‘Response
Quality of Self-Administered Questionnaires: A Comparison Between
Paper and Web Questionnaires’, Social Science Computer Review,
32: 256–69.
Diener, E., and Crandall, R. (1978). Ethics in Social and Behavioral
Research. Chicago: University of Chicago Press.
Diener, E., Inglehart, R., and Tay, L. (2013). ‘Theory and Validity of
Life Satisfaction Scales’, Social Indicators Research, 112: 497–527.
Diersch, N., and Walther, E. (2016). ‘The Impact of Question Format,
Context, and Content on Survey Answers in Early and Late
Adolescence’, Journal of Official Statistics, 32(2): 307–28.
Dillman, D. A., Smyth, J. D., and Christian, L. M. (2014). Internet,
Phone, Mail, and Mixed-Mode Surveys: The Tailored Design
Method, 4th edn. Hoboken, NJ: Wiley.
Dingwall, R. (1980). ‘Ethics and Ethnography’, Sociological Review,
28: 871–91.
Ditton, J. (1977). Part-Time Crime: An Ethnography of Fiddling and
Pilferage. London: Macmillan.
Dixon-Woods, M., Angell, E., Ashcroft, R., and Bryman, A. (2007).
‘Written Work: The Social Functions of Research Ethics Committee
Letters’, Social Science and Medicine, 65: 792–802.
Dommeyer, C. J., and Moriarty, E. (2000). ‘Comparison of Two
Forms of an E-Mail Survey: Embedded vs Attached’, International
Journal of Market Research, 42: 39–50.
Döring, N., Reif, A., and Poeschl, S. (2016). ‘How Gender-
Stereotypical are Selfies? A Content Analysis and Comparison with
Magazine Adverts’, Computers in Human Behavior, 55(B): 955–62.
Douglas, J. D. (1967). The Social Meanings of Suicide. Princeton:
Princeton University Press.
Douglas, J. D. (1976). Investigative Social Research: Individual and
Team Field Research. Beverly Hills, CA: Sage.
Drabble, L., Trocki, K. F., Salcedo, B., Walker, P. C., and Korcha, R.
A. (2016). ‘Conducting Qualitative Interviews by Telephone: Lessons
Learned from a Study of Alcohol Use among Sexual Minority and
Heterosexual Women’, Qualitative Social Work, 15(1): 118–33.
Drdová, L., and Saxonberg, S. (2020). ‘Dilemmas of a Subculture: An
Analysis of BDSM Blogs about Fifty Shades of Grey’, Sexualities,
23(5–6): 987–1008.
Durkheim, E. (1938/1895). The Rules of Sociological Method, trans.
S. A. Solovay and J. H. Mueller. New York: Free Press.
Durkheim, E. (1952/1897). Suicide: A Study in Sociology, trans. J.
A. Spaulding and G. Simpson. London: Routledge & Kegan Paul.
Dwyer, S. C., and Buckle, J. L. (2009). ‘The Space Between: On Being
an Insider–Outsider in Qualitative Research’, International Journal
of Qualitative Methods, 8(1): 54–63.
Eaton, M. A. (2018). ‘Manifesting Spirits: Paranormal Investigation
and the Narrative Development of a Haunting’, Journal of
Contemporary Ethnography, 48(2): 155–82.
Edmiston, D. (2018). ‘The Poor “Sociological Imagination” of the
Rich: Explaining Attitudinal Divergence towards Welfare, Inequality,
and Redistribution’, Social Policy & Administration, 52(5): 983–97.
Edwards, A. M., Housley, W., Williams, M. L., Sloan, L., and
Williams, M. D. (2013). ‘Digital Social Research, Social Media and
the Sociological Imagination: Surrogacy, Augmentation and Re-
orientation’, International Journal of Social Research Methodology
16(3): 245–60, https://doi.org/10.1080/13645579.2013.774185.
Eisenhardt, K. M. (1989). ‘Building Theories from Case Study
Research’, Academy of Management Review, 14: 532–50.
Elias, N. (2000). The Civilising Process: Sociogenetic and
Psychogenetic Investigations. Oxford: Blackwell.
Elliott, C., and Ellingworth, D. (1997). ‘Assessing the
Representativeness of the 1992 British Crime Survey: The Impact of
Sampling Error and Response Biases’, Sociological Research Online,
2(4), www.socresonline.org.uk/2/4/3.html (accessed 1 March
2021).
Elliott, H. (1997). ‘The Use of Diaries in Sociological Research on
Health Experience’, Sociological Research Online, 2(2),
www.socresonline.org.uk/2/2/7.html (accessed 1 March 2021).
Elliott, V. (2018). ‘Thinking about the Coding Process in Qualitative
Data Analysis’, Qualitative Report, 23(11): 2850–61,
https://nsuworks.nova.edu/tqr/vol23/iss11/14.
Ellis, C., and Bochner, A. (2000). ‘Autoethnography, Personal
Narrative, Reflexivity: Researcher as Subject’, in N. K. Denzin and Y.
S. Lincoln (eds), Handbook of Qualitative Research, 2nd edn.
Thousand Oaks, CA: Sage, 733–68.
Emerson, R. M. (1987). ‘Four Ways to Improve the Craft of
Fieldwork’, Journal of Contemporary Ethnography, 16: 69–89.
Erel, U., Reynolds, T., and Kaptani, E. (2017). ‘Participatory Theatre
for Transformative Social Research’, Qualitative Research, 17(3):
302–12.
Erikson, K. T. (1967). ‘A Comment on Disguised Observation in
Sociology’, Social Problems, 14: 366–73.
Ernst, F. A. (2014). Ethnographic Fieldwork: Conducting Close-
Proximity Research on Mexican Organized Crime. SAGE Research
Methods Cases Part 1,
https://dx.doi.org/10.4135/978144627305013504182.
Ertesvag, S. K., Sammons, P., and Blossing, U. (2020). ‘Integrating
Data in a Complex Mixed-Methods Classroom Interaction Study’,
British Educational Research Journal, https://doi.org/10.1002/berj.3678.
Eschleman, K. J., Bowling, N. A., Michel, J. S., and Burns, G. N.
(2014). ‘Perceived Intent of Supervisor as a Moderator Between
Abusive Supervision and Counterproductive Work Behaviours’,
Work and Stress, 28: 362–75.
Evans, A., Elford, J., and Wiggins, R. D. (2008). ‘Using the Internet
in Qualitative Research’, in C. Willig and W. Stainton Rogers (eds),
SAGE Handbook of Qualitative Methods in Psychology. London:
Sage.
Fairclough, N. (1995). Critical Discourse Analysis: The Critical Study
of Language. London: Longman.
Faraday, A., and Plummer, K. (1979). ‘Doing Life Histories’,
Sociological Review, 27: 773–98.
Farkas, J., Schou, J., and Neumayer, C. (2018). ‘Cloaked Facebook
Pages: Exploring Fake Islamist Propaganda in Social Media’, New
Media & Society, 20(5): 1850–67.
Farnsworth, J., and Boon, B. (2010). ‘Analysing Group Dynamics
Within the Focus Group’, Qualitative Research, 10(5): 605–24.
Faulk, N., and Crist, E. (2020). ‘A Mixed-Methods Study of Library
Communication with Online Students and Faculty Members’, College
& Research Libraries, 81(3): 361–77.
Fenton, N., Bryman, A., and Deacon, D. (1998). Mediating Social
Science. London: Sage.
Ferguson, H. (2016). ‘What Social Workers Do in Performing Child
Protection Work: Evidence from Research into Face‐to‐Face
Practice’, Child & Family Social Work, 21(3): 283–94.
Festinger, L., Riecken, H. W., and Schachter, S. (1956). When
Prophecy Fails. New York: Harper & Row.
Fetters, M. D. (2020). The Mixed Methods Research Workbook:
Activities for Designing, Implementing, and Publishing Projects.
Los Angeles: Sage.
Field, A. (2017). Discovering Statistics Using IBM SPSS Statistics.
London: Sage.
Fielding, N. (1982). ‘Observational Research on the National Front’,
in M. Bulmer (ed.), Social Research Ethics. London: Macmillan.
Filmer, P., Phillipson, M., Silverman, D., and Walsh, D. (1972). New
Directions in Sociological Theory. London: Collier-Macmillan.
Finch, J. (1987). ‘The Vignette Technique in Survey Research’,
Sociology, 21: 105–14.
Fink, K. (2018). ‘Opening the Government’s Black Boxes: Freedom of
Information and Algorithmic Accountability’, Information,
Communication & Society, 21(10): 1453–71.
Fish, R. (2017). A Feminist Ethnography of Secure Wards for
Women with Learning Disabilities: Locked Away. London:
Routledge.
Fisher, K., and Layte, R. (2004). ‘Measuring Work-Life Balance
using Time Diary Data’, Electronic International Journal of Time
Use Research, 1: 1–13, www.eijtur.org/pdf/volumes/eIJTUR-1-1-
3_Fisher_Layte.pdf (accessed 1 March 2021).
Flanders, N. (1970). Analyzing Teacher Behavior. Reading, MA:
Addison-Wesley.
Fletcher, J. (1966). Situation Ethics. London: SCM.
Flick, U. (ed.) (2014). SAGE Handbook of Qualitative Data Analysis.
London: Sage.
Flick, U. (2018). An Introduction to Qualitative Research. London:
Sage.
Flyvbjerg, B. (2003). ‘Five Misunderstandings about Case Study
Research’, in C. Seale, G. Gobo, J. F. Gubrium, and D. Silverman
(eds), Qualitative Research Practice. London: Sage.
Fosnacht, K., Sarraf, S., Howe, E., and Peck, L. (2017). ‘How
Important are High Response Rates for College Surveys?’, Review of
Higher Education, 40(2): 245–65.
Foster, L., Diamond, I., and Jefferies, J. (2014). Beginning Statistics:
An Introduction for Social Scientists. London: Sage.
Foster, L., and Gunn, A. (2017). 'Special Issue on Social Science
Research Methods Education', Teaching Public Administration,
35(3): 237–40.
Foster, L., and Heneghan, M. (2018). ‘Pensions Planning in the UK:
A Gendered Challenge’, Critical Social Policy, 38(2): 345–66.
Foucault, M. (1977). Discipline and Punish. Harmondsworth, UK:
Penguin.
Foucault, M. (1991). The Foucault Effect: Studies in
Governmentality. Chicago: University of Chicago Press.
Foucault, M. (1998). The Will to Knowledge. London: Penguin.
Foucault, M. (2009). Security, Territory, Population: Lectures at
the Collège de France, 1977–1978. New York: Picador.
Fowler, F. J. (2013). Survey Research Methods, 5th edn. London:
Sage.
Fowler, F. J., and Mangione, T. W. (1990). Standardized Survey
Interviewing: Minimizing Interviewer-Related Error. Beverly Hills,
CA: Sage.
Fox, B. H., and Jennings, W. G. (2014). ‘How to Write a Methodology
and Results Section for Empirical Research’, Journal of Criminal
Justice Education, 25: 137–56.
Fox, J., and Fogelman, K. (1990). ‘New Possibilities for Longitudinal
Studies of Intergenerational Factors in Child Health and
Development’, in D. Magnusson and L. R. Bergman (eds), Data
Quality in Longitudinal Research. Cambridge: Cambridge
University Press.
Frean, A. (1998). ‘Children Read More after Arrival of TV’, The
Times, 29 April: 7.
Fredriksson, P., Öckert, B., and Oosterbeek, H. (2013). ‘Long-Term
Effects of Class Size’, Quarterly Journal of Economics, 128(1): 249–
85.
Freese, J., and Peterson, D. (2017). ‘Replication in Social Science’,
Annual Review of Sociology, 43: 147–65.
Frey, J. H. (2004). ‘Telephone Surveys’, in M. S. Lewis-Beck, A.
Bryman, and T. F. Liao (eds), The Sage Encyclopedia of Social
Science Research Methods. Thousand Oaks, CA: Sage.
Fricker, R. (2008). ‘Sampling Methods for Web and E-mail Surveys’,
in N. Fielding, R. M. Lee, and G. Blank (eds), The Sage Handbook of
Online Research Methods. London: Sage, 195–216.
Gallaher, C. M., and WinklerPrins, A. M. G. A. (2016). ‘Effective Use
of Mixed Methods in African Livelihoods Research’, African
Geographical Review, 35(1): 83–93.
Gambetta, D., and Hamill, H. (2005). Streetwise: How Taxi Drivers
Establish their Customers’ Trustworthiness. New York: Russell Sage
Foundation.
Gambling Commission (2017). Gambling Participation in 2016:
Behaviour, Awareness and Attitudes,
www.gamblingcommission.gov.uk/PDF/survey-data/Gambling-
participation-in-2016-behaviour-awareness-and-attitudes.pdf
(accessed 27 November 2020).
Gans, H. J. (1962). The Urban Villagers. New York: Free Press.
Gans, H. J. (1968). ‘The Participant-Observer as Human Being:
Observations on the Personal Aspects of Field Work’, in H. S. Becker
(ed.), Institutions and the Person: Papers Presented to Everett C.
Hughes. Chicago: Aldine.
Garcia, A. C., Standlee, A. I., Bechkoff, J., and Cui, Y. (2009).
‘Ethnographic Approaches to the Internet and Computer-Mediated
Communication’, Journal of Contemporary Ethnography, 38 (1):
52–84.
Garfinkel, H. (1967). Studies in Ethnomethodology. Englewood
Cliffs, NJ: Prentice-Hall.
Garthwaite, K. (2016). ‘Stigma, Shame and “People Like Us”: An
Ethnographic Study of Foodbank Use in the UK’, Journal of Poverty
and Social Justice, 24(3): 277–89.
Geertz, C. (1973a). ‘Thick Description: Toward an Interpretive
Theory of Culture’, in C. Geertz, The Interpretation of Cultures. New
York: Basic Books.
Geertz, C. (1973b). ‘Deep Play: Notes on the Balinese Cockfight’, in C.
Geertz, The Interpretation of Cultures. New York: Basic Books.
Geertz, C. (1983). Local Knowledge. New York: Basic Books.
Gershuny, J., and Sullivan, O. (2014). ‘Household Structure and
Housework: Assessing the Contributions of All Household Members,
with a Focus on Children and Youths’, Review of Economics of the
Household, 12: 7–27.
Gerson, K., and Horowitz, R. (2002). ‘Observation and Interviewing:
Options and Choices’, in T. May (ed.), Qualitative Research in
Action. London: Sage.
Gibson, L. (2010). ‘Realities Toolkit #09: Using Email Interviews’,
http://eprints.ncrm.ac.uk/1303/1/09-toolkit-email-
interviews.pdf (accessed 1 March 2021).
Gibson, N. (2004). ‘Action Research’, in M. S. Lewis-Beck, A.
Bryman, and T. F. Liao (eds), The Sage Encyclopedia of Social
Science Research Methods. Thousand Oaks, CA: Sage.
Gicevic, S., Aftosmes-Tobio, A., Manganello, J., Ganter, C., Simon, C.,
Newlan, S., and Davison, K. (2016). ‘Parenting and Childhood
Obesity Research: A Quantitative Content Analysis of Published
Research 2009–2015’, Obesity Reviews, 17(8): 724–34.
Gilbert, G. N., and Mulkay, M. (1984). Opening Pandora’s Box: A
Sociological Analysis of Scientists’ Discourse. Cambridge:
Cambridge University Press.
Gill, R. (1996). ‘Discourse Analysis: Practical Implementation’, in J.
T. E. Richardson (ed.), Handbook of Qualitative Research Methods
for Psychology and the Social Sciences. Leicester, UK: BPS Books.
Gill, R. (2000). ‘Discourse Analysis’, in M. W. Bauer and G. Gaskell
(eds), Qualitative Researching with Text, Image and Sound.
London: Sage.
Gioia, D. A., Corley, K. G., and Hamilton, A. L. (2012). ‘Seeking
Qualitative Rigor in Inductive Research: Notes on the Gioia
Methodology’, Organizational Research Methods, 16: 15–31.
Giordano, C., Piras, S., Boschini, M., and Falasconi, L. (2018). ‘Are
Questionnaires a Reliable Method to Measure Food Waste? A Pilot
Study on Italian Households’, British Food Journal, 120(12): 2885–
97.
Giulianotti, R. (1995). ‘Participant Observation and Research into
Football Hooliganism: Reflections on the Problems of Entrée and
Everyday Risks’, Sociology of Sport Journal, 12: 1–20.
Glaser, B. G. (1992). Basics of Grounded Theory Analysis. Mill
Valley, CA: Sociology Press.
Glaser, B. G., and Strauss, A. L. (1967). The Discovery of Grounded
Theory: Strategies for Qualitative Research. Chicago: Aldine.
Glucksmann, M. (1994). ‘The Work of Knowledge and the Knowledge
of Women’s Work’, in M. Maynard and J. Purvis (eds), Researching
Women’s Lives from a Feminist Perspective. London: Taylor &
Francis.
Gnambs, T., and Kaspar, K. (2015). ‘Disclosure of Sensitive
Behaviors Across Self-Administered Survey Modes: A Meta-analysis’,
Behavior Research Methods, 47(4): 1237–59.
Goffman, A. (2009). ‘On the Run: Wanted Men in a Philadelphia
Ghetto’, American Sociological Review, 74: 339–57.
Goffman, A. (2014). On the Run: Fugitive Life in an American City.
Chicago: University of Chicago Press.
Goffman, E. (1956). The Presentation of Self in Everyday Life. New
York: Doubleday.
Goffman, E. (1979). Gender Advertisements. New York: Harper and
Row.
Gold, R. L. (1958). ‘Roles in Sociological Fieldwork’, Social Forces,
36: 217–23.
Goldthorpe, J. H., Lockwood, D., Bechhofer, F., and Platt, J. (1968).
The Affluent Worker: Industrial Attitudes and Behaviour.
Cambridge: Cambridge University Press.
Goode, E. (1996). ‘The Ethics of Deception in Social Research: A Case
Study’, Qualitative Sociology, 19: 11–33.
Goodings, L., Brown, S. D., and Parker, M. (2013). ‘Organising
Images of Futures-Past: Remembering the Apollo Moon Landings’,
International Journal of Management Concepts and Philosophy, 7:
263–83.
Gottdiener, M. (1982). ‘Disneyland: A Utopian Urban Space’, Urban
Life, 11: 139–62.
Gottdiener, M. (1997). The Theming of America: Dreams, Visions
and Commercial Spaces. Boulder, CO: Westview Press.
Goulding, C. (2009). ‘Grounded Theory Perspectives in
Organizational Research’, in D. A. Buchanan and A. Bryman (eds),
The SAGE Handbook of Organizational Research Methods. London:
Sage.
Gouldner, A. (1968). ‘The Sociologist as Partisan’, American
Sociologist, 3: 103–16.
Grant, D., Hardy, C., Oswick, C., and Putnam, L. L. (2004).
‘Introduction: Organizational Discourse: Exploring the Field’, in D.
Grant, C. Hardy, C. Oswick, and L. Putnam (eds), The SAGE
Handbook of Organizational Discourse. London: Sage.
Grassian, D. T. (2020). ‘The Dietary Behaviors of Participants in UK-
Based Meat Reduction and Vegan Campaigns—A Longitudinal,
Mixed-Methods Study’, Appetite, 154, doi:
10.1016/j.appet.2020.104788.
Grazian, D. (2003). Blue Chicago: The Search for Authenticity in
Urban Blues Clubs. Chicago: University of Chicago Press.
Greaves, F., Laverty, A. A., Cano, D. R., Moilanen, K., Pulman, S.,
Darzi, A., and Millett, C. (2014). ‘Tweets about Hospital Quality: A
Mixed Methods Study’, BMJ Quality and Safety, doi: 10.1136/bmjqs-2014-002875.
Green, J., Steinbach, R., and Datta, J. (2012). ‘The Travelling Citizen:
Emergent Discourses of Moral Mobility in a Study of Cycling in
London’, Sociology, 46: 272–89.
Greene, J. C. (1994). ‘Qualitative Program Evaluation: Practice and
Promise’, in N. K. Denzin and Y. S. Lincoln (eds), Handbook of
Qualitative Research. Thousand Oaks, CA: Sage.
Greene, J. C. (2000). ‘Understanding Social Programs through
Evaluation’, in N. K. Denzin and Y. S. Lincoln (eds), Handbook of
Qualitative Research, 2nd edn. Thousand Oaks, CA: Sage.
Greenland, S., Senn, S., Rothman, K., Carlin, J., Poole, C., Goodman,
S., and Altman, D. (2016). ‘Statistical Tests, P Values, Confidence
Intervals, and Power: A Guide to Misinterpretations’, European
Journal of Epidemiology, 31(4): 337–50.
Grele, R. J. (1998). ‘Movement Without Aim: Methodological and
Theoretical Problems in Oral History’, in R. Perks and A. Thomson
(eds), The History Reader. London: Routledge.
Grogan, S., Gill, S., Brownbridge, K., Kilgariff, S., and Whalley, A.
(2013). ‘Dress Fit and Body Image: A Thematic Analysis of Women’s
Accounts During and After Trying on Clothes’, Body Image, 10:
380–8.
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer,
E., and Tourangeau, R. (2009). Survey Methodology, 2nd edn.
Hoboken, NJ: Wiley.
Guba, E. G. (1985). ‘The Context of Emergent Paradigm Research’, in
Y. S. Lincoln (ed.), Organization Theory and Inquiry: The
Paradigm Revolution. Beverly Hills, CA: Sage.
Guba, E. G., and Lincoln, Y. S. (1994). ‘Competing Paradigms in
Qualitative Research’, in N. K. Denzin and Y. S. Lincoln (eds),
Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Guéguen, N., and Jacob, C. (2012). ‘Lipstick and Tipping Behavior:
When Red Lipstick Enhance Waitresses Tips’, International Journal
of Hospitality Management, 31: 1333–5.
Guest, G., Bunce, A., and Johnson, L. (2006). ‘How Many Interviews
are Enough? An Experiment with Data Saturation and Variability’,
Field Methods, 18: 59–82.
Guest, G., Namey, E., and McKenna, K. (2017a). ‘How Many Focus
Groups are Enough? Building an Evidence Base for Nonprobability
Sample Sizes’, Field Methods, 29(1): 3–22.
Guest, G., Namey, E., Taylor, J., Eley, N., and McKenna, K. (2017b).
‘Comparing Focus Groups and Individual Interviews: Findings from
a Randomized Study’, International Journal of Social Research
Methodology, 20(6): 693–708.
Gullifer, J. M., and Tyson, G. A. (2014). ‘Who Has Read the Policy on
Plagiarism? Unpacking Students’ Understanding of Plagiarism’,
Studies in Higher Education, 39: 1202–18.
Gunderman, R., and Kane, L. (2013). ‘A Study in Deception:
Psychology’s Sickness’, The Atlantic, 8 April,
www.theatlantic.com/health/archive/2013/04/a-study-in-
deception-psychologys-sickness/274739/.
Gunn, E., and Naper, A. (2016). ‘Social Media Incumbent Advantage:
Barack Obama and Mitt Romney’s Tweets in the 2012 US
Presidential Election Campaign’, in A. Bruns, G. Enli, E. Skogerbo,
and A. Larsson (eds), The Routledge Companion to Social Media
and Politics. London: Routledge, 364–78.
Guschwan, M. (2017). ‘Ethnography and the Italian Ultrà’, Sport in
Society, 21(6): 977–92, doi: 10.1080/17430437.2017.1300400.
Gusterson, H. (1996). Nuclear Rites: A Weapons Laboratory at the
End of the Cold War. Berkeley: University of California Press.
Gwet, K. (2014). Handbook of Inter-rater Reliability: The Definitive
Guide to Measuring the Extent of Agreement Among Raters.
Gaithersburg, MD: Advanced Analytics.
Halder, A., Molyneaux, J., Luby, S., and Ram, P. (2013). ‘Impact of
Duration of Structured Observations on Measurement of
Handwashing Behaviour at Critical Times’, BMC Public Health, 13:
705.
Halford, S. (2018). ‘The Ethical Disruptions of Social Media Data:
Tales from the Field’, in K. Woodfield (ed.), The Ethics of Online
Research. Bingley, UK: Emerald, 13–26.
Halfpenny, P. (1979). ‘The Analysis of Qualitative Data’, Sociological
Review, 27: 799–825.
Halkier, B. (2011). ‘Methodological Practicalities in Analytical
Generalization’, Qualitative Inquiry, 17(9): 787–97.
Hall, W. S., and Guthrie, L. F. (1981). ‘Cultural and Situational
Variation in Language Function and Use—Methods and Procedures
for Research’, in J. L. Green and C. Wallatt (eds), Ethnography and
Language in Educational Settings. Norwood, NJ: Ablex.
Hallett, R. E., and Barber, K. (2014). ‘Ethnographic Research in a
Cyber Era’, Journal of Contemporary Ethnography, 43: 306–30.
Hammersley, M. (1989). The Dilemma of Qualitative Method:
Herbert Blumer and the Chicago Tradition. London: Routledge.
Hammersley, M. (1992a). ‘By What Criteria should Ethnographic
Research be Judged?’, in M. Hammersley, What’s Wrong with
Ethnography? London: Routledge.
Hammersley, M. (1992b). ‘Deconstructing the Qualitative-
Quantitative Divide’, in M. Hammersley, What’s Wrong with
Ethnography? London: Routledge.
Hammersley, M. (2011). Methodology: Who Needs It? London: Sage.
Hammersley, M. (2018). ‘What is Ethnography? Can It Survive?
Should It?’, Ethnography and Education, 13(1): 1–17.
Hammersley, M., and Atkinson, P. (1995). Ethnography: Principles
in Practice, 2nd edn. London: Routledge.
Hammersley, M., and Atkinson, P. (2019). Ethnography: Principles
in Practice, 4th edn. London: Routledge.
Hammersley, M., and Gomm, R. (2000). ‘Bias in Social Research’, in
M. Hammersley (ed.), Taking Sides in Social Research: Essays in
Partisanship and Bias. London: Routledge.
Hammond, P. (1964). Sociologists at Work. New York: Basic Books.
Hand, M. (2017). ‘Visuality in Social Media: Researching Images,
Circulations and Practices’, in L. Sloan and A. Quan-Haase (eds), The
SAGE Handbook of Social Media Research Methods. London: Sage,
215–31.
Hantrais, L. (1996). ‘Comparative Research Methods’, Social
Research Update, 13, http://sru.soc.surrey.ac.uk/SRU13.html
(accessed 1 March 2021).
Haraway, D. (2003). ‘Situated Knowledges: The Science Question in
Feminism and the Privilege of Partial Perspective’, in Y. S. Lincoln
and N. K. Denzin (eds), Turning Points in Qualitative Research:
Tying Knots in a Handkerchief. Lanham, MD: AltaMira, 21–46.
Hardie-Bick, J. (2011). ‘Skydiving and the Metaphorical Edge’, in D.
Hobbs (ed.), Ethnography in Context, vol. 3. London: Sage.
Hardie-Bick, J., and Bonner, P. (2016). ‘Experiencing Flow,
Enjoyment and Risk in Skydiving and Climbing’, Ethnography,
17(3): 369–87.
Hardie-Bick, J., and Scott, S. (2017). ‘Tales from the Drop Zone:
Roles, Risks and Dramaturgical Dilemmas’, Qualitative Research,
17(2): 246–59.
Harding, S. G. (1986). The Science Question in Feminism. Ithaca,
NY: Cornell University Press.
Hardy, M., and Bryman, A. (2004). ‘Introduction: Common Threads
among Techniques of Data Analysis’, in M. Hardy and A. Bryman
(eds), Handbook of Data Analysis. London: Sage.
Hargittai, E., and Karaoglu, G. (2018). ‘Biases of Online Political
Polls: Who Participates?’, Socius: Sociological Research for a
Dynamic World, 4(2): 1–7.
Harper, D. (2002). ‘Talking about Pictures: A Case for Photo
Elicitation’, Visual Studies, 17: 13–26.
Harrison, N., and McCaig, C. (2015). ‘An Ecological Fallacy in Higher
Education Policy: The Use, Overuse and Misuse of “Low
Participation Neighbourhoods”’, Journal of Further and Higher
Education, 39(6): 793–817.
Hartsock, N. C. (1983). Money, Sex, and Power: Toward a Feminist
Historical Materialism (p. 247). New York: Longman.
Hauken, M. A., Larsen, T. M. B., and Holsen, I. (2019). ‘“Back on
Track”: A Longitudinal Mixed Methods Study on the Rehabilitation
of Young Adult Cancer Survivors’, Journal of Mixed Methods
Research, 13(3): 339–60,
https://doi.org/10.1177/1558689817698553.
Hauser, O., Linos, E., and Rogers, T. (2017). ‘Innovation with Field
Experiments: Studying Organizational Behaviors in Actual
Organizations’, Research in Organizational Behavior, 37: 185–98.
Healy, K., and Moody, J. (2014). ‘Data Visualisation in Sociology’,
Annual Review of Sociology, 40: 105–28.
Heap, J. L., and Roth, P. A. (1973). ‘On Phenomenological Sociology’,
American Sociological Review, 38: 354–67.
Heath, S., Davies, K., Edwards, G., and Scicluna, R. (2017). Shared
Housing, Shared Lives: Everyday Experiences Across the
Lifecourse. Abingdon, UK: Routledge.
Heerwegh, D., and Loosveldt, G. (2008). ‘Face-to-Face versus Web
Surveying in a High Internet-Coverage Population: Differences in
Response Quality’, Public Opinion Quarterly, 72: 836–46.
Henwood, K., Shirani, F., and Finn, M. (2011). ‘“So you Think we’ve
Moved, Changed, the Representation’s got More What?”
Methodological and Analytical Reflections on Visual (Photo-
Elicitation) Methods Used in the Men-as-Fathers Study’, in P.
Reavey (ed.), Visual Methods in Psychology: Using and Interpreting
Images in Qualitative Research. London: Psychology Press.
Herbell, K., and Zauszniewski, J. (2018). ‘Facebook or Twitter?
Effective Recruitment Strategies for Family Caregivers’, Applied
Nursing Research, 41(6): 1–4.
Heritage, J. (1984). Garfinkel and Ethnomethodology. Cambridge:
Polity.
Heritage, J. (1987). ‘Ethnomethodology’, in A. Giddens and J. H.
Turner (eds), Social Theory Today. Cambridge: Polity.
Hesse-Biber, S. N. (ed.) (2013). Feminist Research Practice: A
Primer. London: Sage.
Hester, S. (2016). ‘Answering Questions Instead of Telling Stories:
Everyday Breaching in a Family Meal’, Journal of Pragmatics, 102:
54–66.
Hewson, C., and Laurent, D. (2008). ‘Research Design and Tools for
Internet Research’, in N. Fielding, R. M. Lee, and G. Blank (eds), The
SAGE Handbook of Online Research Methods. London: Sage.
Heyvaert, M., Hannes, K., and Onghena, P. (2016). Using Mixed
Methods Research Synthesis for Literature Reviews: The Mixed
Methods Research Synthesis Approach, vol. 4. London: Sage.
Hill, K., and Davis, A. (2018). Making Ends Meet Below the
Minimum Income Standard: Families’ Experiences Over Time.
Centre for Research in Social Policy Working Paper 662,
www.lboro.ac.uk/media/wwwlboroacuk/content/crsp/downloads/re
ports/Making%20Ends%20Meet%20Below%20the%20MIS%20fam
ilies%20experiences%20over%20time.pdf.
Hill, K., Davis, A., Hirsch, D., and Marshall, L. (2016). Falling Short:
The Experiences of Families Below the Minimum Income Standard.
Joseph Rowntree Foundation, www.jrf.org.uk/report/falling-
short-experiences-families-below-minimum-income-standard.
Hine, C. (2000). Virtual Ethnography. London: Sage.
Hine, C. (2008). ‘Virtual Ethnography: Models, Varieties,
Affordances’, in N. Fielding, R. M. Lee, and G. Blank (eds), The SAGE
Handbook of Online Research Methods. London: Sage.
Hine, C. (2014). ‘Headlice Eradication as Everyday Engagement with
Science: An Analysis of Online Parenting Discussions’, Public
Understanding of Science, 23: 574–91.
Hirsch, J. (1981). Family Photographs. New York: Oxford University
Press.
Hobbs, D. (1988). Doing the Business: Entrepreneurship, the
Working Class and Detectives in the East End of London. Oxford:
Oxford University Press.
Hochschild, A. R. (1983). The Managed Heart. Berkeley and Los
Angeles: University of California Press.
Hodges, L. (1998). ‘The Making of a National Portrait’, The Times
Higher Education Supplement, 20 February: 22–3.
Hoefnagels, M. (2017). Research Design: The Logic of Social
Inquiry. London: Routledge.
Hofmans, J., Gelens, J., and Theuns, P. (2014). ‘Enjoyment as a
Mediator in the Relationship Between Task Characteristics and Work
Effort: An Experience Sampling Study’, European Journal of Work
and Organizational Psychology, 23: 693–705.
Holbrook, A. L., Green, M. C., and Krosnick, J. A. (2003). ‘Telephone
versus Face-to-Face Interviewing of National Probability Samples
with Long Questionnaires: Comparisons of Respondent Satisficing
and Social Desirability Response Bias’, Public Opinion Quarterly, 67:
79–125.
Holdaway, S. (1982). ‘“An Inside Job”: A Case Study of Covert
Research on the Police’, in M. Bulmer (ed.), Social Research Ethics.
London: Macmillan.
Holdaway, S. (1983). Inside the British Police: A Force at Work.
Oxford: Blackwell.
Holsti, O. R. (1969). Content Analysis for the Social Sciences and
Humanities. Reading, MA: Addison-Wesley.
Homan, R. (1991). The Ethics of Social Research. London: Longman.
Homan, R., and Bulmer, M. (1982). ‘On the Merits of Covert
Methods: A Dialogue’, in M. Bulmer (ed.), Social Research Ethics.
London: Macmillan.
Hood, J. C. (2007). ‘Orthodoxy vs. Power: The Defining Traits of
Grounded Theory’, in A. Bryant and K. Charmaz (eds), The SAGE
Handbook of Grounded Theory. Los Angeles: Sage.
Hordosy, R., and Clark, T. (2019). ‘Student Budgets and Widening
Participation: Comparative Experiences of Finance in Low and
Higher Income Undergraduates at a Northern Red Brick University’,
Social Policy and Administration, 53(5): 761–75.
Hu, J. (2020). ‘Horizontal or Vertical? The Effects of Visual
Orientation of Categorical Response Options on Survey Responses in
Web Surveys’, Social Science Computer Review, 38(6): 779–92.
Huberman, A. M., and Miles, M. B. (1994). ‘Data Management and
Analysis Methods’, in N. K. Denzin and Y. S. Lincoln (eds),
Handbook of Qualitative Research. Thousand Oaks, CA: Sage.
Hughes, C., and Cohen, R. L. (eds). (2013). Feminism Counts:
Quantitative Methods and Researching Gender. Abingdon, UK:
Routledge.
Hughes, G. (2000). ‘Understanding the Politics of Criminological
Research’, in V. Jupp, P. Davies, and P. Francis (eds), Doing
Criminological Research. London: Sage.
Hughes, J. A. (1990). The Philosophy of Social Research, 2nd edn.
Harlow, UK: Longman.
Humphreys, L. (1970). Tearoom Trade: Impersonal Sex in Public
Places. Chicago: Aldine.
Humphreys, L., Gill, P., and Krishnamurthy, B. (2014). ‘Twitter: A
Content Analysis of Personal Information’, Information,
Communication and Society, 17: 843–57.
Humphreys, M., and Watson, T. (2009). ‘Ethnographic Practices:
From “Writing-up Ethnographic Research” to “Writing
Ethnography”’, in S. Ybema, D. Yanow, H. Wels, and F. Kamsteeg
(eds), Organizational Ethnography: Studying the Complexities of
Everyday Life. London: Sage.
Hutchby, I., and Wooffitt, R. (1998). Conversation Analysis.
Cambridge: Polity.
Iacono, V. L., Symonds, P., and Brown, D. H. (2016). ‘Skype as a Tool
for Qualitative Research Interviews’, Sociological Research Online,
21(2): 1–12.
Ibrahim, Q. (2018). ‘Social Work Education in Arab Universities and
Educators’ Perceptions’, Social Work Education, 37(1): 78–91.
Irvine, A. (2011). ‘Duration, Dominance and Depth in Telephone and
Face-to-Face Interviews: A Comparative Exploration’, International
Journal of Qualitative Methods, 10(3): 202–20.
Irvine, A., Drew, P., and Sainsbury, R. (2013). ‘“Am I Not Answering
Your Questions Properly?” Clarification, Adequacy and
Responsiveness in Semi-Structured Telephone and Face-to-Face
Interviews’, Qualitative Research, 13: 87–106.
Israel, M., and Hay, I. (2004). Research Ethics for Social Scientists.
London: Sage.
Jack, I. (2010). ‘Five Boys: The Story of a Picture’, Intelligent Life
(spring) [not available at time of this writing].
Jackson, K., Paulus, T., and Woolf, N. H. (2018). ‘The Walking Dead
Genealogy: Unsubstantiated Criticisms of Qualitative Data Analysis
Software (QDAS) and the Failure to Put Them to Rest’, Qualitative
Report, 23(13): 74–91.
Jackson, W. (2020). ‘Researching the Policed: Critical Ethnography
and the Study of Protest Policing’, Policing and Society, 30(2): 169–
85, doi: 10.1080/10439463.2019.1593982.
Jæger, M. (2019). ‘Hello Beautiful? The Effect of Interviewer
Physical Attractiveness on Cooperation Rates and Survey Responses’,
Sociological Methods and Research, 48(1): 156–84.
James, N. (2016). ‘Using Email Interviews in Qualitative Educational
Research: Creating Space to Think and Time to Talk’, International
Journal of Qualitative Studies in Education, 29(2): 150–63.
Jamieson, J. (2000). ‘Negotiating Danger in Fieldwork on Crime: A
Researcher’s Tale’, in G. Lee-Treweek and S. Linkogle (eds), Danger
in the Field: Risk and Ethics in Social Research. London: Routledge.
Jangi, M., Fernandez-de-Las-Penas, C., Tara, M., Moghbeli, F.,
Ghaderi, F., and Javanshir, K. (2018). ‘A Systematic Review on
Reminder Systems in Physical Therapy’, Caspian Journal of Internal
Medicine, 9(1): 7–15.
Janis, I. L. (1982). Groupthink: Psychological Studies of Policy
Decisions and Fiascos, 2nd edn. Boston: Houghton-Mifflin.
Janmaat, J., and Keating, A. (2019). ‘Are Today’s Youth More
Tolerant? Trends in Tolerance among Young People in Britain’,
Ethnicities, 19(1): 44–65.
Jefferson, G. (2004). ‘Glossary of Transcript Symbols with an
Introduction’, in G. H. Lerner (ed.), Conversation Analysis: Studies
from the First Generation. Amsterdam: John Benjamins, 13–31.
Jenkins, N., Bloor, M., Fischer, J., Berney, L., and Neale, J. (2010).
‘Putting it in Context: the Use of Vignettes in Qualitative
Interviewing’, Qualitative Research, 10: 175–98.
Jensen, E., and Laurie, C. (2016). Doing Real Research: A Practical
Guide to Social Research. London: Sage.
Jerolmack, C., and Khan, S. (2014). ‘Talk is Cheap: Ethnography and
the Attitudinal Fallacy’, Sociological Methods and Research, 43:
178–209.
Jessee, S. (2017). ‘“Don’t Know” Responses, Personality, and the
Measurement of Political Knowledge’, Political Science Research and
Methods, 5(4): 711–31.
John, I. D. (1992). ‘Statistics as Rhetoric in Psychology’, Australian
Psychologist, 27: 144–9.
John, P., and Jennings, W. (2010). ‘Punctuations and Turning Points
in British Politics: The Policy Agenda of the Queen’s Speech, 1940–
2005’, British Journal of Political Science, 40: 561–86.
Johnston, M. (2017). ‘Secondary Data Analysis: A Method of which
the Time Has Come’, Qualitative and Quantitative Methods in
Libraries, 3(3): 619–26.
Jones, I. R., Leontowitsch, M., and Higgs, P. (2010). ‘The Experience
of Retirement in Secondary Modernity: Generational Habitus among
Retired Senior Managers’, Sociology, 44: 103–20.
Judge, M., and Wilson, A. (2018). ‘Dual-Process Motivational Model
of Attitudes toward Vegetarians and Vegans’, European Journal of
Social Psychology, 49(1): 169–78.
Kahan, D., McKenzie, T., and Khatri, A. (2019). ‘U.S. Charter Schools
Neglect Promoting Physical Activity: Content Analysis of Nationally
Representative Elementary Charter School Websites’, Preventive
Medicine Reports, 14.
Kalton, G. (2019). ‘Developments in Survey Research over the Past
60 Years: A Personal Perspective’, International Statistical Review,
87(S1): S10–30.
Kamin, L. J. (1974). The Science and Politics of IQ. New York: Wiley.
Kaminska, O., and Foulsham, T. (2013). Understanding Sources of
Social Desirability Bias in Different Modes: Evidence from Eye-
Tracking (Report No. 2013–04). Colchester, UK: Institute for Social
and Economic Research, University of Essex.
Kamp, K., Herbell, K., Magginis, W., Berry, D., and Given, B. (2019).
‘Facebook Recruitment and the Protection of Human Subjects’,
Western Journal of Nursing Research, 41(9): 1270–81.
Kandola, B. (2012). ‘Focus Groups’, in G. Symon and C. Cassell (eds),
Qualitative Organizational Research: Core Methods and Current
Challenges. Los Angeles: Sage.
Kang, M. (1997). ‘The Portrayal of Women’s Images in Magazine
Advertisements: Goffman’s Gender Analysis Revisited’, Sex Roles,
37(11–12): 979–96.
Kapidzic, S., and Herring, S. C. (2015). ‘Race, Gender, and Self-
Presentation in Teen Profile Photographs’, New Media and Society,
17: 958–76.
Kaplan, D. (2004). SAGE Handbook of Quantitative Methodology
for the Social Sciences. London: Sage.
Kapsis, R. E. (1989). ‘Reputation Building and the Film Art World:
The Case of Alfred Hitchcock’, Sociological Quarterly, 30: 15–35.
Kara, H. (2015). Creative Research Methods in the Social Sciences:
A Practical Guide. Bristol, UK: Policy Press.
Katz, J. (1982). Poor People’s Lawyers in Transition. New
Brunswick, NJ: Rutgers University Press.
Kavanagh, J., Campbell, F., Harden, A., and Thomas, J. (2012).
‘Mixed Methods Synthesis: A Worked Example’, in K. Hannes and C.
Lockwood (eds), Synthesizing Qualitative Research: Choosing the
Right Approach. Chichester, UK: Wiley.
Kellogg, K. C. (2009). ‘Operating Room: Relational Spaces and
Microinstitutional Change in Surgery’, American Journal of
Sociology, 115: 657–711.
Kellogg, K. C. (2011). Challenging Operations: Medical Reform and
Resistance in Surgery. Chicago: University of Chicago Press.
Kendrick, K. H., and Holler, J. (2017). ‘Gaze Direction Signals
Response Preference in Conversation’, Research on Language and
Social Interaction, 50(1): 12–32.
Kentikelenis, A., Stubbs, T., and King, L. (2016). ‘IMF Conditionality
and Development Policy Space, 1985–2014’, Review of International
Political Economy, 23(4): 543–82.
Khan, S. (2011). Privilege: The Making of an Adolescent Elite at St.
Paul’s School. Princeton, NJ: Princeton University Press.
Khan, S. (2014). ‘The Science of Everyday Life’, in S. Khan and D. R.
Fisher (eds), The Practice of Research: How Social Scientists
Answer Their Questions. New York: Oxford University Press.
Kibuchi, E., Sturgis, P., Durrant, G., and Maslovskaya, O. (2018). Do
Interviewers Moderate the Effect of Monetary Incentives on
Response Rates in Household Interview Surveys? NCRM Working
Paper. Southampton, UK: National Centre for Research Methods.
Kidd, P. S., and Parshall, M. B. (2000). ‘Getting the Focus and the
Group: Enhancing Analytical Rigor in Focus Group Research’,
Qualitative Health Research, 10(3): 293–308.
Kimmel, A. J. (1988). Ethics and Values in Applied Social Research.
Newbury Park, CA: Sage.
Kirk, J., and Miller, M. L. (1986). Reliability and Validity in
Qualitative Research. Newbury Park, CA: Sage.
Kitchin, R. (2014). ‘Big Data, New Epistemologies and Paradigm
Shifts’, Big Data and Society, 1(1).
Kitchin, R., and McArdle, G. (2016). ‘What makes Big Data, Big
Data? Exploring the Ontological Characteristics of 26 Datasets’, Big
Data and Society, 3(1).
Kitzinger, J. (1994). ‘The Methodology of Focus Groups: The
Importance of Interaction between Research Participants’, Sociology
of Health and Illness, 16: 103–21.
Kivits, J. (2005). ‘Online Interviewing and the Research
Relationship’, in C. Hine (ed.), Virtual Methods: Issues in Social
Research on the Internet. Oxford: Berg.
Knepper, P. (2018). ‘Laughing at Lombroso: Positivism and Criminal
Anthropology in Historical Perspective’, Handbook of the History
and Philosophy of Criminology, 51.
Kozinets, R. V. (2002). ‘The Field behind the Screen: Using
Netnography for Marketing Research in Online Communities’,
Journal of Marketing Research, 39: 61–72.
Kozinets, R. V. (2010). Netnography: Doing Ethnographic Research
Online. London: Sage.
Kramer, A. D. I., Guillory, J. E., and Hancock, J. T. (2014).
‘Experimental Evidence of Massive-scale Emotional Contagion
through Social Networks’, Proceedings of the National Academy of
Sciences of the United States of America, 111: 8788–90,
doi: 10.1073/pnas.1320040111.
Krause, M., and Kowalski, A. (2013). ‘Reflexive Habits: Dating and
Rationalized Conduct in New York and Berlin’, Sociological Review,
61: 21–40.
Kress, G., and Van Leeuwen, T. (2001). Multimodal Discourse: The
Modes and Media of Contemporary Communication. London:
Arnold.
Krippendorff, K. (2018). Content Analysis: An Introduction to Its
Methodology. London: Sage.
Krosnick, J. (2018). ‘Improving Question Design to Maximize
Reliability and Validity’, in D. Vannette and J. Krosnick (eds), The
Palgrave Handbook of Survey Research. Basingstoke, UK: Palgrave,
95–101.
Krosnick, J. A. (1999). ‘Survey Research’, Annual Review of
Psychology, 50: 537–67.
Krosnick, J. A., Holbrook, A. L., Berent, M. K., Carson, R. T.,
Hanemann, W. M., Kopp, R. J., Mitchell, R. C., Presser, S., Ruud, P.
A., Smith, V. K., Moody, W. R., Green, M. C., and Conaway, M.
(2002). ‘The Impact of “No Opinion” Response Options on Data
Quality: Non-Attitude Reduction or an Invitation to Satisfice?’,
Public Opinion Quarterly, 66: 371–403.
Kross, E., Verduyn, P., Demiralp, E., Park, J., Lee, D. S., Lin, N.,
Shablack, H., Jonides, J., and Ybarra, O. (2013). ‘Facebook Use
Predicts Declines in Subjective Well-Being in Young Adults’, PLOS
One, 8,
www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0
069841 (accessed 1 March 2021).
Krueger, R. A. (1998). Moderating Focus Groups. Thousand Oaks,
CA: Sage.
Krumpal, I. (2013). ‘Determinants of Social Desirability Bias in
Sensitive Surveys: A Literature Review’, Quality and Quantity,
47(4): 2025–47.
Kuhn, T. S. (1970). The Structure of Scientific Revolutions, 2nd edn.
Chicago: University of Chicago Press.
Kvale, S. (1996). InterViews: An Introduction to Qualitative
Research Interviewing. Thousand Oaks, CA: Sage.
Lafleur, J. M., and Mescoli, E. (2018). ‘Creating Undocumented EU
Migrants through Welfare: A Conceptualization of Undeserving and
Precarious Citizenship’, Sociology, 52(3): 480–96.
Lamont, M., and Swidler, A. (2014). ‘Methodological Pluralism and
the Possibilities and Limits of Interviewing’, Qualitative Sociology,
37(2): 153–71.
La Pelle, N. (2004). ‘Simplifying Qualitative Data Analysis using
General Purpose Software Tools’, Field Methods, 16(1): 85–108.
LaPiere, R. T. (1934). ‘Attitudes vs. Actions’, Social Forces, 13: 230–
7.
Laub, J. H., and Sampson, R. J. (2004). ‘Strategies for Bridging the
Quantitative and Qualitative Divide: Studying Crime over the Life
Course’, Research in Human Development, 1: 81–99.
Laub, J. H., and Sampson, R. J. (2019). ‘Life-course and
Developmental Criminology: Looking Back, Moving Forward—ASC
Division of Developmental and Life-Course Criminology Inaugural
David P. Farrington Lecture, 2017’, Journal of Developmental and
Life-Course Criminology, 6: 158–71.
Laurie, H., and Gershuny, J. (2000). ‘Couples, Work and Money’, in
R. Berthoud and J. Gershuny (eds), Seven Years in the Lives of
British Families: Evidence on the Dynamics of Social Change from
the British Household Panel Survey. Bristol, UK: Policy Press.
Lavrakas, P. (ed.) (2008). Encyclopedia of Survey Research
Methods. London: Sage.
Layder, D. (1993). New Strategies in Social Research. Cambridge:
Polity.
Lazarsfeld, P. (1958). ‘Evidence and Inference in Social Research’,
Daedalus, 87: 99–130.
Le Blanc, A. M. (2017). ‘Disruptive Meaning-Making: Qualitative
Data Analysis Software and Postmodern Pastiche’, Qualitative
Inquiry, 23(10): 789–98.
LeCompte, M. D., and Goetz, J. P. (1982). ‘Problems of Reliability
and Validity in Ethnographic Research’, Review of Educational
Research, 52: 31–60.
Ledford, C. J., and Anderson, L. N. (2013). ‘Online Social Networking
in Discussions of Risk: The CAUSE Model in a Content Analysis of
Facebook’, Health, Risk and Society, 15: 251–64.
Lee, R. M. (2000). Unobtrusive Methods in Social Research.
Buckingham, UK: Open University Press.
Lee, R. M., and Fielding, N. G. (1991). ‘Computing for Qualitative
Research: Options, Problems and Potential’, in N. G. Fielding and R.
M. Lee (eds), Using Computers in Qualitative Research. London:
Sage.
Lenzner, T. (2012). ‘Effects of Survey Question Comprehensibility on
Response Quality’, Field Methods, 24: 409–28.
Letherby, G., Scott, J., and Williams, M. (2012). Objectivity and
Subjectivity in Social Research. London: Sage.
Lewis, O. (1961). The Children of Sánchez. New York: Vintage.
Li, J., Ohlbrecht, H., Pollmann-Schult, M., and Habib, F. E. (2020).
‘Parents’ Nonstandard Work Schedules and Children’s Social and
Emotional Wellbeing: A Mixed-Methods Analysis in Germany’,
Zeitschrift für Familienforschung/Journal of Family Research,
32(2).
Lieberman, J. D., Koetzle, D., and Sakiyama, M. (2013). ‘Police
Departments’ Use of Facebook: Patterns and Policy Issues’, Police
Quarterly, 16: 438–62.
Liebling, A. (2001). ‘Whose Side Are We On? Theory, Practice and
Allegiances in Prisons Research’, British Journal of Criminology, 41:
472–84.
Liebow, E. (1967). Tally’s Corner. Boston: Little, Brown.
Lijadi, A. A., and van Schalkwyk, G. J. (2015). ‘Online Facebook
Focus Group Research of Hard-to-Reach Participants’, International
Journal of Qualitative Methods, 14(5): 1609406915621383.
Likert, R. (1932). ‘A Technique for the Measurement of Attitudes’,
Archives of Psychology, 140: 1–55.
Lincoln, Y. S., and Denzin, N. K. (2005). ‘Epilogue: The Eighth and
Ninth Moments—Qualitative Research in/and the Fractured Future’,
in N. K. Denzin and Y. S. Lincoln (eds), Handbook of Qualitative
Research, 3rd edn. Thousand Oaks, CA: Sage.
Lincoln, Y. S., and Guba, E. (1985). Naturalistic Inquiry. Beverly
Hills, CA: Sage.
Lindesmith, A. R. (1947). Opiate Addiction. Bloomington, IN:
Principia Press.
Linvill, D. L., and Warren, P. L. (2020). ‘Troll Factories:
Manufacturing Specialized Disinformation on Twitter’, Political
Communication, 37(4): 1–21.
Livingstone, S. (2019). ‘Audiences in an Age of Datafication: Critical
Questions for Media Research’, Television & New Media, 20(2):
170–83.
Livingstone, S., Kirwil, L., Ponte, C., and Staksrud, E. (2014). ‘In
Their Own Words: What Bothers Children Online’, European
Journal of Communication, 29: 271–88.
Lloyd, A. (2012). ‘Working to Live, Not Living to Work: Work,
Leisure and Youth Identity among Call Centre Workers in North East
England’, Current Sociology, 60: 619–35.
Locke, K. (1996). ‘Rewriting The Discovery of Grounded Theory
after 25 Years?’, Journal of Management Inquiry, 5: 239–45.
Lofland, J., and Lofland, L. (1995). Analyzing Social Settings: A
Guide to Qualitative Observation and Analysis, 3rd edn. Belmont,
CA: Wadsworth.
Longhi, S. (2020). ‘A Longitudinal Analysis of Ethnic Unemployment
Differentials in the UK’, Journal of Ethnic and Migration Studies,
46(5): 879–92, doi: 10.1080/1369183X.2018.1539254.
Longhurst, R. (2009). ‘Embodied Knowing’, in R. Kitchin and N.
Thrift (eds), International Encyclopedia of Human Geography.
Oxford: Elsevier, 429–33.
Lumsden, K., and Black, A. (2018). ‘Austerity Policing, Emotional
Labour and the Boundaries of Police Work: An Ethnography of a
Police Force Control Room in England’, British Journal of
Criminology, 58(3): 606–23.
Lynch, M. (2000). ‘Against Reflexivity as an Academic Virtue and
Source of Privileged Knowledge’, Theory, Culture and Society, 17:
26–54.
Lynd, R. S., and Lynd, H. M. (1929). Middletown: A Study in
Contemporary American Culture. New York: Harcourt, Brace.
Lynd, R. S., and Lynd, H. M. (1937). Middletown in Transition: A
Study in Cultural Conflicts. New York: Harcourt, Brace.
Lynn, M., Sturman, M., Ganley, C., Adams, E., Douglas, M., and
McNeil, J. (2008). ‘Consumer Racial Discrimination in Tipping: A
Replication and Extension’, Journal of Applied Social Psychology,
38(4): 1045–60.
Lynn, P., and Borkowska, M. (2018). ‘Some Indicators of Sample
Representativeness and Attrition Bias for BHPS and Understanding
Society’, Economic and Social Research Council Understanding
Society Working Paper Series no. 2018–01,
www.understandingsociety.ac.uk/research/publications/524851.
Lynn, P., and Kaminska, O. (2012). ‘The Impact of Mobile Phones on
Survey Measurement Error’, Public Opinion Quarterly, 77: 586–605.
Macgilchrist, F., and Van Hout, T. (2011). ‘Ethnographic Discourse
Analysis and Social Science’, Forum: Qualitative
Sozialforschung/Forum: Qualitative Social Research,
12(1/January).
MacInnes, J. (2017). An Introduction to Secondary Data Analysis
with IBM SPSS Statistics. London: Sage.
MacInnes, J., Breeze, M., de Haro, M., Kandlik, M., and Karels, M.
(2016). Measuring Up: International Case Studies on the Teaching
of Quantitative Methods in the Social Sciences. London: British
Academy.
MacKay, J., Moore, J., and Huntingford, F. (2016). ‘Characterizing
the Data in Online Companion-Dog Obituaries to Assess their
Usefulness as a Source of Information about Human–Animal Bonds’,
Anthrozoös, 29(3): 431–40.
Macnaghten, P., and Jacobs, M. (1997). ‘Public Identification with
Sustainable Development: Investigating Cultural Barriers to
Participation’, Global Environmental Change, 7: 5–24.
MacPherson, W. (1999). The Stephen Lawrence Inquiry: Report of
an Inquiry by Sir William MacPherson. London: Stationery
Office.
Madriz, M. (2000). ‘Focus Groups in Feminist Research’, in N. K.
Denzin and Y. S. Lincoln (eds), Handbook of Qualitative Research,
2nd edn. Thousand Oaks, CA: Sage.
Mahfoud, Z., Ghandour, L., Ghandour, B., Mokdad, A., and Sibai, A.
(2015). ‘Cell Phone and Face-to-Face Interview Responses in
Population-based Surveys: How Do They Compare?’, Field Methods,
27(1): 39–54.
Maitlis, S., and Lawrence, T. B. (2007). ‘Triggers and Enablers of
Sensegiving in Organizations’, Academy of Management Journal,
50: 57–84.
Malinowski, B. (1967). A Diary in the Strict Sense of the Term.
London: Routledge & Kegan Paul.
Malsch, B., and Salterio, S. E. (2016). ‘“Doing Good Field Research”:
Assessing the Quality of Audit Field Research’, Auditing: A Journal
of Practice & Theory, 35(1): 1–22.
Malterud, K., Siersma, V. D., and Guassora, A. D. (2015). ‘Sample
Size in Qualitative Interview Studies: Guided by Information Power’,
Qualitative Health Research, 26(13): 1753–60.
Manfreda, K. L., Bosnjak, M., Berzelak, J., Haas, I., and Vehovar, V.
(2008). ‘Web Surveys versus Other Survey Modes: A Meta-Analysis
Comparing Response Rates’, International Journal of Market
Research, 50: 79–104.
Mann, C., and Stewart, F. (2000). Internet Communication and
Qualitative Research: A Handbook for Researching Online. London:
Sage.
Manokha, I. (2018). ‘Surveillance: The DNA of Platform Capital—
The Case of Cambridge Analytica Put into Perspective’, Theory and
Event, 24(4): 891–913.
Marcus, G. E., and Fischer, M. M. J. (1986). Anthropology as Cultural
Critique. Chicago: University of Chicago Press.
Markham, A. (1998). Life Online: Researching the Real Experience
in Virtual Space. London and Walnut Creek, CA: AltaMira Press.
Marsh, C. (1982). The Survey Method: The Contribution of Surveys
to Sociological Explanation. London: Allen & Unwin.
Marsh, C., and Scarbrough, E. (1990). ‘Testing Nine Hypotheses
about Quota Sampling’, Journal of the Market Research Society, 32:
485–506.
Marshall, C., and Rossman, G. (2016). Designing Qualitative
Research, 6th edn. London: Sage.
Marshall, G., Newby, H., and Vogler, C. (1988). Social Class in
Modern Britain. London: Unwin Hyman.
Martin, P., and Bateson, P. (2007). Measuring Behaviour: An
Introductory Guide, 3rd edn. Cambridge: Cambridge University
Press.
Mason, J. (1996). Qualitative Researching. London: Sage.
Mason, J. (2002). ‘Qualitative Interviewing: Asking, Listening,
Interpreting’, in T. May (ed.), Qualitative Research in Action.
London: Sage.
Mason, J. (2017). Qualitative Interviewing. London: Sage.
Mason, M. (2010). ‘Sample Size and Saturation in PhD Studies Using
Qualitative Interviews’, Forum: Qualitative
Sozialforschung/Forum: Qualitative Social Research, 11(3), article
8, www.qualitative-
research.net/index.php/fqs/article/view/1428/3027 (accessed 1
March 2021).
Mason, W. (2018). ‘“Swagger”: Urban Youth Culture, Consumption
and Social Positioning’, Sociology, 52(6): 1117–33.
Mason, W., Mirza, N., and Webb, C. (2018). Using the Framework
Method to Analyze Mixed-Methods Case Studies. London: Sage.
Mason, W., Morris, K., Webb, C., Daniels, B., Featherstone, B.,
Bywaters, P., Mirza, N., Hooper, J., Brady, G., Bunting, L., and
Scourfield, J. (2020). ‘Toward Full Integration of Quantitative and
Qualitative Methods in Case Study Research: Insights From
Investigating Child Welfare Inequalities’, Journal of Mixed Methods
Research, 14(2): 164–83,
https://doi.org/10.1177/1558689819857972.
Masterman, M. (1970). ‘The Nature of a Paradigm’, in I. Lakatos and
A. Musgrave (eds), Criticism and the Growth of Knowledge.
Cambridge: Cambridge University Press.
Matthews, K. L., Baird, M., and Duchesne, G. (2018). ‘Using Online
Meeting Software to Facilitate Geographically Dispersed Focus
Groups for Health Workforce Research’, Qualitative Health
Research, 28(10): 1621–8.
Mattley, C. (2006). ‘Aural Sex: The Politics and Moral Dilemmas of
Studying the Social Construction of Fantasy’, in D. Hobbs and R.
Wright (eds), The SAGE Handbook of Fieldwork. London: Sage.
Mattsson, C., Hammarén, N., and Odenbring, Y. (2016). ‘Youth “At
Risk”: A Critical Discourse Analysis of the European Commission’s
Radicalisation Awareness Network Collection of Approaches and
Practices Used in Education’, Power and Education, 8(3): 251–65.
Mawhinney, L., and Rinke, C. R. (2018). ‘I Just Feel so Guilty: The
Role of Emotions in Former Urban Teachers’ Career Paths’, Urban
Education, 53(9): 1079–101.
Mayhew, P. (2000). ‘Researching the State of Crime: Local, National,
and International Victim Surveys’, in R. D. King and E. Wincup
(eds), Doing Research on Crime and Justice. Oxford: Oxford
University Press.
Mays, N., Pope, C., and Popay, J. (2005). ‘Systematically Reviewing
Qualitative and Quantitative Evidence to Inform Management and
Policy-Making in the Health Field’, Journal of Health Services
Research and Policy, 10 (Supplement 1): S6–20.
Mazmanian, M., Orlikowski, W. J., and Yates, J. (2013). ‘The
Autonomy Paradox: The Implications of Mobile Email Devices for
Knowledge Professionals’, Organization Science, 24: 1337–57.
McCabe, S. E. (2004). ‘Comparison of Mail and Web Surveys in
Collecting Illicit Drug Use Data: A Randomized Experiment’,
Journal of Drug Education, 34: 61–73.
McCall, L. (2005). ‘The Complexity of Intersectionality’, Signs,
30(3): 1771–800.
McCall, M. J. (1984). ‘Structured Field Observation’, Annual Review
of Sociology, 10: 263–82.
McCrudden, M. T., and McTigue, E. M. (2019). ‘Implementing
Integration in an Explanatory Sequential Mixed Methods Study of
Belief Bias About Climate Change With High School Students’,
Journal of Mixed Methods Research, 13(3): 381–400,
https://doi.org/10.1177/1558689818762576.
McDonald, P., Townsend, K., and Waterhouse, J. (2009). ‘Wrong
Way, Go Back! Negotiating Access in Industry-Based Research’, in K.
Townsend and J. Burgess (eds), Method in the Madness: Research
Stories you Won’t Read in Textbooks. Oxford: Chandos.
McKeever, L. (2006). ‘Online Plagiarism Detection Services: Saviour
or Scourge?’, Assessment and Evaluation in Higher Education, 31:
155–65.
McKernan, B. (2015). ‘The Meaning of a Game: Stereotypes, Video
Game Commentary and Color-Blind Racism’, American Journal of
Cultural Sociology, 3(2): 224–53.
McKernan, B. (2018). Studying the Racial Stories We Tell Ourselves
while Playing Games: The Value of Narrative Analysis in Cultural
Sociology. London: Sage.
Mead, M. (1928). Coming of Age in Samoa. New York: Morrow.
Mears, A. (2011). Pricing Beauty: The Making of a Fashion Model.
Berkeley: University of California Press.
Medway, R. L., and Fulton, J. (2012). ‘When More Gets You Less: A
Meta-analysis of the Effect of Concurrent Web Options on Mail
Survey Response Rates’, Public Opinion Quarterly, 76: 733–46.
Mellahi, K., and Harris, L. (2016). ‘Response Rates in Business and
Management Research: An Overview of Current Practice and
Suggestions for Future Directions’, British Journal of Management,
27(2): 426–37.
Merryweather, D. (2010). ‘Using Focus Group Research in Exploring
the Relationships between Youth, Risk and Social Position’,
Sociological Research Online, 15(1): 11–23.
Merton, R. K. (1972). ‘Insiders and Outsiders: A Chapter in the
Sociology of Knowledge’, American Journal of Sociology, 78(1):
9–47.
Merton, R. K. (1967). On Theoretical Sociology. New York: Free
Press.
Merton, R. K., Fiske, M., and Kendall, P. L. (1956). The Focused
Interview: A Manual of Problems and Procedures. New York: Free
Press.
Mervis, J. (2020). ‘Researchers Finally Get Access to Data on
Facebook’s Role in Political Discourse’, Science, 13 February,
www.sciencemag.org/news/2020/02/researchers-finally-get-access-
data-facebook-s-role-political-discourse (accessed 21 July 2020).
Messing, J., Campbell, J., Wilson, J., Brown, S., and Patchell, B.
(2017). ‘The Lethality Screen: The Predictive Validity of an Intimate
Partner Violence Risk Assessment for Use by First Responders’,
Journal of Interpersonal Violence, 32(2): 205–26.
Meterko, M., Restuccia, J. D., Stolzmann, K., Mohr, D., Brennan, C.,
Glasgow, J., and Kaboli, P. (2015). ‘Response Rates, Nonresponse
Bias, and Data Quality: Results from a National Survey of Senior
Healthcare Leaders’, Public Opinion Quarterly, 79(1): 130–44.
Meyer, B., Mok, W., and Sullivan, J. (2015). ‘Household Surveys in
Crisis’, Journal of Economic Perspectives, 29(4): 199–226.
Midgley, C. (1998). ‘TV Violence has Little Impact on Children, Study
Finds’, The Times, 12 January: 5.
Mies, M. (1993). ‘Towards a Methodology for Feminist Research’, in
M. Hammersley (ed.), Social Research: Philosophy, Politics and
Practice. London: Sage.
Miles, M. B. (1979). ‘Qualitative Data as an Attractive Nuisance’,
Administrative Science Quarterly, 24: 590–601.
Miles, M. B., Huberman, A. M., and Saldaña, J. (2019). Qualitative
Data Analysis: A Methods Sourcebook. London: Sage.
Milgram, S. (1963). ‘A Behavioral Study of Obedience’, Journal of
Abnormal and Social Psychology, 67: 371–8.
Miller, R. L. (2000). Researching Life Stories and Family Histories.
London: Sage.
Millward, P. (2009). ‘Glasgow Rangers Supporters in the City of
Manchester: The Degeneration of a “Fan Party” into a “Hooligan
Riot”’, International Review for the Sociology of Sport, 44(4): 381–
98.
Minson, J., VanEpps, E., Yip, J., and Schweitzer, M. (2018). ‘Eliciting
the Truth, the Whole Truth, and Nothing But the Truth: The Effect of
Question Phrasing on Deception’, Organizational Behavior and
Human Decision Processes, 147: 76–93.
Mintzberg, H. (1973). The Nature of Managerial Work. New York:
Harper & Row.
Mishler, E. G. (1986). Research Interviewing: Context and
Narrative. Cambridge, MA: Harvard University Press.
Miske, O., Schweitzer, N., and Horne, Z. (2019). ‘What Information
Shapes and Shifts People’s Attitudes about Capital Punishment?’
Proceedings of the 41st Annual Conference of the Cognitive Science
Society.
Mitchell, J. C. (1983). ‘Case and Situation Analysis’, Sociological
Review, 31: 186–211.
Mkono, M., and Markwell, K. (2014). ‘The Application of
Netnography in Tourism Studies’, Annals of Tourism Research, 48:
289–91.
Mohanty, C. T. (1984). ‘Under Western Eyes: Feminist Scholarship
and Colonial Discourses’, boundary 2, 12(3): 333–58.
Moore, T., McKee, K., and McCoughlin, P. (2015). ‘Online Focus
Groups and Qualitative Research in the Social Sciences: Their Merits
and Limitations in a Study of Housing and Youth’, People, Place and
Policy, 9(1): 17–28.
Moran-Ellis, J., Alexander, V. D., Cronin, A., Dickinson, M., Fielding,
J., Sleney, J., and Thomas, H. (2006). ‘Triangulation and
Integration: Processes, Claims and Implications’, Qualitative
Research, 6(1): 45–59.
Morgan, D. L. (1998a). Planning Focus Groups. Thousand Oaks, CA:
Sage.
Morgan, D. L. (1998b). ‘Practical Strategies for Combining
Qualitative and Quantitative Methods: Applications for Health
Research’, Qualitative Health Research, 8: 362–76.
Morgan, D. L. (2002). ‘Focus Group Interviewing’, in J. F. Gubrium
and J. A. Holstein (eds), Handbook of Interview Research: Context
and Method. Thousand Oaks, CA: Sage.
Morgan, D. L. (2010). ‘Reconsidering the Role of Interaction in
Analyzing and Reporting Focus Groups’, Qualitative Health
Research, 20: 718–22.
Morgan, D. L., and Hoffman, K. (2018). ‘A System for Coding the
Interaction in Focus Groups and Dyadic Interviews’, Qualitative
Report, 23(3): 519–31.
Morgan, D. L., and Spanish, M. T. (1985). ‘Social Interaction and the
Cognitive Organization of Health-Relevant Behaviour’, Sociology of
Health and Illness, 7: 401–22.
Morgan, G., and Smircich, L. (1980). ‘The Case for Qualitative
Research’, Academy of Management Review, 5: 491–500.
Morgan, M., Gibbs, S., Maxwell, K., and Britten, N. (2002). ‘Hearing
Children’s Voices: Methodological Issues in Conducting Focus
Groups with Children aged 7–11 years’, Qualitative Research, 2(1):
5–20.
Morgan, R. (2000). ‘The Politics of Criminological Research’, in R. D.
King and E. Wincup (eds), Doing Research on Crime and Justice.
Oxford: Oxford University Press.
Morley, D. (1980). The ‘Nationwide’ Audience: Structure and
Decoding. London: British Film Institute.
Morris, N., and Cravens Pickens, J. D. (2017). ‘“I’m Not a Gadget”: A
Grounded Theory on Unplugging’, American Journal of Family
Therapy, 45(5): 264–82.
Morse, J. M. (2004). ‘Sampling in Qualitative Research’, in M. S.
Lewis-Beck, A. Bryman, and T. F. Liao (eds), The Sage Encyclopedia
of Social Science Research Methods. Thousand Oaks, CA: Sage.
Moser, C. A., and Kalton, G. (1971). Survey Methods in Social
Investigation. London: Heinemann.
Mühlböck, M., Steiber, N., and Kittel, B. (2017). ‘Less Supervision,
More Satisficing? Comparing Completely Self-Administered Web-
Surveys and Interviews Under Controlled Conditions’, Statistics,
Politics and Policy, 8(1): 13–28.
Mullan, K., and Chatzitheochari, S. (2019). ‘Changing Times
Together? A Time‐Diary Analysis of Family Time in the Digital Age in
the United Kingdom’, Journal of Marriage and Family, 81(4): 795–
811.
Müller, K. (2017). ‘Reframing the Aura: Digital Photography in
Ancestral Worship’, Museum Anthropology, 40(1): 65–78.
Munday, J. (2006). ‘Identity in Focus: The Use of Focus Groups to
Study the Construction of Collective Identity’, Sociology, 40: 89–
105.
Musa, I., Seymour, J., Narayanasamy, M., Wada, T., and Conroy, S.
(2015). ‘A Survey of Older Peoples’ Attitudes towards Advance Care
Planning’, Age and Ageing, 44(3): 371–6.
Naab, T., Karnowski, V., and Schlütz, D. (2019). ‘Reporting Mobile
Social Media Use: How Survey and Experience Sampling Measures
Differ’, Communication Methods and Measures, 13(2): 126–47.
Näher, A-F., and Krumpal, I. (2012). ‘Asking Sensitive Questions:
The Impact of Forgiving Wording and Question Context on Social
Desirability Bias’, Quality and Quantity, 46(5): 1601–16.
Nash, C. J. (2016). Queer Methods and Methodologies: Intersecting
Queer Theories and Social Science Research. London: Routledge,
https://doi.org/10.4324/9781315603223 (open access).
National Centre for Social Research [UK] (2019). ‘British Social
Attitudes 36: Technical Details’,
www.bsa.natcen.ac.uk/media/39290/7_bsa36_technical-details.pdf
(accessed 20 March 2021).
National Student Survey [UK] (2020).
www.officeforstudents.org.uk/publications/the-national-student-
survey-consistency-controversy-and-change/ (accessed 13 March
2020).
Natow, R. S. (2020). ‘The Use of Triangulation in Qualitative Studies
Employing Elite Interviews’, Qualitative Research, 20(2): 160–73.
Nelson, L. K. (2017). ‘Computational Grounded Theory: A
Methodological Framework’, Sociological Methods & Research,
49(1): 3–42, https://doi.org/10.1177/0049124117729703.
Neuendorf, K. (2016). The Content Analysis Guidebook. Thousand
Oaks, CA: Sage.
Neuman, W. (2013). Social Research Methods: Qualitative and
Quantitative Approaches. Harlow, UK: Pearson.
Nichols, K. (2016). ‘Moving Beyond Ideas of Laddism:
Conceptualising “Mischievous Masculinities” as a New Way of
Understanding Everyday Sexism and Gender Relations’, Journal of
Gender Studies, 27(1): 73–85.
Noblit, G. W., and Hare, R. D. (1988). Meta-Ethnography:
Synthesizing Qualitative Studies. Newbury Park, CA: Sage.
Nosowska, G., McKee, K., and Dahlberg, L. (2014). ‘Using Structured
Observation and Content Analysis to Explore the Presence of Older
People in Public Fora in Developing Countries’, Journal of Aging
Research, article ID 860612.
Nowicka, M., and Ryan, L. (2015). ‘Beyond Insiders and Outsiders in
Migration Research: Rejecting A Priori Commonalities’. Introduction
to FQS Thematic Section ‘Researcher, Migrant, Woman:
Methodological Implications of Multiple Positionalities in Migration
Studies’. Forum: Qualitative Sozialforschung/Forum: Qualitative
Social Research, 16(2/May).
Noy, C. (2008). ‘Sampling Knowledge: The Hermeneutics of
Snowball Sampling in Qualitative Research’, International Journal
of Social Research Methodology, 11: 327–44.
Noyes, J., Booth, A., Cargo, M., Flemming, K., Garside, R., Hannes,
K., Harden, A., Harris, J., Lewin, S., Pantoja, T., and Thomas, J.
(2017). ‘Cochrane Qualitative and Implementation Methods Group
Guidance Series—Paper 1: Introduction’, Journal of Clinical
Epidemiology, 97: 35–8.
Nyberg, D. (2009). ‘Computers, Customer Service Operatives, and
Cyborgs: Intra-Actions in Call Centres’, Organization Studies, 30:
1181–99.
Oakley, A. (1981). ‘Interviewing Women: A Contradiction in Terms’,
in H. Roberts (ed.), Doing Feminist Research. London: Routledge &
Kegan Paul.
Oakley, A. (1998). ‘Gender, Methodology and People’s Ways of
Knowing: Some Problems with Feminism and the Paradigm Debate
in Social Science’, Sociology, 32: 707–31.
O’Boyle, T. (2019). ‘Protecting Self-Image: The Rationalization
of Cheating in an International Billiard League’, Deviant Behavior,
40(8): 1020–30, https://doi.org/10.1080/01639625.2018.1456717.
O’Brien, K. (2010). ‘Inside “Doorwork”: Gendering the Security
Gaze’, in R. Ryan-Flood and R. Gill (eds), Secrecy and Silence in the
Research Process: Feminist Reflections. London: Routledge.
O’Cathain, A. (2010). ‘Assessing the Quality of Mixed Methods
Research: Toward a Comprehensive Framework’, in A. Tashakkori
and C. Teddlie (eds), SAGE Handbook of Mixed Methods in Social
and Behavioral Research, 2nd edn. Los Angeles: Sage.
O’Cathain, A., Murphy, E., and Nicholl, J. (2007). ‘Integration and
Publication as Indicators of “Yield” from Mixed Methods Studies’,
Journal of Mixed Methods Research, 1: 147–63.
O’Cathain, A., Murphy, E., and Nicholl, J. (2008). ‘The Quality of
Mixed Methods Studies in Health Services Research’, Journal of
Health Services Research & Policy, 13: 92–8.
O’Connor, H., and Madge, C. (2001). ‘Cyber-Mothers: Online
Synchronous Interviewing using Conferencing Software’,
Sociological Research Online, 5(4),
www.socresonline.org.uk/9/2/hine.html (accessed 1 March 2021).
Office for National Statistics [UK] (2019a). The National Statistics
Socio-economic Classification (NS-SEC),
www.ons.gov.uk/methodology/classificationsandstandards/othercla
ssifications/thenationalstatisticssocioeconomicclassificationnssecreb
asedonsoc2010 (accessed 1 December 2020).
Office for National Statistics [UK] (2019b). What is the Difference
between Sex and Gender?,
www.ons.gov.uk/economy/environmentalaccounts/articles/whatisth
edifferencebetweensexandgender/2019-02-21 (accessed 15 July
2020).
Office for National Statistics [UK] (2020). User Guide to Crime
Statistics for England and Wales,
www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/
methodologies/userguidetocrimestatisticsforenglandandwales.
O’Grady, C., Dahm, M. R., Roger, P., and Yates, L. (2014). ‘Trust,
Talk and the Dictaphone: Tracing the Discursive Accomplishment of
Trust in a Surgical Consultation’, Discourse and Society, 25: 65–83.
O’Kane, P., Smith, A., and Lerman, M. P. (2019). ‘Building
Transparency and Trustworthiness in Inductive Research through
Computer-Aided Qualitative Data Analysis Software’,
Organizational Research Methods, 24(1): 104–39.
Okely, J. (1994). ‘Thinking Through Fieldwork’, in A. Bryman and R.
G. Burgess (eds), Analyzing Qualitative Data. London: Routledge.
Olesen, B. R., and Nordentoft, H. M. (2013). ‘Walking the Talk? A
Micro-sociological Approach to the Co-production of Knowledge and
Power in Action Research’, International Journal of Action
Research, 9(1): 67–94.
Olson, K., and Smyth, J. D. (2014). ‘Accuracy of Within-Household
Selection in Web and Mail Surveys of the General Population’, Field
Methods, 26: 56–69.
Oltmann, S. (2016). ‘Qualitative Interviews: A Methodological
Discussion of the Interviewer and Respondent Contexts’, Forum:
Qualitative Social Research, 17(2): 1–16.
Oncini, F. (2018). ‘The Holy Gram: Strategy and Tactics in the
Primary School Canteen’, Journal of Contemporary Ethnography,
47(5): 640–70.
O’Neill, M., and Roberts, B. (2019). Walking Methods: Research on
the Move. London: Routledge.
Onwuegbuzie, A. J., and Collins, K. M. T. (2007). ‘A Typology of
Mixed Methods Sampling Designs in Social Sciences Research’,
Qualitative Report, 12(2): 281–316,
https://doi.org/10.46743/2160-3715/2007.1638 (accessed 1 March
2021).
Onwuegbuzie, A. J., and Leech, N. L. (2010). ‘Generalization
Practices in Qualitative Research: A Mixed Methods Case Study’,
Quality and Quantity, 44: 881–92.
Onwuegbuzie, A. J., Dickinson, W. B., Leech, N. L., and Zoran, A. G.
(2009). ‘A Qualitative Framework for Collecting and Analyzing Data
in Focus Group Research’, International Journal of Qualitative
Methods, 8(3): 1–21.
Oppenheim, A. N. (2008). Questionnaire Design, Interviewing and
Attitude Measurement, 3rd edn. London: Pinter.
O’Reilly, K. (2000). The British on the Costa del Sol: Transnational
Identities and Local Communities. London: Routledge.
O’Reilly, K. A. (2010). Service Undone: A Grounded Theory of
Strategically Constructed Silos and their Impact on Customer-
Company Interactions from the Perspective of Retail Employees.
Doctoral dissertation, Utah State University,
http://digitalcommons.usu.edu/etd/669 (accessed 1 March 2021).
O’Reilly, K. A., Paper, D., and Marx, S. (2012). ‘Demystifying
Grounded Theory for Business Research’, Organizational Research
Methods, 15: 247–62.
O’Reilly, M., and Parker, N. (2013). ‘“Unsatisfactory Saturation”: A
Critical Exploration of the Notion of Saturated Sample Sizes in
Qualitative Research’, Qualitative Research, 13(2): 190–7.
O’Reilly, M., Dixon-Woods, M., Angell, E., Ashcroft, R., and Bryman,
A. (2009). ‘Doing Accountability: A Discourse Analysis of Research
Ethics Committee Letters’, Sociology of Health and Illness, 31: 246–
61.
Ostrov, J., and Hart, E. (2013). ‘Observational Methods’, in T. Little
(ed.), The Oxford Handbook of Quantitative Methods in Psychology,
vol. 1. Oxford: Oxford University Press.
Ozgur, C., Dou, M., Li, Y., and Rogers, G. (2017). ‘Selection of
Statistical Software for Data Scientists and Teachers’, Journal of
Modern Applied Statistical Methods, 16(1): 753–74.
Padilla, J., and Leighton, J. (2017). ‘Cognitive Interviewing and
Think Aloud Methods’, in B. Zumbo and A. Hubley (eds),
Understanding and Investigating Response Processes in Validation
Research. New York: Springer.
Pahl, J. (1990). ‘Household Spending, Personal Spending and the
Control of Money in Marriage’, Sociology, 24: 119–38.
Palys, T. (2008). ‘Purposive Sampling’, in L. M. Given (ed.), The
Sage Encyclopedia of Qualitative Research Methods, vol. 2.
Thousand Oaks, CA: Sage.
Park, C. (2003). ‘In Other (People’s) Words: Plagiarism by University
Students—Literature and Lessons’, Assessment and Evaluation in
Higher Education, 28: 471–88.
Parker, M. (2000). Organizational Culture and Identity. London:
Sage.
Pasquandrea, S. (2014). ‘“They Might Read a Fly Speck”: Musical
Literacy as a Discursive Resource in Louis Armstrong’s
Autobiographies’, Social Semiotics, 24: 514–29.
Paterson, B. L. (2012). ‘“It Looks Great But How Do I Know if it
Fits?”: An Introduction to Meta-Synthesis Research’, in K. Hannes
and C. Lockwood (eds), Synthesizing Qualitative Research:
Choosing the Right Approach. Chichester, UK: Wiley.
Patton, M. Q. (1990). Qualitative Evaluation and Research Methods,
2nd edn. Beverly Hills, CA: Sage.
Patton, M. Q. (2015). Qualitative Evaluation and Research Methods,
4th edn. Thousand Oaks, CA: Sage.
Paulus, T. M., Lester, J. N., and Britt, V. G. (2013). ‘Constructing
Hopes and Fears Around Technology: A Discourse Analysis of
Introductory Research Methods Texts’, Qualitative Inquiry, 19(9):
639–51.
Paulus, T., Warren, A., and Lester, J. N. (2016). ‘Applying
Conversation Analysis Methods to Online Talk: A Literature Review’,
Discourse, Context & Media, 12: 1–10.
Pearson, G. (2009). ‘The Researcher as Hooligan: Where
“Participant” Observation Means Breaking the Law’, International
Journal of Social Research Methodology, 12: 243–55.
Pearson, G. (2012). An Ethnography of Football Fans: Cans, Cops
and Carnivals. Manchester, UK: Manchester University Press.
Pedersen, M. J., and Nielsen, C. V. (2016). ‘Improving Survey
Response Rates in Online Panels: Effects of Low-Cost Incentives and
Cost-Free Text Appeal Interventions’, Social Science Computer
Review, 34(2): 229–43.
Peek, L., and Fothergill, A. (2009). ‘Using Focus Groups: Lessons
from Studying Daycare Centers, 9/11, and Hurricane Katrina’,
Qualitative Research, 9: 31–59.
Pehrson, S., Stambulova, N. B., and Olsson, K. (2017). ‘Revisiting the
Empirical Model “Phases in the Junior-to-Senior Transition of
Swedish Ice Hockey Players”: External Validation through Focus
Groups and Interviews’, International Journal of Sports Science &
Coaching, 12(6): 747–61.
Penn, R., Rose, M., and Rubery, J. (1994). Skill and Occupational
Change. Oxford: Oxford University Press.
Peräkylä, A. (1997). ‘Reliability and Validity in Research Based on
Transcripts’, in D. Silverman (ed.), Qualitative Research: Theory,
Method and Practice. London: Sage.
Pettigrew, A. (1997). ‘What is a Processual Analysis?’, Scandinavian
Journal of Management, 13: 337–48.
Pew Research Center (2019). Social Media Fact Sheet,
www.pewinternet.org/fact-sheet/social-media/ (accessed 1 March
2021).
Pezzella, F., Fetzer, M., and Keller, T. (2019). ‘The Dark Figure of
Hate Crime Underreporting’, American Behavioral Scientist,
https://doi.org/10.1177/0002764218823844.
Pheko, M. M. (2018). ‘Autoethnography and Cognitive Adaptation:
Two Powerful Buffers Against the Negative Consequences of
Workplace Bullying and Academic Mobbing’, International Journal
of Qualitative Studies on Health and Well-Being, 13(1): 1459134.
Phelps Moultrie, J., Magee, P. A., and Paredes Scribner, S. M. (2017).
‘Talk About a Racial Eclipse: Narratives of Institutional Evasion in an
Urban School–University Partnership’, Journal of Cases in
Educational Leadership, 20(1): 6–21.
Phillips, N., and Hardy, C. (2002). Discourse Analysis: Investigating
Processes of Social Construction. Thousand Oaks, CA: Sage.
Phoenix, A. (2006). ‘Interrogating Intersectionality: Productive
Ways of Theorising Multiple Positioning’. Kvinder, køn & forskning,
2–3.
Phoenix, C., Smith, P., and Sparkes, A. C. (2010). ‘Narrative Analysis
in Aging Studies: A Typology for Consideration’, Journal of Aging
Studies, 24: 1–11.
Pickering, L., and Kara, H. (2017). ‘Presenting and Representing
Others: Towards an Ethics of Engagement’, International Journal of
Social Research Methodology, 20(3): 299–309.
Pinch, T., and Clark, C. (1986). ‘The Hard Sell: “Patter Merchanting”
and the Strategic (Re)production and Local Management of
Economic Reasoning in the Sales Routines of Market Pitchers’,
Sociology, 20: 169–91.
Pink, S. (2004). ‘Visual Methods’, in C. Seale, G. Gobo, J. F.
Gubrium, and D. Silverman (eds), Qualitative Research Practice.
London: Sage.
Pink, S. (2015). Doing Sensory Ethnography. London: Sage.
Pink, S. (2020). Doing Visual Ethnography, 4th edn. London: Sage.
Piper, N. (2015). ‘Jamie Oliver and Cultural Intermediation’, Food,
Culture & Society, 18(2): 245–64.
Piza, E., Welsh, B., Farrington, D., and Thomas, A. (2019). ‘CCTV
Surveillance for Crime Prevention: A 40-Year Systematic Review
with Meta-analysis’, Criminology and Public Policy, 18(1): 135–59,
https://onlinelibrary.wiley.com/doi/full/10.1111/1745-9133.12419.
Platell, M., Martin, K., Fisher, C., and Cook, A. (2020). ‘Comparing
Adolescent and Service Provider Perceptions on the Barriers to
Mental Health Service Use: A Sequential Mixed Methods Approach’,
Children and Youth Services Review, 115, 105101,
https://doi.org/10.1016/j.childyouth.2020.105101.
Platt, J. (1981). ‘The Social Construction of “Positivism” and its
Significance in British Sociology, 1950–80’, in P. Abrams, R. Deem,
J. Finch, and P. Rock (eds), Practice and Progress: British Sociology
1950–1980. London: George Allen & Unwin.
Platt, J. (1984). ‘The Affluent Worker Revisited’, in C. Bell and H.
Roberts (eds), Social Researching: Politics, Problems, Practice.
London: Routledge & Kegan Paul.
Platt, J. (1986). ‘Functionalism and the Survey: The Relation of
Theory and Method’, Sociological Review, 34: 501–36.
Platt, J. (1996). A History of Sociological Research Methods in
America 1920–1960. Cambridge: Cambridge University Press.
Plummer, K. (1983). Documents of Life: An Introduction to the
Problems and Literature of a Humanistic Method. London: Allen &
Unwin.
Plummer, K. (2001). ‘The Call of Life Stories in Ethnographic
Research’, in P. Atkinson, A. Coffey, S. Delamont, J. Lofland, and L.
Lofland (eds), Handbook of Ethnography. London: Sage.
Polsky, N. (1967). Hustlers, Beats and Others. Chicago: Aldine.
Poortinga, W., Bickerstaff, K., Langford, I., Niewöhner, J., and
Pidgeon, N. (2004). ‘The British 2001 Foot and Mouth Crisis: A
Comparative Study of Public Risk Perceptions, Trust and Beliefs
about Government Policy in Two Communities’, Journal of Risk
Research, 7: 73–90.
Potter, J. (1996). Representing Reality: Discourse, Rhetoric and
Social Construction. London: Sage.
Potter, J. (1997). ‘Discourse Analysis as a Way of Analysing Naturally
Occurring Talk’, in D. Silverman (ed.), Qualitative Research:
Theory, Method and Practice. London: Sage.
Potter, J. (2004). ‘Discourse Analysis’, in M. Hardy and A. Bryman
(eds), Handbook of Data Analysis. London: Sage.
Potter, J., and Hepburn, A. (2012). ‘Discourse Analysis’, in S. Becker,
A. Bryman, and H. Ferguson (eds), Understanding Research:
Methods and Approaches for Social Work and Social Policy. Bristol,
UK: Policy Press.
Potter, J., and Hepburn, A. (2019). ‘Shaming Interrogatives:
Admonishments, the Social Psychology of Emotion, and Discursive
Practices of Behaviour Modification in Family Mealtimes’, British
Journal of Social Psychology, 59(2): 347–64.
Potter, J., and Wetherell, M. (1987). Discourse and Social
Psychology: Beyond Attitudes and Behaviour. London: Sage.
Potter, J., and Wetherell, M. (1994). ‘Analyzing Discourse’, in A.
Bryman and R. G. Burgess (eds), Analyzing Qualitative Data.
London: Routledge.
Potter, J., Wetherell, M., and Chitty, A. (1991). ‘Quantification
Rhetoric: Cancer on Television’, Discourse and Society, 2: 333–65.
Poulton, E. (2012). ‘“If You Had Balls, You’d Be One of Us!” Doing
Gendered Research: Methodological Reflections on Being a Female
Academic Researcher in the Hyper-Masculine Subculture of
“Football Hooliganism”’, Sociological Research Online, 17: 4,
www.socresonline.org.uk/17/4/4.html (accessed 1 March 2021).
Prada, M., Rodrigues D., Garrido, M., Lopesa, D., Cavalheiro, B., and
Gaspar, R. (2018). ‘Motives, Frequency and Attitudes Toward Emoji
and Emoticon Use’, Telematics and Informatics, 35(7): 1925–34.
Pratt, M. G. (2008). ‘Fitting Oval Pegs into Round Holes: Tensions in
Evaluating and Publishing Qualitative Research in Top-Tier North
American Journals’, Organizational Research Methods, 11: 481–
509.
Preisendörfer, P., and Wolter, F. (2014). ‘Who Is Telling the Truth? A
Validation Study on Determinants of Response Behavior in Surveys’,
Public Opinion Quarterly, 78: 126–46.
Proksch, S.-O., and Slapin, J. B. (2010). ‘Position Taking in
European Parliament Speeches’, British Journal of Political Science,
40: 587–611.
Psathas, G. (1995). Conversation Analysis: The Study of Talk-in-
Interaction. Thousand Oaks, CA: Sage.
Puigvert, L., Valls, R., Garcia Yeste, C., Aguilar, C., and Merrill, B.
(2019). ‘Resistance to and Transformations of Gender-Based
Violence in Spanish Universities: A Communicative Evaluation of
Social Impact’, Journal of Mixed Methods Research, 13(3): 361–80,
https://doi.org/10.1177/1558689817731170.
Punch, M. (1979). Policing the Inner City: A Study of Amsterdam’s
Warmoesstraat. London: Macmillan.
Punch, M. (1994). ‘Politics and Ethics in Qualitative Research’, in N.
K. Denzin and Y. S. Lincoln (eds), Handbook of Qualitative
Research. Thousand Oaks, CA: Sage.
Rada, V., and Martín, V. (2014). ‘Random Route and Quota
Sampling: Do They Offer Any Advantage over Probability Sampling
Methods?’, Open Journal of Statistics, 4(5): 391–401.
Radley, A., and Chamberlain, K. (2001). ‘Health Psychology and the
Study of the Case: From Method to Analytic Concern’, Social Science
and Medicine, 53: 321–32.
Raskind, I. G., Shelton, R. C., Comeau, D. L., Cooper, H. L., Griffith,
D. M., and Kegler, M. C. (2019). ‘A Review of Qualitative Data
Analysis Practices in Health Education and Health Behavior
Research’, Health Education & Behavior, 46(1): 32–9.
Ravn, S. (2018). ‘“I Would Never Start a Fight but …”: Young
Masculinities, Perceptions of Violence, and Symbolic Boundary
Work in Focus Groups’, Men and Masculinities, 21(2): 291–309.
Rayburn, R. L., and Guittar, N. A. (2013). ‘“This is Where You Are
Supposed to Be”: How Homeless Individuals Cope with Stigma’,
Sociological Spectrum, 33: 159–74.
Reason, P., and Rowan, J. (1981). Human Inquiry: A Sourcebook of
New Paradigm Research. New York: John Wiley & Sons.
Reed, M. (2000). ‘The Limits of Discourse Analysis in Organizational
Analysis’, Organization, 7: 524–30.
Rees, R., Caird, J., Dickson, K., Vigurs, C., and Thomas, J. (2013).
The Views of Young People in the UK About Obesity, Body Size,
Shape and Weight: A Systematic Review. London: EPPI-Centre,
Social Science Research Unit, Institute of Education, University of
London.
Reiner, R. (2000). ‘Police Research’, in R. D. King and E. Wincup
(eds), Doing Research on Crime and Justice. Oxford: Oxford
University Press.
Reinharz, S. (1992). Feminist Methods in Social Research. New
York: Oxford University Press.
Reisner, S. L., Randazzo, R. K., White Hughto, J. M., Peitzmeier, S.,
DuBois, L. Z., Pardee, D. J., Marrow, E., McLean, S., and Potter, J.
(2018). ‘Sensitive Health Topics with Underserved Patient
Populations: Methodological Considerations for Online Focus Group
Discussions’, Qualitative Health Research, 28(10): 1658–73.
Revilla, M. A., Saris, W. E., and Krosnick, J. A. (2014). ‘Choosing the
Number of Categories in Agree–Disagree Scales’, Sociological
Methods and Research, 43: 73–97.
Rhodes, R. A., and Tiernan, A. (2015). ‘Focus Groups and Prime
Ministers’ Chiefs of Staff’, Journal of Organizational Ethnography,
4(2): 208–22.
Riessman, C. K. (1993). Narrative Analysis. Newbury Park, CA:
Sage.
Riessman, C. K. (2004). ‘Narrative Interviewing’, in M. S. Lewis-
Beck, A. Bryman, and T. F. Liao (eds), The Sage Encyclopedia of
Social Science Research Methods. Thousand Oaks, CA: Sage.
Riessman, C. K. (2008). Narrative Methods for the Human Sciences.
Thousand Oaks, CA: Sage.
Riffe, D., Lacy, S., and Fico, F. (2014). Analyzing Media Messages:
Using Quantitative Content Analysis in Research. London:
Routledge.
Ritchie, J., Spencer, L., and O’Connor, W. (2003). ‘Carrying out
Qualitative Analysis’, in J. Ritchie and J. Lewis (eds), Qualitative
Research Practice: A Guide for Social Science Students and
Researchers. London: Sage.
Ritzer, G. (1975). ‘Sociology: A Multiple Paradigm Science’,
American Sociologist, 10: 156–67.
Rivera, J. (2019). ‘When Attaining the Best Sample is Out of Reach:
Nonprobability Alternatives when Engaging in Public Administration
Research’, Journal of Public Affairs Education, 25(3): 314–42.
Robinson, S., and Leonard, K. (2019). Designing Quality Survey
Questions. London: Sage.
Röder, A., and Mühlau, P. (2014). ‘Are They Acculturating? Europe’s
Immigrants and Gender Egalitarianism’, Social Forces, 92: 899–
928.
Rogers, N. (2019). ‘Holding Court: The Social Regulation of
Masculinity in University Pickup Basketball’, Journal of
Contemporary Ethnography, 48(6): 731–49, doi:
10.1177/0891241619827369.
Rojek, C. (1995). Decentring Leisure: Rethinking Leisure Theory.
London: Sage.
Rorty, R. (1979). Philosophy and the Mirror of Nature. Princeton:
Princeton University Press.
Rose, G. (2016). Visual Methodologies: An Introduction to
Researching with Visual Materials, 4th edn. London: Sage.
Rosenfeld, M. (2017). ‘Marriage, Choice, and Couplehood in the Age
of the Internet’, Sociological Science, 4: 490–510.
Rosenhan, D. L. (1973). ‘On Being Sane in Insane Places’, Science,
179: 350–8.
Rosenthal, R., and Jacobson, L. (1968). Pygmalion in the
Classroom: Teacher Expectation and Pupils’ Intellectual
Development. New York: Holt, Rinehart & Winston.
Rosnow, R. L., and Rosenthal, R. (1997). People Studying People:
Artifacts and Ethics in Behavioral Research. New York: W. H.
Freeman.
Roulston, K., deMarrais, K., and Lewis, J. (2003). ‘Learning to
Interview in the Social Sciences’, Qualitative Inquiry, 9: 643–68.
Rowan, K. (2016). ‘“Who Are You in This Body?” Identifying Demons
and the Path to Deliverance in a London Pentecostal Church’,
Language in Society, 45(2): 247.
Rubinstein, J. (1973). City Police. New York: Ballantine.
Rübsamen, N., Akmatov, M., Castell, S., Karch, A., and Mikolajczyk,
T. (2017). ‘Comparison of Response Patterns in Different Survey
Designs: A Longitudinal Panel with Mixed-Mode and Online-Only
Design’, Emerging Themes in Epidemiology, 14: 4.
Ryan, G. W., and Bernard, H. R. (2003). ‘Techniques to Identify
Themes’, Field Methods, 15: 85–109.
Ryan, L. (2015). ‘“Inside” and “Outside” of What or Where?
Researching Migration through Multi-Positionalities’, Forum:
Qualitative Sozialforschung/Forum: Qualitative Social Research,
16(2).
Ryan, L., Kofman, E., and Aaron, P. (2011). ‘Insiders and Outsiders:
Working with Peer Researchers in Researching Muslim
Communities’, International Journal of Social Research
Methodology, 14(1): 49–60.
Ryan, S. (2009). ‘On the “Mop-Floor”: Researching Employment
Relations in the Hidden World of Commercial Cleaning’, in K.
Townsend and J. Burgess (eds), Method in the Madness: Research
Stories You Won’t Read in Textbooks. Oxford: Chandos.
Sacks, H., Schegloff, E. A., and Jefferson, G. (1974). ‘A Simplest
Systematics for the Organization of Turn-Taking in Conversation’,
Language, 50: 696–735.
Salancik, G. R. (1979). ‘Field Stimulations for Organizational
Behavior Research’, Administrative Science Quarterly, 24: 638–49.
Saldaña, J. (2021). The Coding Manual for Qualitative Researchers,
3rd edn. London: Sage.
Salkind, N., and Frey, B. (2019). Statistics for People Who (Think
They) Hate Statistics. London: Sage.
Sallaz, J. J. (2009). The Labor of Luck: Casino Capitalism in the
United States and South Africa. Berkeley: University of California
Press.
Sammons, P., and Davis, S. (2017). ‘Mixed Methods Approaches and
their Application in Educational Research’, in D. Wyse, N. Selwyn,
and E. Smith (eds), The BERA/SAGE Handbook of Educational
Research, vol. 2. London: SAGE, doi: 10.4135/9781473983953.n24.
Sampson, H., and Thomas, M. (2003). ‘Lone Researchers at Sea:
Gender, Risk and Responsibility’, Qualitative Research, 3: 165–89.
Samuel, R. (1976). ‘Oral History and Local History’, History
Workshop Journal, 1: 191–208.
Sanders, C. R. (2009). ‘Colorful Writing: Conducting and Living with
a Tattoo Ethnography’, in A. J. Puddephatt, W. Shaffir, and S. W.
Kleinknecht (eds), Ethnographies Revisited: Constructing Theory in
the Field. London: Routledge.
Sandoval, C. (2000). ‘US Third World Feminism: Differential Social
Movement’, in C. Sandoval, Methodology of the Oppressed.
Minneapolis, MN: University of Minnesota Press, 40–63.
Sanjek, R. (1990). ‘A Vocabulary for Fieldnotes’, in R. Sanjek (ed.),
Fieldnotes: The Making of Anthropology. Ithaca, NY: Cornell
University Press.
Sarsby, J. (1984). ‘The Fieldwork Experience’, in R. F. Ellen (ed.),
Ethnographic Research: A Guide to General Conduct. London:
Academic Press.
Savage, M. (2010). Identities and Social Change in Britain since
1940: The Politics of Method. Oxford: Oxford University Press.
Savage, M. (2016). ‘The Fall and Rise of Class Analysis in British
Sociology, 1950–2016’, Tempo Social, 28(2): 57–72.
Savage, M., and Burrows, R. (2007). ‘The Coming Crisis of Empirical
Sociology’, Sociology, 41: 885–99.
Savage, M., Devine, F., Cunningham, N., Taylor, M., Li, Y.,
Hjellbrekke, J., Le Roux, B., Friedman, S., and Miles, A. (2013). ‘A
New Model of Social Class? Findings from the BBC’s Great British
Class Survey Experiment’, Sociology, 47(2): 219–50,
https://doi.org/10.1177/0038038513481128.
Schaeffer, N. (2017). ‘Survey Interviewing: Departures from the
Script’, in D. L. Vannette and J. A. Krosnick (eds), The Palgrave
Handbook of Survey Research. Basingstoke, UK: Palgrave, 109–19.
Schaeffer, N. C., Dykema, J., and Maynard, D. W. (2010).
‘Interviewers and Interviewing’, in P. V. Marsden and J. D. Wright
(eds), Handbook of Survey Research, 2nd edn. Bingley, UK:
Emerald.
Schegloff, E. A. (1997). ‘Whose Text? Whose Context?’, Discourse
and Society, 8: 165–87.
Scheper-Hughes, N. (2004). ‘Parts Unknown: Undercover
Ethnography of the Organs-Trafficking Underworld’, Ethnography,
5: 29–73.
Schneider, C. J. (2016). ‘Police Presentational Strategies on Twitter
in Canada’, Policing and Society, 26(2): 129–47, doi:
10.1080/10439463.2014.922085.
Schneider, C. J., and Trottier, D. (2012). ‘The 2011 Vancouver Riot
and the Role of Facebook in Crowd-Sourced Policing’, BC Studies,
175: 57–72.
Schneider, S. M., and Foot, K. A. (2004). ‘The Web as an Object of
Study’, New Media and Society, 6: 114–22.
Schreier, M. (2014). ‘Ways of Doing Qualitative Content Analysis:
Disentangling Terms and Terminologies’, Forum: Qualitative
Sozialforschung/Forum: Qualitative Social Research,
15(1/January).
Schrøder, K. C. (2019). ‘Audience Reception Research in a Post-
broadcasting Digital Age’, Television & New Media, 20(2): 155–69.
Schuman, H., and Presser, S. (1981). Questions and Answers in
Attitude Surveys: Experiments on Question Form, Wording, and
Context. San Diego, CA: Academic Press.
Schutz, A. (1962). Collected Papers I: The Problem of Social Reality.
The Hague: Martinus Nijhoff.
Schwartz-Shea, P., and Yanow, D. (2012). Interpretive Research
Design: Concepts and Processes. New York: Routledge.
Scott, J. (1990). A Matter of Record. Cambridge: Polity.
Scourfield, J., Colombo, G., Burnap, P., Evans, R., Jacob, N.,
Williams, M., and Caul, S. (2019). ‘The Number and Characteristics
of Newspaper and Twitter Reports on Suicides and Road Traffic
Deaths in Young People’, Archives of Suicide Research, 23(3): 507–
22.
Seale, C. (1999). The Quality of Qualitative Research. London: Sage.
Seale, C., Charteris-Black, J., MacFarlane, A., and McPherson, A.
(2010). ‘Interviews and Internet Forums: A Comparison of Two
Sources of Qualitative Data’, Qualitative Health Research, 20: 595–
606.
Seale, C., Ziebland, S., and Charteris-Black, J. (2006). ‘Gender,
Cancer Experience and Internet Use: A Comparative Keyword
Analysis of Interviews and Online Cancer Support Groups’, Social
Science and Medicine, 62: 2577–90.
Sebo, P., Maisonneuve, H., Cerutti, B., Fournier, J., Senn, N., and
Haller, D. (2017). ‘Rates, Delays, and Completeness of General
Practitioners’ Responses to a Postal Versus Web-Based Survey: A
Randomized Trial’, Journal of Medical Internet Research, 19(3):
e83.
Seeman, M. (1959). ‘On the Meaning of Alienation’, American
Sociological Review, 24: 783–91.
Seitz, S. (2015). ‘Pixilated Partnerships—Overcoming Obstacles in
Qualitative Interviews via Skype: A Research Note’, Qualitative
Research, 16(2): 229–35.
Sempik, J., Becker, S., and Bryman, A. (2007). ‘The Quality of
Research Evidence in Social Policy: Consensus and Dissension
among Researchers’, Evidence and Policy, 3: 407–23.
Shadish, W. R., Cook, T. D., and Campbell, D. T. (2002).
Experimental and Quasi-experimental Design for Generalized
Causal Inference. Boston: Houghton Mifflin.
Shah, A. P., and Gu, Y. (2020). ‘A Mixed-Methods Approach to
Identifying Sexual Assault Concerns on a University Campus’,
Journal of Aggression Maltreatment & Trauma, 29(6): 643–60,
doi: 10.1080/10926771.2020.1734707.
Shapiro, S., and Humphreys, L. (2013). ‘Exploring Old and New
Media: Comparing Military Blogs to Civil War Letters’, New Media
and Society, 15: 1151–67.
Sharma, G. (2017). ‘Pros and Cons of Different Sampling
Techniques’, International Journal of Applied Research, 3(7): 749–
52.
Shaw, C. R. (1930). The Jack-Roller. Chicago: University of Chicago
Press.
Sheehan, K., and Hoy, M. G. (1999). ‘Using E-Mail to Survey Internet
Users in the United States: Methodology and Assessment’, Journal
of Computer-Mediated Communication, 4(3),
https://onlinelibrary.wiley.com/doi/full/10.1111/j.1083-
6101.1999.tb00101.x (accessed 20 March 2021).
Sheffield Fairness Commission (2013). Making Sheffield Fairer,
www.sheffield.gov.uk/content/dam/sheffield/docs/your-city-
council/our-plans,-policies-and-
performance/Fairness%20Commission%20Report.pdf (accessed 15
July 2020).
Sheller, M., and Urry, J. (2006). ‘The New Mobilities Paradigm’,
Environment and Planning A, 38: 207–26.
Shepherd, J., Garcia, J., Oliver, S., Harden, A., Rees, R., Brunton, G.,
and Oakley, A. (2002). Barriers to, and Facilitators of the Health of
Young People: A Systematic Review of Evidence on Young People’s
Views and on the Interventions in Mental Health, Physical Activity
and Healthy Eating. Vol. 2: Complete Report. London: EPPI-Centre,
Social Science Research Unit, Institute of Education, University of
London.
https://eppi.ioe.ac.uk/cms/Portals/0/PDF%20reviews%20and%20s
ummaries/Vol%202_Web.pdf?ver=2006-03-02-124349-650.
Shepherd, J., Harden, A., Rees, R., Brunton, G., Garcia, J., Oliver, S.,
and Oakley, A. (2006). ‘Young People and Healthy Eating: A
Systematic Review of Research on Barriers and Facilitators’, Health
Education Research, 21: 239–57.
Shuy, R. W. (2002). ‘In-Person versus Telephone Interviewing’, in J.
F. Gubrium and J. A. Holstein (eds), Handbook of Interview
Research: Context and Method. Thousand Oaks, CA: Sage.
Siegel, M., Johnson, R. M., Tyagi, K., Power, K., Lohsen, M. C.,
Ayers, A. J., and Jernigan, D. H. (2013). ‘Alcohol Brand References
in U.S. Popular Music, 2009–2011’, Substance Use and Misuse, 48:
1475–84.
Sillince, J. A. A., and Brown, A. D. (2009). ‘Multiple Organizational
Identities and Legitimacy: The Rhetoric of Police Websites’, Human
Relations, 62: 1829–56.
Silva, E. B., and Wright, D. (2005). ‘The Judgment of Taste and
Social Position in Focus Group Research’, Sociologia e ricerca
sociale, 76–7: 241–53.
Silva, E. B., and Wright, D. (2008). ‘Researching Cultural Capital:
Complexities in Mixing Methods’, Methodological Innovations
Online, 2(3): 50–62, doi: 10.4256/mio.2008.0005 (accessed 1 March
2021).
Silverman, D. (1984). ‘Going Private: Ceremonial Forms in a Private
Oncology Clinic’, Sociology, 18: 191–204.
Silverman, D. (1985). Qualitative Methodology and Sociology:
Describing the Social World. Aldershot, UK: Gower.
Silverman, D. (2015). Interpreting Qualitative Data. London: Sage.
Silverman, D. (2017). Doing Qualitative Research: A Practical
Handbook, 5th edn. London: Sage.
Simmel, G. (1908). Soziologie: Untersuchungen über die Formen
der Vergesellschaftung. Leipzig: Duncker & Humblot.
Simon, H. (1960). Administrative Behavior: A Study of Decision-
Making Processes in Administrative Organizations, 2nd edn. New
York: Macmillan.
Singer, E. (2003). ‘Exploring the Meaning of Consent: Participation
in Research and Beliefs about Risks and Benefits’, Journal of Official
Statistics, 19: 273–85.
Singer, E., and Couper, M. (2017). ‘Some Methodological Uses of
Responses to Open Questions and Other Verbatim Comments in
Quantitative Surveys’, Methods, Data and Analyses, 11(2): 115–34.
Singh, S., and Wassenaar, D. (2016). ‘Contextualising the Role of the
Gatekeeper in Social Science Research’, South African Journal of
Bioethics and Law, 9(1): 42, doi: 10.7196/SAJBL.2016.v9i1.465.
Sink, A., and Mastro, D. (2017). ‘Depictions of Gender on Primetime
Television: A Quantitative Content Analysis’, Mass Communication
and Society, 20(1): 3–22, doi: 10.1080/15205436.2016.1212243.
Sinyor, M., Schaffer, A., Hull, I., and Peisah, C. (2015). ‘Last Wills
and Testaments in a Large Sample of Suicide Notes: Implications for
Testamentary Capacity’, British Journal of Psychiatry, 206(1): 72–6.
Skeggs, B. (1994). ‘Situating the Production of Feminist
Ethnography’, in M. Maynard and J. Purvis (eds), Researching
Women’s Lives from a Feminist Perspective. London: Taylor &
Francis.
Skeggs, B. (1997). Formations of Class and Gender. London: Sage.
Skeggs, B. (2001). ‘Feminist Ethnography’, in P. Atkinson, A. Coffey,
S. Delamont, J. Lofland, and L. Lofland (eds), Handbook of
Ethnography. London: Sage.
Skoglund, R. I. (2019). ‘When “Words Do Not Work”: Intervening in
Children’s Conflicts in Kindergarten’, International Research in
Early Childhood Education, 9(1): 23–38.
Sleijpen, M., Boeije, H. R., Kleber, R. J., and Mooren, T. (2016).
‘Between Power and Powerlessness: A Meta-ethnography of Sources
of Resilience in Young Refugees’, Ethnicity & Health, 21(2): 158–80.
Sloan, L. (2017). ‘Who Tweets in the United Kingdom? Profiling the
Twitter Population Using the British Social Attitudes Survey 2015’,
Social Media and Society, 3(1): 1–11,
https://doi.org/10.1177/2056305117698981.
Sloan, L., Jessop, C., Al Baghal, T., and Williams, M. (2019). ‘Linking
Survey and Twitter Data: Informed Consent, Disclosure, Security
and Archiving’, Journal of Empirical Research on Human Research
Ethics, 15(1–2): 63–76, doi: 10.1177/1556264619853447.
Smithson, J., and Brannen, J. (2002). ‘Qualitative Methodology in
Cross-National Research’, in J. Brannen, S. Lewis, A. Nilsen, and J.
Smithson (eds), Young Europeans, Work and Family: Futures in
Transition. London: Routledge.
Smyth, J. D., Dillman, D. A., Christian, L. M., and McBride, N.
(2009). ‘Open-Ended Questions in Web Surveys: Can Increasing the
Size of Answer Spaces and Providing Extra Verbal Instructions
Improve Response Quality?’, Public Opinion Quarterly, 73: 325–37.
Smyth, J. D., Dillman, D. A., Christian, L. M., and Stern, M. J.
(2006). ‘Comparing Check-All and Forced-Choice Question Formats
in Web Surveys’, Public Opinion Quarterly, 70: 66–77.
Smyth, J., and Olson, K. (2015). ‘Recording What the Respondent
Says: Does Question Format Matter?’ paper presented at the 2015
Annual Conference of the American Association for Public Opinion
Research, Hollywood, FL.
Snee, H. (2013). ‘Framing the Other: Cosmopolitanism and the
Representation of Difference in Overseas Gap Year Narratives’,
British Journal of Sociology, 64: 142–62.
Snyder, N., and Glueck, W. F. (1980). ‘How Managers Plan: The
Analysis of Managers’ Activities’, Long Range Planning, 13: 70–6.
So, K. K. F., Oh, H., and Min, S. (2018). ‘Motivations and Constraints
of Airbnb Consumers: Findings from a Mixed-Methods Approach’,
Tourism Management, 67: 224–36, doi:
10.1016/j.tourman.2018.01.009.
Solheim, J., Rege, M., and McTigue, E. (2017). ‘Study Protocol: “Two
Teachers”: A Randomized Controlled Trial Investigating Individual
and Complementary Effects of Teacher–Student Ratio in Literacy
Instruction and Professional Development for Teachers’,
International Journal of Educational Research, 86: 122–30.
Spratto, E., and Bandalos, D. (2020). ‘Attitudinal Survey
Characteristics Impacting Participant Responses’, Journal of
Experimental Education, 88(4): 620–42.
Sprokkereef, A., Larkin, E., Pole, C. J., and Burgess, R. G. (1995).
‘The Data, the Team, and the Ethnography’, in R. G. Burgess (ed.),
Computing and Qualitative Research, Studies in Qualitative
Methodology, vol. 5. Greenwich, CT: JAI Press, 81–103.
Stacey, J. (1988). ‘Can there Be a Feminist Ethnography?’, Women’s
Studies International Forum, 11: 21–7.
Stacey, M. (1960). Tradition and Change: A Study of Banbury.
London: Oxford University Press.
Stake, R. E. (1995). The Art of Case Study Research. Thousand Oaks,
CA: Sage.
Stanley, L., and Temple, B. (1995). ‘Doing the Business? Evaluating
Software Packages to Aid the Analysis of Qualitative Data Sets’, in R.
G. Burgess (ed.), Computing and Qualitative Research, Studies in
Qualitative Methodology, vol. 5. Greenwich, CT: JAI Press, 169–97.
Stauff, M. (2018). ‘The Assertive Image: Referentiality and
Reflexivity in Sports Photography’, Historical Social
Research/Historische Sozialforschung, 43(2): 53–71.
Steele, S., Ruskin, G., McKee, M., and Stuckler, D. (2019). ‘“Always
Read the Small Print”: A Case Study of Commercial Research
Funding, Disclosure and Agreements with Coca-Cola’, Journal of
Public Health Policy, 40: 273–85,
https://doi.org/10.1057/s41271-019-00170-9.
Stewart, K., and Williams, M. (2005). ‘Researching Online
Populations: The Use of Online Focus Groups for Social Research’,
Qualitative Research, 5: 395–416.
Stoddart, M., Haluza-DeLay, R., and Tindall, D. (2016). ‘Canadian
News Media Coverage of Climate Change: Historical Trajectories,
Dominant Frames, and International Comparisons’, Society and
Natural Resources, 29(2): 218–32.
Strauss, A. (1987). Qualitative Analysis for Social Scientists. New
York: Cambridge University Press.
Strauss, A., and Corbin, J. M. (1990). Basics of Qualitative
Research: Grounded Theory Procedures and Techniques. Newbury
Park, CA: Sage.
Strauss, A., and Corbin, J. M. (1998). Basics of Qualitative
Research: Techniques and Procedures for Developing Grounded
Theory. Thousand Oaks, CA: Sage.
Strauss, A., Schatzman, L., Ehrich, D., Bucher, R., and Sabshin, M.
(1973). ‘The Hospital and its Negotiated Order’, in G. Salaman and K.
Thompson (eds), People and Organizations. London: Longman.
Streiner, D. L., and Sidani, S. (2010). When Research Goes Off the
Rails: Why it Happens and What You Can Do About It. New York:
Guilford.
Stroobant, J., De Dobbelaer, R., and Raeymaeckers, K. (2018).
‘Tracing the Sources: A Comparative Content Analysis of Belgian
Health News’, Journalism Practice, 12(3): 344–61.
Sturges, J. E., and Hanrahan, K. J. (2004). ‘Comparing Telephone
and Face-to-Face Qualitative Interviewing: A Research Note’,
Qualitative Research, 4: 107–18.
Sturgis, P., Brunton-Smith, I., Kuha, J., and Jackson, J. (2014a).
‘Ethnic Diversity, Segregation and the Social Cohesion of
Neighbourhoods in London’, Ethnic and Racial Studies, 37: 1286–
309.
Sturgis, P., Roberts, C., and Smith, P. (2014b). ‘Middle Alternatives
Revisited: How the Neither/Nor Response Acts as a Way of Saying “I
Don’t Know?”’, Sociological Methods and Research, 43: 15–38.
Sudman, S., and Bradburn, N. M. (1982). Asking Questions: A
Practical Guide to Questionnaire Design. San Francisco: Jossey-
Bass.
Surra, C. A., and Ridley, C. A. (1991). ‘Multiple Perspectives on
Interaction: Participants, Peers, and Observers’, in B. M.
Montgomery and S. Duck (eds), Studying Interpersonal Interaction.
New York: Guilford Press, 35–55.
Sutton, R. I., and Rafaeli, A. (1988). ‘Untangling the Relationship
between Displayed Emotions and Organizational Sales: The Case of
Convenience Stores’, Academy of Management Journal, 31: 461–87.
Swain, J. (2004). ‘The Resources and Strategies that 10–11-Year-Old
Boys Use to Construct Masculinities in the School Setting’, British
Educational Research Journal, 30: 167–85.
Sweet, C. (2001). ‘Designing and Conducting Virtual Focus Groups’,
Qualitative Market Research, 4: 130–5.
Synnot, A., Hill, S., Summers, M., and Taylor, M. (2014). ‘Comparing
Face-to-Face and Online Qualitative Research with People with
Multiple Sclerosis’, Qualitative Health Research, 24(3): 431–8.
Tabachnick, B., and Fidell, L. (2018). Using Multivariate Statistics.
London: Pearson.
Tadajewski, M. (2016). ‘Focus Groups: History, Epistemology and
Non-individualistic Consumer Research’, Consumption Markets &
Culture, 19(4): 319–45.
Tarr, J. (2013). Overly Honest Social Science? The Value of
Acknowledging Bias, Subjectivity and the Messiness of Research,
https://blogs.lse.ac.uk/impactofsocialsciences/2013/03/13/overly-
honest-social-science/.
Tashakkori, A., and Teddlie, C. (eds) (2010). Handbook of Mixed
Methods in Social and Behavioral Research, 2nd edn. Los Angeles:
Sage.
Taylor, J., Lamont, A., and Murray, M. (2018). ‘Talking about
Sunbed Tanning in Online Discussion Forums: Assertions and
Arguments’, Psychology & Health, 33(4): 518–36.
Tcherni, M., Davies, A., Lopes, G., and Lizotte, A. (2016). ‘The Dark
Figure of Online Property Crime: Is Cyberspace Hiding a Crime
Wave?’, Justice Quarterly, 33(5): 890–911.
Teddlie, C., and Yu, F. (2007). ‘Mixed Methods Sampling: A
Typology with Examples’, Journal of Mixed Methods Research, 1:
77–100.
Thelwall, M. (2017). ‘Sentiment Analysis’, in L. Sloan and A. Quan-
Haase (eds), The SAGE Handbook of Social Media Research
Methods. London: Sage.
Thomas, D. R. (2006). ‘A General Inductive Approach for Analyzing
Qualitative Evaluation Data’, American Journal of Evaluation, 27:
237–46.
Thomas, G., Fisher, R., Whitmarsh, L., Milfont, T., and Poortinga, W.
(2018). ‘The Impact of Parenthood on Environmental Attitudes and
Behaviour: A Longitudinal Investigation of the Legacy Hypothesis’,
Population and Environment, 39(3): 261–76.
Thomas, J., and Harden, A. (2008). ‘Methods for the Thematic
Synthesis of Qualitative Research in Systematic Reviews’, BMC
Medical Research Methodology, 8,
www.ncbi.nlm.nih.gov/pmc/articles/PMC2478656/pdf/1471-2288-
8-45.pdf (accessed 1 March 2021).
Thomas, J., Harden, A., and Newman, M. (2012). ‘Synthesis:
Combining Results Systematically and Appropriately’, in D. Gough,
S. Oliver, and J. Thomas (eds), An Introduction to Systematic
Reviews. London: Sage.
Thompson, C., Lewis, D., Greenhalgh, T., Taylor, S., and Cummins,
S. (2013). ‘A Health and Social Legacy for East London: Narratives of
“Problem” and “Solution” Around London 2012’, Sociological
Research Online, 18(1), www.socresonline.org.uk/18/2/1.html
(accessed 1 March 2021).
Thornberg, R. (2018). ‘School Bullying and Fitting into the Peer
Landscape: A Grounded Theory Field Study’, British Journal of
Sociology of Education, 39(1): 144–58.
Thornberg, R., and Charmaz, K. (2014). ‘Grounded Theory and
Theoretical Coding’, in U. Flick (ed.), SAGE Handbook of Qualitative
Data Analysis. London: Sage.
Timulak, L. (2014). ‘Qualitative Meta-Analysis’, in U. Flick (ed.),
SAGE Handbook of Qualitative Data Analysis. London: Sage.
Tinati, R., Halford, S., Carr, L., and Pope, C. (2014). ‘Big Data:
Methodological Challenges and Approaches for Sociological
Analysis’, Sociology, 48: 663–81.
Tourangeau, R., and Smith, T. W. (1996). ‘Asking Sensitive
Questions: The Impact of Data Collection Mode, Question Format,
and Question Context’, Public Opinion Quarterly, 60: 275–304.
Tourangeau, R., and Yan, T. (2007). ‘Sensitive Questions in Surveys’,
Psychological Bulletin, 133: 859–83.
Tourangeau, R., Conrad, F. G., and Couper, M. P. (2013). The Science
of Web Surveys. Oxford: Oxford University Press.
Townsend, K., and Burgess, J. (eds) (2009). Method in the Madness:
Research Stories You Won’t Read in Textbooks. Oxford: Chandos.
Tracy, S. J. (2010). ‘Qualitative Quality: Eight “Big Tent” Criteria for
Excellent Qualitative Research’, Qualitative Inquiry, 16: 837–51.
Tranfield, D., Denyer, D., and Smart, P. (2003). ‘Towards a
Methodology for Developing Evidence-Informed Management
Knowledge by Means of Systematic Review’, British Journal of
Management, 14: 207–22.
Trouille, D., and Tavory, I. (2016). ‘Shadowing: Warrants for
Intersituational Variation in Ethnography’, Sociological Methods &
Research, 48(3): 534–60.
Trow, M. (1957). ‘Comment on “Participant Observation and
Interviewing: A Comparison”’, Human Organization, 16: 33–5.
Tufte, E. (1983). The Visual Display of Quantitative Information.
Cheshire, CT: Graphics Press.
Tuinman, M., Lehmann, V., and Hagedoorn, M. (2018). ‘Do Single
People Want to Date a Cancer Survivor? A Vignette Study’, PLoS
ONE, 13(3).
Turnbull, C. (1973). The Mountain People. London: Cape.
Turner, V. W., and Bruner, E. (eds) (1986). The Anthropology of
Experience. Urbana: University of Illinois Press.
Tuttas, C. A. (2014). ‘Lessons Learned using Web Conference
Technology for Online Focus Group Interviews’, Qualitative Health
Research, 25(1): 122–33.
Twine, F. W. (2006). ‘Visual Ethnography and Racial Theory: Family
Photographs as Archives of Interracial Intimacies’, Ethnic and Racial
Studies, 29: 487–511.
Twine, R. (2017). ‘Materially Constituting a Sustainable Food
Transition: The Case of Vegan Eating Practice’, Sociology, 52(1):
166–81.
Twitter (2019). ‘Display Requirements: Tweets’,
https://developer.twitter.com/en/developer-terms/display-
requirements.html.
Tyrer, S., and Heyman, B. (2016). ‘Sampling in Epidemiological
Research: Issues, Hazards and Pitfalls’, BJPsych Bulletin, 40(2):
57–60.
Uekusa, S. (2019). ‘Surfing with Bourdieu! A Qualitative Analysis of
the Fluid Power Relations among Surfers in the Line-Ups’, Journal
of Contemporary Ethnography, 48(4): 538–62,
https://doi.org/10.1177/0891241618802879.
Underhill, C., and Olmsted, M. G. (2003). ‘An Experimental
Comparison of Computer-Mediated and Face-to-Face Focus Groups’,
Social Science Computer Review, 21: 506–12.
Underwood, M. (2017). ‘Exploring the Social Lives of Image and
Performance Enhancing Drugs: An Online Ethnography of the Zyzz
Fandom of Recreational Bodybuilders’, International Journal of
Drug Policy, 39: 78–85.
Uprichard, E., and Dawney, L. (2019). ‘Data Diffraction: Challenging
Data Integration in Mixed Methods Research’, Journal of Mixed
Methods Research, 13(1): 19–32, doi: 10.1177/1558689816674650.
Upton, A. (2011). ‘In Testing Times: Conducting an Ethnographic
Study of Animal Rights Protestors’, Sociological Research Online, 16,
www.socresonline.org.uk/16/4/1.html (accessed 1 March 2021).
Vacchelli, E. (2018). Embodied Research in Migration Studies:
Using Creative and Participatory Approaches. Bristol, UK: Policy
Press.
van Bezouw, M. J., Garyfallou, A., Oană, I. E., and Rojon, S. (2019).
‘A Methodology for Cross-National Comparative Focus Group
Research: Illustrations from Discussions about Political Protest’,
Quality & Quantity, 53(6): 2719–39.
Vancea, M., Shore, J., and Utzet, M. (2019). ‘Role of Employment-
Related Inequalities in Young Adults’ Life Satisfaction: A
Comparative Study in Five European Welfare State Regimes’,
Scandinavian Journal of Public Health, 47(3): 357–65.
van Dijk, T. A. (1997). ‘Discourse as Interaction in Society’, in T. A.
van Dijk (ed.), Discourse as Social Interaction, Discourse Studies: A
Multidisciplinary Introduction, vol. 2. Newbury Park, CA: Sage.
Van Maanen, J. (1978). ‘On Watching the Watchers’, in P. Manning
and J. Van Maanen (eds), Policing: The View from the Street. Santa
Monica, CA: Goodyear.
Van Maanen, J. (1988). Tales of the Field: On Writing Ethnography.
Chicago: University of Chicago Press.
Van Maanen, J. (1991a). ‘Playing Back the Tape: Early Days in the
Field’, in W. B. Shaffir and R. A. Stebbins (eds), Experiencing
Fieldwork: An Inside View of Qualitative Research. Newbury Park,
CA: Sage.
Van Maanen, J. (1991b). ‘The Smile Factory: Work at Disneyland’, in
P. J. Frost, L. F. Moore, M. R. Louis, C. C. Lundberg, and J. Martin
(eds), Reframing Organizational Culture. Newbury Park, CA: Sage.
Van Maanen, J. (2010). ‘A Song for my Supper: More Tales of the
Field’, Organizational Research Methods, 13: 240–55.
Van Maanen, J. (2011). Tales of the Field: On Writing Ethnography,
2nd edn. Chicago: University of Chicago Press.
Van Maanen, J., and Kolb, D. (1985). ‘The Professional Apprentice:
Observations on Fieldwork Roles in Two Organizational Settings’,
Research in the Sociology of Organizations, 4: 1–33.
Van Selm, M., and Jankowski, N. W. (2006). ‘Conducting Online
Surveys’, Quality and Quantity, 40(3): 435–56.
Vasquez, J. M., and Wetzel, C. (2009). ‘Tradition and the Invention
of Racial Selves: Symbolic Boundaries, Collective Authenticity, and
Contemporary Struggles for Racial Equality’, Ethnic and Racial
Studies, 32: 1557–75.
Vaughan, D. (1996). The Challenger Launch Decision: Risky
Technology, Culture, and Deviance at NASA. Chicago: University of
Chicago Press.
Vaughan, D. (2004). ‘Theorizing Disaster: Analogy, Historical
Ethnography, and the Challenger Accident’, Ethnography, 5: 315–
47.
Vaughan, D. (2006). ‘The Social Shaping of Commission Reports’,
Sociological Forum, 21: 291–306.
Venkatesh, S. (2008). Gang Leader for a Day: A Rogue Sociologist
Crosses the Line. London: Allen Lane.
Verhagen, S., Hasmi, L., Drukker, M., van Os, J., and Delespaul, P.
(2016). ‘Use of the Experience Sampling Method in the Context of
Clinical Trials’, Evidence Based Mental Health, 19(3): 86–9.
Vincent, J., Kian, E. M., Pedersen, P. M., Kutz, A., and Hill, J. S.
(2010). ‘England Expects: English Newspapers’ Narratives about the
English Football Team in the 2006 World Cup’, International
Review of the Sociology of Sport, 45: 199–223.
Voas, D. (2014). ‘Towards a Sociology of Attitudes’, Sociological
Research Online, 19, www.socresonline.org.uk/19/1/12.html
(accessed 1 March 2021).
Vogl, S. (2013). ‘Telephone versus Face-to-Face Interviews: Mode
Effect in Semi-structured Interviews with Children’, Sociological
Methodology, 43(1): 133–77.
Vogt, W. (ed.) (2011). Sage Quantitative Research Methods. London:
Sage.
Wacquant, L. (2004). Body and Soul: Notebooks of an Apprentice
Boxer. Oxford: Oxford University Press.
Wacquant, L. (2009). ‘Habitus as Topic and Tool: Reflections on
Becoming a Prize Fighter’, in A. J. Puddephatt, W. Shaffir, and S. W.
Kleinknecht (eds), Ethnographies Revisited: Constructing Theory in
the Field. London: Routledge.
Wagg, S. (2010). ‘Cristiano Meets Mr Spleen: Global Football
Celebrity, Mythic Manchester and the Portuguese Diaspora’, Sport in
Society, 13: 919–34.
Wagg, S. (2017). ‘Leisure and “the Civilising Process”’, in K.
Spracklen, B. Lashua, E. Sharpe, and S. Swain (eds), The Palgrave
Handbook of Leisure Theory. Basingstoke, UK: Palgrave.
Walby, S., and Towers, J. (2017). ‘Measuring Violence to End
Violence: Mainstreaming Gender’, Journal of Gender-Based
Violence, 1(1): 11–31.
Walker, J. (2010). ‘Measuring Plagiarism: Researching What
Students Do, Not What They Say’, Studies in Higher Education, 35:
41–59.
Walsh, I., Holton, J. A., Bailyn, L., Fernandez, W., Levian, N., and
Glaser, B. (2015). ‘What Grounded Theory is … A Critically Reflective
Conversation Among Scholars’, Organizational Research Methods,
18(4): 581–99, doi: 10.1177/1094428114565028.
Walters, R. (2020). ‘Relinquishing Control in Focus Groups: The Use
of Activities in Feminist Research with Young People to Improve
Moderator Performance’, Qualitative Research, 20(4): 361–77.
Wansink, B., Mukund, A., and Weislogel, A. (2016). ‘Food Art Does
Not Reflect Reality: A Quantitative Content Analysis of Meals in
Popular Paintings’, SAGE Open, 6(3): 1–10.
Warr, D. J. (2005). ‘“It Was Fun … But We Don’t Usually Talk about
These Things”: Analyzing Sociable Interaction in Focus Groups’,
Qualitative Inquiry, 11: 200–25.
Warren, C. A. B. (2002). ‘Qualitative Interviewing’, in J. F. Gubrium
and J. A. Holstein (eds), Handbook of Interview Research: Context
and Method. Thousand Oaks, CA: Sage.
Warren, L. (2018). ‘Representing Self-Representing Ageing’, in A.
Walker (ed.), The New Dynamics of Ageing, vol. 2. Bristol, UK: Policy
Press, 219–42.
Warwick, D. P. (1982). ‘Tearoom Trade: Means and End in Social
Research’, in M. Bulmer (ed.), Social Research Ethics: An
Examination of the Merits of Covert Participant Observation.
London: Macmillan, 38–58.
Wästerfors, D., and Åkerström, M. (2016). ‘Case History Discourse:
A Rhetoric of Troublesome Youngsters and Faceless Treatment’,
European Journal of Social Work, 19(6): 871–86.
Waters, J. (2015). ‘Snowball Sampling: A Cautionary Tale Involving a
Study of Older Drug Users’, International Journal of Social
Research Methodology, 18(4): 367–80.
Watson, A., Ward, J., and Fair, J. (2018). ‘Staging Atmosphere:
Collective Emotional Labour on the Film Set’, Social & Cultural
Geography, https://doi.org/10.1080/14649365.2018.1551563.
Watson, N., and Wilkins, R. (2011). ‘Experimental Change from
Paper-Based Interviewing to Computer-Assisted Interviewing in the
HILDA Survey’, HILDA Discussion Paper Series No. 2/11, Melbourne
Institute of Applied Economic and Social Research, University of
Melbourne.
Watts, L. (2008). ‘The Art and Craft of Train Travel’, Social and
Cultural Geography, 9: 711–26.
Weaver, A., and Atkinson, P. (1994). Microcomputing and
Qualitative Data Analysis. Aldershot, UK: Avebury.
Webb, E. J., Campbell, D. T., Schwartz, R. D., and Sechrest, L.
(1966). Unobtrusive Measures: Nonreactive Measures in the Social
Sciences. Chicago: Rand McNally.
Weber, M. (1947). The Theory of Social and Economic Organization,
trans. A. M. Henderson and T. Parsons. New York: Free Press.
Weinmann, T., Thomas, S., Brilmayer, S., Heinrich, S., and Radon,
K. (2012). ‘Testing Skype as an Interview Method in Epidemiologic
Research: Response and Feasibility’, International Journal of Public
Health, 57: 959–61.
West, B., and Blom, A. (2017). ‘Explaining Interviewer Effects: A
Research Synthesis’, Journal of Survey Statistics and Methodology,
5(2): 175–211.
Westmarland, L. (2001). ‘Blowing the Whistle on Police Violence:
Ethics, Research and Ethnography’, British Journal of Criminology,
41: 523–35.
Wetherell, M. (1998). ‘Positioning and Interpretative Repertoires:
Conversation Analysis and Post-Structuralism in Dialogue’,
Discourse and Society, 9: 387–412.
Whitaker, C., Stevelink, S., and Fear, N. (2017). ‘The Use of Facebook
in Recruiting Participants for Health Research Purposes: A
Systematic Review’, Journal of Medical Internet Research, 19(8):
e290.
White, P. (2017). Developing Research Questions, 2nd edn. London:
Palgrave.
Whyte, W. F. (1955). Street Corner Society, 2nd edn. Chicago:
University of Chicago Press.
Widdicombe, S. (1993). ‘Autobiography and Change: Rhetoric and
Authenticity of “Gothic” Style’, in E. Burman and I. Parker (eds),
Discourse Analytic Research: Readings and Repertoires of Text.
London: Routledge.
Wikipedia (n.d., a). ‘List of Hoaxes on Wikipedia’,
https://en.wikipedia.org/wiki/Wikipedia:List_of_hoaxes_on_Wikip
edia (accessed 20 March 2021).
Wikipedia (n.d., b). ‘Wikipedia: Size Comparisons’,
https://en.wikipedia.org/wiki/Wikipedia:Size_comparisons
(accessed 20 March 2020).
Wilkinson, J. (2009). ‘Staff and Student Perceptions of Plagiarism
and Cheating’, International Journal of Teaching and Learning in
Higher Education, 20: 98–105.
Wilkinson, S. (1998). ‘Focus Groups in Feminist Research: Power,
Interaction, and the Co-Production of Meaning’, Women’s Studies
International Forum, 21: 111–25.
Wilkinson, S. (1999). ‘Focus Group Methodology: A Review’,
International Journal of Social Research Methodology, 1: 181–203.
Wilkinson, S., and Kitzinger, C. (2008). ‘Conversation Analysis’, in C.
Willig and W. Stainton-Rogers (eds), SAGE Handbook of Qualitative
Research in Psychology. London: Sage.
Williams, H. M., Topping, A., Coomarasamy, A., and Jones, L. L.
(2020). ‘Men and Miscarriage: A Systematic Review and Thematic
Synthesis’, Qualitative Health Research, 30(1): 133–45.
Williams, M. D. (2000). ‘Interpretivism and Generalisation’,
Sociology, 34: 209–24.
Williams, M. D., Sloan, L., Cheung, S. Y., Sutton, C., Stevens, S., and
Runham, L. (2016). ‘Can’t Count or Won’t Count? Embedding
Quantitative Methods in Substantive Sociology Curricula: A Quasi-
experiment’, Sociology, 50(3): 435–52, doi: 10.1177/0038038515587652.
Williams, M. L. (2016). ‘Guardians Upon High: An Application of
Routine Activities Theory to Online Identity Theft in Europe at The
Country and Individual Level’, British Journal of Criminology,
56(1): 21–48, doi: 10.1093/bjc/azv011.
Williams, M. L., and Burnap, P. (2016). ‘Cyberhate on Social Media
in the Aftermath of Woolwich: A Case Study in Computational
Criminology and Big Data’, British Journal of Criminology, 56(2):
211–38, doi: 10.1093/bjc/azv059.
Williams, M. L., Burnap, P., and Sloan, L. (2017a). ‘Crime Sensing
with Big Data: The Affordances and Limitations of Using Open
Source Communications to Estimate Crime Patterns’, British
Journal of Criminology, 57(2): 320–40, doi: 10.1093/bjc/azw031.
Williams, M. L., Burnap, P., and Sloan, L. (2017b). ‘Towards an
Ethical Framework for Publishing Twitter Data in Social Research:
Taking into Account Users’ Views, Online Context and Algorithmic
Estimation’, Sociology, 51(6): 1149–68, doi: 10.1177/0038038517708140.
Williams, S., Clausen, M. G., Robertson, A., Peacock, S., and
McPherson, K. (2012). ‘Methodological Reflections on the Use of
Asynchronous Online Focus Groups in Health Research’,
International Journal of Qualitative Methods, 11(4): 368–83.
Willis, R. (2018). ‘Observations Online: Finding the Ethical
Boundaries of Facebook Research’, Research Ethics, 15(1): 1–17.
Wilson, J. Q., and Kelling, G. L. (1982). ‘The Police and
Neighbourhood Safety: Broken Windows’, Atlantic Monthly, 127:
29–38.
Wilson, J. Z. (2008). Prison: Cultural Memory and Dark Tourism.
New York: Peter Lang.
Windsong, E. A. (2016). ‘Incorporating Intersectionality into
Research Design: An Example using Qualitative Interviews’,
International Journal of Social Research Methodology, 21(2): 135–
47, doi: 10.1080/13645579.2016.1268361.
Wingfield, A. H. (2009). ‘Racializing the Glass Escalator:
Reconsidering Men’s Experiences with Women’s Work’, Gender &
Society, 23(1): 5–26.
Winkle-Wagner, R., Kelly, B. T., Luedke, C. L., and Reavis, T. B.
(2019). ‘Authentically Me: Examining Expectations that are Placed
upon Black Women in College’, American Educational Research
Journal, 56(2): 407–43.
Winlow, S., Hobbs, D., Lister, S., and Hadfield, P. (2001). ‘Get Ready
to Duck: Bouncers and the Realities of Ethnographic Research on
Violent Groups’, British Journal of Criminology, 41: 536–48.
Wolcott, H. F. (1990a). Writing up Qualitative Research. Newbury
Park, CA: Sage.
Wolcott, H. F. (1990b). ‘Making a Study “More Ethnographic”’,
Journal of Contemporary Ethnography, 19: 44–72.
Wolf, D. R. (1991). ‘High Risk Methodology: Reflections on Leaving
an Outlaw Society’, in W. B. Shaffir and R. A. Stebbins (eds),
Experiencing Fieldwork: An Inside View of Qualitative Research.
Newbury Park, CA: Sage.
Wolfinger, N. H. (2002). ‘On Writing Field Notes: Collection
Strategies and Background Expectancies’, Qualitative Research, 2:
85–95.
Wolkowitz, C. (2012). ‘Flesh and Stone Revisited: The Body Work
Landscape of South Florida’, Sociological Research Online, 17,
www.socresonline.org.uk/17/2/26.html (accessed 1 March 2021).
Wood, K., Patterson, C., Katikireddi, S. V., and Hilton, S. (2014).
‘Harms to “Others” from Alcohol Consumption in the Minimum Unit
Pricing Policy Debate: A Qualitative Content Analysis of UK
Newspapers (2005–12)’, Addiction, 109: 578–84.
Wood, R. T., and Williams, R. J. (2007). ‘“How Much Money Do You
Spend on Gambling?” The Comparative Validity of Question
Wordings to Assess Gambling Expenditure’, International Journal
of Social Research Methodology, 10: 63–77.
Woodfield, K. (ed.) (2018). The Ethics of Online Research, vol. 2.
Advances in Research Ethics and Integrity. Bingley, UK: Emerald.
Woodfield, K., and Iphofen, R. (2018). ‘Introduction to Volume 2:
The Ethics of Online Research’, in K. Woodfield (ed.), The Ethics of
Online Research. Bingley, UK: Emerald, 1–12.
Woods, M., Macklin, R., and Lewis, G. K. (2016). ‘Researcher
Reflexivity: Exploring the Impacts of CAQDAS Use’, International
Journal of Social Research Methodology, 19(4): 385–403.
Woodthorpe, K. (2011). ‘Researching Death: Methodological
Reflections on the Management of Critical Distance’, International
Journal of Social Research Methodology, 14(2): 99–109.
Wright, C., Nyberg, D., and Grant, D. (2012). ‘“Hippies on the Third
Floor”: Climate Change, Narrative Identity and the Micro-Politics of
Corporate Environmentalism’, Organization Studies, 33: 1451–75.
Wright, C. J., Darko, N., Standen, P. J., and Patel, T. G. (2010).
‘Visual Research Methods: Using Cameras to Empower Socially
Excluded Youth’, Sociology, 44: 541–58.
Wu, M.-Y., and Pearce, P. L. (2014). ‘Appraising Netnography:
Towards Insights about new Markets in the Digital Tourist Era’,
Current Issues in Tourism, 17: 463–74.
Yang, K., and Banamah, A. (2013). ‘Quota Sampling as an Alternative
to Probability Sampling? An Experimental Study’, Sociological
Research Online, 19, www.socresonline.org.uk/19/1/29.html
(accessed 1 March 2021).
Yeager, D. S., Krosnick, J. A., Chang, L., Javitz, H. S., Levendusky, M.
S., Simpser, A., and Wang, R. (2011). ‘Comparing the Accuracy of
RDD Telephone Surveys and Internet Surveys with Probability and
Non-Probability Samples’, Public Opinion Quarterly, 75: 709–47.
Yin, R. K. (2009). Case Study Research: Design and Methods, 4th
edn. Los Angeles: Sage.
Yin, R. K. (2017). Case Study Research and Applications: Design
and Methods, 6th edn. Los Angeles: Sage.
Young, N., and Dugas, E. (2011). ‘Representations of Climate Change
in Canadian National Print Media: The Banalization of Global
Warming’, Canadian Review of Sociology, 48: 1–22.
Zempi, I. (2017). ‘Researching Victimisation using Auto-
ethnography: Wearing the Muslim Veil in Public’, Methodological
Innovations, 10(1): 2059799117720617.
Zhang, B., Mildenberger, M., Howe, P., Marlon, J., Rosenthal, S., and
Leiserowitz, A. (2020). ‘Quota Sampling using Facebook
Advertisements’, Political Science Research and Methods, 8(3):
558–64.
Zhang, C., and Conrad, F. (2018). ‘Intervening to Reduce Satisficing
Behaviors in Web Surveys: Evidence from Two Experiments on How
it Works’, Social Science Computer Review, 36(1): 57–81.
Zhang, X., Kuchinke, L., Woud, M., Velten, J., and Margraf, J.
(2017). ‘Survey Method Matters: Online/Offline Questionnaires and
Face-to-Face or Telephone Interviews Differ’, Computers in Human
Behavior, 71: 172–80.
Zhang, Y., and Shaw, J. D. (2012). ‘Publishing in AMJ—Part 5:
Crafting the Methods and Results’, Academy of Management
Journal, 55: 8–12.
Zickar, M. J., and Carter, N. T. (2010). ‘Reconnecting with the Spirit
of Workplace Ethnography: A Historical Review’, Organizational
Research Methods, 13: 304–19.
Znaniecki, F. (1934). The Method of Sociology. New York: Farrar &
Rinehart.
ZuWallack, R. (2009). ‘Piloting Data Collection via Cell Phones:
Results, Experiences, and Lessons Learned’, Field Methods, 21: 388–
406.
Name Index
Abawi, O. 560
Abbas, A. 377, 383
Akard, T. 185
Akerlof, K. 237
Åkerström, M. 515
Alatinga, K. A. 569
Allison, G. T. 59
Altheide, D. L. 407, 516–19
Alvesson, M. 495
Amrhein, V. 340
Anderson, A. 241
Anderson, B. 229
Anderson, E. 404
Anderson, L. N. 288
Antonucci, L. 64
Aresi, G. 569
Armstrong, L. 499–500
Asch, S. E. 473
Askins, K. 59
Atkinson, P. 17, 59, 388, 392, 399, 405–6, 514, 532, 542, 543, 549
Atkinson, R. 445–6
Atrey, S. 30, 31
Attride-Stirling, J. 541
Badenes-Ribera, L. 304
Bahr, H. M. 61
Bampton, R. 439
Banks, M. 413
Bansak, K. 240–1
Barber, K. 410
Baumgartner, J. C. 186
Bazeley, P. 539
Bell, C. 394
Bengtsson, M. 52
Beninger, K. 119–20
Bernhard, L. 273
Bharti, S. 311
Bhaskar, R. 24
Bickerstaff, K. 561
Birkelund, G. 63
Birt, L. 364
Bisdee, D. 54
Blaikie, N. 148
Blauner, R. 416
Blommaert, L. 48, 50
Bock, A. 289
Bock, J. J. 59
Boepple, L. 286–7
Bogdan, R. 26
Bonner, P. 397
Booker, C. 75
Boole, G. 95
Borkowska, M. 57
Bosnjak, M. 186
boyd, d. 113
Braekman, E. 223–4
Brandtzaeg, P. 311
Bregoli, I. 568
Brewster, Z. W. 146
Britt, V. G. 551
Britton, J. 354
Brooks, R. 511
Brown, A. D. 510
Brown, D. 104
Brüggen, E. 472
Bruner, E. 351t
Bryant, A. 526
Bryman, A. 59, 65t, 66, 161, 324, 338, 339, 351, 365, 373, 374, 379,
383, 384, 386, 413, 414, 433–5, 457, 458–9, 461, 463, 529, 535, 536t,
556, 557, 560, 562, 564, 571, 572, 598
Buckle, J. L. 133
Burger, J. M. 111
Burrows, R. 135
Butler, A. E. 382
Butler, T. 388
Cahill, M. 546
Calder, B. J. 456
Callaghan, G. 59
Carson, A. 544
Carter, N. T. 420
Carter-Harris, L. 185
Cassidy, R. 394
Catterall, M. 550
Cavendish, R. 59
Charlton, T. 49
Charmaz, K. 381, 431, 526, 527, 528, 528t, 529, 532, 533, 537
Chatzitheochari, S. 230
Chin, M. G. 158, 252
Cicourel, A. V. 53, 159, 160
Clairborn, W. L. 47
Clark, A. 436–7
Clark, C. 491–2
Clayman, S. 478
Clifford, J. 351t
Cloward, R. A. 18
Coffey, A. 514, 532, 542, 549
Collins, K. M. T. 386
Collins, P. 505
Connelly, R. 566
Conrad, F. G. 185
Cook, T. D. 45, 46
Cortazzi, M. 544
Corti, L. 229, 232
Crandall, R. 113
Cravens Pickens, J. D. 527
Crawford, S. D. 186
Crenshaw, K. 29, 30
Creswell, J. W. 556, 567, 598–9
Crist, E. 570
Crompton, R. 63
Crook, C. 230
Curasi, C. F. 439–40, 470
Curtice, J. 75
Cyr, J. 453, 465
Dacin, M. T. 357
Dai, N. T. 386
Dale, A. 294
Datta, J. 539
Davidson, J. 548
Davis, K. 273, 287
Davis, S. 574
Deacon, D. 59, 66, 457, 458–9, 461, 463
Deakin, H. 440
de Bruijne, M. 217
de Certeau, M. 359
Delamont, S. 266, 543
De Leeuw, E. D. 246
Demasi, M. A. 489–90
de Rada, V. 224
DeVault, M. L. 447
De Vaus, D. 142, 150, 157
Diersch, N. 200
Dillman, D. A. 215, 217, 249
Dingwall, R. 111
Ditton, J. 59, 394, 395, 398, 403
Domínguez, J. 224
Dugas, E. 274
Durkheim, É. 25, 33, 34, 596
Dwyer, S. C. 133
Eaton, M. A. 364
Edmiston, D. 436
Elford, J. 440
Elliott, H. 229
Ellis, C. 410
Emerson, R. M. 421
Emmel, N. 436–7
Erel, U. 352
Eschleman, K. J. 154
Evans, A. 440
Fairclough, N. 493
Faraday, A. 445
Farkas, J. 517
Faulk, N. 570
Fenton, N. 59, 66, 277, 457, 458–9, 461, 463, 559, 572–3
Ferguson, H. 438
Festinger, L. 60, 110, 113, 118, 125
Fetters, M. D. 556
Fidell, L. 339
Field, A. 338
Fielding, N. G. 112, 548
Finch, J. 242–3
Fink, K. 506–7
Finn, M. 436–7
Fish, R. 419
Fisher, M. J. 351t
Flanders, N. 259
Fletcher, J. 112
Flick, U. 350
Flyvbjerg, B. 61
Foster, L. 142, 170–1, 178, 211, 318, 330, 340, 342, 343
Frean, A. 49
Fricker, R. 185
Fulton, J. 225
Gallaher, C. M. 32–3
Gambetta, D. 404
Gans, H. J. 59, 112
Garcia, A. C. 411
Garfinkel, H. 477
Garthwaite, K. 409, 410
Geer, B. 447, 449
Geertz, C. 59, 351t, 355
Gicevic, S. 273
Gilbert, G. N. 484, 488
Giles, D. 112, 118, 119, 125, 411
Gill, R. 486, 487
Gill, V. T. 478
Gioia, D. A. 541
Giordano, C. 229
Giulianotti, R. 402
Glaser, B. G. 21, 380, 526, 527, 531, 533
Glucksmann, M. 394
Goetz, J. P. 363
Goffman, A. 131–2, 409
Goffman, E. 289, 291
Gottdiener, M. 519
Goulding, C. 531
Gouldner, A. 131
Grant, D. 441
Grassian, D. T. 559
Grazian, D. 443–4
Greaves, F. 277, 311, 513
Green, J. 539
Greenland, S. 180
Grogan, S. 540, 558
Gross, G. 447
Groves, R. M. 127
Gu, Y. 559
Guba, E. 42, 363, 364, 365, 366, 369
Guégen, N. 264
Guest, G. 387, 455, 456, 460
Guittar, N. A. 426
Gunderman, R. 126
Gunn, E. 275
Guschwan, M. 60, 396, 399, 408, 409, 421
Guthrie, L. F. 65t
Gwet, K. 155
Halford, S. 129
Hall, W. S. 65t
Hallett, R. E. 410
Hamill, H. 404
Hamilton, D. 266
Hammersley, M. 366, 368, 369, 374, 388, 392, 399, 420–1
Hand, M. 130
Hanrahan, K. J. 438
Hantrais, L. 62
Haraway, D. 132
Harden, A. 547
Hardie-Bick, J. 397
Hardy, C. 492
Hardy, M. 373, 374
Hare, R. D. 545, 546
Hauken, M. A. 599–602
Hay, I. 116
Healy, K. 330
Heath, S. 196
Heneghan, M. 178
Henwood, K. 436–7
Hepburn, A. 485
Herbell, K. 185
Hitchcock, A. 507
Hobbs, D. 401, 402
Hochschild, A. R. 18, 362
Hoefnagels, M. 338
Hofmans, J. 232
Holbrook, B. 549
Holdaway, S. 59, 112, 402
Holsti, O. R. 271, 272
Homan, R. 112, 119
Hughes, G. 133–4
Humphreys, L. 110, 113, 117, 118, 289, 310, 311, 502–3
Humphreys, M. 116, 401, 408
Husserl, E. 25, 354
Hutchby, I. 481
Iacono, V. L. 440
Ibrahim, Q. 177
Iphofen, R. 129–30
Irvine, A. 194, 438
Israel, M. 116
Jack, I. 505–6
Jackson, E. 63
Jackson, K. 550
Jackson, W. 409
Jacob, C. 264
Jacobs, M. 463
Jacobson, L. 43–4, 45, 46–7, 51, 110, 113, 125
Jæger, M. 207
James, N. 440
Jamieson, J. 561
Janmaat, J. 295–6
Jefferson, G. 480
Jenkins, N. 436
Jennings, W. G. 273, 587
Jerolmack, C. 449
Jessee, S. 246
John, I. D. 273
Johnston, M. 294
Jones, I. R. 383, 430
Judge, M. 158
Kahan, D. 285
Kalton, G. 179
Kaminska, O. 195
Kandola, B. 458–9, 463
Kane, L. 126
Kapsis, R. E. 507
Kara, H. 352, 597
Kärreman, D. 495
Katz, J. 525, 526
Keating, A. 295–6
Kelling, G. L. 20, 86
Kellogg, K. C. 64
Kelly, Y. 75
Kirk, J. 42
Kitzinger, C. 483
Kitzinger, J. 465, 466, 467
Kivits, J. 440
Knepper, P. 499
Kolb, D. 395
Kowalski, A. 357
Kozinets, R. V. 411–13
Kramer, A. D. I. 118
Krause, M. 357
Kress, G. 352
Kriesi, H. 273
Krippendorff, K. 271, 272
Lafleur, J. M. 59
Lamont, M. 449
LaPiere, R. T. 256, 262
Laub, J. H. 445
Laurent, D. 185
Laurie, H. 55
Lawrence, T. B. 409
Lazarsfeld, P. 152
Le Blanc, A. M. 550
LeCompte, M. D. 363
Ledford, C. J. 288
Lewis, O. 59
Li, J. 561
Lieberman, J. D. 310
Liebling, A. 131, 132
Liebow, E. 60
Light, P. 230
Lijadi, A. A. 468, 469, 472
Likert, R. 152
Lincoln, Y. S. 42, 350, 351–2, 351t, 363, 364, 365–6, 369
Lindesmith, A. R. 526
Linvill, D. L. 528–9
Livingstone, S. 237
Lloyd, A. 118
Lofland, J. 354, 355, 405, 428, 442
Lofland, L. 354, 355, 405, 428, 442
Lombroso, C. 499
Longhi, S. 55
Longhurst, R. 367
Lumsden, K. 362
Lynch, M. 368
Lynd, H. M. 60, 61
Lynd, R. S. 60, 61
Lynn, M. 146
Lynn, P. 57, 195
Macgilchrist, F. 494
MacInnes, J. 142, 338
MacKay, J. 272
Maclaran, P. 550
Macnaghten, P. 463
Madriz, M. 474
Maitlis, S. 409
Malinowski, B. 229
Malsch, B. 386
Malterud, K. 386
Mangione, T. W. 192
Mann, C. 439, 467, 468
Marcus, G. E. 351t
Markwell, K. 413
Marsh, C. 51, 566
Marshall, G. 174, 280
Martin, P. 262, 263
Mason, J. 363, 436
Mason, M. 386
Mason, W. 354–5, 370, 371, 383, 556
Masterman, M. 564
Mastro, D. 273, 277, 279
Matthews, K. L. 469
Mattley, C. 395
Mattsson, C. 493–4
Mawhinney, L. 385
Mayhew, P. 200
Mazmanian, M. 427
McCabe, S. E. 223
McCaig, C. 310
McCall, L. 30, 31
McCall, M. J. 265, 267
McCrudden, M. T. 558, 561, 569
McDonald, P. 74
McKernan, B. 542
Mead, G. H. 26
Mead, M. 60, 61
Mellahi, K. 215
Merleau-Ponty, M. 160
Merton, R. K. 18, 19, 133
Mescoli, E. 59
Messing, J. 157
Meterko, M. 182–3
Midgley, C. 49
Mies, M. 34, 130
Miles, M. B. 524
Milgram, S. 110, 111, 113, 119, 125
Miller, D. 409
Miller, M. L. 42
Millward, P. 388
Minson, J. 207
Mirza, H. S. 35
Mishler, E. G. 542
Miske, O. 240
Moody, J. 330
Mooney, A. 360
Morgan, D. L. 454, 457, 459, 461, 462, 463, 467
Morgan, G. 564
Morgan, M. 473–4
Morgan, R. 134
Morley, D. 454
Morris, J. S. 186
Morris, N. 527
Moser, C. A. 179
Mullan, K. 230
Müller, K. 503–4
Munday, J. 466
Munir, K. 357
Musa, I. 240
Naper, A. 275
Natow, R. S. 507
Nelson, L. K. 533
Neuendorf, K. 272
Neuman, W. 142
Nichols, K. 354
Nilsen, A. 59
Noblit, G. W. 545, 546
Nosowska, G. 264–5
Noy, C. 384
Noyes, J. 545
Obama, B. 275
O’Boyle, T. 27
O’Cathain, A. 370, 571, 598
O’Connell, R. 360
Olmsted, M. G. 472
Olson, K. 214
O’Neill, M. 352
Onwuegbuzie, A. J. 386, 387
Oppenheim, A. N. 192
O’Reilly, K. A. 20–1, 22, 59, 357, 398
O’Reilly, M. 381–2, 487, 490–1
Orlikowski, W. J. 427
Ostrov, J. 265
Ozgur, C. 316
Pahl, J. 196
Palys, T. 379
Parker, M. 435–6
Parker, N. 381–2
Pasquandrea, S. 499
Patton, M. Q. 379
Paulus, T. M. 483, 551
Pearce, P. L. 413
Pettigrew, A. 355
Pheko, M. M. 409
Phillips, N. 492
Phoenix, A. 30
Phoenix, C. 543
Pickering, L. 597
Pinch, T. 491–2
Plummer, K. 445
Polsky, N. 402
Poortinga, W. 560
Preisendörfer, P. 212
Psathas, G. 478
Puigvert, L. 556
Punch, M. 112, 126, 402
Rafaeli, A. 333
Raskind, I. G. 370–1
Ravn, S. 461
Rayburn, R. L. 426
Reason, P. 352
Reed, M. 495
Rees, R. 93
Reiner, R. 134
Rhodes, R. A. 465–6
Rinke, C. R. 385
Robson, G. 388
Röder, A. 19, 20, 21, 36, 62, 297
Romney, M. 275
Ronaldo, C. 509–10
Rooney, W. 509
Rose, G. 413
Rose, M. 63
Rosenfeld, M. 240
Rosenhan, D. L. 110, 125
Rothe, H. 219
Roulston, K. 433
Rowan, J. 352
Rowan, K. 479–80, 481
Roy, D. 40
Rubery, J. 63
Rubinstein, J. 402
Rübsamen, N. 224
Ryan, G. W. 539
Sacker, A. 75
Salancik, G. R. 263
Sammons, P. 574
Sampson, H. 134
Sampson, R. J. 445
Sanders, C. R. 404
Sanjek, R. 405
Sarsby, J. 392
Schaeffer, N. C. 192
Schegloff, E. A. 494
Schepers, A. 275–6, 289, 291
Scott, S. 397
Sebo, P. 186
Seeman, M. 154
Seitz, S. 440
Sempik, J. 383
Shadish, W. R. 47
Shah, A. P. 559
Shapiro, S. 502–3
Shaw, C. R. 59
Shaw, J. D. 587
Sheller, M. 437
Simon, H. 196
Singh, S. 134
Sink, A. 273, 277, 279
Sinyor, M. 167
Skeggs, B. 355, 400–1, 419, 420
Skoglund, R. I. 457
Slapin, J. B. 273
Sleijpen, M. 546
Smith, T. W. 213
Smithson, J. 462
Smyth, J. D. 214, 224, 249
Snee, H. 511–12, 516
Spanish, M. T. 454, 459, 462, 463
Stacey, J. 419–20
Stacey, M. 59
Stake, R. E. 59, 61
Steinbach, R. 539
Stewart, F. 439, 467, 468
Stoddart, M. 274
Strauss, A. L. 21, 29, 380–1, 526, 527, 528, 528t, 529, 532, 533
Stroobant, J. 274
Sturges, J. E. 438
Swidler, A. 449
Synnot, A. 469
Tabachnick, B. 339
Tarr, J. 14
Taylor, J. 490–1
Taylor, S. J. 26
Teddlie, C. 379, 381, 556
Thomas, D. R. 541
Thomas, G. 296
Thomas, J. 547
Thomas, M. 134
Thompson, C. 506, 507
Thompson, J. K. 286–7
Thompson, P. 59
Thornberg, R. 528, 531
Tiernan, A. 465–6
Tinati, R. 310, 313, 593
Todd, A. 287
Tracey, P. 357
Tracy, S. J. 366
Tranfield, D. 90, 93
Trottier, D. 518
Trouille, D. 356
Trow, M. 449
Tufte, E. 330–1
Tuinman, M. 243
Turnbull, C. 34
Turner, V. W. 351t
Twine, F. W. 437
Twine, R. 426
Underhill, C. 472
Underwood, M. 412
Upton, A. 398, 399
Urry, J. 437
Vancea, M. 149
van Dijk, T. A. 492
Venkatesh, S. 409
Vincent, J. 509
Voas, D. 240
Vogl, S. 438
Vogt, W. 142
Wagg, S. 509–10
Wakefield, K. 440
Walby, S. 161
Walker, J. 103
Walters, R. 455
Walther, E. 200
Wansink, B. 273, 290
Warren, C. A. B. 386
Warren, L. 415, 417, 417f
Warren, P. L. 528–9
Warwick, D. P. 395
Wassenaar, D. 134
Wästerfors, D. 515
Waterhouse, J. 74
Watson, A. 362
Watson, T. 116, 401, 408
Watts, L. 407
Westmarland, L. 116
Wetherell, M. 484, 487, 488, 491, 494
Wetzel, C. 388
Whitaker, C. 185
White, P. 75
Wiggins, R. D. 440
Wijnant, A. 217
Wilkinson, S. 465, 483
Willems, P. 472
Williams, H. M. 547
Williams, J. J. 569
Willis, R. 512
Wilson, A. 158
Wilson, J. Q. 20, 86
Wilson, J. Z. 501
Wingfield, A. H. 30
Winkle-Wagner, R. 446
Winlow, S. 118
Wood, K. 517
Woodthorpe, K. 403
Wooffitt, R. 481
WrinklerPrins, A. M. G. A. 32–3
Young, N. 274
Yu, F. 379, 381
Zauszniewski, J. 185
Zeitlyn, D. 413
Zempi, I. 59
Zhang, B. 173
Zhang, X. 213
Zhang, Y. 587
Zickar, M. J. 420
Znaniecki, F. 525
ZuWallack, R. 195
Subject Index
access to organizations
closed settings 79, 395–6, 397
sampling considerations 79
Alexa 442
alternative hypothesis 148, 341
definition 525
theory 361
anonymity 115, 116, 119, 124–5, 129, 130
ethnography 408
anthropology 355
attrition 57
behaviour
context 267–8
vs meaning 373
bias
definition 311
ethnography 420
biographical research
biographies 499–501
Cramér’s V 337
methods 333t
Pearson’s r 335–6, 335t, 336f
phi 337
blogs
coding 536
content analysis 277
causality and 64
covert research 59
inductive research 62
as intensive analysis 62
location 59–60
longitudinal cases 61
reliability 61
replicability 61
representative cases 60
and research strategy 65t
revelatory cases 60
sampling 11, 377–8
types of case 60–1
unique cases 60
unit of analysis 59–60
validity 61–2
case-to-case transfer 387
successionist understanding of 64
causal mechanisms 64
in newspapers 96
classical ethnography 409, 410
closed-ended questions
processing 238
self-completion questionnaires 211, 212, 213, 217, 218
coding 12
axial 528, 528t, 529, 540
identifying 9
indicators 151–2
and inductive theory 9
Likert scale 151
measurement 150–4
multiple-indicator measures of see multiple-indicator measures
qualitative research 361–2, 373
sampling 167
definition 271–2
digital news reports, counting words in 277
disadvantages 291
dispositions 279
transparency 290
units of analysis 276–9
as unobtrusive method 290–1
of visual materials 288–9, 290
of behaviour 267–8
coding 536
conversation analysis 478, 479, 483
critical discourse analysis 477, 492
interviews 193–6
participant observation 448
photographs 505
sampling 377, 381f, 388
assumptions 478–9
comparison with discourse analysis 484, 485–6, 494–5
and context 478, 479, 483
definition 478
nature of 477–8
notational symbols 480
online 481, 483
preference organization 481, 482
principles of 478–9
recording 441
reflexivity 477
repair mechanisms 481
archives 521
core categories, grounded theory 529
correlation analysis 150, 154
correlation coefficient 343–4
Cramér’s V 342
bivariate analysis 337
correlation and statistical significance 344
loss of 402
crime statistics
convergent validity 158
limitations of 307–9
definition 492
ethnography 494
evaluating 495
interpreting digital media 517
epistemology 24
generative mechanisms and 64
cross-cultural research 62–3, 64
official statistics 305
features 50–3
generalizability of findings 145
and internal validity 53
more than one case, data collection 51
nomothetic nature of 60
non-manipulable variables 53
patterns of association 51–2
qualitative research within 53–4
quantitative data 51
quantitative research 51, 53, 54, 148
relationships between variables 333
reliability 53
coding 535
digital, as hybrid 567
documents as sources of see documents
effective presentation 330
definition 13t
ethnography 371
methods 11–12
participant observation 12
questionnaires 11
research methods and 11–12
sampling cases 11
semi-structured interviews 12
structured interviews 11
time considerations 73
data-processing error 188, 188f
definition 19
ethnography 409
principle of 23
process of deduction 20f
testing hypotheses 20
descriptive statistics
diagrams
diary
advantages and disadvantages of use 231
research diary 80
‘researcher-driven’ 229
definition 484
discourse as form of action 485, 486–7
documents 516
empiricism 488
ethnography 494
evaluating 492–5
extreme case formulation 490–1
nature of 484–7
themes 486–7
theory 361
discussion forums, online 287, 410–13, 510–12
Disney project
coded text 536t, 539f
qualitative interview 433–5
semiotics 519
definition 498
digital media 510–14
discovering 502
meaning 498
narrative analysis 542
nature of 498
official 506–8
semiotics 519–20
virtual 510–14
‘don’t know’ option 246
Douglas, J. D. 25
Dragon 442
emotion 402–3
emotional labour 362
empirical realism 24
empiricism
definition 19
discourse analysis 488
endnotes 100–2
enhancement, mixed methods research 561–2
epistemology
definition 7
discourse analysis 484–5, 494
ethnography 494
interpretivism 24–7
mixed methods research 563–5
naturalism 31, 32
positivism 23–4
epoche 33
errors
coding as cause 193
non-sampling 168
question order 199
access to organizations 74
anonymity see anonymity
children 128
choice of research topic, and 74
universalism 111–12
visual data 130
ethnography 256
confidentiality 116
definition 393
feminist 419–20
field notes see field notes
forms 394t
nature of 392
official documents 506
online 512
overt vs covert 394–5
passive 402–3
postmodernism 353
Eurobarometer 62
translations 214
evaluation research 49
event sampling 232
references 99
time management 71
logic of comparison 50
significance 49–50
experimental realism 47
experimental writing 351t
external validity
definition 42
feminism
conscious partiality 34
ethnography 419–20
fixed-choice questions 159, 192, 193, 204, 235, 237, 238, 245, 250
flexibility
focus groups
anonymity 470, 472
CAQDAS 550
collective identity in 466
conducting 456–65
consensus within 465–7
definition 453
disagreements within 465–7
naturalism 454
nature of 453–6
origins 454
uses 454–6
follow-up questions 429–30, 542
footnotes 100–2
forced-choice questions 239, 247–8
F statistic 344
functionalism, mixed methods research 565
funding of research 8
ethnography 420
politics 133–4
Gambling Commission 156
Gantt charts 71, 72f, 72, 73
gender
dichotomous variable vs social construction 323
gender studies 30
limits to 187
moderatum generalizations 370, 387
probability sampling 377
generative mechanisms 64
concepts 529
constant comparison 518, 527, 530
criticisms 532–3
interpretivism 533
iterative strategy 20–1, 22, 360
memos 527, 533
museums study 531
analysing 467
complementary and argumentative interactions 465–7
symbolic interactionism 454
harm
hermeneutics 24–5, 26
double hermeneutic 26, 27
heterogeneity of population 183, 387
idiographic approach 60
illegal activity, ethnography 402
and causality 41
experiments 43
quantitative research 144; see also dependent variables
and concepts 9, 12
cross-sectional design
and data collection 12
and deductive theory 19–23, 22f
definition 20
definition 41
experimental design 43, 44–6, 47, 50
longitudinal design 57
qualitative research 363
quality criteria 41
quasi-experiments 49
and research strategy 42
threats to 45; see also external validity
interpretivism 7, 27
embedded methods argument 565
epistemology 24–7
grounded theory 533
intertextuality 514
critical discourse analysis 492
interval/ratio variables 41, 324t, 325f
diagrams 327
dispersion measures 331
frequency tables 326
measures of central tendency 331
interview guide
focus groups 463
qualitative interviews 428–31, 428f
semi-structured interviews 425–6, 433–6
interviews
aims, in social research 191
audio-recording digitally 441, 444
cognitive 250–1
comparison of different methods 225, 226–7, 227t, 468–72
computer-assisted see CAPI; CATI
context 193–6
diary 229
group 196, 453–4
labelling theory 18
laboratory experiments 43, 47, 48, 265
prompting 204–5
response sets 217, 220, 248
use in secondary data set 153
variables 324
line graphs 334–5
linguistic devices 490
listening skills 432
identifying concepts 9
practical considerations 35
quantitative vs qualitative research similarities 374
identifying concepts 9
importance of conducting 87
keywords 97
librarians 94–5
limitation of content 88
ongoing 88, 97
online databases 94–7
logic of comparison 50
longitudinal design 40, 43, 50, 54, 66
British Household Panel Survey (BHPS) 55
problems 57
qualitative interviews 448
reliability 54, 57, 155
replication 54, 57
research strategy and 58–9, 65t
secondary data analysis, and 295–6
Marxism 34
Marx Memorial Library 500
Masculinities, Identities and Risk: Transition in the Lives of Men as Fathers study 58
masculinity, inclusive 595–8
mass media
content analysis 11
documents 509–10
longitudinal design 57
qualitative research 363
measures of central tendency 331
measures of dispersion 331–2
median 331
memory problems 248
memos 527, 533, 534–5, 536
systematic review 90
meta-ethnography 93, 545–6
definition 90
validity 572
value of 562
writing up 590–1, 598–602
naive empiricism 19
narrative analysis 524, 541–4
biographical research 542
criticisms 543
documents 542
features 541–2
naturalism
definition 42
epistemology 31, 32
ethnography 399
Cramér’s V 337
diagrams 326–7
nomothetic approach 60
nodes 548
problems, potential 549
qualitative data analysis 524
covert 394–5
issues resistant to 448
participant see participant observation
observation schedule
content analysis 272
structured observation 256, 258–61, 260f, 262
Office for National Statistics (ONS) 202, 280, 300, 305, 309
official documents
deriving from private sources 506–8
deriving from state 506, 507
cross-sectional design 50
ecological fallacy 309, 310
indicators of concept 151
limitations 305–10
online communities
covert participant observation 411, 413
as data sources 510–12
discussion forums 287, 410–13, 510–12
ontology
constructionism 27, 28–9, 350
definition 7
advantages 235
coding 236, 237–8
compared with closed-ended questions 235–9; see also closed-ended questions
operationalism 161
operationalization 148
opportunistic sampling 379
data collection 12
definition 258, 392–3
nature of 392
online 411–12
practical considerations 36
vs qualitative interviews 446–9
photographs 501–6
visual objects 501–6
personal experience, and research interests 17, 74
phenomenalism 23
phenomenology 25–6
quantitative research 149
value-free research 33
phi
bivariate analysis 337
correlation and statistical significance 344
photo-elicitation technique 130, 414–15, 436–7
photographs 413–19
contextual meaning 505
field notes 405
plagiarism 103–4
accusation of in The Da Vinci Code 104
detection software 103
ethnography 409
paradigm argument 563–4
qualitative research 351t
postcolonialism 34
postexperimental enquiry 351t
postmodern ethnography 409, 410
postmodernism
CAQDAS 550
definition 353
qualitative research 352
definition 167
generalization 377
online surveys 184–5
qualitative research 377
qualities 174–5
quantitative research 144–5, 162, 171–6, 377
random numbers, generating 171, 172
sample size 181
proofreading 583
properties, grounded theory 529
proposal, research 78–9
pseudonyms 115, 124, 125
public ethnography 409
publishing findings 135
purposive sampling 177–8
common forms of 380–5
nature of 378–80
non-probability sampling 377, 378
qualitative research 377, 378
stratified 379
and theoretical sampling 380–2, 383
advantages 448
asking questions 429–31
conducting 431–44
email 439–40
ending 444
ethical issues 436, 443, 448
ethnography 449
evaluating 446–9
feminist research 447
flexibility requirement 426, 427, 433
interview guide 428–31, 428f
types 425–7
vignettes 436, 437 see also semi-structured interviews; unstructured interviews; video interviews
qualitative/quantitative divide 565–7
coding 237
compared with quantitative research 372–4, 373t; see also mixed methods research
concepts 361–2, 373
confirmability 364, 365, 366
critique 369–71
cross-sectional design, within 53–4
data collection 352–3, 359–60, 371
nature of 350–3
observation 256
online focus groups 467–72
triangulation 364
trustworthiness 363–6
unstructured interviews 357
validity 42, 362–5, 366, 368, 369
credibility 42
dependability 42
documents 498
quantitative research 42
relevance 42
reliability 40, 53
replication 40, 53
and research strategy 42
transferability 42, 363, 364–6
data collection 12
research appraisal criteria 12, 369
structured observation 260
approach to 317–18
bivariate 333–7
multivariate 338–9
measurement 144
meta-analysis 93
multiple-indicator measures 151–2
nature of 142–3
observation 256
operationalism 161
perceptions of 143
quality criteria 42
reliability and validity testing 161–2
replication 145, 146
research questions 74
themes 144–6
validity see validity
writing up 587, 590–4, 599
quasi-experiments 39, 43, 47–9, 66
quasi-quantification 566–7
queer methodologies 352
questionnaires
attitudes 240
beliefs 240–1
choosing the right types of question 241
closed-ended questions see closed-ended questions
piloting 250–1
pre-coded 192, 193
pre-testing 250–1
types 240–1
vignette 242–3
questions, qualitative interviews
ending 431
types 429–31
random sampling
simple 171
reactive effects
realism
epistemology 24, 28
record-keeping 80
numeric 100–2
reflexivity 14, 27
values 33–4, 35
visual ethnography 415
comparative research 63
cross-sectional design 53
definition 155
replication
case study design 61
comparative design 63
experiments 47
longitudinal design 54, 57
quality criteria 40
quantitative research 145, 146
research design 53
research questions 75
representative case 60
representativeness
of findings 263
of samples 11, 42, 62, 144–5, 167, 185, 214–15, 250, 388, 389
research design
case studies 59–62
choosing a 42–3
comparative 62–5
concept 39
cross-sectional 50–4
definition 39
experimental 43–50
generalization 39, 41
longitudinal 54–9
ResearchGate 96
research methods
concept 39
nature of 17
research preparation 79
research project, gym users study 318–20
data 321f
diagrams 327–8, 327f, 328f
Pearson’s r 335
phi 337
questionnaire 318–20, 319f
descriptive 75–6
ethnography 404
importance of 10–11
and literature review 8, 10–11, 75, 88
practical considerations 35
and puzzles 75
qualitative research 360
qualitative vs quantitative research similarities 374
quantitative research 74
replication 75
revising and developing 10–11
sources 74–5
research strategy
internal validity 42
response rates
safety issues
analysis 184
basic terms and concepts 167–8
cross-sectional design 53
people 262
population sample 167
survey research 11
using more than one approach 388–9 see also sample size; sampling bias; sampling error; sampling frames
definition 168
sampling frames
adequacy of 168
definition 167
generic purposive sampling 383
online surveys 184, 185
definition 294
Demographic and Health Surveys Program 297
discourse analysis 485
meta-analysis 303–4
Multiple Indicator Cluster Surveys 297
reanalysis 297
sampling 168
self-completion questionnaires
additional data, inability to collect 214
administration 212
advantages 212–13
anonymity 226–7
data collection 11
disadvantages 213–15
indicators 151
instructions, clarity 218–21
Likert scale see Likert scale
Siri 442
observing 256–8
social media
social research
context 6–8
definition 4
epistemological approach 7
feminist influence on 34
interpretivist approach 7
‘messiness’ of 13–14
methods 4, 5
ontological approach 7, 31
politics in 7–8, 130–5
process 8, 13t
researcher training 5
scientific approach 7
strategies see social research strategies
user involvement 7
epistemology 7, 23–7
interpretivism 7
methods 17
ontology 7, 27–31
social science 4
Sociology 591
sociology, research methods 135
software
bibliographic 99
choosing 316
Spearman’s rho
SPSS 316
stigma 27
stratified purposive sampling 379
contexts 193–6
data collection 11
problems 206–8
prompting 204–5
vs qualitative interviews 193, 425
questions 191–2, 198–201
rapport 204
types 193
research design 39
survey research 166
advantages 265
challenges 265–8
conducting 258–63
cross-sectional design 50
data analysis 264–5
definition 258
strategies 261–2
successionism 64
supervisors 69–70, 77, 78, 81
confidentiality and anonymity 125
steps 165f
longitudinal research 56
mixed methods research 32, 33, 566
systematic review 84
conducting 90
definition 89–93
evaluation 93
example 91–2, 92t
limitations 93
meta-analysis 90
meta-ethnography 90
rapport 204
response rates 194
of theories 31
test–retest method 154–5
blogs 511–12
content analysis 277–9, 516–19
theoretical saturation
definition 381
empiricism 19
grand theory 18
grounded theory 360, 529 see also grounded theory
setting targets 70
timetables 71, 72f, 78, 79
transcribing interviews 80
writing and analysis 74
training, researcher 5
transcription 441–4
coding 534, 537
ethnography 405
focus groups 464–5, 468, 470, 472–3, 550
triangulation 306
documents 520
research designs 42
t-test 340
Twitter 20, 32, 85–6, 151, 296, 300, 310, 311–13, 513–14, 518, 528, 591–4
age concerns 128
data set 302t see also British Household Panel Survey (BHPS)
definition 325
median 331
mode 331
unobtrusive methods
content analysis 290–1
definition 306–7
official statistics as a form of 305, 306
unstructured interviews
content analysis 272
flexibility 426
qualitative interviews 426–8
transcripts 272
user involvement 7
validity
case study design 61–2
comparative research 64
concurrent 156–7
construct 157–8
convergent 158
criterion 156–7
definition 156
ecological see ecological validity
inferential 41
predictive 157
qualitative research 42, 362–5, 366, 368, 369
testing 161–2
value-free research 33
values 7, 8, 21, 28, 29, 31, 33–5, 130, 131, 145, 241
variables
coding 149, 279–80
confounding 338
intervening 338
research design 39
Verstehen 25, 26
rapport 204
photographs 413–19
realist and reflexive approaches 415
websites
content analysis 285–6, 510, 511
weighting 169
WhatsApp
ethical issues 128–9
acknowledgements 585–6
aim 601
appendices 590
context 592–3
signposting 581
starting early 578–9
structuring of writing 579, 584–90, 584f, 591–602
Zoom