
The SAGE Handbook of Qualitative Data Collection
The Concept of ‘Data’ in Digital Research

Author: Simon Lindgren


Book Title: The SAGE Handbook of Qualitative Data Collection
Chapter Title: "The Concept of ‘Data’ in Digital Research"
Pub. Date: 2018
Publishing Company: SAGE Publications Ltd
Print ISBN: 9781473952133
Online ISBN: 9781526416070
DOI: http://dx.doi.org/10.4135/9781526416070.n28
Print pages: 441-450
© 2018 SAGE Publications Ltd All Rights Reserved.

The Concept of ‘Data’ in Digital Research

Simon Lindgren

The emergence and development of the Internet and social media have changed the parameters for social interaction. This transformation also changes how we think about research methods, and about what constitutes data. Social and cultural research about the Internet and digital media is – due to its relative newness – a key area of methodological development. Our routine ways of going about research are rapidly transformed when we try to capture the fast-evolving patterns of sociality online and through digital tools.
Research on digital media is – still some years into the ‘information age’ – giving rise to new methods, as
well as new challenges and opportunities for analysing society and human behaviour (Sandvig and Hargittai,
2015).

Of course, with the Internet being such a big and ever-present part of today's societies, there is no one
way of defining what ‘digital research’ is. It could be any type of study, using any kind of existing and
established research method to say something about life in digital society. In the end, choosing a method
for research comes down to the many choices that are made in relation to the aims of the study, the type
of data to be analysed, personal preferences of the researcher, and so on. In this chapter, however, I will
discuss a few important aspects and concepts that come to the fore when approaching the notion of data
– and data collection – in relation to online settings. First, I will deal with the issue of so-called ‘big data',
and how the increased complexity of our social data environment more generally challenges the division
between ‘qualitative’ and ‘quantitative’ altogether. The tension between the availability of huge amounts of text online and the close-reading ethos of qualitative research is a topic that will be dealt with throughout the chapter. It is, furthermore, not an uncontroversial topic, given the long-standing debate between ‘interpretive’ and ‘positivist’ methods (see Maxwell, Chapter 2, this volume). Some qualitative researchers will argue that no matter how much ‘big’ data is available, close reading of smaller selections is always the way to go,
while others will be more inspired by the opportunities and move in the direction of more mixed approaches.

In relation to that latter strategy I will, second, discuss what it means to conceive of one's approach as a
methodological bricolage. Finally, I will underline the importance of maintaining a mindset of openness and
experimentation when approaching the forms and potentials of digital data.

THE CHALLENGE OF BIG DATA

In recent years, following the breakthrough of social media, there has been extensive publicity and discussion
around the idea and phenomenon of big data. This buzzword has been used to refer to not only the huge
amounts of data about people's preferences and behaviours that are now generated and collected online,
but also to an entire process of social transformation. The increased softwarisation of society, the prevalence
of algorithms, as well as the rise of calculated publics, have made many hope that the gigantic datasets thereby enabled will make the world a better place. In popular media, in data science, in relation to business
and global development, in fields like policing and security, politics, healthcare, education and agriculture, big
data has been seen as enabling completely new analyses and actions (Lupton, 2014). People from diverse
fields – computer science, economics, mathematics, political science, bio-informatics, sociology – have been
fascinated with the new possibilities and have started to hunt for data access, looking to get their hands on
the huge masses of information generated in the digital society by and about people's interactions (boyd and
Crawford, 2012).

Ever since the early days of computing, different forms of data have been generated and stored. But the recent development towards so-called big data has been said to be revolutionary. Aside from the data that people generate on social platforms, more and more objects in our everyday lives – ranging from televisions and cars to refrigerators and lamps – have become digitised, ‘smart', and connected to the Internet.
This development is related to the advent of the so-called Internet of Things, a paradigm for research and
development, drawing on ‘the pervasive presence around us of a variety of things or objects – such as
Radio-Frequency IDentification (RFID) tags, sensors, actuators, mobile phones, etc. – which, through unique
addressing schemes, are able to interact with each other’ (Atzori et al., 2010, p. 2787).

There are lots of opportunities today for monitoring people's social and natural environment. Anything, from
phone call logs and web browsing history, to details on location and body movements through embedded
GPS, compasses, gyroscopes and accelerometers, can be registered. Many apps, such as health trackers
or apps for ‘checking in’ at different restaurants and cafés, can be downloaded for free. This is, in many
cases, because the data generated by the users in turn is a product that can be sold by those who developed
the app. This development is related to an entire debate about surveillance in digital society (Andrejevic,
2007), where some have argued that big data is big as in ‘Big Brother'. Others have underlined how the new
technologies also enable ‘sousveillance’ – citizen users ‘watching from below'. Citizen technology researchers
Steve Mann and Joseph Ferenbok write:

New media has enabled a secondary gaze that moves along the power and veillance axis in different
directions than surveillance practices. Sousveillance acts as a balancing force in a mediated society.
Sousveillance does not exactly or necessarily counteract surveillance, but co-exists with surveillance within a
social system that then provides a kind of feedback loop for different forms of looking – potentially creating a
balancing force. (Mann and Ferenbok, 2013, p. 26)

So, there is a duality between threat and possibility here. Internet researchers Kate Crawford and danah boyd
note that with big data, ‘as with all socio-technical phenomena, the currents of hope and fear often obscure
the more nuanced and subtle shifts that are underway’ (boyd and Crawford, 2012, p. 664).

Big data has been hailed as revolutionary, and as something that will make it possible not only to build
more profitable businesses, but also to make society better altogether. There is a growing industry for harvesting
and selling social data for profit, and huge data storage centres are being built to deal with all of the data.
Companies in the field of social media and digital information, such as Facebook, Microsoft, and Google,
and online retailing companies like Amazon have been leading the way by developing ever new ways of
harnessing user data for targeted and customised product development and advertising.

The big data collected in this manner has been argued to offer much greater precision and much more accurate predictive powers than previous forms of data, and merging data from multiple databases is said to improve that precision and predictive power further. Furthermore, combined with the current capacity of digital technology for harvesting, storage and analysis, this type of data is also said to offer unprecedented opportunities to delve deeper into assessing human behaviours.

A COMPLEX DATA ENVIRONMENT

The emergence of big data is in fact just one of many transformations within our general data environment that affect the opportunities, as well as the challenges, of doing social research in digital society. For example, Kingsley Purdam, an expert in research methods, and his data scientist colleague Mark Elliot aptly point out that what is commonly talked about as ‘big’ data is in fact defined by several things other than just its large size: it registers things as they happen in real time, it offers new possibilities to combine and compare
datasets, and so on (Purdam and Elliot, 2015). However, even such characterisations are still not sufficient,
say Purdam and Elliot. This is because those definitions still seem to assume that data is ‘something we
have', when in fact ‘the reality and scale of the data transformation is that data is now something we are
becoming immersed and embedded in'.

The notion of a ‘data environment’ underlines that people today are at once generators of, and generated by, this new environment. ‘Instead of people being researched', Purdam and Elliot say, ‘they
are the research’ (Purdam and Elliot, 2015, p. 26). Their point, more concretely, is that new data types
have emerged – and are constantly emerging – that demand new flexible approaches. Doing digital social
research, therefore, often entails discovering and experimenting with challenges and possibilities of ever
new types and combinations of information. In trying to describe the constantly changing data environment,
Purdam and Elliot (2015, pp. 28–9) outline an eight-point typology of different data types based on how the
data in question has been generated:

1. Orthodox intentional data: Data collected and used with the respondent's explicit agreement. All so-called orthodox social science data (e.g. survey, focus group or interview data, and data collected via observation) would come into this category. New orthodox methods continue to be developed.
2. Participative intentional data: In this category data are collected through some interactive process. This includes some new data forms, such as crowd-sourced data […] and is a potential growth area.
3. Consequential data: Information that is collected as a necessary transaction that is secondary to some (other) interaction (e.g. administrative records, electronic health records, commercial transaction data and data from online game playing all come into this category).
4. Self-published data: Data deliberately self-recorded and published that can potentially be used for social science research either with or without explicit permission, given the information has been made public (e.g. long-form blogs, CVs and profiles).
5. Social media data: Data generated through some public, social process that can potentially be used for social science research, either with or without permission (e.g. micro-blogging platforms such as Twitter and Facebook, and, perhaps, online game data).
6. Data traces: Data that is ‘left’ (possibly unknowingly) through digital encounters, such as online search histories and purchasing, which can be used for social science research either by default use agreements or with explicit permission.
7. Found data: Data that is available in the public domain, such as observations of public spaces, which can include covert research methods.
8. Synthetic data: Where data has been simulated, imputed or synthesised. This can be derived from, or combined with, other data types.

The most important point here is that while social research traditionally relies on orthodox intentional data
(1), such as surveys and interviews, digital society has enabled much more far-reaching registration and
collection of participative intentional data (2), consequential data (3), self-published data (4), and found data
(7). These are types of data that indeed existed before digitally networked tools and platforms but which
have been expanded and accentuated. The remaining types – social media data (5), data traces (6), and, at
least chiefly, synthetic data (8) – are specific to digital society. Researchers who analyse this society therefore
face dramatically altered conditions for the generation and gathering of data about social processes and
interaction. As stated earlier, some researchers look for ways to apply the previously developed qualitative or
quantitative perspectives in this new setting, while others – myself included – would argue that there is much
to gain from trying to leave the ‘qualitative’ versus ‘quantitative’ paradigm behind.

TRANSGRESSING THE DIVISION BETWEEN QUALITATIVE AND QUANTITATIVE

The demarcation line, and sometimes open conflict, between qualitative and quantitative methodological
approaches persists today as one of the Gordian knots of social science. In scholarly discourse, traces
remain of a continuing Methodenstreit (method dispute) (Mennell, 1975). Scholars who prefer case-oriented
methods will argue that in-depth understanding of a smaller set of observations is crucial for grasping
the complexities of reality, and those who prefer variable-oriented approaches will argue that only the highly
systematised analysis of larger numbers of cases will allow for making reliable statements about the true
order of things.

But today it is becoming increasingly popular to employ combinations of qualitative and quantitative methods
(see Hesse-Biber, Chapter 35, this volume, and Morse et al., Chapter 36, this volume), at the same time
benefiting from their various strengths and balancing their respective weaknesses (Ragin, 2000; Brady and
Collier, 2010). However, many mixed methods approaches rely on rigid definitions of the two respective
paradigms to be combined, and suggest frameworks based on different forms of complementarity or
‘triangulation’ (Jick, 1979; Denzin, 2012; see Flick, Chapter 34, this volume).

Today, the massive amount of text content that is generated on social media and through other forms of
computer-mediated communication has made some previously ‘qualitative’ scholars interested in how such
data can be best used without succumbing to mere word-counting, and how the many technologies available
for analysing it can be best harnessed. For example, when researching with digital data, we may find
ourselves wanting to make sense of a corpus of blog posts, forum comments, YouTube video descriptions,
Facebook postings, tweets, and so on. In such cases, not only can automated methods for text mining
complement qualitative approaches, but they can also be repurposed for more interpretive uses. Literary
scholar Franco Moretti has coined the term distant reading for such approaches, as opposed to the close reading of text data. He goes so far as to say that there is an analytical point in not close reading texts, since close reading removes focus from the more general patterns that he thinks research should attend to:

The trouble with close reading … is that it necessarily depends on an extremely small canon … You invest
so much in individual texts only if you think that very few of them really matter. Otherwise, it doesn't make
sense … What we really need is a little pact with the devil: we know how to read texts, now let's learn how not
to read them. Distant reading: where distance … is a condition of knowledge: it allows you to focus on units
that are much smaller or much larger than the text: devices, themes, tropes – or genres and systems. And if,
between the very small and the very large, the text itself disappears, well, it is one of those cases when one
can justifiably say, Less is more. If we want to understand the system in its entirety, we must accept losing
something. We always pay a price for theoretical knowledge: reality is infinitely rich; concepts are abstract,
are poor. But it's precisely this ‘poverty’ that makes it possible to handle them, and therefore to know. (Moretti,
2013, pp. 48–9)

In relation to more interpretive approaches, then, distant reading demands that the researcher be prepared to move away from conventional close reading in order to grasp larger sets of data, and to accept losing some degree of qualitative detail as a result. Whether or not one agrees with Moretti's notion, we are now facing large bodies of online text which lay bare the fact that meaning-making happens in large numbers, as well as the fact that these large numbers cannot in turn be understood without in-depth interpretation. This inspires us to try to find entirely new approaches to qualitatively interpreting
large masses of text. Texts are irrevocably embedded in arbitrary systems of language and culture from
which their understanding must not be disconnected. While texts may be quantitatively deconstructed through
approaches in content analysis (Krippendorff, 2004), physics (Bernhardsson et al., 2010) or computational
linguistics, these methods will dissolve the data in ways that leave variable-oriented strategies as the only
way to proceed with the analysis.
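
To make the contrast concrete, distant reading of a corpus can begin with something as simple as counting which terms dominate across thousands of posts, rather than interpreting any single post closely. The following sketch, written in Python using only the standard library, is a minimal illustration of that logic; the example posts and the stop-word list are invented for demonstration.

```python
import re
from collections import Counter

# A toy corpus standing in for thousands of collected posts (invented examples).
posts = [
    "Climate change is the defining issue of our time",
    "The new climate report warns of rising sea levels",
    "Sea levels and extreme weather dominate the climate debate",
]

STOPWORDS = {"the", "of", "is", "and", "our", "new"}  # minimal, illustrative list

def tokenize(text):
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# 'Distant reading': aggregate term frequencies across the whole corpus
# instead of interpreting any individual post in depth.
counts = Counter(
    token
    for post in posts
    for token in tokenize(post)
    if token not in STOPWORDS
)

for term, frequency in counts.most_common(5):
    print(f"{term}: {frequency}")
```

In a real project the corpus would be far larger, and such frequency counts would typically feed into further steps such as co-occurrence networks or topic models – which is precisely where the qualitative question of what the patterns mean re-enters.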

REVEALING THE MESSY DETAILS

In today's world, large amounts of social data are registered and aggregated independently of initiatives from
researchers. This is illustrated by work such as that of computational sociologists Scott Golder and Michael
Macy, whose research mapped people's affective states throughout the day, as expressed via Twitter posts
in 84 countries, generating results of great interest to their field, but using a research design that was
by necessity dictated by the availability and character of the timestamped and text-based social media data
(Golder and Macy, 2011). Examples of similar studies exist in several other fields where, while the issues
dealt with are of high relevance, it is nonetheless the case that the researchers have confronted data that was
largely already at hand and constituted in certain ways. This illustrates that the researcher's choices regarding the design of the data may, at least in some respects, be increasingly backgrounded in digital society.
While choosing between a qualitative and a quantitative approach – as in opting for a survey or for in-depth interviews (see Roulston and Choi, Chapter 15, this volume) – will continue to be relevant in many
contexts, scholars are now increasingly also facing the challenge of thinking up and constructing some of their
‘methods’ after the fact.
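
To illustrate how the shape of such ‘found’ data steers the analysis, consider timestamped posts of the kind Golder and Macy worked with. The sketch below, in Python using only the standard library, groups invented example posts by hour of posting and computes a naive positivity score from a tiny word list. It is not a reconstruction of their actual lexicon-based method, only an illustration of how the timestamped, text-based character of the data dictates what can be asked of it.

```python
from collections import defaultdict
from datetime import datetime

# Invented example records: (timestamp, text) pairs standing in for harvested posts.
posts = [
    ("2011-03-01T08:12:00", "great morning, feeling happy"),
    ("2011-03-01T14:45:00", "tired and stressed at work"),
    ("2011-03-01T21:30:00", "relaxed evening, good food"),
]

POSITIVE = {"great", "happy", "relaxed", "good"}  # toy word list, purely illustrative

by_hour = defaultdict(list)
for stamp, text in posts:
    hour = datetime.fromisoformat(stamp).hour
    words = [w.strip(",.") for w in text.lower().split()]
    # Share of words in the post that appear in the toy positive-word list.
    score = sum(w in POSITIVE for w in words) / len(words)
    by_hour[hour].append(score)

for hour in sorted(by_hour):
    scores = by_hour[hour]
    print(f"{hour:02d}:00  mean positivity {sum(scores) / len(scores):.2f}")
```

The point is not the toy arithmetic but that the unit of analysis (the post), the temporal resolution (the timestamp) and the countries covered are all fixed before the researcher arrives.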

One of Purdam and Elliot's main points in presenting their typology, discussed in the previous section, is to
argue that the complexity of today's data environment forces researchers to constantly think about the highly
variable characteristics of data that they encounter or seek out. And one of the key challenges of entering this
type of terrain is to constantly try out new ways of doing things. To know that the data we elicit or download,
as well as the strategies we choose for making sense of it, are appropriate, we may test our strategy to see
whether it produces good research results. The dilemma is, however, that in order to know that the results
are good we must already have developed the appropriate method (Sandvig and Hargittai, 2015). Because of
this constant – and potentially endless – need for experimentation and discovery, investigations drawing on
new tools and approaches risk quickly getting stuck and becoming intellectually unproductive.

Let's say that you are researching some aspect of social interaction on a platform like YouTube, and have
decided that analysing user comments on videos seems to be the way to go. Now, if this had been survey
responses, or interview transcriptions, you could rely on an entire canon of literature on methods and well
established research practices for how to work with such data. Even though you might want to do things
in new ways, or challenge the conventional ways of going about the research, you would at least have a
sort of baseline, or common practice, to relate to and argue with. But in the case of YouTube comments,
you would have to do a lot more groundwork. First, for example, you would have to find a way of collecting
the comments. If the number of comments was large enough for it to be inconvenient to manually copy and
paste them – which is often the case – you would have to find some tool or other for automatically capturing
and downloading them. This risks putting you in a situation where you end up trialling and erroring yourself
through a variety of browser plugins, scripts or applications that may or may not do what you want them to.
This process can be very time-consuming, and it is not uncommon that the researcher becomes so engaged
with this very quest for a tool, that he or she – instead of doing the social research that was initially intended
– starts devoting aeons of time to scouring the net for ever ‘better’ tools, or to learning how to code. And this
is only the first of several steps, each of which may present challenges that throw you off track.
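
As one illustration of what such groundwork might look like, the sketch below uses Python and the requests library to page through the YouTube Data API's commentThreads endpoint. It assumes API version 3, a valid API key and a chosen video ID (the placeholders are hypothetical); endpoint behaviour, field names and quota rules can change, so this is a starting point rather than a recipe.

```python
import requests

API_KEY = "YOUR_API_KEY"    # hypothetical placeholder; a real key must be obtained from Google
VIDEO_ID = "VIDEO_ID_HERE"  # hypothetical placeholder for the video under study
URL = "https://www.googleapis.com/youtube/v3/commentThreads"

def fetch_comments(video_id, api_key, max_pages=5):
    """Collect top-level comment texts for one video, one result page at a time."""
    comments, page_token = [], None
    for _ in range(max_pages):
        params = {"part": "snippet", "videoId": video_id, "maxResults": 100, "key": api_key}
        if page_token:
            params["pageToken"] = page_token
        response = requests.get(URL, params=params)
        response.raise_for_status()
        data = response.json()
        for item in data.get("items", []):
            snippet = item["snippet"]["topLevelComment"]["snippet"]
            comments.append(snippet["textDisplay"])
        page_token = data.get("nextPageToken")
        if not page_token:
            break
    return comments

# comments = fetch_comments(VIDEO_ID, API_KEY)
```

Even a small sketch like this makes the groundwork tangible: decisions about how many pages to fetch, whether to include replies, and how to store the results are already methodological decisions.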

Once the comments are collected and ordered, there is a wide range of epistemological, ontological and
ethical issues to deal with. What are the comments really? Are they individual utterances or conversations?
How should you, if at all, take the likes and dislikes posted to the comments into consideration? Do all the
comments relate to the YouTube video in question, or can the comment threads take on lives of their own,
becoming forums for discussing other issues than those instigated by the video? How can you, ethically, use
these data for research? Do you need the informed consent of all the people who have posted in the thread?
And so on, basically, ad infinitum. In sum, because of such multidimensional complexity and undecidedness,
research on digital society must embrace research methods as a creative act. Instead of relying on blueprints,
copying and pasting run-of-the-mill methods sections into our papers, researchers must ‘reveal the messy
details of what they are actually doing, aiming toward mutual reflection, creativity, and learning that advances
the state of the art’ (Sandvig and Hargittai, 2015, p. 5).

METHODOLOGICAL BRICOLAGE

Nearly twenty years ago, Steve Jones wrote in the preface to a book about researching the Internet that ‘we
are still coming to grips with the changes that we feel are brought about by networked communication of the
type so prominently made visible by the Internet’ (Jones, 1999, p. x). And this is in fact still the case. Research
on digital society has continued to be a sort of trading zone between conventional academic disciplines – it is
truly transdisciplinary. In their book about Internet inquiry, Annette Markham and Nancy Baym explain that:

While most disciplines have awakened to an understanding of the importance of the Internet in their fields,
most do not have a richly developed core of scholars who agree on methodological approaches or standards.
This absence of disciplinary boundaries keeps Internet studies both desirable and frustrating. (Markham and
Baym, 2009, p. xiv)

This frustration, they argue, makes researchers of the digital society push the boundaries of ‘disciplinary
belonging’ in ways that most academic research would benefit from doing more of. Furthermore, they write
that as ‘few people who study the Internet are trained by a person, let alone a program, that gave them
specialized guidance on how to do it well', researchers of the digital society are by necessity forced to actively
and critically navigate a landscape of old and new methods, seeking out ways of engaging with data that suit
their particular projects (Markham and Baym, 2009, pp. xiii–xiv; see Markham, Chapter 33, this volume). As
Jones explained already in 1999, it is seldom a workable solution to simply apply existing theories
and methods when studying the digital society. Some such perspectives and approaches can most likely be,
and have also to some extent been, repurposed for digital media research – for example, survey methods
and interviews. But one must remember that the Internet, with its networked social tools and platforms, is ‘a
different sort of object', possessing an ‘essential changeability’ that demands a conscious shift of focus and
method (Jones, 1999, p. xi).

Because of this, digital research often demands that the person carrying out the data collection and analysis is
even more critical, and more reflexive, than is already demanded by scholarship in general. The specific
challenges of doing digital social research have, Markham and Baym (2009, pp. vii–viii) argue, ‘prompted its
researchers to confront, head-on, numerous questions that lurk less visibly in traditional research contexts'.

Against this background, and in relation to the discussion above regarding potentially transgressing the
‘qualitative'/‘quantitative’ divide, the best strategy is methodological pragmatism: focusing on the problem to be researched and on what type of knowledge is sought. Instead of positioning oneself in one corner or another of the existing field of methods literature, one can, methodologists Yvonna Lincoln
and Norman Denzin say, conceive of one's research strategy as a form of bricolage (Denzin and Lincoln,
2000). Bricolage is a French term – popularised by cultural anthropologist Claude Lévi-Strauss in the 1960s
– which refers to the process of improvising and putting pre-existing things together in new and adaptive
ways (Lévi-Strauss, 1966). From that perspective, our research approach is not fully chosen beforehand,
but rather emerges as a patchwork of solutions – old or new – to problems faced while carrying out the
research. As put by critical pedagogy researcher Joe Kincheloe (2005, pp. 324–5): ‘We actively construct our
research methods from the tools at hand rather than passively receiving the “correct,” universally applicable
methodologies', and we ‘steer clear of preexisting guidelines and checklists developed outside the specific
demands of the inquiry at hand'. So, developing your method as a bricolage means putting your specific
research task at the centre, letting your particular combination and application of methods take shape in
relation to the needs that characterise the given task.

The previously discussed demand for reflexivity on the part of the digital researcher operates on several different levels. In line with the bricolage approach described above, Markham and Baym argue that the
research design is a constantly ongoing process and that it is to be expected that any study will be reframed
continuously throughout the process of research. They write:

Different questions occur at different stages of a research process, and the same questions reappear at
different points. Second, the constitution of data is the result of a series of decisions at critical junctures in the
design and conduct of a study. The endless and jumbled network of links that comprise our research sites and
subjects create endless sources of information that could be used as data in a project. We must constantly
and thoroughly evaluate what will count as data and how we are distinguishing side issues from key sources
of information. Reflexivity may enable us to minimize or at least acknowledge the ways in which our culturally
embedded rationalities influence what is eventually labeled ‘data'. (Markham and Baym, 2009, p. xviii)

As emphasised by Jones, it is important when researching the specificities of the Internet to remember that
its uses are always contextualised. Research subjects and data are part of physical space as much as they
are part of ‘cyberspace'. This means, Jones says, that:

As a result the notion that our research should be ‘grounded’ takes on even greater significance when it
comes to Internet research. That makes Internet research particularly interesting – and demanding. Not only
is it important to be aware of and attuned to the diversity of online experience, it is important to recognize that
online experience is at all times tethered in some fashion to offline experience. (Jones, 1999, p. xii)

So, while it is exciting to study the Internet and digital society, it is also especially challenging. New platforms,
concepts, and social practices emerge fast enough to make the ‘Internet’ in itself a compelling area
of inquiry. The field, Markham and Baym (2009, p. xiii) write, has a ‘self-replenishing novelty [that] always
holds out the promise for unique intellectual spaces'. But, as discussed above, new terrains of research bring
with them new challenges and difficulties. First, there is a need for constant reflection about the role of the
self in research. Processes of digital social research highlight that researchers are actually co-creators of the
field of study. Our choices are often made in contexts where there are no standard agreed-upon rules for
research design and practice, and this makes such choices more meaningful. Furthermore, the sometimes
disembodied character of digital social settings makes it important to think extra hard about the relationship between researcher and researched.

CONCLUSION

Christian Sandvig and Eszter Hargittai discuss how digital media and the Internet can be seen as offering
new tools for answering new, or old, questions in new ways. They give an example of how things that were
not conceived as research instruments can still come to be used as such:

In this view, online games like World of Warcraft were created by private companies to allow people to
pretend to be night elves (or more accurately, for the company to make money from what people spend on
subscriptions allowing them to pretend to be night elves). Yet these games might hold the potential to answer
basic questions about the networked structure of human interaction. (Sandvig and Hargittai, 2015, p. 8)

Employing digital media as a research instrument offers ‘a new kind of microscope’ that we can use to shed
light on both new issues that are specific to digital society, and on basic questions about human social life that
are more long-standing (Sandvig and Hargittai, 2015, p. 6). Naturally, because of the multifaceted character
of digitally networked tools and platforms, such uses can be of a wide variety. They can draw on new tools for
collecting data via web scrapers, APIs or online repositories. And they can also include new devices and ways
of analysing data, in the form of computerised language processing, the harnessing of geolocative hardware,
new visualisation techniques, and so on. One example of digital society as such metamorphosing into research method is the case of big data, as discussed earlier. But, Sandvig and Hargittai argue, the big
data examples are not the most fascinating ones.

We instead see that the actual revolution in digital research instrumentation is going on now, all around us,
in smaller, ‘ordinary’ research projects. We see it in the use of crowdsourcing to replace traditional pools
of research participants; the use of hyperlink networks as a new source of data to study the relationships
between organizations; or in the idea that writing your own Web-based application is now a viable data
collection strategy. (Sandvig and Hargittai, 2015, p. 11)
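
To give a concrete sense of what ‘hyperlink networks as a new source of data’ could involve in practice, the sketch below collects the outbound links from a couple of pages and assembles them into a directed network. It relies on the third-party requests, beautifulsoup4 and networkx packages, and the seed URLs are invented placeholders, so it should be read as an illustrative starting point rather than a finished scraping workflow.

```python
import requests
from bs4 import BeautifulSoup
import networkx as nx
from urllib.parse import urljoin, urlparse

# Invented example seed pages; replace with the sites relevant to your own study.
SEEDS = ["https://example.org/page-a", "https://example.org/page-b"]

def outbound_domains(url):
    """Return the set of external domains that a given page links to."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    domains = set()
    for anchor in soup.find_all("a", href=True):
        target = urljoin(url, anchor["href"])
        domain = urlparse(target).netloc
        if domain and domain != urlparse(url).netloc:
            domains.add(domain)
    return domains

# Build a directed network: seed site -> externally linked domain.
graph = nx.DiGraph()
for seed in SEEDS:
    for domain in outbound_domains(seed):
        graph.add_edge(urlparse(seed).netloc, domain)

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```

From a network like this, one can move on to questions about which organisations link to which – exactly the kind of relational data the quotation points to.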

The totality of all such innovations, experimentations, and renegotiations is – as pointed out by Sandvig
and Hargittai – today's examples of what historian of science Derek J. de Solla Price called instruments of
revelation. When discussing the scientific revolution historically, he argued that its dominant driving force had
been ‘the use of a series of instruments of revelation that expanded the explicandum of science in many
and almost fortuitous directions'. He also wrote of the importance of ‘the social forces binding the amateurs
together’ (Price, 1986, p. 246). So, in the case of research on the Internet and digital media, we are now at that stage: a point where researchers often act like curiously experimenting enthusiasts – ‘amateurs’ – testing and devising new ‘instruments of revelation'.

Further Reading

Hargittai, Eszter and Sandvig, Christian (eds.) (2015) Digital Research Confidential. Cambridge, MA: MIT
Press.
Kincheloe, J. L. (2005) ‘On to the next level', Qualitative Inquiry 11(3): 323–50.
Markham, Annette and Baym, Nancy (eds.) (2009) Internet Inquiry. Los Angeles, CA: Sage.

References

Andrejevic, Mark (2007) iSpy: Surveillance and Power in the Interactive Era. Lawrence: University Press of
Kansas.
Atzori, L., Iera, A., and Morabito, G. (2010) ‘The internet of things: A survey', Computer Networks 54(15):
2787–805.
Bernhardsson, S., da Rocha, Luis E. C., and Minnhagen, P. (2010) ‘Size-dependent word frequencies and
translational invariance of books', Physica A: Statistical Mechanics and its Applications 389(2): 330–41.
boyd, d. and Crawford, K. (2012) ‘Critical questions for big data', Information, Communication & Society 15(5):
662–79.
Brady, Henry E. and Collier, David (2010) Rethinking Social Inquiry. Lanham, MD: Rowman & Littlefield
Publishers.
Denzin, N. K. (2012) ‘Triangulation 2.0', Journal of Mixed Methods Research 6(2): 80–8.
Denzin, Norman K. and Lincoln, Yvonna S. (eds.) (2000) Handbook of Qualitative Research. Thousand Oaks,
CA: Sage.
Glaser, B. G. (1965) ‘The constant comparative method of qualitative analysis', Social Problems 12(4):
436–45.
Glaser, Barney G. and Strauss, Anselm L. (1967) The Discovery of Grounded Theory. New York: Aldine de
Gruyter.
Golder, S. A. and Macy, M. W. (2011) ‘Diurnal and seasonal mood vary with work, sleep, and day length across
diverse cultures', Science 333(6051): 1878–1881.
Jick, T. D. (1979) ‘Mixing qualitative and quantitative methods', Administrative Science Quarterly 24(4):
602–11.
Jones, Steve (1999) ‘Preface', in Steve Jones (ed.), Doing Internet Research: Critical Issues and Methods for
Examining the Net. Thousand Oaks, CA: Sage, ix–xiv.
Kincheloe, J. L. (2005) ‘On to the next level', Qualitative Inquiry 11(3): 323–50.
Krippendorff, Klaus (2004) Content Analysis. London: Sage.
Lévi-Strauss, Claude (1966) The Savage Mind. Chicago: University of Chicago Press.
Lupton, Deborah (2014) Digital Sociology. Abingdon: Routledge.
Mann, S. and Ferenbok, J. (2013) ‘New media and the power politics of sousveillance in a surveillance-
dominated world', Surveillance & Society 11(1): n.p.
Markham, Annette and Baym, Nancy (eds.) (2009) Internet Inquiry. Los Angeles, CA: Sage.
Mennell, S. (1975) ‘Ethnomethodology and the new “methodenstreit”', Acta Sociologica 18(4): 287–302.
Moretti, Franco (2013) Distant Reading. London: Verso.
Price, Derek J. de Solla (1986) Little Science, Big Science … And Beyond. New York: Columbia University
Press.
Purdam, Kingsley and Elliot, Mark (2015) ‘The changing social science data landscape', in P. Halfpenny and
R. Proctor (eds.), Innovations in Digital Research Methods. London: Sage, pp. 25–58.
Ragin, Charles C. (2000) Fuzzy-Set Social Science. Chicago: University of Chicago Press.
Sandvig, Christian and Hargittai, Eszter (2015) ‘How to think about digital research', in Eszter Hargittai and
Christian Sandvig (eds.), Digital Research Confidential. Cambridge, MA: MIT Press, pp. 1–28.
