Summary: Research Methods for Business Students (Chapters 1–13)
1.1 Introduction
This book teaches the different steps one should take when conducting business and management research.
It will help you to undertake a research project by providing a range of approaches, strategies, techniques
and procedures. Throughout this book the terms ‘methods’ and ‘methodology’ will be used. Although some may
think these terms refer to the same thing, they actually have different meanings. The term ‘methods’ refers
to techniques and procedures used to obtain and analyse data, while ‘methodology’ refers to the theory of
how research should be undertaken.
In order to systematically conduct research based on logical relationships, a researcher must provide an
explanation of the methods used to collect data, demonstrate why the results are meaningful and outline any
limitations to the research. The goal of research is not only to explain, describe, criticize, understand or
analyse something, but also to simply find a clear answer to a specific problem.
Research that emphasises Mode 1 ways of creating knowledge, focusing on understanding
business and management processes and their outcomes, is called basic, fundamental or pure research.
Another type of research is called applied research, where the emphasis is more on Mode 2. In this case
research is conducted with direct relevance to managers and is presented in ways these managers can
understand and act upon. Pure and applied research are two extremes, in order to successfully conduct
business and management research there has to be a balance between the theoretical (Mode 1) and
practical (Mode 2) part of research. The characteristics of pure/basic and applied science are summarised in
figure 1.1 on page 11.
• Examine own strengths and interests, choose a topic in which you are likely to do well
• Analyse past project titles at your university, such as dissertations (projects by undergraduates) and
theses (projects by postgraduates)
• Search through literature and media (articles in journals, books, reports). Review articles in particular,
since they contain a lot of information about a specific topic and can therefore provide you with many
ideas
• Brainstorming
• Exploring the relevance of an idea to business using the literature; articles may be based on abstract
ideas (conceptual thinking) or on empirical studies (collected and analysed data)
Most often it is a combination of these two ways of thinking that leads to a good research idea.
Refining Ideas
There exist different techniques for refining research ideas, one of which is the Delphi technique. This
approach requires a group of people who are involved with or share the same interest in the research idea
to generate and pick a more specific research idea. Another way to refine a research idea is to turn it
into a research question before turning it into a research project. This is called a preliminary inquiry.
Integrating Ideas
The integration of the ideas from the techniques is an important part of a research project. This process
includes ‘working up and narrowing down’, which means that each research idea needs to be classified into
its area, its field, and ultimately the precise aspect in which one is interested.
• Descriptive – question usually starts with ‘When’, ‘What’, ‘Who’, ‘Where’, or ‘How’
• Evaluative – question may start with ‘How effective…’ or ‘To what extent….’
Do not make the research question too simple or too difficult to answer. The ‘Goldilocks test’ may be helpful
to determine if a question is too big (when it demands too many resources), too small (provides insufficient
data), too hot (when it is a sensitive subject) or ‘just right’. It is also essential for a research question to
provide new insights.
• Middle range theories – these are significant, but they don’t change the way in which we think like
grand theories do
• A research project needs to be coherent, which means that all the different components of the project
need to be in relationship with each other.
• It needs to be feasible as well. This means that the project should be possible to achieve.
• Background – This is an introduction for the reader to the problem or issue, it gives answers to the
questions ‘what is going to be done’ and ‘for what purpose?’. The background also shows the
relationship between a theory and a particular context and it should demonstrate the relationship
between the research and what has been done before in this subject area.
• Research question and objectives - the background should eventually lead to a statement of the
research questions and objectives and the observable outcomes.
• Method – This is the longest section and reveals how the research will be conducted. It consists of two
parts: research design and data collection. Research design is an overall overview of the chosen
method and provides the reason for choosing this method. Here you will explain the choice of a certain
research strategy and determine an appropriate time frame for the project. The section ‘data collection’
will specify how and where the data will be collected and will explain the various analysis techniques
that will be used during the research.
• Timescale – In this section you will divide the research into different stages and explain how much
time each stage will approximately take.
• Resources – In this facet of the proposal certain resource categories such as finance, data access and
equipment will be taken into consideration. This section will also include the expenses that may be
involved with these categories.
• References – This section consists of the literature sources to which you have referred.
1. Use literature in the initial stages of research, when making the research proposal
When a critical review is successful it will provide new insights about a subject area that no one has ever
thought about. It is necessary to show how the new findings and developed theories relate to other
literature about your subject to demonstrate that you are familiar with what has already been said about the
subject.
• Include the key academic theories within the chosen research area
• Through clear referencing, enable those reading your project report to find the original publications
which you cite
How to be ‘critical’
Being critical means that one needs to make reasoned judgments about a particular text, by evaluating a
problem with good use of language. This means that one’s own critical stance needs to be based on clear
arguments and references to the literature. Being critical also means making a clear and justified analysis of
the key literature of a research project.
• A series of chapters
Every project report should refer to the key issues from the literature in the discussion and conclusions.
Don’t let the review become an uncritical listing of previous research! It is easy to be critical when one
constantly compares or contrasts different authors and their ideas. The review should be a funnel in
which you:
1. Start at a general level and narrow it down to research questions and objectives
5. Provide detailed findings and show how they are related to the literature
Primary literature First occurrence of a piece of work. Includes published sources such as reports and
documents, but also unpublished work such as letters and memos.
Most of the time this kind of literature is very detailed, but not easy to access; it is therefore sometimes
referred to as grey literature.
Secondary literature Is aimed at a wider audience, easier to locate and better covered by tertiary literature.
This includes books, journals and newspapers.
Tertiary literature Also called search tools, to locate primary and secondary literature. They include online
search tools, databases, and dictionaries.
Journals in particular are an essential literature source for virtually any research, since they provide a
researcher with information focused on his subject area. Nowadays it is easy to access journals via
online databases. Refereed academic journals only publish articles which are evaluated by academics before
their publication. These articles are therefore characterised by their quality and suitability. Professional
journals are produced by various organisations for their members. Their articles are usually more practical
in nature than those of refereed academic journals.
Defining parameters
One way to start defining search parameters is to browse lecture notes and course textbooks and make
notes relevant to the research question.
1. Discussion
4. Brainstorming
While it is very tempting to start a literature search using a search engine such as Google, this
must be handled with care, as the research project should be an academic piece of work and hence must
utilise academic sources. Search engines should therefore be used to provide access to academic literature.
Conducting a literature search can be done by:
2. Obtaining literature referenced in books and journal articles you already read
It is important to make notes on the literature one has read, because it will help thinking through the ideas in
the literature in relation to the research. When making notes there are three sets of information one needs
to record:
• Bibliographic details
• Supplementary information
Ontology
Ontology is a philosophical position that refers to the nature of reality. One aspect of ontology is
objectivism. This holds that social entities exist in a reality external to, and independent of, the social
actors concerned with their existence.
Another aspect is subjectivism, which holds that social occurrences are created through the perceptions and
consequent actions of the involved social actors. People who adopt a subjectivist way of thinking find it is
necessary to explore the details of a situation to be able to understand what is going on. This is termed
social constructionism.
Objectivists think that the culture of an organisation is something that an organisation ‘has’, while
subjectivists tend to view the culture as something an organisation ‘is’. Management theory leans
towards the objectivist way of thinking.
Epistemology
Epistemology regards what constitutes acceptable knowledge in an area of study. It addresses the
questions: ‘What is knowledge?’, ‘How is knowledge acquired?’ and ‘What do people know?’.
Positivism
The philosophy of positivism refers to the philosophical stance of a natural scientist. This philosophy holds
that collecting data about an observable reality and searching for regularities and causal relationships will
lead to the creation of a new theory or new generalisations. Other characterizations of positivism are:
• The researcher is independent of the subject of the research; he is value-neutral (his feelings are
excluded from the research)
Realism
Realism claims that what our senses show us is reality: objects have an existence independent of the human mind.
Therefore realism contradicts idealism, which states that only the mind and its contents exist. Just like
positivism, realism also assumes a scientific approach to the development of knowledge. There exist two
kinds of realism:
• Direct realism – what you see is what you get, what we perceive and experience with our senses
displays the world in an accurate way.
• Critical realism – what we experience are sensations, images of the existing things in the real world, not
the things themselves; our senses can deceive us, so sensations may sometimes be illusions.
There is a difference between these two kinds of realism regarding the capacity of research to change the
world. A direct realist would state that the world is relatively unchangeable whereas a critical realist would
claim that the researcher’s understanding of that which is being studied could be changed. Many researchers
claim that what we explore is just part of the bigger picture. Thus researchers usually adopt a critical
realism point of view.
Interpretivism
Interpretivism claims that it is necessary for researchers to understand the differences between humans in
our role as social actors. We interpret our daily social roles in accordance with the meaning we give to these
roles. Interpretivism stems from two intellectual heritages:
• Phenomenology considers the way in which we as humans make sense of the world around us
• Symbolic interactionism: we are all in a continual process of interpreting the social world we live in and
we interpret the actions of the people that interact with us. These interpretations lead to adjustments of
our own meaning and actions.
It is important for a researcher to understand the world of his research subjects and to understand the world
from their point of view.
Axiology
Axiology is a strand of philosophy that studies judgments about value. This includes values in the fields of
ethics and aesthetics. One’s own values play a crucial role in all stages of the research process. Our values
are the guiding line for all our actions (Heron 1996).
Research Paradigms
The term paradigm is frequently used in the social sciences, but it often leads to confusion because of its
many meanings. Here we define paradigm as a way of examining social occurrences from which particular
understandings of these phenomena can be gained and explanations attempted. In Figure 4.2 on page 141
there is an image of how the four paradigms can be arranged:
• Radical structuralist paradigm – this paradigm is concerned with understanding structural patterns
within organisations (hierarchies for example) and reporting relationships and the extent to which these
relationships may produce dysfunctionalities.
• Interpretive paradigm – when adopting this paradigm one is concerned with understanding the
fundamental meanings attached to organisational life. One wishes to discover irrationalities rather than
rationalities. In this paradigm, being involved in the everyday activities of the organisation in
order to understand and explain what is happening is more important than trying to change things.
• Radical humanist paradigm – this dimension adopts a critical perspective of organisational life. It
emphasises the consequences of one’s words and deeds on others. Working with this paradigm one
wishes to change things.
3. Examine these premises and the logic of the argument that produced them, and relate them to existing theories
5. Analyse the results. If they are not consistent with the premises, the theory is false and should be
rejected or modified. If the results are consistent, the theory is corroborated.
• Reliability. Research should use a highly structured methodology, so that it is easy to replicate. If
this is the case the research is reliable.
• Generalisation.
Induction
With inductive reasoning, a set of true premises does not guarantee that the conclusion is true. This is
because the conclusion is based on observations made by humans, and humans make mistakes. A conclusion
is therefore never guaranteed.
Abduction
A third approach, called abduction, starts with a conclusion: a surprise fact. With a set of premises one
subsequently tries to prove the conclusion. An abductive approach does not move from theory to data
(deduction) or from data to theory (induction), but rather moves back and forth between the two, combining
deduction and induction.
5.1 Introduction
A researcher must be able to explain why he chooses a particular research design. This justification should
be based upon the research questions and objectives and should also be consistent with his research
philosophies.
• This research method is often associated with positivism, but may also be associated with
interpretivism when the numerical data are derived from qualitative material.
• Quantitative research is generally associated with a deductive approach, which means that the focus is
on using data to test a certain theory or certain theories. However, it could be associated with an
inductive approach in some cases.
• This method examines relationships between variables, which are measured numerically and analysed
using statistical techniques.
• This research method is often associated with an interpretive philosophy, because researchers need to
make sense of the phenomenon being studied. Qualitative research is often referred to as naturalistic
research, since it needs to be conducted in a natural setting in order to gain trust, participation and
access to meanings and in-depth understanding.
• Qualitative research can start with either an inductive or a deductive approach, but in practice an
abductive approach is frequently used.
• When conducting qualitative research, participants’ meanings and the relationships between them are
studied using data collection techniques and analytical procedures, to develop a conceptual framework.
• It is usually associated with action research, case study research and ethnography.
• Often associated with critical realism, since this philosophy advocates that while there is an objective
reality to the world we live in, the way in which each of us understands and interprets it will be affected
by our own social conditioning. It could also be associated with pragmatism.
• This method may use either an inductive or a deductive approach. Frequently both approaches are
used.
Figure 5.2 on page 165 shows an image of the different methodological choices one could make:
o Multi method: more than one data collection technique is used but this is restricted to either
qualitative or quantitative design
o Mixed methods: both qualitative and quantitative design are mixed in a research design
There exist three research designs one could adopt when conducting research:
1. Exploratory study
This kind of study is a valuable way to ask open questions to discover what is going on and gain new
insights about a subject of interest. Conducting exploratory research is useful when one wishes to
understand something or wants to assess phenomena in a new light. A few ways to conduct exploratory
research are:
1. To search literature
2. To interview experts
2. Descriptive study
The purpose of a descriptive research is to acquire an accurate profile of happenings, people or situations. It
is possible for descriptive, explanatory and exploratory studies to coexist in one research project, where
they might extend one another. When conducting descriptive research one should be cautious, because a
descriptive study may become too descriptive and may therefore lead to worthless outcomes. This is also
the reason why most descriptive studies are often combined with explanatory studies: after describing
something the research will provide a valuable explanation. This is referred to as descripto-explanatory
study.
3. Explanatory study
When performing this kind of study one wishes to determine causal relationships between certain variables.
Generally, a strategy is a plan of approach to achieve a certain goal. A research strategy could therefore be
defined as the various steps a researcher has to take to answer his research question. The choice of a
research strategy should be guided by one’s research question(s) and objective(s), the cohesiveness with
which these link to the research philosophy, research approach and purpose, and by more pragmatic
concerns such as the extent of existing knowledge and access to participants and other sources of data. The
following strategies will be discussed in this chapter (along with the research design that is linked to them):
Experiment
Survey
Archival Research
Case Study
Ethnography
Action Research
Grounded Theory
Narrative Inquiry
Experiment
The experiment is a type of research that has been used frequently by natural scientists. The goal of an
experiment is to examine the probability of a change in an independent variable causing a change in
another, dependent variable (Hakim 2000). See table 5.2 on page 174 for the different variables and their
meanings. Instead of research questions, an experiment uses hypotheses (predictions). There are two kinds
of hypotheses in an experiment:
• Null hypothesis – which predicts that a significant difference or relationship between the variables does
not exist
• Alternative hypothesis – which predicts that a significant difference or relationship between the
variables does exist
When performing an experiment, the null hypothesis is tested statistically. The null hypothesis will be
accepted when the probability of obtaining the observed results if no real difference exists (the p-value)
is greater than a prescribed value (usually 0.05). In this case the alternative hypothesis will be rejected.
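The decision rule above can be sketched in code. This is an illustration only: the group scores below are invented, and a permutation test is just one of many ways to estimate a p-value for the null hypothesis of no difference between groups.

```python
import random
from statistics import mean

def permutation_test(control, experimental, n_permutations=10_000, seed=42):
    """Estimate the p-value for the null hypothesis that both groups
    come from the same distribution, by repeatedly shuffling group labels
    and counting how often a difference at least as large as the observed
    one occurs by chance."""
    rng = random.Random(seed)
    observed = abs(mean(experimental) - mean(control))
    pooled = list(control) + list(experimental)
    n = len(control)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n]) - mean(pooled[n:]))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Invented example data: a control and an experimental group
control = [5.1, 4.9, 5.0, 5.2, 4.8]
experimental = [5.9, 6.1, 6.0, 5.8, 6.2]

p_value = permutation_test(control, experimental)
# Decision rule from the text: retain the null hypothesis only if the
# p-value exceeds the prescribed significance level (often 0.05).
reject_null = p_value < 0.05
```

Because the two invented groups barely overlap, the estimated p-value falls well below 0.05, so the null hypothesis would be rejected in favour of the alternative.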
• Classical experiment – a group of participants is selected and randomly assigned to either a
control or an experimental group. The experimental group receives a manipulation or intervention,
while in the control group no such intervention is made. Because the control group is influenced
by the same external influences as the experimental group, any changes to the dependent variable will
have to be caused by the intervention.
• Quasi-experiment – also uses an experimental and a control group, but the participants will not be
randomly assigned to a group. Matched pair analysis is when a participant in the control group is
compared to a participant in the experimental group based on matching factors such as gender, age,
occupation etc. This is to create an even greater likelihood that the intervention is the cause of change
to the variable.
• Within subject design/repeated measures design – this design uses only one single group to determine
change in a variable. Every participant will be subject to an intervention of the independent variable.
Before the intervention, all participants will be observed, a pre-intervention, to establish a baseline (or
control), after which a planned intervention of the independent variable and observation and
measurement of the dependent variable will follow. This research design requires far fewer participants
than others, but a side effect may be that the participants become tired of or familiar with the
experiment.
Internal validity is the extent to which the findings of the experiment can be attributed to the interventions
rather than to any flaws in the research design; it is easiest to establish in a laboratory experiment.
External validity is a lot more difficult to establish, particularly when conducting field-based research.
Survey
This research strategy is usually associated with the deductive research approach. It is often used for
exploratory and descriptive research. Because most surveys use questionnaires, they are easy for people to
understand and to explain. This is the reason why this kind of research design is so popular. Besides through
questionnaires, data for a survey strategy could also be collected through structured observation and
structured interviews. With a survey, quantitative data are collected and analysed quantitatively using
descriptive statistics. When using a sample one needs to be sure that the sample is representative of the
whole population.
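The descriptive statistics mentioned above can be computed with a few lines of code. The satisfaction scores below are invented purely for illustration:

```python
from statistics import mean, median, mode, stdev

# Invented survey responses: satisfaction scores on a 1-5 Likert scale
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

# Common descriptive statistics used to summarise survey data
summary = {
    "n": len(responses),                  # sample size
    "mean": mean(responses),              # average score
    "median": median(responses),          # middle value
    "mode": mode(responses),              # most frequent score
    "std_dev": round(stdev(responses), 2) # sample standard deviation
}
```

Whether such a summary can be generalised to the whole population depends, as the text notes, on how representative the sample is.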
Archival research
An archival research strategy uses administrative records and documents as the main source of data. Not
only historical but also recent documents could be collected and analysed when adopting this strategy.
With an archival research strategy, research questions that focus upon the past can be answered.
These questions may be exploratory, descriptive or explanatory.
Case study
A case study allows one to explore a research topic or phenomenon within its context or within real-life
contexts. With a case study there is no clear boundary between that which is being studied (the
phenomenon or topic) and the context within which it is being studied (the real-life ‘case’). This approach is
useful when one wishes to gain a better understanding of the research and a certain phenomenon,
especially when one wishes to explore existing theory. Below are some characteristics of case studies:
• Case study research could combine qualitative and quantitative methods such as questionnaires and
interviews. The use of different data collection techniques within one study to be sure that the data are
telling you what you think they tell you is called triangulation.
• It is possible to use multiple cases within one case study, this is termed literal replication. The cases will
be chosen in such a way that similar results are predicted to be produced from each one. Theoretical
replication is when a contextual factor is deliberately different in a certain set of cases. This approach to
case study research is deductive.
• Embedded case study – when research is focussed on sub units within an organisation and the case will
involve more than one unit of analysis.
Ethnography
This approach is used to study particular groups of people. When conducting ethnographic research one
wishes to explore and analyse people in groups who share the same space (this could be the same street,
work group, organisation or even society) and who interact with each other. Cunliffe distinguishes three
ethnographic strategies:
• Realist Ethnography – this is an objective, factual strategy which wishes to identify ‘true’ meanings.
People are being observed through facts or data about structures and processes, routines and norms,
practices and customs, artefacts and symbols (Cunliffe 2010). A realist ethnographer writes in third
person, which displays his role as impersonal reporter of facts.
• Interpretive Ethnography – this strategy focuses on the subjective meanings that the people being
studied attach to their social world, and allows for multiple interpretations of the same situation.
• Critical Ethnography – this strategy is designed to analyse and explain the impact of power, privilege
and authority on the people who are subject to these influences.
Action Research
This type of research strategy is designed to develop answers to real organisational problems through a
participative and collaborative approach which uses various forms of knowledge. Action research will
influence the participants and the organisation beyond the research project. As Greenwood and Levin said:
action research is a social process in which a researcher works with members of an organisation to enhance
their situation and their organisation. This type of research has five themes:
1. Purpose – the purpose of action research is to promote organisational learning to produce practical
outcomes through identifying issues, planning action, taking action and evaluating action (Coghlan and
Brannick).
2. Process – the process of action research starts with a particular context and with a research question,
but because it moves through several stages (See figure 5.4 on page 183) the focus may change as the
research develops. Each stage of the process begins with diagnosing or constructing ideas, planning,
taking action and finally evaluating action. This cycle will be repeated several times.
3. Participation – this component of action research is critical. For Greenwood and Levin action and
participation are essential parts of an Action Research process. One of the reasons why this is the case
is because the members of an organisation need to cooperate with the researcher and enable him to
study their existing work. Moreover, the participants are required to participate in the form of
collaboration through the cycles to allow any improvement in the organisation to occur. Without
participation this type of research would not be able to work. Action Research enables bottom-up
culture change, because organisational members are more likely to implement change they have helped
to develop. Therefore, members of an organisation become more engaged and more willing to make
decisions.
5. Implications – One of the implications of Action Research is that participants will raise their expectations
about their future treatment (since they are so involved with the organisation). Another implication is
that the organisation will develop and its culture will change. Researchers could also use the results
from this research to develop theory to inform other contexts.
Grounded theory
Grounded theory is a theory developed from a set of data (using an inductive approach). It was developed
as a way to analyse, interpret and explain the meanings that social actors construct to make sense of their
daily experiences in particular situations. There are three stages of coding: open, axial and selective coding.
During all these stages of coding the researcher is constantly comparing each item of data with others.
Coding thus involves moving between inductive (data to theory) and deductive (theory to data)
thinking: while discovering relationships between codes and interpreting them, the researcher is thinking
inductively (he develops his own theory from the relationships between codes). This interpretation needs to
be tested by collecting data from other cases, which means that the researcher is thinking deductively,
because he tests his ‘theory’ (interpretation) with other data. This is known as the process of abduction.
With the Grounded Theory strategy, sampling is not meant to achieve representativeness but rather to focus
the research on a core theme, relationship or process. This approach is known as theoretical sampling,
which ends when theoretical saturation (conceptual density) is reached. This happens when the data
collection does not continue to reveal any new properties relevant to a category, where categories have
become well developed and understood and relationships between categories have been verified (Strauss
and Corbin 2008).
Narrative inquiry
The term narrative means story or a personal account which interprets an event or sequence of events
(Saunders 2012). Narrative inquiry refers to a research strategy where a researcher believes that the
experiences of his participants can best be accessed by collecting and analysing these as stories. Narrative
inquiry preserves any chronological connection and sequence of events as told by the participant. In this
way the reader may find it easier to understand the report and the researcher is able to provide his
interpretation of the events.
With Narrative Inquiry the participant is the narrator of a story about an event, work project, managing or
setting up a business, or organisational change. It may be used in combination with other strategies as
complementary approaches.
Longitudinal studies
If the research is reported more like a ‘diary’ representing events over a specific period, it is called
longitudinal. The advantage of this time horizon is that it makes it possible to study change and development.
• Reliability – a reliable research is reproducible, meaning that the data collection techniques and analytic
procedures would produce the same findings if they were repeated by someone else or at another time.
In order to be reliable one has to work in a structured and methodological way.
• Construct validity – the extent to which the research measures actually measure what the researcher
intended them to assess.
• Internal validity – this is the case when the research displays a causal relationship between two
variables.
• External validity – Concerned with questions such as: “Are the research findings generalisable?”, “Would
a researcher find the same in other relevant settings or groups?”.
Before conducting research, a researcher needs to be sure he will have access to the data he needs, and he
has to think carefully about the possible ethical difficulties he might face. Because business and
management research will inevitably make use of human participants, ethical concerns will almost always
arise.
1. Traditional access- this may refer to face-to-face interaction, conversations, correspondence or visiting
data archives.
2. Internet-mediated access – this involves the use of a computer, or computer technologies such as the
Web, email and webcams to be able to gain access to questionnaires, discussions, experiments or
interviews or to gather secondary data.
3. Intranet-mediated access – a variant of internet-mediated access where one gains virtual access as an
organisational employee or worker using its intranet.
4. Hybrid access – this type of access combines traditional and internet-mediated approaches.
The levels of access may vary because they depend on the nature and depth of the access one wishes to
achieve: physical, continuing and cognitive access. Physical access may be difficult because not all
organisations are prepared to engage in activities which are not necessary for them, since time and effort
are required. Sometimes the gatekeeper (the person who keeps data and decides who may have access to
it) does not allow people to undertake the research (because the organisation does not receive value from it
or the topic is too sensitive).
Many people see access to data as a continuing process and not just one single event. One of the two
reasons for this is that access may be an iterative and incremental process. After gaining access to one
particular set of data one might seek further access to other data in order to conduct another part of the
research. Another reason why access is a continuing process is that the people from whom one needs to
collect data may be different from those who agreed to the request for access (the gatekeepers).
Physical access to an organisation’s data will usually be granted in a formal manner, through the
organisation’s management. Beyond this, it is useful to gain the trust of organisational members so that
they are willing to share meaningful data; this deeper level of access is termed cognitive access.
Negotiating access
Negotiating access is likely to be important if one wishes to gain personal entry to an organisation and to
have the cognitive access that allows one to collect the necessary data. Therefore it is important to
consider the project’s feasibility (determining whether it is practicable to negotiate access for a research
project) and sufficiency (whether one is able to gain sufficient access to fulfil the research objectives).
likely to face problems of access to data. The status of an internal/participant researcher who wishes to gain
cognitive access could cause suspicion, because other organisational members may not know what the
internal/participant researcher will do with the data. Here it is also important for the researcher to be able
to communicate the purpose of his research.
• The amount of time or resources involved in the request for access - the less the better
• The sensitivity of the topic – negative implications are less likely to lead to access being granted, so
highlight the positive aspects of the research.
• The confidentiality of the data and the anonymity of the organisation need to be ensured.
relationship with participants will rise. As one establishes credibility he can develop the possibility of
achieving a fuller level of access.
• Deontological view – following rules to guide the researcher’s conduct; acting outside these rules can
never be justified.
• Teleological view – deciding whether an act is justified should be determined by its consequences and
not by predetermined rules.
‘Codes of ethics’ were developed to overcome ethical dilemmas arising from various social norms. Codes of
ethics are a list of principles which outlines the nature of ethical research and a statement of ethical
standards.
Many universities have research ethics committees to ensure that research conducted by students is non-
controversial and poses minimal risk to participants. Research ethics committees review all research
conducted by those in the institution that involves human participation and personal data. Table 6.3 on page
231 and 232 provides the general principles developed to recognise ethical issues.
• Respecting privacy
• Informed consent
• Management of data
The term netiquette refers to ‘net etiquette’, or in other words the social standards one should follow
online. Netiquette particularly concerns the use of emails and messaging, since poorly worded messages
may seem unfriendly or unclear to the receiver.
6.6 Ethical issues during the specific stages of the research process
Figure 6.1 on page 236 sums up the different ethical issues that could arise at specific stages of the research
process. Most ethical issues can be predetermined and dealt with during the design stage of a research
process. One should be sure that the intended research is in line with the ethical principle of not causing any
harm to participants. When seeking access a researcher should not put any pressure on the members of an
organisation to grant access. The nature of participants’ consent can be seen as a continuum:
1. The continuum starts with a complete lack of consent – this is the case when participants may fear
deception on the researcher’s part
2. Through inferred consent - the participant makes an agreement which states that he has control over
the way the data is analysed, used, stored and reported
3. And ends with informed consent - participants are fully informed and may ask questions whenever they
want
Moreover, the participants need to be fully aware of the information that is asked of them. A researcher
needs to inform them of this formally with the use of a participant information sheet. It has to include the
requirements and implications of participating, the nature of the research, how the data will be analysed,
reported and stored, and who to contact when any concerns arise. A more detailed written agreement could
be established as a consent form, which both parties should sign. Consent forms help clarify the boundaries
of consent.
6. Kept securely
7. Processed in accordance with the rights of data subjects under the Act
A further category of personal data, known as sensitive personal data, covers information about a
participant’s racial or ethnic origin, political opinions, religious beliefs, physical/mental health, sexual life or
any proceedings or sentence related to an alleged offence.
7.1 Sampling
When conducting scientific research one should always consider the use of sampling. If it is possible to
obtain and analyse data from every possible case or group member, this is termed a ‘census’. However, this
is not always feasible because one might face restrictions of time, money and access. This is the reason
why researchers usually select a sample instead.
A sample should always represent the full set of cases in a way that is meaningful and which we can justify
(Becker 1998). The full set of cases from which the sample is taken is called the population. The
population does not necessarily signify people; it could also point to Chinese restaurants or electric cars in a
specific region, for example. There are a number of reasons why sampling is a better option than a census:
Researchers such as Barnett argue that using a sample leads to higher overall accuracy than a census. This
is because the researcher focuses on a smaller number of cases to collect data from and therefore has
more time to design and pilot the data collection methods. Moreover, collecting data from fewer cases
means the information gathered can be more detailed.
Sampling techniques
There are two types of sampling techniques (see figure 7.2 on page 261):
• Probability/representative sampling – the chance of each case being selected from the population is
already known and is usually equal for all cases. This is used when you want to prove something
statistically.
• Non-probability sampling – the chance of each case being selected from the population is not known
and it is impossible to make statistical inferences about the characteristics of the population.
Often, probability sampling is associated with survey strategies where one needs to make inferences from
a sample about a population in order to answer the research question and to meet the objectives. Henry
(1990) advises against probability sampling for research that uses fewer than 50 cases, because this
amount may not be representative of the entire population. The process of probability sampling involves
four stages:
3. Choose the most suitable sample techniques and select the sample
The ‘sampling frame’ for a probability sample is a complete list of all the cases in the population from which
the sample is drawn. It is not possible to select a probability sample without a sampling frame. When no
suitable list exists and you still want to use a probability sampling technique, you will have to compose your
own.
The way in which a researcher defines his sampling frame also has implications for the extent to which he
can generalise from his sample. If a sampling frame is a list of all customers of an organisation, one can
only generalise to that population. Thus, you should not generalise beyond your sampling frame. This is a
mistake many researchers make; they don’t place clear limits on the generalisability of their findings.
The larger a sample’s size, the lower the likely error in generalising to the population. The choice of sample
size is governed by:
• The confidence one has in the data (whether you are certain that the sample is representative of the
entire population)
• The margin of error one tolerates (the accuracy for estimates made from the sample)
Many statistical analyses assume that the data are normally distributed. The larger the absolute size of a
sample, the closer the distribution of its mean will be to the normal distribution. This relationship is known
as the ‘central limit theorem’, and it holds even if the population from which the sample is drawn is not
normally distributed. It has been shown that any sample size larger than 30 will usually result in a sampling
distribution for the mean that is very close to a normal distribution. This is the reason why a minimum
sample size of 30 is often recommended.
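The convergence described by the central limit theorem is easy to demonstrate. A minimal sketch using Python’s standard library, where the right-skewed population is a hypothetical example:

```python
import random
import statistics

random.seed(42)

# A clearly non-normal (right-skewed) population of 10,000 values.
population = [random.expovariate(1.0) for _ in range(10_000)]

def sample_means(sample_size, n_samples=1_000):
    """Draw repeated random samples and return the distribution of their means."""
    return [statistics.mean(random.sample(population, sample_size))
            for _ in range(n_samples)]

# With samples of 30+, the means cluster tightly around the population mean,
# even though the population itself is skewed.
means = sample_means(30)
print(round(statistics.mean(population), 2))  # population mean, close to 1.0
print(round(statistics.mean(means), 2))       # mean of sample means, also close to 1.0
```

Plotting `means` as a histogram would show the familiar bell shape, despite the skewed source population.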
The process of drawing conclusions about a population on the basis of data describing the sample is called
‘statistical inference’. The ‘law of large numbers’ holds that large samples are more likely to be
representative of the population from which they are drawn than smaller samples, and their means are also
more likely to be close to the population mean.
Response rate
sample is one that represents the population from which it is taken exactly. There are four levels of
non-response. Reasons why people don’t respond may be that they refuse to participate in the research,
that they are ineligible (they don’t fit the requirements) or that they are unreachable. A research report
should always include the response rate of the research, which can be calculated by the following formula:

total response rate = total number of responses / total number in sample

A more common way of calculating the response rate, the active response rate, excludes ineligible and
unreachable respondents from the total in the sample.
It is important to estimate the likely response rate and increase the sample size accordingly to ensure that
you will be able to undertake the analysis at the level of detail required. Once the estimated response rate
and the minimum sample size are determined one could calculate the actual sample size with the following
formula:
na = (n × 100) / re%

where na is the actual sample size, n is the minimum sample size required and re% is the estimated
response rate expressed as a percentage.
One way to estimate the response rate is to analyse the response rates achieved for similar surveys that
have already been undertaken and subsequently base the response rates on these.
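The adjustment can be sketched in a few lines (the minimum sample size and expected response rate below are hypothetical figures):

```python
import math

def actual_sample_size(minimum_sample_size, estimated_response_rate_pct):
    """Inflate the minimum sample size (na = n * 100 / re%) so that, after
    non-response, enough usable responses remain for the analysis."""
    return math.ceil(minimum_sample_size * 100 / estimated_response_rate_pct)

# Hypothetical example: 100 responses are needed and, based on similar
# surveys, a 30% response rate is expected.
print(actual_sample_size(100, 30))  # 334
```

Rounding up (`math.ceil`) errs on the side of slightly too many cases rather than too few.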
Sampling techniques
1. Simple random
2. Systematic random
3. Stratified random
4. Cluster
5. Multi stage
See figure 7.3 for a guideline for selecting the appropriate probability sampling technique.
This is done by selecting the sample at random from the sampling frame using a computer or random
number tables. You do this by numbering each of the cases with a unique number (starting with 0) and
selecting cases using random numbers until your actual sample size is reached. This is done without
replacement, so that no number can be selected twice. This type of sampling is best used when one has an
accurate and easily accessible sampling frame that lists the entire population. The sample that is eventually
selected can be said to be representative of the entire population, because the numbers were chosen
without bias.
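A minimal sketch of simple random sampling, assuming a hypothetical sampling frame of 200 numbered cases:

```python
import random

# Hypothetical sampling frame: every case gets a unique number from 0.
sampling_frame = [f"case_{i}" for i in range(200)]

random.seed(1)  # fixed seed so the sketch is reproducible
# random.sample selects without replacement, so no case can be chosen twice.
sample = random.sample(sampling_frame, 20)

assert len(set(sample)) == 20  # all selected cases are unique
```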
This involves the researcher selecting the sample at regular intervals from the sampling frame. This is done
by numbering each of the cases in the sampling frame (starting with 0), selecting the first case using a
random number, calculating the sampling fraction and finally selecting subsequent cases systematically,
using the sampling fraction to determine the frequency of selection. The sampling fraction is the proportion
of the entire population that one needs to select and can be calculated using the following formula:

sampling fraction = actual sample size / total population

When the sampling fraction is ¼ one needs to select every fourth case from the sampling frame. Using this
technique one needs to be sure that the list does not contain periodic patterns since this may disturb the
results.
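The steps above can be sketched as follows (a hypothetical frame of 100 cases and a desired sample of 25, giving a sampling fraction of ¼):

```python
import random

frame = list(range(100))               # hypothetical sampling frame, numbered from 0
sample_size = 25
interval = len(frame) // sample_size   # sampling fraction 25/100 = 1/4 -> every 4th case

start = random.randrange(interval)     # choose the first case at random
sample = frame[start::interval]        # then select every 4th case after it

print(len(sample))  # 25
```

Note the caveat from the text: if `frame` had a periodic pattern with a cycle of 4, this selection would be biased.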
This is a modification of random sampling in which you split the population into two or more relevant and
significant strata based on one or more attributes. In other words, the sampling frame is divided into
various subsets, after which a random sample is drawn from each of the strata. By dividing the population
into a series of relevant strata the sample is more representative, because one can ensure that each of the
strata is represented in the sample.
To do this a researcher chooses the stratification variable(s) and divides the sampling frame into the
discrete strata. Then he numbers each of the cases within each stratum with a unique number (starting with
0), after which he selects the sample using either simple random or systematic random sampling.
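A minimal sketch of these steps, assuming a hypothetical population stratified by region:

```python
import random

random.seed(7)

# Hypothetical population stratified by region (the stratification variable).
strata = {
    "north": [f"n{i}" for i in range(60)],
    "south": [f"s{i}" for i in range(40)],
}

# Draw a simple random sample within each stratum, proportional to its size
# (10% here), so every stratum is guaranteed to be represented.
fraction = 0.10
sample = []
for cases in strata.values():
    sample.extend(random.sample(cases, int(len(cases) * fraction)))

print(len(sample))  # 6 + 4 = 10
```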
Cluster sampling
This technique is similar to stratified random sampling as it is required to divide the population into discrete
groups prior to sampling. The groups are called ‘clusters’ and can be based on any naturally occurring
grouping (for example by manufacturing firm or geographical area). With this technique the sampling frame
is the complete list of clusters rather than a list of the individual cases within the population. The technique
has three stages:
This technique leads to a sample that represents the whole population less accurately than stratified random
sampling.
Multi-stage sampling
This is a development of cluster sampling. Just like cluster sampling, multi-stage sampling can be used for
any discrete group, including those that are not geographically based. With this technique one modifies a
cluster sample by adding one or more further stages of sampling that also involve some form of random
sampling. The four phases of multi-stage sampling are depicted in figure 7.4 on page 279. Because this
technique relies on several different sampling frames, one needs to ensure that they are all suitable and
available.
Non-probability sampling provides alternative techniques for selecting samples. There are no rules for
deciding the sample size, but it is important to choose a size that represents the population adequately.
Quota sampling
Quota sampling is a non-random sampling approach often used for structured interviews. It is a type of
stratified sample in which the selection of cases within strata is entirely non-random. To select a quota
sample one needs to divide the population into groups, calculate the quota of each group (based on relevant
data), give each interviewer an ‘assignment’ (this states the number of cases in each quota from which to
collect data) and combine the collected data from the interviews to provide the sample.
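The quota calculation can be sketched as follows (the population shares and sample size are hypothetical):

```python
# Each group's quota mirrors its share of the population; the interviewer
# then fills each quota non-randomly while collecting data.
population_shares = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}
total_sample = 200

quotas = {group: round(share * total_sample)
          for group, share in population_shares.items()}
print(quotas)  # {'18-34': 80, '35-54': 70, '55+': 50}
```

Each quota would then be handed to an interviewer as an ‘assignment’, as described above.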
Purposive sampling
This type of sampling involves the researcher using judgement to select the cases that are best suited to
answering the research questions and meeting the objectives. There are a number of strategies one can
adopt:
• Extreme case/deviant sampling – focuses on unusual or special cases that will provide answers to the
research question.
• Heterogeneous/maximum variation sampling – selects participants with sufficiently diverse
characteristics to generate the maximum variation possible in the collected data.
• Homogeneous sampling – focuses on one specific subgroup in which all the members are very similar
(in age or occupation, for example).
• Critical case sampling – selects critical cases because they are either important or can make a
dramatic point.
• Typical case sampling – this enables the researcher to provide an illustration of what is ‘typical’ for
those who will read the research report and are unfamiliar with the topic.
• Theoretical sampling – sample selection is dictated by the needs of the theory being developed. Thus
the sampling occurs during the research as more participants are needed.
Volunteer sampling
Snowball sampling is a type of volunteer sampling used when it is difficult to identify members of the
desired population; initial participants identify further participants, who in turn identify others. Self-selection
sampling on the other hand occurs when individuals volunteer to take part in the research themselves, for
example in response to an advertisement or a request for participants.
Haphazard sampling
This is a type of sampling where sample cases are selected without an obvious relation to the research
questions.
• Documentary secondary data – these are data often used in research projects that also collect primary
data. Documentary secondary data include texts such as books, journals, magazine articles,
newspaper notices, correspondence, minutes of meetings, reports to shareholders, transcripts of
speeches and conversations, diaries, administrative and public records and web page texts.
• Survey-based secondary data – these are data collected for some other goal using a survey strategy,
usually questionnaires. They are available through compiled data tables or as a downloadable matrix of
raw data. These data can be collected by censuses (participation is obligatory), continuous/regular
surveys (censuses repeated over time) or ad hoc surveys (specific in their subject matter).
• Multiple sources – these data can be compiled from documentary or survey secondary data. Different
data sets have been combined to form a new data set prior to your accessing the data. Multiple-source
data can be compiled by extracting and combining particular variables from a couple of surveys to
provide ’longitudinal data’. Alternatively, data can be compiled for the same population over a period of
time using a series of snapshots to form ‘cohort studies’.
1. Establish the likely availability of secondary data (for example with the use of tertiary data)
2. Locating the data you require (for example via online databases or journals)
Disadvantages
Secondary data has been collected for a specific purpose different from your research questions and
objectives. The consequence of this is that the data might not be suitable for your research. Furthermore, it
may be difficult or expensive to gain access to the data.
use provide you with the information you need to answer the research questions and objectives. To ensure
that the data collected is reliable and valid it is useful to have a clear explanation of the techniques that
were used to collect the data.
Another way to test the suitability of secondary data is to determine to what extent the data covers the
population about which you need data. This is to be sure that unwanted data can be excluded and to ensure
that sufficient data remain for analyses to be undertaken once those unwanted data have been excluded.
When looking for data one needs to be sure that measurement bias is not present. Measurement bias may
occur for two reasons:
• Deliberate or intentional distortion of data – occurs when data are purposefully recorded inaccurately.
• Changes in the way the data are collected – occurs when the way the data are collected changes over
time, so that comparisons across periods become unreliable.
Research approaches other than observation usually distinguish two kinds of people participating in the
research: respondents (those who simply complete questionnaires) and participants (those who take
part in most types of qualitative research). These labels do not work with observation, since the researcher
is actually participating in the environment of the people he observes. Thus, in observational research those
who are being observed are called ‘informants’.
2. Complete observer – the researcher does not reveal the purpose of the activity to the members of the
group, nor does he take part in the activities of the group.
3. Observer-as-participant – researcher only observes (does not participate in) the activities of the group,
although his purpose is known to those whom he is studying.
4. Participant-as-observer – researchers takes part in the activities of the group and also reveals his
purpose as a researcher.
• Primary observations: data that explain what happened or what was said at the time (like in a diary).
• Secondary observations: statements by observers of what happened or what was said; these involve
the observer’s interpretations.
• Experiential data: data on perceptions and feelings as the researcher experiences the process he is
researching.
Data collection
Robson (2011) argues that the process of data collection always starts with ‘descriptive observation’. This is
when the researcher concentrates on observing and describing the physical setting, key informants and their
activities, particular events and their sequence and finally the attendant processes and emotions involved.
After this stage the researcher is able to write a ‘narrative account’ in which he develops a framework of
theory that will help him understand, and explain to others, what is going on in the research setting. In
order to do this it may be helpful to focus on particular events through ‘focused observation’.
Data analysis
Just like other qualitative data, those collected from participant observation will start to be analysed from
the minute one collects them. In other words, data collection and analysis occur simultaneously.
Observer bias occurs when the observer uses his or her own subjective view or disposition to interpret
happenings in the setting he or she observes. This may be the case when the observer is already a member
of the group he wishes to observe (an employee in an organisation, for example) and therefore fails to
interpret the setting objectively. When using covert observation (when the purpose of the research is not
known to the informants) it
is difficult to check with the informants whether your interpretations were valid. Using overt (informants are
aware of the purpose of the research) observation enables the researcher to ask the informants to read
some of the secondary observation that relate to them. This is referred to as ‘informant verification’.
The observer effect occurs when the presence of the observer affects the behaviour of those being
observed, which could lead to unreliable and invalid data. A solution to this is to act covertly or to achieve
minimal interaction (where the researcher melts into the background). Informants may also become
familiar with the researcher’s presence, minimising the observer effect; this is termed ‘habituation’.
• Structured interviews use questionnaires based on predetermined and standardised sets of questions
and are referred to as interviewer-administered questionnaires. They are used for the collection of
quantifiable data. Often referred to as quantitative research interviews.
• Semi structured interviews are non-standardised and are often referred to as qualitative research
interviews. The researcher has a list of themes and sometimes some key questions, but he does not use
them in a structured way. The researcher may use some questions in some interviews and other
questions in others.
• Unstructured interviews are informal. This is used to explore in depth a particular area in which the
researcher is interested. Therefore these interviews are termed in-depth interviews. An informant
interview is one in which the interviewee is free to talk about events, behaviour and beliefs regarding
the subject area and it is the interviewee who guides the conduct of the interview. A focused interview
is one in which the interviewer has greater control over the direction of the interview while in the
meantime allowing the interviewee’s opinions to emerge.
Interviews may be conducted on a one-to-one basis, between a researcher and a participant, but they may
also be conducted on a one-to-many basis, between a researcher and a group of people (see figure 10.1 on
page 375).
• The purpose of the research – when undertaking an exploratory or explanatory research it is likely that
one includes in-depth or semi-structured research interviews in his design. Interpretivists are also likely
to use these types of interviews since it allows them to let the interviewees explain their responses.
• The significance of establishing personal contact – sometimes participants prefer to reflect on events in
conversation rather than having to write anything down (as with filling out a questionnaire).
• The nature of the questions – the use of semi-structured and in-depth interviews is most advantageous
when there are a large number of questions to be answered, the questions are either complex or open-
ended, and the order and logic of questioning may need to be varied.
• Length of time required and the process’ completeness – some negotiation is always possible and the
interview can take place at a time that pleases the interviewee.
this issue, since it will always be present. In-depth and semi-structured interviews are inherently flexible,
and because different questions are used with different participants they would be difficult to repeat.
The next data quality issue is bias. There are three types of bias to consider:
1. Interviewer bias – this is when comments, tone or non-verbal behaviour of the interviewer lead to
bias in the way interviewees respond to the questions
2. Interviewee/response bias – this is when the interviewee’s perceptions of the interviewer, or a
reluctance to discuss sensitive topics, lead to a partial picture being given
3. Participation bias – this is bias that results from the nature of the individual or organisational
participants who agreed to be interviewed. Because the interview may be time-consuming, participants
might become tired and less willing to talk.
The issue of generalisability refers to the extent to which the results of a research project are applicable to
other settings. This is an issue with semi-structured and in-depth interview based research because it is
often based on small, unrepresentative samples.
Validity may also be an issue because it refers to the extent to which the researcher succeeded in gaining
access to a participant’s knowledge and experience, and has been able to infer meanings that the participant
intended.
• Level of knowledge: the researcher should always be familiar with the research topic and the
organisational or situational context in which the interview will take place. Interviewing participants
from different cultures requires the interviewer to gain some knowledge about those cultures in order to
successfully undertake the interviews and to avoid misinterpretations.
• Developing interview themes and providing the interviewee with information before the interview: this
provides the interviewees the opportunity to prepare themselves for the interview.
• The appropriateness of the intended interview location: it is possible that the location where one
conducts interviews will influence the data he collects. The location should be convenient for the
participants, it should be a place in which they feel comfortable and where the interview is unlikely to
be disturbed.
• Appropriateness of the researcher’s appearance: an interviewer’s appearance may affect the perception
of the interviewee, it may have an effect on his/her credibility or result in a failure to gain their
confidence.
• Nature of the first comments when the interview starts: especially when the interviewee has never met
the interviewer, the first few minutes of conversation will have a large impact on the outcome of the
interview – this is also related to the issues of credibility and the interviewee’s confidence.
• Approach to questioning: the questions need to be phrased clearly so that the interviewee understands
them, and the interviewer should use a neutral tone of voice. The use of open questions should help to
avoid bias, and questions that seek to lead the interviewee should be avoided. One approach to
questioning is the ‘critical incident technique’ in which participants are asked to give a detailed
description of a critical incident relevant to the research question.
• Appropriate use of different types of questions: when conducting in-depth or semi-structured interviews
one should consider formulating appropriate questions. There are various types of questions:
o Open questions – these allow interviewees to define and describe a situation or event.
o Specific/Closed questions – these may be used as opening questions when one commences
questioning about a particular interview theme.
o Probing questions – these types of questions can be used to provide for further exploration of an
interviewee’s response. They are used to dive deeper into the research topic.
• Nature and impact of the interviewer’s behaviour during the interview: this also relates to the issues of
credibility and the interviewee’s confidence.
• Scope to summarise and test understanding: the interviewer may test his understanding by
summarising an explanation given by the interviewee.
• Dealing with difficult participants: always remain polite with difficult participants and do not show
irritation.
11.1 Questionnaires
Questionnaires refer to all methods of data collection in which each person is asked to respond to the same
set of questions in a predetermined order. The design of a questionnaire will affect the response rate and
reliability and validity of the collected data. These can be maximised by:
• Pilot testing
o posted via mail to respondents who post them back: postal/mail questionnaires
o or delivered by hand to each respondent after which they are collected later: delivery and collection
questionnaires
• Interviewer-completed questionnaires are recorded by the interviewer himself on the basis of each
respondent’s answers.
o Telephone questionnaires
o Structured interviews: those questionnaires where the interviewer physically meets respondents and
asks the questions face-to-face.
Sometimes respondents’ answers reduce your data’s reliability simply because they have insufficient
knowledge or experience and purposefully guess at the answer. This is known as ‘uninformed response’.
Respondents to self-completed questionnaires are also sometimes likely to discuss their answers with
others, thereby contaminating their responses.
Types of variables
There are three types of data variables that can be collected with a questionnaire:
• Opinion variables: these variables record how respondents feel about something or what they believe is
true or false.
• Behaviour variables: these data include what people did in the past, do now or will do in the future.
• Attribute variables: these contain data about the respondents’ characteristics. Attributes are things
respondents possess.
To be sure that the data collected will lead to answers to the research questions and achievement of the
objectives, it can be helpful to create a data requirements table (see table 11.2 on page 425). Investigative
questions are those that need to be answered in order to successfully address each research question and
to meet each objective.
Reliability testing
Mitchell (1996) outlined three approaches to assess reliability:
• Test re-test – this is done by comparing data collected with those from the same questionnaire under
conditions as near equivalent as possible. In other words, conducting the same questionnaire twice and
testing whether the results are similar.
• Internal consistency – this involves correlating the responses to questions in the questionnaire with each
other, thus measuring whether the responses are consistent across a subgroup of questions or all of the
questions in a questionnaire. ‘Cronbach’s alpha’ is a statistic often used to measure the consistency of
responses to a set of questions using a particular scale.
• Alternative form – this is comparing the responses to alternative forms of the same questions or groups
of questions.
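As an illustration, Cronbach’s alpha can be computed directly from its definition, alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal sketch with hypothetical Likert-scale data:

```python
import statistics

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per questionnaire item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(statistics.variance(scores) for scores in item_scores)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Hypothetical data: three Likert items answered by five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.92 -- the items measure the same construct consistently
```

Values of alpha of 0.7 or above are conventionally taken to indicate acceptable internal consistency.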
Designing questions
To design individual questions for a questionnaire, researchers could adopt or adapt questions used in other
questionnaires, or they could develop their own questions. The first two options are helpful when the
researcher wishes to compare his findings with another study, and they help ensure the reliability of the
questions since these have already been tested.
Just like in other types of surveys, questionnaires may use open, closed or forced-choice questions (where
the respondent has to choose from a given set of answers). Other kinds of questions are:
• List question - where the respondent is given a list of responses, any of which he could choose (by
ticking the box for example)
• Category questions - these allow the respondent to choose one category only
• Ranking questions – ask the respondent to place things in rank order (for example, ranking certain
issues according to their degree of importance).
• Rating questions – these are used to collect opinion data. They use the Likert-style rating in which a
respondent has to express how strongly he or she agrees or disagrees with a statement. A variation of
this is the semantic differential rating question, where the respondent has to rate something on a series
of bipolar scales (fast – slow; good-bad etc.).
• Quantity/self-coded questions – questions whose answers are numbers giving the amount of a
characteristic.
• Matrix questions – questions that enable one to record the responses to two or more similar questions
at the same time.
Types of data
• Descriptive/nominal data – these data simply count the number of occurrences in each category of
a variable. When a variable is divided into two categories (female/male, for example), the data are
known as dichotomous data.
• Ranked/ordinal data – these are a more precise form of categorical data. Examples of
ranked data are the answers to rating or scale questions.
Alternatively, numerical data are those whose values are numerically measured or counted as quantities
(Berman 2008). Numerical data are therefore more precise than categorical data because each data value
can be assigned a position on a numerical scale. Numerical data can be subdivided in two ways: into
interval and ratio data, or into continuous and discrete data. With interval data one can state the difference
(interval) between any two data values of a variable, whereas with ratio data one can also calculate the relative
difference (ratio) between any two data values. Continuous data can take any value (provided they are
measured accurately enough), while discrete data can be measured precisely, each value being one of a
finite set (often whole numbers/integers).
After determining the types of data to be collected, the researcher can start to enter the data into
data processing software (e.g. SPSS or Excel). To do this the data need to be coded using
numerical codes, which enables the researcher to enter the data quickly and with fewer errors. Once entered,
the data should be checked for errors.
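The coding-before-entry step can be sketched as follows. This is a minimal illustration only: the codebook values and responses are invented, not taken from the book.

```python
# Codebook mapping each textual response to a numerical code (invented
# values for a five-point Likert-style question):
codebook = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither": 3,
    "agree": 4,
    "strongly agree": 5,
}

raw_responses = ["agree", "strongly agree", "disagree", "agree"]

# Code the responses numerically for quick, low-error data entry:
coded = [codebook[r] for r in raw_responses]

# A simple error check after entry: every value must be a legal code.
legal = all(c in codebook.values() for c in coded)
```

Using a lookup like this means an illegal value (a typo such as 7 on a 1–5 scale) is caught immediately rather than silently distorting the analysis.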
Exploring variables
The easiest way of summarising the data is by using tables. However, tables do not give visual
prominence to the highest or lowest values, so diagrams may be a better option for summarising the
data. One way to present data is a bar chart, where the height or length of each bar represents
the frequency of occurrence. Bar charts are similar to histograms, another type of data presentation, where
the area of each bar represents the frequency of occurrence and where the continuous nature of the data is
emphasised by the absence of gaps between bars. Finally, a pictogram, also similar to a bar chart, shows a
series of pictures chosen to represent the data. Other kinds of data presentation are:
• Pie chart – a diagram divided into proportional segments according to the share each value has
of the total.
Shapes of diagrams
If a diagram shows bunching to the left and a long tail to the right (figure 12.3 on page 291), the data
are 'positively skewed'. If it is the other way around, the data are 'negatively skewed'. When the data
are distributed equally on either side of the highest frequency, they are symmetrically distributed.
A bell-shaped curve is called a normal distribution. With the indicator 'kurtosis' one can compare a diagram's
pointedness or flatness with that of the normal distribution. When a distribution is flatter, it is called
platykurtic and the kurtosis value is negative. When the distribution is more peaked, it is called leptokurtic
and the kurtosis value is positive.
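The sign conventions for skewness and kurtosis above can be checked numerically. This is a minimal sketch using the simple moment-based formulas on invented data; it is an illustration, not the book's method.

```python
def moments(data):
    """Return (sample skewness, excess kurtosis) via raw moments."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5     # > 0: bunching left, long right tail
    kurt = m4 / m2 ** 2 - 3   # > 0: leptokurtic; < 0: platykurtic
    return skew, kurt

# Invented data bunched to the left with a long right tail:
right_tailed = [1, 2, 2, 3, 3, 3, 4, 12]
# Invented evenly spread (flat, platykurtic) data:
uniform_like = [1, 2, 3, 4, 5, 6]

skew, _ = moments(right_tailed)   # positive: 'positively skewed'
_, kurt = moments(uniform_like)   # negative: flatter than normal
```

Subtracting 3 from the fourth standardised moment gives *excess* kurtosis, so a perfect normal distribution scores 0, matching the sign convention described above.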
Comparing variables
Contingency tables (cross-tabulations) are one approach that can be used to examine the interdependence
between variables. Other approaches are:
• Percentage component bar chart – this is used to compare proportions between variables.
• Comparative proportional pie chart – this is used to compare proportions of each category or value as
well as the totals between variables.
• Scatter graphs or scatter plots – these are often used to explore possible relationships
between ranked and numerical data variables by plotting one variable against another.
The central tendency (the middle of a distribution) can be described by measures such as:
• The median – the middle value or mid-point after the data have been ranked
The dispersion (how the data are distributed around the central tendency) can be described by:
• Inter-quartile range – the difference within the middle 50 per cent of values
• Standard deviation – the extent to which values differ from the mean
• Range – the difference between the lowest and the highest values
• Coefficient of variation – used to compare the relative spread of data between distributions of different
magnitudes, for example hundreds of tons with billions of tons (calculated by dividing the standard
deviation by the mean and multiplying by 100)
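The four dispersion measures above can be computed with the standard library's `statistics` module. A minimal sketch on invented data (note that `statistics.quantiles` defaults to the "exclusive" quartile method, so other software may give slightly different quartiles):

```python
import statistics

data = [12, 15, 15, 17, 18, 20, 22, 25, 28, 30]   # invented sample

rng = max(data) - min(data)                  # range
q1, _, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1                                # inter-quartile range
sd = statistics.stdev(data)                  # sample standard deviation
cv = sd / statistics.mean(data) * 100        # coefficient of variation (%)
```

Because the coefficient of variation divides out the mean, it lets you compare the relative spread of, say, a distribution measured in hundreds of tons with one measured in billions of tons, exactly as described above.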
When testing for association between two variables using a contingency table (for example with a
chi-square test), the following conditions should hold:
• The categories of the contingency table are mutually exclusive: each observation falls into one category
only
• No more than 25 per cent of the cells may have expected values of less than 5. When the table consists
of two rows and two columns, no expected values may be less than 10.
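The expected values referred to in those conditions can be computed directly from the row and column totals. A minimal sketch with invented observed counts (a hypothetical 2 x 2 table):

```python
# Invented observed counts, e.g. rows = gender, columns = yes/no:
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count for each cell = row total * column total / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# 2 x 2 table: the rule above requires no expected value below 10.
rule_holds = all(e >= 10 for row in expected for e in row)
```

If `rule_holds` were False, the chi-square approximation would be unreliable for this table and categories would need to be combined or a different test used.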
• Correlation: a change in one variable is associated with a change in another variable, but it is not
clear which variable has caused the other to change
• Cause-and-effect relationship: a change in one or more variables causes a change in another
variable
The correlation coefficient quantifies the strength of a linear relationship between two ranked or numerical
variables as a number between +1 and -1. A value of +1 indicates perfect positive correlation: the
two variables are precisely related, and when one increases, the other increases as well. A value of -1
indicates perfect negative correlation: the two variables are precisely related, but when one increases
the other decreases.
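The +1 / -1 interpretation can be made concrete with Pearson's correlation coefficient computed from first principles. The variable names and data below are invented for illustration:

```python
def pearson(xs, ys):
    """Pearson's correlation coefficient between two numerical variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: output rises perfectly linearly with hours worked,
# so the coefficient is +1.
hours_worked = [2, 4, 6, 8]
units_output = [5, 9, 13, 17]
r = pearson(hours_worked, units_output)
```

A coefficient near 0 would indicate no linear relationship; note that even a coefficient of exactly +1 says nothing about which variable, if either, causes the other.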
Qualitative research interviews are normally audio-recorded and then transcribed, which means that they
are reproduced as a written account (usually using a word processor) using the actual words. Along with
the actual words, a researcher also has to note the tone in which they were said as well as the non-
verbal communication of the interviewee. The researcher should also make sure that the transcription is
accurate by correcting any errors; this is known as data cleaning.
• Interim summaries – these can be made during the analysis and outline what you have found so far,
whether you trust your findings and what you need to do to improve the quality.
• Transcript summaries – these compress long statements into shorter ones in which the key element of
what was said is rephrased in a couple of words.
• Document summary – describes the goal of a document and lists a few key points.
• Self-memos – these record ideas that occur to the researcher about any aspect of the research
• Reflective diary – in this the researcher writes his reflections about his experiences of undertaking the
research, what he has learnt, and how he will seek to apply his learning as the research progresses.
1. Categorising data: creating categories into which the data will be divided
2. Unitising data: attaching units of data to the appropriate categories that have been devised
3. Developing testable propositions: to conclude that a relationship exists, its existence needs to be
tested by developing testable propositions. This can be done by seeking alternative explanations and
negative examples.
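The first two steps, categorising and unitising, can be sketched very simply. The categories and interview fragments below are invented, purely to show the mechanics of attaching units of data to devised categories:

```python
# Step 1 - categorising: devise categories into which data will be divided.
categories = {"price": [], "service": []}

# Units of data: invented interview fragments, each tagged by the analyst
# with the category it belongs to.
units = [
    ("price", "far too expensive for what you get"),
    ("service", "the staff were very helpful"),
    ("price", "cheaper than the competitor down the road"),
]

# Step 2 - unitising: attach each unit to its category.
for category, fragment in units:
    categories[category].append(fragment)
```

In practice this tagging is usually done with qualitative analysis software or margin codes on transcripts; the point is only that each unit of data ends up retrievable under the category it supports.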
According to Miles and Huberman (1994) the process of analysis consists of three concurrent processes: data
reduction, data display, and drawing conclusions. By data reduction they mean summarising and
simplifying the collected data and/or selectively focusing on specific parts of these data. Data display
focuses on organising and assembling the data into summary diagrams or other visual displays. All data
that are not summarised or reduced are called 'extended text'. Data displays allow the researcher to make
comparisons between aspects of the data and to explore relationships, key themes, patterns and trends.
• Focused coding – reanalysing the data in order to establish which of the initial codes can be used to
categorise larger units of data
• Selective coding – the integration of categories around a core category in order to generate a
theory.
When conducting grounded theory research it can be helpful to use theoretical sampling to find a sample.
Theoretical sampling means choosing samples following the analysis of initial data in order to further develop
analytical categories and concepts. To do this, it is important to constantly compare the collected data with
the categories and concepts being used. Theoretical sampling is used until theoretical saturation is reached.
Template analysis
A template is a list of the codes or categories representing the themes discovered in the collected data. This
type of analysis uses both a deductive and an inductive approach to analyse the codes. Unlike
Grounded Theory, Template Analysis permits the prior specification of codes to analyse data, while Grounded
Theory tries to keep everything as purely inductive as possible; Grounded Theory is also more structured
than Template Analysis. Just as in Grounded Theory, data are both coded and analysed to discover themes,
patterns and relationships. The template approach enables the researcher to display the codes and
categories hierarchically.
Analytic Induction
This is the process of collecting and analysing strategically selected cases in order to empirically establish
the causes of a particular phenomenon. An explanation is developed by extensively examining the process
being explored, through repeated cycles of developing and testing propositions. Unlike
Grounded Theory, this approach draws more on existing knowledge and theory than on participants' data.
Narrative Analysis
Using this approach, a researcher collects data through narratives such as experiences of the participants.
Narratives cannot be easily fragmented since the essence of the story might be lost in the process. Instead,
the narratives need to be either left intact, or they need to be ‘re-storied’, into new narratives in a more
coherent form.
Discourse Analysis
This is a general term that covers a very wide variety of approaches to the analysis of language. It also
explores how and why language is used by individuals in particular social contexts. Put
differently, this approach explores how language (discourse) in the form of speech and text
reproduces or changes the social world (Phillips 2002). Researchers who use this approach often adopt a
subjectivist ontology.
Explanation building
This approach involves an attempt to build an explanation while collecting and analysing the data, rather
than testing a predicted explanation at the outset. It is similar to Grounded Theory, but explanation building
is designed to test a theoretical proposition, whereas Grounded Theory is designed to construct a theory
inductively.