1.1 Why Study Research?: Applied Research: has a practical problem-solving emphasis, conducted in order to reveal answers
Air Swiss example: searching for new partners to team up with. Dilemma →
analysis of six possible partners on specific factors → a possible
managerial proposition → business research
- How did recent developments (which factors) affect business research process?
- Descriptive: To discover answers to what, who, when, where and sometimes how
questions. Describes or defines a certain subject.
Involves collection of data and examination of distribution of a single event or
characteristic (research variable), or 2 or more variables
- Explanatory: To answer why and how questions. Explains reasons why something
happens.
Correlational study for example, where the relation between 2 or more variables is
measured.
The use of hypotheses to account for the forces that caused a certain phenomenon.
Applied research: has a practical problem-solving emphasis; conducted in order to reveal
answers to specific questions.
Pure (basic) research: develops new ideas that do not answer specific
problems/questions.
For example, developing an algorithm for performing certain performance checks is pure
research. Applied research would be applying this algorithm to provide answers to a
question.
Good research is, basically, purposeful, with a clearly defined focus and plausible goals, defensible, ethical and
replicable procedures, and evidence of objectivity. Reporting of findings should be
complete and honest. Appropriate techniques should be used and conclusions should be
justified by the findings. Report in an academic and professional tone and language.
1.5 Research philosophies; positivism and interpretivism (phenomenology)
Positivism: the world is viewed as external and objective (fact); the researcher is independent and
free of value influence. Observations are assumed to be objective and quantitative. Three
categories: true, false or meaningless (neither of the two); usually large samples.
For example, the KPIs of a company to assess performance → a large sample size is needed for a
representative conclusion! (quantitative research)
Realism: What we perceive is really real, and not in our minds or an illusion.
Induction: drawing a conclusion from one or more facts. The conclusion explains the facts,
and the facts support the conclusion. The task of research is largely to confirm or reject
hypotheses and to design methods to discover and measure other evidence.
Induction is used to form hypotheses. For example, the failure of a marketing campaign can
be explained by poor execution of the campaign, but also by a hurricane in the city.
Combining induction and deduction: Induction occurs when we observe a fact and ask “why
is this?” In order to answer this question, we come up with a hypothesis, deduction is the
process where we find facts that support the hypothesis.
Empirical data → originates from or is based on observation or experience, rather than on
theory.
Height, width, depth, profit, running, walking, skipping, crawling or hopping are all concepts.
They symbolize a conception of properties.
Attitudes are abstract, but we want to measure them using carefully selected concepts
- Depends on how clearly we conceptualize
- How well others understand it
Abstract concepts are called constructs: Personality, presentation, language skills are
constructs. Spelling, vocabulary, keyboard speed and manuscript errors are concepts to
measure the constructs.
Construct refers to an image or idea specifically invented for a given research or theory-
building purpose.
Operational definition: a concept that has been operationalized so that it measures what it needs
to measure. For example, class status in 'hours of credit'.
Variable: another word for construct, but it usually refers to a numerical value
- Dichotomous variable: 0 or 1 for example employed/unemployed
- Continuous variable: For example, temperature, it can take values within a certain
range
- Independent variable: Leadership style
- Dependent variable: Job satisfaction (it is dependent on the leadership style)
- Moderating variable: is a second independent variable that is believed to have a
significant contributory or contingent effect on the original IV-DV relationship.
It is normally hypothesized that the IV causes the DV to occur.
“Training (IV) will lead to higher productivity (DV), especially among young workers (MV)”
“Training (IV) will lead to higher productivity (DV) by increasing the skill level (intervening variable, IVV)”
- Control variables (CV): To ensure results will not be biased by for example sunshine in
above mentioned example, usually doesn’t have significant effect on the research.
Usually in research these are: Age, gender, ethnicity, place of living, place of
establishment.
“Training (IV) will lead to higher productivity (DV), especially among younger workers (MV), when the
sun is shining (CV), by increasing the skill level (IVV).”
A Proposition is a statement about a concept that may be judged as true or false. When a
proposition is formulated for empirical testing, we call it a hypothesis
- Descriptive hypothesis: These are propositions that typically state the existence, size,
form or distribution of a variable.
Researchers often use a research question instead of a descriptive hypothesis: What is the
employment rate in Denmark?
Foreign (variable) cars are perceived by Italians (case) to be of better quality (variable) than
domestic cars.
- Correlational hypothesis: states that variables occur together in a specific manner without
one causing the other.
- Explanatory (causal) hypothesis: one variable leads to a change in the other (the IV leads to a
change in the DV).
1.9 Theory
Theory is a set of systematically interrelated concepts, definitions and propositions that are
advanced to explain and predict phenomena (facts).
Research dilemma/management dilemma: Triggers the need for investigating how the
dilemma can be solved.
Academic teamwork: Make sure to assign a team leader, who is the best in managing tasks
and groups.
The main research dilemma has to be divided in other, more specific questions that can be
measured.
When the researcher has a clear statement of the dilemma, he or she must work with the
manager to translate it into a research question.
Fine-tuning the research question: after exploration and literature review, the project begins to
crystallize → questions come out of the exploration, or several questions have already been
answered.
Investigative questions: questions that reveal the specific pieces of information one
needs to know in order to answer the research question (sub-questions).
Measurement questions:
- Pre-designed or pre-tested: formulated and pre-designed/tested by other
researchers (recorded in literature) → enhanced validity. Basically the questions we
ask our respondents (Paashuis's book with questions, etc.)
- Custom-designed questions: Tailored to fit the investigative questions and
information required (new questionnaire on specific idea/topic)
Unresearchable questions: some questions are researchable and some are not. A question is
researchable when a specific type of data collection can provide answers; for some questions this is not
possible.
Research design: The blueprint of fulfilling objectives and answering questions. This includes
all specific methods to measure data and to eventually answer questions. For example, case
studies, quantitative methods, qualitative methods, scaled – open ended, interviews etc
Favoured-technique syndrome: some researchers are method-bound. They favour one
specific type of research above others, for example because they are experienced in that
specific method or simply like it. This might blind them to other, more convenient and
'better' options for conducting research.
Sample design: the researcher must determine whom to interview and how many people to
include. A sample is a part of the population, carefully selected to represent the population.
Pilot testing: conducted to detect weaknesses in the data collection methods. Often skipped
by researchers, but skipping it means the method goes unevaluated → resulting in higher costs.
Data collection:
Data: facts presented to the researcher through the environment.
- Abstractness: Some data is more metaphorical than real, for example measurement
of an effect
- Verifiability: When sensory experiences consistently produce the same result, our
data is said to be trustworthy, since it can be verified.
- Elusiveness: (difficult to capture) time-bound nature and speed; for example, data
from the '80s is no longer relevant today.
- Closeness to the phenomenon: reflects truthfulness (closeness) to the
phenomenon/variable
Company database strip-mining: the amount of data already available within an
organisation might distract managers from doing further research. This information or
method will rarely answer all management questions related to a particular management dilemma.
Research evaluation:
- Ex post facto: after the event, for example the evaluation of a marketing campaign.
Often too late to base management decisions on.
- Prior or interim evaluations: Evaluations in advance or during the fact (research or
event)
Content:
- Statement of research questions
- Brief description of research methodology
Purpose:
- Present the management and research questions and relate their importance
- Discusses the research efforts of others who have worked on related management
questions
- Suggests the data necessary in solving the questions and how data will be gathered
and interpreted.
Sponsor uses:
All research has a sponsor in one way or another; a proposal makes it easy for the sponsor to evaluate whether the
research goals have been achieved. Proposals are usually submitted in response to a request
for a bid or request for proposal (RFP).
Research benefits:
- Gives guidance for the researchers and for the sponsors
- Gives the sponsor insight into the research questions, so that he can evaluate whether the
problem is covered.
- Time and costs are covered for both parties.
Types of proposals:
Internal proposal: within a company; usually small and solicited (RFP, request for
proposal)
External proposal: can be solicited or unsolicited. Likely to compete with other researchers to 'get
the job', e.g. a consulting firm helping a large financial institution with a certain problem.
Critical path method (CPM): when a research project is large and complex, make a CPM chart that
includes all steps and methods used in the research.
Critical review: For example, book reviews on academic books where scientists leave their
assessment about a specific book. The objective is to assess the quality of the text and to
provide a short summary of the content.
- Most journals use ‘peer reviews’ to decide which manuscripts they publish.
The editor will ask three scientists who have published literature on the same topic to
leave their critical review, in order to evaluate and confirm the statements. It also
gives the writer an opportunity to improve his or her manuscript.
- Reviewers do not only criticize; they are also very constructive and offer
solutions.
Literature sources
- Primary sources: full-text publications of theoretical and empirical (based on observation and
experience) studies (journals, reports, pre-publications, academic books)
- Secondary sources: combinations of primary sources, made to produce a summary or
a new literature review (papers, bibliographies, directories)
Chapter 4 Ethics
Deontology: the ends never justify means that are questionable on ethical grounds.
Teleology: the morality of the means is judged by the ends it serves. The benefits of
a research study are weighed against the costs of harming the people involved.
The benefits of a research study should be discussed, for example: helping to tackle a certain medical
problem.
Deception: when participants are told only part of the truth and are misled.
This is often done to:
- prevent biasing participants before the experiment
- protect the confidentiality of the sponsor
Informed consent: Fully disclosing the procedures of the research before requesting
permission (agreement/consent)
The sponsor (the commissioning party) has the right to receive research that has been conducted
ethically. Confidentiality regarding sponsors:
Epistemology → the theory of knowledge, concerned with the question of how one acquires
knowledge. The choice for quantitative or qualitative research is an epistemological question.
The choice of which method you are going to use depends on the following questions:
- What is the research problem?
- What is the objective, what kind of outcome are you looking for?
- What kind of information do you want to obtain and what do you already have access
to?
- Are you attempting to conduct a descriptive, explorative, causal or predictive study?
The quality of research does not depend on the method you are using but depends on the
quality of its design and how well it is conducted.
Research design has many definitions, but together they all imply the following essentials
of a research design. The design basically shows what the researcher will perform to gain the
final answer to the research question. From hypothesis to final analysis.
- Activity and time-based plan
- Always based on the research question
- Guides selection of sources and types of information
- Framework for specifying the relationship among study’s variables
- Outlines procedures for every activity.
The design provides answers for questions such as:
- What kind of answers is the study looking for and which methods will be applied to
find them?
- What techniques will be used to gather data?
- What kind of sampling will be used?
- How will time and cost constraints be dealt with?
Banks and government agencies try to predict stock markets and financial investments; think
of how many times they get it wrong.
Interrogation/communication study: the researcher questions the subjects and collects their
responses by personal or impersonal means.
- Interview or telephone conversation, etc.
Archival sources → secondary or primary data that is already available to the researcher.
Qualitative and quantitative studies can rely on both methods of data collection.
Control of variables:
Experimentation provides the most powerful support for a hypothesis of causation;
researchers try to control variables and/or manipulate them.
Ex post facto → the researcher has no control over the variables in the sense of being able
to manipulate them; they can only report what has happened or is happening.
- Researchers should not try to manipulate this design, since this will introduce bias into
the experiment!
Time dimension
Cross-sectional studies → are carried out once and represent a snapshot at a point in time.
Longitudinal studies → are repeated over an extended period of time. They pose more
risk of bias, since with panels the same people are questioned over a period of time.
A longitudinal study has the advantage that you can measure different variables over a longer
period of time and see whether the outcome has changed → good for causal studies.
Usually studies are longitudinal, since research is conducted over a span of time.
Panel in longitudinal studies → the researcher may study the same people over time.
- In marketing, panels are set up to report consumption data on a variety of products.
Cohort groups in longitudinal studies → use different subjects for each sequenced
measurement.
- For example, the service industry in assessing service levels.
Research environment:
designs differ in whether they occur under actual environmental conditions
(field conditions → at home, at workplaces or in shops people visit), under staged or manipulated
conditions (laboratory), or under artificial conditions (simulations, role playing).
Exploratory research is particularly useful when researchers lack a clear idea of the
problem. The field they work in may be completely new to them, so that
exploratory research is needed to provide the necessary information.
The objective is the development of hypotheses and research questions.
2. Experience surveys
When we conduct experience surveys, we would like to know their ideas about important
issues or aspects of the subject, and discover what is important across the subjects’ range of
knowledge. Questions that emerge during the interview:
- What is being done?
- What has been tried in the past without success and with success?
- How have things changed?
- What are the change producing elements?
- Who is involved in decisions and what role does each person play?
3. Focus groups
Often used for a new product or product concept. The output of the session is a list of ideas
and behavioural observations, with recommendations by the moderator.
Focus groups can also be used to refine research questions and to evaluate methods and designs.
They are a useful method for pre-testing questionnaires, experiments and so on, because a focus
group often contains people who could be respondents.
Causal studies: To seek the effect that a variable has on another, or why certain outcomes
are obtained.
Method of agreement:
Causal relationships
Randomization: the basic method by which equivalence between the experimental and control
group is established; the groups should be equivalent. It is best to assign subjects to either the experimental
or control group at random until both are filled.
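The random-assignment step above can be sketched in Python; the subject list and the even split are hypothetical, and a fixed seed is used only so the split is reproducible:

```python
import random

def randomize(subjects, seed=0):
    """Randomly split subjects into equal-sized experimental and control groups."""
    rng = random.Random(seed)      # fixed seed: illustrative, for reproducibility only
    shuffled = list(subjects)
    rng.shuffle(shuffled)          # random order removes systematic differences
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

experimental, control = randomize(range(20))
```

Because the order is random, neither group is systematically older, more skilled, etc., which is exactly the equivalence randomization aims for.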
Matching: we need to be sure that the subjects in each group are 'matched' on the same
classifications/characteristics. For example, if one group contains five people over 50 and the other
only students under 20, then matching has to be done to ensure both groups have the same
composition.
When this has been met, a sample can be made of the firms.
If you use non-innovative as well as innovative firms, you can establish a random sample
between the subjects.
Unit of analysis Describes the level at which the research is performed and which objects
are researched. People or individuals are a common unit of analysis.
- Thinking carefully about a study’s unit of analysis can prevent difficulties and error
that may occur later in the problem definition and research design.
Census:
- Feasible when the population is small
- Necessary when the elements are quite different from each other
Representation basis:
- Probability sampling: each population element is given a known non-zero chance of selection
(random)
- Non-probability sampling: non-random and subjective; each member does not
have a known non-zero chance of being selected.
Element selection:
- Unrestricted: simple random sample → each population element has a known and
equal chance of being selected for the sample → accomplished by computer
software or a table of random numbers
- Restricted: All other forms of sampling
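The unrestricted (simple random) sample above can be sketched in Python; the sampling frame of 100 numbered elements and the sample size of 10 are hypothetical:

```python
import random

population = list(range(1, 101))   # hypothetical sampling frame of 100 elements
rng = random.Random(42)            # seeded only so the draw is reproducible
sample = rng.sample(population, 10)  # each element has an equal chance of selection
```

`random.sample` draws without replacement, so no element can appear twice, which matches the idea of selecting distinct population elements.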
Interval or ratio scales → use the sample mean and sample standard deviation to estimate the
population mean and standard deviation.
Researchers can never be 100 percent sure the sample measured reflects its population,
therefore precision is measured by:
- Interval range in which they expect to find parameter estimate
- Degree of confidence they wish to achieve
A more efficient sample in a statistical sense is one that provides given precision for a
smaller sample size (standard error of the mean)
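The precision idea above (an interval range plus a degree of confidence, built on the standard error of the mean) can be sketched with Python's standard library; the data values are hypothetical and 1.96 is the usual multiplier for roughly 95% confidence:

```python
import math
import statistics

data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]  # hypothetical sample
mean = statistics.mean(data)
sd = statistics.stdev(data)            # sample standard deviation
se = sd / math.sqrt(len(data))         # standard error of the mean
low, high = mean - 1.96 * se, mean + 1.96 * se  # ~95% confidence interval
```

A larger sample shrinks the standard error, so the same confidence level yields a narrower interval, i.e. greater precision.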
4 alternative approaches
- Systematic sampling
Pick every k-th element of the population.
Identify the total number of elements, identify the sampling interval (k = population size /
desired sample size), identify a random starting position, and draw the sample by skipping ahead and choosing every k-th element.
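The steps above can be sketched in Python; the frame of 100 elements, the sample size of 10 and the starting position 3 are all hypothetical:

```python
def systematic_sample(frame, n, start=0):
    """Pick every k-th element, where k = population size // desired sample size."""
    k = len(frame) // n                 # sampling interval
    return [frame[(start + i * k) % len(frame)] for i in range(n)]

sample = systematic_sample(list(range(100)), 10, start=3)
# with k = 10 and start = 3 this selects elements 3, 13, 23, ..., 93
```

In practice the starting position would be chosen at random within the first interval; it is fixed here only to keep the example deterministic.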
- Stratified sampling
Segment the population into different strata or sub-groups; university students can for
example be divided by class level, school or specialism. Afterwards a simple random
sample can be taken from each stratum.
Proportionate → each stratum is properly represented, so that the sample drawn from it is
proportionate to the stratum's share of the total population.
- It has higher statistical efficiency than a simple random sample
- Easier to carry out than other stratifying methods
- Provides a self-weighting sample; the population mean or proportion can be
estimated simply by calculating the mean or proportion of all sample cases.
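Proportionate allocation as described above can be sketched in Python; the two strata, their 60%/40% split and the sample size of 10 are hypothetical:

```python
import random

# hypothetical strata: class level -> list of student ids
strata = {
    "freshman": list(range(0, 60)),    # 60% of the population
    "senior":   list(range(60, 100)),  # 40% of the population
}
rng = random.Random(7)                 # seeded only for reproducibility
n = 10
total = sum(len(members) for members in strata.values())

sample = []
for name, members in strata.items():
    share = round(n * len(members) / total)  # proportionate allocation per stratum
    sample += rng.sample(members, share)     # SRS within the stratum
# sample now holds 6 freshmen and 4 seniors, mirroring the population shares
```

Because each stratum contributes in proportion to its population share, the overall sample is self-weighting: the plain sample mean already estimates the population mean.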
Disproportionate → considers how a sample will be allocated among strata: take a larger
sample if the stratum is larger than other strata.
- When differences among the variances of strata are large, or when sampling costs differ,
disproportionate sampling is desirable.
- Cluster sampling
With cluster sampling we divide the population into many small sub-groups based on several
criteria. The sub-groups are homogeneous among themselves but contain heterogeneous
elements. Cluster sampling provides an unbiased estimate of population parameters.
Two conditions foster the use of cluster sampling:
The need for economic efficiency, it’s cheaper than simple random sampling.
The frequent unavailability of a practical sampling frame for individual elements.
Statistical efficiency is lower for clusters, since the groups are usually internally homogeneous; with
simple random samples the selections are heterogeneous. But the economic efficiency is
usually great enough to overcome this (it's cheaper).
The criterion is that the combined relative, economic and statistical efficiency has to
outperform simple random sampling.
- Area sampling
Populations that can be identified with some geographic areas, it’s the most important
form of cluster sampling. Overcomes both problems of high sampling cost and
unavailability of a practical sampling frame.
Designing cluster samples In order to design cluster samples we must answer several
questions:
- How homogeneous are the clusters? When clusters are internally homogeneous, this
contributes to low statistical efficiency. One can improve this by constructing
clusters that increase within-cluster variance (heterogeneity).
- Shall we seek equal or unequal clusters (in size)? The sample means of clusters are
unbiased estimates of the population mean. This is more likely when clusters are
equal. To ensure this, the following approaches can be used:
Combine small clusters, split large clusters until all have an average size.
Stratify clusters by size and choose from each stratum.
Stratify clusters by size and then sub-sample using varying sampling fractions to
secure an overall sampling ratio.
- How large a cluster shall we take? Not clear which size is superior, depends on the
efficiency, variances of means and costs.
- Shall we use a single-stage or multi-stage cluster: For most area sampling, the
tendency is to use multi stage clusters.
- How large a sample is needed: Depends on the cluster design.
Simple cluster sampling: Single-stage samples with equal size clusters, the only
difference between a simple cluster sampling and a simple random sample is the size
of the cluster.
For example: I want to interview a specific neighbourhood. In this neighbourhood there are 7
streets. Each street can be treated as a cluster, so 7 clusters. These streets are somewhat
heterogeneous in terms of variables (age, gender, social status, income). Then I use simple
random sampling (SRS) to randomly select the cluster I am going to use for my
interviews. If SRS yields, say, number 4, then cluster (street) number 4 and its
elements will be used for the interviews!
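The neighbourhood example above can be sketched in Python; the street names, the five residents per street and the seed are all hypothetical:

```python
import random

# hypothetical neighbourhood: 7 streets, each street is one cluster of residents
clusters = {
    f"street_{i}": [f"resident_{i}_{j}" for j in range(5)]
    for i in range(1, 8)
}

rng = random.Random(4)                 # seeded only for reproducibility
chosen = rng.choice(sorted(clusters))  # single-stage: SRS over clusters, not people
interviewees = clusters[chosen]        # every element of the chosen cluster is used
```

Note that randomness operates on whole clusters; once a street is drawn, all of its residents enter the sample, which is what distinguishes this from a simple random sample of individuals.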
Multistage → keep dividing your clusters into smaller groups by using SRS.
Non-probability sampling: does not operate from statistical theory; it often produces selection bias (since it is non-
random) and non-representative samples (which say nothing about the total
population).
Convenience sampling → researchers have the freedom to choose whomever they can find.
It does not guarantee precision, but might be useful (e.g. evaluation of a department).
Purposive sampling
- Judgement sampling: the researcher selects sample members according to some criterion
(he judges who is in and who isn't)
- Quota sampling: to improve representativeness. For example, a high school consists
of 40% females and 60% males. When drawing a sample, you should apply
the same quota in selecting, so in a sample of 10 choose 4 females and 6 males to
represent the population in some measure.
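The 40%/60% quota example above can be sketched in Python; the participant pool and its gender attribute are hypothetical, and selection is first-come-first-served rather than random, which is exactly why quota sampling is non-probability sampling:

```python
# hypothetical participant pool with a gender attribute
pool = [{"id": i, "gender": "female" if i % 3 == 0 else "male"} for i in range(30)]

quotas = {"female": 4, "male": 6}      # mirrors the 40%/60% population split
sample = []
counts = {"female": 0, "male": 0}

for person in pool:                    # take the first arrivals until each quota fills
    g = person["gender"]
    if counts[g] < quotas[g]:
        sample.append(person)
        counts[g] += 1
```

The quotas guarantee the right gender proportions, but within each quota whoever happens to come first is taken, so there is no known non-zero selection chance per element.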
Quota control:
Precision control: when you match the sample on several factors (control characteristics, e.g. religion).
Frequency control: with this type of control you match the percentage in the population with the
percentage in your sample.
Problem with quota sampling → its representativeness: the data available for control
may be outdated or inaccurate (the frequency in the population might be different).
Despite its problems it is used a lot; it is cheaper and faster than probability sampling.
While there are dangers of bias and unrepresentativeness, the risks are usually not that great.
Internet sampling: might give an unrepresentative conclusion for the entire population
(the elderly are underrepresented on the internet) and respondents might be self-selected volunteers. Internet
sampling does have a lot of advantages: it is fast and cost-effective.
- Personal interviews
- Telephone interviews
- Self-administered surveys
Information → the interviewer can do little about the information level of the participant;
what he can do is ask screening questions to determine whether the participant
can answer all questions regarding the topic.
Motivation → in telephone and personal interviews it is the responsibility of the interviewer
to motivate the respondent; not so in web-based and self-administered surveys.
Non-response error → this is bias; it occurs when participants do not answer or do not
successfully answer.
Mixed mode → when you don't find one suitable method for your study, you can combine
methods.
Advantages:
- In depth
- Can improve quality of info received (further questions)
- More control, pre-screening and set up location
- Adjust languages
Disadvantage
- Costly in time, money
- Talking to strangers can be difficult
- Questions can be altered → bias
- Response errors
Interviewing techniques:
The interviewer needs to ensure that the information obtained will answer the
question's objectives.
Probing: technique of stimulating the participant to answer more fully and relevantly
- A brief assertion of understanding and interest (yes, I see, uh-huh, aha)
- An expectant pause
- Repeating the question
- Repeating the participants reply
- A neutral question or comment, e.g. 'What do you mean?'
- Question clarification
For example, if the participant answers ‘I don’t know’ try to use probing techniques to get
an answer from the participant.
Recording the interview: writing down the response, repeat and use special instruments if
applicable.
Disadvantages:
- No telephone service households
- Inaccurate or non-functioning phone numbers
- Limitations on interview length
- Limitations on use of visuals.
- Ease of termination (hanging up the phone)
- Less participant involvement, experience
- Distracting physical environment (when in the car or walking in the city)
Advantages:
- Costs
- Sample accessibility: through mail
- Response time: participants can postpone their responses, which can ensure better quality but also makes
non-response bias more likely
- Anonymity → mail surveys are impersonal, providing anonymity.
Disadvantages:
- Topic coverage: type and amount of data that is included, often very short. Long
surveys often require an incentive or personal benefit.
- Non-response error
Reducing non-response:
- Follow-ups and reminders
- Preliminary notification → advance notification by phone or email
- Concurrent techniques:
money incentives work very well! Or reporting the findings to the participant;
short questionnaires are better than long ones regarding response bias;
respected sponsorships increase the response rate.
Drop-off system → a lightly trained interviewer personally delivers the surveys and picks
them up.
- The only method that can be used to obtain info from subjects who cannot talk or read,
like young children and animals.
- Collects data at the time it occurs → reducing retrospective biases → people forgetting
things after a week, or their opinions changing
- Reduces respondent bias.
- Method reactivity biases → when respondents change their behaviour because they
know they are being observed
- Captures subjects in their natural habitat, no bias from the environment (laboratory)
- Observation is less demanding than questioning.
Limitations:
- The observer must be at the scene of the event.
- Slow and expensive process
- Restricted to information that is directly observable; capturing values or opinions requires
training and education → combining observation with interviews is a solution
- Restricted to the current time, not the past nor the future
Direct observations:
When the observer is physically present and personally monitors.
Indirect observations:
When recording is mechanical, photographic or electronic.
Concealment → observers shield themselves from the subject (e.g. one-way mirrors). Reduces the risk of bias, but raises
ethical questions.
Non-behavioural:
Record analysis: Historical or current records, and public or private records.
Physical condition analysis: inventory condition analysis, or studies of plant safety.
Process (activity) analysis: for example, manufacturing processes or traffic flows.
Behavioural:
- Non-verbal behaviour: Body movement
- Linguistic behaviour: Sounds made in a class (ahs and uhs)
- Extra-linguistic: vocal pitch, loudness and timbre, vocabulary, pronunciation,
characteristic expressions → Unabomber example
- Spatial relationship: How a person relates physically to others (proxemics, concerns
how people organize the territory around them)
Example of factual observation: counting the number of people with sweaty foreheads at an airport.
Example of inferential observation: counting people who are potential carriers of swine flu by
looking at sweaty foreheads.
Observation of physical traces → some very innovative observational procedures can be both
non-reactive and inconspicuously applied, like:
Unobtrusive measures
These approaches encourage creative and imaginative forms of indirect
observation, archival searches, and variations on simple and contrived observation.
Of particular interest are measures involving indirect observation based on physical
traces, which include erosion (measures of wear) and accretion (measures of deposit).
Physical trace methods present a strong argument for use, based on their ability to
provide low-cost access to frequency, attendance and incidence data without
contamination from other methods or reactivity from participants.
They are excellent 'triangulation' devices for cross-validation.
Semi-structured & unstructured interviews: have a topic list but no fixed format → used when
you want to learn something from the participant concerning a topic.
Structured interviews: have a specific order and closed questions; basically a quantitative method of
collection → predefined questions.
Interview guide: Serves as a memory list for interviewer and ensures questions are asked
the same way.
Question types:
- Introductory questions → thank you for coming, how are you? Can you
tell me about …?
- Follow-up questions → what do you mean by that?
- Probing questions → do you have an example? (trying to get an answer from the
respondent)
- Specifying questions → could you elaborate on that?
- Direct questions → what is your point of view?
- Indirect questions → what do people around here think?
- Structuring questions → moving on to the following topic
- "Silence"
- Interpreting questions → so you mean that …?
Information recording
Recorded on tape or digitally.
Two interviewers: one takes notes while the other questions, or both question and interpret.
Interviewer qualifications:
- direct the interview, give guidance
- expert in the field
- probing respondents
Focus group
A qualitative form of interview with a group of elements of the population,
led by a moderator; the moderator is in charge of probing the ideas and feelings in the group.
Homogeneous groups are more common than heterogeneous ones. However, a group that is too homogeneous might
end up being too like-minded, or too easily dominated.
Online focus groups make it possible to hold focus groups overseas at any time of the day.
Synchronous → at the same time, through video.
Asynchronous → through a panel, email or forum. Disadvantage → no facial expressions.
Disadvantage: only for people with internet access (less and less of a problem).
Participant observation: researchers fully dive into the world they want to research (as in a documentary).
Most observers are subject to fatigue, halo effect and observer drift, which refers to a loss in
reliability and validity that effects coding
observer trials with the instruments should be used until high degree of reliability and
validity is achieved.
Data collection:
- Who: what/who needs to be observed, and who carries the responsibility on an
ethical level?
- What: the unit of analysis, the characteristic of the observation (the act). Event sampling:
multiple events (acts). Time sampling: a time interval or a continuous timeframe.
- When: at what moment?
- How: field notes or checklists are the most common.
- Where: where does the act take place? Reactivity response: the respondent is
aware of being observed and his responses are biased (Hawthorne effect).
If your secondary data cannot answer all the questions above, its quality is questionable. For
example, if you want to assess a strategy concerning financial models and the secondary
data you have examines a company's liability, then you don't have the right information.
Sample quality: the secondary data needs to address the same population. If the
secondary data used a non-probability sample in a high school, and in your research you
want to perform a simple random sample (SRS) of the entire population, the data are not useful.
Secondary data has a prominent role in qualitative research. For example, case studies rely
on data sources such as personal interviews with key people or internal documents.
Pattern recognition: for example, MasterCard analyses 12 million transactions daily and
uses data mining to detect fraud through pattern recognition.
The primary objective of content analysis is to reduce the often copious information to a
manageable amount, through condensation and categorization.
The information from content analysis can be used to answer the following questions:
- What are the antecedents of media coverage? Why does media coverage vary
over time?
- What are the characteristics of media coverage? Why do newspapers report positively
or negatively?
- What are the effects of media coverage? Why do some newspapers report later than others?
Advantages:
- Adds to transparency: it is clear to readers what the researcher did
- Others can replicate your research
- Is unobtrusive and non-reactive
Disadvantages:
- Quality depends on input
- Coding and interpretation are subject to interpretation bias: one researcher codes
differently from another
Designing a content analysis:
- Research problem
- Define the population of sources and the selection procedure: all sources or a sample
of sources? Do you want to compare them or just analyse them?
- Coding procedure: prescriptive (predetermined codes) or open analysis (coding
during the process)?
- Coding frame: categorization and a list of all codes used.
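A prescriptive coding procedure can be sketched in a few lines of code: each text fragment is assigned every code from a predetermined frame whose keywords it contains, and the frequencies are tallied. The coding frame, keywords and fragments below are purely illustrative assumptions, not from any real study:

```python
from collections import Counter

# Hypothetical prescriptive coding frame: each code is triggered by
# a set of keywords agreed on before the analysis (illustrative only).
coding_frame = {
    "price": {"cost", "expensive", "cheap"},
    "quality": {"durable", "broke", "reliable"},
}

def code_fragments(fragments, frame):
    """Assign each fragment every code whose keywords it contains; count frequencies."""
    counts = Counter()
    for fragment in fragments:
        words = set(fragment.lower().split())
        for code, keywords in frame.items():
            if words & keywords:
                counts[code] += 1
    return counts

fragments = [
    "The product is reliable but expensive",
    "It broke after a week",
    "Very cheap and durable",
]
print(code_fragments(fragments, coding_frame))
```

An open-analysis variant would instead grow the frame while coding; the prescriptive version above fixes it up front, which is what makes the procedure replicable.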
Thematic analysis:
Focuses on the content of the narrative: what has been said. The main objective is to identify
common themes in a bundle of stories.
Temporal organization of the story: cuts the story into smaller pieces and orders them
sequentially.
Interactional analysis:
Dialogues between storyteller and listeners. Accounts for the collaborative processes
between the two in constructing a story.
Considering context:
As narrative analysis is a qualitative method, it should incorporate the specific context in its analysis.
How a story is told may differ across different times. The differences between the stories told in
different contexts yield insights into the evaluation segments, and by considering the context we are
better able to understand the process as a whole.
Ethnographic studies
Used to study business phenomena; the analysis depends on the problem statement. Characteristic of
an ethnographic study is its richness of description.
Advantages:
Cares less about general principles and places strong emphasis on cooperation between researcher
and participants.
Disadvantages;
- Context dependent
- Researchers rarely have full control over the environment
Grounded theory provides a general framework for conducting qualitative research. It starts from
collecting data and uses these data in an iterative coding process. Grounded theory basically means
that researchers should reflect on previous codings and check whether they are still representative (reflecting).
Open coding: conceptualization of words and phrases; all information is labelled with categories.
Axial coding: identify linkages between categories; develop theoretical explanations.
Selective coding: after several rounds of axial coding, the researcher focuses on categories and
attempts to develop a new grounded theory (start all over).
Theoretical sampling: is not about representativeness like probability sampling, but more concerned
with which cases would be of additional relevance.
Theoretical saturation: is a stopping rule for qualitative research. You stop when data does not
provide new information.
Advantage grounded theory: provides a convincing framework for a systematic inquiry into
qualitative data.
Disadvantages: pre-theoretical thoughts can result in feasibility problems, it is very time-consuming,
and it is criticized for not giving new theories but rather a categorization of data (e.g. a coding framework).
A Case study is an empirical inquiry that investigates a contemporary phenomenon within its
real-life context. It is more an approach to investigate a phenomenon than a method to
collect data. Usually researchers combine methods. A case study is built upon interviews and
participant observations.
Objective of case study: Detect patterns and explanations. The objective is to understand a
real problem and to gain insights in developing new explanations and theories
For example: I want to know what companies' latest products were and whether they implemented
new innovations. Compare the latest products from companies and look for patterns and
explanations (cases).
Chapter 12 experimentation
Causal methods: why do events occur under some conditions and not under others? Methods
answering this question are called causal methods.
Ex post facto: a researcher interviews respondents or observes what is and what has been.
This method can also uncover causality. Through experimentation the researcher can alter
variables and observe what changes.
Experiments: studies where the researcher intervenes in the measurement (by altering a
variable, for instance).
Intervention: usually the researcher manipulates the independent variable (IV) and observes
how it affects the dependent variable (DV): the IV causes the DV to occur.
Advantages:
- The researcher's ability to change/manipulate the IV. A control group (serves as a
comparison with the experimental group) and pre- and post-tests (measurement before
and after manipulation).
- Ability to control extraneous variables, such as location or environmental variables.
- Low cost of creating test situations
- Replication leads to discovery of an average effect of the independent variable across
people, situations and times.
- Ability to exploit naturally occurring events to reduce subjects' perceptions of the
researcher as a source of intervention or deviation.
Disadvantages:
- An artificial or non-natural environment might have an effect on subjects.
- Generalization from non-probability samples can pose problems despite random
assignment
- Experimental equipment can be expensive
- Experimental studies of the past may no longer be relevant
- Experiments are used for studies with people, which can pose ethical limits.
Internal validity: are the conclusions we draw really causally related? Did the
measurement instrument really measure what it was designed to measure?
External validity: does an observed causal relationship generalize across persons, settings
and times?
External validity:
The interaction of the experimental treatment with extraneous factors and the resulting
impact on the ability to generalize to times, settings or persons.
True experimental designs: have an experimental group and a control group; these two
groups are made equal through random assignment or matching.
Pre-test – post-test control group design (experimental & control group, random
assignment):
A very effective approach; it deals with the seven major internal validity problems.
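The logic of this design can be illustrated with a short sketch: the treatment effect is estimated as the change in the experimental group minus the change in the control group, which strips out changes that would have happened anyway. The scores below are invented for illustration:

```python
from statistics import mean

def treatment_effect(pre_exp, post_exp, pre_ctrl, post_ctrl):
    """Estimate the manipulation's effect as the change in the experimental
    group minus the change in the control group."""
    return (mean(post_exp) - mean(pre_exp)) - (mean(post_ctrl) - mean(pre_ctrl))

# Hypothetical pre- and post-test scores
pre_e, post_e = [10, 12, 11], [15, 17, 16]   # experimental group
pre_c, post_c = [10, 11, 12], [11, 12, 13]   # control group
print(treatment_effect(pre_e, post_e, pre_c, post_c))
```

Random assignment is what licenses this subtraction: it makes the two groups comparable before the manipulation.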
Factorial survey/vignette research: researcher presents the subject with a brief and explicit
description of a situation (fact) and then asks him or her to assess the situation or to make a
decision.
Laboratory experiment: the researcher has the ability to control every variable.
Field experiments; conducted in a natural setting, participants don’t know that their
behaviour is being observed.
Quasi experiment: often cannot know whom to expose the experimental treatment to. Is
inferior to a true experimental design, but is usually superior to pre-experimental designs.
Useful in studying well-defined events (natural disasters, a new law)
Non-equivalent control group design: the test and control groups are not randomly assigned.
Time-series design: repeated observations before and after treatment, allows subjects to act
as their own control (good way to study unplanned events)
Factor: used to denote an independent variable; factors are divided into levels, which
represent sub-groups (male/female, etc.).
In this phase you draft a specific instrument design with administrative questions, target
questions, classification questions and measurement questions.
Question wording:
It is frustrating when people misunderstand a question, most of the time due to a lack of
vocabulary. Criteria to assess your questions on wording:
- Is the question stated in terms of shared vocabulary?
- Does the question contain vocabulary with a single meaning?
- Does the question contain unsupported or misleading assumptions?
- Does the question contain biased wording?
- Does the question contain double negations?
- Is the question personalized correctly?
- Are adequate alternatives presented within the question?
Several situational factors affect the decision of whether to use open ended or closed
questions. The decision is also affected by the degree to which the following factors are
known to the interviewer:
Branch question: the content of one question assumes that other questions have been answered.
Pre-testing options:
Researcher pre-testing:
Participant pre-testing:
Collaborative pre-testing: Inform participants it’s a pre-test
Non-collaborative pre-testing: Do not inform participants it’s a pre-test
The goal of measurement: to provide the highest-quality, lowest-error data for testing
hypotheses.
Researchers formulate hypotheses, then they measure whether the hypotheses are true or false. An
important question at this point is: what does one measure?
- Objects: ordinary experiences such as tables, people, books, cars, opinions, peer-group
pressures
- Properties: characteristics of an object
Physical properties: weight, height, posture.
Psychological properties: attitudes and intelligence
Error sources:
- Participant: comes from a variety of factors such as testing, history, knowledge, etc.
- Situational factors: situations that place a strain on the interview or
measurement, e.g. another person who influences the participant.
- Measurer: the researcher/measurer can distort responses by checking the wrong
output, careless mechanical handling, incorrect coding, etc.
- Data collection instrument: a defective instrument can cause distortion in two ways:
it can be too confusing and ambiguous to use, or it might measure specific data in
another way than you would like.
1. Validity
A. Content validity: The extent to which the instrument covers the investigative questions
guiding the study. If the instrument contains a representative sample of the universe of
subject matter of interest, then content validity is good. To evaluate content validity of an
instrument, one must first agree on what elements constitute adequate coverage.
If the data collection instrument adequately covers the topics that have been defined as the
relevant dimensions, we conclude that the instrument has good content validity.
Determination of content validity is judgemental
1. Designer may determine it through careful definition of the topic concerned.
2. Use a panel of people to judge how well the instrument meets the standards.
B. Criterion validity: reflects the success of measures used for prediction or estimation. You
may want to predict or estimate the existence of a behaviour. Four qualities judge criterion validity:
- Relevance
- Freedom from bias
- Reliability
- Availability
A criterion is relevant: in making decisions you must rely on your own judgement in deciding
what partial criteria are appropriate to measure what you want to measure.
It is free from bias when the criterion gives each person/element an equal opportunity to
score well.
A reliable criterion is stable or reproducible. When a criterion is unreliable but the only
option available, it is possible to use the 'correction for attenuation' formula, which lets you see
what the correlation between the test and the criterion would be if both were made perfectly
reliable.
Finally, the information specified by the criterion needs to be available; if not, how will you
access it and how much will it cost?
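The correction for attenuation mentioned above is commonly written as r_corrected = r_xy / sqrt(r_xx · r_yy), where r_xy is the observed test–criterion correlation and r_xx, r_yy are the reliability coefficients. A minimal sketch, with illustrative numbers:

```python
from math import sqrt

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Estimated test-criterion correlation if both measures were
    perfectly reliable; rel_x and rel_y are reliability coefficients."""
    return r_xy / sqrt(rel_x * rel_y)

# Illustrative values: observed r = 0.42, test reliability 0.80,
# criterion reliability 0.70 (made-up numbers)
print(round(correct_for_attenuation(0.42, 0.80, 0.70), 3))
```

The corrected value is always at least as large as the observed one, since unreliability can only dilute an observed correlation.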
2. Construct validity
Used if you want to measure abstract characteristics for which no empirical validation seems
possible. To evaluate construct validity, we consider both the theory and the measurement
instrument being used. Once the theory is accepted, we would investigate the adequacy of
the instrument.
Attempts to identify the underlying construct being measured and how well the test
represents it (them)
a. Reliability
Reliability is concerned with estimates of the degree to which a measurement is free of
random or unstable error. Reliable instruments are robust; they work well at different times
under different conditions. This distinction of time and condition is the basis for frequently
used perspectives on reliability: stability, equivalence and internal consistency.
Improving reliability
- Minimize external sources of variation
- Standardize the conditions under which measurement occurs
- Improve investigator consistency by using well-trained, motivated, supervised persons
to conduct research
- Broaden the sample of measurement questions used by adding similar questions to
the data-collection or adding more observers
- Improve internal consistency of an instrument by excluding data from analysis drawn
from measurement questions eliciting extreme responses.
3. Practicality
a. Economy
Economic factors such as costs based on the instruments used, the measurement questions and
the data collection method (personal interviews are more expensive than online surveys).
b. Convenience
A measurement device passes the convenience test if it is easy to use and to apply.
c. Interpretability
Relevant when people other than the researchers need to interpret the data. This includes guides on
how to read and interpret the data, evidence about reliability, correlations and scores, and
detailed instructions.
Scale selection:
Selection or construction of measurement scales requires decisions in six key areas:
- Study objective: To measure characteristics of participants or to use participants as
judges on objects or indicants presented to them.
- Response form: rating, when participants score an object or indicant; ranking, when
you compare scores among two or more indicants or objects; categorization, where
participants put themselves or property indicants into groups or categories
- Degree of preference: Preference (choose an object he or she favours) and non-
preference (asked to judge, without reflecting preferences)
- Data properties: How data is classified statistically (nominal, ordinal, interval, ratio)
- Number of dimensions: Unidimensional, measure only one attribute of
participant/object and multidimensional, measure more attributes of
participant/object.
- Scale construction:
Five construction approaches are used in research practice:
Arbitrary: a scale is custom-designed to measure a property or indicant.
Consensus: judges evaluate the items to be included, based on
topical relevance and lack of ambiguity.
Item analysis: measurement scales are tested with a sample of participants.
Cumulative: scales are chosen for their conformity to a ranking of items with ascending
or descending discriminating power.
Factoring: scales are constructed from inter-correlations of items from other studies.
Rating scales: to judge properties of objects without reference to other similar objects. This may be
in forms such as ‘like-dislike’; approve-indifferent-disapprove etc.
Number of scale points: some researchers think that the greater the number of scale points (5 points
instead of 3), the greater the sensitivity of the instrument.
Ranking scales: the subject directly compares two or more objects and makes choices
among them.
- Paired comparison scale: choosing between two objects; when there are more than two,
this becomes difficult for the participant.
- Forced ranking scale: lists attributes that are ranked relative to each other (1, 2, 3, etc.)
- Comparative scale: a scale compared with a standard (if known), e.g. 'this bottle of water
tastes better than the one before' (yes, about the same, no)
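The reason paired comparison becomes difficult with more than a few objects is combinatorial: n objects require n(n-1)/2 pairwise judgements, which grows quadratically. A quick illustration:

```python
def number_of_pairs(n):
    """Paired comparisons needed for n objects: n choose 2."""
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(n, number_of_pairs(n))   # 3 -> 3, 5 -> 10, 10 -> 45
```

At 10 objects a participant already faces 45 comparisons, which is why forced ranking is often preferred for longer lists.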
Equal appearing interval scale, also known as the Thurstone scale; this approach resulted in an
interval rating scale for attitude measurement. Its costs, time and staff requirements make it
impractical.
Item analysis scaling is a procedure for evaluating an item based on how well it discriminates
between those persons whose total score is high and those whose total score is low. The most
popular scale using this approach is summated or Likert scale.
The item means between the high score group and the low score group are then tested for
significance by calculating t values. Finally, the 20 to 25 items that have the greatest t values
(significant differences between means) are selected for inclusion in the final scale.
The first step is to collect a large number of statements that meet two criteria:
Each statement is believed to be relevant to the attitude being studied.
Each is believed to reflect a favourable or unfavourable position on that attitude.
The next step is to array these total scores and select some portion representing the highest and
lowest total scores: the top 25 per cent and the bottom 25 per cent. These extremes are the two
criterion groups by which we evaluate individual statements. After finding the t values for each
statement, we rank order them and select those statements with the highest t values.
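The item-analysis step above, comparing one item's means between the top-25% and bottom-25% criterion groups, can be sketched with an equal-variance two-sample t statistic. The ratings are hypothetical:

```python
from statistics import mean, variance

def item_t_value(high_scores, low_scores):
    """Two-sample t value for one item's scores between the high- and
    low-total-score criterion groups (pooled-variance form)."""
    n1, n2 = len(high_scores), len(low_scores)
    pooled = ((n1 - 1) * variance(high_scores)
              + (n2 - 1) * variance(low_scores)) / (n1 + n2 - 2)
    return (mean(high_scores) - mean(low_scores)) / (pooled * (1 / n1 + 1 / n2)) ** 0.5

# Hypothetical ratings of one item from the top-25% and bottom-25% groups
high = [5, 4, 5, 4]
low = [2, 3, 2, 1]
print(round(item_t_value(high, low), 2))
```

Items whose t values are largest discriminate best between high and low scorers and are retained for the final scale.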
A widely used indicator to test how well different items form one scale is Cronbach's alpha. Formally,
Cronbach's α is the average correlation between all items, corrected for the number of items. It can take
values between -1 and +1, and a general rule of thumb is that α ≥ 0.7 indicates a good scale.
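A common variance-based form of Cronbach's α is α = k/(k-1) · (1 - Σ item variances / variance of totals), where k is the number of items. A minimal sketch; the Likert responses below are invented for illustration:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, same respondents in each.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # total score per respondent
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three hypothetical Likert items answered by five respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Here the items move together across respondents, so α comes out well above the 0.7 rule of thumb.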
Total scores on cumulative scales have the same meaning. Given a person’s total score, it is
possible to estimate which items were answered positively and which negatively. Scalogram analysis
is a procedure for determining whether a set of items forms a unidimensional scale. A scale is
unidimensional if the responses fall into a pattern in which endorsement of the item reflecting the
extreme position also results in endorsing all items that are less extreme. The scalogram and similar
procedures for discovering underlying structure are useful for assessing behaviours that are highly
structured.
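The cumulative pattern that scalogram analysis checks for can be sketched per respondent: with items ordered from least to most extreme, once any item is rejected, no more extreme item may be endorsed. A minimal illustration:

```python
def is_guttman_pattern(responses):
    """responses: endorsements (True/False) for items ordered from least
    to most extreme. Returns True if the pattern is cumulative: after the
    first rejection, no further item is endorsed."""
    seen_rejection = False
    for endorsed in responses:
        if not endorsed:
            seen_rejection = True
        elif seen_rejection:
            return False    # endorsed a more extreme item after a rejection
    return True

print(is_guttman_pattern([True, True, False, False]))   # prints True
print(is_guttman_pattern([True, False, True, False]))   # prints False
```

In a perfectly unidimensional scale every respondent produces a pattern like the first one, so the total score alone reveals which items were endorsed.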
Factor scales include a variety of techniques that have been developed to address
two problems:
How to deal with a universe of content that is multidimensional.
How to uncover underlying dimensions that haven’t been identified by exploratory research.
These techniques are designed to inter-correlate items so that their degree of interdependence
may be detected. We limit the discussion in this section to the
semantic differential (SD), which is based on factor analysis.
SD scale should be adapted to each research problem. SD construction involves the following steps:
Select the concepts.
Select the original bipolar word pairs or pairs you adapt to your needs.
You need at least three bipolar pairs for each factor to use evaluation.
The scale must be relevant to the concepts being judged.
Scales should be stable across subjects and concepts.
Scales should be linear between polar opposites and pass through the origin.
Consensus and cumulative scaling are time-consuming and therefore less used.
Conjoint analysis is used to measure complex decision-making that requires multi-attribute
judgements. Its primary focus has been the explanation of consumer behaviour, with numerous
applications in product development and marketing. Conjoint analysis can produce a scaled value
for each attribute as well as a utility value for attributes that have levels. Both ranking and rating
inputs may be used to evaluate product attributes.