Summary BRM: Business Research Methods (Zuyd Hogeschool)
What is research? Why should there be any question about the definition of research?
Research is a systematic enquiry whose objective is to provide information capable of solving the research problem. This definition is rather general and fits all types of research. Questions about the definition of research often arise because research can serve various purposes; in particular, we distinguish between reporting, descriptive, explanatory and predictive research. Depending on the purpose, the kind of information to be obtained differs.
Applied research: Has a practical problem-solving emphasis (the problem is not necessarily negative). It is conducted to solve or provide answers to a real management or business problem.
Pure/basic research: Provides answers to questions of a theoretical nature. It is less motivated by business
considerations, but more by academic considerations.
1. Clear purpose and focus. (As unambiguous as possible, any decision or problem should include its scope, limitations
and precise meanings of words).
2. Plausible goals.
3. Detailed research process: Follow defensible, ethical and replicable procedures. (proposal, replicability).
4. Provide evidence of objectivity.
5. Research design thoroughly planned (objective results, representativeness of sample, minimize personal bias).
6. High ethical standards applied.
7. Limitations frankly revealed = reporting of procedures should be complete and honest.
8. Appropriate analytical techniques should be used = adequate analysis for decision-makers’ needs (validity,
reliability, probability of error, findings that have led to conclusions).
9. Findings presented unambiguously (open to only one interpretation).
10. Conclusions justified (data provides evidence).
11. Reports of findings and conclusions should be presented clearly.
12. Report should be professional in tone, language and appearance.
13. Researcher’s experience reflected; important for confidence in the research report.
Positivism versus interpretivism

Basic principles
- View of the world. Positivism: the world is external and objective. Interpretivism: the world is socially constructed and subjective.
- Involvement of researcher. Positivism: the researcher is independent. Interpretivism: the researcher is part of what is observed and sometimes even actively collaborates.
- Researcher’s influence. Positivism: research is value-free. Interpretivism: research is driven by human interests.

Assumptions
- What is observed? Positivism: objective, often quantitative, facts. Interpretivism: subjective interpretations of meanings.
- How is knowledge developed? Positivism: by reducing phenomena to simple elements representing general laws. Interpretivism: by taking a broad and total view of phenomena to detect explanations beyond current knowledge (look at the totality).
- Type of study. Positivism: quantitative. Interpretivism: qualitative.
Scientific research: A process that combines induction, deduction, observation and hypothesis testing into a set of reflective thinking activities.
1. Deduction: (positivism) A conclusion is derived necessarily from given reasons (premises).
- Reasons given for the conclusion must agree with the real world (true).
- The conclusion must follow from the reasons (valid).
2. Induction: (interpretivism) Observation > analysis & conclusion (= hypothesis, not yet proven).
- A conclusion is derived from observations of the real world.
- The hypothesis (= conclusion) is plausible (credible) if it explains the facts.
- The conclusion explains the facts, and the facts support the conclusion.
> Other conclusions/explanations fit the facts as well. E.g. €1 million campaign, but sales do not increase.
Bad campaign, insufficient stock, employee strike, hurricane, etc.
Combining induction and deduction (‘double movement of reflecting thoughts’ – John Dewey):
1. Induction: Observation > hypothesis.
2. Deduction: Hypothesis testing (through new observations) > does the hypothesis explain the facts?
Building blocks of research: Concepts, constructs, definitions, variables, propositions, hypotheses, theories, models.
Concepts and constructs are used at the theoretical level; variables are used at the empirical level.
Construct: E.g. presentation skills > social skills, self-confidence, body language, knowledge of the subject, etc.
- More abstract.
- Intangible.
- Specifically developed for research purposes.
- Can combine multiple concepts or constructs.
Proposition: A statement about concepts that may be judged as true or false if it refers to observable phenomena.
Hypothesis: When a proposition is formulated for empirical research.
- Describes the relationship among variables.
Generalization: If the hypothesis is based on more than one case.
The virtue of a hypothesis is that it limits what will be studied.
Of tentative and conjectural nature.
- Explicative models: extend the application of well-developed theories or improve our understanding of their key
concepts.
- Simulation models: clarify the structural relationships of concepts and attempt to reveal the process relationship.
What position would you take and why? ‘Theory is impractical and thus no good’ or ‘Good theory is the most practical
approach to problems’.
The question addresses the problem of the usefulness of theories. Common to all theories is that they give a simplified representation of reality. Thus the usefulness of theories must be weighed against the tension between their abstract, often simplified picture of reality and the complexity of reality itself. On the other hand, it must be considered that only theories allow us to provide real explanations, i.e. explanations that are reasoned and apply to more than just one specific phenomenon.
2. Management questions: How can we solve this problem? / How can we respond to this opportunity?
Exploration phase (find public data, literature review).
3 categories:
o Choice of purpose or objectives; What do we want to achieve?
o Generation and evaluation of solutions; How can we achieve the ends we seek?
o Troubleshooting or control situation: Why does ... incur the highest costs?
3. Research questions: Should management use Strategy A? Yes/no. Strategy B? Yes/no. Etc.
Hypothesis of choice that best states the objective of the study.
Fine-tune when exploration phase is completed, and set the scope (what is not included?).
4. Investigative questions: What are the effects of Strategy A? Strategy B? Etc.
Foundation on which the data collection instrument is based.
Reveal the specific pieces of information that one needs to know in order to answer the research question.
In qualitative research, they form the core of the interview guide.
In quantitative research, they identify the concepts that need to be measured.
5. Measurement questions: What you will ask in a survey/interview/focus group/observation.
Pre-designed or custom-designed questions. Hereafter, pilot testing.
6. * Decision: What is the recommended action given the research findings?
Research design
- The blueprint for fulfilling objectives and answering questions.
- The strategy for a study and the plan by which the strategy is to be carried out.
- It specifies the methods and procedures for the collection, measurement and analyses of data.
- Design strategy: type, purpose, scope, time frame, environment
Sampling design
= Identify the target population and select a sample to represent this population.
- Researchers take a sample when they are interested in estimating one or more population values and/or testing one or
more statistical hypotheses.
Pilot-testing
- Conducted to detect weaknesses in design and instrumentation, and to provide proxy data for selection of a probability
sample.
Data collection
- Definition of data: the facts presented to the researcher from the study’s environment. Data may be characterized
further by their abstractness, verifiability, elusiveness and closeness to the phenomenon.
- Capturing data is elusive; complicated by the speed at which events occur and the time-bound nature of observation.
- Data are edited to ensure consistency across respondents and to locate omissions.
Research evaluation
- Ex post facto evaluation: evaluation afterwards.
- Prior or interim evaluations
Some questions are answered by research and others are not. Distinguish between them.
One cannot investigate research problems that are essentially value questions or that are ill-defined, and one should be careful when engaging in politically motivated research. An example of a value question is whether a company should close a certain production facility because it is currently unprofitable, or whether it should bear the losses for a longer time. The answer to this question cannot be found by research, as it depends mainly on the values you hold. An example of an ill-defined management problem is: how could we become more profitable? It is certainly possible to investigate drivers of profitability, but not all drivers can be investigated. Thus, you would need to limit the research problem to specific drivers of profitability.
Critical review: Book or peer review either prior to, or after publishing.
Mention weak and strong points and discuss these using specific criteria.
Overall verdict on the text and some explanations for it.
Criteria used: Contribution to the field, argumentation, methods and analysis, writing.
- Limited to summarizing the results of studies (only useful if many empirical studies on this problem exist).
- Systematic review: Lies between a narrative literature review and a meta-analysis.
Assesses a complete set of relevant studies covering the field. Provides insight into the overall picture.
Effective criticism
- Base your criticism on an assessment of weaknesses and strengths.
- Criticize theories, arguments, ideas and methodology (not authors)
- Reflect on your own critique, providing reasons for the choices you have made and pointing out weaknesses in your
own criticism.
- Treat the work of others with respect; give a fair account of the arguments/views of others when summarizing.
Treatment of participants: Research must be designed so that a respondent does not suffer physical harm, discomfort, pain, embarrassment or loss of privacy. Researchers should follow three guidelines:
1. Explain the benefits of the study.
2. Explain the participant’s rights and protection.
3. Obtain informed consent.
Deception: Occurs when the participant is told only a part of the truth or when the truth is fully compromised.
> Reasons for deception:
1. To prevent biasing participants before the survey or experiment.
2. To protect the confidentiality of a third party (e.g. sponsor).
Informed consent: Fully disclosing the procedures of the research design before requesting permission to proceed with the study.
1. Introduce yourself.
2. Briefly describe survey topic.
3. Describe target sample.
4. Tell who the sponsor is.
5. Describe the purpose of the research.
6. Give an estimate of the time required to complete the interview.
7. Promise anonymity and confidentiality (when appropriate).
8. Tell that participation is voluntary.
9. Tell that item non-response is acceptable.
10. Ask permission to begin.
11. * Conclusion: Give information on how to contact the principal investigator.
Debriefing participants: Involves several activities that follow data collection:
- Explanation of any deception.
- Description of the hypothesis, goal or purpose of the study.
- Post-study sharing of results.
- Post-study follow-up medical or psychological attention.
To what extent do debriefing and informed consent reduce the effects of deception?
The majority of the participants do not resent temporary deception and may have more positive feelings about the value of
the research after debriefing than those who did not participate in the study.
Data collection in cyberspace: The virtual world (World Wide Web). Also applicable to data mining:
- Researchers are obliged to protect human subjects and ‘do right’.
- Cyber-research is particularly vulnerable to ethical breaches.
E.g. blurring between public and private venues, difficulty in obtaining informed consent, etc.
- That an action is permitted, or not precluded, by policy or law does not mean it is ethical or allowable.
- Inquiry must be done honestly and with ethical integrity.
Non-disclosures
- Sponsor non-disclosure
- Purpose non-disclosure
- Findings non-disclosure
Ethical codes
Effective codes:
- Are regulative
- Protect the public interest and the interests of the profession served by the code
- Are behaviour-specific
- Are enforceable.
For positivistic researchers, the knowledge acquisition process consists of deducing hypotheses (explanations) and testing them by measuring reality. Researchers following an interpretivistic approach acquire knowledge more by developing an understanding of phenomena through a deep-level investigation and analysis of those phenomena.
Deduction > positivists > quantitative.
Induction > interpretivists > qualitative.
The quality of any research study does not so much depend on whether it is qualitative or quantitative, but rather it
depends on the quality of its design and how well it is conducted.
A descriptive study is concerned with description, that is the who, what, where, when or how much in observations,
whereas in a causal study relationships between variables are identified, verified and established.
What are similarities between experimental and ex-post facto research designs?
Both designs try to show IV-DV relationship, or causal relationships, basically by:
1. Studying co-variation patterns between variables.
2. Determining time order relationships.
3. Attempting to eliminate the confounding effects of other variables on the IV-DV relationship.
They often use the same data collection methods.
- Cross-sectional study: Study carried out once, represents a snapshot of one point in time.
- Longitudinal study: Study repeated over an extended period.
Advantage: Tracks changes over time, more powerful regarding tests of causality.
Two varieties:
o Panel: Researcher studies the same people over time.
o Cohort groups: Researcher studies different people for each measurement.
Qualitative and quantitative studies can rely on both time dimensions.
6. Topical scope.
- Statistical study: Designed for breadth, rather than depth.
Captures the population's characteristics by drawing conclusions from a sample's characteristics.
Census study: Based on the whole population (special case).
- Case study: Designed for depth, rather than breadth.
8. Participant's perceptions.
When participants believe that something out of the ordinary is happening, they may behave less naturally.
There are three levels of perception:
1. Participants perceive no deviations from everyday routines.
2. Participants perceive deviations, but as unrelated to the research.
3. Participants perceive deviations as researcher-induced (example: mystery shopper).
You have grown up as a member of the upper social class and now follow the typical consumption
practices of that class.
Identification of IV and DV
1. The degree to which each variable may be altered; the relatively unalterable variable is the IV (e.g. age, social status).
2. The time order between the variables (the IV precedes the DV).
Each cell gets its own value. Afterwards you can compare the scores of the different Zuyd faculties.
Variables (columns): number of incoming students, number of graduating students, FTE in staff.
Units of analysis (rows): International Business, HMSM, Facility Management.
Sampling: Draw conclusions about the entire population, by selecting some elements in a population.
Population element: The unit of study; the subject on which the measurement is being taken.
Population: The total collection of elements about which we wish to make some inferences. E.g. 4000 files.
Census: Obtain information from all elements within the population (e.g. from all 4000 files).
Representative samples are only a concern in quantitative studies rooted in a positivistic research approach. Qualitative
studies rooted in interpretivism usually do not attempt to generalize their findings to a population.
What are the characteristics of accuracy and precision for measuring sample validity?
What makes a good sample?
Validity and representativity of sample: How well it represents the characteristics of its population.
Representativity of a sample depends on accuracy and precision.
- Accuracy: Degree to which bias and systematic variance are absent from the sample.
Systematic variance: the variation in measures due to some known or unknown influences that cause the scores to lean in one direction more than another.
o Some sample elements will underestimate the population values, others overestimate these values.
Variations in these values compensate each other > Sample value close to population value.
o The less bias and systematic variance, the greater the accuracy.
- Precision (of estimate): Degree of sampling standard error (error variance). Must be within acceptable
limits for the study’s purpose.
o The smaller the standard error of estimate, the greater the precision of the sample.
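The precision idea above can be made concrete with a small sketch: the standard error of the sample mean is the sample standard deviation divided by the square root of the sample size, so (all else equal) a larger sample gives a more precise estimate. The numbers below are made up purely for illustration.

```python
import math
import statistics

def standard_error(sample):
    """Standard error of the sample mean: s / sqrt(n)."""
    s = statistics.stdev(sample)           # sample standard deviation
    return s / math.sqrt(len(sample))

# Same spread, larger n -> smaller standard error -> greater precision.
small = [4, 5, 6, 5, 4, 6]
large = small * 10                         # identical values, ten times the n
print(standard_error(small) > standard_error(large))   # True
```
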
What are the six questions (steps) that must be answered to develop a probability sample/sampling plan?
1. What is the relevant population?
Who or what do you want to investigate?
2. What are the parameters of interest? Variables.
- Population parameters: Summary descriptors of variables in the population. E.g. mean, variance.
o Population proportion of incidence?
- Sample statistics: Summary descriptors of variables in the sample.
What are the two main categories of sampling techniques and their varieties?
The types of sampling design are determined by the representation basis and the element-selection technique.
1. Representation basis technique:
Probability sampling: Based on random selection.
- Each population element has a known, non-zero chance of being selected (requires a sampling frame).
- Only probability samples provide precision (maximum precision and accuracy).
Non-probability sampling: Not based on random selection and is subjective.
- Not every population element has a chance of being selected (no sampling frame).
- When probability sampling is not feasible (no sampling frame), or time and money budget is limited.
- Produces selection bias and non-representative samples.
2. Element selection technique:
Unrestricted sampling: Each sample element is drawn from the (complete) population.
Restricted sampling: Covers all other forms of sampling. Selection process follows complex rules.
1. Systematic sampling
To draw a systematic sample you need to follow the following steps:
1. Identify the total number of elements in the population. (E.g. N = 30).
2. Determine the desired sample size. (E.g. n = 10).
3. Identify the sampling ratio k. (E.g. k = N / n = 30 / 10 = 3).
4. Identify the random start (drop a pencil (eyes closed) on your population list, see where the dot is). (E.g. = 2).
5. Draw a sample by choosing every kth entry, counting from the random start. (E.g. entries 2, 5, 8, ...).
Use in combination with other designs to minimize bias.
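The five steps above can be sketched in a few lines of Python; the function name and the N = 30, n = 10 numbers simply mirror the example (step 2, choosing the sample size, is the `n` argument).

```python
import random

def systematic_sample(population, n):
    """Draw a systematic sample of size n (steps 1-5 above)."""
    N = len(population)                    # step 1: total number of elements
    k = N // n                             # step 3: sampling ratio k = N / n
    start = random.randrange(k)            # step 4: random start in [0, k)
    return [population[start + i * k] for i in range(n)]  # step 5: every kth entry

people = list(range(1, 31))                # N = 30
sample = systematic_sample(people, 10)     # n = 10, so k = 3
print(sample)                              # e.g. [2, 5, 8, ...] when the start is 1
```

Note that successive sample elements are always exactly k positions apart, which is what makes the design vulnerable to periodic patterns in the population list.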
2. Stratified sampling
Most populations can be separated into several mutually exclusive sub-populations (or strata). E.g. gender, education.
Each stratum is homogeneous internally and heterogeneous with other strata.
Stratified sampling: The process by which the sample must include elements from each strata.
Proportionate versus disproportionate sampling (deciding how to allocate a total sample among various strata):
Proportionate stratified sampling: Each stratum is properly represented, so the sample drawn from it is proportionate to the stratum's share of the total population.
Disproportionate stratified sampling: Any stratification that differs from the proportionate stratified sampling.
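Proportionate allocation can be illustrated with a short sketch; the strata and numbers below are hypothetical, not from the text.

```python
def proportionate_allocation(strata_sizes, n):
    """Sample size per stratum, proportionate to its share of the population."""
    N = sum(strata_sizes.values())
    return {name: round(n * size / N) for name, size in strata_sizes.items()}

# Hypothetical strata: 2400 men and 1600 women in a population of 4000,
# total sample n = 100.
strata = {"male": 2400, "female": 1600}
print(proportionate_allocation(strata, 100))   # {'male': 60, 'female': 40}
```

Because of rounding, the per-stratum sizes may not always sum exactly to n; any stratification deliberately deviating from these shares would be disproportionate.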
3. Cluster sampling
Cluster sampling: Population is divided into groups of elements, some groups are randomly selected for study.
Area sampling: Populations that can be identified with a geographic area (most important form of clusters).
Why clustering (advantages)?
- Less expensive than simple random sampling.
- Also possible without a sampling frame.
Disadvantage: Lower statistical efficiency (more error), as groups are homogeneous.
Design: In designing cluster samples (incl. area samples) we must answer several questions:
1. How homogeneous are the clusters? (Homogeneous clusters = low statistical efficiency; see p. 189.)
2. Shall we seek equal or unequal clusters? Looking at the cluster sizes (preferably equal, not always possible).
3. How large a cluster shall we take? No size is superior.
4. Shall we use a single-stage or multi-stage cluster? (What is this? See p. 190.)
5. How large a sample is needed? Depends on the cluster design, e.g. simple cluster sampling.
Simple cluster sampling = single-stage sampling with equal-size clusters.
- Post-stratification: Use information or demographics that differ between the sample and the population to calculate weights that correct for over- or under-representation.
- Propensity scoring: A second sample from previous research is believed to be more representative of the population than the sample you use. Comparing your sample with the second sample allows you to calculate propensity scores, reflecting the chance that a subject of the second sample would also be included in your sample.
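A minimal sketch of post-stratification weighting, using hypothetical shares: each group's weight is its population share divided by its sample share, so under-represented groups count for more in the weighted results.

```python
def post_stratification_weights(pop_shares, sample_shares):
    """Weight per group = population share / sample share."""
    return {g: pop_shares[g] / sample_shares[g] for g in pop_shares}

# Hypothetical: women are 50% of the population but only 40% of the sample,
# so their responses are weighted up and men's are weighted down.
weights = post_stratification_weights({"women": 0.5, "men": 0.5},
                                      {"women": 0.4, "men": 0.6})
print(weights)   # {'women': 1.25, 'men': 0.833...}
```
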
1. Convenience sampling: Choose whomever you can find; Selection based on convenience.
- No randomization, so the sample is not a good representation of the population.
- The least reliable design, but the cheapest and easiest way to conduct (useful in exploratory study).
Purposive sampling: Attempt to secure a sample that conforms to some determined criteria. There are three types:
2. Judgement sampling: The researcher uses his judgement to select elements that conform to some criterion.
Used in exploratory study.
Example: In a study of labour problems, only talk to those who have experienced it.
3. Quota sampling: Used to improve representativeness by using general characteristics of the population.
Example: gender (50% male and 50% female), race, age, income level, employment status, political party.
With more than one control dimension, each dimension should:
- Have a distribution in the population that we can estimate.
- Be pertinent to the topic studied.
Precision control: Matching on a combination of characteristics.
Frequency control: The overall percentage of those with each characteristic in the sample should match the percentage holding for the same characteristic in the population.
4. Snowball sampling: Individuals are selected and are used to locate others who possess similar characteristics.
Useful if you want to sample subjects that are difficult to identify.
- Inquiry can be made about exclusively internal information, such as attitudes, opinions, expectations and
intentions.
Survey/questionnaire
Questionnaire/survey: Collecting quantitative information through structured questioning.
Motivation to participate
- Interview: What kind of answer is sought, how complete it should be, in what terms it should be expressed (even some coaching).
- Survey: Increase response and encouragement to complete the survey (introduction/intermediate statements)
Non-response error: Responses of participants systematically differ from responses of non-participants. Researcher:
1. Cannot locate the person to be studied.
2. Is unsuccessful in encouraging that person to participate.
Solutions to reduce non-response errors:
- Call-back procedures. > Better than weighting results, as you obtain participants' original answers.
- Weighting results from a non-response sample.
- Substituting someone else for the missing participant. > Ask others from household about this person.
Response error: When the data reported differ from the actual data (mostly with personal interviews).
Participant-initiated error: Occurs when the participant fails to answer fully and accurately.
Interviewer error: When the interviewer's control of the process affects the quality of data.
- Failure to secure full participant cooperation.
- Failure to consistently execute interview procedures.
- Failure to establish an appropriate interview environment.
- Falsification of individual answers or whole interviews (cheating).
- Inappropriate influencing behaviour.
- Failure to record answers accurately and completely.
- Physical presence bias (e.g. young vs. old people).
Telephone interviews:
= Using the telephone to conduct an interview
- CATI: computer-administered telephone survey (no interviewer, disadvantage: higher refusal rate)
Disadvantages:
- Inaccessible households
- Inaccurate or non-functioning numbers
- Limitation on interview length (depends on interest of participant)
- Limitations on use of visual or complex questions
- Ease of interview termination
- Less participant involvement
- Distracting physical environment
Self-administered surveys:
= A questionnaire to be completed by the participant.
- Can be faxed/mailed (not e-mail). People can also be intercepted via paper in central locations.
- How to reduce non-response error?
Follow ups, preliminary notification, concurrent techniques, like:
o Short questionnaires length
o Survey sponsorship
o Return envelopes and postage for mail-surveys
o Personalization
o Cover letters
o Anonymity
o Size, reproduction, cover
o Money incentives
o Deadline dates
o Total design method: Identify the aspects of the survey process that affect the response rate and then
organize the survey effort so the design intentions are carried out in detail. Minimize the burden on
participants by:
Survey easy to read
Offers clear response directions
Includes personalized communication
Provides information about the survey in a cover letter
Are followed by researcher contacts to encourage response.
Web-based surveys:
= Computer-delivered self-administered questionnaires (online).
E-mail, website, pop-up window.
1. Target web survey: Researcher has control over who is allowed to participate in the survey (e-mail).
2. Self-selected survey: Researcher has no/very-limited control on who is responding (pop-up window).
3. Social-media-based survey: Between targeted and self-selected surveys (based on social media contacts; snowball sampling).
Mixed mode: Combining several survey methodologies.
Optimal communication approach: Answers research question and deals with constraints in time, budget, HR.
Evaluation of web-based surveys:
- Challenge to draw the right sample
- Costs: high starting investment, after that, low cost
- Non-response: high (e.g. invitations ending up in spam).
Interview techniques:
The success of a study depends on the interpersonal and communication skills of the interviewer.
Structured:
- Interviewer follows the exact wording of the questions.
- Interviewer must learn the objectives per question, to obtain satisfying answers.
- And uses probing:
Probing: Technique of stimulating participants to answer more fully and relevantly. Probing styles:
- A brief assertion of understanding and interest: I see, yes, aha.
- An expectant pause.
- Repeating the question: Did the participant understand the question?
- Repeating the participant's reply.
- A neutral question or comment: “How do you mean?” or “Can you tell me more about it?”
- Question clarification: “I’m not sure I understand, can you tell me more?”
Observations: Observation qualifies as scientific inquiry when it is conducted specifically to answer a research question, is systematically planned and executed, uses proper controls, and provides a reliable and valid account of what happened.
Structured observation
Structured observation: Systematically record behaviour along predefined aspects.
Two dimensions: Direct vs. indirect observation; Concealed vs. not concealed observation.
- Direct obs: When the observer is physically present and personally monitors what takes place.
- Indirect obs: When the recording is done by mechanical, photographic or electronic means.
- Concealed obs: The participant is not aware of the observer’s presence (> Ethics).
- Not concealed: The participant is aware of the observer’s presence (> Method reactivity bias).
Conduct structured observations by using a checklist > Quantifying what is observed.
What can you observe with structured observations? Behaviour and non-behaviour.
Behavioural observation: Observing behaviour.
- Non-verbal: Body movement, motor expressions, exchanged glances, etc.
- Linguistic: Interaction, transfer of information, annoying sounds/words (ah, uh).
- Extra-linguistic: Vocal, temporal, interactional and verbal stylistic behaviour.
- Spatial analysis: How people physically relate to others (distance maintained between each other).
Non-behavioural observation: Not observing behaviour, but records, conditions and processes.
- Record: Analysing historical or current data, public or private data.
- Physical condition: Analysing the conditions of something. E.g. plant safety compliances, inventory.
- Physical process: Analysing the process of something. E.g. manufacturing process, traffic flows.
How can you measure structured observations? Factual vs. inferential, physical traces.
Factual observation: Describes what is happening and what can be seen.
E.g. time and day of the week, environmental factors, product presented, etc.
Inferential observation: Translates what is seen to a concept that cannot be observed.
E.g. Credibility, interest, acceptance, concerns, effectiveness, customer acceptance of product, etc.
Observation of physical traces: Observing measures of wear and measures of deposit; unobtrusive and creative methods.
- Measures of wear: E.g. estimating library book use by looking at the number of torn pages in a book.
- Measures of deposit: E.g. estimating alcohol consumption by collecting and analysing domestic rubbish.
Qualitative interviews: Interviews are usually semi-structured or unstructured. Memory list/intv. guide.
- Unstructured interviews: No specific question/topic list to be covered; mental list of relevant topics.
Flexible and might take another course than originally expected.
Researcher wants to gain insight into what the respondents consider relevant and his interpretations.
- Semi-structured interviews: Question/topic list to be covered; ask questions similarly during all interviews.
Start with specific questions, but allow the interviewee to follow his/her thoughts later on.
Probing techniques are often used. E.g. TV interview journalist with a political decision-maker.
Qualitative interviews
An objective of qualitative interviews is to learn more about the respondents’ viewpoint regarding phenomena.
An interview guide is important when conducting semi-/unstructured interviews, the main functions are:
- Memory list to ensure that the same issues are covered (in every interview).
- Memory list to ensure that the questions are asked in the same way.
> Increases the comparability of multiple interviews.
The more specific the interview guide, the more structured the interview will be, and the less flexible the
interviewer is in responding to the respondents.
Information recording:
- Unstructured interviews can be conducted by two interviewers, but are usually recorded by tape or digitally.
> Advantages: Focus on the conversation (instead of making notes) and you can listen to it again.
> Disadvantages: - People feel uncomfortable; this influences their answering behaviour.
- Technical problems can disturb the interview.
- Transcribing the information recorded is very time-consuming.
The demands on the interviewer in an unstructured interview (why experts are needed):
1. Background information.
2. To be able to direct the interview.
3. To decide whether you’ve heard enough or would like to get more information on the topic.
4. Respondents often expect you to be an expert.
Interviewers should be good at active listening.
Focus groups
Focus group: Panel of people, led by a moderator, who meet to discuss some open questions and topics.
Special form of unstructured group interviews.
Can offer new insights into the topic that would have remained hidden in a one-by-one conversation.
Moderator: Uses group dynamics principles to focus or guide the group in interactions.
Script: Guide for the moderator: introduction, directions for participants, opening question, questions to
ask if the discussion stalls, closing words.
Group size: 6-10 people, but smaller groups can be useful if sensitive issues are discussed.
Group type: Homogenous groups rather than heterogeneous groups.
Participant observation
Participant observation: Qualitative observation approach. More flexible and less structured.
Researchers attempt to immerse themselves fully in the world they research in order to understand it; they become
part of, and participate in, the participants' world.
Two dimensions:
- Whether the observer actively participates.
> No distance; can influence participant’s behaviour.
> The more distant you are as an observer, the more descriptive are your observations.
- Whether the observer is concealed or not. Whether the participant knows that he is being observed.
> A concealed observer reduces bias, but there are ethical issues to concealment.
Classification of observation studies
Classification of observation studies:
1. Completely unstructured: Purpose: generate hypotheses. Tool: field notes (natural setting).
   Example: ethnographic study, in which the researcher becomes part of the culture.
2. Unstructured: Emphasizes the best characteristics of 1 & 4. Tool: uses laboratory facilities,
   like videotaping, to reduce observation time.
3. Structured: Emphasizes the best characteristics of 1 & 4. Tool: uses a structured
   observational instrument in a natural setting.
4. Completely structured: Purpose: test hypotheses. Tool: observation checklist (control).
   Example: observing decision-making according to a structured pattern.
Usually relies on convenience or snowball sampling.
3. Data collection
Specify the details of the task: who, what, when, how, where.
4. Data analysis
Data reduction and categorization (often content analysis).
Field notes: Primary data-collection tool (participant observations). Four principles lead to a higher validity:
1. Direct notes in keywords.
2. Immediate full notes after you leave the setting.
3. Limit observation moment (time you are at the setting).
4. Rich full notes (very complete, everything you noticed).
In assessing the usefulness of secondary data, you need to address the following questions:
1. Information quality: Is the information provided in SD sufficient to answer your research problem?
a. Do the secondary data cover all the information you need?
b. Is the information available detailed enough?
c. Do the data follow the definitions you apply in your research problem?
d. Are the data accurate enough? (Evaluate their source).
2. Sample quality: Do the secondary data address the same population you want to investigate?
a. Do the secondary data refer to the unit of analysis you want to investigate?
b. Is the sample on which the data are based a good representation of the population?
3. Timeliness of data: Were the secondary data collected in the relevant time period? (not out of date).
5. Format: How the information is presented and how easy it is to find a specific piece of information.
> E.g. index, arrangement of information (chronological, alphabetical, etc.).
What is data-mining?
Data-mining: Uncovering knowledge, identifying patterns in data and predicting trends and behaviours from data in
databases stored in data warehouses.
Organizations collect a tremendous amount of information and record it in databases on a daily basis.
With data-mining one searches for valuable information within these large databases (often internal data).
1. Sample: Decide between census (the entire dataset) and a sample of the data.
2. Explore: Identify relationships within the data (explore trends, groups, outliers).
3. Modify: Modify or transform data (e.g. reduction, categorization of data).
4. Model: Develop/construct a model that explains the data relationships.
5. Assess: Test the model to estimate how well it performs by running the model against known data.
Big data: Large amounts of data from the use of the web, mobile phones and customer, credit and debit cards.
- An opportunity, but raises issues of privacy, security and intellectual property rights, and of the validity,
reliability and completeness of the information (why might this not be valid, etc.?).
Content analysis: Technique based on the manual or automated coding of transcripts, documents, articles or even
audio and video material.
Objective: To reduce information to a manageable amount.
Textual information is transformed into numerical data for further statistical analysis.
Quantitative approach: Count the occurrence of words/phrases and detect how far apart they are in a text.
Qualitative approach: Detect the general meaning of a text to categorize it.
Sources: Archival material, recordings, current conversations and obtained material (e.g. interviews).
Advantages:
- Adds to transparency (it’s clear to readers what the researcher did).
- Others can take your textual information and replicate your research.
- Content analysis is unobtrusive and non-reactive.
Disadvantages:
- Quality depends on input.
- Coding procedure is subject to interpretation bias.
- Time-consuming.
The process of content analysis (how to conduct content analysis):
Research problem.
1. Define the population of sources and the selection criteria: know which sources to use and how to select them.
2. Coding procedure: Prescriptive or open analysis; Coding.
3. Coding frame: List of all codes used.
Coding: The process of categorizing and combining the data for themes and ideas and categories and then marking
similar passages of text with a code label.
All fragments that have the same code are about the same theme/idea.
Software packages are available to automate coding (e.g. NVivo, MAXQDA).
Prescriptive analysis: Prior to searching, define words/phrases that you search in texts (create dictionary of key words).
Open analysis: Try to find the general message of the text. (“Read between the lines”).
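The quantitative, prescriptive approach can be sketched as a dictionary-based word count (an illustrative example; the key-word dictionary and the sample text are invented):

```python
import re
from collections import Counter

# Prescriptive analysis: define the dictionary of key words BEFORE searching.
# Hypothetical codes for a customer-satisfaction study.
dictionary = {"price", "quality", "service"}

text = ("The service was slow but the quality was fine. "
        "For this price the quality is acceptable.")

# Tokenize the text and count only the predefined words,
# turning textual information into numerical data.
tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(t for t in tokens if t in dictionary)
print(counts)
```

An open analysis cannot be reduced to a word count in this way; it requires interpreting the general message of the text, which packages like NVivo or MAXQDA support rather than automate.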
Ethnographic studies
Ethnographic studies: Researchers immerse themselves in the lives, culture, or situation they’re studying.
- Data-collection methods: Qualitative. Participant observation, qualitative interviews, secondary data.
Ethnographic studies start with a broad theme and become more focused over time (= funnel approach).
Note taking:
- Quality increases when goals are clearly defined.
- Take notes as soon as possible.
- Include information such as people, place and time.
- Clear distinction between notes of your interpretation, someone else’s interpretation and observations.
Action research
Action research: Focuses on social change or the production of socially desirable outcomes.
Uses observations, interviews, focus groups, questionnaires, secondary data.
Addresses real-life problems and is bounded by the context.
A continuous, reflective process of research and action.
Credibility – the validity of action research is measured on whether the actions solve the problems and realize the
desired change.
Collaborative venture of researchers, participants and practitioners.
Theoretical sampling: Which additional cases would be most useful to build and develop a theory?
> No representativeness considered.
Theoretical saturation: Stop the process when new categories and cases do not improve or add to the understanding
of the phenomenon (i.e. when they are no longer relevant).
Criteria to assess how well a research is conducted (grounded theory), instead of validity and reliability:
1. Fit: How well do categories represent real incidents?
2. Relevance: How useful is the theory for practice?
3. Workability: The quality of the explanation offered; assess whether the theory works.
4. Modifiability: Can the theory be adapted if new data is compared to it? (Must be flexible enough).
Advantages:
- Framework for systematic inquiry into qualitative data.
- Theory development.
Disadvantages:
- Feasibility problems (e.g. researcher can’t be free of pre-theoretical thoughts).
- Very time-consuming because of its iterative character.
- Criticized for not generating theories, but generating categorization systems.
- Multiple case study: Multiple cases are investigated (usually the best option; more robust).
- Single case study: One case is investigated. Sufficient when:
o It is a critical case study (coming close to a longer series of case studies).
o The case is extreme or unique.
o There are pragmatic reasons (convenience).
Chapter 12 Experimentation
Experiments: Studies involving intervention by the researcher beyond that required for measurement. The usual
intervention is to manipulate a variable in a setting and observe how it affects the subjects being studied. The researcher
manipulates the independent or explanatory variable and then observes whether the hypothesized dependent variable is
affected by the intervention.
Uses questionnaires, observations (and sometimes secondary data).
Causal studies/relationships require three types of evidence:
1. There must be a correlation between IV and DV:
- Do IV and DV occur together in the way hypothesized?
- When IV does not occur, is there also an absence of DV?
- When there is more or less of IV, do we find more or less of DV?
2. Time order of IV and DV (IV must occur before DV).
3. Only IV influences the DV (NO EV’s!).
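The first type of evidence (covariation of IV and DV) can be sketched as a simple rate comparison (an illustrative example; the records are invented):

```python
# Hypothetical records: did the intervention (IV) occur,
# and did the outcome (DV) occur?
records = [
    {"iv": True, "dv": True}, {"iv": True, "dv": True},
    {"iv": True, "dv": False}, {"iv": False, "dv": False},
    {"iv": False, "dv": False}, {"iv": False, "dv": True},
]

def dv_rate(records, iv_present):
    """How often does the DV occur in the group where the IV is
    present (or absent)?"""
    group = [r for r in records if r["iv"] == iv_present]
    return sum(r["dv"] for r in group) / len(group)

# Evidence type 1: the DV should occur more often when the IV occurs,
# and less often when the IV is absent.
with_iv = dv_rate(records, True)
without_iv = dv_rate(records, False)
print(with_iv, without_iv)
```

Note that such a comparison only establishes covariation; the time order of IV and DV and the exclusion of extraneous variables still have to be shown separately.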
Advantages:
The ability to uncover causal relationships (and manipulate the IV).
The ability to control extraneous and environmental variables.
o Extraneous variables (Control and confounding variables): Describe background characteristics of the
participants (gender, age, education).
> Researcher can only control these variables through selection of participants.
o Environmental variables: Describe the situation in which the experiment takes place.
> Can be controlled by researcher, should be kept constant.
The convenience and low costs of creating test situations (instead of searching for their appearance).
The ability to replicate findings and thus rule out isolated or idiosyncratic results.
The ability to exploit naturally occurring events (and to some extent field experiments).
Disadvantages:
The artificial setting of the laboratory (can be improved by investment in the facility)
Generalization from non-probability samples (can pose problems despite random assignment).
The number of variables one can include is limited.
Disproportionate costs in select business situations: Applications of experimentation can be expensive.
Focus is restricted to the present and immediate future (predicting distant outcomes, or studying the past, is impossible or difficult).
The designed intervention is not always effective.
Ethical issues related to the manipulation and control of human subjects.
Laboratory experiments: Conducted in an unnatural setting; researchers can fully control the setting/variables.
BUT: Participants are aware that they are participating in an experiment > their behaviour might differ.
The experimenter effect is problematic, as experimenter and participants interact more than in field experiments.
Field experiments: Conducted in a natural setting; participants are unaware that their behaviour is being monitored.
More heterogeneous group: Reflects the population better than the laboratory experiment.
No/limited control over the research setting > the ability to manipulate the IV is smaller. Other: ethical issues.
Validity in experimentation
Internal validity: If the IV has caused the change in the DV.
External validity: When the results of the experiment can be generalized to some larger population.
Quasi-experimental design:
Experimental group, control group (non-equivalent), no randomization, no control.
> Field experiment: Work with existing groups > No randomization, control and equivalence between groups.
Non-equivalent control group design:
O1 X O2   Experimental group
O3    O4   Control group (no X)
* Compare pre-test results (O1 – O3) to determine the degree of equivalence between the groups.
- Time-series design: Repeated observations before and after the treatment.
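The usual analysis of the non-equivalent control group design (a difference-in-differences comparison, to use the standard name; the O1–O4 means below are invented) can be sketched as:

```python
# Illustrative pre-/post-test means for the non-equivalent control group design.
# O1, O2: experimental group before/after treatment X; O3, O4: control group.
o1, o2 = 50.0, 62.0   # experimental group
o3, o4 = 49.0, 53.0   # control group (no X)

# Degree of equivalence between groups: compare pre-tests O1 and O3.
pretest_gap = o1 - o3

# Treatment effect estimate: the change in the experimental group minus
# the change the control group shows anyway.
effect = (o2 - o1) - (o4 - o3)
print(pretest_gap, effect)
```

A small pre-test gap supports (but does not prove) equivalence between the non-randomized groups; the larger the gap, the weaker the causal claim.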
The experimental approach can also be combined with the survey approach.
Factorial surveys: = Vignette research.
Researcher presents the respondent with a brief, explicit description of a situation (description = IV); and then asks
him/her to assess the situation / make a decision (answer = DV).
Testing effect: People know what to expect because they were (pre-)tested before.
Make sure that there’s sufficient time between the tests to reduce the testing effect.
This is why the pre-test is often left out in social studies (prevents testing effect).
Testing effect: O1 affects O2.
Reactivity effect: O1 affects X.
Disguised (indirect): Designed to conceal the question's / survey’s true purpose. WHY?
To avoid bias if it is about a sensitive, boring or difficult topic.
Useful when we seek information that is available from the participant, but not at the conscious level.
Disguising the sponsor for strategic reasons or if name influences answering behaviour.
Multiple-choice strategy: There are more than two alternatives, but you can only choose one answer.
- Multiple-choice questions. Problems when using this strategy can be:
One or more responses have not been anticipated.
The list of choices is not exhaustive or not mutually exclusive.
One question can be divided into several questions.
Order and balance of choices (do not put the correct answer in the middle, first or
last position; offer as many positive as negative choices).
Unidimensional scale (different aspects of the same dimension).
Checklist response strategy: Like the multiple-choice strategy, but you can choose more than one answer. Order is unimportant.
Rating response strategy (rating questions): Gradations of preference, interest or agreement; participants can
position each factor on a scale. Order unimportant.
Ranking response strategy: The relative order of alternatives is important (e.g. order your top 3).
1. Validity
Two major forms of validity:
External validity: The data’s ability to be generalized across persons, settings and times.
Internal validity: The ability of the research instrument to measure what it is meant to measure.
a. Content validity: Does the measurement instrument cover the investigative questions? Is all included? (Content).
Good content validity: Representative sample and instrument covers all relevant topics (= subjective).
Judgemental evaluation: The researcher judges content validity through careful definition of the topic, the items and the scales.
Panel evaluation: Panel of people/judges judge content validity.
b. Criterion-related validity: Success of measures used for prediction or estimation of e.g. behaviour.
2. Reliability
Reliability: A measure is reliable to the degree that it supplies consistent results (under different times/conditions).
If a measurement is not valid, it hardly matters if it is reliable – because the measurement instrument does not
measure what the designer needs to measure in order to solve the research problem.
3. Practicality
Practicality has been defined as economy, convenience and interpretability.
a. Economy:
Limit the number of measurement questions, to limit the measurement time (and thus costs).
Choice of data-collection method (a personal interview is more expensive than an online survey).
b. Convenience: The measuring device needs to be easy to use and apply.
c. Interpretability: Relevant when people other than the test designers must interpret the results. To make interpretation possible, provide:
State the functions the test was designed to measure and the procedure by which it was developed.
Detailed instructions for administration.
Scoring keys and instructions.
Norms for appropriate reference groups.
Evidence about the reliability.
Evidence regarding the inter-correlations of sub-scores.
Evidence regarding the relationship of the test to other measures.
Guides for test use.
Response methods
To quantify dimensions that are essentially qualitative, rating or ranking scales are used.
Rating scales: When variables are individually rated. There are many different sample rating scales:
- Numerical scale: Participants write a number from the scale next to each item. Data type: ordinal or interval.
  E.g. "Very good 5 4 3 2 1 Very bad: Employees' cooperation ___, Employees' knowledge ___".
- Multiple rating list scale: Similar to the numerical scale, but it accepts circled responses from the rater
  and the layout permits visualization of results. Data type: interval.
  E.g. "Please indicate how important or unimportant each service characteristic is:
  Fast, reliable repair 7 6 5 4 3 2 1; Service at my location 7 6 5 4 3 2 1".
- Fixed sum scale: Discover proportions; up to 10 categories may be used. Data type: ratio.
  E.g. "Relative importance: Subject one __, Other subjects __, Sum 100".
- Stapel scale: Alternative for the semantic differential scale, when it is difficult to find
  bipolar adjectives (e.g. fast, slow). Data type: ordinal or interval*.
  E.g. "Company name X: +3 +2 +1 Technology leader -1 -2 -3; +3 +2 +1 Exciting products -1 -2 -3".
- Graphic rating scale: Enables the researcher to discern fine differences, e.g. with smiley faces
  or how much pain you are in. Data type: ordinal, interval* or ratio*.
  E.g. "How likely are you to recommend X to others? Very likely |---------| Very unlikely.
  Place an X at the position along the line that best reflects your judgment".
Ranking scales: Compare variables and make choices among them. There are different sample ranking scales:
- Paired-comparison scale: Choosing between two objects; when there are more than two objects,
  this produces a choice for every pair. Data type: ordinal.
  E.g. "Choose per question the most favourable answer: 1. X or Y 2. X or Z 3. Y or Z".
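Turning paired-comparison answers into an ordinal ranking can be sketched by counting wins (an illustrative example; the objects and choices are invented):

```python
from collections import Counter

# Hypothetical paired-comparison answers: each tuple is (chosen, rejected).
choices = [("X", "Y"), ("X", "Z"), ("Y", "Z"),
           ("X", "Y"), ("Z", "Y"), ("X", "Z")]

# Count how often each object wins a pairwise comparison;
# the win counts yield an ordinal ranking of the objects.
wins = Counter(chosen for chosen, _ in choices)
ranking = [obj for obj, _ in wins.most_common()]
print(ranking)
```

The resulting data are ordinal: the win counts order the objects, but the distances between them carry no meaning.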
H0: No difference/relationship.
H1: Difference/relationship.
In research, you want to reject H0 and thereby reinforce H1 (NOT PROVE it!).
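As a small illustration of rejecting H0 (the group data are invented; the exam itself does not require such calculations), a pooled two-sample t statistic can be computed with the standard library:

```python
import math
import statistics

def pooled_t(sample_a, sample_b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Hypothetical scores of two groups.
group_a = [5, 6, 7, 6, 6]
group_b = [8, 9, 8, 9, 8]

t = pooled_t(group_a, group_b)
# The critical value for df = 8 at the 5% level is about 2.306; |t| exceeds
# it here, so H0 ("no difference") is rejected and H1 is reinforced, not proven.
print(t)
```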
An overview (PPT) will be provided during the exam. You should be able to pick the right cell within that overview based on the
situation provided. (Formulas will NOT be asked; you do NOT have to calculate these things!)
General
Data-collection methods:
- Interviews
- Questionnaires
- Focus groups
- Observations
- Secondary data
Other qualitative approaches:
- Experiments
- Action research
- Case studies
- Ethnographic research
- Content analysis
- Narrative analysis
- Grounded theory
Action research:
Observations, secondary data, interviews, questionnaires, focus groups (everything).