Summary BRM

Business Research Methods (Zuyd Hogeschool)


SV BRM Exam

Chapter 1 The nature of business and management research


What is (business) research? Research is always problem-solving based.
Research: Systematic inquiry that provides information to solve problems.
Business research: Systematic inquiry that provides information to guide business decisions.
3 reasons for the growing interest in business research:
1. The need for more and better information as decisions become more complex.
2. The availability of improved techniques and tools to meet this need.
3. The resulting information overload if discipline is not employed in the process.

Which different types of research are available?


There are four different kinds of research/studies:
1. Reporting study: Provides a summation of data or generates statistics (a report).
- Little inference (= conclusion drawing).
- Calls for knowledge and skill in using information sources and dealing with gatekeepers.
- Example: a report summarizing last quarter's sales per region.
2. Descriptive study: Answers who, what, when, where and how questions by observing and describing
a subject or event (research variable).
- Deficiency: cannot explain why an event has occurred or why variables interact the way they do.
- Most popular in business research.
- Example: a survey describing which customers buy a product, and where.
3. Explanatory study: Answers why and how questions, by explaining the reasons for a phenomenon that
the descriptive study has only observed.
- Correlational study: Studies the relationship between two or more variables.
- Uses theories/hypotheses to explain why a certain phenomenon occurred.
4. Predictive study: Predict when and in what situations an event might reoccur.
- Is rooted as much in theory as in explanation.
- High level of inference (= conclusion drawing).
- Objective of control: Being able to replicate a scenario and dictate a particular outcome.

What is research? Why should there be any question about the definition of research?
Research is a systematic enquiry with the objective of providing information capable of solving the research problem. This
definition is rather general and fits all types of research. Questions regarding the definition of research often arise because
research can have various purposes; in particular, we distinguish between reporting, descriptive, explanatory and predictive
research. Depending on the purpose, the kind of information to be obtained differs.

Applied research: Has a practical problem-solving emphasis (the 'problem' is not always negative; it can also be an
opportunity). It is conducted to solve or provide answers to a real management or business problem.
Pure/basic research: Provides answers to questions of a theoretical nature. It is less motivated by business
considerations, but more by academic considerations.

What is the difference between good and poor/unprofessional research?


Good research: Generates data that we can trust, as research is professionally planned and conducted.
Poor research: Generates data that we cannot trust, as research is poorly planned and conducted.

Describe characteristics of the scientific method:


Good research follows the structure of the scientific method. Good research must (examples):


1. Clear purpose and focus. (As unambiguous as possible, any decision or problem should include its scope, limitations
and precise meanings of words).
2. Plausible goals.
3. Detailed research process: Follow defensible, ethical and replicable procedures. (proposal, replicability).
4. Provide evidence of objectivity.
5. Research design thoroughly planned (objective results, representativeness of sample, minimize personal bias).
6. High ethical standards applied.
7. Limitations frankly revealed = reporting of procedures should be complete and honest.
8. Appropriate analytical techniques should be used = adequate analysis for decision-makers’ needs (validity,
reliability, probability of error, findings that have led to conclusions).
9. Findings presented unambiguously (open to only one interpretation).
10. Conclusions justified (data provides evidence).
11. Reports of findings and conclusions should be presented clearly.
12. Report should be professional in tone, language and appearance.
13. Researcher’s experience reflected; important for confidence in the research report.

In which different research philosophies is research embedded?


Research is based on reasoning (theory) and observations (data or information). The two main research philosophies are
positivism and interpretivism. Between these two positions various other research philosophies exist (realism).

Positivism vs. interpretivism:

Basic principles
- View of the world: Positivism: the world is external and objective. Interpretivism: the world is socially constructed and subjective.
- Involvement of researcher: Positivism: the researcher is independent. Interpretivism: the researcher is part of what is observed and sometimes even actively collaborates.
- Researcher’s influence: Positivism: research is value-free. Interpretivism: research is driven by human interests.

Assumptions
- What is observed? Positivism: objective, often quantitative, facts. Interpretivism: subjective interpretations of meanings.
- How is knowledge developed? Positivism: by reducing phenomena to simple elements representing general laws. Interpretivism: by taking a broad and total view of phenomena to detect explanations beyond the current knowledge (look at the totality).
- Type of study: Positivism: quantitative. Interpretivism: qualitative.

Realism: Shares principles of positivism and interpretivism.


- Research requires the identification of how people interpret and give meaning to the setting they’re in.
- Positivism: It accepts the existence of a reality independent of human beliefs and behaviour.
- Interpretivism: It accepts that understanding people and their behaviour requires subjectivity.
→ Critical realism: A branch of realism. Recognizes the existence of a gap between the researcher’s concept of
reality and the ‘true’ but unknown reality.

Scientific research: A process that combines induction, deduction, observation and hypothesis testing
into a set of reflective thinking activities.

Two different approaches of scientific reasoning:


1. Deduction (Positivism): Theory > Prediction (hypothesis) > Observation > Analysis & Conclusion.
- A conclusion is derived by logical reasoning.


- Reasons given for the conclusion must agree with the real world (true).
- The conclusion must follow from the reasons (valid).
2. Induction (Interpretivism): Observation > Analysis & Conclusion (= hypothesis, not proven yet).
- A conclusion is derived from observations of the real world.
- The hypothesis (= conclusion) is plausible if it explains the facts.
- The conclusion explains the facts, and the facts support the conclusion.
> Other conclusions/explanations may fit the facts as well. E.g. a €1 million campaign, but sales do not increase:
bad campaign, insufficient stock, employee strike, hurricane, etc.

Combining induction and deduction (‘double movement of reflective thought’ – John Dewey):
1. Induction: Observing → Hypothesis.
2. Deduction: Hypothesis testing (through new observations) → Does the hypothesis explain the facts?
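The 'double movement' above can be sketched in code. A minimal, hypothetical Python simulation (all numbers invented for illustration): a first batch of observations suggests a hypothesis (induction), which is then tested against fresh observations (deduction).

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Induction: observe weekly sales (units) before/after a campaign
# and form a hypothesis from what we see.
before = [random.gauss(100, 10) for _ in range(30)]
after = [random.gauss(120, 10) for _ in range(30)]
observed_lift = sum(after) / len(after) - sum(before) / len(before)
hypothesis = observed_lift > 0  # "the campaign increases sales"

# Deduction: test the hypothesis against NEW observations,
# not the ones that suggested it.
new_before = [random.gauss(100, 10) for _ in range(30)]
new_after = [random.gauss(120, 10) for _ in range(30)]
new_lift = sum(new_after) / len(new_after) - sum(new_before) / len(new_before)

supported = hypothesis and new_lift > 0
print(f"observed lift: {observed_lift:.1f}, lift in new data: {new_lift:.1f}")
print("hypothesis supported by the new data:", supported)
```

Testing on data that did not generate the hypothesis is what keeps the reasoning from becoming circular, which is exactly Dewey's point.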

Building blocks of research: Concepts, constructs, definitions, variables, propositions, hypotheses, theories, models.
→ Concepts and constructs are used at the theoretical level; variables are used at the empirical level.

Difference between concepts and constructs:


Concept: E.g. table, height of the table, etc.
- Fairly concrete.
- A tangible object or its properties.
- Culturally shared and accepted.
Importance of concepts:
- The success of research depends on how clearly we conceptualize and how well others understand the concept we
use.
Problems with concepts:
- People differ in meanings they include under any particular label.

Construct: E.g. presentation skills > social skills, self-confidence, body language, knowledge of the subject, etc.
- More abstract.
- Intangible.
- Specifically developed for research purposes.
- Can combine multiple concepts or constructs.


Are the following words concepts or constructs?


a) First-line supervisor Concept
b) Employee morale Construct
c) Assembly line Concept
d) Overdue account Concept
e) Line management Concept
f) Leadership Construct
g) Price–earnings ratio Concept
h) Union democracy Construct
i) Ethical standards Construct

Difference between operational definition and dictionary definition:


Dictionary definition: The concept is described in a way that makes the definition applicable in many situations.
→ A broad understanding of the concept.
Operational definition: The concept is described for the specific purpose of the research.
→ A narrow understanding of the concept.

Variable: A symbol to which we assign a numeral or value.


- Dichotomous variables: Have two values, e.g. yes/no, male/female → 0 or 1.
- Continuous variables: Infinite values, e.g. income, age, temperature, test score, etc.

Difference between concept and variable:


A concept is more abstract than a variable. The variable is the measurable representation of the concept.

Variables in a model (e.g. participating in training leads to higher productivity):


- Independent variable (IV): Predictor variable. Causes a dependent variable to occur.
→ The IV describes direct influencing factors (the IV leads to a DV).
→ E.g. training.
- Dependent variable (DV): Outcome variable. The DV describes what is investigated/explained.
→ E.g. IV participation in training → DV productivity.
- Moderating variable (MV): Interaction variable. A second IV, which affects the IV–DV relationship.
→ Whether a given variable is treated as an IV or MV depends on the hypothesis, the researched relationship
between the variables.
→ E.g. age, amount of sleep/rest, etc.
- Intervening variable (IVV): Mediating variable. A factor that theoretically affects the DV, but cannot
be observed or has not been measured.
→ The DV is reached through (or partly thanks to) the IVV: IV → IVV → DV.
→ E.g. IV participation in training → IVV skills → DV productivity.
- Control variables (CV): Variables that have little or no effect on the core of the problem
investigated, and thus can be ignored.
→ E.g. the effect of the CV weather (sunshine) on DV productivity.
- Confounding variables (CFV): Affect the relation between IV & DV or between MV & DV.
→ A control variable that has not been controlled properly and is related to the IV. Therefore it might give an
alternative explanation for the DV. Always avoid CFVs!
→ E.g. weather, experience or amount of sleep/rest if you test people under different circumstances. Those
who have higher pre-knowledge are more likely to enrol for training (so in the first place it has an influence
on the IV); in the end it will have an influence on productivity (DV).
- Normally the order is IV-DV-MV-CV-IVV. For example: “Training (IV) will lead to higher productivity (DV), especially among
younger workers (MV), when the sun is shining (CV), by increasing the skill level (IVV).”
- Extraneous variable (EV): Stands apart from the model (no relationship).
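The training example above can be turned into a toy simulation. The following Python sketch uses invented coefficients (a hypothetical data-generating model, not from the text) to show the roles of the variables: training (IV) raises skills (IVV), skills drive productivity (DV), and age (MV) moderates the size of the training effect.

```python
import random

random.seed(42)

def productivity(training: int, age: int) -> float:
    """Toy data-generating model; all coefficients are invented.

    IV training (0/1) -> IVV skills -> DV productivity,
    with MV age moderating the size of the training effect.
    """
    moderation = 1.5 if age < 35 else 0.5   # MV: training helps younger workers more
    skills = 50 + 20 * training * moderation + random.gauss(0, 2)  # IVV
    return 0.8 * skills + random.gauss(0, 2)  # DV; the noise stands for extraneous factors

def mean_dv(training: int, age: int, n: int = 200) -> float:
    """Average productivity over n simulated workers."""
    return sum(productivity(training, age) for _ in range(n)) / n

effect_young = mean_dv(1, 25) - mean_dv(0, 25)
effect_old = mean_dv(1, 50) - mean_dv(0, 50)
print(f"training effect on productivity, young workers: {effect_young:.1f}")
print(f"training effect on productivity, older workers: {effect_old:.1f}")
```

Running it shows a larger training effect for the younger group, which is precisely what "age moderates the IV–DV relationship" means.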


Proposition: A statement about concepts that may be judged as true or false if it refers to observable phenomena.
Hypothesis: When a proposition is formulated for empirical research.
- Describes the relationship among variables.
→ Generalization: If the hypothesis is based on more than one case.
→ The virtue of a hypothesis is that it limits what will be studied.
→ Of tentative and conjectural nature.

Difference between hypothesis and proposition:


A hypothesis is testable and usually formulated in explanatory research studies that attempt to test it. A proposition is
derived from rational considerations and is usually the outcome of exploratory research.

There are different types of hypotheses:


1. Descriptive hypotheses: State the existence, size, form or distribution of a variable.
- E.g. ‘8% will lose their job’. Researchers often use research questions rather than a descriptive hypothesis.
2. Relational hypotheses: Describe a relationship between two variables.
- Correlational hypothesis: Non-causal. Variables occur together without implying that one causes the other.
→ E.g. ‘People in the UK give the EU a less favourable rating than do people in France’.
- Explanatory hypothesis: Causal. The existence of/change in one variable causes a change in the other variable.
→ E.g. ‘An increase in family income leads to an increase in the percentage of income saved’.
→ The IV needs to be the sole reason for the existence of or change in the DV.
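A correlational hypothesis is typically checked by computing a correlation coefficient. A minimal sketch with made-up income/savings data; note that a high correlation alone supports only the correlational reading, not the explanatory (causal) one.

```python
def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: family income (k euro) and % of income saved.
income = [20, 30, 40, 50, 60, 70]
saved = [5, 6, 8, 9, 11, 12]

r = pearson(income, saved)
print(f"r = {r:.2f}")  # close to +1: the two variables move together
```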

How do you formulate a solid research hypothesis?


Three conditions that make a good hypothesis. It should be:
1. Adequate for its purpose: Explains what it claims to explain.
2. Testable, following conditions:
o Does not require techniques that are unavailable.
o Does not require an explanation that defies known laws.
o There are consequences that can be deduced for testing purposes.
3. Better than its rivals: It has greater range, probability and simplicity than its rivals.

Theory: The role of theory is explanation.


Definition: A set of systematically interrelated concepts, definitions and propositions that are advanced to explain and
predict phenomena (facts).
- The difference between theory and hypothesis:
→ Hypothesis: A statement relating two variables.
→ Theory: Provides the rationale for why those two variables are related.

The use of theory in research:


- Narrows the range of facts we need to study.
- Suggests which research approaches are likely to yield the greatest meaning.
- Suggests a system for the researcher to impose on data in order to classify them in the most meaningful way.
- Summarizes what is known about an object of study, and states the uniformities that lie beyond
immediate observation.
- Can be used to predict any further facts that may be found.

Models: The role of models is representation.


- The functions of models are description, explication and simulation.

Three major functions of modelling:


- Descriptive models: describe the behaviour of elements in a system where theory is inadequate or non-existent.

- Explicative models: extend the application of well-developed theories or improve our understanding of their key
concepts.
- Simulation models: clarify the structural relationships of concepts and attempt to reveal the process relationship.

Difference between theories and models:


The difference between a theory and a model is that the purpose of a theory is to explain a certain phenomenon, while the
purpose of a model is to represent the phenomenon. However, in business studies theories are often the foundation of
models.

What position would you take and why? ‘Theory is impractical and thus no good’ or ‘Good theory is the most practical
approach to problems’.
The question addresses the problem of the use of theories. Common to all theories is that they give a simplified
representation of reality. Thus the usefulness of theories needs to be assessed against the tension between their abstract,
often simplified picture of reality on the one hand, and the complexity of reality on the other. It should also be
considered that only theories allow us to provide real explanations, i.e. explanations that are reasoned and apply to more
than just one specific phenomenon.

Chapter 2 The research process and proposal


The research process:

Management question hierarchy:


1. Management dilemma: Problem/opportunity.
→ Focused, narrowly defined and relevant.

2. Management questions: How can we solve this problem? / How can we respond to this opportunity?
→ Exploration phase (find public data, literature review).
→ 3 categories:
o Choice of purpose or objectives: What do we want to achieve?
o Generation and evaluation of solutions: How can we achieve the ends we seek?
o Troubleshooting or control situation: Why does ... incur the highest costs?
3. Research questions: Should management use Strategy A? Yes/no. Strategy B? Yes/no. Etc.
→ Hypothesis of choice that best states the objective of the study.
→ Fine-tune when the exploration phase is completed, and set the scope (what is not included?).
4. Investigative questions: What are the effects of Strategy A? Strategy B? Etc.
→ Foundation on which the data-collection instrument is based.
→ Reveal the specific pieces of information one needs to know in order to answer the research question.
→ Qualitative research: forms the core of the interview guide.
→ Quantitative research: identifies the concepts that need to be measured.
5. Measurement questions: What you will ask in a survey/interview/focus group/observation.
→ Pre-designed or custom-designed questions; thereafter, pilot testing.
6. Decision: What is the recommended action given the research findings?

Formulation of the research dilemma


- Theoretical considerations related to the problem are mentioned.
- Should address non-trivial problems.
- The problem is narrowly defined.
- Relevant; contributes to the field.
- Issue: Politically motivated research. A manager's motives for seeking research are not always obvious. Managers might
or might not express a genuine need for specific information on which to base a decision.
- Issue: Ill-defined management problems. Some categories of problem are so complex, value-laden and bound by
constraints that they prove to be intractable to traditional forms of analysis.
- Issue: Unresearchable questions; not all questions can be researched.

Research design
- The blueprint for fulfilling objectives and answering questions.
- The strategy for a study and the plan by which the strategy is to be carried out.
- It specifies the methods and procedures for the collection, measurement and analyses of data.
- Design strategy: type, purpose, scope, time frame, environment

Sampling design
= Identify the target population and select a sample to represent this population.
- Researchers take a sample when they are interested in estimating one or more population values and/or testing one or
more statistical hypotheses.
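The idea of estimating a population value from a sample can be sketched in a few lines. A hypothetical example (invented satisfaction scores) using a simple random sample:

```python
import random

random.seed(7)

# Hypothetical target population: satisfaction scores of 10,000 customers.
population = [random.gauss(7.0, 1.5) for _ in range(10_000)]
population_mean = sum(population) / len(population)

# Sampling design: a simple random sample of 200 respondents
# chosen to represent the whole population.
sample = random.sample(population, 200)
sample_mean = sum(sample) / len(sample)

print(f"population mean: {population_mean:.2f}")
print(f"sample estimate: {sample_mean:.2f}")
```

The sample estimate lands close to the true population mean at a fraction of the data-collection cost, which is the whole rationale for sampling.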

Pilot-testing
- Conducted to detect weaknesses in design and instrumentation, and to provide proxy data for selection of a probability
sample.

Data collection
- Definition of data: the facts presented to the researcher from the study’s environment. Data may be characterized
further by their abstractness, verifiability, elusiveness and closeness to the phenomenon.
- Capturing data is elusive; complicated by the speed at which events occur and the time-bound nature of observation.
- Data are edited to ensure consistency across respondents and to locate omissions.
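The editing step described above (consistency checks and locating omissions) can be sketched as follows; the records and the plausible-age rule are invented for illustration:

```python
# Hypothetical raw survey records; None marks an unanswered question.
records = [
    {"id": 1, "age": 34, "satisfaction": 8},
    {"id": 2, "age": None, "satisfaction": 7},
    {"id": 3, "age": 29, "satisfaction": None},
    {"id": 4, "age": 45, "satisfaction": 9},
]

# Editing step 1: locate omissions per respondent.
omissions = {r["id"]: [k for k, v in r.items() if v is None] for r in records}
print("omissions:", {i: ks for i, ks in omissions.items() if ks})

# Editing step 2: a simple consistency check (age within a plausible range).
inconsistent = [
    r["id"] for r in records
    if r["age"] is not None and not 16 <= r["age"] <= 99
]
print("records with implausible age:", inconsistent)
```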


Analysis and interpretation


- Data analysis usually involves reducing accumulated data to a manageable amount, developing summaries, looking for
patterns and applying statistical techniques.
- Researchers must interpret these findings in light of the client’s research question or determine if the results are
consistent with their hypotheses and theories.

Reporting the results


- The researcher writes a report, to transmit the findings and recommendations to the manager for the intended
purpose of decision-making.
- The report should be developed from the manager’s perception  solving the management dilemma.
- To make the communication relevant, the report should follow these criteria:
o Insightful adaptation of the information to the client’s needs.
o Careful choice of words in crafting interpretations, conclusions and recommendations.

Resource allocation and budgets


Three types of budget:
- Rule-of-thumb budgeting: taking a fixed percentage of some criterion.
- Departmental or functional area budgeting: a percentage of total expenditures for research.
- Task budgeting: specific research projects supported on an ad hoc basis.

Research evaluation
- Ex post facto evaluation: evaluation afterwards.
- Prior or interim evaluations

Research proposal: Includes management question hierarchy and research design.


→ Presents a problem, discusses related research efforts, outlines the data needed for solving the problem and shows
the design used to gather and analyse data (includes the management question hierarchy).
→ Usually submitted in response to a request for proposal (RFP).
→ Content of the proposal depends on whether it is an exploratory, small-scale or large-scale study.
→ Internal proposal: Within a company; usually small and solicited (request for proposal (RFP)).
→ External proposal: Can be solicited or unsolicited; prepared by an outside firm to obtain a contract.

Benefits of a research proposal (to sponsor and researcher):


- Evaluate a research idea (the problem is clear and well defined).
- Ensure that sponsor and researcher agree on the research question.
- Provides a logical guide for the investigation.
- Offers the opportunity to spot flaws in an early stage of the research.
- Decide if the research goal has been achieved (by comparing final product with proposal).
- Encourages researcher to plan the project so that work progresses steadily towards a deadline.

Structure of the research proposal:


1. Executive summary.
2. Problem statement: Includes management dilemma, background, consequences, management question.
3. Research objectives: Includes management question hierarchy and hypotheses.
4. Literature review: Discusses related research efforts (recent/significant literature).
5. Benefits of study: Describes importance of the study.
6. Research design: Provides a detailed plan.
7. Data analysis: A brief section on the methods used for analysing the data (large scale projects).
8. Nature/form results: Show that each goal has been covered; specify types of data to be obtained and
interpretations that are made.

9. Qualifications of researchers: Professional research competence, relevant management experience.


10. Budget
11. Schedule
12. Facilities/special resources
13. Project management: Show the sponsor that the research team is organized in a way that will enable it to
succeed.
14. Bibliography
15. References.
16. Appendix: Measurement instruments, glossary, etc.
17. Others: Qualifications of researcher, budget, schedule, facilities and resources, etc.

Evaluating the research report


- Formal; list of criteria and an associated point scale.
- Informal; more qualitative.

Some questions are answered by research and others are not. Distinguish between them.
One cannot investigate research problems that are essentially value questions or that are ill-defined, and one should be careful
when engaging in politically motivated research. An example of a value question is whether a company should close a
certain production facility because it is currently unprofitable, or whether it should bear the losses for a longer time. The
answer to this question cannot be found by research, as it mainly depends on the values you hold. An example of an
ill-defined management problem is: how could we become more profitable? It is certainly possible to investigate drivers of
profitability, but not all drivers can be investigated. Thus, you would need to limit the research problem to the drivers of
profitability.

Chapter 3 Literature review


What is scientific literature?
Four types of scientific literature, ranked from highest quality to lowest:
1. Articles in journals.
2. PhD theses.
3. Conference proceedings: Most recent, but the quality-control mechanism is less strict.
4. Books: Look at the reputation of the writer.
→ All these literature sources are peer-reviewed. > Peers: academics in the same field.
→ Peer-reviewed: A quality mark; publications that fulfil the quality standards of sound scientific research.

What is a scientific literature review?


A literature review is a critical and in-depth evaluation of previous research. It is a summary of a particular area of research,
allowing anybody reading the paper to establish why you are executing this research. A good literature review expands
upon the reasons behind selecting a particular research question.

What are the purposes of a scientific literature review?


A scientific literature review serves the following purposes:
1. Establish the context of the problem by referencing to previous work.
→ Isolated knowledge has no value; the value increases if you relate it to existing knowledge.
2. Understand the structure of the problem.
- Relate theories and ideas to the problem.
- Identify the relevant variables and relations.
3. Show the reader what has been done previously.
→ Previous work that is related to your study, as you cannot assume that every reader is as knowledgeable as
you are.
- Show which theories have been applied to the problems.

- Show which research designs and methods have been chosen.


4. Rationalize the significance of the problem and the study presented.
→ Show that your idea will make a valuable contribution.
- Synthesize and gain a new perspective on the problem.
- Show what needs to be done in light of the existing knowledge.

Meta analyses (Quantitative approach of literature review)


- Allows you to investigate quantitatively which outcomes are supported by most studies and which outcomes are more
ambiguous. Further, you can identify whether differences in the research set-up explain different outcomes (differences
in sample size, types of population).
- Meta analysis focuses on the quantitative characteristics of a topic, which might lead to finding relationships between
facts or theories that would not have been found using only a narrative approach. It is a very structured approach.
That is also a disadvantage, because it might miss other important evaluation criteria such as methodology and social
context. A major criticism of meta analysis is that it compares apples and oranges. It is also a very time-consuming
activity.
- Advantages:
o Powerful for reviewing a well-investigated field
o Structured approach to summarize cumulated knowledge.
o Able to detect relationships that narrative summaries of a research field did not discover.
o Efficient way to organize and handle a large amount of information.
- Disadvantages:
o It misses important evaluation criteria, such as the methodological quality of a study or its social context, which
can be accounted for in a narrative summary.
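The statistical core of a meta analysis can be illustrated with a standard fixed-effect, inverse-variance pooling of effect sizes. The study effects and variances below are made up; this is a sketch of the general technique, not a procedure from the text:

```python
# Hypothetical studies: (effect size, variance of the effect estimate).
# Larger variance = smaller / less precise study.
studies = [
    (0.30, 0.04),
    (0.45, 0.09),
    (0.20, 0.02),
    (0.55, 0.16),
    (0.35, 0.05),
]

# Fixed-effect model: weight each study by the inverse of its variance,
# so precise studies count more in the pooled estimate.
weights = [1 / var for _, var in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_var = 1 / sum(weights)

print(f"pooled effect: {pooled:.3f} (variance {pooled_var:.4f})")
```

The pooled estimate is more precise than any single study (its variance is smaller than the smallest study variance), which is what makes cumulating evidence across studies worthwhile.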

Meta analysis = highly statistical.


Narrative analysis = quite subjective.
One form ‘in the middle’ → the systematic review, 3 stages:
1. Planning.
2. Conducting the review.
3. Reporting and dissemination.

General problems of literature review:


→ Authors have different styles of thinking and writing that are specific to certain disciplines.
→ There is no perfect review; each is written from a particular perspective (e.g. economic/sociological).

What is the structure of a ‘good’ review?


There is no single best structure for a review. The ingredients of a good literature review:
- Basic ingredients: Ensure that it will give a decent account of the literature and inform the reader about
what has been done so far in the field.
1. Literature relates to the study’s problem statement.
2. Mentions ideas contributing to the exploration or explanation of the study’s problem statement.
3. Summarizes previous studies addressing the current study’s problem statement.
- ‘Seasoning’: Makes it your own work as it reflects your thoughts and assessment of the current literature.
It also points out why your current study makes an important contribution to the field.
4. Discusses the mentioned ideas against the background of the results of previous studies.
5. Analyses and compares previous studies in the light of their research design and methodology.
6. Demonstrates how the current study fits in with previous studies, and shows its specific new
contribution(s).

Critical review: Book or peer review either prior to, or after publishing.

→ Mention weak and strong points and discuss these using specific criteria.
→ Overall verdict on the text and some explanations for it.
→ Criteria used: Contribution to the field, argumentation, methods and analysis, writing.

Writing a literature review is an iterative process of three tasks:


1. Searching information (literature).
→ Use libraries’ online catalogues and bibliographic databases or indexes.
2. Assessing the information obtained.
→ Skimming, reading and evaluating research (it must be relevant and add to your information/arguments).
> Criteria: Prominence, date of publication, methodology, comparability, uniqueness.
3. Synthesizing the assessment of information.
→ Compare studies to identify differences and congruencies.
→ Explain or interpret the differences and congruencies.
→ Two goals: 1. Provide an account of the development of a field or topic. 2. Serve as part of the study.

What are promising search strategies?


The major problem of every search is how to find the right sources. On the one hand you want to ensure that you do not
miss a relevant source. But on the other hand you are afraid to suffer from an information overload, i.e. you get so much
information that assessing it becomes impossible. Successful search strategies combine expansive searching with filtering
and selection. Start with an expansive search and then use filters to reduce the pool of sources that needs to be assessed.
Once you have reduced the pool, it is wise to expand it again, e.g., by looking which sources the selected sources cite.

Literature sources: Primary and secondary literature sources.


Primary sources: Full-text publications of theoretical and empirical studies, which represent the original work.
→ Academic journals, professional/trade journals, books, newspapers, public opinion journals, conference
proceedings/unpublished manuscripts, reports, research projects.
Secondary sources: A compilation of primary literature.
→ Indexes, bibliographies, dictionaries, encyclopaedias, handbooks and directories.

Problems with internet sources:


- A great deal of valuable information in books, journals and other print sources is not available online, or only at a high cost.
- Not always credible; the moderator of the website is not always known.

How do you decide which literature should be included in a review? Criteria:


Prominence: Peer-reviewed? How often has the piece been cited in the work of others?
Date of publication: How old is the piece?
Methodology: Do you think the authors of the piece provide sound research?
Comparability: Does the piece relate to the arguments you make in your research, either by agreeing with you or disagreeing with you?
Uniqueness: How unique is the piece, or does it just state what has been said before?

Three types of literature reviews:


- Narrative literature review: Potentially subjective.
 Literature review is not always exhaustive enough or unbiased.
 Mainly addresses and evaluates different theoretical perspectives to a research problem.
- Meta-analysis: Highly statistical (and objective).
 + Structured approach to summarize, compare and analyse different empirical studies addressing the same
research question (including statistical methodology).
 – Limited to quantifiable characteristics (e.g. cannot include the methodological soundness of studies).

 – Limited to summarizing the results of studies (only useful if many empirical studies on the problem exist).
- Systematic review: Lies between a narrative literature review and a meta-analysis.
 Assesses a complete set of relevant studies covering the field. Provides insight in the overall picture.

Effective criticism
- Base your criticism on an assessment of weaknesses and strengths.
- Criticize theories, arguments, ideas and methodology (not the authors).
- Reflect on your own critique, providing reasons for the choices you have made and pointing out weaknesses in your
own criticism.
- Treat the work of others with respect; give a fair account of the arguments/views of others when summarizing.

A literature review can also be a scientific contribution on its own.


 Goal: The evaluation of a research field through assessing a complete set of the relevant studies covering the field.
Structures:
- Chronological
o Show how the field changed over time.
o Only advisable with a few development paths, if there are many parallel developments, it becomes chaotic.
- Structure along schools
o Just provide summaries per school or perspective.
o Risk: overemphasizes differences between schools.
- Structure along outcome variables or antecedents.

Chapter 4 Ethics in business research


Ethics: The study of ‘right behaviour’; it addresses how research should be conducted in a moral and responsible way.
 The goal of ethics in research is to ensure that no one is harmed or suffers adverse consequences from
research activities.
Ethics are moral principles, norms or standards of behaviour that guide moral choices about our behaviour and our
relationships with others. Examples in business research:
- Conflict between request of the sponsor/right of participants
- Societal values; privacy, freedom and honesty.

Two standpoints on research ethics:


- Deontology: The ends never justify means that are questionable on ethical grounds.
- Teleology: The morality of the means has to be judged by the ends served.
 The benefits of a study are weighed against the costs of harming the people involved.
 As a researcher, you have the responsibility to find the middle ground (ethical standards) between being
completely code-governed and ethical relativism.
 Address ethical issues in research design, procedures and protocols during the planning process, not as an afterthought.
 Ethical research requires personal integrity from the researcher, project manager and sponsor.

Treatment of participants  research must be designed so a respondent does not suffer physical harm, discomfort, pain,
embarrassment or loss of privacy. Researchers should follow three guidelines:
1. Explain the benefits of the study.
2. Explain the participant’s rights and protection.
3. Obtain informed consent.

Deception: Occurs when the participant is told only a part of the truth or when the truth is fully compromised.
> Reasons for deception:
1. To prevent biasing participants before the survey or experiment.
2. To protect the confidentiality of a third party (e.g. sponsor).

 The benefits of deception should be balanced against the risks to participants.


 In case of deception, the participant must be debriefed.

Informed consent: Fully disclosing the procedures of the research design before requesting permission to
proceed with the study.
1. Introduce yourself.
2. Briefly describe survey topic.
3. Describe target sample.
4. Tell who the sponsor is.
5. Describe the purpose of the research.
6. Give an estimate of the time required to complete the interview.
7. Promise anonymity and confidentiality (when appropriate).
8. Tell that participation is voluntary.
9. Tell that item non-response is acceptable.
10. Ask permission to begin.
11. * Conclusion: Give information on how to contact the principal investigator.

Debriefing participants: Involves several activities that follow data collection:
- Explanation of any deception.
- Description of the hypothesis, goal or purpose of the study.
- Post-study sharing of results.
- Post-study follow-up medical or psychological attention.
To what extent do debriefing and informed consent reduce the effects of deception?
The majority of the participants do not resent temporary deception and may have more positive feelings about the value of
the research after debriefing than those who did not participate in the study.

Right to privacy/confidentiality:


- Everybody has the right to the protection of personal data.
- Such data must be processed fairly and on the basis of consent of the person concerned.
- Everyone has the right of access to data which have been collected concerning him or her, and the right to have
them rectified.
 Protect confidentiality:
- Obtaining signed non-disclosure documents.
- Restricting access to participant identification.
- Revealing participants’ information only with written consent.
- Restricting access to data instruments where the participant is identified.
- Non-disclosure of data sub-sets.
 Non-disclosure: Refusal to reveal information.
To address the rights of privacy, ethical researchers:
- Inform participants of their right to refuse to answer or participate.
- Obtain permission to interview
- Schedule field and telephone interviews.

Data collection in cyberspace: The virtual world (World Wide Web). Also applicable to data mining:
- Researchers are obliged to protect human subjects and ‘do right’.
- Cyber-research is particularly vulnerable to ethical breaches.
E.g. blurring between public and private venues, difficulty on obtaining informed consent, etc.
- That an action is permitted, or not precluded, by policy or law does not mean it is ethical or allowable.
- Inquiry must be done honestly and with ethical integrity.


What are ethical dilemmas regarding sponsors?


Unethical sponsors: Some sponsors might ask the researcher to behave in an unethical manner.
This will lead to distorted and untrue results. What the researcher can do:
- Educate the sponsor in the purpose of research.
- Explain the researcher’s role in fact-finding versus the sponsor’s role in decision-making.
- Explain how distorting the truth or breaking faith with participants leads to future problems.
- If moral persuasion fails, terminate the relationship with the sponsor.

Non-disclosures
- Sponsor non-disclosure
- Purpose non-disclosure
- Findings non-disclosure

What is the responsibility of the researcher?


 Responsibilities regarding participants:
- Informed consent.
- Debriefing participants (in case of deception).
- Right to privacy/confidentiality.
- Data collection in cyberspace.
 Responsibilities regarding the sponsor (right to quality research):
- Providing a research design appropriate for the research question.
- Maximizing the sponsor’s value for the resources expended.
- Providing data analysis and reporting techniques appropriate for the data collected.
- (Showing data objectively, regardless of the sponsor’s preferred outcome).
- Right to anonymity/confidentiality (sponsor, purpose and findings non-disclosure).
 Responsibilities regarding the team:
- Right to safety (e.g. work area).
- Ethical behaviour of assistants.
- Protection of anonymity (sponsors’ and participants’).
 Responsibilities regarding the research community.
- Conduct proper, ethical research to avoid scepticism and earn confidence.
- Be open and honest about limitations and samples.
- Don’t steer your research towards results you want to have.
- Plagiarism (copying from other sources) and falsification are unethical.

Ethical codes
Effective codes:
- Are regulative
- Protect the public interest and the interests of the profession served by the code
- Are behaviour-specific
- Are enforceable.

Chapter 5 Quantitative and qualitative research


Two types of studies:
- Quantitative studies: Rely on quantitative information (i.e. numbers and figures).
 More structured approach, which directs the researcher more.
- Qualitative studies: Rely on qualitative information (i.e. words, sentences and narratives).
 More likely to obtain unexpected information > Often used for explorative studies.
 Distinction: The kind of information used to study a phenomenon.


For positivistic researchers, the knowledge acquisition process consists of deducing hypotheses (explanations) and testing
those by measuring the reality. Researchers following an interpretivistic approach acquire knowledge more by developing
an understanding of phenomena through a deep-level investigation and analysis of those phenomena.
Deduction  Positivists  Quantitative
Induction  Interpretivist  Qualitative

The quality of any research study does not so much depend on whether it is qualitative or quantitative, but rather it
depends on the quality of its design and how well it is conducted.

What is a research design?


Research design: The strategy for a study and the plan by which the strategy is to be carried out.
 It specifies the methods and procedures for the collection, measurement and analysis of data.
 The essentials of research design:
- The design is an activity- and time-based plan.
- The design is always based on the research question.
- The design guides the selection of sources and types of information.
- The design is a framework for specifying the relationship among the study’s variables.
- The design outlines procedures for every research activity.

What are the major types of research design?


1. Exploratory study.
2. Descriptive study.
3. Causal study.

What are the descriptors classifying research design?


Classifying research design using eight different descriptors:
1. Purpose of the study: Descriptive study vs. Causal (Predictive) study
2. Degree of research question crystallization: Exploratory study vs. Formalized study
3. Method of data collection: Observation vs. Communication vs. * Archival sources
4. Researcher’s ability to manipulate variables: Experimental design vs. Ex-post facto design
(control of variables)
5. Time dimension: Cross-sectional study vs. Longitudinal study
6. Topical scope: Statistical study vs. Case study
7. Research environment: Field conditions vs. Laboratory conditions vs. Simulations
8. Participant’s perceptions: No deviations vs. Some deviations vs. Researcher-induced deviations

1. Purpose of the study.


- Descriptive study: Who, what, where, when, how much?
 A question/hypothesis in which we ask/state something about size, form, distribution or existence of a
variable.
- Causal study: Explanatory study. Why? Tries to explain relationships among variables.
 One variable always causes another and no other variable has the same causal effect.
 Mill’s method of agreement: When two variables have only one condition in common, then that condition
may be regarded as the cause. There is an implicit assumption that there are no variables to consider other
than the ones given. This is never true, as the number of potential variables is infinite.
 Mill’s method of difference: Where the absence of C is associated with the absence of Z, there is evidence
of a causal relationship between C and Z.
o Predictive study: Asks what will happen in the future.
Distinguish between descriptive and causal studies:


A descriptive study is concerned with description, that is the who, what, where, when or how much in observations,
whereas in a causal study relationships between variables are identified, verified and established.

2. Degree of research question crystallization (structure and immediate objective)


- Exploratory study: Goal: Develop hypothesis/questions for further research. Loose structure.
o Relies more heavily on qualitative techniques (instead of quantitative).
o 4 exploratory techniques:
 Secondary data analysis
Background work; start with the organisation’s own data archive.
 Experience survey
Interview persons and seek their ideas about important issues; probing may show whether certain
facilities are available, what factors need to be controlled and how, and who will cooperate in the study.
 Focus groups
Pilot group  include only people who could be respondents.
 Two-stage design
1. Explore, develop the research question.
2. Develop the research design.
- Formal study: Goal: Test hypothesis / answer the research question posed. Precise structure.
o Includes descriptive and causal studies.
o ‘Begins where exploratory research leaves off’
 Distinction: The degree of structure and the immediate objective of the study.

Distinguish between exploratory and formal studies:


Exploratory studies tend to have loose structures with the purpose of discovering future tasks, or to develop future
hypotheses and questions for further research. The goal of a formal research design is generally specific: to test hypotheses
or answer research questions.
Exploratory  induction; formal  deduction.

3. Method of data collection.


- Observation: Monitoring. Researcher records the information available from observations.
- Communication: Interrogation. Researcher questions subjects and collects their responses.
- * Archival sources: Secondary data.
 Qualitative and quantitative studies can rely on both methods of data collection.

4. Researcher’s ability to manipulate variables (control of variables).


- Experimental design: Researcher controls/manipulates the variables in the study.
 Discover whether certain variables have an effect on other variables.
 Most powerful support possible for hypothesis of causation.
- Ex-post facto design: Researcher does not control/manipulate the variables in the study.
 He only reports on what has happened/what is happening.
 He must hold factors constant (no influencing, no bias).

What are similarities between experimental and ex-post facto research designs?
Both designs try to show IV-DV relationship, or causal relationships, basically by:
1. Studying co-variation patterns between variables.
2. Determining time order relationships.
3. Attempting to eliminate the confounding effects of other variables on the IV-DV relationship.
 They often use the same data collection methods.

5. The time dimension.



- Cross-sectional study: Study carried out once, represents a snapshot of one point in time.
- Longitudinal study: Study repeated over an extended period.
 Advantage: Tracks changes over time, more powerful regarding tests of causality.
 Two varieties:
o Panel: Researcher studies the same people over time.
o Cohort groups: Researcher studies different people for each measurement.
 Qualitative and quantitative studies can rely on both time dimensions.

6. Topical scope.
- Statistical study: Designed for breadth, rather than depth.
 Capture population's characteristics by conclusions from a sample's characteristics.
 Census study: Based on the whole population (special case).
- Case study: Designed for depth, rather than breadth.

7. The research environment.


- Field conditions: Design in the actual environmental conditions (natural environment).
 Observe/interrogate people in their usual environment (home, work, shop, etc.).
- Laboratory conditions: Design in staged or manipulated conditions.
 Manipulate the environment, although the laboratory might be designed as the usual environment (e.g.
shopping aisle).
- Simulations: Design in an artificial environment.
 Replicate the essence of a system or process.

8. Participant's perceptions.
When participants believe that something out of the ordinary is happening, they may behave less naturally.
There are three levels of perception:
1. Participants perceive no deviations from everyday routines.
2. Participants perceive deviations, but as unrelated to the research.
3. Participants perceive deviations as researcher-induced (example: mystery shopper).

Three possible causal relationships between two variables:


1. Symmetrical relationship: Two variables fluctuate together, but do not cause a change in one another. (Not
really a causal relationship!)
Example: The correlation between low work attendance and active participation in a company camping club is the
result of another factor, such as a lifestyle preference. (Though they can move symmetrically.)
2. Reciprocal relationship: Two variables mutually influence or reinforce each other.
Example: Self-confidence and performance (Reinforce each other).
3. Asymmetrical relationship: Changes in one variable (IV) are responsible for changes in another
variable (DV). Four types of asymmetrical causal relationships:
- Stimulus > Response.
 When you are challenged to justify your position during a management meeting your pulse rate
increases rapidly, and you speak out strongly in defence of your position.
- Property > Disposition
 You are a member of a minority ethnic group and this makes you very sensitive to ethnic type
comments by others.
- Disposition > Behaviour
 You have strong opinions about the degradation of our physical environment by some industries; as a
result you are highly selective in choosing the companies with whom you interview for career
opportunities.
- Property > Behaviour

 You have grown up as a member of the upper social class and now follow the typical consumption
practices of that class.

Identification of IV and DV
1. The degree to which each variable may be altered; the relatively unalterable variable is the IV (e.g. age, social status).
2. The time order between the variables (1. IV 2. DV)

Testing causal hypothesis


In testing causal hypothesis projects, we seek three types of evidence:
1. Covariation between IV and DV:
- Do IV and DV occur together in the way hypothesized?
- When IV does not occur, is there also an absence of DV?
- When there is more or less of IV, do we find more or less of DV?
2. Time order of IV and DV (IV must occur before DV).
3. Only IV influences the DV (NO EV’s!).

Important for causation in an experimental design (first two most important):


1. Control; all factors are kept constant (except for IV).
2. Random assignment of subjects to groups (equal chance of exposure to IV).
3. Randomization
4. Matching
Important for causation in an ex-post facto design
We cannot manipulate variables, so we compare subjects that have and have not been exposed to the IV.
- Make cross-classification comparisons instead of the assignment of subjects.
- Warning: Post hoc fallacy: Co-variance between variables must be interpreted carefully when the relationship is
based on ex-post facto design.
o  Thorough testing, validating of multiple hypotheses and controlling for confounding variables are
essential.
o Example: Apple shows high innovation and high profits  difficult to establish the causal relationship.

Chapter 6 Sampling Strategies


What is the importance of the unit of analysis?
Unit of analysis: The level at which the research is performed and which objects are researched.
- The unit of analysis is what you sample. You can only choose 1 unit of analysis.
> E.g. Zuyd Hogeschool, Faculties of Zuyd, Departments of IB Zuyd, etc.
- Variables are the characteristics of the unit of analysis (which objects are described by the variable(s)?).
> E.g. Number of incoming students, number of graduating students, FTE in staff.

The choice of unit of analyses is strongly related to the following 3 questions:


1. What is our research problem and what do we really want to answer?
2. What do we need to measure to answer our research problem?
3. What do we want to do with the results of the study? To whom do we address our conclusions?

Each cell gets its own value. Afterwards you can compare the scores of the different Zuyd faculties.

Unit of Analysis       | Number of incoming students | Number of graduating students | FTE in staff
International Business |                             |                               |
HMSM                   |                             |                               |
Facility management    |                             |                               |


Sampling: Draw conclusions about the entire population, by selecting some elements in a population.
Population element: The unit of study; the subject on which the measurement is being taken.
Population: The total collection of elements about which we wish to make some inferences. E.g. 4000 files.
Census: Obtain information from all elements within the population (e.g. from all 4000 files).

Representative samples are only a concern in quantitative studies rooted in a positivistic research approach. Qualitative
studies rooted in interpretivism usually do not attempt to generalize their findings to a population.

Reasons for sampling:


- Lower costs.
- Greater accuracy (nauwkeurigheid) of results.
 Better interviewing/testing, more thorough investigation of missing, wrong or suspicious information,
better supervision, better processing.
- Greater speed of data collection.
 The larger the sample size, the longer data collection will take. Long collection periods can cause biases as
events might occur (in that period) that influence respondents’ answers.
- Availability of population elements.
 Some situations require sampling. It is the only process possible if a population is infinite.

Two conditions when choosing for a census study instead of sampling:
1. Feasible when the population is small.
2. Necessary when the elements are quite different from each other (e.g. maker of stereo components).

What are the characteristics of accuracy and precision for measuring sample validity?
What makes a good sample?
Validity and representativeness of a sample: How well it represents the characteristics of its population.
 The representativeness of a sample depends on accuracy and precision.
- Accuracy: Degree to which bias and systematic variance are absent from the sample.
Systematic variance: the variation in measures due to some known or unknown influences that
cause the scores to lean in one direction more than another.
o Some sample elements will underestimate the population values, others overestimate these values.
Variations in these values compensate each other > Sample value close to population value.
o The less bias and systematic variance, the greater the accuracy.
- Precision (of estimate): Degree of sampling standard error (error variance). Must be within acceptable
limits for the study’s purpose.
o The smaller the standard error of estimate, the greater the precision of the sample.

What are the two conditions on which sampling theory is based?


1. There must be enough similarity among the elements in a population that a few of these elements will adequately
represent the characteristics of the total population.
2. While some elements in a sample underestimate a population value, others overestimate this value.

What are the six questions (steps) that must be answered to develop a probability sample/sampling plan?
1. What is the relevant population?
 Who or what do you want to investigate?
2. What are the parameters of interest? Variables.
- Population parameters: Summary descriptors of variables in the population. E.g. mean, variance.
o Population proportion of incidence?
- Sample statistics: Summary descriptors of variables in the sample.

 Sample statistics are used as estimators of population parameters.


 A parameter is a value of a population, while a statistic is a similar value based on sample data.
E.g. the population mean is a parameter, while a sample mean is a statistic.
3. What is the sampling frame? Complete list of population elements from which the sample is drawn.
 Without a sampling frame, a probability sample cannot exist!
4. What is the type of sample? Probability versus non-probability sample.
5. What sample size is needed?
- The greater the variance within the population, the larger the sample must be to provide precision.
- The greater the desired precision, the larger the sample must be.
- The greater the number of sub-groups of interest within a sample, the greater the sample size must be.
- If sample size exceeds 5% of the population, sample size may be reduced without sacrificing precision.
 Researcher must decide how much precision they need, measured by:
a. The interval range in which they would expect to find the parameter estimate.
b. The degree of confidence they wish to have in that estimate.
6. How much will it cost? Influences sample size and type, and the data-collection method.
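The parameter/statistic distinction (step 2) and the precision trade-off (step 5) can be sketched in Python. All numbers here are illustrative: a made-up population of 4000 values, a sample of 200.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical population of N = 4000 elements.
population = [random.gauss(100, 15) for _ in range(4000)]
mu = statistics.mean(population)         # population parameter

sample = random.sample(population, 200)  # draw a sample of n = 200
x_bar = statistics.mean(sample)          # sample statistic, estimates mu

# Standard error of the mean: the larger the sample, the greater the precision.
se = statistics.stdev(sample) / (len(sample) ** 0.5)
print(f"mu ~ {mu:.1f}  x_bar ~ {x_bar:.1f}  SE ~ {se:.2f}")
```

The sample mean will land close to, but rarely exactly on, the population mean; the standard error quantifies how close we can expect it to be.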

What are the two main categories of sampling techniques and their varieties?
The types of sampling design are determined by the representation basis and the element-selection technique.
1. Representation basis technique:
 Probability sampling: Based on random selection.
- Each population element has a known, non-zero chance of being selected (requires a sampling frame).
- Only probability samples provide precision (maximum precision and accuracy).
 Non-probability sampling: Not based on random selection and is subjective.
- Not every population element has a chance of being selected (no sampling frame).
- When probability sampling is not feasible (no sampling frame), or time and money budget is limited.
- Produces selection bias and non-representative samples.
2. Element selection technique:
 Unrestricted sampling: Each sample element is drawn from the (complete) population.
 Restricted sampling: Covers all other forms of sampling. Selection process follows complex rules.

Types of sampling designs


Representation basis
Element selection Probability Non-probability
Unrestricted Simple random Convenience
Restricted Complex random Purposive
Systematic Judgement
Cluster Quota
Stratified Snowball
Double

Probability sampling approaches:


Simple random sample: Simplest type of probability approach. Each number of the population has an equal
chance of being included in the sample.
 Give everyone a number, let your computer select numbers or use bingo/numbers in a hat.
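A minimal sketch of “give everyone a number, let your computer select”, assuming a hypothetical sampling frame of 30 numbered elements:

```python
import random

# Numbered population (sampling frame) of N = 30 elements.
sampling_frame = [f"element_{i}" for i in range(1, 31)]

# Simple random sample: each element has an equal chance; drawn without replacement.
sample = random.sample(sampling_frame, k=10)
print(sample)
```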


Simple random sampling is often impractical, reasons:


- It requires a population list (sampling frame) that is often not available.
- It fails to use all information about the population.
- It may be expensive to implement in terms of both time and money.
 These problems have led to the development of alternative designs
 Complex probability sampling:
- Systematic sampling
- Stratified sampling
- Cluster sampling
- Double sampling

1. Systematic sampling
To draw a systematic sample you need to follow the following steps:
1. Identify the total number of elements in the population. (E.g. N = 30).
2. Determine the desired sample size. (E.g. n = 10).
3. Identify the sampling ratio k. (E.g. k = N / n = 30 / 10 = 3).
4. Identify the random start (drop a pencil (eyes closed) on your population list, see where the dot is). (E.g. = 2).
5. Draw the sample by choosing every kth entry from the random start. (E.g. elements 2, 5, 8, …, 29).
 Use in combination with other designs to minimize bias.
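The five steps above can be sketched directly with the same example numbers (N = 30, n = 10, so k = 3); the random start is drawn between 1 and k rather than by dropping a pencil:

```python
import random

population = list(range(1, 31))    # step 1: identify N = 30 elements
n = 10                             # step 2: desired sample size
k = len(population) // n           # step 3: sampling ratio k = N / n = 3

start = random.randint(1, k)       # step 4: random start between 1 and k
sample = population[start - 1::k]  # step 5: every kth entry from the start
print(sample)
```

Whatever the random start, the sample contains exactly n evenly spaced elements.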

Advantages: Simplicity and flexibility.


Concerns: Over- or under-sampling of some fractions of the population, and monotonic trends in the ordering of population elements. Deal with these by:
 Randomize the population before sampling.
 Change the random start several times in the sampling process.
 Replicate a selection of different samples.


2. Stratified sampling
Most populations can be separated into several mutually exclusive sub-populations (or strata). E.g. gender, education.
Each stratum is homogeneous internally and heterogeneous with other strata.
Stratified sampling: The process by which the sample must include elements from each strata.

Proportionate versus disproportionate sampling (deciding how to allocate a total sample among various strata):
Proportionate stratified sampling: Each stratum is properly represented so that the sample drawn from it, is
proportionate to the stratum's share of the total population.
Disproportionate stratified sampling: Any stratification that differs from the proportionate stratified sampling.
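Proportionate allocation means each stratum’s share of the sample equals its share of the population, n_h = n · N_h / N. A small sketch with hypothetical stratum sizes:

```python
# Hypothetical strata (e.g. education level) with sizes N_h; total sample n = 100.
strata_sizes = {"low": 1200, "middle": 2400, "high": 400}
N = sum(strata_sizes.values())  # N = 4000
n = 100

# Proportionate allocation: n_h = n * N_h / N, so each stratum's share of the
# sample equals its share of the population.
allocation = {h: round(n * N_h / N) for h, N_h in strata_sizes.items()}
print(allocation)  # → {'low': 30, 'middle': 60, 'high': 10}
```

Any allocation that deviates from these proportions (e.g. oversampling the small “high” stratum to analyse it separately) would be disproportionate stratified sampling.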

Why use stratified sampling (advantages)?


1. To increase a sample’s statistical efficiency.
2. To provide data to represent and analyse sub-populations (strata).
3. To enable different research methods and procedures to be used in different strata.
 Disadvantage: Expensive.

The process for drawing a stratified sample:


1. Determine the variables to use for stratification.
2. Group your population by the chosen variable/characteristic (create strata!).
3. Randomize the elements within each stratum.
4. Use simple random sampling (SRS) or systematic sampling within each stratum to create your sample.
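The four steps can be sketched as follows, using a hypothetical population stratified by gender and an SRS of 5 per stratum:

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical population: (element, stratification variable) pairs.
population = [(f"person_{i}", random.choice(["female", "male"])) for i in range(100)]

# Steps 1-2: choose the stratification variable and group the population into strata.
strata = {}
for element, gender in population:
    strata.setdefault(gender, []).append(element)

# Steps 3-4: randomize within each stratum, then draw an SRS from each stratum.
sample = []
for members in strata.values():
    random.shuffle(members)                   # step 3: randomize the stratum
    sample.extend(random.sample(members, 5))  # step 4: SRS of 5 per stratum
print(sample)
```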

3. Cluster sampling
Cluster sampling: Population is divided into groups of elements, some groups are randomly selected for study.
 Area sampling: Populations that can be identified with a geographic area (most important form of clusters).
 Why clustering (advantages)? - Less expensive than simple random sampling.
- Also possible without sampling frame.
 Disadvantage: Lower statistical efficiency (more error) as groups are homogeneous.

The process of drawing a cluster sample:


1. Create clusters with great intra-cluster variance (heterogeneity within each cluster, homogeneity between clusters).
2. Use simple random sampling to select one or several clusters.
3. All elements within the selected cluster(s) are part of your sample.
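A sketch of single-stage cluster sampling with hypothetical geographic clusters: some clusters are drawn at random, and every element in the drawn clusters enters the sample.

```python
import random

random.seed(3)  # reproducible illustration

# Hypothetical area clusters: each cluster holds all households of one district.
clusters = {
    "district_A": ["hh_A1", "hh_A2", "hh_A3"],
    "district_B": ["hh_B1", "hh_B2", "hh_B3"],
    "district_C": ["hh_C1", "hh_C2", "hh_C3"],
    "district_D": ["hh_D1", "hh_D2", "hh_D3"],
}

# Step 2: draw a simple random sample of clusters.
chosen = random.sample(list(clusters), 2)

# Step 3: every element within the chosen clusters enters the sample.
sample = [hh for c in chosen for hh in clusters[c]]
print(chosen, sample)
```

Note that only a list of clusters is needed, not a frame of all households.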


An important form of cluster sampling is area sampling:


Identifies the population by geographic area and uses this to cluster it. A sample can then be drawn without a
sampling frame, avoiding high costs.

Design: In designing cluster samples (incl. area samples) we must answer several questions:
1. How homogeneous are the clusters? Internally homogeneous clusters lower statistical efficiency (p. 189).
2. Shall we seek equal or unequal clusters? Looking at the cluster sizes (preferably equal, not always possible).
3. How large a cluster shall we take? No size is superior.
4. Shall we use a single-stage or multi-stage design? In a single-stage design all elements of the selected clusters are studied; in a multi-stage design a further sample is drawn within the selected clusters (p. 190).
5. How large a sample is needed? Depends on the cluster design, e.g. simple cluster sampling.
 Simple cluster sampling = single-stage sampling with equal-size clusters.

Stratified sampling vs. Cluster sampling


Stratified sampling:
1. We divide the population into a few sub-groups, each with many elements in it. The sub-groups are selected according to some criterion that is related to the variables under study.
2. We try to secure homogeneity within sub-groups and heterogeneity between sub-groups.
3. We randomly choose elements from within each sub-group.

Cluster sampling:
1. We divide the population into many sub-groups, each with a few elements in it. The sub-groups are selected according to some criterion of ease and availability in data collection.
2. We try to secure heterogeneity within sub-groups and homogeneity between sub-groups.
3. We randomly choose a number of the sub-groups; all elements in these sub-groups will be studied.

4. Double sampling (or sequential sampling or multi-phase sampling)


It may be more convenient or economical to collect some information by sample and then use this information as the basis
for selecting a sub-sample for further study.
Example: a short telephone survey first identifies which households use a product; a sub-sample of these users is then selected for detailed face-to-face interviews.

Non-probability sampling approaches:


Disadvantages:
- Greater chance of bias
- Cannot calculate range within which to expect the population parameter.
Advantages:
- Cost/time
- Could be the only feasible alternative (no sampling frame).

Methods to reduce bias from non-probability sampling:


- Post-stratification: Use information or demographics that differ between the sample and the population 
calculate weights to correct for over- or under-representation.
- Propensity scoring: A second sample from previous research is believed to be more representative of the
population than the sample you use. Comparing your sample with the second sample allows calculating propensity
scores, reflecting the chance that a subject of the second sample would also be included in your sample.
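Post-stratification weighting can be sketched as follows; the 70/30 sample split and the group labels are made-up numbers purely for illustration:

```python
def post_stratification_weights(sample_shares, population_shares):
    """Weight each group by population share / sample share, so that
    over-represented groups count less and under-represented groups
    count more in the analysis."""
    return {group: population_shares[group] / sample_shares[group]
            for group in population_shares}

# Women are 50% of the population but 70% of the non-probability sample:
weights = post_stratification_weights(
    sample_shares={"F": 0.7, "M": 0.3},
    population_shares={"F": 0.5, "M": 0.5})
# F is down-weighted (0.5/0.7), M is up-weighted (0.5/0.3).
```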

1. Convenience sampling: Choose whomever you can find; Selection based on convenience.
- No randomization, so the sample is not a good representation of the population.
- The least reliable design, but the cheapest and easiest way to conduct (useful in exploratory study).

Purposive sampling: Attempt to secure a sample that conforms to some determined criteria. There are three types:
2. Judgement sampling: The researcher uses his judgement to select elements that conform to some criterion.
 Used in exploratory study.
 Example: In a study of labour problems, only talk to those who have experienced it.
3. Quota sampling: Used to improve representativeness by using general characteristics of the population.
 Example: gender (50% male and 50% female), race, age, income level, employment status, political party.
 More than one control dimension, each should:
 Have a distribution in the population that we can estimate.
 Be pertinent to the topic studied.
 Precision control: Combination of characteristics
 Frequency control: The overall percentage of those with each characteristic in the sample should
match the percentage holding for the same characteristic in the population.
4. Snowball sampling: Individuals are selected and are used to locate others who possess similar characteristics.
 Useful if you want to sample subjects that are difficult to identify.
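Quota sampling with frequency control (point 3 above) can be sketched as follows: participants are accepted as they arrive until each group's quota is filled. The `group` field and the 50/50 quota are illustrative:

```python
def quota_sample(arrivals, quotas):
    """Quota sampling sketch: accept arriving participants until the
    quota for their group is filled; no randomization is involved."""
    counts = {group: 0 for group in quotas}
    sample = []
    for person in arrivals:
        group = person["group"]
        if counts.get(group, 0) < quotas.get(group, 0):
            counts[group] += 1
            sample.append(person)
        if counts == quotas:        # all quotas filled
            break
    return sample

# Arrivals are two-thirds female, but the sample matches the 50/50 quota.
arrivals = [{"id": i, "group": "F" if i % 3 else "M"} for i in range(30)]
sample = quota_sample(arrivals, {"F": 5, "M": 5})
```

Because acceptance depends on arrival order rather than random selection, the result improves representativeness on the controlled characteristics only; it remains a non-probability sample.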

Sampling on the internet


Important: The internet population differs considerably from the general population.
Besides this, it is useful to find/build a population list.
 A probability sample can be drawn on the internet, as long as the entire population is part of the internet population.

Chapter 7 Primary data collection with surveys


Data collection approaches:
- Primary or secondary data.
- Quantitative or qualitative data.
- Communication or observation approach (to gather primary data).
o Qual. & Observ.: Participant observations.
o Quant. & Observ.: Structured observations.
o Qual. & Com.: In-depth, semi-/unstructured interviews, focus groups.
o Quant. & Com.: Structured interviews & surveys.

Communication: Surveying/questioning people and recording their responses for analysis.


 Learn about attitudes, motivations, intentions and expectations.

Strengths of the communication approach:


- Versatility: Gather info about attitudes, opinions, expectations and intentions.
- Geographic coverage: Telephone, mail or internet as medium.
- Possible to gather information about the past.
- More efficient and economical than observation.
- Can address participants who are uniquely qualified.

- Inquiry can be made about exclusively internal information, such as attitudes, opinions, expectations and
intentions.

Weaknesses of the communication approach:


 The quality and quantity of data depend heavily on the respondents:
- Willingness of the participant to cooperate.
- Ability of the participant: he/she may not possess the required knowledge.
- A participant may not have an opinion on the topic:
 Fails to say “don’t know”, but feels obligated to comment on the topic.
- Different interpretation of the questions by participants.
 Survey responses should be taken for what they are; statements by individuals that reflect varying degrees of
the truth.

Survey/questionnaire
Questionnaire/survey: Collecting quantitative information through structured questioning.

Three conditions for a successful survey: The participant must:


1. Possess the required information: Qualify participants with screening questions.
2. Understand his/her role in the interview as provider of accurate information.
3. Be adequately motivated to cooperate: Task for the interviewer. Increase by incentive & good rapport:
 Giving an introduction and establishing a good relationship with the participant.

Information  researcher can do little about information level.


- Screening questions can be asked
- Certain information can be provided before the research.

Motivation to participate
- Interview: What kind of answer is sought, how complete it should be, in what terms it should be expressed (even
some coaching)
- Survey: Increase response and encouragement to complete the survey (introduction/intermediate statements)

Increasing participants’ receptiveness


1. The participant must believe that the participation experience will be pleasant and satisfying.
2. The participant must believe that answering the survey is an important and worthwhile use of his or her time.
3. The participant must dismiss any mental reservations that he or she might have about participation.

The introduction should contain:


- Appearance and action are critical in the first impression.
- Start by introducing yourself/company/special identification
- Friendly intentions and stimulation of interest are essential in the beginning.
- Reveal purpose study/name sponsor/researcher/address for questions
- Not too much information, this causes bias.
- Prepare convincing answers to critical questions.
- If participant is too busy, try to reschedule.

Establishing a good relationship:


- There must be a relationship of confidence and understanding between interviewer and participant.
- Guarantee anonymity.
- Ask questions properly, record responses accurately and show interest.


Two factors can cause bias in interviewing:


- Non-response error. > With all types of survey. Reduce by call-back procedures.
- Response error. > Mostly with personal interviews (interviewer error).

Non-response error: Responses of participants systematically differ from responses of non-participants. Researcher:
1. Cannot locate the person to be studied.
2. Is unsuccessful in encouraging that person to participate.
Solutions to reduce non-response errors:
- Call-back procedures. > Better than weighting results: Original answers.
- Weighting results from a non-response sample.
- Substituting someone else for the missing participant. > Ask others from household about this person.

Response error: When the data reported differ from the actual data (mostly with personal interviews).
 Participant-initiated error: Occurs when the participant fails to answer fully and accurately.
 Interviewer error: When the interviewer's control of the process affects the quality of data.
- Failure to secure full participant cooperation.
- Failure to consistently execute interview procedures.
- Failure to establish an appropriate interview environment.
- Falsification of individual answers or whole interviews (cheating).
- Inappropriate influencing behaviour.
- Failure to record answers accurately and completely.
- Physical presence bias (e.g. young vs. old people).

Four communication approaches:


Personal interviews:
= Two-way conversation initiated by an interviewer to obtain information from a participant. The differences in roles are
pronounced. They are typically strangers and the interviewer generally controls the topics. The topics are usually
insignificant for the participant. The participant has little hope of receiving benefit for cooperation.
- Intercept interview: target participants in centralized locations.

Telephone interviews:
= Using the telephone to conduct an interview
- Computer-administered telephone survey: the computer asks the questions, without a human interviewer (disadvantage: higher refusal rate). Distinct from CATI (computer-assisted telephone interviewing), where an interviewer is supported by a computer.
Disadvantages:
- Inaccessible households
- Inaccurate or non-functioning numbers
- Limitation on interview length (depends on interest of participant)
- Limitations on use of visual or complex questions
- Ease of interview termination
- Less participant involvement
- Distracting physical environment

Self-administered surveys:
= A questionnaire to be completed by the participant.
- Can be faxed/mailed (not e-mail). People can also be intercepted via paper in central locations.
- How to reduce non-response error?
 Follow ups, preliminary notification, concurrent techniques, like:
o Short questionnaires length
o Survey sponsorship
o Return envelopes and postage for mail-surveys

o Personalization
o Cover letters
o Anonymity
o Size, reproduction, cover
o Money incentives
o Deadline rates
o Total design method: Identify the aspects of the survey process that affect the response rate and then
organize the survey effort so the design intentions are carried out in detail. Minimize the burden on
participants by:
 Survey easy to read
 Offers clear response directions
 Includes personalized communication
 Provides information about the survey in a cover letter
 Are followed by researcher contacts to encourage response.

Web-based surveys:
= Computer-delivered self-administered questionnaires (online).
 E-mail, website, pop-up window.
1. Target web survey: Researcher has control over who is allowed to participate in the survey (e-mail).
2. Self-selected survey: Researcher has no/very-limited control on who is responding (pop-up window).
3. Social-media-based survey: Between target and self-selected survey (based on the social-media contacts;
snowball sampling).
 Mixed mode: Combining several survey methodologies.
 Optimal communication approach: Answers research question and deals with constraints in time, budget, HR.
Evaluation of web-based surveys:
- Challenge to draw the right sample
- Costs: high starting investment, after that, low cost
- Non-response: high, e.g. invitations treated as spam.


Interview techniques:
The success of a study depends on the interpersonal and communication skills of the interviewer.
Structured:
- Interviewer follows the exact wording of the questions.
- Interviewer must learn the objective of each question, in order to obtain satisfying answers.
- Interviewer uses probing:

Probing: Technique of stimulating participants to answer more fully and relevantly. Probing styles:
- A brief assertion of understanding and interest: I see, yes, aha.
- An expectant pause.
- Repeating the question: Did the participant understand the question?
- Repeating the participant's reply.
- A neutral question or comment: “How do you mean?” or “Can you tell me more about it?”
- Question clarification: “I’m not sure I understand, can you tell me more?”

Observations: Observation qualifies as scientific inquiry when it is conducted specifically to answer a research question, is
systematically planned and executed, uses proper controls, and provides a reliable and valid account of what happened.

Advantages observational approach:


- The only method for obtaining information from subjects that cannot articulate themselves (children, people with
mental disabilities, objects)
- Allows us to collect data at the time it occurs (reduces retrospective bias)
- Do not need to depend on reports of others (reduce bias). Method reactivity bias can still occur (participant knows
he is observed).
- Secure information that would be ignored or found irrelevant in other methods.
- Capture the whole event as it occurs in its natural environment.
- Participants accept this method better (less demanding than communication).

Disadvantages observational approach:


- Observer must be at the scene of the event, sometimes very difficult or impossible.
- Slow and expensive
- Only surface indicators are detected, no attitudes, values and opinions for example.
- Environment calls for subjective assessment and recording of data  might harm validity.
- Cannot gather information about the past, or about the present at a different place.

Structured observation
Structured observation: Systematically record behaviour along predefined aspects.
 Two dimensions: Direct vs. indirect observation; Concealed vs. not concealed observation.
- Direct obs: When the observer is physically present and personally monitors what takes place.
- Indirect obs: When the recording is done by mechanical, photographic or electronic means.
- Concealed obs: The participant is not aware of the observer’s presence (> Ethics).
- Not concealed: The participant is aware of the observer’s presence (> Method reactivity bias).
 Conduct structured observations by using a checklist > Quantifying what is observed.
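A checklist can be as simple as a tally of predefined behaviour categories; the shop-floor categories below are invented for illustration:

```python
from collections import Counter

# Structured-observation sketch: the observer ticks predefined
# categories each time the behaviour occurs, which turns observation
# into quantifiable data.
CHECKLIST = {"greets customer", "makes eye contact", "offers help"}

observed = ["greets customer", "offers help", "greets customer"]
tally = Counter(event for event in observed if event in CHECKLIST)
# e.g. "greets customer" was ticked twice, "offers help" once.
```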

What can you observe with structured observations? Behaviour and non-behaviour.
Behavioural observation: Observing behaviour.
- Non-verbal: Body movement, motor expressions, exchanged glances, etc.
- Linguistic: Interaction, transfer of information, annoying sounds/words (ah, uh).
- Extra-linguistic: Vocal, temporal, interactional and verbal stylistic behaviour.

- Spatial analysis: How people physically relate to others (distance maintained between each other).

Non-behavioural observation: Not observing behaviour, but records, conditions and processes.
- Record: Analysing historical or current data, public or private data.
- Physical condition: Analysing the conditions of something. E.g. plant safety compliances, inventory.
- Physical process: Analysing the process of something. E.g. manufacturing process, traffic flows.

How can you measure structured observations? Factual vs. inferential, physical traces.
Factual observation: Describes what is happening and what can be seen.
 E.g. time and day of the week, environmental factors, product presented, etc.
Inferential observation: Translates what is seen to a concept that cannot be observed.
 E.g. Credibility, interest, acceptance, concerns, effectiveness, customer acceptance of product, etc.
Observation of physical traces: Observing measures of wear and measures of deposit; creative, unobtrusive methods.
- Measures of wear: E.g. estimating library book use by looking at the number of torn pages in a book.
- Measure of deposit: E.g. estimating alcohol consumption by collecting and analysing domestic rubbish.

Conducting structured observation


- Designing (who, what, where, when, how)
- Checklist

Chapter 8 Primary data collection: qualitative data

Quantitative interviews: Interviews are usually structured. Questionnaire/survey.


- Structured interviews: Goal is to describe or explain, not to explore.

Qualitative interviews: Interviews are usually semi-structured or unstructured. Memory list/intv. guide.
- Unstructured interviews: No specific question/topic list to be covered; mental list of relevant topics.
 Flexible and might take another course than originally expected.
 Researcher wants to gain insight into what the respondents consider relevant and his interpretations.
- Semi-structured interviews: Question/topic list to be covered; ask questions similarly during all interviews.
 Start with specific questions, but allow the interviewee to follow his/her thoughts later on.
 Probing techniques are often used. E.g. TV interview journalist with a political decision-maker.

Structured and unstructured interviews


Structured interviews:
- Type of study: Explanatory or descriptive.
- Purpose: Providing valid and reliable measurements of theoretical concepts.
- Instrument: Questionnaire.
- Format: Fixed to the initial questionnaire.

Semi-structured or unstructured interviews:
- Type of study: Exploratory; explanatory (semi-structured).
- Purpose: Detect meanings from respondents about phenomena, and learn about respondents’ viewpoint on phenomena.
- Instrument: Interview guide; memory list.
- Format: Flexible, depending on the course of the conversation; follow-up and new questions raised.

Qualitative interviews
An objective of qualitative interviews is to learn more about the respondents’ viewpoint regarding phenomena.
An interview guide is important when conducting semi-/unstructured interviews, the main functions are:
- Memory list to ensure that the same issues are covered (in every interview).
- Memory list to ensure that the questions are asked in the same way.
> Increases the comparability of multiple interviews.
 The more specific the interview guide, the more structured the interview will be, and the less flexible the
interviewer is in responding to the respondents.


When writing an interview guide, you should ensure that:


- The guide contains questions about all topics, and order them in a logical order.
- Formulate the questions in a language that is easily understood by the interviewees.
- The questions are not too specific, and the interviewee has the opportunity to reflect on the issue at hand.
- Avoid leading or suggestive questions (reduce the influence of the interviewer).
- Record some general and some specific facts about the respondent (age, gender, working department, years with
the company, etc.).

A researcher’s primary task in interviewing is listening.


Question types in unstructured interviews:
- Introductory questions: General questions that get the interview started and help to establish good rapport.
I’ve looked at your website, but still, can you tell me more about your background?
- Follow-up questions: Used to ask the respondent to elaborate on a given question or to clarify whether you
understood the respondent correctly. What do you mean by…?
- Probing questions: Similar to follow-up, but refer more specifically to a part of the answer. The respondent
told he made a decision, why did you choose to…?
- Specifying questions: Ask the respondent to elaborate on the answer and to offer more information.
What happened after the decision was taken?
- Direct questions: Provide information on how respondents look at a situation from their viewpoint and
often ask them to describe an opinion or feeling. What is your point of view on…?
- Indirect questions: Not directed at the respondent personally, but ask for a general point of view.
What do people around here think about…?
- Structuring questions: Used to go on to the next topic. If we have not missed important aspects of this
subject, I would like to move on to….
- Silence: A way to let the respondent know that you want to hear more.
- Interpreting questions: Asked in order to confirm that you understand the information correctly. Do you mean
that…?

Information recording:
- Unstructured interviews can be conducted by two interviewers, but are usually recorded by tape or digitally.
> Advantages: Focus on the conversation (instead of making notes) and you can listen to it again.
> Disadvantages: - People feel uncomfortable; this influences their answering behaviour.
- Technical problems can disturb the interview.
- Transcribing the information recorded is very time-consuming.
Unstructured interviews place high demands on the interviewer  why should the interviewer be an expert?
1. Background information.
2. To be able to direct the interview.
3. To decide whether you’ve heard enough or would like to get more information on the topic.
4. Respondents often expect you to be an expert.
 Interviewers should be good at active listening.

Focus groups
Focus group: Panel of people, led by a moderator, who meet to discuss some open questions and topics.
 Special form of unstructured group interviews.
 Can offer new insights into the topic that would have remained hidden in a one-by-one conversation.
 Moderator: Uses group dynamics principles to focus or guide the group in interactions.
 Script: Guide for moderator: Introduction, directions for participants, opening question, questions to
ask if the discussion falls dead, closing words.
 Group size: 6-10 people, but smaller groups can be useful if sensitive issues are discussed.
 Group type: Homogeneous groups rather than heterogeneous groups.


> Homogeneous focus groups tend to promote more intense discussion/interaction.


> Heterogeneous groups only work if participants are open to each other’s points.
 Online focus group
o Decreased cost / geographically flexible.
o Automatically recorded.
o Perceived anonymity.
o Participants can answer at a convenient time.
o Disadvantage: participants need access to the web.
o No non-verbal communication.
o Fewer spontaneous comments.

Advantages and disadvantages of focus groups


Advantages:
 Researcher can observe interaction between respondents.
 Detecting different views on a topic.
 Cost- and time-effective (group of people).
Disadvantages:
 Requires a well-trained moderator.
 Individuals might dominate the group.
 Respondents might be reluctant to speak up or remain in their role.

Participant observation
Participant observation: Qualitative observation approach. More flexible and less structured.
 Researchers attempt to fully dive into the world they research in order to understand it. They become part of, and
participate in, the participants’ world.
 Two dimensions:
- Whether the observer actively participates.
> No distance; can influence participant’s behaviour.
> The more distant you are as an observer, the more descriptive are your observations.
- Whether the observer is concealed or not. Whether the participant knows that he is being observed.
> A concealed observer reduces bias, but there are ethical issues to concealment.
Classification of observation studies
1. Completely unstructured: Purpose: generate hypotheses. Tool: field notes (natural setting). Example: ethnographic study, the researcher becomes part of the culture.
2. Unstructured: Emphasizes the best characteristics of 1 & 4. Uses laboratory facilities, like videotaping, to reduce the time of observation.
3. Structured: Emphasizes the best characteristics of 1 & 4. Uses a structured observational instrument in a natural setting.
4. Completely structured: Purpose: test hypotheses. Tool: observation checklist (control). Example: observing decision-making according to a structural pattern.
 Usually relies on convenience or snowball sampling.

Conducting observational studies


1. Define the content of the study and observational plan
Background information, context, existing scientific knowledge, observational targets, sampling, acts
2. Secure and train observers
Concentration, detail-orientated, unobtrusive, experience level

3. Data collection
Specify the details of the task: who, what, when, how, where.
4. Data analysis
Data reduction and categorization (often content analysis).

Field notes: Primary data-collection tool (participant observations). Four principles lead to a higher validity:
1. Direct notes in keywords.
2. Immediate full notes after you leave the setting.
3. Limit observation moment (time you are at the setting).
4. Rich full notes (very complete, everything you noticed).

Chapter 9 Secondary data and archival sources


What is the difference between primary and secondary data?
Primary data: Information/data collection by the researcher himself.
Secondary data (SD): Information/data that has already been collected by someone else (for other purposes).
 Have had at least one level of interpretation between the event and its recording.
 Qualitative: Multiple sources are merged to overcome the fact that the data do not perfectly fit your research problem.
 Provides information on the context of phenomena and therefore it adds to the total perspective.

Difference between primary/secondary literature and data:


 Literature: Insights, ideas, knowledge, theories, models.
 Data: Answers research questions by data-collection and analysis. E.g. results of observation.

Advantages of secondary data:


- Saves time and money.
- Often high-quality data.

Disadvantage of secondary data:


- Data was not collected with your research problem in mind > Might not perfectly fit your research problem.

In assessing the usefulness of secondary data, you need to address the following questions:
1. Information quality: Is the information provided in SD sufficient to answer your research problem?
a. Do the secondary data cover all the information you need?
b. Is the information available detailed enough?
c. Do the data follow the definitions you apply in your research problem?
d. Are the data accurate enough? (Evaluate their source).
2. Sample quality: Do the secondary data address the same population you want to investigate?
a. Do the secondary data refer to the unit of analysis you want to investigate?
b. Is the sample on which the data are based a good representation of the population?
3. Timeliness of data: Were the secondary data collected in the relevant time period? (not out of date).

How do you select sources of secondary data?


Source evaluation: Sources are evaluated and selected based on five factors:
1. Purpose: What the author is trying to accomplish.
2. Scope: Date of publication, how much of the topic is covered and in what depth, covered local,
regional, national or international, etc.
3. Authority: Author and publisher are indicators of the authority of the source.
> Primary sources are the most authoritative.
4. Audience: Tied to the purpose of the source.

5. Format: How the information is presented and how easy it is to find a specific piece of information.
> E.g. index, arrangement of information (chronological, alphabetical, etc.).

What are typical sources of secondary data?


Secondary data can be classified along two dimensions: type of source (internal/external) and data format (written/electronic):
- Internal sources: All data sources within the organisation in which the researcher is working.
- External sources: All data sources outside the organisation.
- Written sources.
- Electronic sources: The distinction between written and electronic sources is blurring because of the increasing use of information technology (IT).

Sources of secondary data


Written sources:
- Internal: memos, contracts, invoices.
- External:
 Publishers of books, journals, periodicals: indexes, yearbooks.
 Government and supranational institutions: books, reports, online.
 Trade and professional associations: (annual) reports.
 Media sources: newspapers, magazines, special reports.
 Commercial sources: (annual) reports.

Electronic sources:
- Internal: management information systems, accounting records.
- External:
 Publishers of books, journals, periodicals: bibliographic databases.
 Government and supranational institutions: websites of statistical offices, CD-ROMs.
 Trade and professional associations: websites.
 Media sources: websites, CD-ROMs of complete volumes.
 Commercial sources: websites, data sets of previous studies.

How to ease the burden of imperfection of secondary data:


- Merging of multiple secondary data sources (Method of triangulation, increases validity of results)
- Adjusting your research problem to the available data (risk: the data, not the problem, drive your research)
- Investigating which research problems can be investigated with the available data.

What is data-mining?
Data-mining: Uncovering knowledge, identifying patterns in data and predicting trends and behaviours from data in
databases stored in data warehouses.
 Organizations collect a tremendous amount of information and record it in databases on a daily basis.
With data-mining one searches for valuable information within these large databases (often internal data).

Data-mining tools: Perform statistical analysis to discover and validate relationships.


Data warehouse: An electronic repository for databases that organizes large volumes of data
into categories to facilitate retrieval, interpretation and sorting by end-users.
Data marts: Intermediate storage facilities that compile locally required information.

How does data-mining work? Five steps:


1. Sample: Decide between census (the entire dataset) and a sample of the data.
2. Explore: Identify relationships within the data (explore trends, groups, outliers).
3. Modify: Modify or transform data (e.g. reduction, categorization of data).
4. Model: Develop/construct a model that explains the data relationships.
5. Assess: Test the model to estimate how well it performs, by running it against known data.
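The five steps can be illustrated on a toy dataset; the income/spend relationship and all numbers below are fabricated purely to show the flow:

```python
import random
import statistics

random.seed(1)
# Made-up "warehouse" data: customer income and monthly spend.
data = [{"income": inc, "spend": 0.3 * inc + random.gauss(0, 50)}
        for inc in range(1000, 5000, 10)]

# 1. Sample: work on a random sample instead of the full census.
sample = random.sample(data, 100)

# 2. Explore: inspect basic statistics (trends, groups, outliers).
mean_spend = statistics.mean(r["spend"] for r in sample)

# 3. Modify: transform the data, e.g. drop extreme outliers.
clean = [r for r in sample if abs(r["spend"] - mean_spend) < 1000]

# 4. Model: fit a simple least-squares slope for spend vs. income.
xs = [r["income"] for r in clean]
ys = [r["spend"] for r in clean]
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

# 5. Assess: run the model against known data and measure the error.
holdout = random.sample(data, 50)
mean_error = statistics.mean(abs(r["spend"] - slope * r["income"])
                             for r in holdout)
```

Real data-mining tools automate these steps at far larger scale; the point here is only the sample→explore→modify→model→assess sequence.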

Data-mining information is valid when (criteria):


1. Accuracy: The data gathered are complete and match the information you were looking for.
2. Reliability: The extent to which the information obtained is independent of different settings.
3. Reality check: Do not use advanced analysis techniques without fully understanding the mathematics involved.

Big data: Large amounts of data from the use of the web, mobile phones and customer, credit and debit cards.
- An opportunity, but raises concerns: privacy, security and intellectual property rights, as well as the validity,
reliability and completeness of the information (why would this not be valid, etc.?).

Chapter 10 Content analysis and other qualitative approaches

Content analysis: Technique based on the manual or automated coding of transcripts, documents, articles or even
audio and video material.
 Objective: To reduce information to a manageable amount.
 Textual information is transformed into numerical data for further statistical analysis.
 Quantitative approach: Count occurrence of words/phrases and detect how far they are apart in a text.
 Qualitative approach: Detect the general meaning of a text to categorize it.
 Sources: Archival material, recordings, current conversations and obtained material (e.g. interviews).

Categories of content analysis:


- Analysis of antecedents.
- Analysis of characteristics.
- Analysis of effects.

Advantages:
- Adds to transparency (it’s clear to readers what the researcher did).
- Others can take your textual information and replicate your research.
- Content analysis is unobtrusive and non-reactive.

Disadvantages:
- Quality depends on input.
- Coding procedure is subject to interpretation bias.
- Time-consuming.

The process of content analysis (how to conduct content analysis):
 Start from the research problem.
1. Define the population of sources and the selection criteria  know which sources to use and how to select them.
2. Coding procedure: Prescriptive or open analysis; coding.
3. Coding frame: List of all codes used.

Coding: The process of categorizing and combining the data for themes and ideas and categories and then marking
similar passages of text with a code label.
 All fragments that have the same code are about the same theme/idea.
 Software packages are available to automate coding (e.g. NVivo, MAXQDA).
Prescriptive analysis: Prior to searching, define words/phrases that you search in texts (create dictionary of key words).
Open analysis: Try to find the general message of the text. (“Read between the lines”).
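The prescriptive approach can be sketched as a simple keyword count (a minimal illustration; the key-word dictionary and documents below are invented):

```python
# Prescriptive content analysis: count occurrences of pre-defined
# dictionary words in each text, yielding numerical data for analysis.
import re
from collections import Counter

dictionary = {"price", "quality", "service"}  # assumed coding dictionary

documents = [
    "The price was fair but the service was slow.",
    "Great quality and great service!",
]

def code_document(text, keywords):
    """Return counts of the dictionary words appearing in one text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t in keywords)

coding_frame = [code_document(d, dictionary) for d in documents]
print(coding_frame)
```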

Coding frame: List of all codes used.


- Constant comparison to ensure consistency in your coding.
- Every time you create a new code, you have to read everything again.

Narrative analysis (H10)


Narrative analysis: Method of recapitulating past experience by matching verbal sequences of clauses to the
sequences of events which actually occurred  examines stories focusing on how its elements are sequenced and how
they are evaluated.
 Participant is part of the story they tell.
 Qualitative > Interviews and secondary data (e.g. biographies).
 Allows researcher to understand phenomena from the respondent’s perspective.
 Incorporate the specific context.
 Very subjective  not for explanatory research, suited for exploratory research.
 Content analysis focusses on small parts, narrative focuses on the context/whole story.
 Considering context is very important! E.g. a fight with a high school teacher is evaluated differently by a pupil than by an adult.

Procedures in narrative analysis to get a better insight into the narrative/story:


 Structural analysis: Focuses on how the narrative/story is told, language and linguistics.
 Thematic analysis: Focuses on the content of the narrative/story.
 Interactional analysis: Dialogue between storyteller and listener; Work together to construct the narrative/story.

Structure of a structural narrative analysis


1. Abstract statement
2. Orientation segments; time, place, participants
3. Complicating action; sequence of events, actions
4. Evaluation; assess the actions/attitude/meaning of the actions
5. Resolution; Conclusion, what finally happened
6. Coda; Importance of the story, current phenomena related to the old story.

Ethnographic studies
Ethnographic studies: Researchers immerse themselves in the lives, culture, or situation they’re studying.
- Data-collection methods: Qualitative. Participant observation, qualitative interviews, secondary data.
 Ethnographic studies start with a broad theme and get more focused over time (= funnel approach).

Elements of ethnographic studies:


 Multiple information sources (combine observations, interviews and secondary data).
 Employ different perspectives (e.g. obtain info from manager, employee, industry experts, etc.).
 Record and present different types of information (e.g. frequencies, citations, anecdotes or visualizations).

Note taking:
- Quality increases when goals are clearly defined.
- Take notes as soon as possible.
- Include info such as people, place and time.
- Clear distinction between notes of your interpretation, someone else’s interpretation and observations.

Action research
Action research: Focusses on social change or the production of socially desirable outcomes.
 Uses observations, interviews, focus groups, questionnaires, secondary data.
 Addresses real-life problems and is bounded by the context.
 Continuous reflecting process of research and action.


 Credibility: the validity of action research is measured by whether the actions solve the problems and realize the
desired change.
 Collaborative venture of researchers, participants and practitioners.

Describe what action research is about.


Action research addresses real-life problems experienced in an organisation and is bounded by its context, i.e. it
investigates a specific problem taking into account the specific circumstances in which the problem occurs. The specificity
of the approach calls for close cooperation between researchers, participants and practitioners, as the conclusions of the
research are specific suggestions that should be implemented. Action research views research and management actions as
a continuous reflecting process, i.e. research informs management how a management problem could be solved, and the
results of this induced change are taken into account in the research. The major objective, and hence the criterion for the
quality of the research, is whether the research efforts solved the problem and implemented the desired change.

Advantages action research:


 Interplay between action and research to achieve desired changes.
 Implementation of the conclusion and solution to the problem.
 Builds upon the cooperation between participants and researchers.

Disadvantages action research:


 Findings are just anecdotal evidence.
 Action research is often context-dependent. > Transferring knowledge to another context is difficult/impossible.
 The essential distance of the researcher is violated.
 Limited control over the environment.

Action research process


1. Diagnostic stage: Problem identification and definition.
2. Science-based problem: Analysis, generating possible solutions to the problem.
3. Action design: Designing actions responding to the problem.
4. Action stage: Executing actions designed.
5. Assessment: Investigating the effects of the actions.
6. Learning: Assessing transferability of results.

Grounded theory (H10)


Grounded theory: Starts from collected data (not with a theory), and uses this data in an iterative process of coding,
categorizing and comparing to formulate a new grounded theory.
 Grounded theory is a general strategy in qualitative research.
 Induction (data  theory)

Research in grounded theory is an iterative three-stage process, based on two principles:


1. Open coding: All information pieces are labelled with categories (new conceptual units emerge, new
comparisons and new recategorizations).
2. Axial coding: Identify linkages between categories, developing theoretical explanations. Also check
whether relationship holds for newly collected data.
3. Selective coding: Focus on few principal categories and its relations and attempts to develop a new grounded
theory.

 Theoretical sampling: Which additional cases would be most useful to build and develop a theory?
> No representativeness considered.
 Theoretical saturation: Stop process if new categories and cases do not improve/add to the understanding of
the phenomena (if it is not relevant).
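The three coding stages can be caricatured in code (the interview fragments, keyword rules and categories are all invented, and real grounded-theory coding is interpretive rather than rule-based):

```python
# A rough sketch of open -> axial -> selective coding on invented fragments.
from collections import Counter

fragments = [
    "I left because the pay was low",
    "My manager never listened",
    "The salary did not match the workload",
    "Overtime was unpaid and exhausting",
]

# Open coding: label each fragment with a category (assumed keyword rules).
rules = {"pay": "compensation", "salary": "compensation",
         "unpaid": "compensation", "manager": "leadership"}

def open_code(fragment):
    for keyword, category in rules.items():
        if keyword in fragment.lower():
            return category
    return "uncoded"

codes = [open_code(f) for f in fragments]

# Axial coding: compare categories across the data set (here just counted;
# a real analysis checks whether linkages hold for newly collected data).
category_counts = Counter(codes)

# Selective coding: focus on the principal category.
core_category = category_counts.most_common(1)[0][0]
print(codes, core_category)
```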


Criteria to assess how well a research is conducted (grounded theory), instead of validity and reliability:
1. Fit: How well do categories represent real incidents?
2. Relevance: How useful is the theory for practice?
3. Workability: Quality of the explanation offered and assess if the theory works.
4. Modifiability: Can the theory be adapted if new data is compared to it? (Must be flexible enough).

Advantages:
- Framework for systematic inquiry into qualitative data.
- Theory development.

Disadvantages:
- Feasibility problems (e.g. researcher can’t be free of pre-theoretical thoughts).
- Very time-consuming because of its iterative character.
- Criticized for not generating theories, but generating categorization systems.

Chapter 11 Case studies


Case study: An empirical inquiry that investigates a contemporary phenomenon within its real-life context; when the
boundaries between phenomenon and context are not clearly evident; and in which multiple sources of evidence are used.
- Much more an approach to investigating a phenomenon than a method of collecting information.
- Real-life context > Very context-dependent.
- Multiple approaches are combined and/or multiple cases are studied.
 Several sources of evidence: 1. Interviews 2. Documents and archives 3. Observation
 Suitable for explanatory, descriptive and exploratory research.

- Follows replication logic (not sampling logic, as in a survey).


> Results are therefore not generalizable to a population, but to a theoretical proposition.
Replication logic: Same phenomenon under the same conditions; Phenomenon differs if situation differs.
- Objective: Understand a real problem and use gained insights for developing new explanations and theories.

- Multiple case study  Multiple cases are investigated (most of the time the best option; robust)
- Single case study  one case is investigated.
o Sufficient for a critical case study (closing a longer series of case studies)
o Extreme or unique cases
o Pragmatic reasons (convenience)

Advantages of case study:


- Relies on multiple sources of evidence (interviews, observations and secondary data).
- Consideration of the specific, real-life context.

- Useful in theory building, answers ‘why’ and ‘how’.

Disadvantages of case study:


- Not generalizable to a population.
- High risk of bias.

Single versus multiple case studies:


 Single case studies: Rely on one single case.
o If the single case study closes a longer series of case studies written by others.
o To investigate extreme or unique cases.
 Multiple case studies: Rely on several cases.
o Results are considered more robust.
o Select the best cases.

Conducting a case study


1. Purpose clearly defined
Specific / Also disclose theoretical expectations
2. Research process detailed
Increases accountability of the research (easier to assess).
- Population used
- Sampling method
- How information is obtained
- Consulted documents/archives
3. Research design thoroughly planned
Find out what sources exist and how to use them (triangulation  the different sorts of evidence provide different
measurements of the same phenomenon and increase construct validity).
Build up a database with all the information obtained.
4. High ethical standards applied
5. Limitations frankly revealed
6. Adequate case study analysis
For example, pattern matching and time-series analysis
7. Findings presented unambiguously
Disclosing all insights, including those that contradict your proposition.
8. Conclusion justified
Do not extend the scope of the project.
Do not generalize for population (not representative).

Chapter 12 Experimentation
Experiments: Studies involving intervention by the researcher beyond that required for measurement. The usual
intervention is to manipulate a variable in a setting and observe how it affects the subjects being studied. The
researcher manipulates the independent or explanatory variable and then observes whether the hypothesized dependent
variable is affected by the intervention.
 Uses questionnaires, observations (and sometimes secondary data).
 Causal studies / relationships. Three types of evidence:
1. There must be a correlation between IV and DV:
- Do IV and DV occur together in the way hypothesized?
- When IV does not occur, is there also an absence of DV?
- When there is more or less of IV, do we find more or less of DV?
2. Time order of IV and DV (IV must occur before DV).
3. Only IV influences the DV (NO EV’s!).


Advantages:
 The ability to uncover causal relationships (and manipulate the IV).
 The ability to control extraneous and environmental variables.
o Extraneous variables (Control and confounding variables): Describe background characteristics of the
participants (gender, age, education).
> Researcher can only control these variables through selection of participants.
o Environmental variables: Describe the situation in which the experiment takes place.
> Can be controlled by researcher, should be kept constant.
 The convenience and low costs of creating test situations (instead of searching for their appearance).
 The ability to replicate findings and thus rule out isolated or idiosyncratic results.
 The ability to exploit naturally occurring events (and to some extent field experiments).

Disadvantages:
 The artificial setting of the laboratory (can be improved by investment in the facility)
 Generalization from non-probability samples (can pose problems despite random assignment).
 The number of variables one can include is limited.
 Disproportionate costs in select business situations: Applications of experimentation can be expensive.
 Focus is restricted to the present and immediate future (prediction/past is impossible or difficult)
 The designed intervention is not always effective.
 Ethical issues related to the manipulation and control of human subjects.

Seven steps to make an experiment successful:


1. Select relevant variables for testing.
- Select variables that are the best operational representations of the original concepts.
- Determine how many variables to test.
- Select or design appropriate measures for them (must be adapted to situation, without compromising their original
purpose)
2. Specify the level(s) of treatment.
- For example, high/middle/low income.
- Based on simplicity and common sense.
3. Control the extraneous and environmental factors (experimental environment).
- Extraneous variables: age, gender, business experience.
- Environmental variables: physical environment
- Blind: participants do not know if they get experimental treatment.
- Double blind: neither experimenters nor participants know who gets the experimental treatment.
4. Choose an experimental design (suited to the hypothesis).
- Unique to the experimental method.
- A good design improves generalizability of results.
5. Select and assign subjects to groups.
- Preferably random assignment to groups.
- Otherwise, matching (non-probability quota approach) – have subjects in the control and experimental group that
are similar to each other (visualized in a quota matrix).
6. Pilot-test, revise and conduct the final test.
Same as in other research approaches.
7. Analyse the data.

Control group: Base level. Participants are not exposed to IV manipulation.


Experimental group (treatment): Participants are exposed to IV manipulation.
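Random assignment to the two groups (step 5 above) can be sketched as follows (subject labels are invented):

```python
# Randomly assigning subjects to control and experimental groups, so that
# extraneous participant characteristics are spread evenly across groups.
import random

random.seed(7)
subjects = [f"subject_{i}" for i in range(20)]
random.shuffle(subjects)

half = len(subjects) // 2
experimental_group = subjects[:half]  # exposed to the IV manipulation
control_group = subjects[half:]       # base level, no manipulation

print(len(experimental_group), len(control_group))
```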


Laboratory experiments: Conducted in an unnatural setting; researchers can fully control the setting / variables.
 BUT: Participants are aware that they are participating in an experiment. > Behaviour might differ.
 The experimenter effect is problematic, as experimenter and participants interact more than in field experiments.
Field experiments: Conducted in a natural setting; participants unaware that their behaviour is being monitored.
 More heterogeneous group: Reflects the population better than the laboratory experiment.
 No/limited control on research setting > Ability to manipulate the IV is smaller. Other: Ethical issues.

Validity in experimentation
Internal validity: If the IV has caused the change in the DV.
External validity: When the results of the experiment can be generalized to some larger population.

Seven threats to internal validity:


1. History
- Events might occur during experiment, which confound/also influence DV.
- Example: A critical newspaper article about payment appears while you are testing people's knowledge of the topic.
2. Maturation:
- Changes within subject due to the passage of time (e.g. hungry, tired).
3. Testing:
- Learning effect. Experience of first test influences results of second test.
4. Instrumentation:
- Changes in observations, due to instrument or observer.
- Example: different questions or different observers.
5. Selection:
- Groups should be equivalent in every respect (control & experimental group).
- Retain by random assignment or matching.
6. Statistical regression:
- Random fluctuations over time are problematic if groups are selected on extreme values.
- Example: Even though most of the time you are good-humoured, sometimes you are bad-humoured.
7. Experiment mortality:
- Composition of groups changes during experiment > Comparison will be distorted.

Other effects that cannot be solved with random assignment:


1. Diffusion or imitation of treatment
When people in the experimental and control groups talk to each other, the treatment spreads.
2. Compensatory equalization
When the experimental treatment is much more desirable, there is reluctance among control members (compensatory
actions can solve this).
3. Compensatory rivalry
When participants know they are in the control group and might try harder/compensate.
4. Resentful demoralization of the disadvantaged
When treatment is desirable, control group members become resentful and lower their cooperation and output.
5. Local history
When the history effect happens to both the control/experimental group.

Three threats to external validity (X = experimental treatment/manipulation):


1. The reactivity of testing on X:
Pre-test sensitizes participants > Respond in a different way to X in post-test.
2. Interaction of selection and X:
When selected subjects do not properly represent the desired population.


3. Other reactive factors:


- Experimental settings: Laboratory experiment can have a biasing effect on the subjects’ response to X.
- Knowledge of being a participant.
- Possible interaction between X and subject characteristics.

Experimental research design (E = experimental effect)


True experimental design:
1. Experimental group and control group.
2. Random assignment or matching.

One group pre-test–post-test design: O1 X O2 Experimental group


* Experimental group only; no control group, no randomization, no control.

Post-test only control group design: R X O1 Experimental group


R O2 Control group (no X)
* Pre-tests are not necessary when randomization is possible.  E = O1 – O2

Pre-test – post-test control group design: R O1 X O2 Experimental group


R O3 O4 Control group (no X)
* If randomization was very effective, expect that O1 and O3 are equal.  E = (O2 – O1) – (O4 – O3)
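Computing the experimental effect E for this design is simple arithmetic (the group means below are invented):

```python
# Experimental effect for the pre-test/post-test control group design:
# E = (O2 - O1) - (O4 - O3), the treatment group's change corrected for
# the change that occurred anyway in the control group.
o1, o2 = 50.0, 62.0  # experimental group: pre-test, post-test means
o3, o4 = 51.0, 54.0  # control group: pre-test, post-test means

effect = (o2 - o1) - (o4 - o3)
print(effect)
```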

Quasi-experimental design:
Experimental group, control group (non-equivalent), no randomization, no control.
> Field experiment: Work with existing groups > No randomization, control and equivalence between groups.
Non-equivalent control group design: O1 X O2 Experimental group
O3 O4 Control group (no X)
* Compare pre-test results (O1 – O3) to determine the degree of equivalence between groups.
- Time-series design: Repeated observations before and after the treatment.

The experimental approach can also be combined with the survey approach.
Factorial surveys: = Vignette research.
 Researcher presents the respondent with a brief, explicit description of a situation (description = IV); and then asks
him/her to assess the situation / make a decision (answer = DV).

Testing effect: People know what to expect because they were (pre-)tested before.
 Make sure that there’s sufficient time between the tests to reduce the testing effect.
 This is why the pre-test is often left out in social studies (prevents testing effect).
 Testing effect: O1 affects O2.
 Reactivity effect: O1 affects X.

Extensions of true experimental design:


Besides the basic designs, extensions are often used. They differ from the basic designs in:
1. The number of different experimental stimuli that are considered.
2. The extent to which assignment procedures are used to increase precision.
Factor: widely used to denote an independent variable (male/female, no training/brief training/good training)
Active factors: can be manipulated
Treatment level: different levels of active factors
Blocking factor: cannot be manipulated, but used for classification (gender, age)

Completely randomized design


Chapter 13 Fieldwork: questionnaires and responses (SURVEYS!)


The instrument design process includes three phases:
1. Developing the instrument design strategy: Create investigative questions.
2. Constructing and refining the measurement questions: Create measurement questions.
3. Drafting and refining the instrument: Create instrument design.

Phase 1: Developing the instrument design strategy


To plan a strategy for the survey, there are four important questions that need to be asked:
1. What type of data is needed to answer the management question?
o Nominal, ordinal, interval, ratio.
2. What communication approach will be used?
o Personal, telephone, self-administered, web-based, mixed mode.
3. Should the questions be structured (closed), unstructured (open-ended)?
 Open-ended questions: Allow participants to reply with their own choice of words/concepts.
> Problems: Frame of reference (interpretation) and getting irrelevant responses.
 Closed questions: Limit participants to a few predetermined response possibilities.
4. Should the questioning be undisguised (direct) or disguised (indirect)?
 Direct questions: Participant should be able to answer them openly and unambiguously.
 Indirect questions: Designed to provide answers through inferences from what the participant says.

Disguised (indirect): Designed to conceal the question's / survey’s true purpose. WHY?
 To avoid bias if it is about a sensitive, boring or difficult topic.
 Useful when we seek information that is available from the participant, but not at the conscious level.
 Disguising the sponsor for strategic reasons or if name influences answering behaviour.

Questionnaires (= interview schedules) contain three types of measurement questions:


 Administrative questions: Identify participant, interviewer, interview location and conditions.
 Classification questions: Goal: Categorize answers based on participant characteristics > Reveal patterns.
 Demographic, economic, sociological, geographic.
 Target questions: Address the investigative questions (most important questions!).
o Structured questions: Fixed set of choices / closed questions.
o Unstructured questions: Do not limit responses / open-ended questions.
o Combination of structured and unstructured questions.

Phase 2: Constructing and refining the measurement questions


Question construction involves three critical decision areas:
1. Question content.
2. Question wording: Shared vocabulary, unambiguous, not misleading, etc.
3. Response strategy: Offer options that include unstructured or structured response.
Unstructured response: Open ended response, free choice of words, free-response.
Structured response: Closed response, specified alternatives provided.
> Closed response strategies: Dichotomous, Multiple-choice, Checklist, Rating, Ranking.

Distinguish between questioning and response structure.


Questioning structure: The amount of structure placed on the interviewer.
Response structure: The amount of structure placed on the participant.
 Structure limits the amount of freedom when the interviewer asks the participant questions.

Different response strategies:


Free response strategy Open-ended questions (= free-response questions).
- Free-response questions
Dichotomous response strategy Provides only two options (often suggesting opposing responses (yes or no)).
- Dichotomous questions

Multiple-choice strategy There are more than two alternatives (but you can only choose one answer).
- Multiple-choice questions Problems (using this strategy) can be:
 One or more responses have not been anticipated.
 List of choices is not exhaustive or not mutually exclusive.
 One question can be divided into several questions.
 Order and balance of choices (do not put the correct answer in the middle, first or
last option; as many positive as negative choices).
 Unidimensional scale (different aspects of the same dimension).
Checklist response strategy Like the M-C strategy, but you can choose more than one answer. Order is unimportant.
Rating response strategy Gradations of preference, interest or agreement. Participants can position each factor on
- Rating questions a scale. Order unimportant.
Ranking response strategy Relative order of alternatives is important (e.g. order your top 3).

Characteristics of response strategies


Per strategy: type of questions; type of data; number of answer alternatives; number of participant answers.
- Dichotomous: closed (structured); nominal; 2 alternatives; 1 answer.
- Multiple-choice: closed (structured); nominal, ordinal; 3-10 alternatives; 1 answer.
- Rating: closed (structured); ordinal, interval, ratio; 3-7 alternatives; max. 7 answers.
- Rank ordering: closed (structured); ordinal; max. 10 alternatives; max. 10 answers.
- Free response: open (unstructured); nominal, ratio; no fixed alternatives; 1 answer.

Phase 3: Drafting and refining the instrument


Drafting and refining the instrument is a multistep process:
1. Develop the participant-screening process (personal/telephone) along with the introduction.
2. Arrange the measurement sequence:
> Branched question: If the content of the question assumes other questions have been asked and answered.
3. Prepare and insert instructions including termination, skip directions and probes.
4. Create and insert a conclusion, including a survey return statement (participation has been valuable!).
5. Pre-test specific questions and the instrument as a whole (identify problems before data collection).

Guidelines for the sequence of questions:


 Place the more interesting topical target questions early on (attention-getting & human-interest questions).
 Place personal/ego-threatening questions near the end (prior: use buffer questions).
 Place challenging questions later in the questioning process (simple > complex; general > specific = funnel approach).
 Use transition statements between the different topics of the target question set.

What could be major failures of the survey instrument design?


- Failure to develop target questions which will answer your investigative questions.
- Failure in selecting the most appropriate communication approach.
- Failure in drafting specific measurement questions (content, wording, and sequence of questions).
- Failure to screen participants who are representative of the population.
- Failure to test the instrument properly.

What are major problem assumptions made by researchers?


- Participants are motivated to answer every question truthfully and fully.
- Participants know or understand key words or phrases.
- Participants will answer the question from the same frame of reference that the instrument assumes.
- Participants will do calculations, averaging, or even diligent remembering in order to answer a question.
- Assuming that the development of good survey questions is a simple process.


Chapter 14 Measurement and scales (SURVEYS!)


Four types of data types/measurement scales:
 Nominal: Categories are mutually exclusive and collectively exhaustive (the listed categories are the only available choices).
 Ordinal: Nominal characteristics + indicator of order.
 Interval: Ordinal characteristics + equality of interval (distance between 1;2 = distance between 2;3).
 Ratio: Interval characteristics + there’s an absolute zero or origin (divide and multiply possible).

Types of data and their measurement characteristics


- Nominal: classification. Basic empirical operation: determination of equality. Example: gender (male vs. female).
- Ordinal: classification + order. Operation: determination of greater or lesser value. Example: doneness of meat (well, medium, rare), grades.
- Interval: classification + order + distance. Operation: determination of equality of intervals or differences. Example: temperature in Celsius.
- Ratio: classification + order + distance + origin. Operation: determination of equality of ratios. Example: age in years, profits in €.
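The cumulative nature of the four scale types can be restated as data: each scale inherits all properties of the previous one and adds exactly one more (a small illustrative sketch):

```python
# The four measurement scales and the empirical properties each supports.
scales = {
    "nominal":  {"classification"},
    "ordinal":  {"classification", "order"},
    "interval": {"classification", "order", "distance"},
    "ratio":    {"classification", "order", "distance", "origin"},
}

# Each scale is a strict superset of the weaker one before it.
order = ["nominal", "ordinal", "interval", "ratio"]
for weaker, stronger in zip(order, order[1:]):
    assert scales[weaker] < scales[stronger]

print(scales["ratio"] - scales["interval"])  # the property ratio adds
```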

Two sources of measurement differences (potential error):


1. Systematic error: Results from a bias.
2. Random error: The remainder. Occurs erratically.

Four major error sources:


 Participant: Little knowledge (> guesses), temporary factors (e.g. fatigue, boredom, anxiety, etc.).
 Measurer: Suggestive questions, body language, etc.
 Situational factors: Presence of others & any condition that places a strain on the interviewer/session.
 Data-collection instrument: Confusing, ambiguous, poor selection of question items.

Three major criteria for evaluating a measurement tool:


1. Validity: Refers to the extent to which a test measures what we wish to measure.
a. Content validity.
b. Criterion-related validity.
c. Construct validity.
2. Reliability: Has to do with the accuracy and precision of a measurement procedure.
a. Stability.
b. Equivalence.
c. Internal consistency.
3. Practicality: Is concerned with a wide range of factors of economy, convenience and interpretability.

1. Validity
Two major forms of validity:
 External validity: The data’s ability to be generalized across persons, settings and times.
 Internal validity: The ability of the research instrument to measure what it is meant to measure.

a. Content validity: Does the measurement instrument cover the investigative questions? Is all included? (Content).
 Good content validity: Representative sample and instrument covers all relevant topics (= subjective).
 Judgemental evaluation: Researcher judges content validity by defining of the topic, items and scales.
 Panel evaluation: Panel of people/judges judge content validity.

b. Criterion-related validity: Success of measures used for prediction or estimation of e.g. behaviour.


 Concurrent validity: Estimation of the present.


 Measure at one point in time. Use two different measurement instruments > Correlate?
 E.g. Cito results & judgement of teacher.
 Predictive validity: Prediction of the future.
 Measure at two points in time. Use two different measurement instruments > Correlate?
 E.g. Cito results & seeing over a period of time whether the student does well in the assigned education level.

c. Construct validity: Success of measures identifying and representing underlying constructs.


 Sub-constructs should be sufficiently distinct from each other.

2. Reliability
Reliability: A measure is reliable to the degree that it supplies consistent results (under different times/conditions).
 If a measurement is not valid, it hardly matters if it is reliable – because the measurement instrument does not
measure what the designer needs to measure in order to solve the research problem.

a. Stability (multiple measurements)


Stability: When something is measured twice, the outcomes should be similar.
 Test-retest: Compare the two tests, to learn how reliable they are (correlation).
 The two measurements should be neither too far apart (< 6 months) nor too close together.
 Participant learns more about the topic before the retest (= topic sensitivity).
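
The test-retest comparison above comes down to correlating the two rounds of scores. A minimal sketch in plain Python using Pearson's correlation coefficient; the score lists are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical scores of five participants on the test and the retest
test = [10, 12, 14, 16, 18]
retest = [11, 12, 15, 15, 19]

r = pearson_r(test, retest)  # close to 1 -> stable measurement
```

A correlation close to 1 indicates a stable instrument; here r ≈ 0.96.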

b. Equivalence (multiple measurers)


Equivalence: When different measurers measure the same situation *, the outcomes should be similar.
 * Same conditions, instrument, etc. BUT: Different measurers.
 Test: Compare scoring of different observers of the same event (correlation).
 Improve: Use well-trained measurers.

c. Internal consistency (multiple instrument items)


Internal consistency: Degree to which instrument items are homogeneous and reflect the same underlying construct.
 Correlation.
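
Internal consistency is commonly summarized with Cronbach's alpha, which compares the item variances with the variance of the summed scores. A sketch, assuming a small hypothetical respondents-by-items score matrix:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(rows[0])  # number of items
    n = len(rows)     # number of respondents

    def sample_var(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [sample_var([row[i] for row in rows]) for i in range(k)]
    total_var = sample_var([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical answers of five respondents to three items of one construct
scores = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
]
alpha = cronbach_alpha(scores)  # here approx. 0.97
```

Values above roughly 0.7 are usually read as acceptable internal consistency.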

3. Practicality
Practicality has been defined as economy, convenience and interpretability.
a. Economy:
 Limit the number of measurement questions, to limit the measurement time (and thus costs).
 Choice of data-collection method (a personal interview is more expensive than an online survey).
b. Convenience: The measuring device needs to be easy to use and apply.
c. Interpretability: When people other than test designers must interpret the results. Make interpretation possible:
 State the functions the test was designed to measure and the procedure by which it was developed.
 Detailed instructions for administration.
 Scoring keys and instructions.
 Norms for appropriate reference groups.
 Evidence about the reliability.
 Evidence regarding the inter-correlations of sub-scores.
 Evidence regarding the relationship of the test to other measures.
 Guides for test use.

Response methods
To quantify dimensions that are essentially qualitative, rating or ranking scales are used.


Rating scales: When variables are individually rated. There are many different sample rating scales:

Sample rating scales (data type in parentheses):

 Simple category scale / dichotomous scale (nominal): Two mutually exclusive response choices.
E.g. “Have you ever been self-employed? 1. Yes 2. No”
 Multiple-choice-single-response scale (nominal): Multiple response choices, but only one answer is sought.
E.g. “For which department are you working? a. Production b. Sales, etc.”
 Multiple-choice-multiple-response scale / checklist (nominal): Multiple response choices; the rater is allowed to select one or more alternatives.
E.g. “Check any of the sources where you would collect information about a new car: [multiple response choices]”
 Likert scale / summated scale (interval): Compares one person’s score with a distribution of scores from a well-defined sample group.
E.g. “Strongly agree – agree – neutral – disagree – strongly disagree”
 Semantic differential scale (interval): Measures the psychological meanings of an attitude object.
E.g. “Heathrow airport: Fast __:__:__:__: Slow; High quality __:__:__:__: Low quality”
 Numerical scale (ordinal or interval): Participants write a number from the scale next to each item.
E.g. “Very good 5 4 3 2 1 Very bad; Employees’ cooperation _____; Employees’ knowledge _____”
 Multiple rating list scale (interval): Similar to the numerical scale, but it accepts circled responses from the rater and the layout permits visualization of results.
E.g. “Please indicate how important or unimportant each service characteristic is: Fast reliable repair 7 6 5 4 3 2 1; Service at my location 7 6 5 4 3 2 1”
 Fixed sum scale (ratio): Discovers proportions; up to 10 categories may be used.
E.g. “Relative importance: Subject one X; Other subjects X; Sum 100”
 Stapel scale (ordinal or interval*): Alternative for the semantic differential scale when it is difficult to find bipolar adjectives (e.g. fast, slow).
E.g. “Company name X: +3 +2 +1 Technology leader -1 -2 -3; +3 +2 +1 Exciting products -1 -2 -3”
 Graphic rating scale (ordinal, interval* or ratio*): Enables the researcher to discern fine differences, e.g. with smiley faces or how much pain you are in.
E.g. “How likely are you to recommend X to others? Very likely |---------------------------| Very unlikely. Place an X at the position along the line that best reflects your judgment.”
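
The Likert (summated) scale owes its name to the scoring: item responses are mapped to numbers and summed, with negatively worded items reverse-scored first. A sketch with hypothetical items (q1–q3 and the choice of reversed item are made up; the 1–5 mapping and the 6 − score reversal are the usual convention):

```python
# Map the five Likert response labels to scores 1-5
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def summated_score(responses, reversed_items=()):
    """Sum Likert item scores; reverse-score negatively worded items."""
    total = 0
    for item, answer in responses.items():
        score = LIKERT[answer]
        if item in reversed_items:
            score = 6 - score  # 5 becomes 1, 4 becomes 2, ...
        total += score
    return total

# Hypothetical questionnaire: q3 is negatively worded
answers = {"q1": "strongly agree", "q2": "agree", "q3": "disagree"}
score = summated_score(answers, reversed_items={"q3"})  # 5 + 4 + 4 = 13
```

The summated score can then be compared with the distribution of scores from the sample group, as the table describes.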

Errors to avoid with rating scales:


 Leniency: Occurs when a participant is either an ‘easy rater’ or a ‘hard rater’.
o Easy rater: Error of positive leniency. Raters give people a higher score if they know them.
o Hard rater: Error of negative leniency. Acquaintances are rated lower because the rater is
aware of the tendency towards positive leniency and attempts to counteract it.
 Central tendency: Raters are reluctant to give extreme judgments.
 Halo effect: Rater introduces systematic bias by carrying over a generalized impression of the subject from
one rating to another. E.g. you may expect the student who does well on the first question of an
exam to do well on the second.

Ranking scales: Compare variables and make choices among them. There are different sample ranking scales:
Examples of ranking scales (all yield ordinal data):

 Paired-comparison scale: Choosing between two objects. When there are more than two objects, this
becomes a difficult task for the participant.
E.g. “Choose per question the most favourable answer: 1. X or Y 2. X or Z 3. Y or Z”
 Forced ranking scale: Lists attributes that are ranked relative to each other. The number of stimuli is limited.
E.g. “Rank ‘case’ in order of preference (1, 2, 3): _____X _____Y _____Z”
 Comparative scale: Ideal for comparison, if the participant is familiar with the standard.
E.g. “Compared to ‘case’, the ‘characteristic’ of ‘case’ is: Superior 1 2 3 4 5 Inferior”

Measurement scale construction: Five techniques:


1. Arbitrary: Custom-designed scale (subjective > Only content validity!).
2. Consensus: Panel of judges evaluate the items (relevance, ambiguity) > Time-consuming.
3. Item analysis: Measurement scales are pre-tested with a sample of participants (popular: Likert scale).
4. Cumulative: If one agrees with an extreme item (C), one will also agree with less extreme items (A & B).
 Time-consuming.
5. Factoring: Correlate items (from other studies) to detect their relationship (popular: Semantic differential).


Chapter 18 Hypothesis testing


Only learn this part, not the rest of chapter 18!

Testing for statistical significance:


1. State H0.
2. Choose statistical test. > Based on research design.
3. Select the desired level of significance > p < 0.05.
4. Compute the calculated difference value. > E.g. t, chi-square, etc.
5. Obtain the critical value.
6. Interpret the test.
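
The six steps can be walked through with a pooled two-sample t-test. A sketch in plain Python with hypothetical group scores; the critical value 2.101 is the standard t-table entry for df = 18 at a two-tailed significance level of 0.05:

```python
import math

# Hypothetical scores for two independent groups (step 2: independent t-test)
group_a = [5, 6, 7, 5, 6, 7, 5, 6, 7, 6]
group_b = [4, 5, 4, 5, 4, 5, 4, 5, 4, 5]

# Step 1: H0: the group means are equal; H1: they differ.
# Step 3: significance level alpha = 0.05 (two-tailed).

def pooled_t(a, b):
    """Step 4: compute the t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((v - ma) ** 2 for v in a)
    ssb = sum((v - mb) ** 2 for v in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled_var * (1 / na + 1 / nb))

t = pooled_t(group_a, group_b)

# Step 5: critical value from a t-table (df = 18, two-tailed alpha = 0.05)
t_critical = 2.101

# Step 6: interpret - reject H0 if |t| exceeds the critical value
reject_h0 = abs(t) > t_critical
```

Here t ≈ 4.88 > 2.101, so H0 is rejected and H1 is reinforced (not proven).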

H0: No difference/relationship.
H1: Difference/relationship.
 In research, you want to reject H0, and therewith reinforce H1 (NOT PROVE!).

Type I error: A true H0 is rejected (H1 is falsely accepted).
Type II error: A false H0 is accepted (H1 is falsely rejected).

How to select a test:


1. What does the test involve?
- One sample.
- Two samples.
- k samples (more than 2).
2. If two or k samples: are they independent (not related) or dependent (related)?
 Dependent: If results come from the same participant or if the same group is measured twice.
 Independent: Two groups will be tested once, separately.
3. Is the DV nominal, ordinal or scale (interval or ratio)?

Overview PPT will be given during exam. You should be able to provide the right cell within that overview based on the
situation provided. (Formulas will NOT be asked; you do NOT have to calculate these things!).
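
The three questions amount to a lookup in that overview. The mapping below pairs each combination with the test such charts conventionally list; treat it as an illustrative sketch, since the exact cells of the exam overview may differ:

```python
# (samples, relationship, data type) -> commonly listed test.
# The pairings follow the usual textbook chart; the exam overview may differ.
TEST_CHART = {
    ("one", None, "nominal"):          "chi-square goodness of fit",
    ("two", "independent", "nominal"): "chi-square test",
    ("two", "dependent", "nominal"):   "McNemar test",
    ("two", "independent", "ordinal"): "Mann-Whitney U test",
    ("two", "dependent", "ordinal"):   "Wilcoxon signed-rank test",
    ("two", "independent", "scale"):   "independent-samples t-test",
    ("two", "dependent", "scale"):     "paired-samples t-test",
    ("k", "independent", "ordinal"):   "Kruskal-Wallis test",
    ("k", "dependent", "ordinal"):     "Friedman test",
    ("k", "independent", "scale"):     "one-way ANOVA",
    ("k", "dependent", "scale"):       "repeated-measures ANOVA",
}

def select_test(samples, relationship, data_type):
    """Answer the three questions, get the matching cell of the chart."""
    return TEST_CHART[(samples, relationship, data_type)]

# E.g. two groups, each tested once separately, interval/ratio DV:
chosen = select_test("two", "independent", "scale")  # independent-samples t-test
```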


General
Data-collection methods: Interviews, Questionnaires, Focus groups, Observations, Secondary data.
Other qualitative approaches: Experiments, Action research, Case studies, Ethnographic research, Content analysis, Narrative analysis, Grounded theory.

Methods used in qualitative research:


- Case studies.
- Qualitative interviews: Semi-/unstructured / focus group.
- Observations: Participant observations.
 The advantage of qualitative research is the possibility to combine various methods.
 Methods available to combine with interviews and observations:
 Content analysis.
 Narrative analysis.
 Other methods that represent more general frameworks:
 Ethnographic studies.
 Action research.
 Grounded theory.

Experiment & Action research:


 Are very similar
 Differences:
o Experiments try to remove the influence of context; action research does not (= less controlled and less
generalizable).
o Action research is pragmatic, while experiments try to add to existing theories.

Experiments are quantitative. They use:


 Questionnaires, observations (and sometimes secondary data).

Action research:
 Observations, secondary data, interviews, questionnaires, focus groups (everything).

Case studies (qualitative):


 Interviews (focus groups), observations, documents and archives (secondary data).

Ethnography is very similar to case studies:


 Interviews, observation, secondary data.

Content analysis, narrative analysis and grounded theory:


 Interview, focus groups, observation, secondary data.

Important for exam:


 Be able to explain, provide examples and name some advantages/disadvantages.
 The seven steps of whatever will NOT be asked!
