BRM exam ch.1-18

Business Research Methods (Zuyd Hogeschool)


SV BRM Exam

Chapter 1 The nature of business and management research


What is (business) research? Research is always problem-solving based.
Research: Systematic inquiry that provides information to solve problems.
Business research: Systematic inquiry that provides information to guide business decisions.

Which different types of research are available?


There are four different kinds of research/studies:
1. Reporting study: Provide a summation of data or generate statistics.
- Little inference (= conclusion drawing).
2. Descriptive study: Answers who, what, when, where and how questions, by observing and describing
a subject or event (research variable).
- Deficiency: cannot explain why an event has occurred or why variables interact the way they do.
- Most popular in business research.
3. Explanatory study: Answers why and how questions, by explaining the reasons for a phenomenon that
the descriptive study has only observed.
- Correlational study: Studies the relationship between two or more variables.
- Uses theories/hypotheses to explain why a certain phenomenon occurred.
4. Predictive study: Predict when and in what situations an event might reoccur.
- Is rooted as much in theory as in explanation.
- High level of inference (= conclusion drawing).

What is research? Why should there be any question about the definition of research?
Research is a systematic enquiry whose objective is to provide information capable of solving the research problem. This
definition is rather general and fits all types of research. Questions about the definition of research often arise because
research can serve various purposes; in particular, we distinguish between reporting, descriptive, explanatory and predictive
research. Depending on the purpose, the kind of information to be obtained differs.

Applied research: Has a practical problem-solving emphasis. It is conducted to solve or provide answers to a real
management or business problem.
Pure/basic research: Provides answers to questions of a theoretical nature. It is less motivated by business
considerations, but more by academic considerations.

What is the difference between good and poor/unprofessional research?


Good research: Generates data that we can trust, as research is professionally planned and conducted.
Poor research: Generates data that we cannot trust, as research is poorly planned and conducted.

Describe characteristics of the scientific method:


Good research follows the structure of the scientific method. Characteristics of good research include:
1. Clear purpose and focus.
2. Plausible goals.
3. Detailed research process: Follow defensible, ethical and replicable procedures. (proposal, replicability).
4. Provide evidence of objectivity.
5. Research design thoroughly planned (objective results, representativeness of sample, minimize personal bias).
6. High ethical standards applied.
7. Limitations frankly revealed = reporting of procedures should be complete and honest.
8. Appropriate analytical techniques should be used = adequate analysis for decision-makers’ needs (validity, reliability,
probability of error, findings that have led to conclusions).


9. Findings presented unambiguously (interpretable in only one way).


10. Conclusions justified (data provides evidence).
11. Reports of findings and conclusions should be presented clearly.
12. Report should be professional in tone, language and appearance.
13. Researcher’s experience reflected.

In what different research philosophies is research embedded?


Research is based on reasoning (theory) and observations (data or information). The two main research philosophies are
positivism and interpretivism. Between these two positions various other research philosophies exist (realism).

The two philosophies compared:

Basic principles:
- View of the world: Positivism: the world is external and objective. Interpretivism: the world is socially constructed and subjective.
- Involvement of researcher: Positivism: the researcher is independent. Interpretivism: the researcher is part of what is observed and sometimes even actively collaborates.
- Researcher's influence: Positivism: research is value-free. Interpretivism: research is driven by human interests.

Assumptions:
- What is observed? Positivism: objective, often quantitative, facts. Interpretivism: subjective interpretations of meanings.
- How is knowledge developed? Positivism: by reducing phenomena to simple elements representing general laws. Interpretivism: by taking a broad and total view of phenomena to detect explanations beyond the current knowledge (look at the totality).
- Type of study: Positivism: quantitative. Interpretivism: qualitative.

Realism: Shares principles of positivism and interpretivism.


- Research requires the identification of how people interpret and give meaning to the setting they are in.
- Positivism: It accepts the existence of a reality independent of human beliefs and behaviour.
- Interpretivism: It accepts that understanding people and their behaviour requires subjectivity.
> Critical realism: A branch of realism. Recognizes the existence of a gap between the researcher's concept of
reality and the 'true' but unknown reality.

Scientific research: A process that combines induction, deduction, observation and hypothesis testing
into a set of reflective thinking activities.

Two different approaches of scientific reasoning:


1. Deduction: Theory > Prediction (hypothesis) > Observation > Analysis & Conclusion.
- A conclusion is derived by logical reasoning.
- Reasons given for the conclusion must agree with the real world (true).
- The conclusion must follow from the reasons (valid).
2. Induction: Observation > Analysis & Conclusion (= hypothesis, not proven yet).
- A conclusion is derived from observations of the real world.
- The hypothesis (= conclusion) is plausible if it explains the facts.
- The conclusion explains the facts, and the facts support the conclusion.
> Other conclusions/explanations may fit the facts as well. E.g. a €1 million campaign runs, but sales do not
increase: possible explanations include a bad campaign, insufficient stock, an employee strike, a hurricane, etc.

Combining induction and deduction (‘double movement of reflective thought’ – John Dewey):


1. Induction: Observing > Hypothesis.


2. Deduction: Hypothesis testing (through new observations) > Does the hypothesis explain the facts?

Building blocks of research: Concepts, constructs, definitions, variables, propositions, hypotheses, theories, models.
 Concepts and constructs are used at the theoretical level; variables are used at the empirical level.

Difference between concepts and constructs:


Concept: E.g. table, height of the table, etc.
- Fairly concrete.
- Tangible object or its properties.
- Culturally shared and accepted.

Construct: E.g. presentation skills > social skills, self-confidence, body language, knowledge of the subject, etc.
- More abstract.
- Intangible.
- Specifically developed for research purposes.
- Can combine multiple concepts or constructs.

Are the following words concepts or constructs?


a) First-line supervisor Concept
b) Employee morale Construct
c) Assembly line Concept
d) Overdue account Concept
e) Line management Concept
f) Leadership Construct
g) Price–earnings ratio Concept
h) Union democracy Construct
i) Ethical standards Construct


Difference between operational definition and dictionary definition:


Dictionary definition: Concept is described in a way so that the definition is applicable in many situations.
 A broad understanding of the concept.
Operational definition: Concept is described for the specific purpose of the research.
 A narrow understanding of the concept.

Variable: A symbol to which we assign a numeral or value.


- Dichotomous variables: Have two values, e.g. yes/no, male/female, coded as 0 or 1.
- Continuous variables: Infinite values, e.g. income, age, temperature, test score, etc.

Difference between concept and variable:


A concept is more abstract than a variable. The variable is the measurable representation of the concept.

Variables in a model (e.g. participating in training leads to higher productivity):


- Independent variables (IV): Predictor variable. Causes a dependent variable to occur.
> IV describes direct influencing factors (the IV leads to the DV).
> E.g. training.
- Dependent variables (DV): Outcome variable. DV describes what is investigated/explained.
> E.g. IV participation in training > DV productivity.
- Moderating variables (MV): Interaction variable. Second IV, which affects the IV-DV relationship.
 Whether a given variable is treated as an IV or MV depends on the hypothesis, the researched relationship
between the variables.
 E.g. age, amount of sleep/rest, etc.
- Intervening variables (IVV): Mediating variable. A factor that theoretically affects the DV, but cannot
be observed or has not been measured.
> The DV is reached through (partly thanks to) the IVV: IV > IVV > DV.
> E.g. IV participation in training > IVV skills > DV productivity.
- Control variables (CV): Variables that have little or no effect on the core of the problem
investigated, and thus can be ignored.
 E.g. The effect of the CV weather (sunshine) on DV productivity.
- Confounding variables (CFV): Affect the relation between IV & DV or between MV & DV.
 Control variable that has not been controlled properly and is related to IV. Therefore it might give an
alternative explanation for the DV. Always avoid CFV’s!
 E.g. weather, experience or amount of sleep/rest if you test people under different circumstances. Those who
have a higher pre-knowledge are more likely to enrol for training (so in the first place it has an influence on
the IV), at the end it will have influence on the productivity (DV).
- Normally the order is IV-DV-MV-CV-IVV. For example: ‘Training (IV) will lead to higher productivity (DV), especially among
younger workers (MV), when the sun is shining (CV), by increasing the skill level (IVV).’
- Extraneous variable (EV): Stands apart from the model (no relationship).
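The roles of these variables can be made concrete with a small simulation. The sketch below is purely illustrative: the variable names, effect sizes and sample size are invented, not taken from any study, and the mediation/moderation structure just mirrors the training-productivity example above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

training = rng.integers(0, 2, n)         # IV: dichotomous (0 = no training, 1 = training)
age = rng.integers(20, 60, n)            # MV: moderates the IV->DV relationship
skills = 50 + 10 * training + rng.normal(0, 5, n)  # IVV: training raises skills (invented effect)
young = (age < 40).astype(int)

# DV: driven by skills (the mediator), with an extra training effect for younger workers
productivity = 2 * skills + 5 * training * young + rng.normal(0, 5, n)

# Trained workers should be more productive on average in this simulated data
print(productivity[training == 1].mean() > productivity[training == 0].mean())
```

Running this prints whether the trained group's mean productivity exceeds the untrained group's; with the invented effect sizes it does, which is the pattern the hypothesis above asserts.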


Proposition: A statement about concepts that may be judged as true or false if it refers to observable phenomena.
Hypothesis: A proposition formulated for empirical testing.
- Describes the relationship among variables.
 Generalization: If the hypothesis is based on more than one case.
 The virtue of a hypothesis is that it limits what will be studied.

Difference between hypothesis and proposition:


A hypothesis is testable and usually formulated in explanatory research studies that attempt to test it. A proposition is
derived from rational considerations and is usually the outcome of exploratory research.

There are different types of hypotheses:


1. Descriptive hypotheses: State the existence, size, form or distribution of a variable.
- E.g. ‘8% will lose their job’. Researchers often use research questions rather than descriptive hypotheses.
2. Relational hypotheses: Describe a relationship between two variables.
- Correlational hypo.: Non-causal. Variables occur together without implying that one causes the other.
 E.g. ‘People in the UK give the EU a less favourable rating than do people in France’.
- Explanatory hypo.: Causal. Existence/change in one variable, causes a change in the other variable.
 E.g. ‘An increase in family income, leads to an increase in the percentage of income saved’.
 The IV needs to be the sole reason for the existence of or change in the DV.
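A correlational hypothesis can be checked by computing a correlation coefficient; note that a nonzero correlation by itself never establishes the causal direction that an explanatory hypothesis claims. A minimal sketch with invented numbers (loosely echoing the income/saving example above):

```python
import numpy as np

# Invented data: family income (thousands) and percentage of income saved
income = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
pct_saved = np.array([2.0, 4.0, 5.5, 7.0, 9.0])

# Pearson correlation coefficient between the two variables
r = np.corrcoef(income, pct_saved)[0, 1]
print(round(r, 3))  # strong positive correlation, close to 1
```

A high r is consistent with both the correlational hypothesis (the variables occur together) and the explanatory one, but only a causal design can show that income changes cause saving changes.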

How do you formulate a solid research hypothesis?


Three conditions that make a good hypothesis. It should be:
1. Adequate for its purpose: Explains what it claims to explain.
2. Testable.
3. Better than its rivals: It has greater range, probability and simplicity than its rivals.

Theory: The role of theory is explanation.


- The difference between theory and hypothesis:
 Hypothesis: A statement relating two variables.
 Theory: Provides the rationale why those two variables are related.

Models: The role of models is representation.


- The functions of models are description, explication and simulation.

Difference between theories and models:


The difference between a theory and a model is that the purpose of a theory is to explain a certain phenomenon, while the
purpose of a model is to represent that phenomenon. However, in business studies theories are often the foundation of
models.

What position would you take and why? ‘Theory is impractical and thus no good’ or ‘Good theory is the most practical
approach to problems’.
The question addresses the problem of the use of theories. Common to all theories is that they give a simplified
representation of reality. On the one hand, the usefulness of theories must therefore be assessed against the tension
between their abstract, often simplified picture of reality and reality's actual complexity. On the other hand, only theories
allow us to provide real explanations, i.e. explanations that are reasoned and apply to more than just one specific
phenomenon.


Chapter 2 The research process and proposal


The research process:

Research proposal: Includes management question hierarchy and research design.


 Presents a problem, discusses related research efforts, outlines the data needed for solving the problem and shows
the design used to gather and analyse data (includes management question hierarchy).
 Internal proposal: Within a company, usually small and solicited (request for proposal (RFP)).
 External proposal: Can be solicited or unsolicited, prepared by an outside firm to obtain a contract.

Management question hierarchy:


1. Management dilemma: Problem/opportunity.
 Focussed, narrowly defined, and relevant.
2. Management questions: How can we solve this problem? / How can we respond to this opportunity?
 Exploration phase (find public data, literature review).
3. Research questions: Should management use Strategy A? Yes/no. Strategy B? Yes/no. Etc.
 Hypothesis of choice that best states the objective of the study.
 Fine-tune when exploration phase is completed, and set the scope (what is not included?).
4. Investigative questions: What are the effects of Strategy A? Strategy B? Etc.
 Foundation on which the data collection instrument is based.
5. Measurement questions: What you will ask in a survey/interview/focus group/observation.
 Pre-designed or custom-designed questions. Hereafter, pilot testing.
6. * Decision: What is the recommended action given the research findings?


Benefits of a research proposal (to sponsor and researcher):


- Evaluate a research idea (the problem is clear and well defined).
- Ensure that sponsor and researcher agree on the research question.
- Provides a logical guide for the investigation.
- Offers the opportunity to spot flaws in an early stage of the research.
- Decide if the research goal has been achieved (by comparing final product with proposal).
- Encourages researcher to plan the project so that work progresses steadily towards a deadline.

Structure of the research proposal:


1. Executive summary.
2. Problem statement: Includes management dilemma, background, consequences, management question.
3. Research objectives: Includes management question hierarchy and hypotheses.
4. Literature review: Discusses related research efforts (recent/significant literature).
5. Benefits of study: Describes importance of the study.
6. Research design: Provides a detailed plan.
7. Reporting: Nature and form of results, stating what types of information will be received.
8. References.
9. Appendix: Measurement instruments, glossary, etc.
10. Others: Qualifications of researcher, budget, schedule, facilities and resources, etc.

Research design:
Research design: The blueprint for fulfilling objectives and answering questions.
 The strategy for a study and the plan by which the strategy is to be carried out.
 It specifies the methods and procedures for the collection, measurement and analysis of data.
1. Design strategy: Type, purpose, scope, time frame, environment.
2. Sampling design: Identify the target population and select the sample (to represent that population).
3. Data-collection design: Methods of data collection and measurement instruments used.
4. Pilot-testing: Test, conducted to detect weaknesses in the study’s research design.

Issues in the management problem formulation:


- Politically motivated research:
A manager's motives for seeking research are not always obvious. Managers might or might not express a genuine
need for specific information on which to base a decision.
- Ill-defined management problems:
Some categories of problem are so complex, value-laden and bound by constraints that they prove to be
intractable to traditional forms of analysis.

Some questions are answered by research and others are not. Distinguish between them.
One cannot investigate research problems that are essentially value questions or that are ill-defined, and one should be
careful when engaging in politically motivated research. An example of a value question: should a company close a certain
production facility because it is currently unprofitable, or should it bear the losses for a longer time? The answer to this
question cannot be found by research, as it depends mainly on the values you hold. An example of an ill-defined
management problem: how could we become more profitable? It is certainly possible to investigate drivers of profitability,
but not all drivers can be investigated. Thus, you would need to limit the research problem to specific drivers of
profitability.


Chapter 3 Literature review


What is scientific literature?
Four types of scientific literature, ranked from highest to lowest quality:
1. Articles in journals.
2. PhD theses.
3. Conference proceedings: Most recent, but the quality-control mechanism is less strict.
4. Books: Look at the reputation of the writer.
> All these literature sources are peer-reviewed. Peers: academics in the same field.
> Peer-reviewed: A quality mark; publications that fulfil the quality standards of sound scientific research.

What is a scientific literature review?


A literature review is a critical and in-depth evaluation of previous research. It summarizes a particular area of research,
allowing anybody reading the paper to establish why you are conducting this research. A good literature review expands
upon the reasons behind selecting a particular research question.

What are the purposes of a scientific literature review?


A scientific literature review serves the following purposes:
1. Establish the context of the problem by referencing to previous work.
 Isolated knowledge has no value; the value increases if you relate it to existing knowledge.
2. Understand the structure of the problem.
- Relate theories and ideas to the problem.
- Identify the relevant variables and relations.
3. Show the reader what has been done previously.
 Previous work that is related to your study, as you cannot assume that every reader is as knowledgeable as
you are.
- Show which theories have been applied to the problems.
- Show which research designs and methods have been chosen.
4. Rationalize the significance of the problem and the study presented.
 Show that your idea will make a valuable contribution.
- Synthesize and gain a new perspective on the problem.
- Show what needs to be done in light of the existing knowledge.

General problems of literature review:


 Authors have different styles of thinking and writing that are specific to certain disciplines.
 There is no perfect review, each is written from a particular perspective (e.g. economic/sociological).

What is the structure of a ‘good’ review?


There is no single best structure for a review. The ingredients of a good literature review:
- Basic ingredients: Ensure that it will give a decent account of the literature and inform the reader about
what has been done so far in the field.
1. Literature relates to the study’s problem statement.
2. Mentions ideas contributing to the exploration or explanation of the study’s problem statement.
3. Summarizes previous studies addressing the current study’s problem statement.
- ‘Seasoning’: Makes it your own work as it reflects your thoughts and assessment of the current literature.
It also points out why your current study makes an important contribution to the field.
4. Discusses the mentioned ideas against the background of the results of previous studies.
5. Analyses and compares previous studies in the light of their research design and methodology.
6. Demonstrates how the current study fits in with previous studies, and shows its specific new
contribution(s).


Critical review: Book or peer review, either prior to or after publication.
> Mention weak and strong points and discuss these using specific criteria.
> Give an overall verdict on the text and some explanation for it.

Writing a literature review is an iterative process of three tasks:


1. Searching information (literature).
 Use libraries’ online catalogues and bibliographic databases or indexes.
2. Assessing the information obtained.
 Skimming, reading and evaluating research (must be relevant and add to your information/arguments).
> Criteria: Prominence, date of publication, methodology, comparability, uniqueness.
3. Synthesizing the assessment of information.
> Compare studies to identify differences and congruencies.
> Explain or interpret the differences and congruencies.

What are promising search strategies?


The major problem of every search is how to find the right sources. On the one hand, you want to ensure that you do not
miss a relevant source; on the other hand, you want to avoid information overload, i.e. getting so much information that
assessing it becomes impossible. Successful search strategies combine expansive searching with filtering and selection.
Start with an expansive search and then use filters to reduce the pool of sources that needs to be assessed. Once you have
reduced the pool, it is wise to expand it again, e.g. by looking at which sources the selected sources cite.

How do you decide which literature should be included in a review? Criteria:


- Prominence: Is it peer-reviewed? How often has the piece been cited in the work of others?
- Date of publication: How old is the piece?
- Methodology: Do you think the authors of the piece provide sound research?
- Comparability: Does the piece relate to the arguments you make in your research, either by agreeing or disagreeing with you?
- Uniqueness: How unique is the piece, or does it just state what has been said before?

Literature sources: Primary and secondary literature sources.


Primary sources: Full-text publications of theoretical and empirical studies, which represent the original work.
 Academic journals, professional/trade journals, books, newspapers, public opinion journals, conference
proceedings/unpublished manuscripts, reports, research projects.
Secondary sources: A compilation of primary literature.
 Indexes, bibliographies, dictionaries, encyclopaedias, handbooks and directories.

Three types of literature reviews:


- Narrative literature review: Potentially subjective.
> The literature review is not always exhaustive enough or unbiased.
> Mainly addresses and evaluates different theoretical perspectives on a research problem.
- Meta analysis: Highly statistical (and objective).
> + Structured approach to summarize, compare and analyse different empirical studies addressing the same
research question (including statistical methodology).
> – Limited to quantifiable characteristics (e.g. does not include the methodological soundness of studies).
> – Limited to summarizing the results of studies (only useful if many empirical studies on the problem exist).
- Systematic review: Lies between a narrative literature review and a meta analysis.
 Assesses a complete set of relevant studies covering the field. Provides insight in the overall picture.
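The statistical core of a meta-analysis is pooling effect sizes across studies. A standard fixed-effect approach weights each study's effect by the inverse of its variance; the sketch below uses invented effect sizes and variances and is illustrative only, not a complete meta-analytic workflow.

```python
import numpy as np

# Invented effect sizes (e.g. standardized mean differences) and their
# variances from three hypothetical studies on the same research question
effects = np.array([0.30, 0.50, 0.40])
variances = np.array([0.04, 0.09, 0.01])

weights = 1.0 / variances                       # inverse-variance weights
pooled = (weights * effects).sum() / weights.sum()   # pooled effect estimate
pooled_se = (1.0 / weights.sum()) ** 0.5             # standard error of the pooled effect

print(round(pooled, 3), round(pooled_se, 3))  # pooled estimate near 0.39, SE near 0.086
```

The most precise study (smallest variance) dominates the pooled estimate, which is exactly the "statistical summary across studies" property that distinguishes a meta-analysis from a narrative review.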


Chapter 4 Ethics in business research


Ethics: The study of ‘right behaviour’; addresses how research should be conducted in a moral and responsible way.
> The goal of ethics in research is to ensure that no one is harmed or suffers adverse consequences from
research activities.

Two standpoints on research ethics:


- Deontology: The ends never justify means that are questionable on ethical grounds.
- Teleology: The morality of the means has to be judged by the ends served.
> The benefits of a study are weighed against the costs of harming the people involved.
> As a researcher, you have the responsibility to find the middle ground (ethical standards).
> Ethical research requires personal integrity from the researcher, project manager and sponsor.

Treatment of participants: research must be designed so that a respondent does not suffer physical harm, discomfort, pain,
embarrassment or loss of privacy. Researchers should follow three guidelines:
1. Explain the benefits of the study.
2. Explain the participant’s rights and protection.
3. Obtain informed consent.

Deception: Occurs when the participant is told only a part of the truth or when the truth is fully compromised.
> Reasons for deception:
1. To prevent biasing participants before the survey or experiment.
2. To protect the confidentiality of a third party (e.g. sponsor).
 The benefits of deception should be balanced against the risks to participants.
 In case of deception, the participant must be debriefed.

Informed consent: Fully disclosing the procedures of the research design before requesting permission to proceed
with the study.
1. Introduce yourself.
2. Briefly describe survey topic.
3. Describe target sample.
4. Tell who the sponsor is.
5. Describe the purpose of the research.
6. Give an estimate of the time required to complete the interview.
7. Promise anonymity and confidentiality (when appropriate).
8. Tell that participation is voluntary.
9. Tell that item non-response is acceptable.
10. Ask permission to begin.
11. * Conclusion: Give information on how to contact the principal investigator.

Debriefing participants: Involves several activities that follow data collection:
- Explanation of any deception.
- Description of the hypothesis, goal or purpose of the study.
- Post-study sharing of results.
- Post-study follow-up medical or psychological attention.

Right to privacy/confidentiality:


- Everybody has the right to the protection of personal data.
- Such data must be processed fairly and on the basis of consent of the person concerned.
- Everyone has the right of access to data that have been collected concerning him or her, and the right to have
them rectified.

 Protect confidentiality:
- Obtaining signed non-disclosure documents.
- Restricting access to participant identification.
- Revealing participants’ information only with written consent.
- Restricting access to data instruments where the participant is identified.
- Non-disclosure of data sub-sets.
 Non-disclosure: Refusal to reveal information.

Data collection in cyberspace (the virtual world, i.e. the World Wide Web). Also applicable to data-mining:
- Researchers are obliged to protect human subjects and ‘do right’.
- Cyber-research is particularly vulnerable to ethical breaches.
E.g. blurring between public and private venues, difficulty in obtaining informed consent, etc.
- That an action is permitted (or not precluded) by policy/law does not mean it is ethical or allowable.
- Inquiry must be done honestly and with ethical integrity.

What are ethical dilemmas regarding sponsors?


Unethical sponsors: Some sponsors might ask the researcher to behave in an unethical manner.
This will lead to distorted and untrue results. What the researcher can do:
- Educate the sponsor in the purpose of research.
- Explain the researcher’s role in fact-finding versus the sponsor’s role in decision-making.
- Explain how distorting the truth or breaking faith with participants leads to future problems.
- Failing moral persuasion, terminate the relationship with the sponsor.

What is the responsibility of the researcher?


 Responsibilities regarding participants:
- Informed consent.
- Debriefing participants (in case of deception).
- Right to privacy/confidentiality.
- Data collection in cyberspace.
 Responsibilities regarding the sponsor (right to quality research):
- Providing a research design appropriate for the research question.
- Maximizing the sponsor’s value for the resources expended.
- Providing data analysis and reporting techniques appropriate for the data collected.
- (Showing data objectively, regardless of the sponsor’s preferred outcome).
- Right to anonymity/confidentiality (sponsor, purpose and findings non-disclosure).
 Responsibilities regarding the team:
- Right to safety (e.g. work area).
- Ethical behaviour of assistants.
- Protection of anonymity (sponsors’ and participants’).
 Responsibilities regarding the research community.
- Conduct proper, ethical research to avoid scepticism and earn confidence.
- Be open and honest about limitations and samples.
- Don’t steer your research towards results you want to have.
- Plagiarism (copying from other sources without attribution) and falsification are unethical.


Chapter 5 Quantitative and qualitative research


Two types of studies:
- Quantitative studies: Rely on quantitative information (i.e. numbers and figures).
 More structured approach, which directs the researcher more.
- Qualitative studies: Rely on qualitative information (i.e. words, sentences and narratives).
> More likely to obtain unexpected information; often used for exploratory studies.
 Distinction: The kind of information used to study a phenomenon.

The quality of any research study does not so much depend on whether it is qualitative or quantitative, but rather it
depends on the quality of its design and how well it is conducted.

What is a research design?


Research design: The strategy for a study and the plan by which the strategy is to be carried out.
 It specifies the methods and procedures for the collection, measurement and analysis of data.
 The essentials of research design:
- The design is an activity- and time-based plan.
- The design is always based on the research question.
- The design guides the selection of sources and types of information.
- The design is a framework for specifying the relationship among the study’s variables.
- The design outlines procedures for every research activity.

What are the major types of research design?


1. Exploratory study.
2. Descriptive study.
3. Causal study.

What are the descriptors classifying research design?


Classifying research design using eight different descriptors:
1. Purpose of the study: Descriptive study vs. Causal (Predictive) study
2. Degree of research question crystallization: Exploratory study vs. Formalized study
3. Method of data collection: Observation vs. Communication vs. * Archival sources
4. Researcher’s ability to manipulate variables: Experimental design vs. Ex-post facto design
(control of variables)
5. Time dimension: Cross-sectional study vs. Longitudinal study
6. Topical scope: Statistical study vs. Case study
7. Research environment: Field conditions vs. Laboratory conditions vs. Simulations
8. Participant’s perceptions: No deviations vs. Some deviations vs. Researcher-induced deviations

1. Purpose of the study.


- Descriptive study: Who, what, where, when, how much?
 A question/hypothesis in which we ask/state something about size, form, distribution or existence of a
variable.
- Causal study: Explanatory study. Why? Tries to explain relationships among variables.
 One variable always causes another and no other variable has the same causal effect.
 Mill’s method of agreement: When two variables have only one condition in common, then that condition
may be regarded as the cause. There is an implicit assumption that there are no variables to consider other
than the ones given. This is never true, as the number of potential variables is infinite.
 Mill’s method of difference: Where the absence of C is associated with the absence of Z, there is evidence
of a causal relationship between C and Z.
o Predictive study: Asks what will happen in the future.

Distinguish between descriptive and causal studies:


A descriptive study is concerned with description, that is the who, what, where, when or how much in observations,
whereas in a causal study relationships between variables are identified, verified and established.

2. Degree of research question crystallization.


- Exploratory study: Goal: Develop hypothesis/questions for further research. Loose structure.
o Relies more heavily on qualitative techniques (instead of quantitative).
> Secondary data (document analysis), interviews (focus groups), observations, case studies.
o Two-stage design: 1. Explore, develop research question. 2. Answer research question.
- Formal study: Goal: Test hypothesis / answer the research question posed. Precise structure.
o Includes descriptive and causal studies.
 Distinction: The degree of structure and the immediate objective of the study.

Distinguish between exploratory and formal studies:


Exploratory studies tend to have loose structures with the purpose of discovering future tasks, or to develop future
hypotheses and questions for further research. The goal of a formal research design is generally specific: to test hypotheses
or answer research questions.

3. Method of data collection.


- Observation: Monitoring. Researcher records the information available from observations.
- Communication: Interrogation. Researcher questions subjects and collects their responses.
- * Archival sources: Secondary data.
 Qualitative and quantitative studies can rely on both methods of data collection.

4. Researcher’s ability to manipulate variables (control of variables).


- Experimental design: Researcher controls/manipulates the variables in the study.
 Discover whether certain variables have an effect on other variables.
 Most powerful support possible for hypothesis of causation.
- Ex-post facto design: Researcher does not control/manipulate the variables in the study.
 He only reports on what has happened/what is happening.
 He must hold factors constant (no influencing, no bias).

What are similarities between experimental and ex-post facto research designs?
Both designs try to show IV-DV relationship, or causal relationships, basically by:
1. Studying co-variation patterns between variables.
2. Determining time order relationships.
3. Attempting to eliminate the confounding effects of other variables on the IV-DV relationship.
 They often use the same data collection methods.

5. The time dimension.


- Cross-sectional study: Study carried out once, represents a snapshot of one point in time.
- Longitudinal study: Study repeated over an extended period.
 Advantage: Tracks changes over time, more powerful regarding tests of causality.
 Two varieties:
o Panel: Researcher studies the same people over time.
o Cohort groups: Researcher studies different people for each measurement.
 Qualitative and quantitative studies can rely on both time dimensions.


6. Topical scope.
- Statistical study: Designed for breadth, rather than depth.
 Capture population's characteristics by conclusions from a sample's characteristics.
 Census study: Based on the whole population (special case).
- Case study: Designed for depth, rather than breadth.

7. The research environment.


- Field conditions: Design in the actual environmental conditions (natural environment).
 Observe/interrogate people in their usual environment (home, work, shop, etc.).
- Laboratory conditions: Design in staged or manipulated conditions.
 Manipulate the environment, although the laboratory might be designed as the usual environment (e.g.
shopping aisle).
- Simulations: Design in an artificial environment.
 Replicate the essence of a system or process.

8. Participant's perceptions.
When participants believe that something out of the ordinary is happening, they may behave less naturally.
There are three levels of perception:
1. Participants perceive no deviations from everyday routines.
2. Participants perceive deviations, but as unrelated to the research.
3. Participants perceive deviations as researcher-induced (example: mystery shopper).

Three possible causal relationships between two variables:


1. Symmetrical relationship: Two variables fluctuate together, but do not cause a change in one another.
2. Reciprocal relationship: Two variables mutually influence or reinforce each other.
 E.g. self-confidence and performance.
3. Asymmetrical relationship: Changes in one variable (IV) are responsible for changes in another
variable (DV). Four types of asymmetrical causal relationships:
- Stimulus > Response.
 When you are challenged to justify your position during a management meeting your pulse rate
increases rapidly, and you speak out strongly in defence of your position.
- Property > Disposition (characteristic > opinion)
 You are a member of a minority ethnic group and this makes you very sensitive to ethnic type
comments by others.
- Disposition > Behaviour (opinion > behaviour)
 You have strong opinions about the degradation of our physical environment by some industries; as a
result you are highly selective in choosing the companies with whom you interview for career
opportunities.
- Property > Behaviour (characteristic > behaviour)
 You have grown up as a member of the upper social class and now follow the typical consumption
practices of that class.

Testing causal hypotheses


In testing causal hypotheses, we seek three types of evidence:
1. There must be a correlation between IV and DV:
- Do IV and DV occur together in the way hypothesized?
- When IV does not occur, is there also an absence of DV?
- When there is more or less of IV, do we find more or less of DV?
2. Time order of IV and DV (IV must occur before DV).
3. Only the IV influences the DV (no extraneous variables, EVs!).
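The first type of evidence, co-variation between IV and DV, is commonly quantified with the Pearson correlation coefficient. A minimal sketch (the IV/DV numbers below are purely illustrative, not from the text):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

iv = [1, 2, 3, 4, 5]   # hypothetical IV values, e.g. advertising spend
dv = [2, 4, 5, 4, 5]   # hypothetical DV values, e.g. units sold
print(round(pearson_r(iv, dv), 3))   # prints 0.775
```

A correlation alone is only the first piece of evidence: time order and the exclusion of other variables must still be established.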


Chapter 6 Sampling Strategies


What is the importance of the unit of analysis?
Unit of analysis: The level at which the research is performed and which objects are researched.
- The unit of analysis is what you sample. You can only choose 1 unit of analysis.
> E.g. Zuyd Hogeschool, Faculties of Zuyd, Departments of IB Zuyd, etc.
- Variables are the characteristics of the unit of analysis (which objects are described by the variable(s)?).
> E.g. Number of incoming students, number of graduating students, FTE in staff.

Each cell gets its own value. Afterwards you can compare the scores of the different Zuyd faculties.
Unit of Analysis         Number of incoming    Number of graduating    FTE in staff
                         students              students
International Business
HMSM
Facility management

Sampling: Draw conclusions about the entire population, by selecting some elements in a population.
Population element: = Unit of study. The subject on which the measurement is being taken.
Population: The total collection of elements about which we wish to make some inferences. E.g. 4000 files.
Census: Obtain information from all elements within the population (e.g. from all 4000 files).

Representative samples are only a concern in quantitative studies rooted in a positivistic research approach. Qualitative
studies rooted in interpretivism usually do not attempt to generalize their findings to a population.

Reasons for sampling:


- Lower costs.
- Greater accuracy of results.
 Better interviewing/testing, more thorough investigation of missing, wrong or suspicious information,
better supervision, better processing.
- Greater speed of data collection.
 The larger the sample size, the longer data collection will take. Long collection periods can cause biases as
events might occur (in that period) that influence respondents’ answers.
- Availability of population elements.
 Some situations require sampling. It is the only process possible if a population is infinite.

Two conditions when choosing for a census study instead of sampling:


1. Feasible when the population is small.
2. Necessary when the elements are quite different from each other (e.g. maker of stereo components).

What are the characteristics of accuracy and precision for measuring sample validity?
What makes a good sample?
Validity and representativeness of a sample: How well it represents the characteristics of its population.
 Representativeness of a sample depends on accuracy and precision.
- Accuracy: Degree to which bias and systematic variance are absent from the sample.
o Some sample elements will underestimate the population values, others overestimate these values.
Variations in these values compensate each other > Sample value close to population value.
o The less bias and systematic variance, the greater the accuracy.
- Precision (of estimate): Degree of sampling standard error (error variance). Must be within acceptable
limits for the study’s purpose.
o The smaller the standard error of estimate, the greater the precision of the sample.
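The standard error of a sample mean is estimated as s / √n, so precision improves as the sample grows. A minimal sketch (the population and sizes are made-up assumptions for illustration):

```python
import math
import random
import statistics

def standard_error(sample):
    """Estimated standard error of the sample mean: s / sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

random.seed(42)  # reproducible for the example
# Hypothetical population: 10,000 values around 100 with spread 15.
population = [random.gauss(100, 15) for _ in range(10_000)]

for n in (25, 100, 400):
    sample = random.sample(population, n)
    print(n, round(standard_error(sample), 2))  # shrinks as n grows
```

Quadrupling the sample size roughly halves the standard error, which is why precision gains become expensive quickly.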


What are the two conditions on which sampling theory is based?


1. There must be enough similarity among the elements in a population that a few of these elements will adequately
represent the characteristics of the total population.
2. While some elements in a sample underestimate a population value, others overestimate this value.

What are the six questions (steps) that must be answered to develop a probability sample/sampling plan?
1. What is the relevant population?
 Who or what do you want to investigate?
2. What are the parameters of interest? Variables.
- Population parameters: Summary descriptors of variables in the population. E.g. mean, variance.
- Sample statistics: Summary descriptors of variables in the sample.
 Sample statistics are used as estimators of population parameters.
 A parameter is a value of a population, while a statistic is a similar value based on sample data.
E.g. the population mean is a parameter, while a sample mean is a statistic.
3. What is the sampling frame? Complete list of population elements from which the sample is drawn.
 Without a sampling frame, a probability sample cannot exist!
4. What is the type of sample? Probability versus non-probability sample.
5. What sample size is needed?
- The greater the variance within the population, the larger the sample must be to provide precision.
- The greater the desired precision, the larger the sample must be.
- The greater the number of sub-groups of interest within a sample, the greater the sample size must be.
- If sample size exceeds 5% of the population, sample size may be reduced without sacrificing precision.
6. How much will it cost? Influences sample size and type, and the data-collection method.

What are the two main categories of sampling techniques and their varieties?
The types of sampling design are determined by the representation basis and the element-selection technique.
1. Representation basis technique:
 Probability sampling: Based on random selection.
- Each population element has a chance of being selected (sampling frame).
- Only probability samples provide precision (maximum precision and accuracy).
 Non-probability sampling: Not based on random selection and is subjective.
- Not every population element has a chance of being selected (no sampling frame).
- When probability sampling is not feasible (no sampling frame), or time and money budget is limited.
- Produces selection bias and non-representative samples.
2. Element selection technique:
 Unrestricted sampling: Each sample element is drawn from the (complete) population.
 Restricted sampling: Covers all other forms of sampling. Selection process follows complex rules.

Types of sampling designs


                     Representation basis
Element selection    Probability           Non-probability
Unrestricted         Simple random         Convenience
Restricted           Complex random:       Purposive:
                      - Systematic           - Judgement
                      - Cluster              - Quota
                      - Stratified           - Snowball
                      - Double

Probability sampling approaches:



Simple random sample: Simplest type of probability approach. Each member of the population has an equal
chance of being included in the sample.
 Give everyone a number, let your computer select numbers or use bingo/numbers in a hat.
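"Let your computer select numbers" can be sketched in a few lines; the frame of 30 numbered elements and n = 10 are illustrative assumptions:

```python
import random

random.seed(1)                                 # reproducible for the example
sampling_frame = list(range(1, 31))            # every population element gets a number
sample = random.sample(sampling_frame, k=10)   # equal chance, without replacement
print(sorted(sample))
```

`random.sample` draws without replacement, so no element can appear twice, which mirrors drawing numbers from a hat.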

Simple random sampling is often impractical, reasons:


- It requires a population list (sampling frame) that is often not available.
- It fails to use all information about the population.
- It may be expensive to implement in terms of both time and money.
 These problems have led to the development of alternative designs.

1. Systematic sampling
To draw a systematic sample you need to follow the following steps:
1. Identify the total number of elements in the population. (E.g. N = 30).
2. Determine the desired sample size. (E.g. n = 10).
3. Identify the sampling ratio k. (E.g. k = N / n = 30 / 10 = 3).
4. Identify the random start (drop a pencil (eyes closed) on your population list, see where the dot is). (E.g. = 2).
5. Draw a sample by choosing every kth entry. (E.g. all green marked people).
 Use in combination with other designs to minimize bias.

Advantages: Simplicity and flexibility.


Concerns: Over- or under-sampling of sub-groups (if the list has a periodic pattern) and monotonic trends in the population list. Deal with these by:
 Randomize the population before sampling.
 Change the random start several times in the sampling process.
 Replicate a selection of different samples.

2. Stratified sampling

Most populations can be separated into several mutually exclusive sub-populations (or strata). E.g. gender, education.
Stratified sampling: The process by which the sample must include elements from each strata.
Proportionate versus disproportionate sampling (deciding how to allocate a total sample among various strata):
Proportionate stratified sampling: Each stratum is properly represented, so that the sample drawn from it is
proportionate to the stratum's share of the total population.
Disproportionate stratified sampling: Any stratification that differs from the proportionate stratified sampling.

Why use stratified sampling (advantages)?


1. To increase a sample’s statistical efficiency.
2. To provide data to represent and analyse sub-populations (strata).
3. To enable different research methods and procedures to be used in different strata.
 Disadvantage: Expensive.

The process for drawing a stratified sample:


1. Determine the variables to use for stratification.
2. Group your population by the chosen variable/characteristic (create strata!).
3. Randomize the elements within each stratum.
4. Use simple random sampling (SRS) or systematic sampling within each stratum to create your sample.
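The four steps can be sketched for the proportionate case; the strata (gender) and the 60/40 split are illustrative assumptions, not from the text:

```python
import random

def proportionate_stratified_sample(strata, n):
    """strata: dict mapping stratum name -> list of its elements."""
    total = sum(len(elems) for elems in strata.values())
    sample = []
    for name, elems in strata.items():
        share = round(n * len(elems) / total)   # stratum's share of the sample
        sample += random.sample(elems, share)   # SRS within each stratum (step 4)
    return sample

random.seed(3)
population = {                                  # steps 1-2: stratify by gender
    "male":   [f"m{i}" for i in range(60)],     # 60% of the population
    "female": [f"f{i}" for i in range(40)],     # 40% of the population
}
sample = proportionate_stratified_sample(population, n=10)
print(sample)   # 6 "m..." elements followed by 4 "f..." elements
```

A disproportionate design would simply replace `share` with whatever allocation the researcher chooses per stratum.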

3. Cluster sampling
Cluster sampling: Population is divided into groups of elements, some groups are randomly selected for study.
 Area sampling: Populations that can be identified with a geographic area (most important form of clusters).
 Why clustering (advantages)? - Less expensive than simple random sampling.
- Also possible without sampling frame.
 Disadvantage: Lower statistical efficiency (more error), as naturally occurring groups tend to be internally homogeneous.

The process of drawing a cluster sample:


1. Create clusters with great intracluster variance (heterogeneity within each cluster) and homogeneity between clusters.
2. Use simple random sampling to select one or several clusters.
3. All elements within the selected cluster(s) are part of your sample.
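The three steps can be sketched as follows; the geographic clusters and their members are illustrative assumptions:

```python
import random

def cluster_sample(clusters, n_clusters):
    chosen = random.sample(list(clusters), n_clusters)           # step 2: SRS of clusters
    return [elem for name in chosen for elem in clusters[name]]  # step 3: take all elements

random.seed(7)
clusters = {                          # step 1: population divided into areas
    "north": ["n1", "n2", "n3"],
    "south": ["s1", "s2", "s3"],
    "east":  ["e1", "e2", "e3"],
    "west":  ["w1", "w2", "w3"],
}
print(cluster_sample(clusters, n_clusters=2))
```

Note the contrast with stratified sampling: here whole sub-groups are drawn at random and then studied completely, rather than drawing elements from within every sub-group.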


Design: In designing cluster samples (incl. area sample) we must answer several questions:
1. How homogeneous are the clusters?
2. Shall we seek equal or unequal clusters (i.e. should cluster sizes be similar)?
3. How large a cluster shall we take? No size is superior.
4. Shall we use a single-stage or multi-stage cluster?
5. How large a sample is needed? Depends on the cluster design, e.g. simple cluster sampling.

Stratified sampling vs. Cluster sampling


Stratified sampling:
1. We divide the population into a few sub-groups, each with many elements in it. The sub-groups are
selected according to some criterion that is related to the variables under study.
2. We try to secure homogeneity within sub-groups and heterogeneity between sub-groups.
3. We randomly choose elements from within each sub-group.

Cluster sampling:
1. We divide the population into many sub-groups, each with a few elements in it. The sub-groups are
selected according to some criterion of ease and availability in data collection.
2. We try to secure heterogeneity within sub-groups and homogeneity between sub-groups.
3. We randomly choose a number of the sub-groups; all elements in these sub-groups will be studied.

4. Double sampling (or sequential sampling or multi-phase sampling)


It may be more convenient or economical to collect some information by sample and then use this information as the basis
for selecting a sub-sample for further study.

Non-probability sampling approaches:


1. Convenience sampling: Choose whomever you can find; Selection based on convenience.
- No randomization, so the sample is not a good representation of the population.
- The least reliable design, but the cheapest and easiest way to conduct (useful in exploratory study).

Purposive sampling: Attempt to secure a sample that conforms to some determined criteria. There are three types:
2. Judgement sampling: The researcher uses his judgement to select elements conform to some criterion.
 Used in exploratory study.
3. Quota sampling: Used to improve representativeness by using general characteristics of the population.
 E.g. gender (50% male and 50% female), race, age, income level, employment status, political party.
4. Snowball sampling: Individuals are selected and are used to locate others who possess similar characteristics.
 Useful if you want to sample subjects that are difficult to identify.
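Quota sampling (point 3 above) can be sketched as filling category quotas from whatever respondents come along; the stream of passers-by and the 50/50 gender quota are illustrative assumptions:

```python
import random

def quota_sample(stream, quotas):
    """stream: iterable of (element, category); quotas: dict category -> needed count."""
    filled = {cat: [] for cat in quotas}
    for elem, cat in stream:
        if cat in filled and len(filled[cat]) < quotas[cat]:
            filled[cat].append(elem)            # accept until this quota cell is full
        if all(len(v) == quotas[c] for c, v in filled.items()):
            break                               # all quota cells filled, stop early
    return filled

random.seed(5)
# Hypothetical stream of convenient respondents with a recorded characteristic.
passers_by = [(f"p{i}", random.choice(["male", "female"])) for i in range(100)]
sample = quota_sample(passers_by, {"male": 5, "female": 5})
print({cat: len(v) for cat, v in sample.items()})
```

The quota improves representativeness on the chosen characteristic, but selection within each cell is still by convenience, so the design remains non-probabilistic.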


Data collection approaches (H7)


Data collection approaches:
- Primary or secondary data.
- Quantitative or qualitative data.
- Communication or observation approach (to gather primary data).
o Qual. & Observ.: Participant observations.
o Quant. & Observ.: Structured observations.
o Qual. & Com.: In-depth, semi-/unstructured interviews, focus groups.
o Quant. & Com.: Structured interviews & surveys.

Chapter 9 Secondary data and archival sources


What is the difference between primary and secondary data?
Primary data: Information/data collection by the researcher himself.
Secondary data (SD): Information/data that has already been collected by someone else (for other purposes).
 Have had at least one level of interpretation between the event and its recording.
 Qualitative: Multiple sources are merged to overcome the problem that the data do not perfectly fit your research problem.
 Provides information on the context of phenomena and therefore it adds to the total perspective.

Difference between primary/secondary literature and data:


 Literature: Insights, ideas, knowledge, theories, models.
 Data: Answers research questions by data-collection and analysis. E.g. results of observation.

Advantages of secondary data:


- Saves time and money.
- Often high-quality data.

Disadvantage of secondary data:


- Data was not collected with your research problem in mind > Might not perfectly fit your research problem.

In assessing the usefulness of secondary data, you need to address the following questions:
1. Information quality: Is the information provided in SD sufficient to answer your research problem?
a. Do the secondary data cover all the information you need?
b. Is the information available detailed enough?
c. Do the data follow the definitions you apply in your research problem?
d. Are the data accurate enough? (Evaluate their source).
2. Sample quality: Do the secondary data address the same population you want to investigate?
a. Do the secondary data refer to the unit of analysis you want to investigate?
b. Is the sample on which the data are based a good representation of the population?
3. Timeliness of data: Were the secondary data collected in the relevant time period? (not out of date).

How do you select sources of secondary data?


Source evaluation: Information is evaluated and selected based on five factors:
1. Purpose: What the author is trying to accomplish.
2. Scope: Date of publication, how much of the topic is covered and in what depth, whether coverage is
local, regional, national or international, etc.
3. Authority: Author and publisher are indicators of the authority of the source.
> Primary sources are the most authoritative.
4. Audience: Tied to the purpose of the source.
5. Format: How the information is presented and how easy it is to find a specific piece of information.
> E.g. index, arrangement of information (chronological, alphabetical, etc.).

What are typical sources of secondary data?


Secondary data sources can be classified along two dimensions: type of source (internal vs. external) and data format (written vs. electronic):
- Internal sources: All data sources within the organisation in which the researcher is working.
- External sources: All data sources outside the organisation.
- Written sources.
- Electronic sources: Blurring because of the increasing use of information technology (IT).

Sources of secondary data


Written sources:
- Internal: memos, contracts, invoices.
- External:
   Publishers of books, journals, periodicals: indexes, yearbooks.
   Government and supranational institutions: books, reports, online.
   Trade and professional associations: (annual) reports.
   Media sources: newspapers, magazines, special reports.
   Commercial sources: (annual) reports.

Electronic sources:
- Internal: management information systems, accounting records.
- External:
   Publishers of books, journals, periodicals: bibliographic databases.
   Government and supranational institutions: websites of statistical offices, CD-ROMs.
   Trade and professional associations: websites.
   Media sources: websites, CD-ROMs of complete volumes.
   Commercial sources: websites, data sets of previous studies.

What is data-mining?
Data-mining: Uncovering knowledge, identifying patterns in data and predicting trends and behaviours from data in
databases stored in data warehouses.
 Organizations collect a tremendous amount of information and record it in databases on a daily basis.
With data-mining one searches for valuable information within these large databases (often internal data).

Data-mining tools: Perform statistical analysis to discover and validate relationships.


Data warehouse: An electronic repository for databases that organizes large volumes of data
into categories to facilitate retrieval, interpretation and sorting by end-users.
Data marts: Intermediate storage facilities that compile locally required information.

How does data-mining work? Five steps:


1. Sample: Decide between census (the entire dataset) and a sample of the data.
2. Explore: Identify relationships within the data (explore trends, groups, outliers).
3. Modify: Modify or transform data (e.g. reduction, categorization of data).
4. Model: Develop/construct a model that explains the data relationships.
5. Assess: Test the model to estimate how well it performs by running the model against known data.


Data-mining information is valid when (criteria):


1. Accuracy: Data gathered is complete and match with the information you were looking for.
2. Reliability: To what extent the information obtained is independent from different settings.
3. Reality check: Do not use advanced analysis techniques without fully understanding the involved mathematics.

Big data: Large volumes of data generated by web use, mobile phones and customer, credit and debit cards.
- An opportunity, but it raises issues of privacy, security and intellectual property rights, and of the validity, reliability and completeness of the information.

Observations (H7 & H8)


Observations: Observe/learn about behaviour, events, people or processes.

Classification of observation studies


Research class               Purpose                                        Research tool
1. Completely unstructured   Generate hypotheses.                           Field notes (natural setting).
2. Unstructured              Emphasize the best characteristics of 1 & 4.
3. Structured                Emphasize the best characteristics of 1 & 4.
4. Completely structured     Test hypotheses.                               Observation checklist (control).

Advantages observational approach:


- Gathering information about people/activities that cannot be derived from experiments or surveys.
- Avoiding participant filtering and forgetting.
- Gathering environmental context information.
- Optimizing the naturalness of the research setting.
- Reducing obtrusiveness: better acceptance of observation by participants (compared to questioning).
- Reduces retrospective bias (collecting data at the time it occurs) & respondent bias (participant-initiated error).
- Gathering information about seemingly irrelevant things (e.g. eye contact, weather, time).
- More flexible limitations of the length of data collection.

Disadvantages observational approach:


- Slow and expensive process (observer, equipment).
- Limited to observation that can be seen (no opinions, values, intentions, attitudes, preferences, etc.).
 Limited reliability of inferences from surface indicators.
- Observation records are often large and difficult to analyse.
- Method reactivity bias: Change of behaviour because participant knows he/she is observed.
- The observer must be present when the event takes place.
- Limited in learning about the past or about the present at some distant place.

Participant observation
Participant observation: Qualitative observation approach. More flexible and less structured.
 Observer participates and dives into the participant’s world to gain insight in, and explore explanations for a
phenomenon.
 Two dimensions:
- Whether the observer actively participates.
> No distance; can influence participant’s behaviour.
> The more distant you are as an observer, the more descriptive are your observations.
- Whether the observer is concealed or not. Whether the participant knows that he is being observed.
> A concealed observer reduces bias, but there are ethical issues to concealment.


Field notes: Primary data-collection tool (participant observations). Four principles lead to a higher validity:
1. Direct notes in keywords.
2. Immediate full notes after you leave the setting.
3. Limit observation moment (time you are at the setting).
4. Rich full notes (very complete, everything you noticed).

Structured observation
Structured observation: Produces data suitable for quantitative analysis.
 Two dimensions: Direct vs. indirect observation; Concealed vs. not concealed observation.
- Direct obs: When the observer is physically present and personally monitors what takes place.
- Indirect obs: When the recording is done by mechanical, photographic or electronic means.
- Concealed obs: The participant is not aware of the observer’s presence (> Ethics).
- Not concealed: The participant is aware of the observer’s presence (> Method reactivity bias).
 Conduct structured observations by using a checklist > Quantifying what is observed.

What can you observe with structured observations? Behaviour and non-behaviour.
Behavioural observation: Observing behaviour.
- Non-verbal: Body movement, motor expressions, exchanged glances, etc.
- Linguistic: Interaction, transfer of information, annoying sounds/words (ah, uh).
- Extra-linguistic: Vocal, temporal, interactional and verbal stylistic behaviour.
- Spatial analysis: How people physically relate to others (distance maintained between each other).

Non-behavioural observation: Not observing behaviour, but records, conditions and processes.
- Record: Analysing historical or current data, public or private data.
- Physical condition: Analysing the conditions of something. E.g. plant safety compliances, inventory.
- Physical process: Analysing the process of something. E.g. manufacturing process, traffic flows.

How can you measure structured observations? Factual vs. inferential, physical traces.
Factual observation: Describes what is happening and what can be seen.
 E.g. time and day of the week, environmental factors, product presented, etc.
Inferential observation: Translates what is seen to a concept that cannot be observed.
 E.g. Credibility, interest, acceptance, concerns, effectiveness, customer acceptance of product, etc.
Observation of physical traces: Observing measures of wear and measures of deposit.
- Measures of wear: E.g. estimating library book use by looking at the number of torn pages in a book.
- Measures of deposit: E.g. estimating alcohol consumption by collecting and analysing domestic rubbish.

Communication (H7 & H8)


Communication: Surveying/questioning people and recording their responses for analysis.
 Learn about attitudes, motivations, intentions and expectations.

Strengths of the communication approach:


- Versatility: Gather info about attitudes, opinions, expectations and intentions.
- Geographic coverage: Telephone, mail or internet as medium.
- More efficient and economical than observation.

Weaknesses of the communication approach:


 The quality and quantity of the data depend heavily on the respondents:
- Willingness of the participant to cooperate.
- Ability of the participant: he/she may not possess the required knowledge.
- A participant may not have an opinion on the topic.
- Different interpretation of the questions by participants.


Quantitative interviews: Interviews are usually structured. Questionnaire/survey.


- Structured interviews: Goal is to describe or explain, not to explore.

Qualitative interviews: Interviews are usually semi-structured or unstructured. Memory list/interview guide.
- Unstructured interviews: No specific question/topic list to be covered; mental list of relevant topics.
 Flexible and might take another course than originally expected.
 Researcher wants to gain insight into what the respondents consider relevant and his interpretations.
- Semi-structured interviews: Question/topic list to be covered; ask questions similarly during all interviews.
 Start with specific questions, but allow the interviewee to follow his/her thoughts later on.
 Probing techniques are often used. E.g. TV interview journalist with a political decision-maker.

Structured and unstructured interviews


Structured interviews:
- Type of study: Explanatory or descriptive.
- Purpose: Providing valid and reliable measurements of theoretical concepts.
- Instrument: Questionnaire.
- Format: Fixed to the initial questionnaire.
Semi-structured or unstructured interviews:
- Type of study: Exploratory and explanatory (semi-structured).
- Purpose: Detect meanings from respondents about phenomena, and learn about respondents’ viewpoint on phenomena.
- Instrument: Interview guide; memory list.
- Format: Flexible, depending on the course of the conversation, follow-up and new questions raised.

Survey/questionnaire
Questionnaire/survey: Collecting quantitative information through structured questioning.

Three conditions for a successful survey: The participant must:


1. Possess the required information: Qualify participants with screening questions.
2. Understand that he/she has to give accurate information.
3. Be adequately motivated to cooperate: Task for the interviewer. Increase by incentive & good rapport:
 Giving an introduction and establishing a good relationship with the participant.

The introduction should contain:


- The objective of the study.
- The background of the study.
- How the participant was selected.
- How the information will be used.
- What is expected of the participant.
- The confidential nature of the interview.
- The benefits of the research findings.

Establishing a good relationship:


- There must be a relationship of confidence and understanding between interviewer and participant.
- Guarantee anonymity.
- Ask questions properly, record responses accurately and show interest.

Two factors can cause bias in interviewing:


- Non-response error. > With all types of survey. Reduce by call-back procedures.
- Response error.> Mostly with personal interviews (interviewer error).

Non-response error: Responses of participants systematically differ from responses of non-participants. Researcher:
1. Cannot locate the person to be studied.
2. Is unsuccessful in encouraging that person to participate.
Solutions to reduce non-response errors:

- Call-back procedures. > Better than weighting results, as you obtain the original answers.


- Weighting results from a non-response.
- Substituting someone else for the missing participant. > Ask others from household about this person.
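The weighting option above can be sketched numerically. All group names, shares and scores below are invented for illustration, not taken from the text: each respondent group is weighted by population share divided by sample share, so an under-represented group counts more.

```python
# Hedged sketch of weighting results for non-response.
# Shares and scores are invented for illustration.

population_share = {"young": 0.40, "old": 0.60}
sample_share     = {"young": 0.25, "old": 0.75}   # young people under-respond

# Weight = population share / sample share (young ~ 1.6, old ~ 0.8).
weights = {g: population_share[g] / sample_share[g] for g in population_share}

group_means = {"young": 6.0, "old": 4.0}          # e.g. mean satisfaction scores

# Unweighted mean follows the (biased) sample composition; the weighted
# mean restores the population composition.
unweighted = sum(sample_share[g] * group_means[g] for g in group_means)
weighted = sum(weights[g] * sample_share[g] * group_means[g] for g in group_means)
```

Here the unweighted mean (about 4.5) understates the weighted mean (about 4.8), because the more satisfied group responded less often.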

Response error: When the data reported differ from the actual data (mostly with personal interviews).
 Participant-initiated error: Occurs when the participant fails to answer fully and accurately.
 Interviewer error: When the interviewer's control of the process affects the quality of data.
- Failure to secure full participant cooperation.
- Failure to consistently execute interview procedures.
- Failure to establish an appropriate interview environment.
- Falsification of individual answers or whole interviews (cheating).
- Inappropriate influencing behaviour.
- Failure to record answers accurately and completely.
- Physical presence bias (e.g. young vs. old people).

Four communication approaches:


- Personal interviews: Face-to-face interview between two persons (interviewer and participant).
- Telephone interviews: Using the telephone to conduct an interview (+ arranging interv. + screening).
- Self-administered surveys: A questionnaire to be completed by the participant.
 Can be faxed/mailed (not e-mailed!) (a). Participants can also be intercepted with a paper questionnaire in central locations (b).
- Web-based surveys: Computer-delivered self-administered questionnaires (online).
 E-mail, website (a), pop-up window (b).
1. Target web survey: Researcher has control over who is allowed to participate in the survey (e-mail).
2. Self-selected survey: Researcher has no/very-limited control on who is responding (pop-up window).
3. Social-media-based survey: Between target and self-selected survey (sample based on social media contacts).
 Mixed mode: Combining several survey methodologies.
 Optimal communication approach: Answers research question and deals with constraints in time, budget, HR.

Probing: Technique of stimulating participants to answer more fully and relevantly. Probing styles:

- A brief assertion of understanding and interest: I see, yes, aha.


- An expectant pause.
- Repeating the question: Did the participant understand the question?
- Repeating the participant's reply.
- A neutral question or comment: “How do you mean?” or “Can you tell me more about it?”
- Question clarification: “I’m not sure I understand, can you tell me more?”

Qualitative interviews
An interview guide is important when conducting semi-/unstructured interviews, the main functions are:
- Memory list to ensure that the same issues are covered (in every interview).
- Memory list to ensure that the questions are asked in the same way.
> Increases the comparability of multiple interviews.
 The more specific the interview guide, the more structured the interview will be, and the less flexible the
interviewer is in responding to the respondents.

When writing an interview guide, you should ensure that:


- The guide contains questions about all topics, ordered logically.
- Formulate the questions in a language that is easily understood by the interviewees.
- The questions are not too specific, and the interviewee has the opportunity to reflect on the issue at hand.
- Avoid leading or suggestive questions (reduce the influence of the interviewer).
- Record some general and some specific facts about the respondent (age, gender, working department, years with
the company, etc.).

A researcher’s primary task in interviewing is listening.


Question types in unstructured interviews:
- Introductory questions: General questions that get the interview started and help to establish good rapport.
I’ve looked at your website, but still, can you tell me more about your background?
- Follow-up questions: Used to ask the respondent to elaborate on a given question or to clarify whether you
understood the respondent correctly. What do you mean by…?
- Probing questions: Similar to follow-up, but refer more specifically to a part of the answer. The respondent
told he made a decision, why did you choose to…?
- Specifying questions: Ask the respondent to elaborate on the answer and to offer more information.
What happened after the decision was taken?
- Direct questions: Provide information on how respondents look at a situation from their viewpoint and
often ask them to describe an opinion or feeling. What is your point of view on…?
- Indirect questions: Not directed at the respondent personally, but ask for a general point of view.
What do people around here think about…?
- Structuring questions: Used to go on to the next topic. If we have not missed important aspects of this
subject, I would like to move on to….
- Silence: A way to let the respondent know that you want to hear more.
- Interpreting questions: Asked in order to confirm that you understand the information correctly. Do you mean
that…?

Information recording:
- Unstructured interviews can be conducted by two interviewers, but are usually recorded by tape or digitally.
> Advantages: Focus on the conversation (instead of making notes) and you can listen to it again.
> Disadvantages: - People feel uncomfortable; this influences their answering behaviour.
- Technical problems can disturb the interview.
- Transcribing the information recorded is very time-consuming.
Demands on the interviewer in an unstructured interview (why experts are needed):

1. Background information.
2. To be able to direct the interview.
3. To decide whether you’ve heard enough or would like to get more information on the topic.
4. Respondents often expect you to be an expert.
 Interviewers should be good at active listening.

Focus groups
Focus group: Panel of people, led by a moderator, who meet to discuss some open questions and topics.
 Special form of unstructured group interviews.
 Can offer new insights into the topic that would have remained hidden in a one-by-one conversation.
 Moderator: Uses group dynamics principles to focus or guide the group in interactions.
 Script: Guide for moderator: Introduction, directions for participants, opening question, questions to
ask if the discussion falls dead, closing words.
 Group size: 6-10 people, but smaller groups can be useful if sensitive issues are discussed.
 Group type: Homogenous groups rather than heterogeneous groups.
> Homogeneous focus groups tend to promote more intense discussion/interaction.

Advantages and disadvantages of focus groups


Advantages:
 Researcher can observe interaction between respondents.
 Detecting different views on a topic.
 Cost- and time-effective (interviewing a group of people at once).
Disadvantages:
 Requires a well-trained moderator.
 Individuals might dominate the group.
 Respondents might be reluctant to speak up or remain in their role.


Other qualitative approaches


Chapter 12 Experimentation
Experiments: Involve intervention by researcher, by manipulating IV and observing changes in DV.
 Uses questionnaires, observations (and sometimes secondary data).
 Causal studies / relationships. Three types of evidence:
1. There must be a correlation between IV and DV:
- Do IV and DV occur together in the way hypothesized?
- When IV does not occur, is there also an absence of DV?
- When there is more or less of IV, do we find more or less of DV?
2. Time order of IV and DV (IV must occur before DV).
3. Only IV influences the DV (NO EV’s!).

Advantages:
 The ability to uncover causal relationships (and manipulate the IV).
 The ability to control extraneous and environmental variables.
o Extraneous variables: Describe background characteristics of the participants (gender, age, education).
> Researcher can only control these variables through selection of participants.
o Environmental variables: Describe the situation in which the experiment takes place.
> Can be controlled by researcher, should be kept constant.
 The convenience and low costs of creating test situations (instead of searching for their appearance).
 The ability to replicate findings and thus rule out isolated or idiosyncratic results.
 The ability to exploit naturally occurring events (and to some extent field experiments).

Disadvantages:
 The artificial setting of the laboratory.
 Generalization from non-probability samples (can pose problems despite random assignment).
 Disproportionate costs in select business situations: Applications of experimentation can be expensive.
 Focus is restricted to the present and immediate future.
 Ethical issues related to the manipulation and control of human subjects.

Seven steps to make an experiment successful:


1. Select relevant variables for testing.
2. Specify the level(s) of treatment.
3. Control the extraneous and environmental factors (experimental environment).
4. Choose an experimental design (suited to the hypothesis).
5. Select and assign subjects to groups.
6. Pilot-test, revise and conduct the final test.
7. Analyse the data.

Control group: Base level. Participants are not exposed to IV manipulation.


Experimental group (treatment): Participants are exposed to IV manipulation.
Matching: Each experimental subject is matched with a control subject on every characteristic used in the study.
 When randomization is not possible.
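Randomization, the preferred alternative to matching mentioned above, can be sketched as follows: shuffle the participants and split them into experimental and control groups. The participant names are invented; the fixed seed is only there to make the illustration repeatable.

```python
# Hedged sketch of random assignment to experimental and control groups.
import random

participants = ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8"]

random.seed(7)               # fixed seed only for a repeatable illustration
random.shuffle(participants)

# Split the shuffled list in half: first half experimental, second control.
half = len(participants) // 2
experimental, control = participants[:half], participants[half:]
```

Because assignment is random, background characteristics of the subjects should, on average, be spread evenly over both groups.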

Laboratory experiments: Conducted in an unnatural setting; researchers can fully control the setting / variables.
 BUT: Participants are aware that they are participating in an experiment. > Behaviour might differ.
 The experimenter effect is problematic, as experimenter and participants interact more (than in field experiments).
Field experiments: Conducted in a natural setting; participants unaware that their behaviour is being monitored.
 More heterogeneous group: Reflects the population better than the laboratory experiment.
 No/limited control on research setting > Ability to manipulate the IV is smaller. Other: Ethical issues.


Validity in experimentation
Internal validity: If the IV has caused the change in the DV.
External validity: When the results of the experiment can be generalized to some larger population.

Seven threats to internal validity:


1. History: Events might occur during experiment, which confound/also influence DV.
2. Maturation: Changes within subject due to the passage of time (e.g. hungry, tired).
3. Testing: Learning effect. Experience of first test influences results of second test.
4. Instrumentation: Changes in observations, due to instrument or observer.
5. Selection: Groups should be equivalent in every respect (control & experimental group).
6. Statistical regression: Random fluctuations over time are problematic if groups are selected on extreme values.
> E.g. most of the time you are good-humoured, sometimes you are bad-humoured.
7. Experiment mortality: Composition of groups changes during experiment > Comparison will be distorted.

Three threats to external validity (X = experimental treatment/manipulation):


 The reactivity of testing on X: Pre-test sensitizes participants > Respond in a different way to X in post-test.
 Interaction of selection and X: When selected subjects do not properly represent the desired population.
 Other reactive factors:
- Experimental settings: Laboratory experiment can have a biasing effect on the subjects’ response to X.
- Knowledge of being a participant.
- Possible interaction between X and subject characteristics.

Experimental research design (E = experimental effect)


 True experimental design: Experimental group, control group, randomization and control.
- Post-test only control group design: R X O1 Experimental group
R O2 Control group (no X)
* Pre-tests are not necessary when randomization is possible.  E = O1 – O2
- Pre-test – post-test control group design: R O1 X O2 Experimental group
R O3 O4 Control group (no X)
* If randomization was very effective, expect that O1 and O3 are equal.  E = (O2 – O1) – (O4 – O3)
 Pre-experimental design: Experimental group only; no control group, no randomization, no control.
- One group pre-test–post-test design: O1 X O2 Experimental group
 Quasi-experimental design: Experimental group, control group (non-equivalent), no randomization, no control.
> Field experiment: Work with existing groups > No randomization, control and equivalence between groups.
- Non-equivalent control group design: O1 X O2 Experimental group
O3 O4 Control group (no X)
* Compare pre-test results (O1 – O3) to determine the degree of equivalence between groups.
- Time-series design: Repeated observations before and after the treatment.
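The effect formulas for the two true experimental designs above can be sketched in code. The O-values passed in are illustrative group means, not real data.

```python
# Hedged sketch: computing the experimental effect E for the two true
# experimental designs. Inputs are illustrative group means.

def effect_posttest_only(o1, o2):
    # Post-test-only control group design: E = O1 - O2
    return o1 - o2

def effect_pretest_posttest(o1, o2, o3, o4):
    # Pre-test / post-test control group design: E = (O2 - O1) - (O4 - O3)
    return (o2 - o1) - (o4 - o3)

# Invented example means:
e1 = effect_posttest_only(7.4, 6.1)               # experimental vs. control post-test
e2 = effect_pretest_posttest(5.0, 7.4, 5.1, 6.1)  # change in experimental group
                                                  # minus change in control group
```

The pre-test/post-test version subtracts the control group's change, so that history and maturation effects common to both groups cancel out.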

The experimental approach can also be combined with the survey approach.
Factorial surveys (= vignette research):
 Researcher presents the respondent with a brief, explicit description of a situation (description = IV); and then asks
him/her to assess the situation / make a decision (answer = DV).

Testing effect: People know what to expect because they were (pre-)tested before.
 Make sure that there’s sufficient time between the tests to reduce the testing effect.
 This is why the pre-test is often left out in social studies (prevents testing effect).
 Testing effect: O1 affects O2.
 Reactivity effect: O1 affects X.


Action research (H10)


Action research: Aims to produce socially desirable outcomes.
 Uses observations, interviews, focus groups, questionnaires, secondary data.
 Addresses real-life problems.
 Context-dependent.
 Validity: Do actions solve the problems and realize the desired change?
 Relies heavily on interaction and cooperation between researchers, participants and practitioners.

Describe what action research is about.


Action research addresses real-life problems experienced in an organisation and is bounded by its context, i.e. it
investigates a specific problem taking into account the specific circumstances in which the problem occurs. The specificity of
the approach calls for close cooperation between researchers, participants and practitioners, as the conclusions of the
research are specific suggestions that should be implemented. Action research views research and management actions as
a continuous reflective process, i.e. research informs management how a management problem could be solved, and the
results of this induced change are taken into account in the research. The major objective, and hence the criterion for the
quality of the research, is whether the research efforts solved the problem and implemented the desired change.

Advantages action research:


 Interplay between action and research to achieve desired changes.
 Implementation of the conclusion and solution to the problem.
 Builds upon the cooperation between participants and researchers.

Disadvantages action research:


 Findings are just anecdotal evidence.
 Action research is often context-dependent. > Transferring knowledge to another context is difficult/impossible.
 The essential distance of the researcher is violated.
 Limited control over the environment.

Action research process


1. Diagnostic stage: Problem identification and definition.
2. Science-based problem: Analysis, generating possible solutions to the problem.
3. Action design: Designing actions responding to the problem.
4. Action stage: Executing actions designed.
5. Assessment: Investigating the effects of the actions.
6. Learning: Assessing transferability of results.

Ethnographic studies (H10)


Ethnographic studies: Researchers immerse themselves in the lives, culture, or situation they’re studying.
- Data-collection methods: Qualitative. Participant observation, qualitative interviews, secondary data.
 Ethnographic studies start with a broad theme and get more focused over time (= funnel approach).

Elements of ethnographic studies:


 Multiple information sources (combine observations, interviews and secondary data).
 Employ different perspectives (e.g. obtain info from manager, employee, industry experts, etc.).
 Record and present different types of information (e.g. frequencies, citations, anecdotes or visualizations).


Chapter 11 Case studies


Case study: Researcher gets in-depth understanding of the case; Real-life context > Very context-dependent.
- Multiple approaches are combined and/or multiple cases are studied.
 Qualitative > Interviews (focus groups), observations, secondary data.
 Suitable for explanatory, descriptive and exploratory research.
- Follows replication logic (not sampling logic, as surveys do).
> Results are therefore not generalizable to a population, but to a theoretical proposition.
Replication logic: Same phenomenon under the same conditions; Phenomenon differs if situation differs.
- Objective: Understand a real problem and use gained insights for developing new explanations and theories.

Advantages of case study:


- Relies on multiple sources of evidence (interviews, observations and secondary data).
- Consideration of the specific, real-life context.

Disadvantages of case study:


- Not generalizable to a population.
- High risk of bias.

Single versus multiple case studies:


 Single case studies: Rely on one single case.
o If the single case study closes a longer series of case studies written by others.
o To investigate extreme or unique cases.
 Multiple case studies: Rely on several cases.
o Results are considered more robust.
o Select the best cases.

Content analysis (H10)


Content analysis: Qualitative or quantitative approach to systematically analyse texts.
 Objective: To reduce information to a manageable amount.
 Textual information is transformed into numerical data for further statistical analysis.
 Quantitative approach: Count occurrences of words/phrases and detect how far apart they are in a text.
 Qualitative approach: Detect the general meaning of a text to categorize it.

Categories of content analysis:


- Analysis of antecedents.
- Analysis of characteristics.
- Analysis of effects.

Advantages:
- Adds to transparency (it’s clear to readers what the researcher did).
- Others can take your textual information and replicate your research.
- Content analysis is unobtrusive and non-reactive.

Disadvantages:
- Quality depends on input.
- Coding procedure is subject to interpretation bias.
- Time-consuming.


The process of content analysis (How to conduct content analysis)


 Research problem.
1. Define the population of sources and selection criteria.
2. Coding procedure: Prescriptive or open analysis; Coding.
3. Coding frame: List of all codes used.

Coding: Categorizing and combining data into themes/ideas; create codes; add fragments to codes.
 All fragments that have the same code are about the same theme/idea.
 Software packages are available to automate coding (e.g. NVivo, MAXQDA).
Prescriptive analysis: Prior to searching, define words/phrases that you search in texts (create dictionary of key words).
Open analysis: Try to find the general message of the text.
Coding frame: List of all codes used.
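A prescriptive, quantitative content analysis as described above can be sketched as follows. The code dictionary and the example sentence are invented for illustration; real projects would use a much larger dictionary or a package such as NVivo or MAXQDA.

```python
# Hedged sketch of prescriptive (dictionary-based) quantitative content
# analysis: count how often predefined key words occur in a text.
import re
from collections import Counter

# Invented coding frame: code -> list of key words that map to it.
dictionary = {
    "price":   ["price", "cost", "expensive"],
    "quality": ["quality", "reliable", "durable"],
}

def code_text(text, dictionary):
    # Tokenize into lowercase words, then count key-word hits per code.
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for code, keywords in dictionary.items():
        counts[code] = sum(words.count(k) for k in keywords)
    return counts

counts = code_text("The price is high, but the quality is reliable.", dictionary)
# counts["price"] == 1, counts["quality"] == 2
```

The resulting counts per code are the numerical data that can then be analysed statistically.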

Narrative analysis (H10)


Narrative analysis: Based on stories (in-depth). Respondents must be part of the stories they tell!
 Qualitative > Interviews and secondary data (e.g. biographies).
 Allows researcher to understand phenomena from the respondent’s perspective.
 Incorporate the specific context.

Procedures in narrative analysis to get a better insight into the narrative/story:


 Structural analysis: Focuses on how the narrative/story is told.
 Thematic analysis: Focuses on the content of the narrative/story.
 Interactional analysis: Dialogue between storyteller and listener; Work together to construct the narrative/story.

Coda: Indicate which current phenomena/actions relate to the story told.


 Offer insight into the importance of the story.

Grounded theory (H10)


Grounded theory: Starts from collected data (not with a theory), and uses this data in an iterative process of coding,
categorizing and comparing to formulate a new grounded theory.
 Grounded theory is a general strategy in qualitative research.

Research in grounded theory, is an iterative three-stage process, based on two principles:


1. Open coding: All information pieces are labelled with categories.
2. Axial coding: Identify linkages between categories.
3. Selective coding: Identify main categories.
 Theoretical sampling: Which additional cases would be most useful to build and develop a theory?
> No representativeness considered.
 Theoretical saturation: Stop process if new categories and cases do not improve/add to the understanding of
the phenomena (if it is not relevant).


Criteria to assess how well a research is conducted (grounded theory):


1. Fit: How well do categories represent real incidents?
2. Relevance: How useful is the theory for practice?
3. Workability: Quality of the explanation offered and assess if the theory works.
4. Modifiability: Can the theory be adapted if new data is compared to it? (Must be flexible enough).

Advantages:
- Framework for systematic inquiry into qualitative data.
- Theory development.

Disadvantages:
- Feasibility problems (e.g. researcher can’t be free of pre-theoretical thoughts).
- Very time-consuming because of its iterative character.
- Criticized for not generating theories, but generating categorization systems.


Chapter 13 Fieldwork: questionnaires and responses (SURVEYS!)


The instrument design process includes three phases:
1. Developing the instrument design strategy: Create investigative questions.
2. Constructing and refining the measurement questions: Create measurement questions.
3. Drafting and refining the instrument: Create instrument design.

Phase 1: Developing the instrument design strategy


To plan a strategy for the survey, there are four important questions that need to be asked:
1. What type of data is needed to answer the management question?
o Nominal, ordinal, interval, ratio.
2. What communication approach will be used?
o Personal, telephone, self-administered, web-based, mixed mode.
3. Should the questions be structured (closed), unstructured (open-ended)?
 Open-ended questions: Allow participants to reply with their own choice of words/concepts.
> Problems: Frame of reference (interpretation) and getting irrelevant responses.
 Closed questions: Limit participants to a few predetermined response possibilities.
4. Should the questioning be undisguised (direct) or disguised (indirect)?
 Direct questions: Participant should be able to answer them openly and unambiguously.
 Indirect questions: Designed to provide answers through inferences from what the participant says.

Disguised (indirect): Designed to conceal the question's / survey’s true purpose. WHY?
 To avoid bias if it is about a sensitive, boring or difficult topic.
 Useful when we seek information that is available from the participant, but not at the conscious level.
 Disguising the sponsor for strategic reasons or if name influences answering behaviour.

Questionnaires (= interview schedules) contain three types of measurement questions:


 Administrative questions: Identify participant, interviewer, interview location and conditions.
 Classification questions: Goal: Categorize answers based on participant characteristics > Reveal patterns.
 Demographic, economic, sociological, geographic.
 Target questions: Address the investigative questions (most important questions!).
o Structured questions: Fixed set of choices / closed questions.
o Unstructured questions: Do not limit responses / open-ended questions.
o Combination of structured and unstructured questions.

Phase 2: Constructing and refining the measurement questions


Question construction involves three critical decision areas:
1. Question content.
2. Question wording: Shared vocabulary, unambiguous, not misleading, etc.
3. Response strategy: Offer options that include unstructured or structured response.
Unstructured response: Open ended response, free choice of words, free-response.
Structured response: Closed response, specified alternatives provided.
> Closed response strategies: Dichotomous, Multiple-choice, Checklist, Rating, Ranking.

Distinguish between questioning and response structure.


Questioning structure: The amount of structure placed on the interviewer.
Response structure: The amount of structure placed on the participant.
 Structure limits the amount of freedom when the interviewer asks the participant questions.

Different response strategies:


Free response strategy (free-response questions): Open-ended questions; participants answer in their own words.
Dichotomous response strategy (dichotomous questions): Provides only two options, often suggesting opposing responses (yes or no).
Multiple-choice strategy (multiple-choice questions): More than two alternatives, but only one answer can be chosen.
Problems when using this strategy can be:
 One or more responses have not been anticipated.
 The list of choices is not exhaustive or not mutually exclusive.
 One question can be divided into several questions.
 Order and balance of choices (do not put the correct answer first, last or in the middle; offer as many positive as negative choices).
 The scale must be unidimensional (different aspects of the same dimension).
Checklist response strategy: Like the multiple-choice strategy, but more than one answer may be chosen. Order is unimportant.
Rating response strategy (rating questions): Gradations of preference, interest or agreement; participants position each factor on a scale. Order unimportant.
Ranking response strategy: The relative order of alternatives is important (e.g. order your top 3).

Characteristics of response strategies


Dichotomous: Closed/structured; nominal data; 2 answer alternatives; 1 participant answer.
Multiple-choice: Closed/structured; nominal or ordinal data; 3-10 answer alternatives; 1 participant answer.
Rating: Closed/structured; ordinal, interval or ratio data; 3-7 answer alternatives; max. 7 participant answers.
Rank ordering: Closed/structured; ordinal data; max. 10 answer alternatives; max. 10 participant answers.
Free response: Open/unstructured; nominal or ratio data; no fixed answer alternatives; 1 participant answer.

Phase 3: Drafting and refining the instrument


Drafting and refining the instrument is a multistep process:
1. Develop the participant-screening process (personal/telephone) along with the introduction.
2. Arrange the measurement sequence:
> Branched question: If the content of the question assumes other questions have been asked and answered.
3. Prepare and insert instructions including termination, skip directions and probes.
4. Create and insert a conclusion, including a survey return statement (participation has been valuable!).
5. Pre-test specific questions and the instrument as a whole (identify problems before data collection).

Guidelines for the sequence of questions:


 Place the more interesting topical target questions early on (attention-getting & human-interest questions).
 Place personal/ego-threatening questions near the end (prior: use buffer questions).
 Place challenging questions later in the questioning process (simple > complex; general > specific = funnel approach).
 Use transition statements between the different topics of the target question set.

What could be major failures of the survey instrument design?


- Failure to develop target questions which will answer your investigative questions.
- Failure in selecting the most appropriate communication approach.
- Failure in drafting specific measurement questions (content, wording, and sequence of questions).
- Failure to screen participants who are representative of the population.
- Failure to test the instrument properly.

What are major problem assumptions made by researchers?


- Participants are motivated to answer every question truthfully and fully.
- Participants know or understand key words or phrases.
- Participants will answer the question from the same frame of reference that the instrument assumes.
- Participants will do calculations, averaging, or even diligent remembering in order to answer a question.
- Assuming that the development of good survey questions is a simple process.
Chapter 14 Measurement and scales (SURVEYS!)
Four types of data types/measurement scales:
 Nominal: Categories are mutually exclusive and collectively exhaustive (together they cover all possible choices).

 Ordinal: Nominal characteristics + indicator of order.


 Interval: Ordinal characteristics + equality of interval (distance between 1;2 = distance between 2;3).
 Ratio: Interval characteristics + there’s an absolute zero or origin (divide and multiply possible).

Types of data and their measurement characteristics:

- Nominal (classification): Basic empirical operation = determination of equality. Example: gender (male vs. female).
- Ordinal (classification + order): Determination of greater or lesser value. Example: doneness of meat (well, medium, rare), grades.
- Interval (classification + order + distance): Determination of equality of intervals or differences. Example: temperature in Celsius.
- Ratio (classification + order + distance + origin): Determination of equality of ratios. Example: age in years, profits in €.
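As an illustration of what each scale level permits, the sketch below (with invented data) shows which summary operations are meaningful at each level:

```python
import statistics

# Invented data, one variable per measurement scale.
nominal = ["male", "female", "female", "male", "female"]  # categories only
ordinal = [1, 3, 2, 2, 3]            # doneness: 1 = rare, 2 = medium, 3 = well
interval = [18.5, 21.0, 19.5, 22.0]  # temperature in Celsius: no absolute zero
ratio = [24, 31, 45, 52]             # age in years: absolute zero exists

# Nominal: only counting and the mode are meaningful.
mode_nominal = statistics.mode(nominal)       # "female"
# Ordinal: order additionally makes the median meaningful.
median_ordinal = statistics.median(ordinal)   # 2
# Interval: equal distances make the mean meaningful, but not ratios
# (40 degrees C is not "twice as warm" as 20 degrees C).
mean_interval = statistics.mean(interval)     # 20.25
# Ratio: an absolute zero makes dividing and multiplying meaningful.
oldest_vs_youngest = ratio[-1] / ratio[0]     # 52 / 24
```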

Two sources of measurement differences (potential error):


1. Systematic error: Results from a bias.
2. Random error: The remainder; occurs erratically (irregularly).

Four major error sources:


- Participant: Little knowledge (leads to guessing), temporary factors (e.g. fatigue, boredom, anxiety).
- Measurer: Suggestive questions, body language, etc.
- Situational factors: Presence of others and any condition that places a strain on the interview/session.
- Data-collection instrument: Confusing or ambiguous wording, poor selection of question items.

Three major criteria for evaluating a measurement tool:


1. Validity: Refers to the extent to which a test measures what we wish to measure.
a. Content validity.
b. Criterion-related validity.
c. Construct validity.
2. Reliability: Has to do with the accuracy and precision of a measurement procedure.
a. Stability.
b. Equivalence.
c. Internal consistency.
3. Practicality: Is concerned with a wide range of factors of economy, convenience and interpretability.

1. Validity
Two major forms of validity:
 External validity: The data’s ability to be generalized across persons, settings and times.
 Internal validity: The ability of the research instrument to measure what it is meant to measure.

a. Content validity: Does the measurement instrument cover the investigative questions? Is everything relevant included?
- Good content validity: A representative sample and an instrument that covers all relevant topics (= a subjective judgement).
- Judgemental evaluation: The researcher judges content validity through careful definition of the topic, the items and the scales.
- Panel evaluation: A panel of judges assesses content validity.

b. Criterion-related validity: Success of measures used for prediction or estimation of e.g. behaviour.
 Concurrent validity: Estimation of the present.
 Measure at one point in time. Use two different measurement instruments > Correlate?
 E.g. Cito results & judgement of teacher.
 Predictive validity: Prediction of the future.

 Measure at two points in time. Use two different measurement instruments > Correlate?
 E.g. Cito results & see over a period of time whether the student does well in assigned education level.

c. Construct validity: Success of measures identifying and representing underlying constructs.


- Sub-constructs should be sufficiently distinct from each other.

2. Reliability
Reliability: A measure is reliable to the degree that it supplies consistent results (under different times/conditions).
 If a measurement is not valid, it hardly matters if it is reliable – because the measurement instrument does not
measure what the designer needs to measure in order to solve the research problem.

a. Stability (multiple measurements)


Stability: When something is measured two times, the outcome should be similar.
- Test-retest: Compare the two measurements to learn how reliable they are (correlation).
   Not too much time apart (< 6 months), but not too little either.
   Risk: The participant learns more about the topic before the retest (= topic sensitivity).

b. Equivalence (multiple measurer)


Equivalence: When different measurers measure the same situation *, the outcomes should be similar.
 * Same conditions, instrument, etc. BUT: Different measurers.
 Test: Compare scoring of different observers of the same event (correlation).
 Improve: Use well-trained measurers.
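One simple way to quantify equivalence is the percentage agreement between two observers coding the same events. A hedged sketch with invented codings (chance-corrected measures such as Cohen's kappa would be stricter):

```python
# Two trained observers code the same ten events into categories A/B/C
# (invented data). Percentage agreement is the simplest equivalence check.
observer_1 = ["A", "B", "A", "C", "B", "A", "A", "C", "B", "A"]
observer_2 = ["A", "B", "A", "C", "A", "A", "A", "C", "B", "B"]

agreement = sum(c1 == c2 for c1, c2 in zip(observer_1, observer_2)) / len(observer_1)
print(f"agreement = {agreement:.0%}")  # 80%
```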

c. Internal consistency (multiple instrument items)


Internal consistency: Degree to which instrument items are homogeneous and reflect the same underlying construct.
- Tested via correlation, e.g. the split-half technique: correlate the scores on the two halves of the instrument.

3. Practicality
Practicality has been defined as economy, convenience and interpretability.
a. Economy:
 Limit the number of measurement questions, to limit the measurement time (and thus costs).
 Choice of data-collection method (personal interview is more expensive than online surveys).
b. Convenience: The measuring device needs to be easy to use and apply.
c. Interpretability: Relevant when people other than the test designers must interpret the results. Interpretation is made possible by providing:
 State the functions the test was designed to measure and the procedure by which it was developed.
 Detailed instructions for administration.
 Scoring keys and instructions.
 Norms for appropriate reference groups.
 Evidence about the reliability.
 Evidence regarding the inter-correlations of sub-scores.
 Evidence regarding the relationship of the test to other measures.
 Guides for test use.

Response methods
To quantify dimensions that are essentially qualitative, rating or ranking scales are used.

Rating scales: Used when variables are rated individually. There are many different rating scales:

Sample rating scales:
- Simple category scale / dichotomous scale (nominal): Two mutually exclusive response choices.
  E.g. "Have you ever been self-employed? 1. Yes 2. No"
- Multiple-choice-single-response scale (nominal): Multiple response choices, but only one answer is sought.
  E.g. "For which department are you working? a. Production b. Sales, etc."
- Multiple-choice-multiple-response scale / checklist (nominal): Multiple response choices; the rater is allowed to select one or more alternatives.
  E.g. "Check any of the sources where you would collect information about a new car: [multiple response choices]"
- Likert scale / summated scale (interval): Compares one person's score with a distribution of scores from a well-defined sample group.
  E.g. "Strongly agree / agree / neutral / disagree / strongly disagree"
- Semantic differential scale (interval): Measures the psychological meanings of an attitude object.
  E.g. "Heathrow airport: Fast __:__:__:__ Slow; High quality __:__:__:__ Low quality"
- Numerical scale (ordinal or interval): Participants write a number from the scale next to each item.
  E.g. "Very good 5 4 3 2 1 Very bad: Employees' cooperation ___ Employees' knowledge ___"
- Multiple rating list scale (interval): Similar to the numerical scale, but it accepts a circled response from the rater, and the layout permits visualization of the results.
  E.g. "Please indicate how important or unimportant each service characteristic is: Fast, reliable repair 7 6 5 4 3 2 1; Service at my location 7 6 5 4 3 2 1"
- Fixed sum scale (ratio): Discovers proportions; up to 10 categories may be used.
  E.g. "Relative importance: Subject one X, other subjects X, sum = 100"
- Stapel scale (ordinal or interval*): An alternative for the semantic differential scale when it is difficult to find bipolar adjectives (e.g. fast/slow).
  E.g. "Company name X: +3 +2 +1 Technology leader -1 -2 -3; +3 +2 +1 Exciting products -1 -2 -3"
- Graphic rating scale (ordinal, interval* or ratio*): Enables the researcher to discern fine differences, e.g. with smiley faces or a scale of how much pain you are in.
  E.g. "How likely are you to recommend X to others? Very likely |---------| Very unlikely. Place an X at the position along the line that best reflects your judgement."
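How a summated (Likert) scale is scored can be sketched as follows; the answers and the reverse-coded item are invented for illustration:

```python
# One participant's answers to a 5-item Likert scale,
# coded 1 (strongly disagree) .. 5 (strongly agree).
answers = [4, 5, 2, 4, 5]
reverse_items = {2}  # zero-based index of the negatively worded item

# Reverse-code negatively worded items (on a 1-5 scale: new = 6 - old),
# then sum all items to obtain the summated score.
recoded = [6 - a if i in reverse_items else a for i, a in enumerate(answers)]
total_score = sum(recoded)
print(recoded, total_score)  # [4, 5, 4, 4, 5] 22
```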

Errors to avoid with rating scales:


- Leniency: Occurs when a participant is either an 'easy rater' or a 'hard rater'.
  o Easy rater: Error of positive leniency. Raters give people they know a higher score.
  o Hard rater: Error of negative leniency. Acquaintances are rated lower, because the rater is aware of the tendency towards positive leniency and attempts to counteract it.
- Central tendency: Raters are reluctant to give extreme judgements.
- Halo effect: The rater introduces systematic bias by carrying a generalized impression of the subject over from one rating to the next. E.g. you may expect a student who does well on the first question of an exam to do well on the second.

Ranking scales: Compare variables and make choices among them. There are different ranking scales:

Examples of ranking scales:
- Paired-comparison scale (ordinal): Choosing between two objects. With more than two objects, this becomes a difficult task for the participant.
  E.g. "Choose per question the most favourable answer: 1. X or Y  2. X or Z  3. Y or Z"
- Forced ranking scale (ordinal): Lists attributes that are ranked relative to each other. The number of stimuli is limited.
  E.g. "Rank 'case' in order of preference (1, 2, 3): ___X ___Y ___Z"
- Comparative scale (ordinal): Ideal for comparison, if the participant is familiar with the standard.
  E.g. "Compared to 'case', the 'characteristic' of 'case' is: Superior 1 2 3 4 5 Inferior (with 'Same' in the middle)"
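Paired-comparison responses are typically turned into an overall ranking by counting how often each object wins its pair. A small sketch with invented choices:

```python
from collections import Counter

# Winner of each judged pair, pooled over participants (invented data).
pair_winners = ["X", "X", "Y", "X", "Z", "Y", "X", "Y"]

wins = Counter(pair_winners)                      # X: 4, Y: 3, Z: 1
ranking = [obj for obj, _ in wins.most_common()]  # most preferred first
print(ranking)  # ['X', 'Y', 'Z']
```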

Measurement scale construction: Five techniques:


1. Arbitrary: Custom-designed scale (subjective > Only content validity!).
2. Consensus: Panel of judges evaluate the items (relevance, ambiguity) > Time-consuming.
3. Item analysis: Measurement scales are pre-tested with a sample of participants (popular: Likert scale).
4. Cumulative: If one agrees with an extreme item (C), one will also agree with less extreme items (A & B).
 Time-consuming.
5. Factoring: Correlate items (from other studies) to detect their relationship (popular: Semantic differential).


Chapter 18 Hypothesis testing


Only learn this part, not the rest of chapter 18!

Testing for statistical significance:


1. State H0.
2. Choose statistical test. > Based on research design.
3. Select the desired level of significance, e.g. α = 0.05 (reject H0 if p < 0.05).
4. Compute the calculated difference value. > E.g. t, chi-square, etc.
5. Obtain the critical value.
6. Interpret the test.

H0: No difference/relationship.
H1: Difference/relationship.
 In research, you want to reject H0 and thereby reinforce H1 (NOT prove it!).
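The six testing steps can be walked through in code. This sketch uses a permutation test on invented data (a distribution-free alternative to, say, a t-test), so it needs nothing beyond the standard library:

```python
import random
import statistics

# Invented data: satisfaction scores under two conditions.
group_a = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9]
group_b = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8]

# Step 1: H0 = no difference in mean score between the two groups.
observed = statistics.mean(group_a) - statistics.mean(group_b)

# Steps 2-5: the chosen test here is a permutation test. Build the null
# distribution by reshuffling the group labels and count how often a
# difference at least as extreme as the observed one occurs (two-sided).
random.seed(42)  # fixed seed for reproducibility
pooled = group_a + group_b
n = len(group_a)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / trials

# Step 6: interpret against the chosen significance level (alpha = 0.05).
reject_h0 = p_value < 0.05
print(f"p = {p_value:.4f}, reject H0: {reject_h0}")
```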

Type I error: A true H0 is rejected, i.e. H1 is falsely accepted (false positive).

Type II error: A false H0 is accepted, i.e. the true H1 is falsely rejected (false negative).

How to select a test:


1. What does the test involve?
- One sample.
- Two samples.
- k samples (more than 2).
2. If two or k samples; are they independent (not related) or dependent (related)?
 Dependent: If results come from the same participant or if the same group is measured twice.
 Independent: Two groups will be tested once, separately.
3. Is the DV nominal, ordinal or scale (interval or ratio)?
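The three selection questions can be sketched as a lookup table. The cells filled in below are common textbook pairings, not necessarily the exact cells of the exam overview, so treat them as an assumption and check them against the overview provided:

```python
# Map (number of samples, related?, DV level) to a common statistical test.
# Only a few familiar cells are filled in; the exam overview holds more.
TEST_TABLE = {
    (1, None, "nominal"): "chi-square goodness-of-fit test",
    (2, False, "scale"): "independent-samples t-test",
    (2, True, "scale"): "paired-samples t-test",
    (2, False, "ordinal"): "Mann-Whitney U test",
    (2, True, "ordinal"): "Wilcoxon signed-rank test",
    ("k", False, "scale"): "one-way ANOVA",
    ("k", False, "ordinal"): "Kruskal-Wallis test",
}

def choose_test(n_samples, related, dv_level):
    """n_samples: 1, 2 or 'k'; related: True/False (None for one sample);
    dv_level: 'nominal', 'ordinal' or 'scale' (interval/ratio)."""
    return TEST_TABLE.get((n_samples, related, dv_level), "see the overview")

print(choose_test(2, False, "scale"))  # independent-samples t-test
```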

An overview PPT will be provided during the exam. You should be able to indicate the right cell within that overview based on the situation provided. (Formulas will NOT be asked; you do NOT have to calculate these things!)


General

Data-collection methods:
- Interviews
- Questionnaires
- Focus groups
- Observations
- Secondary data

Other qualitative approaches:
- Experiments
- Action research
- Case studies
- Ethnographic research
- Content analysis
- Narrative analysis
- Grounded theory

Methods used in qualitative research:


- Case studies.
- Qualitative interviews: Semi-/unstructured / focus group.
- Observations: Participant observations.
 The advantage of qualitative research is the possibility to combine various methods.
 Methods available to combine with interviews and observations:
 Content analysis.
 Narrative analysis.
 Other methods that represent more general frameworks:
 Ethnographic studies.
 Action research.
 Grounded theory.

Experiment & Action research:


 Are very similar
 Differences:
o Experiments try to remove the influence of context, action research does not (= less controlled and less
generalizable).
o Action research is pragmatic, while experiments try to add to existing theories.

Experiments are quantitative. They use:
 Questionnaires and observations (and sometimes secondary data).

Action research:
 Observations, secondary data, interviews, questionnaires, focus groups (everything).

Case studies (qualitative):


 Interviews (focus groups), observations, documents and archives (secondary data).

Ethnography is very similar to case studies:


 Interviews, observation, secondary data.

Content analysis, narrative analysis and grounded theory:


 Interview, focus groups, observation, secondary data.

Important for exam:


 Be able to explain, provide examples and name some advantages/disadvantages.
 The seven steps of whatever will NOT be asked!
