SV BRM Exam
What is research? Why should there be any question about the definition of research?
Research is a systematic enquiry with the objective of providing information capable of solving the research problem. This
definition is rather general and fits all types of research. Questions regarding the definition of research often arise as
research can have various purposes, in particular we distinguish between reporting, descriptive, explanatory and predictive
research. Depending on the purpose, the kind of information to be obtained differs.
Applied research: Has a practical problem-solving emphasis. It is conducted to solve or provide answers to a real
management or business problem.
Pure/basic research: Provides answers to questions of a theoretical nature. It is less motivated by business
considerations, but more by academic considerations.
Downloaded by Dareen Fahmawi (dareenfhmawi@gmail.com)
lOMoARcPSD|24065562
Positivism vs. interpretivism

Basic principles:
- View of the world: Positivism: the world is external and objective. Interpretivism: the world is socially constructed and subjective.
- Involvement of researcher: Positivism: the researcher is independent. Interpretivism: the researcher is part of what is observed and sometimes even actively collaborates.
- Researcher's influence: Positivism: research is value-free. Interpretivism: research is driven by human interests.

Assumptions:
- What is observed? Positivism: objective, often quantitative, facts. Interpretivism: subjective interpretations of meanings.
- How is knowledge developed? Positivism: by reducing phenomena to simple elements representing general laws. Interpretivism: by taking a broad and total view of phenomena to detect explanations beyond the current knowledge (look at the totality).
- Type of study: Positivism: quantitative. Interpretivism: qualitative.
Scientific research: A process that combines induction, deduction, observation and hypothesis testing
into a set of reflective thinking activities.
Combining induction and deduction (‘double movement of reflecting thoughts’ – John Dewey):
Building blocks of research: Concepts, constructs, definitions, variables, propositions, hypotheses, theories, models.
Concepts and constructs are used at the theoretical level; variables are used at the empirical level.
Construct: E.g. presentation skills > social skills, self-confidence, body language, knowledge of the subject, etc.
- More abstract.
- Intangible.
- Specifically developed for research purposes.
- Can combine multiple concepts or constructs.
Proposition: A statement about concepts that may be judged as true or false if it refers to observable phenomena.
Hypothesis: When a proposition is formulated for empirical research.
- Describes the relationship among variables.
Generalization: If the hypothesis is based on more than one case.
The virtue of a hypothesis is that it limits what will be studied.
What position would you take and why? ‘Theory is impractical and thus no good’ or ‘Good theory is the most practical
approach to problems’.
The question addresses the problem of the use of theories. Common to all theories is that they give a simplified
representation of reality. The usefulness of a theory must therefore be assessed along the tension between its abstract,
often simplified picture of reality and the complexity of reality itself. On the other hand, it must be
considered that only theories allow us to provide real explanations, i.e. explanations that are reasoned and apply to more
than just one specific phenomenon.
Research design:
Research design: The blueprint for fulfilling objectives and answering questions.
The strategy for a study and the plan by which the strategy is to be carried out.
It specifies the methods and procedures for the collection, measurement and analysis of data.
1. Design strategy: Type, purpose, scope, time frame, environment.
2. Sampling design: Identify the target population and select the sample (to represent that population).
3. Data-collection design: Methods of data collection and measurement instruments used.
4. Pilot-testing: A test conducted to detect weaknesses in the study's research design.
Some questions are answered by research and others are not. Distinguish between them.
One cannot investigate research problems that are essentially value questions or that are ill-defined, and one should be careful
when engaging in politically motivated research. Examples of value questions are: whether a company should close a
certain production facility because it is currently unprofitable or whether it should bear the losses for a longer time. The
answer to this question cannot be found by research, as it is mainly dependent on the values you hold. An example for an
ill-defined management problem is: How could we become more profitable? It is certainly possible to investigate drivers for
profitability, but not all drivers can be investigated. Thus, you would need to limit the research problem on the drivers for
profitability.
Critical review: A review of a book or paper, either prior to or after publication.
Mention weak and strong points and discuss these using specific criteria.
Overall verdict on the text and some explanations for it.
Treatment of participants: Research must be designed so that a respondent does not suffer physical harm, discomfort, pain,
embarrassment or loss of privacy. Researchers should follow three guidelines:
1. Explain the benefits of the study.
2. Explain the participant’s rights and protection.
3. Obtain informed consent.
Deception: Occurs when the participant is told only a part of the truth or when the truth is fully compromised.
> Reasons for deception:
1. To prevent biasing participants before the survey or experiment.
2. To protect the confidentiality of a third party (e.g. sponsor).
The benefits of deception should be balanced against the risks to participants.
In case of deception, the participant must be debriefed.
Informed consent: Fully disclosing the procedures of the research design before
requesting permission to proceed with the study.
1. Introduce yourself.
2. Briefly describe survey topic.
3. Describe target sample.
4. Tell who the sponsor is.
5. Describe the purpose of the research.
6. Give an estimate of the time required to complete the interview.
7. Promise anonymity and confidentiality (when appropriate).
8. Tell that participation is voluntary.
9. Tell that item non-response is acceptable.
10. Ask permission to begin.
11. * Conclusion: Give information on how to contact the principal investigator.
Debriefing participants: Involves several activities that follow data collection:
- Explanation of any deception.
- Description of the hypothesis, goal or purpose of the study.
- Post-study sharing of results.
- Post-study follow-up medical or psychological attention.
Protect confidentiality:
- Obtaining signed non-disclosure documents.
- Restricting access to participant identification.
- Revealing participant information only with written consent.
- Restricting access to data instruments where the participant is identified.
- Non-disclosure of data sub-sets.
Non-disclosure: Refusal to reveal information.
Data collection in cyberspace: The virtual world (World Wide Web). Also applicable to data-mining:
- Researchers are obliged to protect human subjects and ‘do right’.
- Cyber-research is particularly vulnerable to ethical breaches.
E.g. blurring between public and private venues, difficulty in obtaining informed consent, etc.
- That an action is permitted or not precluded by policy/law does not mean that it is ethical or allowable.
- Inquiry must be done honestly and with ethical integrity.
The quality of any research study does not so much depend on whether it is qualitative or quantitative, but rather it
depends on the quality of its design and how well it is conducted.
What are similarities between experimental and ex-post facto research designs?
Both designs try to show IV-DV relationship, or causal relationships, basically by:
1. Studying co-variation patterns between variables.
2. Determining time order relationships.
3. Attempting to eliminate the confounding effects of other variables on the IV-DV relationship.
They often use the same data collection methods.
6. Topical scope.
- Statistical study: Designed for breadth, rather than depth.
Capture population's characteristics by conclusions from a sample's characteristics.
Census study: Based on the whole population (special case).
- Case study: Designed for depth, rather than breadth.
8. Participant's perceptions.
When participants believe that something out of the ordinary is happening, they may behave less naturally.
There are three levels of perception:
1. Participants perceive no deviations from everyday routines.
2. Participants perceive deviations, but as unrelated to the research.
3. Participants perceive deviations as researcher-induced (example: mystery shopper).
Each cell gets its own value. Afterwards you can compare the scores of the different Zuyd faculties.

Unit of analysis        | Number of incoming students | Number of graduating students | FTE in staff
International Business  |                             |                               |
HMSM                    |                             |                               |
Facility management     |                             |                               |
Sampling: Draw conclusions about the entire population, by selecting some elements in a population.
Population element: = Unit of study. The subject on which the measurement is being taken.
Population: The total collection of elements about which we wish to make some inferences. E.g. 4000 files.
Census: Obtain information from all elements within the population (e.g. from all 4000 files).
Representative samples are only a concern in quantitative studies rooted in a positivistic research approach. Qualitative
studies rooted in interpretivism usually do not attempt to generalize their findings to a population.
What are the characteristics of accuracy and precision for measuring sample validity?
What makes a good sample?
Validity and representativity of sample: How well it represents the characteristics of its population.
Representativity of a sample depends on accuracy and precision.
- Accuracy: Degree to which bias and systematic variance are absent from the sample.
o Some sample elements will underestimate the population values, others overestimate these values.
Variations in these values compensate each other > Sample value close to population value.
o The less bias and systematic variance, the greater the accuracy.
- Precision (of estimate): Degree of sampling standard error (error variance). Must be within acceptable
limits for the study’s purpose.
o The smaller the standard error of estimate, the greater the precision of the sample.
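The standard error of estimate can be computed from sample data alone. A minimal Python sketch (illustrative only, not part of the course material; the sample values are made up):

```python
import statistics

def standard_error(sample):
    """Standard error of the mean: sample std dev divided by sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

sample = [4, 8, 6, 5, 7, 6, 5, 9, 4, 6]
print(round(standard_error(sample), 3))  # 0.516 - the smaller, the greater the precision
```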
What are the six questions (steps) that must be answered to develop a probability sample/sampling plan?
1. What is the relevant population?
Who or what do you want to investigate?
2. What are the parameters of interest? Variables.
- Population parameters: Summary descriptors of variables in the population. E.g. mean, variance.
- Sample statistics: Summary descriptors of variables in the sample.
Sample statistics are used as estimators of population parameters.
A parameter is a value of a population, while a statistic is a similar value based on sample data.
E.g. the population mean is a parameter, while a sample mean is a statistic.
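The parameter/statistic distinction can be illustrated in a short Python sketch (the population values are hypothetical, not from the course material):

```python
import random
import statistics

random.seed(42)
population = list(range(1, 101))        # hypothetical population of 100 values
mu = statistics.mean(population)        # parameter: the population mean (50.5)

sample = random.sample(population, 20)  # probability sample of n = 20
x_bar = statistics.mean(sample)         # statistic: the sample mean, used to estimate mu

print(mu, x_bar)
```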
3. What is the sampling frame? Complete list of population elements from which the sample is drawn.
Without a sampling frame, a probability sample cannot exist!
4. What is the type of sample? Probability versus non-probability sample.
5. What sample size is needed?
- The greater the variance within the population, the larger the sample must be to provide precision.
- The greater the desired precision, the larger the sample must be.
- The greater the number of sub-groups of interest within a sample, the greater the sample size must be.
- If sample size exceeds 5% of the population, sample size may be reduced without sacrificing precision.
6. How much will it cost? Influences sample size and type, and the data-collection method.
What are the two main categories of sampling techniques and their varieties?
The types of sampling design are determined by the representation basis and the element-selection technique.
1. Representation basis technique:
Probability sampling: Based on random selection.
- Each population element has a chance of being selected (sampling frame).
- Only probability samples provide precision (maximum precision and accuracy).
Non-probability sampling: Not based on random selection and is subjective.
- Not every population element has a chance of being selected (no sampling frame).
- When probability sampling is not feasible (no sampling frame), or time and money budget is limited.
- Produces selection bias and non-representative samples.
2. Element selection technique:
Unrestricted sampling: Each sample element is drawn from the (complete) population.
Restricted sampling: Covers all other forms of sampling. Selection process follows complex rules.
Simple random sample: Simplest type of probability approach. Each member of the population has an equal
chance of being included in the sample.
Give everyone a number, let your computer select numbers or use bingo/numbers in a hat.
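The "numbers in a hat" idea in a minimal Python sketch (illustrative only; the population size is made up):

```python
import random

population = list(range(1, 31))         # e.g. 30 numbered population elements
random.seed(1)                          # reproducible draw
sample = random.sample(population, 10)  # each element has an equal chance
print(sorted(sample))
```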
1. Systematic sampling
To draw a systematic sample you need to follow the following steps:
1. Identify the total number of elements in the population. (E.g. N = 30).
2. Determine the desired sample size. (E.g. n = 10).
3. Identify the sampling ratio k. (E.g. k = N / n = 30 / 10 = 3).
4. Identify the random start (drop a pencil (eyes closed) on your population list, see where the dot is). (E.g. = 2).
5. Draw a sample by choosing every kth entry. (E.g. all green marked people).
Use in combination with other designs to minimize bias.
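The five steps above can be sketched in Python (illustrative only; the function name and example values are made up, and the random start replaces the pencil drop):

```python
import random

def systematic_sample(population, n):
    """Every k-th element after a random start, with k = N // n."""
    k = len(population) // n         # step 3: sampling ratio
    start = random.randrange(k)      # step 4: random start in [0, k)
    return population[start::k][:n]  # step 5: choose every k-th entry

random.seed(3)
people = [f"person-{i}" for i in range(1, 31)]  # steps 1-2: N = 30
print(systematic_sample(people, 10))            # n = 10, so k = 3
```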
2. Stratified sampling
Most populations can be separated into several mutually exclusive sub-populations (or strata). E.g. gender, education.
Stratified sampling: The process by which the sample must include elements from each stratum.
Proportionate versus disproportionate sampling (deciding how to allocate a total sample among various strata):
Proportionate stratified sampling: Each stratum is properly represented so that the sample drawn from it, is
proportionate to the stratum's share of the total population.
Disproportionate stratified sampling: Any stratification that differs from the proportionate stratified sampling.
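Proportionate allocation can be sketched as follows (illustrative Python, not part of the course material; the strata and sizes are hypothetical):

```python
import random

def proportionate_stratified_sample(strata, total_n):
    """strata: dict of stratum name -> list of elements. Each stratum's
    share of the sample matches its share of the total population."""
    N = sum(len(elements) for elements in strata.values())
    sample = []
    for elements in strata.values():
        n_h = round(total_n * len(elements) / N)  # proportionate allocation
        sample += random.sample(elements, n_h)
    return sample

random.seed(7)
strata = {"male": list(range(600)), "female": list(range(600, 1000))}
s = proportionate_stratified_sample(strata, 100)
print(len(s))  # 100: 60 drawn from the 600 males, 40 from the 400 females
```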
3. Cluster sampling
Cluster sampling: Population is divided into groups of elements, some groups are randomly selected for study.
Area sampling: Populations that can be identified with a geographic area (most important form of clusters).
Why clustering (advantages)? - Less expensive than simple random sampling.
- Also possible without sampling frame.
Disadvantage: Lower statistical efficiency (more error) as groups are homogeneous.
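A single-stage cluster draw might look like this (illustrative Python with hypothetical geographic clusters):

```python
import random

def cluster_sample(clusters, n_clusters):
    """Single-stage cluster sampling: randomly pick whole clusters
    and then study every element inside the chosen clusters."""
    chosen = random.sample(list(clusters), n_clusters)
    return [element for name in chosen for element in clusters[name]]

random.seed(5)
# hypothetical geographic clusters (area sampling)
areas = {
    "north": ["n1", "n2", "n3"],
    "south": ["s1", "s2"],
    "east":  ["e1", "e2", "e3", "e4"],
    "west":  ["w1", "w2"],
}
print(cluster_sample(areas, 2))
```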
Design: In designing cluster samples (incl. area sample) we must answer several questions:
1. How homogeneous are the clusters?
2. Shall we seek equal or unequal clusters? Looking at the cluster sizes.
3. How large a cluster shall we take? No size is superior.
4. Shall we use a single-stage or multi-stage cluster?
5. How large a sample is needed? Depends on the cluster design, e.g. simple cluster sampling.
Purposive sampling: Attempt to secure a sample that conforms to some predetermined criteria. There are three types:
1. Judgement sampling: The researcher uses his judgement to select elements that conform to some criterion.
Used in exploratory studies.
2. Quota sampling: Used to improve representativeness by using general characteristics of the population.
E.g. gender (50% male and 50% female), race, age, income level, employment status, political party.
3. Snowball sampling: Individuals are selected and are used to locate others who possess similar characteristics.
Useful if you want to sample subjects that are difficult to identify.
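Quota sampling's non-random, first-come-first-served character can be sketched in Python (illustrative only; the names and quotas are made up):

```python
def quota_sample(candidates, quotas):
    """Non-probability quota sampling: accept candidates in arrival order
    until each group quota (e.g. 50% male / 50% female) is filled."""
    counts = {group: 0 for group in quotas}
    sample = []
    for person, group in candidates:
        if counts.get(group, 0) < quotas.get(group, 0):
            counts[group] += 1
            sample.append(person)
    return sample

candidates = [("Ann", "female"), ("Bob", "male"), ("Cem", "male"),
              ("Dia", "female"), ("Eli", "male"), ("Fay", "female")]
print(quota_sample(candidates, {"male": 2, "female": 2}))
# ['Ann', 'Bob', 'Cem', 'Dia']
```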
In assessing the usefulness of secondary data, you need to address the following questions:
1. Information quality: Is the information provided in SD sufficient to answer your research problem?
a. Do the secondary data cover all the information you need?
b. Is the information available detailed enough?
c. Do the data follow the definitions you apply in your research problem?
d. Are the data accurate enough? (Evaluate their source).
2. Sample quality: Do the secondary data address the same population you want to investigate?
a. Do the secondary data refer to the unit of analysis you want to investigate?
b. Is the sample on which the data are based a good representation of the population?
3. Timeliness of data: Were the secondary data collected in the relevant time period? (not out of date).
What is data-mining?
Data-mining: Uncovering knowledge, identifying patterns in data and predicting trends and behaviours from data in
databases stored in data warehouses.
Organizations collect a tremendous amount of information and record it in databases on a daily basis.
With data-mining one searches for valuable information within these large databases (often internal data).
Big data: Large amounts of data from the use of the web, mobile phones and customer, credit and debit cards.
- Opportunity, but: Privacy, security and intellectual property rights. Validity, reliability and completeness of info.
Participant observation
Participant observation: Qualitative observation approach. More flexible and less structured.
Observer participates and dives into the participant’s world to gain insight in, and explore explanations for a
phenomenon.
Two dimensions:
- Whether the observer actively participates.
> No distance; can influence participant’s behaviour.
> The more distant you are as an observer, the more descriptive your observations are.
- Whether the observer is concealed or not. Whether the participant knows that he is being observed.
> A concealed observer reduces bias, but there are ethical issues to concealment.
Field notes: Primary data-collection tool (participant observations). Four principles lead to a higher validity:
1. Direct notes in keywords.
2. Immediate full notes after you leave the setting.
3. Limit observation moment (time you are at the setting).
4. Rich full notes (very complete, everything you noticed).
Structured observation
Structured observation: Produces data suitable for quantitative analysis.
Two dimensions: Direct vs. indirect observation; Concealed vs. not concealed observation.
- Direct obs: When the observer is physically present and personally monitors what takes place.
- Indirect obs: When the recording is done by mechanical, photographic or electronic means.
- Concealed obs: The participant is not aware of the observer’s presence (> Ethics).
- Not concealed: The participant is aware of the observer’s presence (> Method reactivity bias).
Conduct structured observations by using a checklist > Quantifying what is observed.
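Quantifying a checklist amounts to tallying codes. A minimal Python sketch (the behaviour codes are hypothetical, not from the course material):

```python
from collections import Counter

# hypothetical behaviour codes ticked off on a checklist during one session
observations = ["greets_customer", "eye_contact", "greets_customer",
                "interrupts", "eye_contact", "greets_customer"]

tallies = Counter(observations)    # quantifies what is observed
print(tallies["greets_customer"])  # 3
```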
What can you observe with structured observations? Behaviour and non-behaviour.
Behavioural observation: Observing behaviour.
- Non-verbal: Body movement, motor expressions, exchanged glances, etc.
- Linguistic: Interaction, transfer of information, annoying sounds/words (ah, uh).
- Extra-linguistic: Vocal, temporal, interactional and verbal stylistic behaviour.
- Spatial analysis: How people physically relate to others (distance maintained between each other).
Non-behavioural observation: Not observing behaviour, but records, conditions and processes.
- Record: Analysing historical or current data, public or private data.
- Physical condition: Analysing the conditions of something. E.g. plant safety compliances, inventory.
- Physical process: Analysing the process of something. E.g. manufacturing process, traffic flows.
How can you measure structured observations? Factual vs. inferential, physical traces.
Factual observation: Describes what is happening and what can be seen.
E.g. time and day of the week, environmental factors, product presented, etc.
Inferential observation: Translates what is seen to a concept that cannot be observed.
E.g. Credibility, interest, acceptance, concerns, effectiveness, customer acceptance of product, etc.
Observation of physical traces: Observing measures of wear and measures of deposit.
- Measures of wear: E.g. estimating library book use by looking at the number of torn pages in a book.
- Measures of deposit: E.g. estimating alcohol consumption by collecting and analysing domestic rubbish.
Qualitative interviews: Interviews are usually semi-structured or unstructured. Memory list/intv. guide.
- Unstructured interviews: No specific question/topic list to be covered; mental list of relevant topics.
Flexible and might take another course than originally expected.
Researcher wants to gain insight into what the respondents consider relevant and his interpretations.
- Semi-structured interviews: Question/topic list to be covered; ask questions similarly during all interviews.
Start with specific questions, but allow the interviewee to follow his/her thoughts later on.
Probing techniques are often used. E.g. TV interview journalist with a political decision-maker.
Survey/questionnaire
Questionnaire/survey: Collecting quantitative information through structured questioning.
Non-response error: Responses of participants systematically differ from responses of non-participants. Researcher:
1. Cannot locate the person to be studied.
2. Is unsuccessful in encouraging that person to participate.
Solutions to reduce non-response errors:
Response error: When the data reported differ from the actual data (mostly with personal interviews).
Participant-initiated error: Occurs when the participant fails to answer fully and accurately.
Interviewer error: When the interviewer's control of the process affects the quality of data.
- Failure to secure full participant cooperation.
- Failure to consistently execute interview procedures.
- Failure to establish an appropriate interview environment.
- Falsification of individual answers or whole interviews (cheating).
- Inappropriate influencing behaviour.
- Failure to record answers accurately and completely.
- Physical presence bias (e.g. young vs. old people).
Probing: Technique of stimulating participants to answer more fully and relevantly. Probing styles:
Qualitative interviews
An interview guide is important when conducting semi-/unstructured interviews, the main functions are:
- Memory list to ensure that the same issues are covered (in every interview).
- Memory list to ensure that the questions are asked in the same way.
> Increases the comparability of multiple interviews.
The more specific the interview guide, the more structured the interview will be, and the less flexible the
interviewer is in responding to the respondents.
Information recording:
- Unstructured interviews can be conducted by two interviewers, but are usually recorded by tape or digitally.
> Advantages: Focus on the conversation (instead of making notes) and you can listen to it again.
> Disadvantages: - People feel uncomfortable; this influences their answering behaviour.
- Technical problems can disturb the interview.
- Transcribing the information recorded is very time-consuming.
What are the demands on the interviewer in an unstructured interview, and why are experts needed?
1. Background information.
2. To be able to direct the interview.
3. To decide whether you’ve heard enough or would like to get more information on the topic.
4. Respondents often expect you to be an expert.
Interviewers should be good at active listening.
Focus groups
Focus group: Panel of people, led by a moderator, who meet to discuss some open questions and topics.
Special form of unstructured group interviews.
Can offer new insights into the topic that would have remained hidden in a one-on-one conversation.
Moderator: Uses group dynamics principles to focus or guide the group in interactions.
Script: Guide for moderator: Introduction, directions for participants, opening question, questions to
ask if the discussion falls dead, closing words.
Group size: 6-10 people, but smaller groups can be useful if sensitive issues are discussed.
Group type: Homogenous groups rather than heterogeneous groups.
> Homogeneous focus groups tend to promote more intense discussion/interaction.
Advantages:
The ability to uncover causal relationships (and manipulate the IV).
The ability to control extraneous and environmental variables.
o Extraneous variables: Describe background characteristics of the participants (gender, age, education).
> Researcher can only control these variables through selection of participants.
o Environmental variables: Describe the situation in which the experiment takes place.
> Can be controlled by researcher, should be kept constant.
The convenience and low costs of creating test situations (instead of searching for their appearance).
The ability to replicate findings and thus rule out isolated or idiosyncratic results.
The ability to exploit naturally occurring events (and to some extent field experiments).
Disadvantages:
The artificial setting of the laboratory.
Generalization from non-probability samples (can pose problems despite random assignment).
Disproportionate costs in select business situations: Applications of experimentation can be expensive.
Focus is restricted to the present and immediate future.
Ethical issues related to the manipulation and control of human subjects.
Laboratory experiments: Conducted in an unnatural setting; researchers can fully control the setting / variables.
BUT: Participants are aware that they are participating in an experiment. > Behaviour might differ.
The experimental effect is problematic, as experimenter and participants interact more (than field exp.).
Field experiments: Conducted in a natural setting; participants unaware that their behaviour is being monitored.
More heterogeneous group: Reflects the population better than the laboratory experiment.
No/limited control on research setting > Ability to manipulate the IV is smaller. Other: Ethical issues.
Validity in experimentation
Internal validity: If the IV has caused the change in the DV.
External validity: When the results of the experiment can be generalized to some larger population.
The experimental approach can also be combined with the survey approach.
Factorial surveys: = Vignette research.
Researcher presents the respondent with a brief, explicit description of a situation (description = IV); and then asks
him/her to assess the situation / make a decision (answer = DV).
Testing effect: People know what to expect because they were (pre-)tested before.
Make sure that there’s sufficient time between the tests to reduce the testing effect.
This is why the pre-test is often left out in social studies (prevents testing effect).
Testing effect: O1 affects O2.
Reactivity effect: O1 affects X.
Advantages:
- Adds to transparency (it’s clear to readers what the researcher did).
- Others can take your textual information and replicate your research.
- Content analysis is unobtrusive and non-reactive.
Disadvantages:
- Quality depends on input.
- Coding procedure is subject to interpretation bias.
- Time-consuming.
Coding: Categorizing and combining data into themes/ideas; create codes; add fragments to codes.
All fragments that have the same code are about the same theme/idea.
Software packages are available to automate coding (e.g. NVivo, MAXQDA).
Prescriptive analysis: Prior to searching, define words/phrases that you search in texts (create dictionary of key words).
Open analysis: Try to find the general message of the text.
Coding frame: List of all codes used.
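A prescriptive (dictionary-based) analysis can be sketched in Python (illustrative only; the key-word dictionary and text are made up):

```python
import re

# hypothetical dictionary of key words per code
dictionary = {"positive": ["good", "excellent", "happy"],
              "negative": ["bad", "poor", "unhappy"]}

text = "The service was good, the staff excellent, but the food was poor."
words = re.findall(r"[a-z]+", text.lower())

# count how often each code's key words occur in the text
counts = {code: sum(words.count(w) for w in key_words)
          for code, key_words in dictionary.items()}
print(counts)  # {'positive': 2, 'negative': 1}
```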
Advantages:
- Framework for systematic inquiry into qualitative data.
- Theory development.
Disadvantages:
- Feasibility problems (e.g. researcher can’t be free of pre-theoretical thoughts).
- Very time-consuming because of its iterative character.
- Criticized for not generating theories, but generating categorization systems.
Disguised (indirect): Designed to conceal the question's / survey’s true purpose. WHY?
To avoid bias if it is about a sensitive, boring or difficult topic.
Useful when we seek information that is available from the participant, but not at the conscious level.
Disguising the sponsor for strategic reasons or if name influences answering behaviour.
1. Validity
Two major forms of validity:
External validity: The data’s ability to be generalized across persons, settings and times.
Internal validity: The ability of the research instrument to measure what it is meant to measure.
a. Content validity: Does the measurement instrument cover the investigative questions? Is all included? (Content).
Good content validity: Representative sample and instrument covers all relevant topics (= subjective).
Judgemental evaluation: The researcher judges content validity by defining the topic, items and scales.
Panel evaluation: Panel of people/judges judge content validity.
b. Criterion-related validity: Success of measures used for prediction or estimation of e.g. behaviour.
Concurrent validity: Estimation of the present.
Measure at one point in time. Use two different measurement instruments > Correlate?
E.g. Cito results & judgement of teacher.
Predictive validity: Prediction of the future.
Measure at two points in time. Use two different measurement instruments > Correlate?
E.g. Cito results & see over a period of time whether the student does well in assigned education level.
2. Reliability
Reliability: A measure is reliable to the degree that it supplies consistent results (under different times/conditions).
If a measurement is not valid, it hardly matters if it is reliable – because the measurement instrument does not
measure what the designer needs to measure in order to solve the research problem.
3. Practicality
Practicality has been defined as economy, convenience and interpretability.
a. Economy:
Limit the number of measurement questions, to limit the measurement time (and thus costs).
Choice of data-collection method (personal interview is more expensive than online surveys).
b. Convenience: The measuring device needs to be easy to use and apply.
c. Interpretability: When people other than test designers must interpret the results. Make interpretation possible:
State the functions the test was designed to measure and the procedure by which it was developed.
Detailed instructions for administration.
Scoring keys and instructions.
Norms for appropriate reference groups.
Evidence about the reliability.
Evidence regarding the inter-correlations of sub-scores.
Evidence regarding the relationship of the test to other measures.
Guides for test use.
Response methods
To quantify dimensions that are essentially qualitative, rating or ranking scales are used.
Rating scales: When variables are individually rated. There are many different sample rating scales:
Examples of rating scales
- Numerical scale (ordinal or interval): Participants write a number from the scale next to each item.
  E.g. Very good 5 4 3 2 1 Very bad
       Employees' cooperation _____  Employees' knowledge _____
- Multiple rating list scale (interval): Similar to the numerical scale, but it accepts circled responses from the rater, and the layout permits visualization of results.
  E.g. Please indicate how important or unimportant each service characteristic is:
       Fast reliable repair 7 6 5 4 3 2 1  Service at my location 7 6 5 4 3 2 1
- Fixed sum scale (ratio): Discovers proportions; up to 10 categories may be used.
  E.g. Relative importance: Subject one X, Other subjects X, Sum 100
- Stapel scale (ordinal or interval*): Alternative to the semantic differential scale, when it is difficult to find bipolar adjectives (e.g. fast, slow).
  E.g. Company name X: +3 +2 +1 Technology leader -1 -2 -3; +3 +2 +1 Exciting products -1 -2 -3
- Graphic rating scale (ordinal, interval* or ratio*): Enables the researcher to discern fine differences, e.g. with smiley faces or a scale of how much pain you are in.
  E.g. How likely are you to recommend X to others? Very likely |---------------------------| Very unlikely. Place an X at the position along the line that best reflects your judgement.
Ranking scales: Compare variables and make choices among them. There are different sample ranking scales:

Examples of ranking scales
- Paired-comparison scale (ordinal): Choosing between two objects. When there are more than two objects, this becomes a difficult task for the participant.
  E.g. Choose per question the most favourable answer: 1. X or Y  2. X or Z  3. Y or Z
- Forced ranking scale (ordinal): Lists attributes that are ranked relative to each other. The number of stimuli is limited.
  E.g. Rank 'case' in order of preference (1, 2, 3): _____ X _____ Y _____ Z
- Comparative scale (ordinal): Ideal for comparison, if the participant is
  E.g. Compared to 'case', the 'characteristic' of 'case' is:
H0: No difference/relationship.
H1: Difference/relationship.
In research, you want to reject H0 and thereby reinforce H1 (NOT PROVE it!).
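The reject-H0 logic can be illustrated with a simple permutation test in Python (illustrative only; the scores are made up, and this particular test is not part of the exam material):

```python
import random
import statistics

def permutation_test(a, b, n_iter=5000):
    """H0: no difference in group means. Shuffle the pooled data and see
    how often a difference at least as large as the observed one occurs."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter            # p-value

random.seed(0)
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]    # hypothetical scores, group A
group_b = [6.0, 6.2, 5.9, 6.1, 6.3]    # hypothetical scores, group B
p = permutation_test(group_a, group_b)
print(p < 0.05)  # True: reject H0 (a difference this large is unlikely by chance)
```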
An overview PPT will be given during the exam. You should be able to identify the right cell within that overview based on the
situation provided. (Formulas will NOT be asked; you do NOT have to calculate these things!)
General

Data-collection methods:
- Interviews
- Questionnaires
- Focus groups
- Observations
- Secondary data

Other qualitative approaches:
- Experiments
- Action research
- Case studies
- Ethnographic research
- Content analysis
- Narrative analysis
- Grounded theory
Action research:
Observations, secondary data, interviews, questionnaires, focus groups (everything).