Lernzettel-Grundlagen Der Empirischen Forschung
VL1
Positivism = new knowledge is generated by finding evidence for it (creating hypotheses that
can be tested)
- Search for data that prove the hypothesis
- Facts (verified data) are the only source of knowledge
- Verification
- Problems:
  - Selective approach: one looks only for confirmation
  - Hypotheses that cannot be tested are irrelevant
  - Many phenomena that are not visible or "concrete" are still real (e.g., demand
    relationships, corporate reputation, customer satisfaction)
Falsification approach (Falsifikationsansatz) = "you can never confirm a thesis, you can only
falsify it"
- A scientific theory must be potentially falsifiable; if it is not falsifiable, it is not
  testable/verifiable
- Falsification = refutation of a scientific statement by a counterexample
- Problem: many sources of error in the measurement method and in hypothesis generation
- Asymmetry: multiple verifications do not establish a theory any more firmly than a single
  verification, but a single falsification overturns a theory.
Karl Popper
Empirical research
Quantitative research = confirmatory & deductive
Qualitative research = exploratory & inductive
VL2
Step 3: Research Designs: Data Collection Methods, Data Sources and Types
Benefits:
- Less effort
- Saves time
- Low cost (it depends...)
- Sometimes more accurate (e.g., data on competitors)
- Some information can be obtained only from secondary data (e.g., data on the entire
  population or on the past)
Limitations:
- Collected for some other purpose
- No control over data collection
- May not be very accurate
- May not be reported in the required form
  - Different unit of measurement (e.g., individual, family, or household data)
  - Different definitions, classifications (e.g., age, income)
- May be outdated
- May not meet data requirements
- A number of assumptions have to be made
Primary Research: Data Collection Methods and Data Types
Qualitative interview:
- Types: narrative, unstructured, in-depth, semi-structured
- Features:
  - Personal interview is the most established form of qualitative research
  - Unstandardized, open-ended questions
  - Many applications
- Benefits:
  - Interview with a single individual allows addressing personal and sensitive topics
  - Very flexible
Focus group:
- Types: mini group (dyad, triad), focus group (5-12 people), super group (> 12 people)
- Features:
  - Extension of the qualitative interview to groups
  - Moderator controls the group discussion
  - Many applications
- Benefits:
  - Discussion among participants
  - Analysis of group dynamics and opinion-building processes
  - Multiple viewpoints (if the group is heterogeneous)
  - Sessions can be longer compared to single interviews
  - Very cost-efficient form of (qualitative) research
Survey
Reflective or formative?
VL 5
Accuracy of Measures
1. Objectivity = Measurements must be independent of researchers
- Administration
- Scoring
- Interpretation
Measures of Reliability
a) Test-Retest Reliability
- Administer the same test to the same (or a similar) sample on two different
occasions.
- Assumption: No substantial change in the construct being measured.
- The correlation between the two observations is the test-retest reliability.
b) Parallel-Forms Reliability
- Create two forms and administer both instruments to the same sample of people.
(Example: an analogue and a digital scale)
- Correlation between the two parallel forms is the estimate of reliability.
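Both test-retest and parallel-forms reliability reduce to a correlation between two sets of scores. A minimal sketch in Python; the scores for the two hypothetical forms are invented for illustration:

```python
import numpy as np

# Hypothetical scores of the same respondents on two parallel forms
# (e.g., an analogue and a digital scale); data invented for illustration.
form_a = np.array([12.0, 15.0, 14.0, 10.0, 18.0, 16.0, 11.0, 13.0])
form_b = np.array([11.5, 15.5, 13.0, 10.5, 17.0, 16.5, 11.0, 14.0])

# The reliability estimate is simply the Pearson correlation
# between the two sets of observations.
r = np.corrcoef(form_a, form_b)[0, 1]
print(round(r, 3))
```

The same computation with the second administration of the *same* test would give the test-retest reliability.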
c) Internal Consistency Reliability
- A single measurement instrument is administered to the same sample on one
occasion.
- Reliability is estimated by a comparison of how well the items (measuring the
same construct) yield similar results.
i) Split-Half Reliability
- Randomly divide all items that purport to measure the same construct into
two sets.
- Split-half reliability estimate is the correlation between the total score for
each randomly divided half.
ii) Cronbach's Alpha
- α is mathematically equivalent to the average of all possible split-half
estimates.
- It is the most frequently used estimate of internal consistency.
- Values above 0.7 are desirable
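Cronbach's alpha can be computed directly from the item variances and the variance of the total score: α = k/(k−1) · (1 − Σs²ᵢ / s²ₜₒₜₐₗ). A minimal sketch with invented item responses:

```python
import numpy as np

# Hypothetical item responses (rows = respondents, columns = items measuring
# the same construct); the numbers are invented for illustration.
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)      # variance of each single item
    total_var = x.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 3))  # here well above the desirable threshold of 0.7
```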
Problem of Validity
a) Researchers usually relied on traditional scale development procedures:
− Only “reflective” world view
− Strict emphasis on internal-consistency reliability (coefficient alpha)
b) Anomalous results:
− Deletion of conceptually necessary items in the pursuit of factorial
unidimensionality
− The addition of unnecessary and often conceptually inappropriate items to obtain a
high alpha
VL 6
Modeling
Mediator
4. A phenomenon (where?)
Deduction-Induction-Wheel
VL8
Hypothesis Testing
- The test statistic often follows a well-known distribution, such as the normal, t,
or chi-square distribution.
- In statistical tests, we directly control for the type I error by setting a maximum
α-level
- Increasing the sample size reduces the standard error and thus narrows the
CI
- All else being equal, if you increase the sample size excessively, even slight or
marginal effects become statistically significant.
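The sample-size effect in the last point can be illustrated with a simple two-sided z-test (known σ; all numbers invented): the same small mean difference is non-significant with n = 100 but highly significant with n = 10,000.

```python
import math

def z_test_p(sample_mean, mu0, sigma, n):
    """Two-sided p-value of a z-test with known sigma: p = erfc(|z| / sqrt(2))."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# Same tiny observed effect (mean 100.1 vs. H0: mu = 100, sigma = 1) ...
p_small_n = z_test_p(100.1, 100, 1, n=100)     # z = 1.0 -> not significant
p_large_n = z_test_p(100.1, 100, 1, n=10_000)  # z = 10.0 -> highly significant
print(round(p_small_n, 3), p_large_n < 0.001)
```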
Steps 6 and 7: Compare the p-value with α or the Critical Value with the Test
Statistic and Make the Decision
t-Distribution
Confidence Interval
Definition: A confidence interval is the range that is expected to contain the
true population parameter at a given level of confidence
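For a mean with known σ, a 95% z-based confidence interval is mean ± 1.96 · σ/√n. A minimal sketch with invented numbers:

```python
import math

# 95% confidence interval for a mean with known standard deviation
# (z-based; the numbers are invented for illustration).
sample_mean, sigma, n = 50.0, 8.0, 64
z = 1.96                   # two-sided 95% quantile of the standard normal
se = sigma / math.sqrt(n)  # standard error; shrinks as n grows, narrowing the CI
ci = (sample_mean - z * se, sample_mean + z * se)
print(round(ci[0], 2), round(ci[1], 2))  # 48.04 51.96
```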
VL09
Causality
Causality = relation between an event (the cause) and a second event (the effect), where
the second event is understood as a consequence of the first.
Necessary condition: If the first object had not been, the second never had existed
- Causality implies correlation (but not the other way round).
- Temporal precedence: an object A causes an object B only if A precedes B (mostly correct,
but: the barometer falls before the rain starts without causing it)
- Absence of other plausible causal agents
- Thorough and rigorous theoretic reasoning
- Causality can be examined only in a controlled experiment by manipulation of a cause
variable and measurement of a (posterior) effect
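That correlation does not imply causation can be simulated: a hidden common cause (like the falling air pressure behind both the barometer reading and the rain) produces a strong correlation between two variables with no causal link between them. The data below are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hidden common cause z drives both x and y; x does NOT cause y,
# yet the two are strongly correlated.
z = rng.normal(size=10_000)            # e.g., falling air pressure
x = z + 0.3 * rng.normal(size=10_000)  # e.g., barometer reading drops
y = z + 0.3 * rng.normal(size=10_000)  # e.g., rain starts

r = np.corrcoef(x, y)[0, 1]
print(r > 0.8)  # strong correlation despite no causal link x -> y
```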
Causes of Correlation
Classes of Experiments
1. Laboratory experiments
- Treatment is introduced in an artificial or laboratory setting.
- The variance of all (or nearly all) of the possible influential independent variables not
pertinent to the immediate problem of the investigation is kept to a minimum.
- The laboratory experiment tends to be artificial
- Internal validity is high.
2. Field experiments
- Realistic situation in which one or more independent variables are manipulated by the
experimenter under conditions as carefully controlled as the situation permits.
- The respondents usually are not aware that an experiment is being conducted; thus,
the response tends to be natural.
- External validity is high.
Manipulation Check
Definition: A manipulation check is a test used to determine the effectiveness of a
manipulation in an experimental design.
2. Maturation: Processes within subjects which act as a function of the passage of time, e.g., if
the project lasts a few years, most participants may improve their performance regardless of
treatment
4. Instrumentation: Changes in the instrument, observers, or scorers which may produce changes in
outcomes
5. Statistical Regression: It is also known as regression to the mean. This threat is caused by the
selection of subjects on the basis of extreme scores or characteristics. (“Give me forty worst
students and I guarantee that they will show immediate improvement right after my treatment.”)
7. Experimental mortality: Subjects drop out of a study (at rates that are different from subjects in
a control or comparison group)
8. Selection of subjects: Biases which may result in selection of comparison groups. Random
assignment of group membership limits this threat. However, when the sample size is small,
randomization may fail. Sometimes random assignment is not possible due to self-selection (e.g.,
see the Simpson Paradox on the following slides).
9. Resentful demoralization: During the experiment the control group becomes more and more
resentful and demoralized because of a missing treatment and perceived inequalities which
eventually affects their motivation and performance much more than the treatment effect alone.
10. Compensatory rivalry/equalization: The control group perceives an inequity compared to the
experimental group and tries to offset this inequity (which was not anticipated by the researcher)
11. Treatment diffusion: The control group is aware of the treatment condition and tries to anticipate
the reaction of the experimental group and adapts its own behavior accordingly.
12. John Henry and Hawthorne effect: John Henry was a worker who outperformed a machine
under an experimental setting because he was aware that his performance was compared with that
of a machine. Generally, subjects may change their behavior due to changes in the environment
(e.g., presence of an observer) rather than due to the nature of the change (i.e., the actual
treatment).
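Statistical regression (threat 5) is easy to demonstrate by simulation: select the "forty worst students" on a noisy test and remeasure them without any treatment, and their average "improves" purely by regression to the mean. The data below are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# True ability is constant; each test score = ability + independent noise.
ability = rng.normal(100, 10, size=10_000)
test1 = ability + rng.normal(0, 10, size=10_000)
test2 = ability + rng.normal(0, 10, size=10_000)

# Select the "forty worst students" on test 1 and remeasure with no treatment.
worst = np.argsort(test1)[:40]
improvement = test2[worst].mean() - test1[worst].mean()
print(improvement > 0)  # scores move back toward the mean on their own
```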
3. Low validity of measurements: Measures do not measure what they are supposed to
measure (e.g., IQ test does not test intelligence but whether subjects show exam nerves)
4. Sample selection bias: The sample is not representative of the population
VL10
Basics
- Analysis of the effect of one or multiple nominal independent variables
(single or combined) on one metric (interval-scaled) dependent
variable
- Most important method for evaluation of experiments (e.g.,
differences between experimental and control group)
- Essentially, ANOVA is used as a test of means of two or
more populations
ANalysis Of VAriance
Why ANOVA?
- The research questions could be answered with several pairwise comparisons or t-tests,
but...
- ...the number of comparisons grows very rapidly: with 10 groups, we would have to
carry out 45 pairwise comparisons
- ...the overall chance of at least one type I error would increase: with 10 groups and a
significance level of α=0.05 it would be 0.901 (α-inflation trap)
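The numbers in the last two points can be verified directly: 10 groups yield C(10, 2) = 45 pairwise comparisons, and the familywise type I error is 1 − (1 − α)^45:

```python
from math import comb

groups, alpha = 10, 0.05
pairwise = comb(groups, 2)                # number of pairwise comparisons
familywise = 1 - (1 - alpha) ** pairwise  # chance of at least one type I error
print(pairwise, round(familywise, 3))     # 45 0.901
```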
Types of ANOVA
- One-way ANOVA: examine mean differences between more than two groups, the
variable indicating the groups is referred to as the factor
- MANOVA, ANCOVA,...
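A one-way ANOVA decomposes the total variation into a between-group and a within-group part and compares them via the F statistic. A minimal sketch with invented data for three groups (e.g., a control group and two treatment groups):

```python
import numpy as np

# Invented observations for three experimental groups.
groups = [
    np.array([5.0, 6.0, 7.0, 6.0, 5.0]),  # control
    np.array([8.0, 9.0, 7.0, 8.0, 9.0]),  # treatment A
    np.array([6.0, 7.0, 6.0, 8.0, 7.0]),  # treatment B
]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# Decompose total variation into between-group and within-group sums of squares.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1          # k - 1 = 2
df_within = len(all_obs) - len(groups)  # N - k = 12
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))  # well above the 5% critical value F(2, 12) of about 3.89
```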
ANOVA: Possible Extensions
- Multiple factors and unequally filled cells: unequal numbers of observations in
the respective cells. The principle of the analysis of variance persists; single observations
are weighted.
- Multiple tests: Possibility to compare single paired mean values or linear combinations
of mean values
- Multivariate Analysis of Variance: model with more than one dependent and
several independent variables (application of the General Linear Model).
- Multiple Classification Analysis: estimation of the strength of the main effects (in
contrast, ANOVA only asserts whether the factor levels differ in their effects!).