
RESEARCH METHODS:

Lesson 2:
Quantitative Research Method
PRACTICE OF MEDICINE
MPOA020
QUANTITATIVE RESEARCH
What is research?

Research is the systematic collection, analysis and interpretation of data to answer a certain question or solve a problem.
What is quantitative research?

Quantitative research is the numerical representation and manipulation of observations for the purpose of describing and explaining the phenomena that those observations reflect.

Cohen (1980): quantitative research is defined as social research that employs empirical methods and empirical statements.
What is quantitative research?
Creswell (1994): quantitative research is a type of research that explains phenomena by collecting numerical data that are analyzed using mathematically based methods (in particular statistics).
What is quantitative research?
The first element is explaining phenomena. This is a key element of all research, be it quantitative or qualitative. When we set out to do some research, we are always looking to explain something.
Second element: in quantitative research we collect numerical data.
Third element: analysis using mathematically based methods. In order to be able to use mathematically based methods (statistics), data have to be in numerical form.
Quantitative Research
While it is important to use the right data analysis tools, it is even more important to use the right research design and data collection instruments.
In quantitative research the aim is to determine the relationship between one thing (an independent variable) and another (a dependent or outcome variable) in a population.
Types of quantitative studies

Observational
In an observational study, no attempt is made to change behaviour or conditions; you measure things as they are.

Experimental
In an experimental study you take measurements, apply some sort of intervention, then take measurements again to see what happened (outcomes).
Types of studies
1. Observational studies
◦ Descriptive
  - Cross-sectional study
  - Case report
  - Ecological study
◦ Analytical
  - Cohort study
  - Case-control study
2. Experimental studies
◦ Randomized control trial
◦ Non-randomized control trial
Observational studies
Descriptive study: describes the occurrence, frequency and distribution (by person, place and time) of diseases or events within a population.
Cross-sectional study: a survey in which measurements are made as a single observation, like a snapshot. Both exposure and outcome are measured at the same time.
Case report: usually a report of an unusual mode of presentation of a disease.
Ecological study: looks for an association between exposure and outcome in populations rather than individuals.
Observational studies
Analytical study: has a control or comparison group(s); it is used to investigate causal factors.
Cohort study: a prospective study in which the investigator compares the occurrence of disease in a group of individuals exposed to the suspected risk factor with another group of individuals who are not exposed.
Case-control study: a retrospective study in which a group of affected persons is compared with a suitably matched control group of non-affected persons.
Cohort vs case-control designs
Experimental studies
This involves studies in which one group, which is deliberately subjected to an experience, is compared with a control group which has not had a similar experience.

Randomized control trial: subjects are allocated into groups by randomization. This means subjects are allocated to the treatment and control groups by chance.

Non-randomized control trial: also called a quasi-experimental study. Allocation of subjects into study and control groups is done without randomization.
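As a simple illustration of allocation by chance, the sketch below (Python, with hypothetical subject IDs) shuffles the subjects and splits them into treatment and control groups at random; it is a simplified sketch, not a complete trial randomization procedure.

# A minimal sketch of simple randomization for a trial; the subject IDs are hypothetical.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subject IDs
random.shuffle(subjects)                        # ordering determined by chance

half = len(subjects) // 2
treatment_group = subjects[:half]
control_group = subjects[half:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)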
Randomized controlled trial (RCT)
Analytical study design in which study participants are randomly assigned to exposure by the researcher and then followed up to measure the outcome.
• This exposure is controlled by the researcher.
• The exposure is usually an “intervention”.
“Intervention”
• Treatment / interventional procedures
• Prevention strategies
• Screening programmes
• Diagnostic tests
Experimental Studies
• Also known as longitudinal studies or interventions, because you do more than just observe the subjects.
• In the simplest experiment, a time series, one or more measurements are taken on all subjects before and after a treatment.
• A special case of the time series is the so-called single-subject design, in which measurements are taken repeatedly (e.g., 10 times) before and after an intervention on one subject (an n=1 study).
Experimental Studies
Time series
Time series suffer from a major problem: any change you see could be due to something other than the treatment.

Crossover design
The crossover design is one solution to this problem. Normally the subjects are given two treatments, one being the real treatment, the other a control or placebo.
Half the subjects receive the real treatment first, the other half the control first. After a period of time sufficient to allow any treatment effect to wash out, the two groups are crossed over.
Experimental studies
• If the subjects are blind (or masked) to the identity of the treatment, the design is a single-blind controlled trial.
• Blinding of subjects eliminates the placebo effect, whereby people react differently to a treatment if they think it is in some way special.
• In a double-blind study, the researcher (physician) also does not know which treatment the subjects receive until all measurements are taken.
Confounding variables
Confounding variables are variables with a significant effect on the dependent variable that the researcher failed to control or eliminate, sometimes because the researcher is not aware of the effect of the confounding variable.
The key is to identify possible confounding variables and somehow try to eliminate or control for them.
CONFOUNDING
Confounders are those factors (Factor A) which are related to both the exposure (Factor B) and the outcome (Disease C) under study, i.e.
◦ Factor A is a risk factor for outcome C
◦ Factor A is associated with Factor B
Coggon’s example of age being a confounder when comparing crude mortality rates between 2 towns:
◦ Old age (Factor A) is a risk factor for mortality (outcome C)
◦ Old age (Factor A) is associated with a town that is a popular retirement destination (Factor B)
◦ Thus the fact that the town that is popular for retirees has a higher crude mortality rate does not mean that living in that town is a risk factor for mortality
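The sketch below illustrates this numerically with entirely hypothetical populations and death counts: the retirement town has a much higher crude mortality rate, yet its age-specific rates are no worse, because age (the confounder) drives the crude comparison.

# Hypothetical (population, deaths) by age group for two towns; assumed numbers only.
towns = {
    "Retirement town": {"young": (20_000, 20), "old": (30_000, 600)},
    "Working town":    {"young": (40_000, 44), "old": (10_000, 210)},
}

for town, strata in towns.items():
    deaths = sum(d for _, d in strata.values())
    population = sum(p for p, _ in strata.values())
    crude_rate = 1000 * deaths / population
    # Age-specific rates remove the effect of the towns' different age structures
    age_specific = {age: round(1000 * d / p, 1) for age, (p, d) in strata.items()}
    print(f"{town}: crude {crude_rate:.1f}/1000, age-specific {age_specific}")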
Results
• To test a hypothesis, quantitative research uses significance tests to determine which hypothesis is right. The significance test can show whether the null hypothesis is more likely correct than the research hypothesis.
• Research methodology in a number of areas, like the social sciences, depends heavily on significance tests.
• The t-test (also called Student's t-test) is one of many statistical significance tests; it compares two supposedly equal sets of data to see if they really are alike or not.
• The t-test helps the researcher conclude whether a hypothesis is supported or not.
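A minimal sketch of an independent-samples t-test in Python, assuming the scipy library is available; the two groups of measurements are hypothetical.

from scipy import stats

treatment = [5.1, 4.8, 6.2, 5.9, 5.4, 6.0, 5.7]  # hypothetical outcome values, treatment group
control   = [4.2, 4.9, 4.5, 5.0, 4.4, 4.7, 4.3]  # hypothetical outcome values, control group

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# By convention, p < 0.05 would be reported as a statistically significant difference.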
Drawing conclusions
• Drawing a conclusion is based on several factors of the research process, not just because the researcher got the expected result. It has to be based on the validity and reliability of the measurement.
• The observations are often referred to as empirical evidence, and the logic/thinking leads to the conclusions. Anyone should be able to check the observations and logic, to see if they also reach the same conclusions.
• Errors of observation may stem from measurement problems, misinterpretations, unlikely random events etc.
• A common error is to think that correlation implies a causal relationship. This is not necessarily true.
Generalization
• Generalization is the extent to which the research and the conclusions of the research apply to the real world.
• It is not always so that good research will reflect the real world, since we can only measure a small portion of the population at a time.
Validity and reliability
• Validity: this is the degree to which a test measures what it is supposed to measure. Validity refers to what degree the research reflects the given research problem.
• Reliability: the ability of a measure to produce a repeatable result depends on its reliability. Reliability refers to how consistent a set of measurements is.
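As a simple illustration of reliability, the sketch below (hypothetical scores, numpy assumed available) checks test-retest consistency by correlating two measurements taken on the same subjects.

import numpy as np

first_measurement  = [12.1, 15.3, 9.8, 11.0, 14.2, 10.5]   # hypothetical scores, occasion 1
second_measurement = [12.4, 15.0, 10.1, 11.3, 13.9, 10.8]  # same subjects, occasion 2

r = np.corrcoef(first_measurement, second_measurement)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")  # values near 1 suggest consistent measurement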
MEASUREMENT ERROR AND BIAS
BIAS IN QUANTITATIVE STUDIES
• Bias is a systematic error in the design, conduct, or analysis of a study that results in errors when calculating measures of association between outcomes and risk factors
• It is thus always one-sided, either underestimating or overestimating risk
• It is not usually a subjective error, and is often unavoidable, but must be recognized as a limitation of the study to allow for correct interpretation
BIAS IN QUANTITATIVE STUDIES
Selection bias:
• Occurs when study participants are not representative of the target population
e.g. pregnant women attending antenatal clinics do not represent all pregnant women, only those attending antenatal clinics.

Information bias:
• Recall bias may result because those affected by a disease often recall their exposures better than those who are unaffected
• Non-response leads to a lack of information on non-responders. Thus if the non-response rate is high, the study may be severely biased. Sometimes called volunteer bias.

Misclassification bias:
• Cases are misclassified as controls or vice versa, or
• Exposed misclassified as unexposed or vice versa
• Results from inaccuracies and unreliability in measurement
• Can be addressed by ensuring validity (accuracy) and reliability (repeatability) of tests
Methods for controlling for bias
• Randomization
• Restricted study eligibility of patients
• Matching of patients in one study group to those with similar characteristics in the comparison group
• Stratification, i.e. comparison of rates within groups of individuals who have the same values for the confounding variable
• Adjustment by standardization or by using multiple linear and logistic regression (see the sketch below)
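A minimal sketch of regression adjustment, assuming the pandas and statsmodels libraries are available; the data frame, variable names and values are hypothetical, with age included in the model as a potential confounder so that the exposure coefficient is adjusted for age.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome":  [1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0],   # hypothetical disease status (0/1)
    "exposure": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0],   # hypothetical exposure (0/1)
    "age":      [70, 35, 68, 40, 72, 55, 30, 65, 75, 33, 60, 45],
})

model = smf.logit("outcome ~ exposure + age", data=df).fit(disp=False)
print(model.params)  # the exposure coefficient is now adjusted for age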
Confidence interval (CI)
• The CI is the interval or range within which the ‘true’ statistic (mean, proportion, correlation coefficient, relative risk) is believed to be found in a given population with known probability.
• The rationale for calculating CIs is that a single value (or point estimate) from one study is likely to be inaccurate.
• Conventional confidence limits are 90%, 95% and 99%
• A 99% CI will be wider than a 95% CI
• If the 95% CI of a difference contains 0, then the results are not significantly different between the 2 groups
• For RR or odds ratio, if the CI contains 1, then the results are not significantly different
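A minimal sketch of computing a 95% CI for a proportion with the normal approximation; the counts (120 events out of 400 participants) are hypothetical, and 1.96 is the critical z value for 95% confidence.

import math

events, n = 120, 400
p_hat = events / n                           # point estimate of the proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)      # standard error of the proportion
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Proportion = {p_hat:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")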
Hypothesis testing
• The null hypothesis underlies all statistical tests. It states that there is no difference between the samples or populations being compared, and that any difference observed is simply by chance
• Null hypothesis (H0): P1 – P2 = 0, or P1 = P2
• Alternative hypothesis (HA): P1 – P2 ≠ 0, or P1 ≠ P2
Statistical significance
• The purpose of significance testing is to assess how strong the evidence is for a difference between one group and another, or whether random occurrence could reasonably explain the difference
• Significance level (α): conventional significance levels are 5%, 1% and 0.1%. The smaller the value, the less likely it is that the difference occurred by chance
• Critical level (p-value): the likelihood or probability of observing a result by chance is known as the p-value, and it could be 0.05 (5%), 0.01 (1%) or even 0.001 (0.1%)
• It is the probability that a given difference is observed in a study sample statistic (mean or proportion) when in reality such a difference does not exist in the population
• Conventionally, a difference is said to be statistically significant if the corresponding p-value is <0.05
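A minimal sketch of testing H0: P1 = P2 with a two-proportion z-test, assuming the statsmodels library is available; the event counts (45/200 versus 28/200) are hypothetical.

from statsmodels.stats.proportion import proportions_ztest

events = [45, 28]   # hypothetical number of events in each group
n = [200, 200]      # number of subjects in each group

z_stat, p_value = proportions_ztest(events, n)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# p < 0.05: reject H0 and call the difference statistically significant;
# p >= 0.05: the data are compatible with no difference between P1 and P2.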
Type I and II errors
• There are two types of errors when drawing conclusions in research:
• Type I error (α): the probability of rejecting the null hypothesis when it is in fact true, i.e. saying there is a difference when really there is none. The risk of a false-positive result.
• Type II error (β): the probability of accepting the null hypothesis when it is in fact false, i.e. saying there is no difference when really there is a difference. The risk of a false-negative result.
• Power of the test to detect a difference = (1 – β)
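A minimal sketch of a power/sample-size calculation, assuming the statsmodels library is available; the effect size of 0.5, α = 0.05 and target power of 0.8 (i.e. β = 0.2) are assumed values for a two-sample t-test.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")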
