Research Methodology & Biostatistics

MRM-301T
Dr. Uttam Kumar Mandal
Associate Professor
MRSPTU
Bathinda

Unit I:
General Research Methodology
Topics to be covered
• General Research Methodology: Research,
objectives, requirements, practical difficulties,
review of literature, study design, types of
studies, strategies to eliminate errors/bias,
controls, randomization, crossover design,
placebo, blinding techniques.
What is Research??
“Research is to see what everybody
else has seen, and to think what
nobody else has thought”
- Albert Szent-Gyorgyi
What is Research?
“Research is a
systematized effort
to gain new knowledge”.
Research comprises defining and redefining
problems, formulating hypothesis or suggested
solutions; collecting, organizing and evaluating
data; making deductions and reaching
conclusions; and at last carefully testing the
conclusions to determine whether they fit the
formulating hypothesis.
---Clifford Woody
Objectives of Research
• To solve a specific problem
• To gain familiarity with a phenomenon or to achieve
new insights into it
• To portray accurately the characteristics of a particular
individual, situation or a group
• To determine the frequency with which something
occurs or with which it is associated with something else
• To test a hypothesis of a causal relationship between
variables

The main aim of research is to find out the truth which is hidden and which has not been discovered as yet.
Motivation in research
• Desire to get a research degree along with its consequential
benefits;
• Desire to face the challenge in solving the unsolved problems,
i.e., concern over practical problems initiates research;
• Desire to get intellectual joy of doing some creative work;
• Desire to be of service to society;
• Desire to get respectability.
• Other factors: directives of government, employment
conditions, curiosity about new things, desire to understand
causal relationships, social thinking and awakening.
Requirements

• User’s intentions
• Context
• Knowledge
• Skills
• Experience
Steps in Research
1. Identify and prioritize health problems
2. Collect review of literature/Situation Analysis
3. Decide aims & objectives
4. Planning Methodology including study design
5. Execution
6. Compilation, Classification & Presentation of data
7. Analysis
8. Test of Significance/Test of Hypothesis
9. Inferences
10. Report Writing
11. Dissemination of Report
Practical difficulties
Types of Research
• Exploratory Research
• Descriptive Research
• Analytical Research
• Causal or Experimental Research
Exploratory Research Design
• The exploratory research design is used to increase the
analyst's familiarity with the problem under investigation.
This is particularly useful when the researcher is new to the
area, or when the problem is of a different type.
Exploratory Research
• The objective or goal is “unclear” or the problem is
“badly understood”
• Requires strong skills and deep observations
• A patient suffers from breathing problems in COVID-19 ... why?
• The sale is dropping ... why?
• Why do people spend more money on consumable
items than on long-term investments?
Exploratory Research Design
This design is followed to realize following
purposes:
• Clarifying concepts and defining problem
• Formulating problem for more precise
investigation
• Increasing researcher’s familiarity with
problem
• Establishing priorities for further investigation
Descriptive Research Design
• Mostly used in social science and business research
• Usually involves surveys & comparisons
• The major purpose of descriptive research is description of
the state of affairs as it exists at present
• The researcher has no control over the variables; he can
only report what has happened or what is happening
Descriptive Research
• The objective or goal is “clear” and “structured”
• Descriptive research endeavors to describe and
encourage us to learn by engaging the question of
what?
• Running an ice-cream business - Alaska or
Miami?
• frequency of shopping
• Preferences of people
• or similar data
Analytical Research
• The researcher has to use facts or information
already available, and analyze these to make a
critical evaluation of the material
• The analytical research aim is to analyze the
data by answering the question of why?
• Why has the value of the Indian rupee fallen against
major world currencies (USD, Euro, or British pound)
in recent years?
Causal or Experimental
• Causal research design deals with determining
cause and effect relationship.
• In causal research design, an attempt is made to
measure the impact of manipulating independent
variables (like price, products, advertising and selling
efforts, or marketing strategies in general) on
dependent variables (like sales volume, profits,
brand image, and brand loyalty).
Types of Research
• Applied Research
• Fundamental research
• Quantitative research
• Qualitative research
• Conceptual Research
• Empirical Research
Applied Research
• Applied research aims at finding a solution for
an immediate problem facing a society or an
industrial/business organization
• Central aim of applied research is to discover a
solution for some pressing practical problem
Fundamental Research
• Fundamental research is mainly concerned
with generalisations and with the formulation
of a theory.
• Gathering knowledge for knowledge’s sake is
termed ‘pure’ or ‘basic’ research.
• Research concerning some natural
phenomenon or relating to pure mathematics
are examples of fundamental research
Quantitative Research
• Quantitative research is based on the measurement of
quantity or amount.
• It is applicable to phenomena that can be
expressed in terms of quantity.
Qualitative Research
• Qualitative research is used when we are interested in
investigating the reasons for human behaviour (i.e., why
people think or do certain things)
• This type of research aims at discovering the
underlying motives and desires, using in depth
interviews for the purpose.
Qualitative Research
• Common qualitative research techniques include
focus groups, interviews, and observation.
• When the conversations are summarized into coded
responses, the data is converted from purely qualitative
data into quantitative data that can be summarized in
charts and graphs.
• Examples:
• Survey conducted to understand the mentality of a
citizen behind voting for a particular candidate in
Assembly Election and Lok Sabha Election.
Conceptual Research
• Conceptual research is that related to some abstract
idea(s) or theory.
• It is generally used by philosophers and thinkers to
develop new concepts or to reinterpret existing ones.
• Sir Isaac Newton observed his surroundings to
conceptualize and develop theories about gravitation
and motion.
• Copernicus used conceptual research to come up with
the concepts about stellar constellations based on his
observations of the universe.
Empirical Research
• It can also be called experimental research.
• It is data-based research, coming up with conclusions
which are capable of being verified by observation or
experiment.
• In such research, the researcher must first provide
himself with a working hypothesis or guess as to the
probable results.
• He then works to get enough facts (data) to prove or
disprove his hypothesis.
• Characterised by the experimenter’s control over the
variables under study and his deliberate manipulation of
one of them to study its effects.
Types of Research
Other types of Research
• One time research / longitudinal research
• Field setting research /Laboratory
Research/Simulation research
• Clinical /diagnostic research
• Historical Research
• Decision-oriented / result oriented research
Research Design
• The word ‘design’ has various meanings.
• It is the blueprint of an architect’s work.
• The research design is similar to broad plan or
model that states how the entire research
project would be conducted.
Research Design
• A research design is a broad plan that states
objectives of research project and provides the
guidelines what is to be done to realize those
objectives.
• It is, in other words, a master plan for
executing a research project.
Research Design
• It specifies objectives, data collection and
analysis methods, time, costs, responsibility,
probable outcomes, and actions.
Study Design
A study design is a specific plan or protocol for conducting the
study, which allows the investigator to translate the conceptual
hypothesis into an operational one.
Hypothesis
• The hypothesis is a conjectural (imaginary,
speculative, or abstract) statement about the
relationship between two or more variables.
• Naturally, in the initial stage of the study, we may lack
sufficient understanding of the problem to formulate a
specific hypothesis.
Content of Research Design
1. Statement of problem (Rationality)
2. Aim & objectives
3. Hypothesis
4. Methodology
– Methods, ways, and procedures used for
collection of data
– Definition of population and sampling procedures to be
followed
5. Time, costs, and responsibility specification
6. Expected outcome
Strategies to eliminate
errors/bias
What is error in Research
• Simply put, error describes how much the results of a
study missed the mark; it encompasses all the flaws in
a research study

• A false or mistaken result obtained in a study or experiment
Types of error
• Random error is the portion of variation in
measurement that has no apparent connection to
any other measurement or variable, generally
regarded as due to chance
• Systematic error which often has a recognizable
source, e.g., a faulty measuring instrument, or
pattern, e.g., it is consistently wrong in a
particular direction
Types of error
• Sampling error
• Population specification error
• Selection error
• Surrogate information error
• Measurement error
• Experimental error
Sampling Error
• A sampling error is a statistical error that occurs when
an analyst does not select a sample that represents the
entire population of data, so the results found in the
sample do not represent the results that would be
obtained from the entire population.
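As a small illustration, the simulation below draws repeated samples from a made-up population and shows how sample means scatter around the population mean, with the scatter (sampling error) shrinking as the sample size grows. The population parameters and sample sizes are arbitrary assumptions for demonstration only.

```python
import random
import statistics

random.seed(0)
# hypothetical population of 100,000 measurements (mean ~100, SD ~15)
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)

for n in (10, 100, 1000):
    # draw 200 samples of size n and record each sample mean
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
    spread = statistics.stdev(sample_means)   # typical size of the sampling error
    print(f"n={n:5d}  average sample mean={statistics.mean(sample_means):7.2f}  "
          f"spread={spread:5.2f}  (population mean={true_mean:7.2f})")
```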
What is bias?
• Bias is disproportionate weight in favor of or
against an idea or thing, usually in a way that
is closed-minded, prejudicial, or unfair.
Strategies to eliminate errors/bias
• Control
• Randomization
• Cross-over design
• Blinding
• Placebo
Purpose of Control Group
• To allow discrimination of patient outcomes.
• The control group tells what would have happened to
patients if they had not received the test treatment.
• In most situations, however, a concurrent control group is
needed because it is not possible to predict outcome with
adequate accuracy or certainty.
• The test and control groups should be similar with regard
to all baseline and on-treatment variables that could
influence outcome, except for the study treatment.
• Failure to achieve this similarity can introduce a bias into
the study.
Types of Controls
• Control groups in clinical trials can be classified on
the basis of two critical attributes:
• The type of treatment used
• The method of determining who will be in the control
group.
• Type of control treatment may be any of the
following four:
– placebo
– no treatment
– different dose or regimen of the study treatment
– a different active treatment.
Strategies to eliminate errors/bias
• Control
• Randomization
• Cross-over design
• Blinding
• Placebo
Randomization
Purpose of randomization
• Primary purpose- To prevent bias in allocating
subjects to treatment groups.
• Secondary purpose- To achieve comparability
between the groups.
Simple Randomization
• If there are N = 50 subjects, the treatment plans are
divided equally, i.e., 25 (n1) and 25 (n2), such that
n1 + n2 = N.
• The envelopes with the treatment plans are labelled and
shuffled.
• The order of the envelopes gives the order of treatment
of the subjects (see the sketch below).
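A minimal Python sketch of this envelope procedure, assuming two equal groups of 25; the treatment labels and the function name are illustrative, and random.shuffle stands in for physically shuffling the envelopes.

```python
import random

def simple_randomization(subject_ids, n1=25, n2=25, seed=None):
    """Assign N = n1 + n2 subjects to two treatment plans by shuffling labelled envelopes."""
    assert len(subject_ids) == n1 + n2
    envelopes = ["Treatment 1"] * n1 + ["Treatment 2"] * n2   # label the envelopes
    rng = random.Random(seed)
    rng.shuffle(envelopes)                                    # shuffle them
    # the order of the envelopes gives the order of treatment of the subjects
    return dict(zip(subject_ids, envelopes))

allocation = simple_randomization(list(range(1, 51)), seed=7)
print(allocation[1], allocation[2])   # which plan subjects 1 and 2 receive
```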
Random allocation
• Random allocation is a procedure in which identified
sample participants are randomly assigned to a
treatment and each participant has the same
probability of being assigned to any particular
treatment.
• If the design is based on N participants and n1 are to
be assigned to Treatment 1 then all possible samples
of size n1 have the same probability of being assigned
to Treatment 1.
• The random selection can be done using a list of
random numbers generated using statistical software
e.g. SPSS, Excel, Minitab, Stata, SAS.
Block Randomization
• Block randomization is commonly used in the
two treatment situation where sample sizes for
the two treatments are to be equal or
approximately equal.
• The process involves recruiting participants in
short blocks and ensuring that half of the
participants within each block are allocated to
treatment “A” and the other half to “B”.
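A minimal sketch of the block procedure just described, assuming a fixed block size of 4; the block size and treatment labels are illustrative.

```python
import random

def block_randomization(n_blocks, block_size=4, seed=None):
    """Allocate participants in short blocks; half of each block gets A, half gets B."""
    rng = random.Random(seed)
    allocation = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)           # order within each block is random
        allocation.extend(block)     # but every block stays balanced 50:50
    return allocation

print(block_randomization(n_blocks=5, seed=1))   # 20 allocations, 10 A and 10 B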
Stratification
• A stratification factor is a categorical covariate
which divides the patient population according
to its levels e.g.
• sex, 2 levels: Male, Female
• age, 3 levels: <40, 40-59, ≥60 years
• recruitment centres
• health status (severity of disease)
• any other known prognostic factor
Minimization
• Using this method, the first patient is truly randomly
allocated; for each subsequent patient, the treatment
allocation is identified, which minimizes the
imbalance between groups at that time.
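A simplified sketch of minimization for two groups, assuming two illustrative stratification factors and a plain "pick the arm with the smaller imbalance, break ties at random" rule; published schemes often add weights or a probabilistic element.

```python
import random

def minimization_assign(new_patient, assigned, factors, seed=None):
    """Choose the arm ('A' or 'B') that minimizes covariate imbalance.

    new_patient: dict of factor -> level, e.g. {"sex": "F", "age": "<40"}
    assigned:    list of (patient_dict, arm) tuples for patients already allocated
    factors:     list of factor names used for balancing
    """
    rng = random.Random(seed)
    if not assigned:                       # first patient: truly random allocation
        return rng.choice(["A", "B"])
    imbalance = {}
    for arm in ("A", "B"):
        total = 0
        for f in factors:
            # count already-assigned patients sharing this patient's level of factor f
            counts = {"A": 0, "B": 0}
            for p, a in assigned:
                if p[f] == new_patient[f]:
                    counts[a] += 1
            counts[arm] += 1               # pretend the new patient joins this arm
            total += abs(counts["A"] - counts["B"])
        imbalance[arm] = total
    if imbalance["A"] == imbalance["B"]:
        return rng.choice(["A", "B"])      # tie: allocate at random
    return min(imbalance, key=imbalance.get)

assigned = [({"sex": "F", "age": "<40"}, "A"), ({"sex": "M", "age": "40-59"}, "B")]
new = {"sex": "F", "age": "<40"}
print(minimization_assign(new, assigned, factors=["sex", "age"], seed=3))  # -> "B"
```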
Strategies to eliminate errors/bias
• Control
• Randomization
• Cross-over design
• Blinding
• Placebo
What is cross over design?
• A cross-over trial design involves giving two or more
interventions/treatments to a single group of patients.
• At its most basic, this trial tests the efficacy of
two treatments where each patient spends a period
of time under both treatment options.
• Patients are randomized into which treatment they
receive first, and then swap to the other treatment
after a predetermined time.
Cross-over design
[Diagram: each group receives one treatment, then crosses over to the other treatment after a set period.]
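A small sketch of the sequence-randomization step, assuming two treatments A and B and a simple coin flip per patient; real trials usually balance the number of AB and BA sequences, for example with block randomization.

```python
import random

def assign_crossover_sequences(patient_ids, seed=None):
    """Randomize patients to an AB or BA sequence; each patient receives both treatments."""
    rng = random.Random(seed)
    sequences = {}
    for pid in patient_ids:
        first = rng.choice(["A", "B"])            # treatment received first
        second = "B" if first == "A" else "A"     # swap after the predetermined period
        sequences[pid] = (first, second)
    return sequences

print(assign_crossover_sequences(range(1, 7), seed=42))
```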
Why are They Used?
• Cross-over trials are useful because they reduce
confounding factors associated with between-
subjects designs.
– Patients serve as their own controls
– Higher statistical power than between-subjects designs
due to no between-subjects error (i.e., fewer patients are
needed to find statistical significance).
Increase the power of a hypothesis test
• Use a larger sample
• Improve your process
• Use a higher significance level (also called alpha or α)
• Choose a larger value for the difference you want to detect
• Use a directional hypothesis (also called a one-tailed
hypothesis)
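As an illustration of these points, the sketch below uses the third-party statsmodels package (assumed installed) to show how the power of a two-sample t-test responds to sample size, significance level, and a one-tailed alternative; the effect size of 0.5 is an arbitrary assumption.

```python
# pip install statsmodels   (third-party; used here only for illustration)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect = 0.5   # assumed standardized difference between the two groups

# power rises with sample size ...
for n in (20, 50, 100):
    power = analysis.solve_power(effect_size=effect, nobs1=n, alpha=0.05)
    print(f"n per group = {n:3d}  power = {power:.2f}")

# ... and with a higher significance level or a directional (one-tailed) hypothesis
print(analysis.solve_power(effect_size=effect, nobs1=50, alpha=0.10))
print(analysis.solve_power(effect_size=effect, nobs1=50, alpha=0.05, alternative='larger'))
```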
Weaknesses with these designs
Common threats to internal validity that can tarnish these
“gold standard” designs
Internal vs. External Validity
• One of the strengths of randomized designs is that they
have substantially higher internal & external validity.
• Internal Validity: refers to the integrity of the
experiment itself. It is the ability to draw a causal link
between your treatment and the dependent variable of
interest.
• External Validity: by contrast, refers to the ability to
generalize your study findings to the population at large.
In other words, are your findings from a sample of 12/24
patients with HYPERTENSION going to apply to all
patients (population) with HYPERTENSION?
Threats to Internal Validity
• Shadish et al., (2002) summarized a number of
possible threats to internal validity, which can
severely jeopardize the findings of even RCT
designs. In particular:
– History, Mortality, & Maturation
– Repeated Testing
– Confounding
– Diffusion & Compensatory Rivalry
Threats to Internal Validity
• History, Mortality, & Maturation
– History: events external to the experiment influence the
participants. EX: Superstorm Sandy hits during a crossover
trial in New Jersey.
– Mortality: Patients either die (mortality) or drop out of the
study (attrition) at different rates.
– Maturation: Patients change over the course of the
treatment, which influences results. EX: Children grow up
during the course of a pediatric clinical trial.
• Repeated Testing
– Patients can become “test-wise” if given the same subjective
test multiple times, or they become conditioned to being
tested (EX: patient’s pulse increases before a needle stick).
Threats to Internal Validity
• Confounding
– Uncontrolled variables are interacting with treatment
effects, which can produce spurious or “random”
associations/results.
• Diffusion & Compensatory Rivalry
– Diffusion: Treatment effects can “spill over” or
“spread” across treatment groups. EX: Patients from
different groups live near each other and discuss /
share their experiences or treatments.
– Compensatory Rivalry: Patients perform in a
certain way because they know they’re in the control
/ experimental groups.
Strategies to eliminate errors/bias
• Control
• Randomization
• Cross-over design
• Blinding
• Placebo
Blinding in clinical trials
A- Non-blinded (open or open label)
denotes trials in which everyone involved
knows who has received which
interventions throughout the trial.
B- Blinding (masking) indicates that
knowledge of the intervention
assignments is hidden from participants,
trial investigators, or assessors.
Blinding in clinical trials
• The term blinding refers to keeping trial participants,
investigators (usually healthcare providers), or
assessors (those collecting outcome data) unaware of
an assigned intervention, so that they are not
influenced by that knowledge.
• Blinding prevents bias at several stages of a trial,
although its relevance varies according to
circumstance.
I- Single blinded
• Usually means that one of the three categories of
individuals (normally the participant rather than the
investigator) remains unaware of intervention
assignments throughout the trial.
• A single-blind trial might also, confusingly,
mean that the participant and investigator
both know the intervention, but that the
assessor remains unaware of it.
II-Double blinded
• In a double-blind trial, participants and investigators
remain unaware of the intervention assignments
throughout the trial.
III- Triple blinded
• Triple blind usually means a double-blind trial that also
maintains a blind data analysis.
• Some investigators, however, denote trials as triple-blind if
investigators and assessors are distinct people and both, as
well as participants, remain unaware of assignments.
• Investigators rarely use quadruple blind, but those that do use
the term to denote blinding of participants, investigators,
assessors, and data analysts.
• Thus, quintuple blind must mean that the allocation schedule
has been lost and nobody knows anything.
Potential benefits, by individual blinded:

Participants
1. Less likely to have biased psychological or physical responses to intervention
2. More likely to comply with trial regimens
3. Less likely to seek additional adjunct interventions
4. Less likely to leave the trial without providing outcome data, leading to loss to follow-up

Trial investigators
1. Less likely to transfer their inclinations or attitudes to participants
2. Less likely to differentially administer co-interventions
3. Less likely to differentially adjust dose
4. Less likely to differentially withdraw participants
5. Less likely to differentially encourage or discourage participants to continue the trial

Assessors
Less likely to have biases affect their outcome assessments, especially with subjective outcomes of interest
Difficulty with blinding

• Surgical trials usually cannot be double-blinded.
Similarly, a trial that compares degrees of pain
associated with sampling blood from the ear or thumb
cannot be double-blinded.
Masking or blinding
• Some people prefer the term masking to blinding to
describe the same procedure.
• Masking might be more appropriate in trials
that involve participants who have impaired
vision, and could be less confusing in trials in
which blindness is an outcome.
• Blinding, however, conveys a strong bias
prevention message.
Strategies to eliminate errors/bias
• Control
• Randomization
• Cross-over design
• Blinding
• Placebo
Placebo
• Placebo means “I will please”(Latin)
• A substance or procedure that is objectively
without specific activity for the condition
being treated.
• Ex- inert tablets, vehicle infusions, sham
surgery, sham acupuncture, sham electrodes,
ultrasound.
• Inert tablets made up of starch or lactose
Placebo
• A placebo refers to a pharmacologically inactive agent
that investigators administer to participants in the
control group of a trial.
• The use of a placebo control group balances the
placebo effect in the treatment group, allowing for
independent assessment of the treatment effect.
• Although placebos can have a psychological effect,
they are administered to participants in a trial because
they are otherwise inactive.
Placebo
• An active placebo is a placebo with properties that mimic the
symptoms or side-effects— eg, dry mouth, sweating—that
might otherwise reveal the identity of the (pharmacologically)
active test treatment.

• Placebos should be administered to controls when assessing
the effects of a proposed new treatment for a condition for
which no effective treatment already exists.

• Blinding frequently necessitates the use of placebos. However,
a proven effective standard treatment, if one exists, is usually
given to the control group for comparison against a new
treatment.
Advantages of Placebo-controlled
Trials
• Ability to Demonstrate Efficacy
• Measures "Absolute" Efficacy and Safety
• Minimizing the Effect of Subject and
Investigator Expectations
Disadvantages of Placebo-controlled
Trials
• Ethical Concerns
• Patient and Physician Practical Concerns
• No Comparative Information
Assignment
Impact factor
• The impact factor (IF) or journal impact
factor (JIF) of an academic journal is
a scientometric index that reflects the yearly
average number of citations that articles
published in the last two years in a given journal
received. It is frequently used as a proxy for the
relative importance of a journal within its field;
journals with higher impact factors are often
deemed to be more important than those with
lower ones.
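For example, a journal's 2023 impact factor is the number of 2023 citations to its 2021-2022 articles divided by the number of articles it published in 2021-2022; the counts in the sketch below are invented purely for illustration.

```python
# hypothetical counts for one journal (illustrative numbers only)
citations_2023_to_2021_2022_articles = 600   # citations received in 2023
articles_published_2021_2022 = 240           # citable items from the prior two years

impact_factor_2023 = citations_2023_to_2021_2022_articles / articles_published_2021_2022
print(f"2023 impact factor = {impact_factor_2023:.2f}")   # 2.50
```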
Impact factor
• The impact factor was devised by Eugene Garfield, the
founder of the Institute for Scientific Information (ISI).
Impact factors are calculated yearly starting from
1975 for journals listed in the Journal Citation
Reports (JCR). ISI was acquired by Thomson Scientific
& Healthcare in 1992, and became known as
Thomson ISI. In 2018, Thomson ISI was sold to Onex
Corporation and Baring Private Equity Asia. They
founded a new corporation, Clarivate, which is now
the publisher of the JCR.
H-index
• The h-index is an author-level metric that measures
both the productivity and citation impact of
the publications of a scientist or scholar.
• The index is based on the set of the scientist's most
cited papers and the number of citations that they
have received in other publications.
• The index was suggested in 2005 by Jorge E. Hirsch,
a physicist at UC San Diego, as a tool for
determining theoretical physicists' relative
quality and is sometimes called the Hirsch
index or Hirsch number.
Definition of h index
• The h-index is defined as the maximum value
of h such that the given author/journal has
published h papers that have each been cited
at least h times.
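A direct implementation of this definition is sketched below; the citation counts in the example are invented for illustration.

```python
def h_index(citations):
    """Largest h such that the author has h papers each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i        # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))   # -> 3
```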
What is i-10 index
• i10-Index = the number of publications with at
least 10 citations. This very simple measure is
only used by Google Scholar, and is another
way to help gauge the productivity of a
scholar.
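Using the same invented citation counts as above, the i10-index is simply a count of papers with at least 10 citations.

```python
def i10_index(citations):
    """Number of publications with at least 10 citations (Google Scholar's i10-index)."""
    return sum(1 for c in citations if c >= 10)

print(i10_index([25, 8, 5, 3, 3, 1]))   # -> 1
```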
What is scopus index Journal?
• One among many, Scopus is an “abstract and
citation indexing database of peer-reviewed
literature: scientific journals, books and
conference proceedings” owned by the
Elsevier publishing firm (Elsevier,
https://www.elsevier.com/en-
au/solutions/scopus).
Some famous scientific journal
publishers
• Elsevier (Science direct)
• Springer
• Wiley inter-science
• ACS (American Chemical Society)
What is Web of Science??
• Web of Science (previously known as Web of
Knowledge) is a website which provides
subscription-based access to multiple databases that
provide comprehensive citation data for many
different academic disciplines.
• Disciplines: Science, social science, arts, humanity
etc.
• Record depth: Citation indexing, author, etc
• Format coverage: Full text articles, reviews etc
What is PubMed??
• PubMed is a free resource supporting the
search and retrieval of biomedical and life
sciences literature with the aim of improving
health–both globally and personally.
The PubMed database contains more than 30
million citations and abstracts of biomedical
literature.
How to know a researcher??
• Dr. Vijay Kothari
Institute of Science, Nirma University
S-G Highway, Ahmedabad-382481,India
Ph:(O) +91-079-71652757 (M) 09998365230
vijay23112004@yahoo.co.in; vijay.kothari@nirmauni.ac.in
http://scholar.google.co.in/citations?user=KtRl6p4AAAAJ
• ORCID ID: http://orcid.org/0000-0003-1035-7217;
• Scopus ID: 36608305900
Publons: https://publons.com/a/536628;
• Rxivist: https://rxivist.org/authors/258084
ResearchGate: http://www.researchgate.net/profile/Vijay_Kothari
• LinkedIn: www.linkedin.com/in/vijay-kothari-832bb2136

