DEFINE AND DESCRIBE THE TERM VALIDITY. ELABORATE DIFFERENT TOOLS USED FOR VALIDATION PURPOSES.
Presented to: Dr. Fahd Naveed Kausar
Presented by: Samina Gul
Roll no. 01 (Semester 2nd)
VALIDITY
 Validity is the extent to which an instrument
measures what it is supposed to measure.
 Validity explains how well the collected data
covers the actual area of investigation (Ghauri
and Gronhaug, 2005).
 Valid: sound (reasoned and logical), well defined, and justified (supported by evidence) are some of the words used to define the term.
 There are numerous statistical tests and
measures to assess the validity of quantitative
instruments.
BACKGROUND OF VALIDITY
 To understand validity, one must understand how humans make sense of their worlds through inferences, interpretations, and conclusions.
 Inferences are statements about unseen
connections between phenomena (deduce
something from evidence and reasoning).
 Interpretations involve taking evidence and making sense of it; interpretation is generally more value laden (explaining the meaning of something).
 Conclusions are summaries that take into account
a range of available data (final decision by
reasoning).
 Inferences, interpretations, and conclusions all involve making sense of observable phenomena based on available evidence.
CONCEPT OF VALIDITY

 The word "valid" is derived from the Latin validus, meaning strong.
 Validity in research refers to how accurately a
study answers the study question or the strength
of the study conclusions
 Validity is not a property of the tool itself, but of the instrument’s scores and their interpretations when the assessment tool is used in a particular setting.
 Assessment instruments must be valid for study
results to be credible
 Validity must be examined and reported, or
references cited, for each assessment instrument
used to measure study outcomes.
CONT..
 Validity is considered one of the main measurement properties of such instruments.
 Validity refers to the property of an instrument to measure exactly what it proposes to measure.
 Nowadays, a growing number of questionnaires and measurement instruments are used in research.
 Although many instruments have been created,
many of them have not been adequately
validated.
 Researchers agree in considering validity one of the main measurement properties of an instrument.
VALIDATION
 The process of evaluating the logical
arguments and scientific evidence that
support claims is called validation.
 Validation involves collecting and analyzing
data to assess the accuracy of an instrument.
 Validation in research
 Validation in assessment
CONT..
 Validation in research
It involves close scrutiny of logical arguments
and the empirical evidence (knowledge
based on observation, actual experience) to
determine whether they support theoretical
claims
 Validation in assessment
It involves evaluating logical arguments and empirical evidence to determine whether they support the proposed inferences from assessment results, as well as the interpretations and uses of those results.
POINTS TO BE MEASURED
 To ensure the validity of a piece of research, the following points have to be addressed:
 Appropriate time scale for the study has to be
selected
 Appropriate methodology has to be chosen,
taking into account the characteristics of the
study
 The most suitable sample method for the
study has to be selected
 The respondents must not be pressured in any
way to select specific choices among the
answer sets
HOW TO IMPROVE VALIDITY
 There are some ways to improve validity as
follows:
 Make sure a researcher’s goals and objectives
are clearly defined and operationalized.
 Match the assessment measure to the goals
and objectives of research.
 Choose your data collection method and
instrument carefully
 Pilot test your data collection method and
instrument
CONT…

 Collect data from a representative sample of the population
 Take adequate sample size
 Analyze data using appropriate and robust
statistical techniques
 Interpret and report results accurately and
transparently
 Increase randomization to reduce sample bias
THEORETICAL CONCEPT
 Validity is a theoretical concept that has
evolved considerably over time (e.g.,
American Psychological Association 1954,
1966; American Psychological Association,
American Educational Research Association
and National Council on Measurement in
Education 1974, 1985, 1999).
 Validity studies that have been carried out will serve as empirical evidence (knowledge based on observation and actual experience).
CRONBACH’S CONTRIBUTION
 A significant change in validity theory was
Cronbach’s (1971) statement that: ‘one does
not validate a test, but an interpretation of
data arising from a specific procedure’. This
meant that the emphasis of the validity
investigation shifted from the specific
instrument to the interpretations of the
measurement outcome. This view has
remained, and most researchers agree that it
is not the test itself but how its outcome is
interpreted and used that should be the
focus in a validation process.
MODERN VALIDITY THEORY
 In modern validity theory, the different forms of validity evidence are often brought together in a unitary validity framework. In this framework, construct validity has a central and overarching position, embracing almost all forms of validity evidence.
MESSICK’S (1989)
CONTRIBUTION
 The unitary concept of validity was gradually
developed, and eventually described by Messick
(1989), who established the consensus that
construct validity is the unifying concept of
validity.
 Messick (1989, 13) also describes validity as an ‘integrated evaluative judgement of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores’.
 Messick’s view of validity was supported by Cronbach (1988); Crooks, Kane, and Cohen (1996); and Kane (1992).
ESSENTIAL PARTS OF VALIDITY
 Lee J. Cronbach and Paul E. Meehl first
introduced the issue of validity in quantitative
research in the mid 20th century in relation to
the establishment of the criteria for assessing
Psychological tests [Cronbach & Meehl, 1955].
 In research, validity has two essential parts:
 a) internal (credibility)
 b) external (transferability)
INTERNAL VALIDITY
 In research, internal validity asks whether the results of the investigation are truly due to the expected causal relationships among variables, based on logical argument and empirical evidence.
EXTERNAL VALIDITY
 External validity shows whether the results
given by the study are transferable to other
groups of interest [Last, 2001].
 A researcher can increase external validity by:
 Achieving representativeness of the population through sampling strategies such as random selection
THREATS TO INTERNAL VALIDITY
 Errors of measurement that affect validity are
systematic or constant errors. Threats to the
internal validity may occur throughout the
research process.
 During data collection, possible threats to internal
validity are:
 Instrumentation issues
 Researcher bias in the use of techniques
 Order bias (person factor; the sample does not represent the larger population)
 Statistical factors (participants become familiar with items in pretesting and alter their responses in post-testing)
 Situational factors (events outside the investigation influence the results)
THREATS TO EXTERNAL
VALIDITY
 The external validity of a quantitative study may be threatened by:
 Interaction of history with treatment (crisis)
 Interaction of treatment with setting (hospital)
 Interaction of treatment with selection
(volunteer)
 Population, time and environmental validity
[Ryan et al., 2002].
 External validity is seriously threatened, if
biases or other limitations exist in the
accessible population [Howell, 1995].
VALIDATION TOOLS
 CKI
 CVI, CVR
 Pearson correlation coefficient
CONT…
 The measurement tool depends on the type of validity a researcher wants to assess:
 For face validity: CKI
 For content validity: there is no purely statistical test; expert judgement is summarized using the CVR and CVI
 For construct validity: the instrument is checked by factor analysis, the multitrait-multimethod matrix of correlations, and correlation analysis (Pearson correlation coefficient)
PEARSON CORRELATION
COEFFICIENT
 The questionnaire data are transferred to an SPSS file.
 The researcher then checks the validity of the questionnaire.
 Go to Transform > Compute Variable and compute the total score of the items.
 Critical values of Pearson’s correlation coefficient are used (a 5% significance level, i.e. 95% confidence, is commonly used in research).
 Click Analyze > Correlate > Bivariate (all of the variables appear in the list on the left side).
 Select Pearson, Two-tailed, and Flag significant correlations, then click OK.
 Obtain the correlation table (values are repeated horizontally and vertically).
 If the significance value (Sig.) for an item’s correlation with the total is more than .05, the correlation is considered not significant.
 Then check the significance level, which repeats in the mirrored cells of the table (a scripted sketch of this item-total check follows below).
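The same item-total check can be scripted outside SPSS. Below is a minimal sketch in Python using pandas and SciPy; the item columns (q1, q2, q3) and the responses are hypothetical placeholders for a researcher's own questionnaire data.

# Minimal sketch of an item-total Pearson validity check (hypothetical data).
import pandas as pd
from scipy import stats

# One row per respondent, one column per questionnaire item (e.g. 5-point responses).
data = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2, 5, 4, 3],
    "q2": [3, 5, 2, 4, 1, 5, 4, 2],
    "q3": [4, 4, 3, 5, 2, 5, 3, 3],
})

# Total score, analogous to Transform > Compute Variable in SPSS.
data["total"] = data[["q1", "q2", "q3"]].sum(axis=1)

# Correlate each item with the total and flag items whose p-value exceeds .05.
for item in ["q1", "q2", "q3"]:
    r, p = stats.pearsonr(data[item], data["total"])
    verdict = "significant" if p < 0.05 else "not significant (review item)"
    print(f"{item}: r = {r:.3f}, p = {p:.3f} -> {verdict}")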
COHEN’S KAPPA INDEX
 In order to examine face validity, a dichotomous scale can be used with the categorical options “Yes” and “No”, which indicate a favourable and an unfavourable item, respectively.
 Where favourable item means that the item is
objectively structured.
 The collected data are then analysed using Cohen’s Kappa Index (CKI) to determine the face validity of the instrument (a scripted sketch follows below).
 DM. et al. (1975) recommended a minimally
acceptable Kappa of 0.60
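As an illustration only, the kappa calculation can be reproduced with scikit-learn's cohen_kappa_score; the two raters and their Yes/No judgements below are hypothetical.

# Minimal sketch: Cohen's kappa for two raters' Yes/No face-validity judgements.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["Yes", "Yes", "No", "Yes", "No", "Yes", "Yes", "No"]  # hypothetical ratings
rater_2 = ["Yes", "Yes", "No", "No",  "No", "Yes", "Yes", "No"]  # hypothetical ratings

kappa = cohen_kappa_score(rater_1, rater_2)
# 0.60 is treated here as the minimally acceptable kappa, per the slide above.
print(f"Cohen's kappa = {kappa:.2f}",
      "(acceptable)" if kappa >= 0.60 else "(below the 0.60 threshold)")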
CVR
 A content validity survey is generated in which each item is assessed on a three-point scale (not necessary; useful but not essential; essential). The survey should be sent to experts in the same field as the research. The content validity ratio (CVR) is then calculated for each item by employing Lawshe’s (1975) method. Items that are not significant at the critical level are eliminated. The critical level of Lawshe’s method is explained on the following slide.
CVR
 Lawshe’s Method
The CVR (content validity ratio) proposed by
Lawshe (1975) is a linear transformation of a
proportional level of agreement on how many
“experts” within a panel rate an item “essential”
calculated in the following way:
CVR = (n_e − N/2) / (N/2)
where CVR is the content validity ratio, n_e is the number of panel members indicating “essential,” and N is the total number of panel members. Whether an item is retained on the basis of its CVR depends on the number of panelists (a worked sketch follows below).
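A minimal sketch of the CVR calculation follows, assuming a hypothetical panel of ten experts; whether the resulting value retains the item must still be checked against Lawshe's critical value for that panel size.

# Minimal sketch of Lawshe's content validity ratio for a single item.
def content_validity_ratio(n_essential: int, n_panel: int) -> float:
    """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    return (n_essential - n_panel / 2) / (n_panel / 2)

# Hypothetical item: 8 of 10 panel members rate it "essential".
cvr = content_validity_ratio(n_essential=8, n_panel=10)
print(f"CVR = {cvr:.2f}")  # (8 - 5) / 5 = 0.60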
CVI
 A quantitative approach using the content validity index
(CVI)
 The CVI measures the proportion or percentage of judges who agree on certain aspects of a tool and its items. This method uses a four-point Likert scale, where: 1 = non-equivalent item; 2 = the item needs to be extensively revised so equivalence can be assessed; 3 = equivalent item, needs minor adjustments; and 4 = totally equivalent item. Items that receive 1 or 2 points have to be revised or removed. To calculate the CVI for each item of the instrument, count all the answers of 3 or 4 given by the expert committee and divide the result by the total number of answers, according to the following formula:
 CVI = No. of answers of 3 or 4 / Total no. of answers
 The acceptable concordance index among the expert committee must be at least 0.80 and, preferably, higher than 0.90 (a worked sketch follows below).
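For completeness, a minimal sketch of the item-level CVI calculation; the ten expert ratings are hypothetical.

# Minimal sketch of the content validity index (CVI) for one item.
def content_validity_index(ratings: list[int]) -> float:
    """CVI = number of ratings of 3 or 4 / total number of ratings."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

expert_ratings = [4, 3, 4, 4, 2, 4, 3, 4, 4, 3]  # hypothetical ten-expert committee
cvi = content_validity_index(expert_ratings)
print(f"CVI = {cvi:.2f}")  # 9/10 = 0.90, above the 0.80 minimum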
REFERENCES
1. Encyclopedia of Autism Spectrum Disorders, pp. 3212–3213, Springer Link. https://link.springer.com/referenceworkentry/10.1007/978-1-4419-1698-3_1652
2. Catherine S. Taylor (book).
3. Gail M. Sullivan, MD, MPH. Downloaded from http://meridian.allenpress.com/jgme/article-pdf/3/2/119/2219553/jgme-d-11-00075_1.pdf by Pakistan user on 20 June 2023.
4. Alexandre, N. et al. Psychometric properties in instruments: evaluation of reliability and validity. Epidemiol. Serv. Saude, Brasília, 26(3), Jul–Sep 2017. doi: 10.5123/S1679-49742017000300022
5. Mohajan, Haradhan (2017). Two Criteria for Good Measurements in Research: Validity and Reliability. Annals of Spiru Haret University, 17(3): 58–82. MPRA Paper No. 83458, posted 24 Dec 2017. Online at https://mpra.ub.uni-muenchen.de/83458/
REFERENCES
6. Taherdoost, H. (2016). Validity and Reliability of the Research Instrument; How to Test the Validation of a Questionnaire/Survey in a Research. International Journal of Academic Research in Management, 5(3), ISSN: 2296-1747. (CKI)
7. Ana Cláudia de Souza et al. (2017). Psychometric properties in instruments: evaluation of reliability and validity. Epidemiol. Serv. Saude, Brasília, 26(3). (Criterion)
8. Simon Wolming and Christina Wikström (2010). The concept of validity in theory and practice. Assessment in Education: Principles, Policy & Practice, 17(2), 117–132.
