VALIDITY

PREPARED BY:
R.SREERAJA KUMAR
PROFESSOR
SNSR, SHARDA UNIVERSITY
Introduction:

Validity of an instrument refers to the degree to which an instrument measures what it is supposed to be measuring.
For example, a temperature-measuring instrument is supposed to measure only temperature; it cannot be considered a valid instrument if it measures an attribute other than temperature.
Similarly, if a researcher developed a tool to measure pain and it also includes items to measure anxiety, it cannot be considered a valid tool.
Therefore, a valid tool should only measure what it is supposed to be measuring.
Definition:

*According to Treece and Treece, 'Validity refers to an instrument or test actually testing what it is supposed to be testing'.

*According to Polit and Hungler, 'Validity refers to the degree to which an instrument measures what it is supposed to be measuring'.
*Validity is the ability of an instrument to measure
what it is designed to measure.

*Validity is the appropriateness, completeness, and usefulness of an attribute-measuring research instrument.

*In general, a test is valid if it measures what it claims to measure.
Characteristics of validity:

It has three important characteristics:

*Validity is a relative term. A test is not generally valid; it is valid only for a particular purpose.

*For example, a test of statistical ability will be valid only for measuring statistical ability, because it is put only to the use of measuring that ability. It will be worthless for other uses, like measuring knowledge in physics, chemistry, English, etc.
*Validity is not a fixed property of the test, because validation is not a fixed process but rather an unending process.

*With the discovery of new concepts and the formulation of new meanings, the old contents of the test become less meaningful.

*Therefore, they need to be modified radically in the light of new meanings.
*Validity, like reliability, is a matter of degree and not an all-or-none property.

*A test meant for measuring a particular trait or ability cannot be said to be either perfectly valid or not valid at all.

*It is usually more or less valid.


TYPES OF VALIDITY

*Face validity
*Content validity
*Criterion-related validity
  - Predictive validity
  - Concurrent validity
*Construct validity
  - Convergent validity
  - Discriminant validity
Face Validity:

*It is the extent to which the measurement method appears "on its face" to measure the construct of interest.

*Curricular validity is the extent to which the content of the test matches the objectives of a specific curriculum.
*Face validity involves an overall look of an
instrument regarding its appropriateness to measure a
particular attribute or phenomenon.

*However, face validity is not considered a very important or essential type of validity for an instrument.
*For example, for a Likert scale designed to measure the attitude of nurses towards patients admitted with HIV/AIDS, a researcher may judge the face validity of this instrument by its appearance, that is, whether it looks good or not; but this does not provide any guarantee about the appropriateness and completeness of the research instrument with regard to its content, construct, and measurement score.
Content Validity:

*It is the extent to which the measurement method covers the entire range of relevant behaviors, thoughts, and feelings that define the construct being measured.

*It is concerned with the scope of coverage of the content area to be measured. More often it is applied in tests of knowledge measurement. It is mostly used in measuring complex psychological attributes of a person.
*It is a case of expert judgment about the content area
included in the research instrument to measure a particular
phenomenon.

*Judgment of the content validity may be subjective and is based on previous researchers' and experts' opinions about the adequacy, appropriateness, and completeness of the instrument.

*Generally, content validity is ensured through the judgments of experts about the content.
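
The slides above describe content validity as a matter of expert judgment. One common way to quantify such judgment, shown here only as an illustrative sketch with hypothetical ratings (the content validity index is not discussed in the slides), is to compute the proportion of experts who rate each item of the instrument as relevant:

```python
# Illustrative sketch: content validity index (CVI) from hypothetical expert ratings.
# Each expert rates every item on a 4-point relevance scale
# (1 = not relevant ... 4 = highly relevant). An item counts as "relevant"
# when rated 3 or 4. Item-level CVI = proportion of experts rating it relevant.

# Hypothetical ratings: rows = items of the instrument, columns = 5 experts
ratings = [
    [4, 4, 3, 4, 3],   # item 1
    [3, 4, 4, 4, 4],   # item 2
    [2, 3, 2, 3, 2],   # item 3 (a weaker item)
]

def item_cvi(item_ratings):
    """Proportion of experts who rated the item 3 or 4."""
    relevant = sum(1 for r in item_ratings if r >= 3)
    return relevant / len(item_ratings)

item_cvis = [item_cvi(item) for item in ratings]
scale_cvi = sum(item_cvis) / len(item_cvis)   # average across items

for i, cvi in enumerate(item_cvis, start=1):
    print(f"Item {i} CVI = {cvi:.2f}")
print(f"Scale CVI (average) = {scale_cvi:.2f}")
```

Items with a low CVI would be revised or dropped before the experts' judgment is taken as evidence of content validity.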
Criterion-related validity:

*This type of validity refers to the relationship between the measurements of the instrument and some other external criterion.

*For example, a tool is developed to measure professionalism among nurses, and to assess the criterion validity, the nurses were separately asked about the number of research papers they have published and the number of professional conferences they have attended.
*Later, a correlation coefficient is calculated to assess the criterion validity.
*This tool is considered strong in criterion validity if a positive correlation exists between the score of the tool measuring professionalism and the number of research articles published and professional conferences attended by the nurses.
*The instrument is valid if its measurements strongly correspond to the scores of some other valid criterion. The problem with criterion-related validity is finding a reliable and valid external criterion.
*Mostly we have to rely on a less-than-perfect criterion, because the ratings found by empirical and supervisory methods can only be computed mathematically by correlating the scores of the instrument with the scores of the criterion variables.
*Hence, a correlation coefficient of ≥ 0.70 is desirable. Criterion-related validity may be differentiated into predictive and concurrent validity.
*It is the extent to which people's scores are correlated with other variables or criteria that reflect the same construct.
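
As a minimal sketch of the correlation check described above (the professionalism scores and publication counts are hypothetical), the criterion validity coefficient can be computed as a Pearson correlation and compared against the ≥ 0.70 guideline:

```python
# Minimal sketch (hypothetical data): Pearson correlation between scores on a
# professionalism tool and an external criterion (number of papers published).
from scipy.stats import pearsonr

professionalism_scores = [62, 75, 58, 90, 70, 84, 66, 79]   # scores on the tool
papers_published       = [1, 3, 0, 6, 2, 5, 1, 4]           # external criterion

r, p_value = pearsonr(professionalism_scores, papers_published)
print(f"Criterion validity coefficient: r = {r:.2f} (p = {p_value:.3f})")

# The slides suggest a coefficient of about 0.70 or above as desirable evidence.
if r >= 0.70:
    print("Correlation meets the >= 0.70 guideline for criterion-related validity.")
```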
Types of criterion validity:

*Predictive Validity: When the criterion is something that will happen or be assessed in the future, this is called predictive validity.

*It is the degree of forecasting judgment; for example, some personality tests given to students can be predictive of their future academic behavior patterns.

*It concerns the relationship between the instrument's scores and performance on some future criterion. An instrument has predictive validity when its score significantly correlates with some future criterion.
*Concurrent Validity: When the criterion is something that is happening or being assessed at the same time as the construct of interest, it is called concurrent validity.
*It is the degree to which the measure reflects the present; it relates to present, specific behaviors and characteristics. Hence, the difference between predictive and concurrent validity refers to the timing of obtaining measurements of the criterion.
Construct Validity:

*Construct validity refers to the degree to which a test or other measure assesses the underlying theoretical construct it is supposed to measure.

*Construct validity is determined by ascertaining the contribution of each construct to the total variance observed in a phenomenon.
*It is based upon statistical procedures.
*The greater the variance attributable to the construct, the higher the validity of the instrument.
Convergent validity:

*It is established when the test scores and their expected referents are highly correlated. For example, if a numerical aptitude test correlates highly with an arithmetical reasoning test and an algebra test, then there exists convergent validity.
*It consists of providing evidence that two tests that are believed to measure closely related skills or types of knowledge correlate strongly.


Discriminant validity:

*Discriminant validity is established when the test scores correlate poorly (or not at all) with measures with which they should not correlate, because the construct differs from those referents or measures. For example, if a numerical aptitude test correlates poorly (or not at all) with a history or geography test, then there exists discriminant validity. In this sense, a low correlation sometimes becomes the evidence for the validity of the test.
*Discriminant validity, by the same logic, consists of providing evidence that two tests that do not measure closely related skills or types of knowledge do not correlate strongly.
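
To make the contrast concrete, the following sketch (with hypothetical test scores) checks that a numerical aptitude test correlates strongly with an algebra test (convergent validity) and only weakly with a history test (discriminant validity):

```python
# Sketch with hypothetical scores: convergent vs. discriminant validity checks.
from scipy.stats import pearsonr

numerical_aptitude = [55, 68, 72, 60, 80, 64, 77, 59]
algebra_test       = [52, 70, 75, 58, 83, 66, 74, 61]   # closely related construct
history_test       = [71, 71, 61, 61, 69, 63, 68, 64]   # unrelated construct

r_convergent, _ = pearsonr(numerical_aptitude, algebra_test)
r_discriminant, _ = pearsonr(numerical_aptitude, history_test)

print(f"Convergent (aptitude vs. algebra):   r = {r_convergent:.2f}")   # expected to be high
print(f"Discriminant (aptitude vs. history): r = {r_discriminant:.2f}") # expected to be low
```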
General Criteria for Validity of a Scale:

Logical validation:

*According to this criterion, a scale is said to be valid if it stands to one's common sense.
*This approach is not universally accepted, as it is greatly influenced by one's subjective view.
Known opinion:

*In this criterion, instead of the common sense of one person, the judgment of a group of persons is used.
*This criterion is superior to logical validation because it is less subjective, but it still cannot be said to be completely free from bias.
*A large number of people are not necessarily free from bias in their opinions.
Jury groups:

*In this criterion, a scale is administered to subjects who are known to hold a particular opinion or belong to a particular category, and the results are compared with the known facts.
*For example, if the attitude of persons towards a federal state is to be measured, we deliberately select persons who are either strong supporters of federalism or opponents of federalism. If the results tally with the known facts, the scale administered is considered valid.
Independent criteria:

*In this criterion, a scale is tested against various external independent variables. If all or most of the tests show the same or similar results, the scale is said to be valid.
*For example, we may obtain scores on a new intelligence test and compare them with a standard one. If the scores obtained on both scales are similar, then the scale is said to be a valid one.
Steps for Assessing Validity of an Experimental Study:

Each step involves an assessment process and a decision.

Validity of statistical conclusion:
*Assessment: Assess statistical significance (i.e., whether the p value is ≤ 0.05 and the statistical results are valid).
*Decision: If the difference is real and not likely due to chance variation, proceed to the next step. If the difference is likely due to chance variation, stop here.
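
As an illustrative sketch of this first step (the group scores are hypothetical), an independent-samples t-test can be used to check whether the difference between an experimental group and a control group is statistically significant at the 0.05 level:

```python
# Hypothetical sketch: step 1, validity of statistical conclusion.
# Compare outcome scores of an experimental group and a control group and
# decide whether the observed difference is likely real or due to chance.
from scipy.stats import ttest_ind

experimental_group = [78, 85, 80, 90, 88, 76, 84, 82]
control_group      = [70, 72, 68, 75, 71, 69, 74, 73]

t_stat, p_value = ttest_ind(experimental_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value <= 0.05:
    print("Difference is unlikely to be due to chance; proceed to assess internal validity.")
else:
    print("Difference is likely due to chance variation; stop here.")
```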
Internal validity:

*It is basically the extent to which a study is free from flaws and any differences in a measurement are due to the independent variable and nothing else.
*Assessment: Assess internal validity on the basis of the research design and operational procedures.
*Decision: If the difference is most likely due to the treatment, proceed to the next step. If the difference is probably due to the effects of confounding factors or bias, stop here.
External validity:

*It is the extent to which the results of a research study can be generalized to different situations, different groups of people, different settings, different conditions, etc.
*Assessment: Examine the inclusion and exclusion criteria and the characteristics of the study participants.
*Decision: If the study participants are similar to the patients the report reader sees, the treatment should be useful. If the study participants are very different from the patients the report reader sees, the treatment may or may not be useful.
Measurement of Validity

*Important points to evaluate the validity of a measurement method:

*1. First, this process requires empirical evidence. A measurement method cannot be declared valid or invalid before it has ever been used and the resulting scores have been thoroughly analyzed.
*2. Second, it is an ongoing process. The conclusion that
a measurement method is valid generally depends on
the results of many studies done over a period of years.

*3. Third, validity is not an all-or-none property of a measurement method. It is possible for a measurement method to be judged "somewhat valid" or for one measure to be considered "more valid" than another.
Factors Affecting Validity:

*History
*Maturation
*Testing
*Instrumentation
*Statistical Regression
*Experimental Mortality
*Compensatory Rivalry by Control Group
*Compensatory Equalization of Treatments
*Resentful Demoralization of Control Group
CONCLUSION:

*Validity is the ability of an instrument to measure what it is designed to measure.
*Validity is the appropriateness, completeness, and usefulness of an attribute-measuring research instrument.
Assignment:

Explain validity.

Date of submission: 15/1/201920

Student teacher: Netra Gautam, M.Sc. 1st year


REFERENCES

*Sharma, S.K. (2018). "Nursing Research and Statistics" (3rd edition). Elsevier, New Delhi. Page no. 336-337.

*Patel, P.S. (2016). "Essential Textbook of Nursing Research and Statistics" (1st edition). Samikshya Publication, Kathmandu, Nepal. Page no. 167-171.

*https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=2ahUKEwicrLiFr-_mAhXZdCsKHTEqBtsQFjAAegQICxAB&url=https%3
Thank you
