
I. Validity
The degree to which an instrument measures what it is intended to measure.

Validity is a more complex concept that concerns the soundness of the study's evidence, that is, whether the findings are cogent and well grounded. It is an important criterion for assessing the methods used to measure variables. In this context, the validity question is whether there is evidence to support the assertion that the methods are really measuring the abstract concepts they purport to measure. The validity criterion underscores the importance of having solid conceptual definitions of research variables, as well as high-quality methods to operationalize them.

II. Internal Validity and External Validity
Consumers evaluating the merits of a study need to pay attention to the research design. One framework for evaluating the adequacy of a research design is to assess its internal and external validity.

ii.i Internal Validity
Refers to the extent to which it is possible to infer that the independent variable is truly influencing the dependent variable. True experiments possess a high degree of internal validity because procedures such as control groups and randomization enable the researcher to control extraneous variables, thereby ruling out most alternative explanations for the results.

ii.ii External Validity
Refers to the generalizability of research findings to other settings or samples. Research is almost never conducted with the intention of discovering relationships among variables for only one group of people at one point in time.

III. Validity of Measuring Instruments

iii.i Content Validity
It is concerned with the sampling adequacy of the content area being measured.

It is of particular relevance to people designing tests of knowledge in a specific content area.

iii.ii Criterion-Related Validity
It is a pragmatic approach in which the researcher seeks to establish the relationship between scores on the instrument in question and some external criterion. The instrument, whatever abstract attribute it is measuring, is said to be valid if its scores correspond strongly with scores on the criterion.

o Validity coefficient is computed using a mathematical formula that correlates the scores on the instrument with scores on the criterion variable.
o Predictive validity refers to the ability of an instrument to differentiate between the performances or behaviors of subjects on some future criterion.
o Concurrent validity refers to the ability of an instrument to distinguish among people who differ in their present status on some criterion.

iii.iii Construct Validity
It is concerned with the following question: What construct is the instrument actually measuring? It employs both logical and empirical procedures and requires a judgment about what the instrument is measuring. The logical operations required by construct validation are typically linked to a theory or conceptual framework. Construct validity and criterion-related validity share an empirical component, but in the latter case there is usually a pragmatic, objective criterion with which to compare a measure, rather than a second measure of an abstract theoretical construct.

iii.iv Interpretation of Validity
An instrument's validity is never proved or established outright; rather, it is supported, to a greater or lesser degree, by evidence. The more evidence that can be gathered that an instrument is measuring what it is supposed to measure, the greater the confidence the researcher can have in its validity.

IV. References
Polit and Beck (2007). Canadian Essentials of Nursing Research. United States: Lippincott Williams and Wilkins.
Polit and Hungler (1993). Essentials of Nursing Research: Methods, Appraisal, and Utilization. United States: J.B. Lippincott Company.

Submitted by: Jane Dear M. Pasal Student

Submitted to: Ma. Florecilla C. Cinches Professor
