
MATT 122 - PRACTICAL RESEARCH 2

Second Semester, Final Term


Concept Note (Week 9, Sessions 1 & 2)

Research Instruments for Quantitative Research

Broadly defined, an instrument is a tool, such as a questionnaire or a survey, that measures specific items to gather quantitative data. Researchers use instruments to measure abstract concepts such as achievement, ability, or personality. Instruments also allow researchers to observe behavior and interview individuals (Plano Clark and Creswell 2015).

Types of Research Instruments

According to Plano Clark and Creswell (2015), quantitative studies generally use five types of research instruments: demographic forms, performance measures, attitudinal measures, behavioral observation checklists, and factual information documents.

Demographic Forms
Demographic forms are used by researchers to collect basic information about the participants, such as age, gender, ethnicity, and annual income.

Example:
1. Age: ___
2. Gender:
____ Male
____ Female
____ Prefer not to say
3. Civil Status:
____ Single
____ Married
____ Widowed
4. Nationality: _____________________

Performance Measures
Performance measures are used to assess or rate an individual’s abilities, such as achievement, intelligence, aptitude, or interests. Examples of this type of measure include the National Achievement Test administered by the Department of Education and the college admission tests conducted by different universities in the country.

Attitudinal Measures
Attitudinal measures are instruments used to measure an individual’s attitudes and opinions about a subject. These instruments assess the respondent’s level of agreement with given statements, often requiring them to choose from a range of responses, from strongly agree to strongly disagree. The questionnaire in the Explore part is an example of an attitudinal measure, since it determines the extent to which the participants agree or disagree with a given statement. An attitudinal item might look like the example below.
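
Example (a hypothetical item, written for illustration only):
I feel confident when reciting in class.
____ Strongly agree
____ Agree
____ Neutral
____ Disagree
____ Strongly disagree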

Behavioral Observation Checklists

Behavioral observation checklists are used to record individuals’ behaviors, mostly when researchers want to measure a person’s actual behavior rather than simply record their views or perceptions.

Factual Information Documents

Factual information documents are existing records, such as available public records, that researchers access to obtain factual information about the participants. Examples of these documents are school records, attendance records, medical records, and even census information.

Constructing Research Instruments for Quantitative Research

Before constructing a good-quality research instrument, the researcher must first establish the objectives or research questions that the study aims to answer. Kumar (2011) suggests the following procedure for beginners in constructing a research instrument for quantitative research:
1. State your research objectives. To begin with, all specific objectives, research questions, or
hypotheses that you aim to explore must be clearly stated and defined.
2. Ask questions about your objectives. Construct related questions for each objective,
research question, or hypothesis that you aim to explore in your study.
3. Gather the required information. Consider every question you have constructed and identify what information is required to answer it.
4. Formulate questions. Finalize all the questions that you will ask your participants to obtain information relevant to the study.

Assessing the Quality of an Instrument

Now that you have learned about the different types of instruments used in quantitative research, it is also important to ensure that the instruments you will use are of good quality, since this will determine the quality of the data you can collect for your research study. The researcher can either construct their own instrument or use a well-developed instrument from another researcher.
How can you find out whether the instrument is good or bad? This question can be answered
using the two criteria that are often used to evaluate research instruments: reliability and
validity.

Reliability
The reliability of a measure can be simply defined as the stability and consistency of an instrument across different circumstances or points in time. This holds for all types of reliability, although each type concerns a different kind of consistency. Reliability can be assessed in terms of an instrument’s internal consistency, its stability over time, and its alternate forms.

Internal Consistency
Internal consistency means that any group of items taken from a specific instrument will likely yield the same results as the entire instrument administered as a whole. It indicates how consistently the items of a research instrument measure a specific concept. The internal consistency of a measure can be obtained through the following techniques (Howitt 2014), illustrated in the sketch after this list:
● Split-half reliability. The score on half of the items in the instrument is correlated with the score on the other half of the instrument.
● Odd-even reliability. The score on the even-numbered items (e.g., items 2, 4, 6, and so on) is correlated with the score on the odd-numbered items (e.g., items 1, 3, 5, and so on) of the same instrument.
● Cronbach’s alpha. Also called alpha reliability. This is obtained by correlating every possible half of the items with every possible other half and taking the mean of those correlations. In other words, Cronbach’s alpha is, in effect, an average over all possible ways of splitting the items into two equal sets.
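
The following Python sketch illustrates these three techniques on a small set of invented Likert-scale scores. The data, the item split, and the Spearman-Brown correction applied to the half-test correlation are illustrative additions, not part of the text above.

import numpy as np

# Hypothetical data: six respondents x four Likert items scored 1-5 (invented).
scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])

def halves_reliability(items, first, second):
    # Correlate the totals of the two halves, then apply the Spearman-Brown
    # correction (an illustrative addition) to estimate full-length reliability.
    r = np.corrcoef(items[:, first].sum(axis=1),
                    items[:, second].sum(axis=1))[0, 1]
    return 2 * r / (1 + r)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(halves_reliability(scores, [0, 1], [2, 3]))  # split-half: first half vs. second half
print(halves_reliability(scores, [0, 2], [1, 3]))  # odd-even: items 1 and 3 vs. items 2 and 4
print(cronbach_alpha(scores))                      # Cronbach's alpha over all items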

Stability Over Time

An instrument’s stability over time is also known as test-retest reliability. This is simply the correlation between participants’ scores on an instrument at one point in time and their scores on the same instrument administered at a later point in time.
An example of this type of reliability is when a school administers a diagnostic exam at the beginning of the school year and then compares those scores with scores on the same diagnostic exam administered at the end of the school year. One caveat with this type of reliability is that participants may remember some of the items, since the same instrument is used both times.
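
As a minimal sketch (with invented scores), test-retest reliability reduces to a Pearson correlation between the two administrations:

import numpy as np

# Hypothetical diagnostic exam scores for the same six students (invented data).
start_of_year = np.array([62, 75, 58, 81, 70, 66])
end_of_year = np.array([65, 78, 55, 85, 72, 70])

# Test-retest reliability: correlate the two administrations of the same instrument.
r = np.corrcoef(start_of_year, end_of_year)[0, 1]
print(f"test-retest reliability: {r:.2f}")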

Alternate Forms
To cancel out the effect of participants remembering the items, as discussed above for test-retest reliability, another way to measure reliability is to use alternate forms. This type of reliability is also called parallel forms reliability. It requires the researcher to use equivalent versions of the test and to correlate the participants’ scores on the two versions. For example, a teacher may use alternate versions of a math test (e.g., Set A and Set B) that cover the same scope or content (e.g., quadratic equations).
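
The computation is the same correlation as above, with the two equivalent forms in place of the two administrations (again with invented scores):

import numpy as np

# Hypothetical scores of six students on two equivalent math tests (invented data).
set_a = np.array([14, 9, 18, 12, 16, 7])
set_b = np.array([15, 10, 17, 11, 18, 8])

print(np.corrcoef(set_a, set_b)[0, 1])  # parallel forms reliability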

Validity
A general definition of validity is the instrument’s capacity to measure what it is supposed to
measure. This means that the instrument is an accurate measure of the variable being
measured. There are three types of validity: face and content validity, criterion validity, and
construct validity (Kumar 2011).

Face and Content Validity

The face validity and content validity of an instrument both speak of its capability to measure the intended construct, evaluated by examining the questions in the instrument in relation to the research question.
Face validity is the extent to which an instrument appears to measure what it is supposed to measure. In this type of validity, the items are evaluated as to whether they have a logical relationship with the research objectives. This is also considered the weakest form of validity, since it merely requires the researcher to look at the instrument and judge, based on appearance, whether the items measure the intended construct.

Content validity is the ability of the test items to include important characteristics of the
concept that is intended to be measured. For example, you may take your first periodical test
for the school year and judge immediately whether the scope of the test is in line with the
lessons your teacher has taught you for the entire first quarter.

Criterion Validity
Criterion validity tells whether a research instrument gives the same results as other, similar instruments. There are two types of criterion validity: concurrent and predictive validity (Langdridge and Hagger-Johnson 2013).
Concurrent validity can be obtained by correlating two research instruments administered at the same time. This type of validity is similar to alternate forms reliability, wherein two similar instruments are used to evaluate the quality of the instrument.
Predictive validity refers to the ability of an instrument to predict another variable, called a criterion. The criterion should be different from the construct originally being measured. For example, a college entrance exam composed of different subtests, such as reasoning, numerical, and verbal ability, has predictive validity with respect to a student’s likelihood of succeeding at the university they are applying to.
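
The following sketch checks predictive validity under the entrance exam example by correlating exam scores with a later criterion; the scores and the choice of first-year grade average as the criterion are invented for illustration:

import numpy as np

# Hypothetical entrance exam scores and first-year grade averages (the criterion).
exam_scores = np.array([78, 85, 62, 90, 70, 83])
first_year_grades = np.array([2.8, 3.4, 2.1, 3.7, 2.5, 3.2])

# Predictive validity: does the instrument predict the later criterion?
print(np.corrcoef(exam_scores, first_year_grades)[0, 1])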

Construct Validity
Construct validity can be assessed by examining whether a specific instrument relates to other measures. This type of validity is the most sophisticated among the types discussed here. The process of obtaining construct validity involves correlating scores on the instrument being evaluated with scores on other instruments. Construct validity can be classified into two types: convergent validity and discriminant validity (Leary 2011).
Convergent validity is obtained when an instrument correlates with other, similar instruments that it is expected to correlate with. For example, a scale about self-esteem can be correlated with other instruments measuring related constructs, such as self-confidence.

Discriminant validity, on the other hand, is obtained if an instrument does not correlate with other instruments that it should not correlate with. For example, a scale about self-esteem should show little or no correlation with instruments measuring unrelated constructs, such as intelligence.
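
A minimal sketch checking convergent and discriminant validity side by side, with invented scale scores for the same six respondents:

import numpy as np

# Hypothetical totals on three scales for the same respondents (invented data).
self_esteem = np.array([30, 22, 35, 18, 27, 33])
self_confidence = np.array([28, 20, 34, 17, 26, 31])  # related construct
intelligence = np.array([110, 104, 96, 108, 99, 113])  # unrelated construct

print(np.corrcoef(self_esteem, self_confidence)[0, 1])  # convergent: expect a high correlation
print(np.corrcoef(self_esteem, intelligence)[0, 1])     # discriminant: expect a correlation near zero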
