Questionnaire (PracRes2)


RESEARCH INSTRUMENTS

“The quality of the research findings depends on the quality of the research instrument.”

Research instrument is the general term used by the researcher for a measuring device such as a survey, questionnaire, or test.

How to construct a research instrument?

1. Generate questions based on the objectives of the research study. The following are guidelines for developing the questions in your questionnaire:

 The questions should be clear, concise, and simple. Avoid lengthy and confusing questions.
 Classify the questions under each statement of your problem statement.
 Questions should be consistent with the needs of the study.
 Avoid sensitive and debatable questions.
 Avoid jargon or unfamiliar words.
2. Choose the type of questions to use in developing the statements. They can be:
 Dichotomous questions. A question with only two choices, such as “Yes/No” or “Like/Dislike”.
 Open-ended questions. A question that respondents answer in their own words, often explaining “why”.
Example: What do you like most about your school?
 Closed questions. Also called multiple-choice questions; they consist of three or more choices.
Example: What is the highest educational attainment of your mother?
___ Elementary ___ High school ___ College
 Rank-order scale questions. A type of question that asks respondents to rank the given choices or items.
Example: Rank the following based on their importance to your work as an SHS student (3 = highest and 1 = lowest).
_____ Doing homeroom activities
_____ Going to the library
_____ Using a computer
 Rating scale questions. This takes the Likert scale form, in which respondents rate statements on a scale such as Strongly Disagree to Strongly Agree (a scoring sketch follows this list).
Example statements:
I feel lazy doing homework.
I am motivated to learn because of interesting learning tools.
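
None of the following appears in the original handout; it is a minimal sketch, assuming a hypothetical 5-point agreement scale and made-up answers, of how rating-scale responses are typically coded as numbers before analysis.

# Minimal sketch: coding Likert-type answers numerically (hypothetical scale and data).
LIKERT_CODES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# One respondent's answers to the two example statements above (made up).
responses = {
    "I feel lazy doing homework": "Disagree",
    "I am motivated to learn because of interesting learning tools": "Agree",
}

# Convert each answer to its numeric code so scores can be summed or averaged.
scores = {statement: LIKERT_CODES[answer] for statement, answer in responses.items()}
print(scores)
print("total score:", sum(scores.values()))

Negatively worded statements (such as the first example) are commonly reverse-coded before totals are computed so that a higher total always means the same thing.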

Establishing the validity of the questionnaire

 Validity refers to the degree to which the instrument measures what it intends to measure. It involves collecting and analyzing data to assess the accuracy of an instrument.
Types of Validity

 Face Validity. A subjective type of assessment. This is the simplest and easiest type of validity, wherein the validator skims the surface of the instrument to form an opinion. It is often criticized as the weakest form of validity (Stephanie, 2015).
 Content Validity. It refers to the appropriateness of the content of an instrument. This type of validity most often involves experts or people familiar with the construct being measured, who judge the degree to which the items in the questionnaire match the objectives of the study.
 Criterion Validity. This type of validity measures how well a measure relates to an outcome. It can be assessed in three ways (see the sketch after this list):
 Convergent validity. Shows that the instrument is highly correlated with instruments measuring similar variables (e.g., geriatric suicide correlated with depression).
 Divergent validity. Shows that the instrument is poorly correlated with instruments that measure different variables (e.g., the correlation is low between an instrument measuring motivation and an instrument measuring self-efficacy).
 Predictive validity. The instrument correlates with a future criterion (e.g., the score on a self-efficacy test should predict the likelihood of completing a task).
 Construct Validity. Defines how well a test measures what it claims to measure. It is used to determine whether the operational definition of a construct aligns with the true theoretical meaning of the concept. There are three types of construct validity:
 Homogeneity. The instrument measures one construct only.
 Convergence. The instrument measures a construct similar to that measured by another instrument.
 Theory evidence. Behavior is similar to the theoretical propositions of the construct measured by the instrument.
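
The convergent and divergent checks above both come down to correlating total scores from different instruments. The sketch below is not from the handout; the score lists and the numpy-based Pearson correlation are illustrative assumptions only.

# Minimal sketch: convergent vs. divergent validity as correlations (made-up data).
import numpy as np

new_scale       = [12, 15, 20, 22, 25, 30, 31, 35]  # instrument being validated
similar_scale   = [14, 16, 19, 23, 27, 29, 33, 36]  # measures a similar construct
unrelated_scale = [25, 11, 30, 14, 28, 12, 27, 15]  # measures a different construct

# Pearson's r between the new instrument and each comparison instrument.
r_convergent = np.corrcoef(new_scale, similar_scale)[0, 1]
r_divergent = np.corrcoef(new_scale, unrelated_scale)[0, 1]

print(f"convergent r = {r_convergent:.2f}")  # expected to be high (near 1)
print(f"divergent  r = {r_divergent:.2f}")   # expected to be low (near 0)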

Establishing the reliability of the questionnaire

 Reliability refers to how accurate and precise the measuring instrument is; it yields consistent responses over repeated measurements. To have a reliable instrument, you need questions that yield consistent scores when asked repeatedly.
 Types of reliability tests (a computational sketch follows this list):
 Stability or test-retest reliability. This is the simplest type of reliability, wherein the same questionnaire is administered twice and the correlation between the two sets of scores is computed.
 Split-half method. Also called the equivalent or parallel forms method. This is done by administering two different sets of questions on the same topic and computing the correlation between the two sets of scores.
 Internal consistency. The instrument measures a specific concept. It is estimated from a single form of a test administered on a single occasion.
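
As a closing illustration (not part of the handout), the sketch below uses made-up scores to show the correlations described above, plus Cronbach's alpha, a common estimate of internal consistency computed from a single administration.

# Minimal sketch of the three reliability checks described above (made-up data).
import numpy as np

def pearson_r(x, y):
    """Correlation between two sets of scores from the same respondents."""
    return float(np.corrcoef(x, y)[0, 1])

# Test-retest: same questionnaire administered twice to the same respondents.
first_administration = [40, 35, 50, 45, 38, 47]
second_administration = [42, 33, 49, 46, 36, 48]
print("test-retest r:", round(pearson_r(first_administration, second_administration), 2))

# Split-half / parallel forms (as described above): two sets of questions on the
# same topic answered by the same respondents, then correlated.
form_a = [18, 22, 25, 20, 27, 24]
form_b = [19, 21, 26, 19, 28, 23]
print("parallel-forms r:", round(pearson_r(form_a, form_b), 2))

# Internal consistency via Cronbach's alpha: rows = respondents, columns = items.
items = np.array([
    [4, 5, 4, 3],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print("Cronbach's alpha:", round(alpha, 2))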
