
Instrumentation in Research

Dr. Ayaz Muhammad Khan
Professor
University of Education, Lahore, Pakistan
Email: ayaz@ue.edu.pk
Burning Questions
 What is the difference between an instrument and instrumentation?
 What sort of phenomena do we measure through instruments: objective or subjective?
 What is the difference between a Likert scale and a Likert questionnaire?
 What is the role of theory in instrument development?
 What is a theory?
What Is an Instrument
 Instrument is the generic term that researchers use for a
measurement device (survey, test, questionnaire, etc.). To help
distinguish between instrument and instrumentation, consider
that the instrument is the device and instrumentation is the
course of action (the process of developing, testing, and using
the device).
 Instruments fall into two broad categories, researcher-
completed and subject-completed, distinguished by those
instruments that researchers administer versus those that are
completed by participants. Researchers choose which type of
instrument, or instruments, to use based on the research
question.
Instrument construction is a learning process.

An instrument is a mechanism for measuring phenomena, which is used to gather and record information for assessment, decision making, and ultimately understanding. An instrument such as a questionnaire is typically used to obtain factual information, support observations, or assess attitudes and opinions.
Objective Measurement of Subjective Phenomena
In the social sciences most instruments are of the paper-and-
pencil variety, meaning that the individual completing the
instrument is expected to record information on a form.
Instrument Versus Tool
Different Instruments

Researcher-Completed Instruments:
 Rating scales
 Interview schedules
 Flowcharts
 Tally sheets
 Performance checklists
 Time-and-motion logs
 Observation forms

Subject-Completed Instruments:
 Questionnaires
 Self-checklists
 Attitude scales
 Personality inventories
 Achievement tests
 Projective devices
 Sociometric devices
Instrument Development
 Pre-Field / Instrument Development
 Field / Administration
 Post-Field / Instrument Evaluation
Components of an Instrument
Typically, there are six parts to an instrument, and all or
most will be included regardless of the intended
purpose or the process used to collect the data.
 Title
 Introduction
 Directions or instructions
 Items & response scale
 Demographics
 Closing section
Selecting an Appropriate Instrument
As the developer of an instrument, you have some latitude
as to the type of instrument to develop, such as a
questionnaire, checklist, or inventory; the mode of
administration to use; and ultimately the item format(s) to
use, such as open-ended questions, rating scales, or
ranking items. The decision about the type of instrument
and item format(s) will typically be based on the following
considerations.
 The purpose of the study
 The research design
 Object of measurement
 Data collection methodology
 Resources
Instrument Construction Process
 Articulate the purpose and focus of the study
 Formulate the items/constructs to be measured
 Structure and format the items
 Organize and format the instrument
 Administer and revise the instrument
Steps in Instrument Construction and Evaluation
 A theory and accompanying concepts (or a taxonomy) should be selected first. Many instruments developed in the social sciences and education have no conceptual or theoretical basis to serve as a guide in the selection of items and the construction of scales. The concepts to be measured should be defined as carefully and precisely as possible and considered in relation to the specific population to be studied.
 Need an item pool: Next, an item pool should be written. This will be accomplished based on the theoretical and conceptual framework provided in step one. Statements or items will be created to which subjects may respond and which will serve as stimuli for the content or subject matter to be measured. One may obtain items from a variety of sources: a review of the theoretical and research literature, other instruments, documents, and observation of the phenomena, behavioral traits, or attitudes to be studied.
In writing items one should be sensitive to the following:
 Frame of reference
 Content and population specificity
 Descriptive vs. evaluative items
 Items that are clear, concise, and as unambiguous as possible
 Simple as opposed to compound statements
 Frame of reference
Smith et al. (1969) provided research evidence indicating that perceptual selectivity in human judgment is dramatic. When two individuals with different frames of reference are exposed to the same object or stimulus, they will select different aspects of it and provide different summary evaluations of the situation.
Van de Ven and Ferry (1980) have suggested that frames of reference are the cognitive filters (perceptual screens) a person uses in describing or evaluating a situation. They indicate that, for the purposes of instrument construction and evaluation, there are three issues to be examined relative to a respondent's frame of reference:
 The characteristics of the stimulus a person is exposed to
 The systematic individual differences and biases that respondents bring to a stimulus
 The unsystematic individual differences and biases that respondents bring to a stimulus as a result of their prior experience, predilections, expectations, and personalities
 Content and population specificity
Content and population specificity suggests that item wording should be specific to the particular population or selection of subjects. A measure created in one test situation may not be appropriate for another. Items should be written to be situation-specific.
 Descriptive versus evaluative items
Descriptive and evaluative items should be distinguished during test construction. Descriptive items are positive and value-free, focusing on the factual characteristics and behaviors that actually exist. Evaluative items are normative and value-laden, and ask a respondent to provide an opinion about the strengths and weaknesses, likes or dislikes, of events or behaviors. Van de Ven and Ferry indicate that evaluative statements are more susceptible to variation in the frames of reference and perceptions of respondents than descriptive items.
 Number of scale points
The number of scale points on the answer sheet may also influence response bias. If too few scale points are used, the answer scale is a coarse one and much information may be lost, as the sketch below illustrates.
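A minimal sketch, on simulated ratings (not from the lecture), of how a coarse answer scale discards information: collapsing a 7-point scale into 3 points loses both distinct response levels and variance.

```python
import numpy as np

rng = np.random.default_rng(0)
seven_point = rng.integers(1, 8, size=200)      # hypothetical responses on a 1-7 scale
three_point = np.digitize(seven_point, [3, 6])  # collapse: 1-2 -> 0, 3-5 -> 1, 6-7 -> 2

print("distinct levels:", np.unique(seven_point).size, "vs", np.unique(three_point).size)
print("variance:", round(float(seven_point.var()), 2), "vs", round(float(three_point.var()), 2))
```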
 A pilot study
A pilot study should be conducted with the statements retained from step three, for the purpose of performing some initial item analysis. The items can be appropriately "packaged" and administered to a small sample of subjects similar to those to be used in the final study (subjects for whom the instrument was designed). Generally, 25-40 subjects should be sufficient for a pilot study. Reliability estimates can be obtained and several item analyses can be performed with the data, as sketched below.
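A minimal sketch, with simulated pilot data, of one common reliability estimate; the lecture does not prescribe a particular coefficient, so Cronbach's alpha is used here purely as an illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of scale scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(1)
pilot = rng.integers(1, 6, size=(30, 10)).astype(float)  # 30 pilot subjects, 10 Likert items
print(f"alpha = {cronbach_alpha(pilot):.2f}")
```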
 The final study: The final study will then be conducted. In the case of most dissertation research, the instrument developed in steps one through four will be administered to the final study sample. Depending on the concepts to be measured, 40-50 items may survive the initial screening.
 One need not be too critical in rejecting statements during steps three and four, because much of this work will be repeated with the final study sample.
 This provides for a refinement of the instrument, repeating the statistical procedures with a much larger selection of subjects. Also, some statistical procedures, such as factor analysis, can be used with a larger study sample but are not possible with the smaller pilot group.
 It is important to recognize that no instrument should be considered the final or completed version; additional and continued refinement of any instrument is generally useful and necessary.
Constructs Testimonial and Measurement
A construct is an image or idea specifically invented for a given research and/or theory-building purpose. We build a construct by combining simpler concepts, especially when the idea or image we intend to convey is not directly subject to observation. A researcher can use the following distributions when listing a construct in an instrument (a sketch follows the list):
○ Continuous distribution
○ Dichotomous distribution
○ Polytomous distribution
○ Ordered categorical scale
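A minimal sketch, with hypothetical items, of how each of these four response formats might be represented in an instrument specification.

```python
# Each entry is a hypothetical item illustrating one distribution type.
continuous = {"item": "Age in years", "kind": "continuous"}         # any value in a range
dichotomous = {"item": "Do you smoke?", "options": ["Yes", "No"]}   # two categories
polytomous = {"item": "Marital status",                             # unordered categories
              "options": ["Single", "Married", "Divorced", "Widowed"]}
ordered = {"item": "I enjoy research",                              # ordered categories
           "options": ["Strongly disagree", "Disagree", "Neutral",
                       "Agree", "Strongly agree"]}

for fmt in (continuous, dichotomous, polytomous, ordered):
    print(fmt["item"], "->", fmt.get("options", "numeric response"))
```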
 Context and sensitivity effects: the questionnaire should be supported by a presentation of the survey objectives and a clear confidentiality assurance.
 Memory effects: memory aids, retrieval cues, and appropriate reference periods should be used.
 Hypothetical questions: questions of this type should be used with caution, particularly when they concern opinions and attitudes.
 Response categories: there should be no overlap among the response categories, and they should cover all possible answers.
 Order of response options: if possible, options should be presented in a meaningful order.
 Use of standard classification systems: when available, standard definitions for concepts, variables, and classifications should be applied.
 Language: simple language should be used for questions and instructions.
 Double-barrelled questions: they should not be used.
 Leading and unbalanced questions: questions may easily become leading or appealing, or may contain persuasive definitions; wording should therefore be designed with caution so as not to construct leading questions. Attitude questions should be balanced: the question should reflect both sides of an opinion.
 Sequencing: the questionnaire flow should follow a logical stream.
 Order of questions: the questions must be scrutinized with respect to their context.
 Length: the questionnaire length should be balanced, considering the response burden, the mode of data collection, and the fulfillment of the survey goals.
Questionnaire as Research Instrument
When developing a strategy, the entire cycle of questionnaire design and testing has to be covered. Five main steps have to be distinguished (see figure), and all of them must be covered by the strategy:
• Conceptualization
• Questionnaire design
• Questionnaire testing
• Revision
Conceptual scheme: entity/relationship scheme
Steps in Questionnaire Construction
 Reviewing the literature
 Deciding what information should be sought
 Knowing respondents
 Constructing items
 Re-examining and revising items
 Pretesting the questionnaire
 Editing the questionnaire and specifying procedures
 A. Reviewing the Literature
Before constructing the questionnaire, the researcher must review all the related literature to see if an already-prepared questionnaire is available that is similar to his/her topic of study. This will save the time and effort required to construct an entirely new questionnaire. Changes can be made as the study demands.
 B. Deciding what information should be sought:
List the specific objectives to be achieved by the questionnaire. The methods of data analysis that will be applied to the returned questionnaires should also be kept in mind.
 C. Knowing respondents:
The researcher must know his target population in relation to occupation, special sensitivities, education, ethnicity, language, etc.
 D. Constructing Questionnaire Items:
Each item on the questionnaire must be developed to measure a specific aspect of the objectives or hypotheses. The researcher must be able to explain in detail why a certain question is being asked and how the responses will be analyzed.
Making "dummy tables" that show how the item-by-item results of the study will be analyzed is a good idea (Borg & Gall, 1983); a sketch of such a table follows.
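A minimal sketch, using pandas, of one such dummy table; the variables (gender crossed with a satisfaction item) are hypothetical, invented for illustration.

```python
import pandas as pd

# The empty shell of an item-by-item analysis, drawn up before any data exist.
dummy = pd.DataFrame(
    0,  # cells stay at zero until real responses are tallied
    index=pd.Index(["Male", "Female"], name="Gender"),
    columns=pd.Index(["Agree", "Neutral", "Disagree"], name="Q1 response"),
)
print(dummy)
```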
Rules for constructing items
 Both open-ended and closed-ended questions can be used; however, closed-ended questions are preferred.
 Clarity of all the items is necessary to obtain valid results.
 Short items are preferable to long items, as they are easier to understand.
 Negative items should be avoided, as they are often misread by many respondents.
 Avoid double-barrelled items, which require the subject to respond to two separate ideas with a single answer.
 Avoid using technical terms, jargon, or big words that some respondents may not understand.
 When a general and a related specific question are to be asked together, it is preferable to ask the general question first; asking the specific question first narrows the focus of the general question.
 Avoid biased or leading questions (Borg & Gall, 1983).
Scaling questions
Scaling questions are special types of closed-ended questions. They include the following categories of questions:
 Behavioral/attitudinal questions
 Agree/disagree questions
 Preference questions
 Ranking questions
 The questions can be labeled or unlabeled.
E. Re-examining and revising the questions:
After formulating the questionnaire, revise it. Revision involves:
 supplementing one's own effort with the critical opinions of experts (who should represent different approaches and belong to different social backgrounds)
 review by representatives of different groups, such as minorities, racial groups, women, etc.
 scrutinizing the questionnaire for any technical defects (Selltiz et al., 1976)
F. Pretesting questionnaires:
According to Selltiz et al. (1976), a pretest helps in identifying and solving unforeseen problems in the administration of the questionnaire, such as phrasing, the sequence of questions, or its length, and in identifying the need for additional questions or the elimination of undesired ones.
After making the necessary changes, a second pretest should be conducted. Sometimes, in fact, a series of three, four, or even more revisions and pretests is required (Selltiz et al., 1976).
Tests as Research Instrument
A test is a means of measuring the knowledge, skill, feeling, intelligence, or aptitude of an individual or group. Tests produce numerical scores that can be used to identify, classify, or evaluate test takers (Gay, 1996).
 Reviewing literature
 Define objectives
 Define target population
 Review related measures
 Develop an item pool
 Prepare a prototype
 Evaluate the prototype
 Revise measure
 Reviewing literature:
Before developing a new test, review the available literature in order to find a test already available that can be used for the study, as test development is an extremely complex process and requires training.
 Define objectives:
Give careful thought to the specific outcomes the measure is to achieve. For example, the construction of achievement tests requires a careful description of the knowledge or skill that the test should measure.
 Define target population:
A definition of the target population is required, as the characteristics of the target population must be considered in many of the decisions on such questions as item type, reading level, test length, and type of directions.
 Review related measures: A careful review of tests that measure similar characteristics provides ideas on methods for establishing validity, the application of different types of items, expected levels of validity and reliability, and possible formats.
 Develop an item pool: Before starting to write test items, the researcher needs to make decisions regarding the type of items that should be used and the amount of emphasis that should be given to each aspect of the characteristic or content area to be measured.
 Prepare a prototype: The first form of the test puts into effect the earlier decisions made regarding format, item type, etc., through tryouts.
 Evaluate the prototype: Obtain a critical review by at least three experts in test construction. This review identifies needed changes, and the prototype is then field-tested with a sample from the target population.
 After collecting data, item analysis is conducted to identify good and bad items. Analysis and interpretation depend upon the nature of the test. For example, in developing a norm-referenced achievement test, item analysis is usually concerned with the difficulty level of each item and its ability to discriminate between good and poor students, as in the sketch below.
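A minimal sketch of this classic analysis on simulated right/wrong data, using upper and lower extreme groups; the 27% convention is standard item-analysis practice, not something stated in the lecture.

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.integers(0, 2, size=(100, 20))  # 100 examinees x 20 items, 1 = correct

order = np.argsort(scores.sum(axis=1))       # examinees ranked by total score
n = int(0.27 * scores.shape[0])              # size of each extreme group (27%)
lower, upper = scores[order[:n]], scores[order[-n:]]

difficulty = scores.mean(axis=0)                           # p: proportion answering correctly
discrimination = upper.mean(axis=0) - lower.mean(axis=0)   # D = p_upper - p_lower
print(f"item 1: difficulty = {difficulty[0]:.2f}, discrimination = {discrimination[0]:.2f}")
```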
 Revise measure: On the basis of the field-test experience and the results of item analysis, the prototype is revised and field-tested again. This cycle may be repeated several times in order to develop an effective test. Collect data on test reliability and validity.
Instrument Post-Field Evaluation

Lecture Part Two: Post-Field Evaluation of Research Instruments
By Dr. Ayaz Muhammad Khan
Basic Measurement Procedures
 Item difficulty index
 Several item discrimination indices
 Critical ratio
 Corrected item-total correlation
 Item means and variances
 Item frequency distributions
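A minimal sketch, on simulated Likert responses, of one procedure from this list: the corrected item-total correlation, which correlates each item with the total score computed without that item.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.integers(1, 6, size=(60, 8)).astype(float)  # 60 respondents x 8 Likert items

total = data.sum(axis=1)
for i in range(data.shape[1]):
    rest = total - data[:, i]                  # "corrected": total with item i removed
    r = np.corrcoef(data[:, i], rest)[0, 1]
    print(f"item {i + 1}: corrected item-total r = {r:+.2f}")
```

Low or negative values flag items that do not hang together with the rest of the scale and are candidates for revision or removal.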
 Content validation
Content validation should be used with the item pool. (Even though this procedure is mentioned only once, it is not uncommon to repeat this step several times in the process of refining the item pool.) Determining content validity is sometimes called subject-matter validation; it is primarily a judgmental process. Generally, a panel of five to seven judges will be selected who are experts in the area of study and familiar with the concepts to be measured. At this stage the researcher is attempting to accomplish two objectives:
1. To determine which items from the item pool are most representative of the total universe of items that could be selected dealing with the subject matter to be measured, and to determine whether voids in the statements exist (new items should then be added) and whether items should be discarded or rewritten.
2. If the theory to be measured is multi-conceptual (most social science theories are), to determine which items logically cluster with, or deal with, which sub-concepts. To accomplish this, each judge should work independently, rating each statement in the item pool and categorizing it under the appropriate concept or sub-concept. Usually a criterion of at least 80% agreement among the judges is established for inclusion of an item in the final or revised pool; the sketch below applies this rule.
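A minimal sketch of the 80% agreement rule; the judges, concepts, and ratings are hypothetical, invented for illustration.

```python
from collections import Counter

# Each judge independently categorizes one item under a concept:
ratings = {"judge1": "motivation", "judge2": "motivation",
           "judge3": "motivation", "judge4": "motivation",
           "judge5": "attitude"}

concept, votes = Counter(ratings.values()).most_common(1)[0]  # modal concept
agreement = votes / len(ratings)
verdict = "retain" if agreement >= 0.80 else "revise or discard"
print(f"best concept '{concept}': {agreement:.0%} agreement -> {verdict}")
```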
