
What Are Data?

The term data refers to the kinds of information researchers obtain on the subjects of their research.
Demographic information, such as age, gender, ethnicity, religion, and so on, is one kind of data; scores from a
commercially available or researcher-prepared test are another. Responses to the researcher's questions in an
oral interview or written replies to a survey questionnaire are other kinds. Essays written by students, grade
point averages obtained from school records, performance logs kept by coaches, anecdotal records maintained
by teachers or counselors: all constitute various kinds of data that researchers might want to collect as
part of a research investigation. An important decision for every researcher to make during the planning phase
of an investigation, therefore, is what kind(s) of data he or she intends to collect. The device (such as a pencil-and-paper test, a questionnaire, or a rating scale) the researcher uses to collect data is called an instrument.*

KEY QUESTIONS
Generally, the whole process of preparing to collect data is called instrumentation. It involves not only the
selection or design of the instruments but also the procedures and the conditions under which the instruments
will be administered. Several questions arise:
1. Where will the data be collected? This question refers to the location of the data collection. Where will it be?
in a classroom? a schoolyard? a private home? on the street?
2. When will the data be collected? This question refers to the time of collection. When is it to take
place? in the morning? afternoon? evening? over a weekend?
3. How often are the data to be collected? This question refers to the frequency of collection. How many
times are the data to be collected? only once? twice? more than twice?
4. Who is to collect the data? This question refers to the administration of the instruments. Who is to do this?
the researcher? someone selected and trained by the researcher?
These questions are important because how researchers answer them may affect the data obtained. It is
a mistake to think that researchers need only locate or develop a good instrument. The data provided by any
instrument may be affected by any or all of the preceding considerations. The most highly regarded of
instruments will provide useless data, for instance, if administered incorrectly; by someone disliked by
respondents; under noisy, inhospitable conditions; or when subjects are exhausted.
All the above questions are important for researchers to answer, therefore, before they begin to collect
the data they need. A researcher's decisions about location, time, frequency, and administration are always
affected by the kind(s) of instrument to be used. And for it to be of any value, every instrument, no matter what
kind, must allow researchers to draw accurate conclusions about the capabilities or other characteristics of the
people being studied.

VALIDITY, RELIABILITY, AND OBJECTIVITY


A frequently used (but somewhat old-fashioned) definition of a valid instrument is that it measures
what it is supposed to measure. A more accurate definition of validity revolves around the defensibility of the
inferences researchers make from the data collected through the use of an instrument. An instrument, after all, is
a device used to gather data. Researchers then use these data to make inferences about the characteristics of
certain individuals.* But to be of any use, these inferences must be correct. All researchers, therefore, want
instruments that permit them to draw warranted, or valid, conclusions about the characteristics (ability,
achievement, attitudes, and so on) of the individuals they study.
To measure math achievement, for example, a researcher needs to have some assurance that the
instrument she intends to use actually does measure such achievement. Another researcher who wants to know
what people think or how they feel about a particular topic needs assurance that the instrument used will allow
him to make accurate inferences. There are various ways to obtain such assurance, and we discuss them in
Chapter 8.
A second consideration is reliability. A reliable instrument is one that gives consistent results. If a
researcher tested the math achievement of a group of individuals at two or more different times, for example, he
or she should expect to obtain close to the same results each time. This consistency would give the researcher
confidence that the results actually represented the achievement of the individuals involved. As with validity,
a number of procedures can be used to determine the reliability of an instrument. We discuss several of them
in Chapter 8.
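The consistency described here is often quantified as a test-retest correlation: the Pearson correlation between scores from two administrations of the same instrument. The following is a minimal sketch in Python; the scores are illustrative, not taken from any actual study.

```python
def pearson_r(x, y):
    """Pearson correlation between two lists of scores (test-retest reliability)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical math achievement scores for five students, tested twice a week apart
time1 = [78, 85, 62, 90, 71]
time2 = [80, 83, 65, 88, 69]
print(round(pearson_r(time1, time2), 2))  # 0.98: students ranked very consistently
```

An r near 1.0 indicates that individuals ranked similarly on both administrations, the kind of consistency a researcher wants before trusting an instrument's results.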
A final consideration is objectivity. Objectivity refers to the absence of subjective judgments.
Whenever possible, researchers should try to eliminate subjectivity from the judgments they make about the
achievement, performance, or characteristics of subjects. Unfortunately, complete objectivity is probably never
attained. We discuss each of these concepts in much more detail in Chapter 8. In this chapter, we look at some of
the various kinds of instruments that can be (and often are) used in research and discuss how to find and select
them.
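Objectivity can be checked empirically by comparing the judgments of independent raters. The following is a minimal sketch of simple percent agreement in Python; the ratings are hypothetical.

```python
def percent_agreement(ratings_a, ratings_b):
    """Proportion of cases on which two raters assign the same rating."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical 1-5 ratings of eight students' thinking skills by two teachers
teacher_1 = [4, 3, 5, 2, 4, 3, 1, 5]
teacher_2 = [4, 3, 4, 2, 4, 3, 2, 5]
print(percent_agreement(teacher_1, teacher_2))  # 0.75
```

Low agreement signals that subjective judgment is influencing the scores; more refined indices, such as Cohen's kappa (which corrects for chance agreement), serve the same purpose.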
USABILITY
A number of practical considerations face every researcher. One of these is how easy it will be to use
any instrument he or she designs or selects. How long will it take to administer? Are the directions clear? Is it
appropriate for the ethnic or other groups to whom it will be administered? How easy is it to score? to interpret
the results? How much does it cost? Do equivalent forms exist? Have any problems been reported by others who
used it? Does evidence of its reliability and validity exist? Getting satisfactory answers to such questions can
save a researcher a lot of time and energy and can prevent a lot of headaches.

Means of Classifying Data-Collection Instruments


Instruments can be classified in a number of ways. Here are some of the most useful.

WHO PROVIDES THE INFORMATION?

In educational research, three general methods are available for
obtaining information. Researchers can get the information (1) themselves, with little or no involvement of other
people; (2) directly from the subjects of the study; or (3) from others, frequently referred to as informants, who
are knowledgeable about the subjects. Let us follow a specific example. A researcher wishes to test the
hypothesis that inquiry teaching in history classes results in higher-level thinking than does the lecture
method. The researcher may elect option 1, in which case she may observe students in the classroom,
noting the frequency of oral statements indicative of higher-level thinking. Or, she may examine existing student
records that may include test results and/or anecdotal material she considers indicative of higher-level
thinking. If she elects option 2, the researcher is likely to administer tests or request student products (essays,
problem sheets) for evidence. She may also decide to interview students using questions designed to reveal
their thinking about history (or other topics). Finally, if the researcher chooses option 3, she is likely to interview
persons (teachers, other students) or ask them to fill out rating scales in which the interviewees assess each
student's thinking skills based on their prior experience with the student. Examples of each type of method are
given below.
1. Researcher instruments
A researcher interested in learning and memory development counts the number of times it takes different
nursery school children to learn to navigate their way correctly through a maze located in their school
playground. He records his findings on a tally sheet. A researcher interested in the concept of mutual attraction
describes in ongoing field notes how the behavior of people who work together in various settings differs on
this variable.
2. Subject instruments
A researcher in an elementary school administers a weekly spelling test that requires students to spell correctly
the new words learned in class during the week. At a researcher's request, an administrator passes out a
questionnaire during a faculty meeting that asks the faculty's opinions about the new
mathematics curriculum recently instituted in the district. A researcher asks high school English teachers to
have their students keep a daily log in which they record their reactions to the plays they read each week.
3. Informant instruments
A researcher asks teachers to use a rating scale to rate each of their students on their phonic reading skills.
A researcher asks parents to keep anecdotal records describing the TV characters their preschoolers
spontaneously role-play. A researcher interviews the president of the student council about student views on the
school's disciplinary code. Her responses are recorded on an interview schedule.

WHERE DID THE INSTRUMENT COME FROM?


There are essentially two basic ways for a researcher to acquire an instrument: (1) find and administer a
previously existing instrument of some sort or (2) administer an instrument the researcher personally developed
or had developed by someone else. Developing an instrument has its problems. Primarily, it is not easy to do.
Developing a good instrument usually takes a fair amount of time and effort, not to mention a considerable
amount of skill. Selecting an already developed instrument when appropriate, therefore, is preferred. Such
instruments are usually developed by experts who possess the necessary skills. Choosing an instrument that has
already been developed takes far less time than developing a new instrument to measure the same thing.
Designing one's own instrument is time-consuming, and we do not recommend it for those without a
considerable amount of time, energy, and money to invest in the endeavor. Fortunately, a number of already
developed, useful instruments exist, and they can be located quite easily by means of a computer. The most
comprehensive listing of testing resources currently available can be found by accessing the ERIC database at
the following Web site: http://eric.ed.gov (Figure 7.1).
We typed the words "social studies instruments" in the box labeled Search Terms. This produced a list
of 1,080 documents. As this was far too many to peruse, we changed the search terms to "social studies
competency based instruments." This produced a much more manageable list of just five references, as shown in
Figure 7.2. We clicked on #1, Social Studies: Competency-Based Education Assessment Series, to obtain the
description of this instrument (Figure 7.3).
Almost any topic can be searched in this way to obtain a list of instruments that measure some aspect
of the topic. Notice that a printed copy of the instrument can be obtained from the ERIC Document
Reproduction Service.
A few years ago ERIC underwent considerable changes. The clearinghouses were closed in early 2004,
and in September 2004 a new Web site was introduced that provides users with markedly improved search
capabilities that utilize more efficient retrieval methods to access the ERIC database (1966–present). Users can
now refine their search results through the use of the ERIC thesaurus and various ERIC identifiers (check it
out!). In October 2004 free full-text nonjournal ERIC resources were introduced, including over 100,000 full-
text documents authorized for electronic ERIC distribution from January 1993 through July 2004. Beginning in
December, new bibliographic records and full-text journal and nonjournal resources after July
2004 were added. The basic ERIC Web site remains http://www.eric.ed.gov.

Note that the search engines that we described in Chapter 5 can be used to locate ERIC. What you want
to find is ERIC's test collection of more than 9,000 instruments of various types, as well as The Mental
Measurements Yearbooks. Now produced by the Buros Institute at the University of Nebraska,* the yearbooks
are published about every other year, with supplements produced between issues. Each yearbook provides
reviews of the standardized tests that have been published since the last issue. The Institute's Tests in Print is a
comprehensive bibliography of commercial tests. Unfortunately, only the references to the instruments and
reviews of them are available online; the actual instruments themselves are available only in print form.
Here are some other references you can consult that list various types of instruments:
T. E. Backer (1977). A directory of information on tests. ERIC TM Report 62-1977. Princeton, NJ: ERIC
Clearinghouse on Assessment and Evaluation, Educational Testing Service.
K. Corcoran and J. Fischer (Eds.) (1994). Measures for clinical practice (2 volumes). New York: Free
Press. ETS test collection catalog: Volume 1, Achievement tests (1992); Volume 2, Vocational tests (1988);
Volume 3, Tests for special populations (1989); Volume 4, Cognitive, aptitude, and intelligence tests (1990);
Volume 5, Attitude measures (1991); Volume 6, Affective measures and personality tests (1992). Phoenix, AZ:
Oryx Press.
E. Fabiano and N. O'Brien (1987). Testing information sources for educators. TME Report 94. Princeton, NJ:
ERIC Clearinghouse on Assessment and Evaluation, Educational Testing Service. This source updates Backer to
1987, but it is not as comprehensive.
B. A. Goldman and D. F. Mitchell (1974–1995). Directory of unpublished experimental mental measures
(6 volumes). Washington, DC: American Psychological Association.
M. Hersen and A. S. Bellack (1988). Dictionary of behavioral assessment techniques. New York: Pergamon.
J. C. Impara and B. S. Plake (1999). Mental measurements yearbook. Lincoln, NE: Buros Institute, University of
Nebraska.
S. E. Krug (published biannually). Psychware sourcebook. Austin, TX: Pro-Ed, Inc. A directory of
computer-based assessment tools, such as tests, scoring, and interpretation systems.
H. I. McCubbin and A. I. Thompson (Eds.) (1987). Family assessment inventories for research and
practice. Madison, WI: University of Wisconsin-Madison.
L. L. Murphy et al. (1999). Tests in print. Lincoln, NE: Buros Institute, University of Nebraska.
R. C. Sweetland and D. J. Keyser (Eds.) (1991). Tests: A comprehensive reference for assessments in
psychology, education, and business, 3rd ed. Kansas City, MO: Test Corporation of America.
With so many instruments now available to the research community, we recommend that, except in
unusual cases, researchers devote their energies to adapting (and/or improving) those that now exist rather than
trying to start from scratch to develop entirely new measures.

WRITTEN RESPONSE VERSUS PERFORMANCE


Another way to classify instruments is in terms of whether they require a written or marked response
from subjects or a more general evaluation of subjects' performance. Written-response instruments include
objective (e.g., multiple-choice, true-false, matching, or short-answer) tests, short-essay examinations,
questionnaires, interview schedules, rating scales, and checklists. Performance instruments include any device
designed to measure either a procedure or a product. Procedures are ways of doing things, such as mixing a
chemical solution, diagnosing a problem in an automobile, writing a letter, solving a puzzle, or setting the
margins on a typewriter. Products are the end results of procedures, such as the correct chemical solution, the
correct diagnosis of auto malfunction, or a properly typed letter. Performance instruments are designed to
see whether and how well procedures can be followed and to assess the quality of products.
Written-response instruments are generally preferred over performance instruments, since the use of
the latter is frequently quite time-consuming and often requires equipment or other resources that are not readily
available. A large amount of time would be required to have even a fairly small sample of students (imagine 35!)
complete the steps involved in a high school science experiment.
