Introduction to Research Methods: A Hands-On Approach, 1st Edition (Pajo): Test Bank
1. When we ask questions that really do not capture what we are trying to measure, we
create ______.
a. measurement error
b. random error
c. systematic error
d. inter-observer error
Ans: A
Cognitive Domain: Knowledge
Answer Location: Defining Measurement Error
Difficulty Level: Easy
2. When some respondents do not really understand a question but answer it based on
what they think it is asking, what type of measurement error may result?
a. random error
b. systematic error
c. concurrent error
d. measurement error
e. reliability error
Ans: A
Cognitive Domain: Application
Answer Location: Types of Measurement Errors
Difficulty Level: Medium
3. ______ may be inconsistent, with no real pattern, so our measurement may cause
participants to understand and answer questions differently.
a. Face validity
b. Reliability
c. Random error
d. Predictive validity
Ans: C
Cognitive Domain: Knowledge
Answer Location: Types of Measurement Errors
Difficulty Level: Easy
4. Dr. Smith is interested in studying the level of religiosity of adults aged 18 and older
in Pakistan. She conducted a quantitative study using a survey and asked the question,
"How often do you go to church?" After she left the field and started her data entry, she
noticed that more than 80% of the respondents had responded that they did not go to
church. It is possible that the wording of Dr. Smith’s question may have caused what?
a. random error
b. systematic error
c. measurement error
d. specificity error
Ans: B
Cognitive Domain: Application
Answer Location: Types of Measurement Errors
Difficulty Level: Medium
6. Which of the following distinguishes random error from other types of measurement
errors?
a. Our measurements are open to various interpretations by respondents.
b. Questions may be clear and specific about what we are measuring.
c. Careful consideration is given to the questions used.
d. Large numbers of respondents provide the wrong answers to more than five of the
questions asked.
Ans: A
Cognitive Domain: Comprehension
Answer Location: Types of Measurement Errors
Difficulty Level: Medium
Instructor Resource
Pajo, Introduction to Research Methods
SAGE Publishing, 2018
7. ______ indicates that our measure is not accurately measuring our concept but is
systematically perceived as something else by respondents.
a. Research error
b. Measurement error
c. Systematic error
d. Random error
Ans: C
Cognitive Domain: Knowledge
Answer Location: Types of Measurement Errors
Difficulty Level: Easy
9. When you are worried about whether you will get the same results if you repeat your
experiment in the future, you are worried about ______.
a. internal validity
b. external validity
c. reliability
d. content validity
Ans: C
Cognitive Domain: Application
Answer Location: Reliability
Difficulty Level: Medium
Dr. Wilson and his research team have come up with a new measure to test customer
satisfaction with their cell phone carriers. They claim that someone who is very satisfied
will have a score of 20, while someone who is very dissatisfied will have a score of 5
with responses on a scale of 1 (very dissatisfied) to 4 (very satisfied) on a 5-question
scale. To see how well the new measure performed, they used an older measure with 10
respondents in a pilot test. One week later, they used their new measure with the same
group of 10 respondents. The team found that those who scored low on the old test also
scored low on the new test. Further, they had the 10 respondents retake the new test
one month later. They found that a respondent’s level of satisfaction did not change
over the three times the test was taken. After claiming concurrent validity, the research
team used a number of groups of participants to test both tools multiple times over the
span of one year. Their results over time still showed that the old test and the new test
predicted basically the same way.
11. By comparing the new measurement with an old one, the research team has shown
that the new measure has ______.
a. internal validity
b. concurrent validity
c. construct validity
d. face validity
Ans: B
Cognitive Domain: Application
Answer Location: Criterion Validity, Concurrent Validity, and Predictive Validity
Difficulty Level: Medium
12. By comparing the new measurement with an old one over time using multiple
groups of different participants, the research team has shown that the new measure has
both ______ and ______ validities.
a. internal, face
b. criterion, content
c. construct, concurrent
d. concurrent, predictive
Ans: D
Cognitive Domain: Analysis
Answer Location: Criterion Validity, Concurrent Validity, and Predictive Validity
Difficulty Level: Hard
13. Which of the following refers to the level of clarity in the research instrument?
a. accuracy
b. test-retest reliability
c. reliability
d. inter-rater reliability
Ans: C
Cognitive Domain: Comprehension
Answer Location: Reliability
Difficulty Level: Medium
18. In inter-rater reliability, how many researchers are present during data collection?
a. only one researcher
b. more than one
c. more than five
d. six minimum
Ans: B
Cognitive Domain: Knowledge
Answer Location: Inter-Observer or Inter-Rater Reliability
Difficulty Level: Easy
19. ______ ensures that our data collection tool consistently yields the same results
regardless of the passage of time.
a. Test-retest reliability
b. Inter-rater reliability
c. Reliability
d. Random error
Ans: A
Cognitive Domain: Comprehension
Answer Location: Test-Retest Reliability
Difficulty Level: Medium
23. ______ refers to the extent to which a measurement tool captures all aspects of the
construct being tested or measured.
a. Content validity
b. Construct validity
c. Predictive validity
d. Validity
Ans: A
Cognitive Domain: Application
Answer Location: Content Validity
Difficulty Level: Medium
24. Dr. Gates wishes to measure his students’ performance in statistics during the
spring semester. To do this, he includes questions from all lectures so that the material
covered during the semester may be broadly assessed. It is clear that
Dr. Gates is interested in establishing what type of validity?
a. content validity
b. face validity
c. construct validity
d. criterion validity
Ans: A
Cognitive Domain: Application
Answer Location: Content Validity
Difficulty Level: Medium
25. ______ is necessary when we seek to create a new measurement tool for a
construct that already has a measurement tool in place.
a. Construct validity
b. Face validity
c. Criterion validity
d. Content validity
Ans: C
Cognitive Domain: Application
Answer Location: Criterion Validity, Concurrent Validity, and Predictive Validity
Difficulty Level: Medium
True/False
1. Measurement error occurs because we are asking questions that do not capture what
we truly want to measure.
Ans: T
Cognitive Domain: Comprehension
Answer Location: Defining Measurement Error
Difficulty Level: Medium
6. Being consistent in our measuring does not mean that our results are valid.
Ans: T
Cognitive Domain: Knowledge
Answer Location: Validity
Difficulty Level: Easy
7. Face validity is the degree of embodying and understanding the new culture for
immigrants.
Ans: F
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium
8. Face validity is often used because we can always confirm that a measurement tool
is valid.
Ans: F
Cognitive Domain: Comprehension
Answer Location: Face Validity
Difficulty Level: Medium
9. Content validity helps to ensure that our measure truly includes all the aspects that
make up intelligence.
Ans: T
Cognitive Domain: Knowledge
Answer Location: Content Validity
Difficulty Level: Easy
12. Validity refers to the link between the conceptual and operational definitions.
Ans: T
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium
13. The test-retest method for assessing reliability assumes that the phenomena under
study do not change.
Ans: T
Cognitive Domain: Comprehension
Answer Location: Test-Retest Reliability
Difficulty Level: Medium
Essay
3. Though measurement tools may already exist, some researchers may decide to
create new ones. In doing so, researchers must be concerned with criterion validity.
Discuss criterion validity and the role of concurrent validity and predictive validity in
achieving criterion validity. Use an example to show you understand the concepts.
Ans: Criterion validity refers to those cases in which we are not satisfied with the way a
construct has been measured. The existing measurement tool may have some flaws, or it
may not apply to our specific population, so we decide to create a new measurement for
a construct that has already been measured. In that case, we need to be clear that our
new measurement tool will be compared to the old measurement tool, so we know it has
high criterion validity. Concurrent validity requires that the new test work as well as
the old test; therefore, we must compare the new test with the established measurement
tool and examine whether the results match. Predictive validity is achieved after
establishing concurrent validity: the new test is then tested several times with
different population samples. If we can claim concurrent validity with one sample, we
need a number of groups of participants to test both tools multiple times. For example,
suppose we create a new mathematics test to replace an established one. If our results
over time still show that the new test predicts performance in the same way as the old
test, we have established high predictive validity.
Cognitive Domain: Application
Answer Location: Criterion Validity, Concurrent Validity, and Predictive Validity
Difficulty Level: Medium
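Instructor note: the comparison procedure described in the answer above can be
illustrated with a short, hypothetical calculation that is not part of the text.
Correlating the same respondents' scores on the old and new measures is one common way
to operationalize "matching results"; the score lists, the pearson_r helper, and the
strength of the correlation below are all invented for illustration.

```python
# Hypothetical illustration of a concurrent validity check: scores from the
# same 10 respondents on an established (old) measure and a new measure.
# A strong positive correlation suggests the two tools measure alike.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented pilot data on a 5-to-20 satisfaction scale (like Dr. Wilson's).
old_scores = [6, 8, 9, 11, 12, 14, 15, 17, 18, 19]
new_scores = [5, 9, 10, 10, 13, 13, 16, 16, 19, 20]

r = pearson_r(old_scores, new_scores)
print(f"Concurrent validity check: r = {r:.2f}")
```

Repeating the same comparison with new samples at later points in time, as the essay
answer describes, would extend this concurrent check toward a predictive validity claim.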
4. What are the differences between face, content, and criterion validity? How does
each form of validity add to a measure’s overall measurement validity?
Ans: Face validity means that a researcher, or a group of researchers, has decided to
classify the measurement instrument or tool as an accurate measure of the construct in
question. Rather than questioning or testing the measurement tool used to collect data,
the researcher subjectively considers it an accurate form of measurement and interprets
the results or follows up with other questions. Content validity refers to the extent to
which a measurement tool captures all aspects of the construct being tested or measured;
this helps to ensure that our measure truly includes all the aspects that make up, say,
intelligence or mathematics ability. Criterion validity refers to those cases in which we
are not satisfied with the way a construct has been measured: the existing measurement
tool may have some flaws, or it may not apply to our specific population, so we decide to
create a new measurement for a construct that has already been measured. Each type of
validity is used in different circumstances, all with the aim of ensuring that, whatever
is done during the measurement process, the study remains valid and the findings will be
acceptable.
Cognitive Domain: Application
Answer Locations: Face Validity; Content Validity; Criterion Validity, Concurrent Validity,
and Predictive Validity
Difficulty Level: Medium