
Multiple Choice

1. _______ is the extent to which a score from a test is stable and free from error.
a. Reliability
b. Validity
c. Psychometrics
d. Stratification
ANSWER: a

2. There are three major ways to determine whether a test is reliable. With the _______ method, several people each take
the same test twice.
a. repeat reliability
b. test-retest reliability
c. dual reliability
d. alternate forms reliability
ANSWER: b

3. A student takes the Scholastic Aptitude Test (SAT) as a requirement to get into college. Her score on the test is 600.
Three weeks later, she is asked to take the identical test again. This time she scores 500. This inconsistency in scores is an
issue of:
a. internal reliability
b. parallel forms reliability
c. test-retest reliability
d. form stability
ANSWER: c
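Test-retest reliability is typically estimated as the Pearson correlation between the two administrations. A minimal sketch, with illustrative scores (not from the source):

```python
# Test-retest reliability: Pearson correlation between two
# administrations of the same test to the same people.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Hypothetical SAT-like scores from two administrations, three weeks apart
time1 = [600, 500, 400, 300]
time2 = [550, 500, 450, 300]
r_tt = pearson_r(time1, time2)  # high r -> good temporal stability
```

A low correlation here, as in the 600-versus-500 scenario above, would indicate poor temporal stability.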

4. Test-retest reliability taps ____ stability.
a. form
b. temporal
c. item
d. score
ANSWER: b

5. The process of counterbalancing test-taking order is used in which method of estimating reliability?
a. Test-retest reliability
b. Alternate-forms reliability
c. Internal reliability
d. All of the above use counterbalancing
ANSWER: b

6. When the two scores from alternate forms of a test are correlated and found to be similar, the test is said to have
________.
a. temporal stability
b. internal consistency
c. form stability
d. version consistency
ANSWER: c

Cengage Learning Testing, Powered by Cognero Page 1


7. In general, the longer the test, the higher its _______.
a. internal validity
b. external validity
c. internal reliability
d. external reliability
ANSWER: c

8. Asking whether all of the items on a test measure the same thing, or whether they measure different constructs, relates to ________.
a. internal validity
b. test-retest reliability
c. scorer reliability
d. item homogeneity
ANSWER: d

9. When computing internal reliability, _____ is used for dichotomous items and _____ is used for interval and ratio
items.
a. coefficient alpha / K-R 20
b. coefficient alpha / Spearman-Brown
c. K-R 20 / coefficient alpha
d. Spearman-Brown / coefficient alpha
ANSWER: c
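Coefficient alpha is k/(k-1) × (1 − Σ item variances / variance of total scores); K-R 20 is the special case for dichotomous (0/1) items. A minimal sketch with made-up responses:

```python
# Coefficient alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).
# Applied to dichotomous (0/1) items, this is equivalent to K-R 20.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def coefficient_alpha(rows):
    """rows: one list of item scores per test taker."""
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses: 4 test takers x 3 dichotomous items
answers = [[1, 1, 1],
           [1, 1, 0],
           [1, 0, 0],
           [0, 0, 0]]
alpha = coefficient_alpha(answers)  # 0.75 for this data
```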

10. An industrial/organizational psychologist correlates the responses to the even numbered items on a selection test with
the responses to the odd-numbered items from the same test. Which of the following answers BEST describes the concern
of the psychologist?
a. Parallel form reliability
b. Split-half reliability
c. Test-retest reliability
d. Scorer reliability
ANSWER: b

11. The Spearman-Brown prophecy formula is used to adjust the correlation in which of the following reliability estimate
methods?
a. Test-retest method
b. Alternate form method
c. Counterbalancing method
d. Split-half method
ANSWER: d
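Because each half of a split-half correlation is only half the test's length, the correlation understates the full test's reliability; the Spearman-Brown prophecy formula corrects it upward. A sketch with an illustrative half-test correlation:

```python
# Spearman-Brown prophecy formula: adjusts a split-half (odd vs. even)
# correlation, since each half is only half the test's length.

def spearman_brown(r_half, factor=2):
    """General form: estimated reliability if the test were
    `factor` times as long as the part that produced r_half."""
    return factor * r_half / (1 + (factor - 1) * r_half)

r_half = 0.60                    # correlation between odd and even halves
r_full = spearman_brown(r_half)  # 0.75: estimate for the full-length test
```

The general form also shows why, as noted above, longer tests tend to have higher internal reliability: tripling a test with r = 0.60 yields an estimate of about 0.82.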

12. _______ is an issue especially in projective or subjective tests in which there is no one correct answer.
a. Internal validity
b. Test-retest reliability
c. Scorer reliability
d. Item homogeneity
ANSWER: c

13. If we use an infant's weight to predict that infant's subsequent performance in college, the weight measure is probably:
a. reliable and valid
b. reliable but not valid
c. not reliable but valid
d. not reliable and not valid
ANSWER: b

14. The extent to which tests or test items sample what they are supposed to measure is related to the measure's _______
validity.
a. content
b. construct
c. criterion
d. concurrent
ANSWER: a

15. The extent to which tests or test items sample the content that they are supposed to measure refers MOST specifically
to:
a. face validity
b. content validity
c. construct validity
d. criterion validity
ANSWER: b

16. In industry, _____ is used to establish the content validity of selection tests or test batteries.
a. job analysis
b. correlational analysis
c. job evaluation
d. an experiment
ANSWER: a

17. _______ validity is a measure which refers to the extent to which a test score is related to some measure of job
performance.
a. Content
b. Construct
c. Criterion
d. Concurrent
ANSWER: c

18. With a __________ validity design, the test is administered to a group of employees who are already on the job.
a. concurrent
b. predictive
c. content
d. face
ANSWER: a

19. With a _______ validity design, the test is administered to a group of job applicants who are going to be hired. The
test scores, which are not used in the actual hiring decision, are then compared to a future measure of job performance.
a. concurrent
b. predictive
c. content
d. face
ANSWER: b

20. The _______ of performance scores makes obtaining a significant validity coefficient more difficult with a concurrent
validity design.
a. variety
b. restricted range
c. heterogeneous nature
d. cost
ANSWER: b

21. The extent to which a test found valid for a job in one location is valid for the same job in another location refers to
the concept of _______.
a. the cross over effect
b. temporal stability
c. validity generalization
d. known group validity
ANSWER: c

22. ______ is the basis for validity generalization.
a. Face validity
b. Known-group validity
c. Meta-analysis
d. Utility
ANSWER: c

23. If a small police department uses a cognitive ability test because a meta-analysis indicated cognitive ability is the best
predictor of performance in the police academy, it is:
a. breaking the law
b. using the Taylor-Russell method
c. going to see a reduction in quality
d. using validity generalization
ANSWER: d

24. _______ is based on the assumption that tests that predict a particular component of one job should predict
performance on the same component for another job.
a. Content validity
b. Validity generalization
c. Synthetic validity
d. Face validity
ANSWER: c

25. Construct validity is usually determined by correlating scores on a test with _______.
a. performance on the job
b. the items within the test
c. scores from similar tests
d. correlational analysis is not used
ANSWER: c

26. A researcher correlates scores on a test (Test 1) with scores on other tests (Test 2 and Test 3). The analysis
demonstrates that the scores on Test 1 correlate highly with scores on Test 2 but do not correlate with scores on Test 3.
This type of analysis is used to determine:
a. content validity
b. construct validity
c. concurrent validity
d. predictive validity
ANSWER: b

27. If a police applicant is asked questions about her favorite hobbies and religious beliefs, she may feel the test is not
valid. In this case, her impression demonstrates the importance of ______ validity.
a. construct
b. criterion
c. concurrent
d. face
ANSWER: d

28. _______ validity refers to the extent to which a test appears to be valid.
a. Content
b. Criterion
c. Construct
d. Face
ANSWER: d

29. If test takers do not believe that items on a test measure what they are supposed to measure then the test probably
lacks:
a. face validity
b. concurrent validity
c. criterion validity
d. reliability
ANSWER: a

30. Barnum statements are most associated with:
a. known-group validity
b. face validity
c. concurrent validity
d. construct validity
ANSWER: b

31. Setting your clock ten minutes fast will affect the _______ of the clock.
a. reliability
b. validity
c. psychometrics
d. speed
ANSWER: b

32. Which of the following sources contain reliability and validity information on various tests?
a. Radford Guide to Reliability
b. California Index
c. Validity Studies
d. Mental Measurements Yearbook
ANSWER: d

33. Even though a test is both reliable and valid, it is not necessarily useful. The _______ are designed to estimate the
percentage of future employees who will be successful on the job if an organization uses a particular test.
a. Taylor-Russell tables
b. Expectancy charts
c. Lawshe tables
d. Brogden-Cronbach-Gleser utility formula
ANSWER: a

34. Which of the following pieces of information is NOT required to use the Taylor-Russell tables?
a. Criterion validity coefficient
b. Selection ratio
c. Base rate
d. Reliability
ANSWER: d

35. Even though a test is both reliable and valid, it is not necessarily useful. The _______ were created to determine the
probability that a particular applicant will be successful.
a. Taylor-Russell tables
b. Expectancy charts
c. Lawshe tables
d. Brogden-Cronbach-Gleser utility formula
ANSWER: c

36. Even though a test is both reliable and valid, it is not necessarily useful. The _______ were/was developed to
determine the amount of money that an organization would save if it used a particular test to select employees.
a. Taylor-Russell tables
b. Expectancy charts
c. Lawshe tables
d. Brogden-Cronbach-Gleser utility formula
ANSWER: d

37. The ______ are (is) used to determine the amount of money that an organization would save if it used a particular test
in place of the test it currently uses to select employees.
a. Brogden-Cronbach-Gleser formula
b. Taylor-Russell Tables
c. expectancy charts
d. Lawshe tables
ANSWER: a
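The Brogden-Cronbach-Gleser estimate combines the components listed in the short-answer section below (employees hired per year, tenure, validity, the dollar standard deviation of performance, the mean standardized score of those hired, and testing costs). A sketch with purely illustrative figures:

```python
# Brogden-Cronbach-Gleser utility: estimated dollar savings from using
# a selection test (all figures below are illustrative, not from the text).

def bcg_utility(n_hired, tenure, validity, sd_dollars, mean_z,
                n_applicants, cost_per_applicant):
    gain = n_hired * tenure * validity * sd_dollars * mean_z
    testing_cost = n_applicants * cost_per_applicant
    return gain - testing_cost

savings = bcg_utility(
    n_hired=10,         # employees hired per year
    tenure=2.0,         # average tenure in years
    validity=0.40,      # criterion validity coefficient
    sd_dollars=10_000,  # SD of job performance in dollars
    mean_z=1.0,         # mean standardized test score of those hired
    n_applicants=100,
    cost_per_applicant=25,
)
```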

38. When the criterion validity coefficient is ____, and the selection ratio is ____, a test will have the most utility in
selecting successful employees.
a. large / large
b. large / small
c. small / small
d. small / large
ANSWER: b

39. A test is considered to have _____ if there are race differences in test scores that are unrelated to the construct being
measured.
a. differential validity
b. measurement bias
c. selection bias
d. adverse impact
ANSWER: b

40. If the selection rate for any of the protected groups is less than 80% of the selection rate for either white applicants or
males, the test is considered to have _______.
a. differential validity
b. adverse impact
c. selection bias
d. known group validity
ANSWER: b
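The 80% rule above can be checked directly from hiring counts; the numbers below are hypothetical:

```python
# Four-fifths (80%) rule: adverse impact is indicated when a protected
# group's selection rate is less than 80% of the comparison group's rate.

def has_adverse_impact(hired_a, applied_a, hired_b, applied_b):
    """Group a = comparison group (e.g., white or male applicants);
    group b = protected group."""
    rate_a = hired_a / applied_a
    rate_b = hired_b / applied_b
    return rate_b / rate_a < 0.80

# Hypothetical counts: 30 of 50 comparison-group applicants hired (60%),
# 16 of 40 protected-group applicants hired (40%); 0.40 / 0.60 < 0.80
impact = has_adverse_impact(30, 50, 16, 40)  # True
```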

41. Single-group validity and differential validity are types of:
a. adverse impact
b. predictive bias
c. measurement bias
d. validation strategies
ANSWER: b

42. If a test of reading ability predicts performance of white clerks but not African American clerks, the test has _______.
a. known-group validity
b. differential validity
c. single-group validity
d. validity generalization
ANSWER: c

43. Single-group validity is very rare and is usually the result of _______.
a. small sample sizes
b. methodological problems
c. both a and b
d. none of the above
ANSWER: c

44. If a test is valid for two groups, but more valid for one than the other it is said to have _______.
a. known group validity
b. differential validity
c. single group validity
d. validity generalization
ANSWER: b

45. A test predicts performance for two different groups of applicants (e.g., men and women); however, the test predicts
the performance significantly better for men than it does for women. This exemplifies:
a. utility
b. single-group validity
c. differential validity
d. known-group validity
ANSWER: c

46. If an HR director believes the higher that applicants score on a test, the better they will do on the job, she could take a
________ approach to hiring decisions.
a. top-down
b. nonlinear
c. passing score
d. banding
ANSWER: a

47. Of the available approaches to making a hiring decision, the _______ method results in the highest levels of adverse
impact.
a. multiple hurdle
b. top-down selection
c. passing score
d. banding
ANSWER: b

48. San Antonio, Texas, has a system in which the names of the top three applicants for promotion are submitted to the Chief of Police, who then selects one of the three to be the new Captain. This system uses:
a. the rule of three
b. top down selection
c. a passing score
d. banding
ANSWER: a

49. An HR director determines that all applicants who receive at least an 81 on their test will be able to perform the
functions of the job. The hiring decision strategy to be used in this situation is:
a. multiple hurdle
b. top down selection
c. passing score
d. banding
ANSWER: c

50. With a _______ approach, the applicant is administered one test at a time.
a. multiple hurdle
b. top down selection
c. passing score
d. banding
ANSWER: a

51. Which approach to employee selection would administer several tests to applicants one at a time, with the least
expensive tests being administered first; would score the various tests on a pass/fail basis; and would continue to test each
applicant until he/she failed one of the tests?
a. Multiple-regression approach
b. Cutoff approach
c. Multiple-cutoff approach
d. Multiple-hurdle approach
ANSWER: d

52. Which of the following hiring decision strategies takes into consideration the degree of error associated with any test
score?
a. Multiple hurdle
b. Top down selection
c. Passing score
d. Banding
ANSWER: d

53. To compute a band, one needs the:
a. standard error
b. validity
c. F ratio
d. mean
ANSWER: a

54. In constructing a band, how many standard errors are normally used?
a. Three
b. One
c. None
d. Two
ANSWER: d
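Following the two-standard-error convention in the questions above (some texts instead use 1.96 times the standard error of the difference), a band can be sketched as follows; the scores, SD, and reliability are illustrative:

```python
import math

# Banding: scores within a band of the top score are treated as
# statistically equivalent, given the test's measurement error.

def band_floor(top_score, sd, reliability, n_se=2):
    sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
    return top_score - n_se * sem

scores = [95, 92, 88, 86, 80]
floor = band_floor(top_score=95, sd=10, reliability=0.84)  # 95 - 2*4 = 87
in_band = [s for s in scores if s >= floor]                # 95, 92, and 88
```

Everyone in the band may then be treated as equivalent, which can reduce the adverse impact of strict top-down selection.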

Objective Short Answer

55. What are the three main ways of determining test reliability and what type of stability does each method tap?
ANSWER:  Test-retest - temporal stability
 Alternate forms - form stability
 Internal reliability - item stability

56. What are the five methods for determining test validity?
ANSWER:  Content
 Criterion
 Construct
 Face
 Known groups

57. What components are used in the Taylor-Russell Tables?
ANSWER:  validity
 selection ratio
 base rate

58. What are the three types of criterion validity?
ANSWER:  concurrent
 predictive
 validity generalization

59. What are three important aspects in determining the fairness of a test?
ANSWER:  adverse impact
 single group validity
 differential validity

60. What are the components of the Brogden-Cronbach-Gleser utility formula?
ANSWER:  number of applicants hired per year
 average tenure
 test validity
 standard deviation of performance in dollars
 mean standardized predictor score of selected applicants
 number of applicants
 cost of the test

61. What are the four approaches to making a hiring decision?
ANSWER:  Unadjusted top-down
 Rule of 3
 Banding
 Passing scores
