Finals Reviewer (IO)
Construct Validity - The extent to which a test actually measures the construct that it purports
to measure
– Convergent Validity
– Discriminant Validity
Face Validity - The extent to which a test appears to be job-related
• Three components
– Validity
Current employees not performing well
● Vertical axis: test-score cutoff (e.g., a 50% score or the average score)
● Horizontal axis: job performance relative to the company standard
● Quadrants:
I: high JP, low TS (wrong decision — a good performer would have been rejected)
II: high JP, high TS (correct decision to hire)
III: low JP, high TS (wrong decision to hire)
IV: low JP, low TS (correct decision to reject)
● Proportion of correct decisions with the test = correct quadrants / total employees
= (Q2 + Q4) / (Q1 + Q2 + Q3 + Q4)
● Baseline of correct decisions = successful employees (high JP) / total employees
= (Q1 + Q2) / (Q1 + Q2 + Q3 + Q4)
Example:
• Suppose we have 30 employees:
• Quad 1 — 5 employees who did poorly on the test but performed well on the job
• Quad 2 — 10 employees who scored well on the test and performed well on the job
• Quad 3 — 4 employees who scored high on the test but performed poorly on the job
• Quad 4 — 11 employees who scored low on the test and performed poorly on the job
• Proportion of Correct Decisions
(10 + 11) ÷ (5 + 10 + 4 + 11)
Quadrants II + IV Quadrants I+II+III+IV
= 21 ÷ 30 = .70
• Baseline of Correct Decisions
(5 + 10) ÷ (5 + 10 + 4 + 11)
Quadrants I + II Quadrants I+II+III+IV
= 15 ÷ 30 = .50
Second example (20 employees: Quad 1 = 4, Quad 2 = 8, Quad 3 = 2, Quad 4 = 6):
• Proportion of Correct Decisions
(8 + 6) ÷ (4 + 8 + 2 + 6)
Quadrants II + IV Quadrants I+II+III+IV
= 14 ÷ 20 = .70
• Baseline of Correct Decisions
(4 + 8) ÷ (4 + 8 + 2 + 6)
Quadrants I + II Quadrants I+II+III+IV
= 12 ÷ 20 = .60
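The two worked examples above can be checked with a short sketch. The quadrant counts follow the reviewer's own convention (Q1 = high JP / low test score, Q2 = high JP / high TS, Q3 = low JP / high TS, Q4 = low JP / low TS); the function names are my own.

```python
# Proportion of correct decisions vs. baseline for an expectancy chart.
# Q1 = high JP, low TS   Q2 = high JP, high TS
# Q3 = low JP, high TS   Q4 = low JP, low TS

def proportion_correct(q1, q2, q3, q4):
    """Correct decisions with the test: hired-and-succeeded (Q2)
    plus rejected-and-would-have-failed (Q4)."""
    return (q2 + q4) / (q1 + q2 + q3 + q4)

def baseline(q1, q2, q3, q4):
    """Proportion of successful employees regardless of the test."""
    return (q1 + q2) / (q1 + q2 + q3 + q4)

# First example: Q1=5, Q2=10, Q3=4, Q4=11
print(proportion_correct(5, 10, 4, 11))  # 0.70
print(baseline(5, 10, 4, 11))            # 0.50

# Second example: Q1=4, Q2=8, Q3=2, Q4=6
print(proportion_correct(4, 8, 2, 6))    # 0.70
print(baseline(4, 8, 2, 6))              # 0.60
```

A test is only worth using when the proportion of correct decisions with the test exceeds the baseline, as it does in both examples (.70 vs. .50 and .70 vs. .60).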
Lawshe Tables
• Give the probability of a particular applicant being successful, based on:
– Validity coefficient
– Base rate
– Applicant score
– Table 6.5
Selection ratio
The ratio of the number of job openings to the number of applicants
Validity coefficient
The correlation between test scores and the criterion (job performance)
Base rate of current performance
The percentage of employees currently on the job who are considered successful.
SDy
The difference in performance (measured in dollars)
between a good and average worker (workers one
standard deviation apart)
Calculating m
• For example, we administer a test of mental ability to a group of 100 applicants and hire
the 10 with the highest scores. The average score of the 10 hired applicants was 34.6, the
average test score of the other 90 applicants was 28.4, and the standard deviation of all test
scores was 8.3. The desired figure would be:
» (34.6 - 28.4) ÷ 8.3 = .75
• You administer a test of mental ability to a group of 150 applicants, and hire 35 with the
highest scores. The average score of the 35 hired applicants was 35.7, the average test
score of the other 115 applicants was 24.6, and the standard deviation of all test scores was
11.2. The desired figure would be:
» (35.7 - 24.6) ÷ 11.2 = .99
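Both m calculations above follow the same pattern, which can be sketched as a one-line function (the function name is my own):

```python
def mean_standardized_difference(avg_hired, avg_not_hired, sd_all):
    """m = (mean score of hired applicants - mean score of
    non-hired applicants) / SD of all test scores."""
    return (avg_hired - avg_not_hired) / sd_all

# First example: 10 hired out of 100 applicants
print(round(mean_standardized_difference(34.6, 28.4, 8.3), 2))   # 0.75

# Second example: 35 hired out of 150 applicants
print(round(mean_standardized_difference(35.7, 24.6, 11.2), 2))  # 0.99
```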
Standardized Selection Ratio
• Test Bias
– A test is biased if there are group differences in test scores (e.g., by race or gender) that are unrelated to the construct being measured (e.g., integrity)
• Test Fairness
– A test is fair if people of equal probability of success on a job have an equal chance
of being hired
• Adverse Impact
– Occurs when the selection rate for one group is less than 80% of the rate for the
highest scoring group
                      Male   Female
Number of applicants   50      30
Number hired           20      10
Selection ratio       .40     .33
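The four-fifths (80%) rule from the table above can be sketched as a quick check (function name is my own). For these numbers, .33 ÷ .40 ≈ .83, which is at or above 80%, so no adverse impact is indicated.

```python
def has_adverse_impact(hired_a, applicants_a, hired_b, applicants_b):
    """Four-fifths rule: adverse impact is indicated when the lower
    selection ratio is less than 80% of the higher one."""
    ratio_a = hired_a / applicants_a
    ratio_b = hired_b / applicants_b
    low, high = sorted([ratio_a, ratio_b])
    return low / high < 0.80

# Male: 20/50 = .40; Female: 10/30 = .33; .33/.40 = .83 >= .80
print(has_adverse_impact(20, 50, 10, 30))  # False
```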
• Predictive Bias
– The predicted level of job success favors one group over the other
• Single-Group Validity
– The test is valid for one group but not for another; very rare
• Differential Validity
– The test is valid for both groups, but more valid for one than for the other
– Top-down selection - Applicants are rank-ordered on the basis of their test scores and selected in straight rank order.
Advantage
• Higher quality of selected applicants
Disadvantage
• Assumes the test score accounts for all the variance in performance (Zedeck, Cascio, Goldstein, & Outtz, 1996).
– Compensatory - A method of making selection decisions in which a high score on
one test can compensate for a low score on another test. For example, a high GPA
might compensate for a low GRE score.
• Rule of 3 - A variation on top-down selection in which the names of the top three
applicants are given to a hiring authority who can then select any of the three. A technique
often used in the public sector. Gives more flexibility to the selectors
• Passing Scores - The minimum test score that an applicant must achieve to be considered
for hire. A means for reducing adverse impact and increasing flexibility
– Who will perform at an acceptable level?
A passing score is a point in a distribution of scores that distinguishes acceptable
from unacceptable performance (Kane, 1994).
– Multiple cutoff - A selection strategy in which applicants must meet or exceed the passing score on more than one selection test. All applicants take every test at the same time; failing any one test removes them from consideration.
• All Applicants take every test
• Must achieve passing on each
• Can lead to different decisions than regression approach
– Multiple hurdle - To reduce the costs associated with applicants failing one or more
tests, multiple-hurdle approaches are often used.
– Selection practice of administering one test at a time so that applicants must pass that
test before being allowed to take the next test.
• All Applicants take the first test
• Useful when many applicants and tests are costly and time consuming
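The difference between the two strategies above can be sketched as follows. With multiple cutoff every test is scored; with multiple hurdle, testing stops at the first failure, so later (costlier) tests are never administered to failing applicants. Function names and cutoffs are illustrative.

```python
def multiple_cutoff(scores, cutoffs):
    """All tests taken at once; must meet or exceed every passing score."""
    return all(score >= cutoff for score, cutoff in zip(scores, cutoffs))

def multiple_hurdle(take_test, cutoffs):
    """Tests given one at a time; reject at the first failed cutoff,
    so remaining tests are never administered."""
    for i, cutoff in enumerate(cutoffs):
        if take_test(i) < cutoff:
            return False
    return True

print(multiple_cutoff([70, 85], [60, 80]))                # True
print(multiple_hurdle(lambda i: [55, 90][i], [60, 80]))   # False
```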
• Banding - A statistical technique based on the standard error of measurement that allows
similar test scores to be grouped.
– It is a compromise between top down hiring and passing scores
– Banding attempts to hire the top scorer while allowing flexibility for AA
• How many points apart do two applicants have to be for their test scores to be
significantly different?
– Attempts to hire the top test scorers while still allowing some flexibility for
affirmative action (Campion et al., 2001).
– To compute a band you need the standard deviation and the reliability of the test
Standard error of measurement = SD × √(1 − reliability)
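A minimal sketch of the computation, assuming a hypothetical test with SD = 8.3 and reliability = .90. The band width shown uses 1.96 × SEM × √2 (the standard error of the difference between two scores at the .05 level), which is one common choice; function names are my own.

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def band_width(sd, reliability, z=1.96):
    """One common band width: z * SEM * sqrt(2), the point at which
    two scores differ significantly at the .05 level."""
    return z * standard_error_of_measurement(sd, reliability) * math.sqrt(2)

# Hypothetical test: SD = 8.3, reliability = .90
print(round(standard_error_of_measurement(8.3, 0.90), 2))  # 2.62
print(round(band_width(8.3, 0.90), 2))                     # 7.28
```

Any two applicants whose scores fall within the band width of each other would be treated as not significantly different, allowing secondary criteria to guide the choice.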
Advantage of Banding
- Helps reduce adverse impact, increase workforce diversity, and increase
perceptions of fairness (Zedeck et al., 1996).
- Allows you to consider secondary criteria relevant to the job (Campion et al.,
2001).
Disadvantages of Banding
- Lose valuable information
- Lower the quality of people selected
- Sliding bands may be difficult to apply in the private sector
- Banding without minority preference may not reduce adverse impact
Example:
Test individuals on an intelligence test (X; our predictor) and on job performance (Y;
our criterion)
Example:
We assess individuals on their GRE scores, GPA, Letters of Recommendation,
and Vita/Resume for graduate school admissions. Different weights are given based on
their validity with performance in graduate school.
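The compensatory (weighted composite) idea above can be sketched as follows. The weights and standardized scores are hypothetical; in practice the weights would come from each predictor's validity with graduate-school performance.

```python
# Hypothetical validity-based weights for each predictor
weights = {"GRE": 0.4, "GPA": 0.3, "letters": 0.2, "vita": 0.1}

# One applicant's standardized (z) scores on each predictor:
# a high GPA compensates for weak letters of recommendation
applicant = {"GRE": 0.5, "GPA": 1.2, "letters": -0.3, "vita": 0.8}

composite = sum(weights[k] * applicant[k] for k in weights)
print(round(composite, 2))  # 0.58
```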