
Technology in Society 68 (2022) 101848


Examining perceptions towards hiring algorithms

Lixuan Zhang a,*, Christopher Yencha b

a Goddard School of Business and Economics, Weber State University, 1337 Edvalson St, Ogden, UT 84408, USA
b Foster College of Business, Bradley University, 1501 West Bradley Ave., Peoria, IL 61625, USA

ARTICLE INFO

Keywords: Algorithms; Artificial intelligence; Fairness; Effectiveness; Algorithm acceptance; People analytics

ABSTRACT

Companies are increasingly turning to AI software to select candidates, despite concerns that hiring algorithms may produce biased evaluations. This study explores public perceptions of algorithms used in resume and video interview screening. In addition, the effects of individual characteristics on these perceptions are examined. Using a nationally representative sample, we find that the public generally has a negative attitude towards the use of algorithms in hiring, and the majority do not consider them fair and effective. We also find clear individual differences in perceptions towards algorithms. Specifically, males, people with higher education levels, and people with higher incomes have more positive perceptions towards hiring algorithms than their counterparts. The findings contribute to the emerging body of research on hiring algorithms and suggest strategies to increase public acceptance of hiring algorithms.

1. Introduction

Many business functions are turning to artificial intelligence (AI) for competitive insights to generate impressive business results [1]. Human Resources (HR) departments are no exception. Traditionally relying on decision-making based on experience and intuition, managers have increasingly realized that HR professionals may benefit from adopting a data analytics mindset to understand and manage human capital and improve individual and organizational performance [2]. Progress in AI presents an opportunity for HR departments to improve performance. AI can automate repetitive, low-value tasks so human professionals can focus on more strategic work. For example, AI can tackle standard onboarding processing for new employees, answer common questions and employee requests, and handle basic benefits management [3].

Increasingly, companies are using AI to improve their hiring processes. Employers like Target, Cisco, and PepsiCo, along with major staffing agencies, are testing and adopting hiring algorithms [4]. Hiring algorithms are often trained on past applications and interviews to assess candidates and predict the likelihood of success in a given position. Resume screening algorithms rely on machine learning tools such as classification and natural language processing to parse unstructured language and extract desired information using pattern matching and language analysis techniques [5]. Video screening algorithms, empowered by facial recognition, deep learning, and natural language processing technologies, rank candidates based on speech and biometric data such as body language and facial expression [6]. The algorithms are able to assess candidates' communication skills and personality traits [7,8]. These algorithms are increasingly used by companies. For example, UK-based Vodafone uses AI to screen interview videos of its candidates, who record themselves answering standard questions. Algorithms assess candidate suitability across 15,000 dimensions, from body language and facial cues to voice intonation and speech cadence. Top candidates are then invited to in-person interviews after passing the video interview [9].

The utilization of hiring algorithms has been claimed to result in reduced time-to-hire, bigger talent pools, and a better ability for the company to assess soft skills [10]. Based on the Global Recruiting Trends 2018 report from LinkedIn, hiring managers agreed that hiring algorithms could save time, remove human bias, and help to deliver good candidate matches [9]. Similarly, research suggests the superiority of algorithms over humans in recruitment: they are at least 25% more effective than humans in selecting qualified candidates because they improve consistency in assessment [11]. However, other researchers argue that instead of reducing human bias, hiring algorithms could amplify it [12,13]. Reliance on data does not make algorithms free of bias, since humans build the algorithms. Through selections of the target variable, training data, and candidate predictors, software developers may build algorithms rooted in past discrimination, leading to potential disparities [14,15].

* Corresponding author.
E-mail addresses: lixuanzhang@weber.edu (L. Zhang), cyencha@bradley.edu (C. Yencha).

https://doi.org/10.1016/j.techsoc.2021.101848
Received 26 May 2021; Received in revised form 30 November 2021; Accepted 29 December 2021
Available online 3 January 2022
0160-791X/Published by Elsevier Ltd.

Recent studies have started to address public opinions on algorithms used in HR practices [16–19]. Contrary to the claims about the increased fairness of hiring algorithms through reduced human bias, studies have concluded that hiring algorithms are perceived as less fair [17–19], even when extra efforts were made to improve perceptions of the algorithms (e.g., involving humans in the decision-making process, considering qualitative factors explicitly, and explaining the decision-making process). The psychological drivers of this perception of unfairness may stem from lower social presence, less opportunity to perform, higher ambiguity, and lower perceived behavioral control [16–18]. On a positive note, hiring algorithms were perceived as more consistent than humans [16–18].

These studies have yielded valuable insight into consumer perceptions of hiring algorithms; however, several limitations remain unaddressed. First, the studies mainly focused on the fairness perception, ignoring other perceptions such as effectiveness. Not only should hiring algorithms be equitable, but they should also be capable of finding qualified candidates. Second, consumers may have different perceptions towards algorithms used in resume screening versus those used for video screening; previous studies have not examined the differences in perceptions between these two types of algorithms. Third, previous studies have not considered the effects of individual characteristics such as gender, race, income, or education on the acceptance of algorithms. It is important to examine these effects since individual characteristics play a critical role in new technology adoption [20,21]. Additionally, Amazon Mechanical Turk and other means of acquiring convenience samples dominate prior studies, so the conclusions reached may not be generalizable.

To address these research gaps, this exploratory study examines perceptions of hiring algorithms used in resume and video interview screening. The aim of the study is twofold: first, we examine different perceptions towards hiring algorithms and investigate whether these perceptions differ between resume screening algorithms and video screening algorithms. Second, we explore the effects of individual characteristics on algorithm perceptions using a nationally representative sample. By addressing these two research goals, the study contributes to the emerging body of research on hiring algorithm adoption. In addition, the findings provide practical insights to organizations that plan to utilize hiring algorithms, as well as to algorithm developers. For organizations, adoption of hiring algorithms could improve productivity and reduce costs; however, implementing these algorithms while ignoring the public's negative perceptions of them may lead to undesirable consequences, such as failure to attract potential high-quality applicants, reduced organizational commitment from current employees, reputational damage, and even legal risks [19,22]. For algorithm developers, understanding public perceptions of hiring algorithms will motivate them to implement techniques that reduce unintended discrimination in the algorithms and improve the public's trust.

2. Relevant literature

2.1. Use of algorithms in resume and video interview screening

Resumes are one of the most frequently used instruments in the employment selection process [23]. Typically, resumes are credentials that consist of an applicant's biographic information, which can be used to draw inferences about personality, intelligence, interpersonal skills, leadership abilities, and motivations [24]. Videos are increasingly being adopted by organizations thanks to the development of web-based communications platforms. During the COVID-19 pandemic, 86% of organizations reported using video interviews for candidate selection [25]. Gartner [25] predicts that virtual interviews may become the standard after social distancing guidelines are lifted. Video interviews can offer richer and more varied information than resumes, since the medium allows both verbal and nonverbal cues (e.g., facial expression, word choice, body language), the capability of immediate feedback, and a personal focus [26].

To reduce time-to-hire and attract qualified candidates, organizations are utilizing algorithms to automate screening and ranking in the recruitment process. There is ample evidence that algorithms can increase efficiency by shortening the time to assess and score candidates, reaching a broader talent pool, and reducing recruitment costs. According to the Global Recruiting Trends 2018 LinkedIn report, the most important benefits of using hiring algorithms are saving time and money [9].

A primary concern regarding hiring algorithms is that of fairness. The use of hiring algorithms may enhance overall fairness by removing human judgment from the process and introducing greater consistency in assessment [11]. For example, responsible use of algorithms can mitigate unconscious bias by focusing on skills, traits, and behaviors that are directly related to merit or success [27]. According to Cowgill [28], who conducted a field experiment applying algorithms to hiring workers for white-collar team-production jobs, one advantage of algorithms is the capability to select non-traditional candidates, such as those who lack job referrals, those without prior experience, and those with atypical credentials. On the other hand, many researchers have questioned the fairness of hiring algorithms. Although algorithms are touted as fair and impartial, researchers caution that algorithms used in hiring may replicate discriminatory practices in existing processes, inherit the bias of human decision-makers, and mirror biases in society [12]. Instead of mitigating human bias in hiring, algorithms may enable, facilitate, and even amplify such biases [29]. For example, Amazon began to automate its recruitment in 2014, creating 500 computer models to screen resumes for 50,000 keywords. After using the system, the company found that the automated hiring algorithms preferred men, since male resumes were overrepresented in the sample used for training the algorithms. In early 2018, Amazon discontinued the use of the system [30].

Another main concern is the effectiveness of hiring algorithms. Some research shows that algorithms can perform as well as human decision-makers or even outperform them in candidate selection processes. Algorithms have been shown to rate applicants' accomplishment essays as reliably as human raters [31], select high performers [32], accurately predict ratings for interview traits such as friendliness and engagement, and quantify the relative importance of prosody, language, and facial expressions from video resumes [33]. Algorithms used for video interview screening take into consideration factors such as voice intonation, facial expression, and word choice; however, speech recognition software does not necessarily perform well for people with regional or nonnative accents, and facial analysis systems may have trouble reading the faces of women with darker skin [4]. To complicate things further, researchers have found that facial movements convey a range of information, making it difficult to infer a person's emotions from facial analysis [34]. A deeper concern is that since there is no causal link between facial expressions and workplace success, the legitimacy of using video screening algorithms to make hiring decisions may not be substantial [4].

We examine the public's perceptions towards hiring algorithms, especially perceptions of fairness and effectiveness. Since the use of algorithms in recruitment is relatively new, especially for video interviews, we propose the following research questions:

RQ1a. What are the public's perceptions of fairness towards hiring algorithms used in resume screening vs. video interview screening?

RQ1b. What are the public's perceptions of effectiveness towards hiring algorithms used in resume screening vs. video interview screening?

In addition, we examine the overall acceptability of hiring algorithms. Acceptability can be defined as the overall evaluation of a technology, and it is highly related to the intention to use the technology [35].

RQ1c. What are the public's perceptions of acceptability towards hiring algorithms used in resume screening vs. video interview screening?


2.2. Individual differences affecting perceptions towards algorithms used in recruitment

While the research questions above focus on public perceptions of algorithms used in hiring, our next research questions examine how individual differences affect perceptions towards algorithm use in recruitment. Recent research on algorithm perception [36–40] has not specifically focused on individual differences, with the exceptions of Thurman et al. [41] and Araujo et al. [42]. In the context of hiring algorithms, Kaibel et al. [16] found that people who had experienced discrimination preferred algorithms, while those who had non-standard professional resumes preferred humans. None of these studies systematically examined the influence of demographic variables on perceptions of hiring algorithms. Since researchers believe that hiring algorithms may be affected by gender and race biases [14], it is worthwhile to examine the effects of demographic variables on perceptions. In addition to the demographic variables, we also include two other individual characteristics: perceived bias in algorithms and trust in major technology firms. In the following sections, we review the literature on how public perceptions towards algorithm use in recruitment may differ on an individual basis.

Gender. Research has implied that there are significant differences across gender in various decision-making situations (e.g., Ref. [43]). In the context of technology adoption, gender shapes the initial decision process that drives new technology adoption and, in turn, influences sustained technology usage [21]. Males are more likely to use technology to finish an instrumental task and gain utilitarian benefits [44], thereby possibly improving their perceptions of algorithms. Compared to males, females tend to have higher levels of anxiety when using new technologies and lower levels of computer aptitude [21]. Females have also been shown to care more about usability [45]. Recent studies have yielded inconsistent results on the effect of gender on the perception of algorithms. Some researchers find no gender differences in attitudes towards algorithms [40,41], while others find that females perceive algorithms as less useful than males [42]. Although males have a higher level of awareness of algorithms than females, they tend to demonstrate a greater variance in attitudes towards algorithms than females [46].

Age. Researchers have shown that age is negatively related to attitude towards technology, as well as to both short-term and long-term technology usage [20,47]. One possible reason for this correlation is that older people are exposed to newer technologies at a relatively later stage in life than younger people are. Older people, therefore, may be more likely to seek and apply traditional ways to solve problems, while younger people rely more on new technology for task accomplishment. Since older people are less familiar with algorithms, they may have less positive attitudes towards them. Recent studies find conflicting results on the relationship between age and attitudes towards algorithms. While Logg et al. [40] found that age was not related to reliance on algorithmic advice, other studies [41,42,46] found that age was negatively related to perceptions towards algorithms.

Income. Innovation tends to enter society through individuals who have higher socio-economic status [48], and people from different socio-economic classes have different values towards technology use [49]. Income is an important indicator of socio-economic status. In an earlier study on Internet adoption, individuals with lower income perceived the Internet as less useful than those with higher incomes [50]. In addition, low-income US residents often have unstable Internet access, outdated or malfunctioning hardware, or trouble accessing the public Internet due to a lack of reliable transportation. Thus, low-income residents have more limited access to employment and biased attitudes towards technology [51]. The literature on the relationship between income and perceptions towards algorithms has been scant.

Education. In addition to income, education is another indicator of socio-economic status. People with high education levels have higher cognitive abilities due to continued intellectual pursuits [52]. Therefore, they likely will have higher levels of technology literacy. Furthermore, highly educated people are likely more able to learn and adapt to changing technology. Individuals with high education levels expend less effort in learning how to use technology [53] and find new technology easier to use [50]. According to Rogers [48], early adopters of technology tend to have higher education levels. In the context of algorithm awareness, a recent study [46] found that more educated people were more aware of the use of algorithms but held a more negative attitude towards them. In contrast, Thurman et al. [41] found that education level was positively related to algorithmic appreciation in the context of news selection.

Race. Previous research has recognized that race is a significant source of influence on technology adoption, even after controlling for income and education [54]. Disparities in the intensity and nature of Internet use remain among racial groups despite decreasing racial disparities in access to the Internet [55]. For example, after controlling for other demographic characteristics, Mitchell et al. [56] found that older blacks and Hispanics are less likely than whites to use technology to manage their health. Race also plays a role in attitudes towards algorithms. About 45% of blacks think that using algorithms to compute a personal finance score for consumers is fair, while the number drops to 25% among whites. About 61% of blacks believe that using algorithms to calculate a criminal risk score is not fair, and the number falls to 49% among whites [57].

Trust in major technology companies. According to a recent survey, public trust in technology firms has suffered the largest drop among all industry sectors across 28 countries [58]. This is problematic since trust in technology firms is related to service usage [59]. Trust in technology firms may also affect the adoption of emerging technologies. For example, trust in major technology companies is negatively related to risk perceptions of autonomous vehicle (AV) adoption and positively correlated with the willingness to pay for AV insurance [60]. In a survey conducted by Edelman [58], 61% of the respondents agreed that the pace of change in technology was too fast and that governments did not understand emerging technologies well enough to regulate them effectively. This demonstrates the anxiety that people have towards emerging technologies. Major technology firms have been aggressively integrating algorithms into their products, both by developing them and by acquiring other AI companies [61]. Although not all hiring algorithms are designed by major technology companies, trust in major technology companies may still influence individual perceptions of algorithms.

Perceived bias in algorithms. Two conflicting views are present when considering bias in algorithm use. The first view is that algorithms can remove or reduce bias, since human decision-makers are prone to judgment errors due to biases from emotional responses, intuition, and irrational tolerance for ambiguity [62]. Algorithmic decisions are considered fairer as they are free of personal biases [11,28,63]. However, others insist that algorithms still produce bias. Since algorithms rely on a large volume of training data in which bias may already exist, algorithms may replicate real-world inequalities [64,65]. In addition, since humans build algorithms, their biases may be built into the models when they choose outcome variables to predict, select training datasets, and choose candidate predictors [12].

To examine the effects of the individual characteristics discussed above, we consider the following research questions:

RQ2a. How do individual characteristics affect the fairness perceptions of hiring algorithms?

RQ2b. How do individual characteristics affect the effectiveness perceptions of hiring algorithms?

RQ2c. How do individual characteristics affect the acceptability perceptions of hiring algorithms?


3. Methods

3.1. Sample and data

This study uses data from the American Trends Panel (ATP) Wave 35 conducted by the Pew Research Center [66]. The ATP is a nationally representative panel of randomly selected US adults recruited from landline and cellphone random-digit-dial surveys. Pew reports generated from this wave consider topics such as how the public views technology companies, the changing climate of online activism, and concerns over social media use of personal data. Wave 35 was conducted during May 29–June 11, 2018, with 4594 total respondents. Among all the respondents, 2274 responded to the Algorithms Used in Resume Screening scenario, and 2320 responded to the Algorithms Used in Video Interview Screening scenario (for the scenarios used in the survey, please see Appendix A).

Among the 4594 observations, 349 had missing data. Most missing data occurred along the income dimension, perhaps due to relatively low- or high-income respondents choosing not to report income out of embarrassment or modesty. This is a common issue within social science research, particularly in the collection of primary data [67,68]. To more carefully determine whether income data are missing at random, we follow Byrne [69] and order the observations in the dataset by level of education. Education is a measure much less prone to underreporting than household income and is also correlated with household income, both in theory and as borne out by the ATP Wave 35 data (r = 0.4009). With the data sorted by education level, it is apparent that the missing income data are random, assuaging concerns about a potential confound. This result holds for the entire sample as well as for both the resume and video subsamples. Therefore, the cases with missing data were dropped, leaving a total sample of 4245 observations.

3.2. Weighting

Data weights were calculated by Pew via an iterative proportional fitting method, often referred to as "raking," to correct for imbalances between sample and US population demographics. Unfortunately, the Pew-weighted means do not match the 2018 population data well, likely due to a combination of parameterizing with older population data and the inclusion of dimensions that are irrelevant to the current study.1 In response, we used an equivalent iterative proportional fitting method to produce weights with the following raking dimensions pertinent to the study at hand: gender*age, gender*education, age*education, and race. Marginal distributions of the survey were matched to contemporary known population margins collected from the American Community Survey and the Bureau of Labor Statistics' Current Population Survey. The calculated weights lead to a sample that is more representative of the population than the weights provided by Pew. Summary statistics comparing 2018 US population demographics and the weighted sample means are available in Table 1. All data analysis was conducted using the data weighted with the self-developed weights.

1 PEW-supplied weights were produced via iterative proportional fitting with the following raking dimensions: gender*age, gender*education, age*education, census region, race, population density, internet usage, party affiliation, and volunteerism. Population parameter estimates were gathered from the 2016 American Community Survey, the 2010 Census, and the 2015 Current Population Survey Volunteer Supplement, with the remainder (internet usage, party affiliation) estimated from various previous PEW surveys.

3.3. Measures

Three perceptions of hiring algorithms were used in this study: fairness, effectiveness, and acceptability. Fairness was measured by one question on a four-point Likert scale ranging from "not fair at all" to "very fair": "How FAIR do you think this type of program would be to people applying for jobs?" Effectiveness was measured by one question on a four-point Likert scale ranging from "not effective at all" to "very effective": "How EFFECTIVE do you think this type of program would be at identifying good job candidates?" Acceptability was measured by a single item: "Do you think it is acceptable or not for companies to use this type of program when hiring job candidates?" The respondents were asked to choose either "acceptable" or "not acceptable."

In addition to the demographic variables, we included an independent variable, trust in major technology firms, measured by one question: "How much of the time do you think you can trust major technology companies to do what is right?", with answers ranging from "hardly ever" to "just about always" on a four-point Likert scale. Another independent variable was perceived bias in algorithms. The respondents were asked which of two statements came closest to their view: 1) "It is possible for computer programs to make decisions without human bias" and 2) "Computer programs will always reflect the biases of the people who designed them." Table 2 presents the summary statistics for the demographic variables, perceived bias in algorithms, and trust in major technology firms.

4. Results

The source data file was downloaded from the PEW Research Center website. The open-source R programming language was used for analysis. For RQ1a, RQ1b, and RQ1c, descriptive statistics were used to examine the public's perceptions, and Mann-Whitney U tests were used to compare perceptions towards algorithms used in resume screening vs. video interviews. Fig. 1 presents the fairness, effectiveness, and acceptability perceptions towards the algorithms in the entire sample, the resume subsample, and the video subsample.

RQ1a is concerned with perceptions of the fairness of the hiring algorithms. About 62.39% of respondents think hiring algorithms are not fair at all or are not very fair. We are interested in whether there are significant differences in fairness perceptions of hiring algorithms between the two subsamples. To test this, we conduct a test of differences in mean fairness perceptions towards hiring algorithms via a Mann-Whitney U test across the two media. Respondents regarded resume screening algorithms as fairer than video screening algorithms (Mresume = 2.29; Mvideo = 2.11, p < 0.01).

RQ1b is concerned with perceptions of the effectiveness of the hiring algorithms. About 56.14% of respondents regarded them as not effective at all or not very effective. The Mann-Whitney U test indicates that resume screening algorithms are perceived as more effective than video screening algorithms (Mresume = 2.38; Mvideo = 2.25, p < 0.01).

RQ1c is concerned with perceptions of the acceptability of hiring algorithms. Generally, respondents find the use of algorithms in the hiring process unacceptable (Acceptable 37.71% vs. Not Acceptable 62.29%). Acceptability of resume screening is significantly higher than that of video screening algorithms (Mresume = 0.47; Mvideo = 0.32, p < 0.01). Table 3 summarizes the results from the Mann-Whitney tests.

Ordinal regression was used to examine individual factors affecting perceptions of fairness (RQ2a) and effectiveness (RQ2b). Ordinal regression is performed by fitting a vector of coefficients and a set of ordered thresholds, separating the real number line into as many disjoint segments as there are dependent variable response values. In doing so, maximum likelihood estimation can provide the marginal effect of a change in an independent variable on the likelihood of observing a particular response outcome [70]. This is a convenient statistical method for working with ordinal data, such as Likert-type scales, as researchers do not have to make assumptions about the interval distances between dependent variable options. To examine individual factors affecting perceptions of acceptability (RQ2c), logistic regression was used, since acceptability is coded as a binary variable.
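For reference, the ordered probit model described above can be written in latent-variable form. The notation below is ours, chosen to be consistent with the verbal description in the text rather than taken from the paper's sources:

```latex
% Latent index y_i^* with standard normal error; thresholds
% kappa_1 < ... < kappa_{J-1} cut the real line into J disjoint
% segments, one per response category:
y_i^* = \mathbf{x}_i'\boldsymbol{\beta} + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0,1),
\qquad y_i = j \;\;\text{iff}\;\; \kappa_{j-1} < y_i^* \le \kappa_j.

% With kappa_0 = -infinity and kappa_J = +infinity, the probability of
% each observed category is a difference of normal CDF values:
\Pr(y_i = j \mid \mathbf{x}_i)
  = \Phi\!\left(\kappa_j - \mathbf{x}_i'\boldsymbol{\beta}\right)
  - \Phi\!\left(\kappa_{j-1} - \mathbf{x}_i'\boldsymbol{\beta}\right).
```

Maximizing the likelihood built from these category probabilities jointly estimates the coefficient vector and the thresholds, which is why no assumption about the spacing of the Likert categories is required.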

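As a concrete sketch of these steps in R (the language used for the analysis), the following outlines the Mann-Whitney comparison, the ordered probit, and the binomial logit. All object and column names here (atp, fairness, acceptable, resume, weight, and the predictors) are hypothetical placeholders rather than the actual PEW variable names:

```r
# Minimal sketch of the analysis pipeline; variable names are illustrative.
library(MASS)  # polr() fits ordered logistic/probit regression models

# Mann-Whitney U test: fairness ratings, resume vs. video scenario (RQ1a)
wilcox.test(fairness ~ factor(resume), data = atp)

# Ordered probit for the 4-point fairness rating (RQ2a); ordered() turns the
# Likert response into an ordered factor, and method = "probit" selects the
# probit link reported in the regression tables.
fair_fit <- polr(ordered(fairness) ~ gender + race + age_group + income +
                   education + algo_bias + trust_tech + resume,
                 data = atp, weights = weight, method = "probit", Hess = TRUE)

# Binomial logit for the binary acceptability item (RQ2c); survey-weighted
# fits are often run with the survey package instead, but glm() conveys the idea.
acc_fit <- glm(acceptable ~ gender + race + age_group + income + education +
                 algo_bias + trust_tech + resume,
               data = atp, weights = weight, family = binomial(link = "logit"))

summary(fair_fit)
summary(acc_fit)
```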

Table 1
Comparison of unweighted sample, weighted samples, and US population demographics.

                         2018 Pop. Means   Unweighted Means   PEW Weighted Means   Self-Developed Weighted Means
Gender
  Male                   0.49              0.51               0.49                 0.489
  Female                 0.51              0.49               0.51                 0.511
Race
  White                  0.763             0.749              0.647                0.763
  Nonwhite               0.237             0.251              0.353                0.237
Age
  18–29                  0.164             0.104              0.198                0.157
  30–49                  0.24              0.306              0.342                0.233
  50–64                  0.191             0.313              0.27                 0.192
  65+                    0.165             0.277              0.19                 0.179
Education
  High school or less    0.380             0.152              0.378                0.372
  Some college           0.260             0.295              0.318                0.254
  Bachelor's or higher   0.361             0.553              0.304                0.374
Observations             327,167,400       4245               4245                 4245

*Italicized values are anchors.
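The raking procedure behind the self-developed weights compared in Table 1 can be sketched as an iterative proportional fitting loop. This is a minimal illustration that rakes on single variables; the actual weights described in Section 3.2 rake on the interactions gender*age, gender*education, and age*education plus race, and the margins shown below are hypothetical placeholders:

```r
# Minimal raking (iterative proportional fitting) sketch: rescale unit weights
# until the weighted margins of each raking variable match known population
# margins. Illustrative only; not the paper's exact procedure.
rake_weights <- function(data, margins, max_iter = 100, tol = 1e-8) {
  w <- rep(1, nrow(data))                        # start from uniform weights
  for (iter in seq_len(max_iter)) {
    w_old <- w
    for (var in names(margins)) {
      # current weighted share of each level of this raking variable
      current <- tapply(w, data[[var]], sum) / sum(w)
      lvl <- as.character(data[[var]])
      # multiply each unit's weight by (target share / current share)
      w <- w * margins[[var]][lvl] / current[lvl]
    }
    if (max(abs(w - w_old)) < tol) break         # stop once weights stabilize
  }
  unname(w) * nrow(data) / sum(w)                # normalize weights to sum to n
}

# Hypothetical usage with placeholder population shares:
margins <- list(gender = c(male = 0.49, female = 0.51),
                race   = c(white = 0.763, nonwhite = 0.237))
# atp$weight <- rake_weights(atp, margins)
```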

Table 2
Summary statistics.

                                       Resume screening                  Video screening
                                       Unweighted      Weighted          Unweighted      Weighted
Age
  18–29                                226 (10.83%)    431.38 (20.66%)   220 (10.20%)    436.15 (20.22%)
  30–49                                650 (31.13%)    638.51 (30.58%)   648 (30.04%)    662.63 (30.72%)
  50–64                                648 (31.03%)    529.93 (25.38%)   679 (31.48%)    542.05 (25.13%)
  65 or older                          564 (27.01%)    488.17 (23.38%)   610 (28.28%)    516.17 (23.93%)
Gender
  Male                                 1048 (50.19%)   1023.75 (49.03%)  1116 (51.74%)   1050.03 (48.68%)
  Female                               1040 (49.81%)   1064.25 (50.97%)  1041 (48.26%)   1106.97 (51.32%)
Race
  White                                1561 (74.76%)   1593.14 (76.30%)  1619 (75.06%)   1645.79 (76.30%)
  Non-white                            527 (25.24%)    494.86 (23.70%)   538 (24.94%)    511.21 (23.70%)
Education
  High school or less                  320 (15.33%)    778.20 (37.27%)   325 (15.07%)    801.33 (37.15%)
  Some college                         607 (29.07%)    535.15 (25.63%)   645 (29.90%)    541.41 (25.10%)
  College graduate                     1161 (55.60%)   774.65 (37.10%)   1187 (55.03%)   814.27 (37.75%)
Family income
  <$30K                                426 (20.40%)    687.79 (32.94%)   417 (19.33%)    628.33 (29.13%)
  $30K–$75K                            713 (34.15%)    733.51 (35.13%)   727 (33.70%)    790.54 (36.65%)
  $75K+                                949 (45.45%)    666.70 (31.93%)   1013 (46.97%)   738.13 (34.22%)
Perceived bias in algorithms
  No bias                              1269 (60.78%)   1191.20 (57.05%)  1346 (62.40%)   1337.34 (62.00%)
  Reflect human bias                   819 (39.22%)    896.80 (42.95%)   811 (37.60%)    819.66 (38.00%)
Trust in major tech companies to do what is right
  Just about always/Most of the time   598 (28.64%)    582.13 (27.88%)   557 (25.82%)    563.19 (26.11%)
  Some of the time/Hardly ever         1490 (71.36%)   1505.87 (72.12%)  1600 (74.18%)   1593.81 (73.89%)

RQ2a asks how individual characteristics affect the fairness perceptions of hiring algorithms. To answer this question, we estimate three regressions for each dependent variable: one for the entire sample, one for the resume subsample, and one for the video subsample. A dummy variable, resume, codes the scenario about which the respondent was questioned: it equals 1 if the respondent was asked about the use of algorithms in screening resumes, and 0 if the respondent was asked about the use of algorithms for video interview screening. Fairness was measured via a 4-point Likert-type scale and was estimated via ordinal regression. Results for fairness perceptions regarding the use of algorithms for screening in the hiring process are presented in Table 4. Females felt that the use of algorithms in the screening process was less fair than males did. Nonwhites considered algorithmic screening fairer than whites. Those in the 30–49 and 65+ age cohorts felt that algorithm use in the hiring process was fairer than the youngest respondents (age 18–29) did; these groups predominantly drove this effect in the video subsample. Middle income ($30k–$75k) and lower education negatively predicted fairness. Perceived algorithm bias was negatively related to fairness, and trust in major technology companies was positively related to fairness. The estimated coefficient on trust in major technology companies is the largest effect found and is statistically different, in absolute value, from the next largest (that of the $30k–$75k income group) (t > |12| in all samples), suggesting the importance of this factor in guiding perceptions of fairness. Finally, the resume dummy was significant, demonstrating that algorithm fairness perceptions for resumes were higher than for video interviews on average. Compared to the entire sample, in the resume subsample race was not a significant factor affecting fairness perceptions, and there was no difference in fairness perceptions between respondents in the 30–49 age group and those in the 18–29 age group. For the video subsample, gender and income were not significant factors in fairness perceptions of algorithms used in video interviews.

RQ2b asks how individual characteristics affect the effectiveness perceptions of hiring algorithms. Effectiveness was measured via a 4-point Likert-type scale and was estimated via ordinal regression. Table 5 presents ordered probit regression results for hiring algorithms' effectiveness. Considering the entire sample, gender was only marginally related to effectiveness, with females considering the algorithms less effective. Similarly, education and income were marginally related to effectiveness, with the middle-income group ($30k–$75k) regarding algorithm use as less effective than the high-income group ($75k+), and respondents with some college considering the algorithms less effective than those with at least a bachelor's degree. The oldest cohort (65+) considered the hiring algorithms more effective than the youngest cohort (18–29).


Fig. 1. Perceptions on hiring algorithms.


In the resume subsample, gender and education had no effect on perceptions of algorithm effectiveness. In the video subsample, education and income had no significant effects, but females considered video interview algorithms significantly less effective than males. The resume dummy variable was significant for the entire sample, indicating that resume algorithms were considered more effective than video interview algorithms. Not surprisingly, perceived bias in algorithms had a negative relationship with effectiveness. Trust in major technology companies was positively correlated with effectiveness and, again, was the most important driver of perceptions of effectiveness in all three samples.

RQ2c examines how individual characteristics affect the acceptability perceptions of hiring algorithms. Acceptability was measured as a dummy variable and was estimated via a GLM with a binomial distribution and a logit link function. Table 6 presents results for the models predicting the acceptability of algorithmic screening in the hiring process. Gender, income, education, perceived bias towards algorithms, and trust in major technology firms drove results across the three samples. Among these variables, trust in major technology firms had the largest coefficient in the regression models predicting acceptability. Female respondents found hiring algorithms to be less acceptable than males did. Age did not appear to be correlated with the acceptability of algorithms in the hiring process, and the middle-income group ($30k–$75k) considered hiring algorithms significantly less acceptable compared with the high-income group ($75k+). Respondents with lower education levels, compared to those with a bachelor's degree or above, felt that the use of algorithms in the hiring process was less acceptable on average. Finally, the resume dummy indicated that the resume and video subsamples differed significantly in their perceptions of the acceptability of algorithm use within their specific media (as was shown previously in Table 3). Interestingly, while there was a significant divergence between females and males in the acceptability of hiring algorithms for the video subsample, there was no divergence between the sexes for the resume subsample. In contrast, lower education levels negatively predicted the acceptability of hiring algorithms for the resume subsample, but not for the video subsample.

Table 3
Mann-Whitney U tests for differences in means between resume vs. video for perceptions of hiring algorithms.

             Resume mean     Video mean      Difference   U           Rho
Fair         2.294 [0.032]   2.109 [0.031]   0.185        2,487,663   4.36E-10
Effective    2.381 [0.033]   2.249 [0.030]   0.132        2,469,037   6.88E-09
Acceptable   0.436 [0.018]   0.322 [0.016]   0.114        2,479,610   1.31E-11

*Standard deviations in brackets.

Table 4
Ordered probit regressions of fairness of algorithmic screening.
Dependent variable: fairness.

                           Entire sample       Resume              Video
Gender (female)            −0.068** (0.034)    −0.095** (0.047)    −0.040 (0.048)
Race (nonwhite)            0.101** (0.041)     0.063 (0.057)       0.139** (0.058)
Age (30–49)                0.114** (0.048)     0.098 (0.065)       0.139** (0.071)
Age (50–64)                0.011 (0.050)       −0.056 (0.069)      0.086 (0.073)
Age (65+)                  0.143*** (0.051)    0.118* (0.072)      0.176** (0.075)
Inc (<30k)                 0.023 (0.047)       −0.024 (0.064)      0.080 (0.068)
Inc (30–75k)               −0.109*** (0.042)   −0.113* (0.058)     −0.096 (0.060)
Ed (HighSchoolOrLess)      −0.135*** (0.042)   −0.131** (0.058)    −0.149** (0.061)
Ed (SomeCollege)           −0.129*** (0.044)   −0.169*** (0.060)   −0.086 (0.063)
Perceived algorithm bias   −0.181*** (0.034)   −0.210*** (0.047)   −0.150*** (0.050)
Trust in tech companies    0.384*** (0.038)    0.361*** (0.052)    0.423*** (0.055)
Resume                     0.213*** (0.033)
Observations               4245                2088                2157
Akaike Inf. Crit.          10403.32            5229.70             5181.32

Note.
1. Reference groups: Race (white); Age (18–29); Income (75K+); Education (college graduate or higher).
2. Numbers in brackets are standard errors.
3. *p < 0.1; **p < 0.05; ***p < 0.01.

Table 5
Ordered probit regressions of effectiveness of algorithmic screening.
Dependent variable: effectiveness.

                           Entire sample       Resume              Video
Gender (female)            −0.055* (0.034)     0.026 (0.047)       −0.138*** (0.048)
Race (nonwhite)            0.021 (0.041)       0.004 (0.057)       0.030 (0.058)
Age (30–49)                −0.0002 (0.048)     0.057 (0.065)       −0.064 (0.070)
Age (50–64)                −0.002 (0.050)      0.073 (0.069)       −0.080 (0.073)
Age (65+)                  0.131** (0.051)     0.164** (0.072)     0.104 (0.074)
Inc (<30k)                 0.067 (0.046)       0.046 (0.064)       0.087 (0.068)
Inc (30–75k)               −0.070* (0.042)     −0.106* (0.058)     −0.036 (0.060)
Ed (HighSchoolOrLess)      −0.0002 (0.042)     −0.017 (0.058)      0.028 (0.060)
Ed (SomeCollege)           −0.074* (0.044)     −0.075 (0.060)      −0.076 (0.063)
Perceived algorithm bias   −0.223*** (0.034)   −0.169*** (0.047)   −0.287*** (0.050)
Trust in tech companies    0.290*** (0.038)    0.304*** (0.052)    0.273*** (0.055)
Resume                     0.150*** (0.033)
Observations               4245                2088                2157
Akaike Inf. Crit.          10193.16            5091.41             5112.57

Note.
1. Reference groups: Race (white); Age (18–29); Income (75K+); Education (college graduate or higher).
2. Numbers in brackets are standard errors.
3. *p < 0.1; **p < 0.05; ***p < 0.01.

Table 6
Logistic regression of acceptability of algorithmic screening.
Dependent variable: acceptability.

                           Entire sample       Resume              Video
Gender (female)            −0.248** (0.107)    −0.098 (0.150)      −0.450*** (0.149)
Race (nonwhite)            −0.067 (0.127)      −0.170 (0.174)      0.009 (0.187)
Age (30–49)                0.041 (0.182)       −0.092 (0.250)      0.227 (0.278)
Age (50–64)                0.032 (0.182)       −0.254 (0.248)      0.387 (0.278)
Age (65+)                  0.123 (0.186)       −0.107 (0.260)      0.443 (0.279)
Inc (<30k)                 −0.209 (0.146)      −0.278 (0.202)      −0.089 (0.208)
Inc (30–75k)               −0.353*** (0.122)   −0.364** (0.170)    −0.322* (0.176)
Ed (HighSchoolOrLess)      −0.259* (0.142)     −0.342* (0.201)     −0.188 (0.198)
Ed (SomeCollege)           −0.308*** (0.116)   −0.421** (0.163)    −0.158 (0.171)
Perceived algorithm bias   −0.319*** (0.108)   −0.277* (0.150)     −0.387** (0.157)
Trust in tech companies    0.494*** (0.117)    0.499*** (0.163)    0.532*** (0.168)
Resume                     0.481*** (0.104)
Sample size                4245                2088                2157
Log Likelihood             −2508.803           −1285.864           −1208.603
Akaike Inf. Crit.          5043.605            2595.728            2441.205

Note.
1. Reference groups: Race (white); Age (18–29); Income (75K+); Education (college graduate or higher).
2. Numbers in brackets are standard errors.
3. *p < 0.1; **p < 0.05; ***p < 0.01.


We also conducted regression analysis using the PEW-weighted data. The results are similar to those using the self-developed weights, in spite of the inconsistencies between the PEW-weighted and population sample means (see Appendix B for the results from the PEW-weighted data). Since it is recommended that both weighted and unweighted estimates be reported [71], we present the regression results for the unweighted data in Tables 7–9 to compare to the previous results, focusing primarily on differences. Concerning fairness, the unweighted data did not show the more favorable perceptions from nonwhites and from the oldest cohort (age 65+); on the contrary, the 50–64 cohort perceived the hiring algorithms as less fair than the youngest cohort. Similarly, the 50–64 cohort perceived hiring algorithms as less effective than the youngest cohort. For the predictors of acceptability, the unweighted data show that, compared to whites, nonwhites considered hiring algorithms less acceptable, though primarily in the resume subsample. Compared to the youngest cohort (age 18–29), the 50–64 cohort perceived hiring algorithms as less acceptable; again, this difference was only found in the resume subsample. Discrepancies between the unweighted and weighted results in terms of age and race likely stem from disproportionalities of the surveyed group compared to the population.

Table 7
Ordered probit regressions of fairness of algorithmic screening, unweighted.
Dependent variable: fairness.

                           Entire sample       Resume              Video
Gender (female)            −0.158*** (0.034)   −0.155*** (0.048)   −0.164*** (0.047)
Race (nonwhite)            −0.006 (0.040)      −0.031 (0.057)      0.015 (0.056)
Age (30–49)                −0.026 (0.060)      −0.059 (0.085)      0.008 (0.086)
Age (50–64)                −0.169*** (0.060)   −0.244*** (0.085)   −0.094 (0.086)
Age (65+)                  −0.004 (0.061)      −0.072 (0.087)      0.066 (0.087)
Inc (<30k)                 0.015 (0.050)       −0.031 (0.070)      0.066 (0.071)
Inc (30–75k)               −0.080** (0.039)    −0.102* (0.056)     −0.057 (0.055)
Ed (HighSchoolOrLess)      −0.039 (0.052)      −0.026 (0.073)      −0.055 (0.074)
Ed (SomeCollege)           −0.096** (0.040)    −0.100* (0.056)     −0.095* (0.056)
Perceived algorithm bias   −0.146*** (0.034)   −0.129*** (0.049)   −0.163*** (0.049)
Trust in tech companies    0.343*** (0.037)    0.324*** (0.053)    0.370*** (0.054)
Resume                     0.196*** (0.033)
Observations               4245                2088                2157
Akaike Inf. Crit.          10297.02            5107.89             5209.09

Note.
1. Reference groups: Race (white); Age (18–29); Income (75K+); Education (college graduate or higher).
2. Numbers in brackets are standard errors.
3. *p < 0.1; **p < 0.05; ***p < 0.01.

Table 8
Ordered probit regressions of effectiveness of algorithmic screening, unweighted.
Dependent variable: effectiveness.

                           Entire sample       Resume              Video
Gender (female)            −0.110*** (0.034)   −0.083* (0.048)     −0.137*** (0.047)
Race (nonwhite)            −0.035 (0.040)      −0.039 (0.057)      −0.028 (0.056)
Age (30–49)                −0.100* (0.060)     −0.107 (0.085)      −0.091 (0.085)
Age (50–64)                −0.124** (0.060)    −0.105 (0.086)      −0.139 (0.086)
Age (65+)                  0.066 (0.062)       0.028 (0.088)       0.104 (0.087)
Inc (<30k)                 0.023 (0.050)       −0.034 (0.071)      0.080 (0.071)
Inc (30–75k)               −0.041 (0.039)      −0.087 (0.056)      0.003 (0.055)
Ed (HighSchoolOrLess)      0.043 (0.052)       0.053 (0.074)       0.032 (0.074)
Ed (SomeCollege)           −0.051 (0.040)      −0.021 (0.057)      −0.081 (0.055)
Perceived algorithm bias   −0.162*** (0.035)   −0.160*** (0.049)   −0.162*** (0.049)
Trust in tech companies    0.278*** (0.038)    0.265*** (0.053)    0.296*** (0.054)
Resume                     0.178*** (0.033)
Observations               4245                2088                2157
Akaike Inf. Crit.          10027.56            4946.92             5094.75

Note.
1. Reference groups: Race (white); Age (18–29); Income (75K+); Education (college graduate or higher).
2. Numbers in brackets are standard errors.
3. *p < 0.1; **p < 0.05; ***p < 0.01.

Table 9
Logistic regression of acceptability of algorithmic screening, unweighted.
Dependent variable: acceptability.

                           Entire sample       Resume              Video
Gender (female)            −0.411*** (0.066)   −0.295*** (0.091)   −0.547*** (0.095)
Race (nonwhite)            −0.191** (0.079)    −0.308*** (0.109)   −0.066 (0.114)
Age (30–49)                −0.141 (0.116)      −0.299* (0.161)     0.041 (0.171)
Age (50–64)                −0.303*** (0.117)   −0.502*** (0.162)   −0.073 (0.173)
Age (65+)                  −0.108 (0.119)      −0.263 (0.165)      0.074 (0.174)
Inc (<30k)                 −0.236** (0.098)    −0.300** (0.135)    −0.160 (0.143)
Inc (30–75k)               −0.259*** (0.077)   −0.297*** (0.106)   −0.217* (0.111)
Ed (HighSchoolOrLess)      −0.208** (0.102)    −0.237* (0.141)     −0.175 (0.149)
Ed (SomeCollege)           −0.323*** (0.078)   −0.355*** (0.108)   −0.299*** (0.112)
Perceived algorithm bias   −0.246*** (0.067)   −0.223** (0.093)    −0.268*** (0.096)
Trust in tech companies    0.434*** (0.072)    0.373*** (0.100)    0.511*** (0.104)
Resume                     0.438*** (0.065)
Sample size                4245                2088                2157
Log Likelihood             −2722.23            −1388.38            −1328.35
Akaike Inf. Crit.          5470.45             2800.76             2680.70

Note.
1. Reference groups: Race (white); Age (18–29); Income (75K+); Education (college graduate or higher).
2. Numbers in brackets are standard errors.
3. *p < 0.1; **p < 0.05; ***p < 0.01.

5. General discussion

This study aims to gauge the general public's perceptions of hiring algorithms and how individual factors affect these perceptions. As shown by the results for RQ1a–c, respondents generally have a negative attitude towards the use of algorithms in hiring. Regarding fairness, about 62% of respondents considered it unfair to use these types of algorithms on applicants. Regarding effectiveness, about 56% of the respondents believed the algorithms were not effective in identifying good candidates. In addition, about 62% of the respondents believed it was not acceptable for organizations to utilize hiring algorithms. Respondents considered resume screening algorithms fairer, more effective, and more acceptable than video screening algorithms.

The influence of individual characteristics on perceptions of fairness, effectiveness, and acceptability was also examined. Consistent findings from the weighted and unweighted data include: 1) females were less enthusiastic about algorithms used in recruitment than males; specifically, they considered the hiring algorithms less fair, less effective, and less acceptable than males did; 2) respondents with high incomes considered resume screening algorithms fairer, more effective, and more acceptable than middle-income respondents did;


3) respondents with at least a bachelor's degree perceived resume screening algorithms as fairer and more acceptable; 4) when respondents trusted major technology firms, their attitudes towards the algorithms were more positive; and 5) when respondents believed that computer programs would reflect the biases of the people who designed them, they found algorithms less fair, less effective, and less acceptable.

6. Theoretical contributions

Our research makes four contributions to the emerging body of research on hiring algorithm adoption. First, some researchers believe that hiring algorithms may be able to reduce gender bias, since traditional hiring methods tend to focus on the wrong traits, such as confidence and charisma, resulting in the hiring of more males, especially in leadership positions [72]. However, the results of this paper clearly demonstrate that women find hiring algorithms less appealing than men do, which may affect their willingness to use them. It is possible that women believe that hiring algorithms may exacerbate existing gender bias. Ajunwa [29] suggests that the use of automation as an anti-bias intervention can be paradoxical; in this case, the algorithms may discourage women from applying for jobs instead of reducing the gender bias in hiring. Our work raises an important question about the impact of hiring algorithms on the perceptions and behaviors of underrepresented groups. Second, adding to previous research that mainly focused on fairness, this study also examines effectiveness perceptions of hiring algorithms, that is, the ability of the algorithm to produce a desired outcome (in this case, finding the right candidates). As shown in previous research on the evaluation of performance management systems [73], both effectiveness and fairness are considered critical by employees. Third, by empirically documenting the roles of individual characteristics, we make an important contribution towards explaining potential sources of resistance to hiring algorithms. This is one of the earliest studies of how individual characteristics affect perceptions of hiring algorithms. In addition to demographic variables, the study has also examined the influence of attitudes towards algorithms and big technology firms on perceptions of hiring algorithms. Fourth, previous research on perceptions of hiring algorithms utilized convenience samples, including MTurk workers, employees, and students. While these studies provide valuable insights, their findings may not be generalizable to the entire population. By utilizing a nationally representative sample and producing sample weights to match the sample to 2018 US population demographics, we believe that the findings from this study can be generalized with confidence to the population.

6.1. Practical implications

The present findings offer several practical contributions to practitioners. First, organizations should be cautious when deciding to utilize algorithms in recruitment, especially video screening algorithms. Our results indicate that the majority of respondents hold a negative attitude towards the algorithms, considering them less fair and less effective, and therefore less acceptable. Considering that reliance on algorithms may be the future of recruitment, managers may implement several techniques when rolling out recruitment algorithms. For example, researchers have found that if services empowered by AI support a human provider instead of replacing him or her, consumers are more likely to adopt the services [74]. Organizations can use a hybrid model by teaming up human decision-makers and the algorithm when evaluating candidates [75]. In addition, organizations can provide information about the decision-making processes of algorithms and emphasize the level of sophistication and consistency of the algorithms, increasing candidates' cognitive trust towards them. Organizations can also increase candidates' emotional trust by establishing a social presence through instant messaging or an avatar [76]. Finally, organizations that rely on algorithms for hiring should regularly evaluate them and correct them for biases [14].

The study's results could also help organizations design promotional messages to increase acceptance of these algorithms. When examining the influence of individual factors on algorithm perceptions, we find that males, respondents with the highest levels of education, and those in the highest income groups are more positive towards the algorithms than others. When implementing hiring algorithms, organizations should be sensitive to possible diversity in decision-making processes among different demographic segments. To increase algorithm acceptance, promotional messages might be tailored to emphasize features that are important to each segment. For example, since females are more likely to be influenced by peer testimonials and adequate support availability [21], a promotional message for an algorithm might best be designed to focus on these factors if the objective is to attract female candidates.

Second, the study provides valuable information to designers of algorithms. Since the belief that algorithms always reflect the biases of their designers is negatively related to positive perceptions, algorithm designers should be vigilant about possible biases when designing hiring algorithms. Algorithms may inherit the biases of society, of decision-makers, and of their designers, whether intentionally or not. When designing algorithms, designers should carefully define the target variable (i.e., the possible definitions of a good candidate), collect quality and representative data, use unbiased training data, and select features that do not systematically discount members of any particular class [12]. In addition, designers may try exploration-based methods, such as the contextual bandit approach, to select from under-represented groups and improve modeling by learning more about hiring quality [77].

6.2. Limitations and future research

Our work has several limitations. First, since we conducted a secondary analysis of a data set collected by the PEW Research Center, all constructs were measured using a single item, raising questions about reliability and about the content validity needed to capture all facets of the constructs. However, Bergkvist and Rossiter [78] empirically demonstrated that "carefully crafted single-item measures … are at least as valid as multi-item measures of the same constructs" (p. 618). Nevertheless, future research should use multiple items for perceptions of algorithms to increase reliability and capture more dimensions. For example, future research could measure procedural fairness and outcome fairness for hiring algorithms. Second, while the use of cross-sectional data is well suited to answering the research questions posed in the study, causality cannot be established. Therefore, longitudinal or experimental approaches are encouraged to further validate our findings. As algorithms are increasingly utilized in hiring processes, the influence of individual characteristics on perceptions may change accordingly. Third, the results of the study indicate that respondents had a more negative attitude towards video screening algorithms than towards resume screening algorithms. Future research could extend this finding and explore the underlying psychological mechanisms of this result. Video screening algorithms may be relatively new, but they are poised to become a dominant interview method in the future due to the COVID-19 pandemic.


Appendix A

Scenarios

Resume Screening Scenario

In an effort to improve the hiring process, some companies are now using computers to screen resumes. The computer assigns each candidate an automated score based on the content of their resume and how it compares with the resumes of employees who have been successful. Only resumes that meet a certain score are sent to a hiring manager for further review.

Video Interview Scenario

In an effort to improve the hiring process, some companies are now recording interviews with job candidates. These videos are analyzed by a computer, which matches the characteristics and behavior of candidates with traits shared by successful employees. Candidates are then given an automated score that helps the firm decide whether or not they might be a good hire.

Appendix B
Table B1
Ordered Probit Regressions of Fairness of Algorithmic Screening, Pew weighted

Dependent variable: fairness

                           Entire sample        Resume               Video
Gender (female)            −0.067** (0.034)     −0.105** (0.047)     −0.034 (0.048)
Race (nonwhite)            0.115*** (0.037)     0.081 (0.051)        0.151*** (0.053)
Age (30–49)                0.109** (0.048)      0.078 (0.065)        0.150** (0.070)
Age (50–64)                0.018 (0.050)        −0.060 (0.070)       0.098 (0.073)
Age (65+)                  0.146*** (0.055)     0.101 (0.077)        0.197** (0.080)
Income (<30k)              0.056 (0.047)        0.017 (0.065)        0.103 (0.068)
Income (30–75k)            −0.114*** (0.043)    −0.139** (0.059)     −0.083 (0.062)
Ed (high school or less)   −0.120*** (0.044)    −0.133** (0.062)     −0.121* (0.064)
Ed (some college)          −0.151*** (0.044)    −0.183*** (0.060)    −0.118* (0.063)
Perceived algorithm bias   −0.149*** (0.034)    −0.190*** (0.047)    −0.106*** (0.050)
Trust in tech companies    0.368*** (0.037)     0.355*** (0.052)     0.399*** (0.055)
Resume                     0.195*** (0.033)
Observations               4245                 2088                 2157
Akaike Inf. Crit.          10449.68             5407.53              5050.78

Note.
1. Reference groups: Race (white); Age (18–29); Income (75k+); Education (college graduate or higher)
2. Numbers in parentheses are standard errors
3. *p < 0.1; **p < 0.05; ***p < 0.01
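For readers who wish to see the specification behind Tables B1 and B2 in code form, the following is a minimal sketch, not the authors' estimation script: it fits an ordered probit in Python with statsmodels, using hypothetical column names standing in for the Pew American Trends Panel variables. Note that OrderedModel does not accept survey weights, so this unweighted fit illustrates the model structure only; reproducing the Pew-weighted estimates above would additionally require a weighted likelihood or survey-analysis software.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical extract of Pew American Trends Panel Wave 35 data.
df = pd.read_csv("atp_wave35.csv")

predictors = ["female", "nonwhite", "age_30_49", "age_50_64", "age_65plus",
              "inc_lt30k", "inc_30_75k", "ed_hs_or_less", "ed_some_college",
              "perceived_bias", "trust_tech", "resume_scenario"]

# 'fairness' is the ordinal response; distr="probit" gives an ordered probit.
model = OrderedModel(df["fairness"], df[predictors], distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```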

Table B2
Ordered Probit Regressions of Effectiveness of Algorithmic Screening, Pew weighted

Dependent variable: effectiveness

                           Entire sample        Resume               Video
Gender (female)            −0.054 (0.034)       0.005 (0.047)        −0.114** (0.048)
Race (nonwhite)            0.023 (0.037)        0.015 (0.051)        0.030 (0.053)
Age (30–49)                −0.033 (0.047)       0.054 (0.065)        −0.013 (0.070)
Age (50–64)                −0.031 (0.050)       0.084 (0.070)        −0.019 (0.073)
Age (65+)                  0.163*** (0.055)     0.151** (0.077)      0.184** (0.079)
Income (<30k)              0.109*** (0.047)     0.084 (0.065)        0.134** (0.068)
Income (30–75k)            −0.074* (0.043)      −0.117* (0.059)      −0.031 (0.062)
Ed (high school or less)   −0.005 (0.044)       −0.002 (0.062)       0.0002 (0.064)
Ed (some college)          −0.116*** (0.043)    −0.080 (0.060)       −0.157** (0.063)
Perceived algorithm bias   −0.192*** (0.034)    −0.133*** (0.047)    −0.260*** (0.050)
Trust in tech companies    0.278*** (0.037)     0.297*** (0.052)     0.253*** (0.055)
Resume                     0.156*** (0.033)
Observations               4245                 2088                 2157
Akaike Inf. Crit.          10400.10             5374.41              5036.88

Note.
1. Reference groups: Race (white); Age (18–29); Income (75k+); Education (college graduate or higher)
2. Numbers in parentheses are standard errors
3. *p < 0.1; **p < 0.05; ***p < 0.01


Table B3
Logistic Regression of Acceptability of Algorithmic Screening, Pew weighted

Dependent variable: acceptability

                           Entire sample        Resume               Video
Gender (female)            −0.236** (0.108)     −0.134 (0.151)       −0.376** (0.155)
Race (nonwhite)            −0.059 (0.124)       −0.147 (0.172)       0.030 (0.178)
Age (30–49)                0.069 (0.174)        −0.034 (0.236)       0.229 (0.267)
Age (50–64)                0.050 (0.180)        −0.185 (0.240)       0.363 (0.280)
Age (65+)                  0.172 (0.181)        −0.047 (0.247)       0.475* (0.274)
Income (<30k)              −0.164 (0.147)       −0.206 (0.205)       −0.126 (0.213)
Income (30–75k)            −0.390*** (0.125)    −0.403** (0.172)     −0.394** (0.184)
Ed (high school or less)   −0.243* (0.136)      −0.331* (0.193)      −0.139 (0.190)
Ed (some college)          −0.329*** (0.119)    −0.433*** (0.165)    −0.218 (0.172)
Perceived algorithm bias   −0.278** (0.110)     −0.246 (0.152)       −0.330** (0.159)
Trust in tech companies    0.437*** (0.119)     0.450*** (0.167)     0.462*** (0.171)
Resume                     0.434*** (0.107)
Observations               4245                 2088                 2157
Log Likelihood             −2505.69             −1288.45             −1200.16
Akaike Inf. Crit.          5037.38              2600.89              2424.33

Note.
1. Reference groups: Race (white); Age (18–29); Income (75k+); Education (college graduate or higher)
2. Numbers in parentheses are standard errors
3. *p < 0.1; **p < 0.05; ***p < 0.01
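Analogously, the following is a minimal sketch of the weighted logistic regression in Table B3, under the same hypothetical variable names as above. statsmodels' GLM accepts per-observation weights via var_weights, which reproduces the weighted point estimates; the published standard errors would further call for design-based corrections.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("atp_wave35.csv")  # hypothetical extract, as above

predictors = ["female", "nonwhite", "age_30_49", "age_50_64", "age_65plus",
              "inc_lt30k", "inc_30_75k", "ed_hs_or_less", "ed_some_college",
              "perceived_bias", "trust_tech", "resume_scenario"]
X = sm.add_constant(df[predictors])

# Binary acceptability response; Pew survey weights enter as var_weights.
logit = sm.GLM(df["acceptable"], X, family=sm.families.Binomial(),
               var_weights=df["weight"])
print(logit.fit().summary())
```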

References

[1] T.R. Shah, Can big data analytics help organisations achieve sustainable competitive advantage? A developmental enquiry, Technol. Soc. 68 (2022) 101801, https://doi.org/10.1016/j.techsoc.2021.101801.
[2] N. Shah, Z. Irani, A.M. Sharif, Big data in an HR context: exploring organizational change readiness, employee attitudes and behaviors, J. Bus. Res. 70 (2017) 366–378, https://doi.org/10.1016/j.jbusres.2016.08.010.
[3] Ernst and Young, The New Age: Artificial Intelligence for Human Resource Opportunities and Functions, 2018. Retrieved from https://www.ey.com/Publication/vwLUAssets/EY-the-new-age-artificial-intelligence-for-human-resource-opportunities-and-functions/$FILE/EY-the-new-age-artificial-intelligence-for-human-resource-opportunities-and-functions.pdf.
[4] M. Bogen, A. Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, 2018. Retrieved from https://pdfs.semanticscholar.org/8775/1e071235c53fdb4411545b449537cb8ab96d.pdf?_ga=2.87551581.416061156.1578068059-1295291997.1577131277.
[5] A. Sinha, M. Akhtar, A. Kumar, Resume screening using natural language processing and machine learning: a systematic review, in: D. Swain, P. Pattnaik, T. Athawale (Eds.), Machine Learning and Information Processing, Advances in Intelligent Systems and Computing, 2021, pp. 207–214.
[6] Y. Kong, C. Xie, J. Wang, H. Jones, H. Ding, AI-assisted recruiting technologies: tools, challenges, and opportunities, in: The 39th ACM International Conference on Design of Communication, 2021, pp. 359–361.
[7] S. Rasipuram, D.B. Jayagopi, Automatic assessment of communication skill in interview-based interactions, Multimed. Tool. Appl. 77 (14) (2018) 18709–18739, https://doi.org/10.1007/s11042-018-5654-9.
[8] H.Y. Suen, K.E. Hung, C.L. Lin, Intelligent video interview agent used to predict communication skill and perceived personality traits, Human Centr. Comput. Inf. Sci. 10 (1) (2020), https://doi.org/10.1186/s13673-020-0208-3.
[9] LinkedIn, Global Recruiting Trends 2018, 2018. Retrieved from https://business.linkedin.com/content/dam/me/business/en-us/talent-solutions/resources/pdfs/linkedin-global-recruiting-trends-2018-en-us2.pdf.
[10] B.R. Maurer, Digital Video Upgrades the Hiring Experience, 2017. Retrieved from Society for Human Resource Management website: https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/digital-video-upgrades-the-hiring-experience.aspx.
[11] N.R. Kuncel, D.M. Klieger, D.S. Ones, In hiring, algorithms beat instinct, Harvard Business Review, May 2014, p. 32.
[12] S. Barocas, A.D. Selbst, Big data's disparate impact, Calif. Law Rev. 104 (3) (2016) 671–732.
[13] M. Bogen, All the ways hiring algorithms can introduce bias, Harvard Business Review, May 6, 2019. Retrieved from https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias.
[14] I. Ajunwa, S. Friedler, C.E. Scheidegger, S. Venkatasubramanian, Hiring by Algorithm: Predicting and Preventing Disparate Impact, 2016. Available at SSRN.
[15] J. Kleinberg, J. Ludwig, S. Mullainathan, C.R. Sunstein, Discrimination in the age of algorithms, J. Legal Anal. 10 (2018) 113–174, https://doi.org/10.1093/jla/laz001.
[16] C. Kaibel, I. Koch-Bayram, T. Biemann, M. Mühlenbock, Applicant perceptions of hiring algorithms – uniqueness and discrimination experiences as moderators, in: Academy of Management Proceedings, 18172, Briarcliff Manor, NY, 2019.
[17] M. Langer, C.J. König, M. Papathanasiou, Highly automated job interviews: acceptance under the influence of stakes, Int. J. Sel. Assess. 27 (3) (2019) 217–234, https://doi.org/10.1111/ijsa.12246.
[18] M. Langer, C.J. König, D.R.P. Sanchez, S. Samadi, Highly automated interviews: applicant reactions and the organizational context, J. Manag. Psychol. 35 (4) (2019) 301–314, https://doi.org/10.1108/JMP-09-2018-0402.
[19] D.T. Newman, N.J. Fast, D.J. Harmon, When eliminating bias isn't fair: algorithmic reductionism and procedural justice in human resource decisions, Organ. Behav. Hum. Decis. Process. 160 (2020) 149–167, https://doi.org/10.1016/j.obhdp.2020.03.008.
[20] M.G. Morris, V. Venkatesh, Age differences in technology adoption decisions: implications for a changing work force, Person. Psychol. 53 (2000) 375–403.
[21] V. Venkatesh, M.G. Morris, P.L. Ackerman, A longitudinal field investigation of gender differences in individual technology adoption decision-making processes, Organ. Behav. Hum. Decis. Process. 83 (1) (2000) 33–60, https://doi.org/10.1006/obhd.2000.2896.
[22] A. Köchling, M.C. Wehner, Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development, Bus. Res. 13 (3) (2020) 795–848, https://doi.org/10.1007/s40685-020-00134-w.
[23] D. Steiner, Personnel selection across the globe, in: N. Schmitt (Ed.), The Oxford Handbook of Personnel Assessment and Selection, Oxford Library of Psychology, Oxford, 2012.
[24] G.N. Burns, N.D. Christiansen, M.B. Morris, D.A. Periard, J.A. Coaster, Effects of applicant personality on resume evaluations, J. Bus. Psychol. 29 (4) (2014) 573–591, https://doi.org/10.1007/s10869-014-9349-6.
[25] Gartner, Gartner HR Survey Shows 86% of Organizations Are Conducting Virtual Interviews to Hire Candidates during Coronavirus Pandemic, 2020. Retrieved from https://www.gartner.com/en/newsroom/press-releases/2020-04-30-gartner-hr-survey-shows-86–of-organizations-are-cond.
[26] M. Waung, R.W. Hymes, J.E. Beatty, The effects of video and paper resumes on assessments of personality, applied social skills, mental capability, and resume outcomes, Basic Appl. Soc. Psychol. 36 (3) (2014) 238–251, https://doi.org/10.1080/01973533.2014.894477.
[27] K.A. Houser, Can AI solve the diversity problem in the tech industry? Mitigating noise and bias in employment decision-making, Stanford Technol. Law Rev. 22 (2019) 1–42.
[28] B. Cowgill, Automating judgement and decision-making: theory and evidence from resume screening, Columbia University, Empirical Management Conference, 2017.
[29] I. Ajunwa, The Paradox of Automation as Anti-bias Intervention, Cardozo Law Review, 2020.
[30] I.A. Hamilton, Amazon Built an AI Tool to Hire People but Had to Shut it Down Because it Was Discriminating against Women, Business Insider, 2018. Retrieved from https://www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10.
[31] M.C. Campion, M.A. Campion, E.D. Campion, M.H. Reider, Initial investigation into computer scoring of candidate essays for personnel selection, J. Appl. Psychol. 101 (7) (2016) 958–975.
[32] S. Sajjadiani, A.J. Sojourner, J.D. Kammeyer-Mueller, E. Mykerezi, Using machine learning to translate applicant work history into predictors of performance and turnover, J. Appl. Psychol. 104 (10) (2019) 1207–1225.
[33] I. Naim, I. Tanveer, M. Hoque, Automated analysis and prediction of job interview performance, IEEE Transac. Affec. Comput. 9 (2) (2018) 191–204.


[34] L.F. Barrett, R. Adolphs, S. Marsella, A.M. Martinez, S.D. Pollak, Emotional expressions reconsidered: challenges to inferring emotion from human facial movements, Psychol. Sci. Public Interest 20 (1) (2019) 1–68, https://doi.org/10.1177/1529100619832930.
[35] W. Payre, J. Cestac, P. Delhomme, Intention to use a fully automated car: attitudes and a priori acceptability, Transport. Res. F Traffic Psychol. Behav. 27 (2014) 252–263.
[36] V. Alexander, C. Blinder, P.J. Zak, Why trust an algorithm? Performance, cognition, and neurophysiology, Comput. Hum. Behav. 89 (2018) 279–288, https://doi.org/10.1016/j.chb.2018.07.026.
[37] N. Castelo, M.W. Bos, D.R. Lehmann, Task-dependent algorithm aversion, J. Market. Res. 56 (5) (2019) 809–825, https://doi.org/10.1177/0022243719851788.
[38] B.J. Dietvorst, J.P. Simmons, C. Massey, Algorithm aversion: people erroneously avoid algorithms after seeing them err, J. Exp. Psychol. Gen. 144 (1) (2015) 114–126, https://doi.org/10.2139/ssrn.2466040.
[39] M.K. Lee, Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management, Big Data Soc. 5 (1) (2018), https://doi.org/10.1177/2053951718756684.
[40] J.M. Logg, J.A. Minson, D.A. Moore, Algorithm appreciation: people prefer algorithmic to human judgment, Organ. Behav. Hum. Decis. Process. 151 (2019) 90–103, https://doi.org/10.1016/j.obhdp.2018.12.005.
[41] N. Thurman, J. Moeller, N. Helberger, D. Trilling, My friends, editors, algorithms, and I: examining audience attitudes to news selection, Digital J. 7 (4) (2019) 447–469, https://doi.org/10.1080/21670811.2018.1493936.
[42] T. Araujo, N. Helberger, S. Kruikemeier, C.H. de Vreese, In AI we trust? Perceptions about automated decision-making by artificial intelligence, AI Soc. (2020), https://doi.org/10.1007/s00146-019-00931-w.
[43] A. Bandura, Social Foundations of Thought and Action: A Social Cognitive Theory, Prentice-Hall, Englewood Cliffs, NJ, 1986.
[44] V. Venkatesh, J.Y.L. Thong, X. Xu, Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology, MIS Q. 36 (1) (2012) 157–178.
[45] Z. Huang, J. Mou, Gender differences in user perception of usability and performance of online travel agency websites, Technol. Soc. 66 (2021) 101671, https://doi.org/10.1016/j.techsoc.2021.101671.
[46] A. Gran, P. Booth, T. Bucher, To be or not to be algorithm aware: a question of a new digital divide? Inf. Commun. Soc. (2020) 1–18, https://doi.org/10.1080/1369118X.2020.1736124.
[47] J.O. Maldifassi, E.C. Canessa, Information technology in Chile: how perceptions and use are related to age, gender, and social class, Technol. Soc. 31 (3) (2009) 273–286, https://doi.org/10.1016/j.techsoc.2009.03.006.
[48] E.M. Rogers, Diffusion of Innovations, The Free Press, New York, NY, 2005.
[49] M.G. Ames, J. Go, J. Kaye, M. Spasojevic, Understanding technology choices and values through social class, in: Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, Hangzhou, China, 2011, pp. 55–64.
[50] C. Porter, N. Donthu, Using the technology acceptance model to explain how attitudes determine Internet usage: the role of perceived access barriers and demographics, J. Bus. Res. 59 (2006) 999–1007, https://doi.org/10.1016/j.jbusres.2006.06.003.
[51] A. Gonzales, The contemporary US digital divide: from initial access to technology maintenance, Inf. Commun. Soc. 19 (2) (2016) 234–248.
[52] D. Compton, L. Bachman, D. Brand, T. Avet, Age-associated changes in cognitive function in highly educated adults: emerging myths and realities, Int. J. Geriatr. Psychiatr. 15 (1) (2000) 75–85.
[53] M.E. Ouirdi, A.E. Ouirdi, J. Segers, I. Pais, Technology adoption in employee recruitment: the case of social media in Central and Eastern Europe, Comput. Hum. Behav. 57 (2016) 240–249.
[54] M. Dupagne, M.B. Salwen, Communication technology adoption and ethnicity, Howard J. Commun. 16 (1) (2005) 20–32, https://doi.org/10.1080/10646170590915826.
[55] M. Madden, Internet Penetration and Impact, 2006. Retrieved from https://www.pewresearch.org/internet/2006/04/26/internet-penetration-and-impact/.
[56] U.A. Mitchell, P.G. Chebli, L. Ruggiero, N. Muramatsu, The digital divide in health-related technology use: the significance of race/ethnicity, Gerontol. 59 (1) (2019) 6–14, https://doi.org/10.1093/geront/gny138.
[57] A. Smith, Public Attitudes towards Computer Algorithms, 2018. Retrieved from https://www.pewresearch.org/internet/2018/11/16/public-attitudes-toward-computer-algorithms/.
[58] Edelman, Edelman Trust Barometer 2020, 2020. Retrieved from https://www.edelman.com/sites/g/files/aatuss191/files/2020-01/2020 Edelman Trust Barometer Global Report.pdf.
[59] E. Maltz, A.K. Kohli, Market intelligence dissemination across functional boundaries, J. Market. Res. 33 (1996) 47–61.
[60] X. Xu, C. Fan, Autonomous vehicles, risk perceptions and insurance demand: an individual survey in China, Transport. Res. Part A 124 (2019) 549–556, https://doi.org/10.1016/j.tra.2018.04.009.
[61] CBInsights, Big Tech in AI: What Amazon, Apple, Google, GE, and Others Are Working on, 2017. Retrieved from https://www.cbinsights.com/research/top-tech-companies-artificial-intelligence-expert-intelligence/.
[62] R. Wagner, Managerial problem solving, in: R. Sternberg, P. Frensch (Eds.), Complex Problem Solving: Principles and Mechanisms, Lawrence Erlbaum, Hillsdale, NJ, 1991, pp. 159–183.
[63] A.P. Miller, Want Less-Biased Decisions? Use Algorithms, Harvard Business Review, 2018. Retrieved from https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms.
[64] A. Chander, The racist algorithm? Mich. Law Rev. 115 (6) (2017) 1023–1044.
[65] J. Li, J.S. Huang, Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory, Technol. Soc. 63 (2020) 101410, https://doi.org/10.1016/j.techsoc.2020.101410.
[66] Pew Research Center, American Trends Panel Wave 35, 2018. Retrieved from https://www.pewresearch.org/internet/dataset/american-trends-panel-wave-35/.
[67] C. Nicoletti, F. Peracchi, F. Foliano, Estimating income poverty in the presence of missing data and measurement error, J. Bus. Econ. Stat. 29 (1) (2011) 61–72, https://doi.org/10.1198/jbes.2010.07185.
[68] N. Schenker, T.E. Raghunathan, P.L. Chiu, D.M. Makuc, G. Zhang, A.J. Cohen, Multiple imputation of missing income data in the National Health Interview Survey, J. Am. Stat. Assoc. 101 (475) (2006) 924–933, https://doi.org/10.1198/016214505000001375.
[69] B. Byrne, Structural Equation Modeling with AMOS, Lawrence Erlbaum Associates Publishers, New Jersey, 2001.
[70] A. O'Connell, Logistic Regression Models for Ordinal Response Variables, 2011, https://doi.org/10.4135/9781412984812.
[71] G. Solon, S.J. Haider, J.M. Wooldridge, What are we weighting for? J. Hum. Resour. 50 (2) (2015) 301–316, https://doi.org/10.3368/jhr.50.2.301.
[72] T. Chamorro-Premuzic, Will AI Reduce Gender Bias in Hiring? Harvard Business Review, 2019. Retrieved from https://hbr.org/2019/06/will-ai-reduce-gender-bias-in-hiring.
[73] M. Makhubela, P.A. Botha, S. Swanepoel, Employees' perceptions of the effectiveness and fairness of performance management in a South African public sector institution, SA J. Hum. Resour. Manag. 14 (1) (2016) 1–11, https://doi.org/10.4102/sajhrm.v14i1.728.
[74] C. Longoni, A. Bonezzi, C.K. Morewedge, Resistance to medical artificial intelligence, J. Consum. Res. 46 (4) (2019) 629–650, https://doi.org/10.1093/jcr/ucz013.
[75] J. Ostheimer, S. Chowdhury, S. Iqbal, An alliance of humans and machines for machine learning: hybrid intelligent systems and their design principles, Technol. Soc. 66 (2021) 101647, https://doi.org/10.1016/j.techsoc.2021.101647.
[76] V. Chattaraman, W.S. Kwon, J.E. Gilbert, Y. Li, Virtual shopping agents: persona effects for older users, J. Res. Interact. Mark. 8 (2) (2014) 144–162, https://doi.org/10.1108/JRIM-08-2013-0054.
[77] D. Li, L. Raymond, P. Bergman, Hiring as Exploration, NBER Working Paper No. w27736, National Bureau of Economic Research, 2020, https://doi.org/10.2139/ssrn.3630630.
[78] L. Bergkvist, J. Rossiter, Tailor-made single-item measures of doubly concrete constructs, Int. J. Advert. 28 (4) (2009) 607–621, https://doi.org/10.2501/S0265048709200783.
