
Article

When Artificial Intelligence Meets Behavioural Economics

NHRD Network Journal
1–12
© 2020 National HRD Network, Gurgaon
Reprints and permissions: in.sagepub.com/journals-permissions-india
DOI: 10.1177/2631454120974810
journals.sagepub.com/home/nhr

Girish Balasubramanian1

Abstract
Behavioural economics has its roots in the problems of rationality and optimising expected utility, especially the empirical evidence of individuals acting against expected norms. Artificial intelligence (AI), on the other hand, is premised on the dominant idea that, because of dispositional factors, the human being often might be akin to a disturbance in an otherwise smooth system. The intersection of these two areas is thus decision-making under uncertainty. Put together, the two concepts have interesting implications for organisations. This article explores the impact of AI and Behavioural Economics on the human resources (HR) function of an organisation. Some of the contemporary applications of AI augmenting decision-making are presented using the lens of the HR Value Chain, and, based on these applications, implications for organisations are discussed. Despite its limitations, AI, as a technology, is soon going to be embraced by firms, leading to hybrid organisations. As a result, organisations need to redesign their processes and policies.

Keywords
Behavioural economics, artificial intelligence, meta-organisations, hybrid organisations

Introduction
The much decorated sports legend of Indian cricket, Mahendra Singh Dhoni, announced his retirement from all forms of international cricket (Espncricinfo, 2020). He was known for out-of-the-box tricks on the field. Two of his arguably most remembered decisions came in the first T20 International Cricket World Cup: (a) when the match between India and Pakistan was tied and had to be decided using a bowl-out,1 and (b) in the tense moments of the final, when the ball was tossed to a lesser known Joginder Sharma to defend 13 runs; the penultimate over had been bowled by the more regular R. V. Singh, and all other strike bowlers had exhausted their quota (Espncricinfo, 2020). That both these decisions worked in the captain's and the team's favour is a different story, but cricketing pundits were shocked at the unconventional decisions taken by the captain. In both situations, if one were to rely on conventional wisdom or rationality, strike bowlers/regular bowlers should have been chosen over lesser known faces.

1 Indian Institute of Management Lucknow, Uttar Pradesh, India.

Corresponding author:
Girish Balasubramanian, Indian Institute of Management Lucknow, Uttar Pradesh 226013, India.
E-mail: girish.balasubramanian@iiml.ac.in

Dissecting these decisions almost a decade later would most certainly be prone to survivorship bias and hindsight bias. Outside the sporting world as well, these decisions definitely reignited the debate over rationality versus intuition as the basis for decisions. Given the complex environment, managers often tend to rely on their intuition; however, contemporary business situations are also high-pressure situations, where managers and executives are expected to make beneficial decisions.
There is a rich legacy of inquiry into irrational decisions, the foundations of which were laid by Kahneman and Tversky, who can be considered the founding fathers of Behavioural Economics (Heukelom, 2007), further supplemented by works such as The Black Swan, Fooled by Randomness (Taleb, 2001) and Predictably Irrational (Ariely, 2008). Decisions play an important role in our daily life as well as in organisations. Much academic pursuit has been devoted to understanding the nuts and bolts of decision-making, including mathematical models (Bernoulli, 1954; Neumann & Morgenstern, 2007; Samuelson, 1977), and it would suffice to say that decision-making is still a black box. Given the complex world that we are in and the limitation of the human mind in processing huge amounts of data, making decisions after processing such data often would be enervating, leading to what is also known as 'decision stress' (Toffler, 1999, p. 355).
Not so long ago, the invention of the steam engine, which mechanised the production of goods, brought about the industrial revolution. The industrial revolution also laid the foundation for large modern-day industrial corporations. The assembly line implemented the division of labour to the hilt, and statistical quality control improved the production process (Douglas, 2003; Junji, 1995). Rapid strides in computing and data storage led to the inevitable possibility of cognition in machines. Machines such as ENIAC (the Electronic Numerical Integrator and Computer), and later IBM's Deep Blue and Watson, opened the doors for intelligent machines, one of the first instances being Deep Blue defeating the then world chess champion Garry Kasparov in a game of chess in 1997.2
In contemporary times, artificial intelligence (AI) has invaded our personal spaces, knowingly or unknowingly, in the form of Siri/Cortana/Alexa. Even as this article is being written, driverless cars are a reality, robots have been deployed at Hyderabad airport, and Saudi Arabia has become the first nation to grant citizenship to a humanoid robot, Sophia (Griffin, 2017; Sources, 2017). It is posited that in the near future AI is going to be a major disruption in doing business as we know it, and we have to be prepared for it: for instance, the use of bots for testing code, chatbots for assisting customers or apps to manage employee payroll and attendance.
Rather than adopting an adversarial approach towards AI, one can try to co-opt AI into business processes (Raisch & Krakowski, 2020). As per estimates, the adoption of AI would lead to a net creation of about 60 million new jobs by 2022 (Choudhury, 2018). The confluence of AI and Behavioural Economics is the decision-making process, more specifically decision-making under risk/uncertainty. The central thesis of this article is that AI will alter the decision-making process in organisations. The attributes for decision-making have been identified as speed, consistency and efficiency. Given these attributes, the possible impact on some of the activities across the value chain has been discussed.
In the next section, key terms such as decision-making, rationality, heuristics and intuition are defined,
followed by a brief assessment of the current situation in terms of both Behavioural Economics and AI.
Subsequently, an attempt is made to speculate about the future. I end this article by giving some pointers
for the way forward.

Understanding Some Key Terms


Before proceeding further, it is important to have a basic understanding of some of the key terms, namely
decision-making, rationality, heuristics and intuition. The first and foremost is to clarify what decision-making is. Traditionally, decision-making is associated with the choice that is made; however, a broader
view of decision-making encompasses the pre-screening of options to narrow them down to a more manageable set as well (Beach, 1993). While it is acknowledged that decision-making is complex, akin to a
black box—decisions are often made based on intuitions, heuristics or rationality, or a combination of one
or more of these. It is beyond the scope of this article to delve into a detailed review of these concepts.
For the purpose of this article, we have taken the definition of heuristics as, ‘A heuristic is a strategy
that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or
accurately than more complex methods’ (Gigerenzer & Gaissmaier, 2011, p. 454). Thus, heuristics-
based decision-making is often quick and based on thumb rules.
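To make the quoted definition concrete, a fast-and-frugal heuristic such as take-the-best can be sketched in a few lines of Python. The cities, cues and cue ordering below are purely illustrative assumptions, not data from this article:

```python
def take_the_best(option_a, option_b, cues):
    """Take-the-best heuristic: check cues from most to least valid and
    decide on the FIRST cue that discriminates, ignoring all the rest."""
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a != b:                      # this cue discriminates: decide now
            return option_a if a > b else option_b
    return None                         # no cue discriminates: guess or defer

# Which city is larger? Binary cues, ordered by assumed validity (illustrative data).
facts = {
    "Mumbai":  {"stock_exchange": 1, "intl_airport": 1},
    "Lucknow": {"stock_exchange": 0, "intl_airport": 1},
}
cues = [
    lambda city: facts[city]["stock_exchange"],   # most valid cue first
    lambda city: facts[city]["intl_airport"],
]
print(take_the_best("Mumbai", "Lucknow", cues))   # prints Mumbai
```

Note how the procedure deliberately ignores the second cue once the first one discriminates, which is exactly the 'ignores part of the information' in the definition above.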
Another closely related term is intuition. Intuition is defined as, ‘affectively charged judgments that arise
through rapid, non-conscious, and holistic associations’ (Dane & Pratt, 2007, p. 33). Hunch, gut-feel and
mystical insights are some of the other terms associated with intuition, and it is also a relatively fast
decision-making method. Rationality, on the other hand, has been defined as ‘concerned with a selection of
preferred behaviour alternative in terms of some system of values whereby the consequences of behaviour
can be evaluated' (Simon, 1997, p. 84). Rationality is better understood given a context and when it is used in conjunction with an appropriate adverb, for instance, 'objectively rational' or 'consciously rational'
(Simon, 1997). Rational decision-making is quite analytic and logical in nature as opposed to heuristics or
intuition. While, in classical economics, utility is often conflated with value, and rational actors end up
maximising utility, it is worth mentioning that in real life the economic actors often end up satisficing rather
than maximising their utilities, owing to limitations like bounded rationality (Simon, 1997).
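The contrast between maximising and satisficing can be sketched as follows; the salary figures and aspiration level are invented for illustration:

```python
def maximise(options, utility):
    """Classical rationality: evaluate every option and pick the best."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Simon's bounded rationality: stop at the first 'good enough' option."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # nothing met the aspiration level

offers = [52, 48, 61, 75, 58]                          # e.g., salary offers (illustrative)
print(maximise(offers, lambda o: o))                   # 75: requires scanning the whole list
print(satisfice(offers, lambda o: o, aspiration=60))   # 61: stops at the third offer
```

The satisficer trades away the best outcome (75) for a cheaper search, which is precisely why bounded rationality produces 'good enough' rather than optimal decisions.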

The Current Scenario


Classical economics is premised on the rationality assumption of economic actors, which essentially means that economic actors optimise their utility. However, a number of instances have been recorded, under both experimental and real-life conditions, where individuals have not followed the rules of rational decision-making. Decision-making under uncertainty has been of particular interest to scholars, as indicated by a large body of interesting puzzles and problems such as Russian roulette, the St. Petersburg paradox and other such gambling problems (see Bernoulli, 1954; Samuelson, 1977). The groundwork for behavioural economics was carried out by Bernoulli, who solved the St. Petersburg paradox on the basis of psychophysics and, in the process, introduced the concept of 'maximizing the expected utility as the basis for rational decision behavior under uncertainty' (Heukelom, 2007, p. 4). Neumann and Morgenstern (2007) further equated monetary value and utility. Consequently, this simplistic assumption led to economic actors maximising utility or, in other words, monetary value. Kahneman and Tversky (1979) debunked the expected utility theory (EUT) through a series of experiments, demonstrated its shortcomings and came up with an alternative in the form of Prospect Theory. Their thesis was that descriptive as well as normative models of economics had their shortcomings, and, consequently, they proposed a normative–descriptive–prescriptive model.
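Bernoulli's move can be illustrated numerically. In the St. Petersburg game, a fair coin is tossed until the first head; if this occurs on toss k, the payoff is 2^k. A short Python sketch shows that the expected monetary value diverges while expectation under Bernoulli's logarithmic utility converges:

```python
import math

def st_petersburg(n_terms):
    """Partial sums for the St. Petersburg game: payoff 2**k with probability 2**-k."""
    ev = sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))          # each term adds 1
    eu = sum((0.5 ** k) * math.log(2 ** k) for k in range(1, n_terms + 1))  # log utility
    return ev, eu

ev, eu = st_petersburg(60)
print(f"expected money (60 terms): {ev:.0f}  (one more per term: diverges)")
print(f"expected log-utility: {eu:.4f}  (converges to 2 ln 2 = 1.3863...)")
print(f"certainty equivalent: {math.exp(eu):.2f}")  # a log-utility agent values the game at ~4
```

The paradox is that no one would pay an unbounded price to play, and the logarithmic utility resolves it: the certainty equivalent settles at a modest 4 ducats.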
In contemporary times, Behavioural Economics can be broadly divided into two strands branching from the EUT. One strand builds on Kahneman and Tversky's seminal Prospect Theory; they did not completely discard the EUT but modified it suitably to take into account the behavioural aspects of decision-making. The other strand builds on the works of the economist Friedman and the mathematician Savage, originating in the Russell Sage Foundation, which discarded the EUT completely. It is to be noted, though, that the focus of both strands is decision-making under uncertainty. Contrary to the popular perception of Behavioural Economics being predominantly an academic pursuit, Nassim Nicholas Taleb and Dan Ariely have, through their works Fooled by Randomness and Predictably Irrational, demonstrated how fragile individuals' understanding of probability is, especially in decision-making under uncertainty.
AI as an academic discipline has its roots in a conference organised at Dartmouth in 1956, which brought together scholars from diverse fields with the sole aim of systematic inquiry into computer-based learning and decision-making (Von Krogh & Roos, 1995). While the foundations were laid in the 1960s, not much work was carried out in this field until the 1990s. Prospects were revived with Deep Blue defeating Garry Kasparov in a game of chess in 1997, and since then there has been no looking back. From a systems perspective, AI can be thought of as a set of inputs (data such as images/text/pictures) processed through algorithms, leading to some outputs, typically decisions (Nilsson, 1971). The key difference is that there is feedback from the output, which enables the system to improve its performance and learn on its own. Consequently, it might be akin to the human mind in its ability to perform certain cognitive functions (Von Krogh, 2018).
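The systems view described above (inputs processed by an algorithm, outputs as decisions, and feedback that improves performance) can be sketched as a minimal online-learning loop. The linear model, learning rate and data are illustrative assumptions, not anything specified in the text:

```python
import random

def online_learner(stream, lr=0.1):
    """Minimal input -> algorithm -> output -> feedback loop: a linear model
    predicts, the observed error feeds back, and the weights improve."""
    w, b = 0.0, 0.0
    for x, y in stream:          # input data arrives one case at a time
        pred = w * x + b         # output: the system's decision/estimate
        error = y - pred         # feedback from the environment
        w += lr * error * x      # the feedback updates the model...
        b += lr * error          # ...so later decisions get better
    return w, b

# Learn the relation y = 2x + 1 from noisy observations
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(500)]
stream = [(x, 2 * x + 1 + random.gauss(0, 0.01)) for x in xs]
w, b = online_learner(stream)
print(f"learned w ≈ {w:.2f}, b ≈ {b:.2f}")  # should approach 2 and 1
```

The loop never sees the rule it is learning; it only sees its own errors, which is exactly the feedback property that distinguishes a learning system from a fixed program.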
AI can be classified into narrow AI, with very limited applications; general AI, with the ability to learn and adapt; and super AI, akin to human consciousness, which is as yet only a theoretical possibility (O'Carol, 2017). While it is not within the scope of this article to trace the development of AI and its applications, the reasons for the rapid adoption of AI in organisations can be summarised as the exponential growth in memory storage and computation abilities, advancements in the fields of cybernetics and neural networks, and the growth of the Internet, which has also led to the growth of cloud-based services (Von Krogh, 2018).
The confluence of AI and Behavioural Economics is the interesting area of decision-making under uncertainty. Decision-making might be at the individual level or at the organisational level. For the sake of brevity, this article focuses on decision-making by organisations under uncertainty. Scholars have inquired into the decision-making of organisations from the viewpoints of the rational normative model, external control, strategic choice and the garbage can model, to name a few (see Cohen et al., 1972; Hitt & Tyler, 1991 for more details). The normative model posits that organisations assess the external and internal conditions and arrive at an appropriate choice based on objective criteria. The garbage can model, on the other hand, describes decision-making in organised anarchies, which are characterised by unclear technologies, ambiguous preferences and varying degrees of participation by members.

[Figure 1 contrasts the two decision bases: human beings take decisions based on dispositional factors, heuristics/intuition and limited data, whereas AI takes decisions based on input data, complex algorithms and the ability to store, retrieve and process huge amounts of data. The decision attributes of interest are speed, consistency, risk and efficiency.]

Figure 1. Figure Summarizing Decision-making Attributes and Decision Basis of Human Beings vis-à-vis AI.
Source: Prepared by the author based on the automation/augmentation paradox by Raisch and Krakowski (2020).

From the perspective of the organisation, it is to be noted that the manager is often expected to make rational, efficient, quick and consistent decisions, taking into consideration all possible variables, which often leads to decision stress (see Toffler, 1999 on decision stress). Human decision-making has further been modelled as an organised and symbol-driven search activity (Masuch & LaPotin, 1989). Based on some of the current applications in organisations, it is argued in the subsequent section that AI has the potential to reduce this decision stress, consequently enabling organisations to make more efficient and rational decisions. Figure 1 summarises the attributes that are necessary in an organisational decision and the factors on the basis of which decisions are made by humans vis-à-vis AI.

Speculating the Future


It took almost two centuries to use the steam engine and its principles to make a usable automobile, while it took only two decades for computers to evolve from bulky machines to almost all-pervasive computing devices, and it took less than a decade for mobile phones to become smart computing devices (Makridakis, 2017). To quote Cornell Professor Block, 'I don't think there's a task you can name that a machine can't do—in principle. If you can define a task and a human can do it, then a machine can, at least in theory, also do it. The converse, however, is not true'. As envisaged in the book Future Shock, in contemporary times, firms not only go for extreme customisation of products and services (for instance, no two individuals' Netflix menus are likely to be similar) but also bombard the senses with innumerable stimuli, often leading to confusion or decision paralysis, and probably to decision stress (Toffler, 1999). Pre-programmed decisions often reduce stress, which builds a case for AI-based interventions. This probably also explains the invasion of Siri/Cortana/Alexa into our personal lives as well as organisational applications like the assessment of risk before disbursing loans.
Rather than taking an adversarial approach towards AI, scholars have advocated a co-opting approach (Raisch & Krakowski, 2020). The central thesis is that AI essentially augments humans in their work. The traditional approach has been to separate routine, mundane/operational tasks from managerial tasks. Consequently, management scholars have paid meagre attention to AI; but AI is pervasive in organisations, and therefore the managerial understanding and applications of AI need to be inquired into systematically. Automation implies that machines take over the human task, while augmentation means that humans collaborate closely with machines to perform a task. It is actually an iterative process, wherein human intervention is needed till the process is mastered; subsequently, it can be automated and human intervention can be reduced (Raisch & Krakowski, 2020).
The argument is that machines can be taught to make decisions; moreover, machines are much more consistent, have the ability to process huge amounts of data and do not suffer from information overload. The dominant understanding was that only routine and mundane tasks could be automated; however, AI has given rise to the possibility that even complex processes can be automated through repeated iterations. Decision-making is a complex process subject to many constraints. It is a matter of time before intelligent machines are designed/developed. The decisions made by such intelligent machines are likely to be based on proper metrics/rubrics, and they are likely to be quick, consistent and efficient.
In 1996, Deep Blue famously challenged the then world chess champion Garry Kasparov and was defeated; unfortunately, Deep Blue was not self-learning. In a 1997 rematch, Deep Blue defeated Kasparov, which was considered a huge achievement at that point of time. While critics might argue that Deep Blue was, after all, only a complex program based on intricate if/then logic and laborious instructions, its successor, Watson, could comprehend English and also learn from its past mistakes (Kolbert, 2016).

AI has already been used to carry out some interesting activities in organisations. For instance, JPMorgan Chase has developed Contract Intelligence (COiN), which is able to review and interpret commercial loan agreements; it is not only cost-effective and efficient but has also eliminated nearly 400,000 person-hours of mundane work, earning the sobriquet of 'Robo-Banking' (Imaginovation, 2019). Not surprisingly, technological firms such as Google, Microsoft and Amazon are investing in the development of AI-based tools. The use of AI has found its way into applications such as trading, extending credit as well as recruitment. For instance, UBS, in collaboration with Deloitte, developed a system which scans the emails of investors and then takes a decision on the allocation of funds for investment; manually, this process would take about 45 min (as per the bank's estimates), which was reduced to about 2 min. In another application, AI combines the investor's portfolio and previous data on market trends to develop an investment strategy for the client (Arnold & Newman, 2016). Another application from the financial sector assesses creditworthiness. Sanctioning credit is inherently a very complex process, prone to human biases and follies (for instance, the sub-prime crisis). Despite the complexities, the whole process of determining creditworthiness and eligibility for credit has been automated, consequently reducing the time taken for credit-allocation decisions (Goudarzi et al., 2018).
Using AI has another advantage: human bias based on demographic profiles or other such heuristics can be reduced (Daugherty & Wilson, 2018). These applications are only the tip of the iceberg and offer immense possibilities for firms. A bulk of the resources of contemporary firms is spent on acquiring talent or on the optimal allocation of talent to existing projects. It is, indeed, a Herculean task to sift through so many résumés to narrow down the pool of interested applicants. Besides, there have been instances of biases/discrimination based on demographics or stereotypes affecting the recruitment and selection of candidates, which can possibly be reduced by taking the help of AI (Hmoud & Varallyai, 2019). Not only will organisations save precious time and resources on acquiring talent, but they are also likely to take more consistent, efficient and less biased decisions.
This is not to say that AI is a totally fool-proof tool, as evidenced by Amazon, which had to discontinue using AI for recruitment after it was found to discriminate against female applicants for technical jobs (Iriondo, 2018). One can definitely infer that the biased learning is the result of noise-prone data; but, just as Rome was not built in a day, AI has made strides from ENIAC to Deep Blue to Watson, and so these problems too will be overcome, and more robust applications and tools will be developed.
One of the shortcomings is the data set that is used to train the AI. Thus, efforts are being made in this direction to have data sets that are as representative as possible, in addition to having separate training and validation data sets. Natural language processing probably holds the key to more advanced applications of AI and may open up new and interesting avenues (Danilevsky et al., 2020). The other major limitation of AI is its inability to replicate and understand emotion, along with the ethical concerns around the decisions of the AI (Hagendorff, 2020), which should hopefully be overcome through breakthroughs in natural language processing.

Implications for Organisations with Respect to Behavioural Economics and Artificial Intelligence
The foregoing discussion has set up the canvas for a discussion about the future of work and organisations, especially decision-making under uncertainty. It is posited that organisations are proceeding towards 'hybrid organisations': the integration of intelligent machines and human beings to form new organisations. The current pandemic scenario and the consequent change in the working
systems offer us a remarkable chance to speculate about one of the very interesting possibilities of AI and, consequently, its effect on the decision-making of firms. The pandemic has definitely led to some amount of doubt and uncertainty, owing to which many firms have either put their expansion plans on hold, including appraisals and increments, or offered relatively lower increments as compared to previous years (Basu, 2020).
The positive side, of course, is that this pandemic has also opened new business opportunities. For instance, firms have developed contactless attendance systems or tracking systems for effective contact tracing.3 Despite many contact tracing applications being commercially available, one of the really potent uses of AI could be not only to assess the current situation with respect to the infection but also to predict with reasonable accuracy the trajectory of the pandemic, which would offer a proper basis for taking business decisions rather than relying on intuition and heuristics: for instance, the use of cluster information for better prediction of the pandemic as proposed by Vaishya et al. (2020), or the usage of 5G technology to develop a mass surveillance system as proposed by Shamim Hossain et al. (2020).
The contemporary understanding and theories of management are largely based on organisations in which machines are just another tool; but with the scenario changing and intelligent machines being integrated into organisations, we may need to revisit many of these theories and practices (Baum & Haveman, 2020). For instance, researchers have studied the performance paradox in a call centre in which the important factors were effectiveness, efficiency and courteousness (Clark et al., 2019). The AI intervention would definitely have an impact not only on the rubric of performance measurement but also on the organisation. Some trace of this is already visible in the form of chatbots being the first line of customer interface before customers talk to a call centre agent for assistance. The advantage of intelligent machines is increased work efficiency as well as the ability to process immense amounts of data to arrive at a decision, consequently reducing decision stress on individuals. Thus, in an integrated/hybrid organisation in which AI is also a significant part, the following impact is envisaged along the HR value chain:
• The responsibility of HR as a function and department has increased manifold, as it needs to be prepared to navigate the complexities of this hybrid/integrated organisation. AI can be used to design/customise policies and employment packages depending on the individual preferences, risk profile, age profile and other personal characteristics of the employee, which, in turn, has implications for a lot of other softer aspects like engagement and commitment. For instance, the assistant Olivia developed by Paradox4 offers a specific product, 'Employee Care', which includes facilities such as an HR Help Desk akin to a call centre offering round-the-clock assistance, fetching information from within the Intranet, and delivering trainings or information about products or professional development. As the tag line goes, 'Olivia delivers all the answers your employees need, but don't know where to find'.5
• While organisations can take help of intelligent machines to recruit the best fit and reduce/
eliminate bias in recruitment/talent acquisition, organisations will have to be agile, invest
heavily in training and development of their employees to keep them relevant—in other words,
resort to the cycles of augmentation and automation rather than focusing on automation alone.
For instance, a team of HR from JPMorgan Chase worked closely with AI to develop a system
to assess the candidates based on reliable firm-specific predictors. The system was made
robust iteratively to remove biases and stereotypes, and it was deployed after close to a year,
consequently, automating the candidate assessment (Riley, 2018).
• Humans have always been considered as a disturbance in the larger mechanistic system, and,
consequently, human intervention has always been sought to be reduced. Thus, many of the
processes in the organisations such as recruitment, performance management, diversity initiatives
and day-to-day administration are likely to be taken over by intelligent machines with minimum
human intervention. For instance, firms such as Pymetrics, ServiceNow and Reflektive offer
their products in some of these areas. This article also earlier made a reference to the automated
process of candidate assessment at JPMorgan Chase, which is an AI-based tool.
• Labour-intensive tasks are likely to be reduced or integrated with intelligent machines. This is likely to have interesting implications for the already shrinking collectivism in firms. Hence, collectives and trade unions may have to think of ways and means of integrating intelligent machines within their fold rather than opposing technology. For instance, chatbots and digital networks allow employees to connect without barriers of geography and rally the much-needed support.6 Additionally, service-based trade unions can actually think of employing some of these technologies to bolster their service offerings (UNIEuropa, 2019).

Implications for Practitioners


AI is most likely the basis of the next industrial/digital revolution and the consequent disruption. Practitioners need to thoroughly enquire into the pros and cons of a technology before adopting it into their organisations. For instance, in 1996, when Dolly, the first mammal to be cloned, was created, it was considered a scientific breakthrough at that point of time, and it was widely speculated that the cloning of humans would be possible. But even after almost three decades, we are yet to have documented evidence of a human being cloned, albeit the abstraction of the cloning technology has been used for other purposes, notably stem cell research (Weintraub, 2016).
Organisations still exercise discretion over the use of technology. Practitioners need to proactively enter the iterative cycle of augmentation (collaborating with AI to build a robust system for a complex process) and automation (subsequently automating the process and moving on to a different set of complex processes). Figure 2 summarises the automation–augmentation process (Raisch & Krakowski, 2020). It is to be noted that although the process appears circular, it needs to be conceptualised as a helix, as the starting point for the next round would be another complex problem.
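The augmentation-to-automation cycle described above can be sketched as a simple supervision loop. The agreement threshold, window size and decision functions below are hypothetical illustrations, not part of the Raisch and Krakowski framework itself:

```python
def augmentation_to_automation(cases, model_decide, human_decide,
                               threshold=0.95, window=100):
    """Augmentation phase: human and model decide together, the human's decision
    stands, and the agreement rate is tracked. Automation phase: once the model
    matches the human on >= threshold of the last `window` cases, it decides alone."""
    agreements, decisions = [], []
    automated = False
    for case in cases:
        if automated:
            decisions.append(model_decide(case))   # automation: no human in the loop
            continue
        m, h = model_decide(case), human_decide(case)
        decisions.append(h)                        # augmentation: human has the final say
        agreements.append(m == h)
        recent = agreements[-window:]
        if len(recent) == window and sum(recent) / window >= threshold:
            automated = True                       # system judged robust: automate
    return decisions, automated

# Toy run: a model that already matches human judgement gets automated.
decide = lambda x: x > 0
decisions, automated = augmentation_to_automation(range(-75, 75), decide, decide)
print(automated)   # prints True
```

The next turn of the helix would start this loop again on a new complex problem, with `automated` reset to False, which is why the process is a spiral rather than a closed circle.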

[Figure 2 depicts a cycle: a complex problem → AI and humans work in tandem → creation of a system → iteration to make the system robust → automation of the process based on the robust system.]

Figure 2. Augmentation to Automation Process Conceptually.
Source: Raisch and Krakowski (2020).

Conclusion
Both Behavioural Economics and AI are relatively recent disciplines, which have brought organisations to the cusp of a major disruption. The meeting point of the two disciplines is the quest to understand, predict and model decision-making under uncertainty. The positives are that AI can work on complex tasks with speed, efficiency, effectiveness and consistency. AI as a technology is not bereft of limitations, which include the possibility of bias creeping in, as the data from which it learns might be noisy and prone to random errors; for instance, the use of AI at Amazon was shelved owing to the inherent bias it had picked up (Dastin, 2018). This can, however, be addressed once one is aware of the problem. We are still not clear on the moral and ethical dimensions of the decisions/alternatives presented by AI. While conceptually AI is able to take a rational and utility-optimising decision, there is less clarity on whether AI will be able to take an ethically sound decision. In our personal lives, there are many instances where we have relegated decision-making to technology. However, in organisations, decisions often tend to have larger implications and affect a larger set of individuals. We still do not have clarity on the accountability for decisions taken by AI.
To summarise, the key points of this article are that the days of decisions based on heuristics/intuition are numbered, courtesy of AI-based interventions. With computation and storage capacities being enhanced, AI will also complement individuals and organisations by giving suitable options based on proper rubrics/metrics, thereby reducing 'decision stress' among individuals. Besides, AI will also remove the effect of dispositional factors on decision-making, leading to more efficient, consistent and quick decisions. These technological developments are most definitely likely to affect the way business is carried out. Hence, practitioners need to embrace AI, albeit after carrying out due diligence.
A related assertion concerns the uniqueness of each organisation: consequently, the technology that is adopted would need to be customised rather than blindly mimicking 'success stories'. Three types of AI were mentioned earlier. Super-intelligent AI (although yet to be created) gives rise to a distinct possibility of the rise of meta-organisations, analogous to the concept of meta-humans borrowed from Deepak Chopra's recent book of the same title (Chopra, 2019). This is a speculation that needs to be revisited in the future.

Acknowledgement
The author would like to express gratitude to the editor of the special issue Dr Zubin Mulla, the anonymous referees
and the NHRDN team for their constructive comments and continuous support.

Declaration of Conflict of Interest


The author declares no potential conflict of interest with respect to the research, authorship and/or publication of
this article.

Funding
The author received no financial support for research, authorship and/or publication of this article.

Notes
1 This was first introduced in an International Cricket Council (ICC) tournament. The format was that in case the match ended in a tie, each team would be given six balls to bowl (without a batsman facing the ball), and whichever team hit the wickets the most times would be declared the winner—a modified form of the penalty shootout observed in football/hockey.

2 See https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/ for more details.


3 See https://www.incubesence.com and www.trume.in for contactless attendance and www.tagbox.in for
employee tracking system.
4 See https://www.Paradox.ai
5 See https://www.paradox.ai/employee-care
6 See https://www.getjenny.com/blog/tradeunions

References
Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. Harper Collins.
Arnold, M., & Newman, L. (2016, July 7). Robots enter investment banks’ trading floors. The Financial Times, pp.
7–9.
Basu, S. (2020, August 25). Average increments in India fell to 3.6% in 2020, from 8.6% in 2019: Survey. The Economic Times. https://economictimes.indiatimes.com/jobs/average-increments-in-india-fell-to-3-6-in-2020-from-8-6-in-2019-survey/articleshow/77714655.cms
Baum, J. A. C., & Haveman, H. A. (2020). Editors’ comments: The future of organizational theory. Academy of
Management Review, 45(2), 268–272. https://doi.org/10.5465/amr.2020.0030
Beach, L. R. (1993). Broadening the definition of decision making: The role of prechoice screening of options.
Psychological Science, 4(4), 215–220. https://doi.org/10.1111/j.1467-9280.1993.tb00264.x
Bernoulli, D. (1954). Exposition of a new theory on the measurement of risk. Econometrica, 22(1), 23–36.
Chopra, D. (2019). Metahuman: Unleashing your infinite potential. Harmony.
Choudhury, S. R. (2018, September 5). A. I. and robotics will create almost 60 million more jobs than they destroy
by 2022, report says. CNBC, pp. 1–5. https://www.cnbc.com/2018/09/17/wef-machines-are-going-to-perform-
more-tasks-than-humans-by-2025.html
Clark, C. M., Tan, M. L., Murfett, U. M., Rogers, P. S., & Ang, S. (2019). The call center agent’s performance
paradox: A mixed-methods study of discourse strategies and paradox resolution. Academy of Management
Discoveries, 5(2), 152–170. https://doi.org/10.5465/amd.2016.0024
Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organizational choice. Administrative
Science Quarterly, 17(1), 1–25.
Dane, E., & Pratt, M. G. (2007). Exploring intuition and its role in managerial decision making. Academy of
Management Review, 32(1), 33–54. https://doi.org/10.5465/AMR.2007.23463682
Danilevsky, M., Qian, K., Aharonov, R., Katsis, Y., Kawas, B., & Sen, P. (2020). A survey of the state of explainable AI for natural language processing. arXiv. http://arxiv.org/abs/2010.00711
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.
reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Daugherty, P., & Wilson, J. H. (2018). Human + machine: Reimagining work in the age of AI. Harvard Business
Review Press.
Brinkley, D. (2003). Wheels for the world: Henry Ford, his company, and a century of progress, 1903–2003. Viking.
Espncricinfo. (2020). MS Dhoni announces international retirement. https://www.espncricinfo.com/story/_/
id/29666935/ms-dhoni-announces-international-retirement-reports
Heukelom, F. (2007). Kahneman and Tversky and the origin of behavioral economics (Tinbergen Institute Discussion Paper No. 07-003/1). Tinbergen Institute.
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482.
https://doi.org/10.1146/annurev-psych-120709-145346
Goudarzi, S., Hickok, E., Sinha, A., Mohandas, S., Bidare, P. M., Ray, S., & Rathi, A. (2018). AI in banking and
finance. The Centre for Internet and Society.
Griffin, A. (2017, October 26). Saudi Arabia grants citizenship to a robot for the first time ever. The Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/news/saudi-arabia-robot-sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.
https://doi.org/10.1007/s11023-020-09517-8

Hitt, M. A., & Tyler, B. B. (1991). Strategic decision models: Integrating different perspectives. Strategic
Management Journal, 12(5), 327–351.
Hmoud, B., & Varallyai, L. (2019). Will artificial intelligence take over human resources recruitment and selection? Network Intelligence Studies, VII(13), 21–30.
Imaginovation. (2019). AI in banking: A JP Morgan case study & how your business can benefit. Medium. https://
www.imaginovation.net/blog/ai-in-banking-jp-morgan-case-study-benefits-to-businesses/
Iriondo, R. (2018, October). Amazon scraps secret AI recruiting tool that showed bias against women—Reuters.
Medium. https://medium.com/datadriveninvestor/amazon-scraps-secret-ai-recruiting-engine-that-showed-
biases-against-women-995c505f5c6f
Noguchi, J. (1995). The legacy of W. Edwards Deming. Quality Progress, 28(12), 35–42.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2),
263–292.
Kolbert, E. (2016, December). Our automated future: How long will it be before you lose your job to a robot? The New Yorker. https://www.newyorker.com/magazine/2016/12/19/our-automated-future
Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms.
Futures, 90, 46–60.
Masuch, M., & LaPotin, P. (1989). Beyond garbage cans: An AI model of organizational choice. Administrative Science Quarterly, 34(1), 38–67.
von Neumann, J., & Morgenstern, O. (2007). Theory of games and economic behavior. Princeton University Press.
Nilsson, N. J. (1971). Problem-solving methods in artificial intelligence. McGraw-Hill.
O’Carol, B. (2017). What are the 3 types of AI? A guide to narrow, general, and super Artificial Intelligence.
Codebots. https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-third-even-possible
Raisch, S., & Krakowski, S. (2020). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 1–48. https://doi.org/10.5465/amr.2018.0072
Riley, T. (2018, March 13). Get ready, this year your next job interview may be with an A.I. robot. CNBC. https://
www.cnbc.com/2018/03/13/ai-job-recruiting-tools-offered-by-hirevue-mya-other-start-ups.html
Samuelson, P. A. (1977). St. Petersburg paradoxes: Defanged, dissected, and historically described. Journal of
Economic Literature, 15(1), 24–55.
Shamim Hossain, M., Muhammad, G., & Guizani, N. (2020). Explainable AI and mass surveillance system-based healthcare framework to combat COVID-19 like pandemics. IEEE Network, 34(4), 126–132. https://doi.org/10.1109/MNET.011.2000458
Simon, H. A. (1997). Administrative behavior: A study of decision making processes in administrative organizations
(4th ed.). The Free Press.
Express News Service. (2017, December 29). 'World's first smart policing robot' launched in Hyderabad. The New Indian Express. https://www.newindianexpress.com/cities/hyderabad/2017/dec/29/worlds-first-smart-policing-robot-launched-in-hyderabad-1739880.html
Taleb, N. N. (2001). Fooled by randomness: The hidden role of chance in life and in the markets. Penguin Books.
Toffler, A. (1999). Future shock. Bantam Books.
UNIEuropa. (2019). November 2019 UNI Europa ICTS position on Artificial Intelligence. https://www.uni-europa.
org/wp-content/uploads/2019/12/AIUniEuropaWeb_en.pdf
Vaishya, R., Javaid, M., Khan, I. H., & Haleem, A. (2020, July–August). Artificial intelligence (AI) applications for COVID-19 pandemic. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 14(4), 337–339.
Von Krogh, G. (2018). Artificial Intelligence in organizations: New opportunities for phenomenon based theorizing.
Academy of Management Discoveries, 4(4), 404–409. https://doi.org/10.5465/amd.2018.0084
Von Krogh, G., & Roos, J. (1995). Organizational epistemology (1st ed.). Palgrave Macmillan. https://doi.
org/10.1007/978-1-349-24034-0
Weintraub, K. (2016, July). 20 years after Dolly the Sheep led the way—Where is cloning now? Scientific American. https://www.scientificamerican.com/article/20-years-after-dolly-the-sheep-led-the-way-where-is-cloning-now/

Bio-sketch
Girish Balasubramanian is associated with the Indian Institute of Management Lucknow (IIM L). Before joining IIM L, he was associated with Xavier University Bhubaneswar. He is an electrical engineer from NIT Surat and completed the Fellow Programme in Management (FPM) at XLRI, Xavier School of Management (AACSB accredited), with a specialisation in human resource management. His thesis was on understanding union revitalisation in India. His research interests include industrial relations, strategy, compensation management, diversity and inclusion, and sports analytics. He has presented his research at notable international peer-reviewed conferences and published in notable peer-reviewed journals. He facilitates courses on Industrial Relations, Labour Law, Compensation Management, Human Resource Management and HR Analytics, and has also developed short-term courses and programmes on HR Analytics. Before completing the FPM, he worked with multinational firms in the telecom and energy sectors.
