
Masters Programmes: Individual Assignment Cover Sheet

Student Number: U2267239

Module Code: IB9KPL

Module Title: Digital Transformation

Submission Deadline: 24/04/2024

Date Submitted: 24/04/2024

Word Count: 2246 of 2500

Number of Pages: 7

Question Attempted (question number/title, or description of assignment): Task: Drawing on theories and concepts covered in the module as well as in your extended reading, write an up to 2500-word blog that demonstrates expert knowledge and thought leadership in an area or application of digital transformation of your choice for an organization of your choice.
Definition of Digital Transformation: As part of this Digital Transformation module, we define digital transformation as building on the triad of ‘data – technology – people’ to shape the strategic approach to digital transformation. We encourage you to develop unique value propositions and innovative business models to find solutions and to capture and to create value.

As part of the individual assessment, we aim to encourage you to strengthen your digital transformation and digital leadership expertise and online presence. We recommend that you critically evaluate one area of digital transformation. By one area we mean that you do not need to cover all aspects of the aforementioned triad of ‘data – technology – people’ in depth. Instead, you could focus on a multi-sided platform application or an aspect of a technology, e.g. Machine Learning or Blockchain, and then decide on a possible use case, problem that you are going to solve, or business opportunity for an organisation of your choice. You will need to analyse the proposed use case, including potential hurdles that need to be overcome to achieve the impact that you are aiming for, by referring to academic literature. Please note this is not an organizational behavior-type module; while we recognise the importance of people management, this assignment should not focus on people management.
Some further background information:

The blog needs to be presented as ‘ready to post’, including, for example, images, hashtags and approx. time for reading. Please ensure that you only use images, diagrams, graphs and tables for which you do not infringe copyright and/or have requested permission. The blog should display your candidate number rather than your name and should at the time of submission not yet be ‘live’ on the internet.

The ideas / content that the blog conveys need to be convincing and should demonstrate critical thinking grounded in literature (with WBS Harvard referencing). The blog should be thought-provoking rather than explaining concepts in a textbook style of writing. The blog should read as written by a subject matter expert and driver for change in a specific field of digital transformation whose opinions matter. The language used should be formal, yet appropriate for a blog, i.e. more informal than an academic journal article. You might want to write the blog using a first-person positioning (e.g. ‘A recent WBS MBA module on Digital Transformation made me consider…’ or ‘I suggest…’), yet most sentences will not include any writing in the first person (e.g. ‘Miller (2021) raised the point of…’ or ‘X means Y (Jones, 2020)’).
We expect to see an engaging ‘hook’, i.e. an abstract that makes the reader want to read on, possibly achieved by asking thought-provoking questions, and a clear structure in your blog which is aligned to academic essay writing without using the same headings. For example, you might start with the ‘hook’ in larger letters, then show an image. You could then introduce the topic (without calling it an introduction) and provide a review of existing material (without calling it a literature review). You then might structure your main body in different sections using content-related titles in bold before having your concluding thoughts and acknowledging others. You could then show your list of references / bibliography under a heading such as ‘Useful articles in the domain of X’ and add hashtags to relevant areas.
Here is a link to sample blogs of students who published their blogs last year (note: students came from different modules; please align yourself with the outline above rather than the sample blogs): https://blogs.warwick.ac.uk/wjett/entry/scholarly_blogs_part/
Have you used Artificial Intelligence (AI) in any part of this assignment? Yes
Academic Integrity Declaration
We're part of an academic community at Warwick. Whether studying, teaching, or researching, we’re all taking part in an expert conversation
which must meet standards of academic integrity. When we all meet these standards, we can take pride in our own academic achievements, as
individuals and as an academic community.

Academic integrity means committing to honesty in academic work, giving credit where we've used others' ideas and being proud of our own
achievements.

In submitting my work, I confirm that:


- I have read the guidance on academic integrity provided in the Student Handbook and understand the University regulations in relation to Academic Integrity. I am aware of the potential consequences of Academic Misconduct.
- I declare that the work is all my own, except where I have stated otherwise.
- No substantial part(s) of the work submitted here has also been submitted by me in other credit-bearing assessments or courses of study (other than in certain cases of a resubmission of a piece of work), and I acknowledge that if this has been done this may lead to an appropriate sanction.
- Where a generative Artificial Intelligence such as ChatGPT has been used, I confirm I have abided by both the University guidance and specific requirements as set out in the Student Handbook and the Assessment brief. I have clearly acknowledged the use of any generative Artificial Intelligence in my submission, my reasoning for using it and which generative AI (or AIs) I have used. Except where indicated, the work is otherwise entirely my own.
- I understand that should this piece of work raise concerns requiring investigation in relation to any of the points above, it is possible that other work I have submitted for assessment will be checked, even if marks (provisional or confirmed) have been published.
- Where a proof-reader, paid or unpaid, was used, I confirm that the proof-reader was made aware of and has complied with the University’s proofreading policy.

Upon electronic submission of your assessment you will be required to agree to the statements above
Table of Contents
Masters Programmes: Individual Assignment Cover Sheet
The Ethical Dilemma: Balancing AI Innovation and Responsibility
Removing the Ambiguity: A Principled Framework for Responsible AI
Taking the Risk and Making It Your Friend: A Principled Approach to AI Risk Management
Linking back: A Holistic Approach to Operationalising Responsible AI
Conclusion: Shaping the Future of Responsible AI
References
Appendix
The Ethical Dilemma: Balancing AI Innovation and Responsibility
This blog provides thought leadership on the crucial need for a principled, responsible approach to AI
implementation, drawing upon academic literature and real-world case studies. I will propose a
comprehensive framework to navigate this complex landscape by critically evaluating the challenges and risks
inherent in AI adoption, fostering trust and transparency while unlocking AI's transformative power.

I was recently presented with a scenario wherein AI could replace human workers in financial tax prediction. This scenario encapsulates the ethical dilemma at the heart of this discourse. While the business case for cost savings through AI implementation is compelling, we must grapple with the profound implications of displacing human labour with machines.

As Whittlestone et al. (2019) cautioned, the ethical risks of AI extend far beyond the immediate economic ramifications. Unchecked AI systems can perpetuate biases, infringe privacy, and exacerbate societal inequalities (Barocas & Selbst, 2016; O'Neil, 2016). Mehrabi et al. (2021) underscore the pressing need to address bias and fairness in machine learning models, lest we inadvertently entrench existing disparities.

Figure 1 - Image generated using Bing Copilot and DALL-E AI

However, rather than viewing AI as an adversary to human labour, we must reframe our perspective. By embracing responsible AI practices, we can unlock a future where humans and machines coexist symbiotically, augmenting and enhancing each other's capabilities. I will be implementing the framework at my workplace to test the theory, and I hope others can use it for their own AI digital transformation.

Embracing Responsible AI: A Principled Approach to Digital Transformation


“It is uncertain whether history precisely replicates its events, yet there appears to be a rhythmic similarity. Pattern management is
the most crucial skill of management. This involves the discernment required to navigate complexity and identify fundamental truths
reminiscent of those encountered in disparate scenarios. Leading to how evolution sees disruption, leading to mass extinctions and the
rise of new.” - (Siebel & Rice, 2019)

Figure 2 - Congruence Model (Nadler & Tushman, 1979).

Taking this scenario through Nadler-Tushman’s Congruence Model (CM), the right inputs are present: a strategy that needs to be formalised, a scoped task, and the right level of interaction between the culture, people, and organisation. The issue is enabling the business to implement AI within tight regulations. Further, there is ambiguity in current laws and rapid change in how society perceives AI. The push for change in the scenario is evident, and there is a valid business case to implement it. This leads to the need for a strategy, which is missing in the above scenario due to the early phase of AI adoption. The ADROIT Framework (Mithas et al., 2020) is what is needed to drive a change of mindset in the business to adopt AI responsibly; it is an acronym for:
1. Add Revenue: While a tax improvement does not directly add revenue, reducing tax outgoings improves business growth and revenue (Jackson, 2000).
2. Differentiation: The fundamental issue is understanding and knowledge. Changing the narrative to ‘let’s take a risk to implement AI and discover what is possible within our constraints’, using a small project such as tax, is the differentiator that wins the willingness of the right stakeholders to proceed.
3. Reduce costs: The noticeable cost reduction is the change from a salary to a machine cost. This cost reduction is also the building block for implementing AI in other parts of the business.
4. Optimise Risk: This is the main crux. Can human risk be reduced by automating, as Lean Six Sigma teaches (Snee, 2010), or does removing the human pose an ethical risk? The most significant risk, though, is competitive risk: BCG reported that 78% of companies are struggling with the risk of using AI. Risk optimisation is the north star of this CM strategy.
5. Innovating: The innovation department is the driving force behind running the hackathon and showing the value of using AI.
6. Transforming: Implementing the tax use case should not be the vision; it is a Minimum Viable Product. The fundamentals learned from a low-priority, low-risk initiative such as tax reveal what it would mean to implement AI across the business.
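The six ADROIT dimensions above can be sketched as a simple self-assessment checklist. This is an illustrative aid only: the dimension names follow Mithas et al. (2020), but the 0-5 scoring scale, the `adroit_score` helper, and the example scores for the tax MVP are my own hypothetical assumptions.

```python
# Illustrative ADROIT self-assessment for a candidate AI use case.
# Dimension names follow Mithas et al. (2020); scores are hypothetical.
ADROIT_DIMENSIONS = [
    "add_revenue", "differentiation", "reduce_costs",
    "optimise_risk", "innovate", "transform",
]

def adroit_score(use_case: dict) -> float:
    """Average a 0-5 self-assessment across the six ADROIT dimensions."""
    missing = [d for d in ADROIT_DIMENSIONS if d not in use_case]
    if missing:
        raise ValueError(f"Unscored dimensions: {missing}")
    return sum(use_case[d] for d in ADROIT_DIMENSIONS) / len(ADROIT_DIMENSIONS)

# Hypothetical scores for the corporate-tax MVP discussed above.
tax_mvp = {
    "add_revenue": 2, "differentiation": 4, "reduce_costs": 4,
    "optimise_risk": 5, "innovate": 3, "transform": 3,
}
print(adroit_score(tax_mvp))  # 3.5
```

Scoring each dimension forces the conversation with stakeholders that the Differentiation point calls for: the number matters less than agreeing where the use case is weak.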

The ADROIT Framework is a valuable tool, and when combined with Agile, operations management, innovation and leadership frameworks, it drives change more dynamically. Tata used the ADROIT Framework to transform itself digitally, notably in its InnoVista programme, where it rewards innovation within its subsidiaries. Using this framework, Tata demonstrated an upward trend in participation and outcomes.

Figure 3 - Trends in Tata InnoVista participation. DTT: dare to try; PI: promising innovations; TLE-PT: the leading edge-proven technologies; DH: design honour (Mithas & Arora, 2015).

Figure 4 - Five cross-sector principles for responsible AI (Gallo & Nair, 2024).

The innovation department is not the governing body; the hackathon can be classified as an informal organisation looking at implementing AI. The governing bodies and rules are the Financial Conduct Authority, the Prudential Regulation Authority, and the EU AI Act. The UK has formed the Digital Regulation Cooperation Forum (DRCF) and the AI and Digital Hub pilot, with a 2024/25 workplan to support innovators and implementers of AI. This brings five principles to attention, as shown in Figure 4. This culture brings in multiple formal organisations looking at risk: Legal, Governance Risk and Compliance (GRC), consumer duty, and procurement exist to resolve business risk. With AI, we include the Innovation team's optimism and the Risk team's reservation. The congruence model concludes that this would be a candidate for a successful digital transformation. The diagnosed misalignment is due to knowledge gaps within financial services and external ambiguity over the usage of AI within the European region.
Removing the Ambiguity: A Principled Framework for Responsible AI
The scope involves introducing policies and standards to bring clarity to the business, focusing on multiple risk functions. External knowledge will debunk AI fears, aiming not just to save costs but also to build an AI centre of excellence. AI predates ChatGPT: Geoffrey Hinton's work on the backpropagation algorithm (Plaut & Hinton, 1987) led to the Deep Neural Networks (DNNs) used in Large Language Models (LLMs), of which ChatGPT is a product. AI encompasses generative and predictive forms, plus automation, such as predicting outcomes and executing actions based on those predictions. Sampath & Khargonekar’s (2018) Socially Responsible Automation (SRA) Four-Level Model highlights a broader view of automation’s role.

Figure 5 - The Orville, copyright Hulu/NBC


Level 0 – Cost-Focused Automation: The researchers note that focusing predominantly on cost fails to deliver the benefits a business seeks. Viewed through the Congruence Model, it touches only the task and strategy; the people, culture, and organisation are not considered. The average corporate tax analyst’s salary is around £30,000 (Glassdoor, 2024), and Workday's Google Document AI is priced at $60 for 2,000 pages of invoices per month, i.e. a $720 annual fee (Google, 2024).
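The Level 0 figures above can be put side by side in a back-of-the-envelope calculation. The salary and tool prices come from the sources cited; the exchange rate is an assumption of mine for illustration only.

```python
# Back-of-the-envelope cost comparison for Level 0 (cost-focused)
# automation, using the figures cited above.
analyst_salary_gbp = 30_000          # Glassdoor (2024) average
doc_ai_monthly_usd = 60              # Google Document AI, 2,000 pages/month
usd_per_gbp = 1.25                   # assumed exchange rate, not from the sources

doc_ai_annual_usd = doc_ai_monthly_usd * 12          # $720
doc_ai_annual_gbp = doc_ai_annual_usd / usd_per_gbp  # £576
saving_gbp = analyst_salary_gbp - doc_ai_annual_gbp

print(f"Annual tool cost: £{doc_ai_annual_gbp:,.0f}; "
      f"headline saving: £{saving_gbp:,.0f}")
```

The arithmetic is exactly why Level 0 is seductive and why, as the researchers warn, it ignores everything the salary actually buys: judgement, accountability, and the people side of the congruence model.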

Level 1 – Performance-Driven Automation: Driven by business metrics, it can include the people, culture, and organisation, but it falls short of being human-centred. Take Amazon: its warehouse workers and drivers are at the edge of efficiency demands. The warehouse staff “gather, pack, and store” goods using robots to transport them within the warehouse, while delivery uses predictive AI to plan the best routes for delivering the most packages (Wu et al., 2022). For the scenario above, the benefit is 24/7, near real-time processing of these documents.

Level 2 – Human (Worker)-Centred Automation: This type of automation exemplifies the human knowledge needed to monitor and control the AI system. Here, the analyst's job can move to an auditing function: monitoring the AI, reducing the hallucination risk associated with generative AI, and checking the data output quality of predictive AI. Thus, the human is the observer and the AI the doer.
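A minimal sketch of this "human as observer, AI as doer" arrangement is a confidence gate: predictions the model is unsure about are queued for the analyst-turned-auditor instead of being posted automatically. The `Prediction` type, its field names, and the 0.90 threshold are all hypothetical assumptions of mine; this shows the pattern, not any vendor's API.

```python
# Sketch of a Level 2 gate: auto-approve confident AI outputs,
# escalate uncertain ones to a human auditor. All names and the
# threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    document_id: str
    tax_code: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

REVIEW_THRESHOLD = 0.90  # assumed cut-off; tune to the business's risk appetite

def route(pred: Prediction) -> str:
    """Return 'auto_approve' for confident predictions, else 'human_review'."""
    return "auto_approve" if pred.confidence >= REVIEW_THRESHOLD else "human_review"

queue = [
    Prediction("INV-001", "T1", 0.97),
    Prediction("INV-002", "T4", 0.62),
]
for p in queue:
    print(p.document_id, route(p))
```

Lowering the threshold shifts work from the auditor to the machine; raising it does the reverse, which makes the threshold itself a governance decision rather than a purely technical one.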

Level 3 – Socially Responsible Automation: The film Hidden Figures shows how an organisation can implement automation responsibly. It depicts how NASA implemented IBM’s mainframes, displacing workers who performed manual calculations. However, one of those workers saw it as an opportunity to reskill and become a mainframe programmer. Similarly, when implementing this tax system, workers can be educated on the types of AI and how to interact with them. At this level, the culture itself changes to enable AI.

Taking the Risk and Making It Your Friend: A Principled Approach to AI Risk Management
Figure 6 - A systematic approach to identifying AI risk examines each risk category in each business context (Buehler et al., 2021).

AI and its potential risks are pressing topics that require attention beyond ethics. Risks can surface at any stage of development or live usage, as presented in Exhibit 2 of the Cheatham et al. (2019) report. Eloquently put, the three core pillars of AI risk management when using the six-by-six framework are Clarity - using a structured identification approach to pinpoint the most critical risks; Breadth - instituting robust enterprise-wide controls; and Nuance - reinforcing specific controls depending on the nature of the risk (Cheatham et al., 2019). Proper policies and standards can help us navigate these challenges effectively. Laws and regulations such as the GDPR, the FCA Handbook, the PRA Rulebook, PCI DSS, the Data Protection Act 2018, the Equality Act 2010, and the Human Rights Act 1998 were used to form the decision tree in the attached spreadsheet. The six-by-six framework must be continuously improved, and each risk and context must be re-evaluated at an appropriate cadence: daily, weekly, monthly, quarterly, or yearly.
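The re-evaluation cadence can be operationalised as a tiny risk-register check that flags entries overdue for review. The risk names, cadences, and dates below are invented for illustration; the interval lengths are my own rough mapping of the cadences named above.

```python
# Sketch of a cadence check for a six-by-six style risk register:
# each risk carries a review interval, and overdue entries are surfaced.
# Risk names, cadences, and dates are illustrative assumptions.
from datetime import date, timedelta

CADENCE_DAYS = {"daily": 1, "weekly": 7, "monthly": 30,
                "quarterly": 91, "yearly": 365}

def overdue_risks(register: list, today: date) -> list:
    """Return names of risks whose last review exceeds their cadence."""
    out = []
    for risk in register:
        interval = timedelta(days=CADENCE_DAYS[risk["cadence"]])
        if today - risk["last_review"] > interval:
            out.append(risk["name"])
    return out

register = [
    {"name": "model drift", "cadence": "monthly",
     "last_review": date(2024, 1, 10)},
    {"name": "GDPR lawful basis", "cadence": "yearly",
     "last_review": date(2023, 9, 1)},
]
print(overdue_risks(register, today=date(2024, 4, 1)))  # ['model drift']
```

Even this toy version makes the "continuous improvement" claim concrete: a register that nobody re-dates is visibly stale, which is exactly the failure mode the six-by-six framework guards against.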

Linking back: A Holistic Approach to Operationalising Responsible AI


Implementing responsible AI requires a holistic, cross-functional effort that permeates every aspect of an organisation's operations. Below is a comprehensive approach to operationalising responsible AI, using the corporate tax scenario:
1. Leadership Commitment: Using the framework described, I can create a better use case for the stakeholders
and get their commitment. This commitment should be codified in a mission statement and guiding principles
that inform all AI initiatives.
2. Governance and Risk Management: What has been produced in the spreadsheet starts the risk management analysis of how the AI will interact with the aspects of the CM.
3. Technological Enablers: The framework acts as an enabler and shows that the next stage will align with corporate standards and governance. Here, a strategy runway, together with standards and policies on how AI interacts with the business, will be produced, allowing others in the industry to implement AI safely and ethically.
4. Cross-Functional Collaboration: While most of this blog was done through research, the next steps will be to
run the spreadsheet through other AI use cases. I will collaborate with technical teams, risk management, legal,
ethics, and compliance functions to ensure a holistic approach to responsible AI implementation.
5. Stakeholder Engagement: I will actively engage with diverse stakeholders, including employees, customers,
regulators, and civil society organisations, to gather feedback, address concerns, and foster trust and
transparency.
6. Continuous Learning and Adaptation: As the UK considers publishing more information on its AI
regulations, new learnings and adaptations will be made to mitigate unknown risks.

Workday uses Google Document AI to perform tax document Optical Character Recognition (OCR) and tax matching. Running Workday's Document AI through the six-by-six framework gives a positive outcome. The key takeaways were that Workday used Retrieval-Augmented Generation (RAG) to reduce hallucinations, to ensure calculations are always the same, and to keep a knowledge bank of tax knowledge (Stratton, 2023). Workday is open about how it has implemented the NIST AI Risk Management Framework (Sauer, 2023), uses a “human-in-the-loop” approach for its AI lifecycle (Trindel, 2022), and champions transparency and fairness with its ethical AI initiatives (Cosgrove, 2019). Together with Data Protection Impact Assessment (DPIA) questions, this builds a broader picture. In Workday's and Google's documentation there is a concrete understanding of what they want to project when customers use their AI, and of how customer data will be used to train the models in isolation within that customer's tenant. These answer the data, privacy, culture, and transparency questions in the attached impact assessment.
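The RAG pattern mentioned above can be shown in miniature: retrieve a passage from a "knowledge bank" and attach it as context so the model answers from it rather than from memory. This is a toy word-overlap retriever of my own, not Workday's or Google's implementation; the two documents are invented (their tax facts are broadly correct but simplified).

```python
# Minimal retrieval-augmented generation (RAG) sketch: ground a tax
# question in a small knowledge bank before it reaches a language model.
# The documents and the word-overlap retriever are illustrative only.
KNOWLEDGE_BANK = {
    "vat-rates": "Standard UK VAT rate is 20 percent on most goods.",
    "corp-tax": "UK corporation tax main rate is 25 percent from April 2023.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    best = max(KNOWLEDGE_BANK.items(),
               key=lambda kv: len(q_words & set(kv[1].lower().split())))
    return best[1]

def build_prompt(question: str) -> str:
    """Attach the retrieved context so the model answers from it."""
    return f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"

print(build_prompt("What is the UK corporation tax rate?"))
```

Production systems swap the word-overlap step for embedding search, but the governance benefit is the same: the model's answer can be traced back to a curated, auditable source rather than to opaque training data.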

By adopting this holistic approach, as Workday and Google have, organisations can navigate the complexities of
responsible AI implementation, fostering stakeholders' trust and confidence while unlocking these technologies'
transformative potential.
Conclusion: Shaping the Future of Responsible AI
I took the congruence model and six-by-six framework back to my department to start the discussion; the best outcome was discovering how we had already used machine learning and how governance and ethics grew from that implementation. The most important output is how the business changed its T&Cs to ask customers whether they want their data to be used in analytics. Fundamentally, many of these risks have already been covered and mitigated; adding AI becomes only a paragraph amendment to existing governance.

The hardest part is changing the culture and narrative around using AI under ambiguous legislation. I wrote this post while working through the challenge of transforming the business to implement AI; I felt the need to help others in the same situation who have little guidance. I hope to test the hypothesis once I have implemented this at my workplace. Embracing responsible AI practices is not merely an ethical imperative but a strategic necessity for long-term success and sustainable growth. By adopting the principled approach above, which prioritises ethical governance, privacy, fairness, explainability, and continuous improvement, your organisation can position itself at the forefront of the AI revolution while mitigating risks and fostering stakeholder trust.

#ResponsibleAI #DigitalTransformation #AIEthics #TrustAndTransparency #FairnessAndAccountability #AIFATE #FairnessAccountabilityTransparencyEthics #FinancialServices #Banking #WBS #ThoughtLeaders
References
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671.
Buehler, K., Dooley, R., Grennan, L., & Singla, A. (2021). Getting to know—and manage—your biggest AI risks. McKinsey & Company.
Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, 2(38), 1-9.
Cosgrove, B. (2019). Workday's Commitments to Ethical AI. https://blog.workday.com/en-us/2019/workdays-commitments-to-ethical-ai.html
Gallo, V., & Nair, S. (2024). The UK's framework for AI regulation. https://www2.deloitte.com/uk/en/blog/emea-centre-for-regulatory-strategy/2024/the-uks-framework-for-ai-regulation.html
Glassdoor. (2024). Corporate Tax Analyst Salaries in United Kingdom. https://www.glassdoor.co.uk/Salaries/corporate-tax-analyst-salary-SRCH_KO0,21.htm
Google. (2024). Document AI Pricing. https://cloud.google.com/document-ai/pricing
Jackson, A. (2000). Tax Cuts: The Implications for Growth and Productivity. Canadian Tax Journal, 48, 276.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6), 1-35.
Mithas, S., & Arora, R. (2015). Lessons from Tata's Corporate Innovation Strategy. IT Professional, 17(2), 2-6. https://doi.org/10.1109/MITP.2015.26
Mithas, S., Murugesan, S., & Seetharaman, P. (2020). What is Your Artificial Intelligence Strategy? IT Professional, 22(2), 4-9. https://doi.org/10.1109/MITP.2019.2957620
Nadler, D. A., & Tushman, M. L. (1979). A congruence model for diagnosing organizational behavior. Organizational Psychology: A Book of Readings, 442-458.
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Allen Lane. https://go.exlibris.link/lL4k5fyW
Plaut, D. C., & Hinton, G. E. (1987). Learning sets of filters using back-propagation. Computer Speech & Language, 2(1), 35-61. https://doi.org/10.1016/0885-2308(87)90026-X
Sampath, M., & Khargonekar, P. P. (2018). Socially responsible automation: A framework for shaping the future. National Academy of Engineering: The Bridge, 48(4), 45-52.
Sauer, R. (2023). Responsible AI Governance at Workday. https://blog.workday.com/en-us/2023/responsible-ai-governance-workday.html
Siebel, T. M., & Rice, C. (2019). Digital Transformation: Survive and Thrive in an Era of Mass Extinction. RosettaBooks. https://go.exlibris.link/LvrrsKbW
Snee, R. D. (2010). Lean Six Sigma - getting better all the time. International Journal of Lean Six Sigma, 1(1), 9-29. https://doi.org/10.1108/20401461011033130
Stratton, J. (2023). How Workday Is Leading the Enterprise Generative AI Revolution. https://blog.workday.com/en-us/2023/how-workday-leading-enterprise-generative-ai-revolution.html
Trindel, K. (2022). Workday's Continued Diligence to Ethical AI and ML Trust. https://blog.workday.com/en-us/2022/workdays-continued-diligence-ethical-ai-and-ml-trust.html
Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
Wu, C., Song, Y., March, V., & Duthie, E. (2022). Learning from drivers to tackle the Amazon last mile routing research challenge. arXiv preprint arXiv:2205.04001.
Appendix
