IB9KPL-Digital-Transformation-Individual-Assignment - U2267239
Table of Contents
The Ethical Dilemma: Balancing AI Innovation and Responsibility
Removing the Ambiguity: A Principled Framework for Responsible AI
Taking the Risk and Making It Your Friend: A Principled Approach to AI Risk Management
Conclusion: Shaping the Future of Responsible AI
References
Appendix
The Ethical Dilemma: Balancing AI Innovation and Responsibility
This blog offers thought leadership on the crucial need for a principled, responsible approach to AI implementation, drawing on academic literature and real-world case studies. By critically evaluating the challenges and risks inherent in AI adoption, I propose a comprehensive framework for navigating this complex landscape, one that fosters trust and transparency while unlocking AI's transformative power. Rather than viewing AI as an adversary to human labour, we should reframe our perspective: by embracing responsible AI practices, we can build a future in which humans and machines coexist symbiotically, each augmenting and enhancing the other's capabilities. I will be implementing this framework at my workplace to test it in practice, and I hope others can use it in their own AI digital transformations.
Running this scenario through Nadler-Tushman’s Congruence Model (CM), the right inputs are present: a strategy that needs to be formalised, a scoped task, and the right level of interaction between the culture, people, and organisation. The issue is enabling the business to implement AI within tight regulation, compounded by ambiguity in current law and rapid shifts in how society perceives AI. The push for change in the scenario is evident, and there is a valid business case for it. What is missing, given the early phase of AI adoption, is a strategy. The ADROIT Framework (Mithas et al., 2020) can drive the change in mindset the business needs to adopt AI responsibly; the acronym stands for:
1. Add Revenue: While a tax improvement does not directly add revenue, reducing tax outgoings improves business growth and revenue (Jackson, 2000).
2. Differentiation: The fundamental issue is understanding and knowledge. The narrative should change to: let’s take a calculated risk and implement AI on a small project, such as tax, to discover what is possible within our constraints. That framing is the differentiator that wins the willingness of the right stakeholders to proceed.
3. Reduce Costs: The visible cost reduction is the shift from a salary to a machine cost. That saving also becomes the building block for implementing AI in other parts of the business.
4. Optimise Risk: This is the crux. Can the risk of human error be reduced through automation, as Lean Six Sigma teaches (Snee, 2010), or does removing the human pose an ethical risk? The most significant risk, however, is competitive: BCG reported that 78% of companies struggle with the risks of using AI. Risk optimisation is the north star of this CM strategy.
5. Innovating: The innovation department is the driving force behind running the hackathon and showing the
value of using AI.
6. Transforming: Implementing the tax system should not be the vision; it is a Minimum Viable Product. The fundamentals learned from a low-priority, low-risk initiative such as tax show what it would mean to implement AI across the business.
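As a rough sketch, the six ADROIT dimensions can be treated as a scoring checklist for a candidate AI initiative. The dimension names follow Mithas et al.; the 0-5 scale, the weights, and the example scores for the tax MVP are illustrative assumptions, not values from the source:

```python
# Illustrative sketch: rating a candidate AI initiative against the six
# ADROIT dimensions (Add revenue, Differentiation, Reduce costs,
# Optimise risk, Innovate, Transform). Scores are assumed workshop output.
from dataclasses import dataclass

DIMENSIONS = ("add_revenue", "differentiation", "reduce_costs",
              "optimise_risk", "innovate", "transform")

@dataclass
class Initiative:
    name: str
    scores: dict  # dimension -> 0..5 rating agreed by stakeholders

    def total(self) -> int:
        # Aggregate score out of 30 across all six dimensions
        return sum(self.scores[d] for d in DIMENSIONS)

    def weakest(self) -> str:
        # The dimension needing the strongest supporting business case
        return min(DIMENSIONS, key=lambda d: self.scores[d])

# Hypothetical rating of the tax MVP from the scenario above
tax_mvp = Initiative("Tax document automation", {
    "add_revenue": 1, "differentiation": 3, "reduce_costs": 4,
    "optimise_risk": 5, "innovate": 4, "transform": 3,
})
print(tax_mvp.total())    # -> 20
print(tax_mvp.weakest())  # -> add_revenue
```

A low score on one dimension (here, Add Revenue) does not disqualify an initiative; it flags where the narrative to stakeholders needs reinforcement.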
The ADROIT Framework is a valuable tool, and combined with Agile, operations management, innovation, and leadership frameworks, it drives change more dynamically. Tata used the ADROIT Framework to transform itself digitally, notably through its InnoVista programme, which awards innovation across its subsidiaries; using the framework, Tata demonstrated an upward trend in both participation and outcomes.
Figure 3 - Trends in Tata InnoVista participation. DTT: dare to try, PI: promising innovations, TLE-PT: the leading edge-proven
technologies, DH: design honour (Mithas & Arora, 2015).
Figure 4 - Five cross-sector principles for responsible AI (Gallo & Nair, 2024).
The innovation department is not the governing body; the hackathon can be classified as an informal organisation exploring AI implementation. The governing bodies are the Financial Conduct Authority, the Prudential Regulation Authority, and the EU AI Act. The UK has formed the Digital Regulation Cooperation Forum (DRCF) and the AI and Digital Hub pilot, with a 2024/25 workplan to support innovators and implementers of AI. This brings five principles to attention, as shown in Figure 4. This culture involves multiple formal organisations concerned with risk: Legal, Governance Risk and Compliance (GRC), consumer duty, and procurement all exist to resolve business risk. With AI, we must balance the Innovation team's optimism against the Risk team's reservations. The congruence model concludes that this would be a candidate for a successful digital transformation; the diagnosed misalignment stems from knowledge gaps within financial services and external ambiguity over the use of AI within the European region.
Removing the Ambiguity: A Principled Framework for Responsible AI
The scope involves introducing policies and standards to bring clarity to the business, focusing on its multiple risk functions. External knowledge will debunk AI fears, aiming not just to save costs but to build an AI centre of excellence. AI predates ChatGPT: Geoffrey Hinton's work on the backpropagation algorithm (Plaut & Hinton, 1987) led to the deep neural networks (DNNs) used in large language models (LLMs), of which ChatGPT is a product. AI encompasses generative and predictive forms, plus automation, such as predicting outcomes and executing actions based on those predictions. Sampath & Khargonekar’s (2018) Socially Responsible Automation (SRA) four-level model highlights a broader view of automation’s role.
Level 1 – Performance-Driven Automation: Driven by business metrics, it can include the people, culture, and organisation, but it falls short of being human-centred. Take Amazon: its warehouse workers and drivers are pushed to the edge by efficiency demands. Warehouse staff “gather, pack, and store” goods, using robots to transport them within the warehouse, while delivery uses predictive AI to plan the routes that deliver the most packages (Wu et al., 2022). For the scenario above, the benefit is 24/7, near real-time processing of these documents.
Level 2 – Human (Worker)-Centred Automation: This type of automation exemplifies the human knowledge needed to monitor and control the AI system. Here, the analyst's job can shift to an auditing function: monitoring the AI, reducing the hallucination risk associated with generative AI, and checking the data-output quality of predictive AI. The human becomes the observer and the AI the doer.
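The "AI the doer, human the observer" split can be sketched as a confidence-threshold routing rule: extractions the model is confident about pass straight through, while low-confidence ones queue for the analyst-turned-auditor. The threshold value and the record fields below are illustrative assumptions, not any vendor's actual design:

```python
# Sketch of a human-in-the-loop gate: AI output below a confidence
# threshold is routed to a human audit queue instead of auto-posting.
REVIEW_THRESHOLD = 0.90  # assumed; tuned to the firm's risk appetite

def route(extraction: dict) -> str:
    """Return 'auto' or 'human' for one OCR field extraction."""
    return "auto" if extraction["confidence"] >= REVIEW_THRESHOLD else "human"

batch = [
    {"field": "tax_id", "value": "GB123456789", "confidence": 0.98},
    {"field": "amount_due", "value": "1,240.00", "confidence": 0.72},
]
audit_queue = [e for e in batch if route(e) == "human"]
print([e["field"] for e in audit_queue])  # -> ['amount_due']
```

In practice the auditor's corrections would also be logged, giving the governance function evidence of how often and where the model fails.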
Level 3 – Socially Responsible Automation: The film Hidden Figures shows how an organisation can implement automation responsibly. It depicts NASA introducing IBM mainframes that displace the workers who performed mathematical calculations by hand; one of those workers, however, saw an opportunity to reskill and become a mainframe programmer. Similarly, when implementing this tax system, workers can be educated on the types of AI and how to interact with them. At this level, the culture changes to enable AI.
Taking the Risk and Making It Your Friend: A Principled Approach to AI Risk Management
Figure 6 - A systematic approach to identifying AI risk examines each risk category in each business context (Buehler et al., 2021).
AI and its potential risks are pressing topics that require attention beyond ethics. Risks can surface at any stage of development or live usage, as presented in Exhibit 2 of the Cheatham et al. (2019) report. Eloquently put, the three core pillars of AI risk management under the six-by-six framework are Clarity, using a structured identification approach to pinpoint the most critical risks; Breadth, instituting robust enterprise-wide controls; and Nuance, reinforcing specific controls depending on the nature of the risk (Cheatham et al., 2019). Proper policies and standards help us navigate these challenges effectively. Laws such as the GDPR, the FCA Handbook, the PRA Rulebook, PCI DSS, the Data Protection Act 2018, the Equality Act 2010, and the Human Rights Act 1998 were used to form the decision tree in the attached spreadsheet. The six-by-six framework must be continuously improved, with each risk and context re-evaluated at an appropriate cadence: daily, weekly, monthly, quarterly, or yearly.
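A minimal way to operationalise the six-by-six identification step is a matrix walk: every (risk category, business context) cell gets a severity and a review cadence. The category and context labels below are placeholder paraphrases, not the framework's exact wording, and the severity-to-cadence mapping is an assumed policy:

```python
# Sketch of the six-by-six identification walk: each risk category is
# examined in each business context and given a severity plus a review
# cadence. Labels are illustrative placeholders.
from itertools import product

CATEGORIES = ["privacy", "security", "fairness", "safety",
              "performance", "third-party"]
CONTEXTS = ["data", "model", "people", "process",
            "technology", "regulation"]

CADENCE_BY_SEVERITY = {5: "daily", 4: "weekly", 3: "monthly",
                       2: "quarterly", 1: "yearly"}

def assess(severity_of) -> list:
    """Walk all 36 cells; return the cells scoring above the 'monitor' bar."""
    register = []
    for cat, ctx in product(CATEGORIES, CONTEXTS):
        sev = severity_of(cat, ctx)  # workshop-assigned rating, 1..5
        register.append({"risk": cat, "context": ctx, "severity": sev,
                         "review": CADENCE_BY_SEVERITY[sev]})
    return [r for r in register if r["severity"] >= 4]

# Toy severity function standing in for a risk workshop's scores
hotspots = assess(lambda cat, ctx: 5 if (cat, ctx) == ("privacy", "data") else 2)
print(hotspots)
# -> [{'risk': 'privacy', 'context': 'data', 'severity': 5, 'review': 'daily'}]
```

The point of the structure is exhaustiveness: no cell is skipped, and the output doubles as the re-evaluation schedule the paragraph above calls for.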
Workday uses Google Document AI for tax-document optical character recognition (OCR) and its tax-matching system. Applying the six-by-six framework to Workday's Document AI gives a positive outcome. The key takeaways are that Workday uses Retrieval-Augmented Generation (RAG) to reduce hallucinations, ensures calculations are always the same, and keeps a knowledge bank of tax knowledge (Stratton, 2023). Workday is open about how it has implemented the NIST AI Risk Management Framework (Sauer, 2023), uses a “human-in-the-loop” approach across its AI lifecycle (Trindel, 2022), and champions transparency and fairness through its ethical AI initiatives (Cosgrove, 2019). Combined with Data Protection Impact Assessment (DPIA) questions, this builds a broader picture. Workday's and Google's documentation give a concrete understanding of what they want to project when customers use their AI and of how customer data will be used to train models in isolation within that customer's tenant. These points answer the data, privacy, culture, and transparency questions in the attached impact assessment.
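Workday's actual pipeline is not public beyond the sources cited, but the hallucination-reduction idea behind RAG can be sketched in a few lines: the prompt is grounded in passages retrieved from a curated tax knowledge bank, so the model cites sources rather than inventing facts. The knowledge bank and the crude keyword retriever here are toy assumptions:

```python
# Toy retrieval-augmented generation sketch: retrieve passages from a
# curated knowledge bank and build a prompt that constrains the model
# to those sources. Bank contents and scoring are illustrative.
TAX_KNOWLEDGE_BANK = {
    "vat-basics": "The UK standard VAT rate is 20%.",
    "filing": "VAT returns are usually filed quarterly.",
}

def retrieve(query: str, k: int = 1) -> list:
    """Rank passages by crude keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(TAX_KNOWLEDGE_BANK.items(),
                    key=lambda kv: -len(words & set(kv[1].lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt that forbids answers beyond the sources."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (f"Answer using ONLY the sources below; cite the [id].\n"
            f"{context}\nQuestion: {query}")

print(build_prompt("What is the standard VAT rate?"))
```

A production system would pair this with deterministic code for any arithmetic (so calculations are always the same, as Workday's takeaway notes) rather than letting the model compute figures itself.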
By adopting this holistic approach, as Workday and Google have, organisations can navigate the complexities of
responsible AI implementation, fostering stakeholders' trust and confidence while unlocking these technologies'
transformative potential.
Conclusion: Shaping the Future of Responsible AI
I took the congruence model and the six-by-six framework back to my department to start the discussion. The best outcome was discovering that we had already used machine learning, and that governance and ethics had grown out of that implementation. The most important output is that the business changed its T&Cs to ask customers whether they want their data used in analytics. Fundamentally, many of these risks have already been covered and mitigated; adding AI becomes only a paragraph amendment to existing governance.
The hardest part is changing the culture and narrative around using AI under ambiguous legislation. I wrote this post while working through the challenge of transforming the business to implement AI, and I felt the need to help others in the same situation who have little guidance. I hope to test the hypothesis once I have implemented this at my workplace. Embracing responsible AI practices is not merely an ethical imperative but a strategic necessity for long-term success and sustainable growth. By adopting the principled approach above, which prioritises ethical governance, privacy, fairness, explainability, and continuous improvement, your organisation can position itself at the forefront of the AI revolution while mitigating risks and fostering stakeholder trust.