Failed IT Projects: Policing Information Management System (I6)
1. According to the 2016 Audit Scotland report, the parties disagreed from the very
beginning of the relationship. They spent 18 months discussing the scope of the project,
and even after that time they could not reach an agreement that satisfied both sides. The
SPA did not believe that the system Accenture was developing would cover what it needed.
2. As a consequence of the initial disagreement, the development team lost the support
of the stakeholders (the SPA lost trust in Accenture), which in turn caused
communication problems.
3. Neither party correctly assessed the risks associated with the project, particularly
those arising from customizing the existing software.
4. The acceptance tests used to validate the team's work were inaccurate or insufficient,
since the system the developers finally delivered did not perform the way the SPA
needed. (This may also indicate that the initial specification never covered what the
client actually required in the first place.)
Source: https://calleam.com/WTPF/?p=9150
1. Communication channels were not properly established: missed key milestones had
been reported over a period of time before the decision to terminate the contract was
finally made.
2. There must have been problems in the control phase. The BHO reported missed key
milestones, but the team managing the project should have detected this earlier, since
they were responsible for monitoring the execution of the plan.
3. Deviations can occur in any project, but teams must be prepared to act when they do.
The initial plan should have identified missed key milestones as warning signals, and
the team should have taken action as soon as they appeared.
Knight Capital Group (KCG) was an American company operating in the global financial
market. In 2012 it was the leading trader in US equities, with a market share of close to
17%. The NYSE was planning to launch a new Retail Liquidity Program that would offer
improved pricing to retail investors through retail brokers; the launch date was set for
August 1, 2012.
To prepare for this launch, KCG updated its high-speed, automated algorithmic order
router (SMARS). During the rollout, one of Knight's technicians failed to copy the new
code to one of the eight SMARS servers. Knight did not have a second technician review
the deployment, and no one at Knight realized that the old code had not been removed
from the eighth server, nor the new code installed.
Knight had no written procedures requiring such a review. On Wednesday, August 1,
2012, the markets opened in the morning, and the day started like any average
Wednesday. KCG's trading platform began processing orders from broker-dealers and
handling them for the new Retail Liquidity Program.
The problems started when orders reached the eighth server, the one still running the
old code, after the seven correctly updated servers had processed theirs. In the
morning, people started to notice something was wrong. The whole disaster lasted 45
minutes. During that time, KCG tried several times to stop the trades, but there was no
kill-switch and no documented information on how to react to this kind of situation.
As a result, they had to diagnose and solve the issue in a live production environment.
1. The software was deployed manually, which left room for human error.
2. No control measures were in place to ensure that completed work was reviewed.
There was also a strong case of poor transparency and communication.
3. KCG had no written procedures manual outlining how to carry out updates, which
would have mitigated the risk introduced by manual deployment.
4. There was no kill-switch and no documented information on how to react, which
meant diagnosis and corrections had to be done in a live environment.
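The first failure above, a manual deployment that left one server running old code, is the kind of inconsistency a simple automated check can catch. The sketch below is a hypothetical illustration, not KCG's actual tooling: it fingerprints the artifact deployed on each server with a checksum and flags any server whose code differs from the rest. Server names and code contents are invented for the example.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Return a SHA-256 fingerprint of a deployed artifact."""
    return hashlib.sha256(data).hexdigest()

def find_inconsistent_servers(deployed: dict) -> list:
    """Compare every server's artifact against the first server's
    and return the names of servers whose code differs."""
    digests = {srv: artifact_digest(code) for srv, code in deployed.items()}
    reference = next(iter(digests.values()))
    return [srv for srv, d in digests.items() if d != reference]

# Hypothetical fleet: seven servers received the new router code,
# but the eighth was forgotten and still holds the old version.
new_code = b"router-v2"
fleet = {f"smars-{i}": new_code for i in range(1, 8)}
fleet["smars-8"] = b"router-v1"  # the server the technician missed

print(find_inconsistent_servers(fleet))  # ['smars-8']
```

Run as a mandatory post-deployment step, a check like this would have turned a silent partial rollout into an immediate, visible failure before any order was processed.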
RESULT: When it was all over, the company had suffered $460 million in losses and
almost had to close its doors for good.
LESSONS LEARNT:
1. Even well-written and well-tested software is not enough if it is not delivered
properly to the market. The company had not thought through the deployment process
at the level the risk demanded.
2. Processes did not account for human actions and had no safety backups. Mistakes
happen easily in this kind of process; even a simple misunderstanding of instructions
can cause incorrect execution. To prevent these kinds of disasters, a couple of
safeguards should be taken into consideration.
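One safeguard the case itself points to is a kill-switch: a single, pre-built control that halts trading instantly instead of forcing engineers to debug in live production. The toy sketch below is only an illustration of the idea (the flag, order names, and halt point are invented): a shared event that the trading loop checks before every order, so flipping it stops all further execution.

```python
import threading

# Shared flag an operator (or monitoring system) can set to halt trading.
kill_switch = threading.Event()

def trading_loop(orders):
    """Process orders one by one, stopping as soon as the
    kill-switch is set."""
    executed = []
    for i, order in enumerate(orders):
        if kill_switch.is_set():
            break                 # halt: no further orders go out
        executed.append(order)
        if i == 2:
            kill_switch.set()     # simulate an operator pulling the switch
    return executed

print(trading_loop(["o1", "o2", "o3", "o4", "o5"]))  # ['o1', 'o2', 'o3']
```

The essential property is that the halt path is built and tested in advance: in an emergency the team flips one well-known flag rather than improvising fixes against live order flow.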
Source:
https://www.theseus.fi/bitstream/handle/10024/158644/Jaaksola_Mikko.pdf?sequence=1