
Western Mindanao State University

COLLEGE OF ENGINEERING
Department of Computer Engineering

CPE 127 - PROJECT MANAGEMENT


1st Semester, SY 2020-21

Name: Luis Roberto P. Soliman


Course & Year: BS CPE 2A

SUMMARY PAPER
Information technology is one of the most innovative and dynamic fields in any industry. Yet its ever-changing nature also begets instability. Many projects, despite remarkable innovation and vision, meet an untimely end because of mismanagement or poor organization; others fail because the team cannot reach common ground, or because the project is simply too ambitious for its time. IT project managers must learn to be dynamic and flexible in order to turn a project into reality. I have found several articles about IT projects that, regardless of their potential or innovative value, ended in smoldering failure.

IT projects are about as chaotic as things can get: multiple minds work on different ideas and different parts of the program. A program chock-full of bugs and errors can come tumbling down like a stack of dominoes. Software bugs are the usual culprit, but sometimes even tried-and-tested hardware components give way and break down. This is exactly what transpired in 2007 at Los Angeles International Airport. The suspect was a faulty network card that kept sending incorrect data, which brought everything across the network to a halt and delayed a whopping 17,000 passengers at the transport hub. The lesson here is that a project manager should not cheap out on mission-critical equipment; it is fine to buy cheap mousepads, but please do not skimp on important hardware components. Having a backup system doesn't hurt, especially when the primary system starts acting up.

Communication is important; it is on par with compatibility in importance. These two dictate the effectiveness of practically everything, from projects to relationships and a whole lot more. The most valuable resource any project manager has is time. The project managers of the Airbus A380, one of the world's largest aircraft, failed to keep compatibility, communication, and time in mind. Software incompatibility caused the electronic systems of one half of the aircraft to fail to communicate with the other half, delaying the project by an entire year. Project managers are akin to quartermasters: they are also in charge of making sure that equipment is in tip-top shape and compatible. Communicate with your team members to ensure that their tools are uniform, to minimize compatibility issues. Multinational endeavors invite setbacks, especially from differences in culture and language; ideal project managers anticipate these and prepare countermeasures to minimize them.

Those born in the '80s and '90s may be quite familiar with this iconic IT mishap: the Y2K problem, way back in 1999. It was believed back then that software designed to store only the last two digits of the year could not recognize the year 2000, failing to differentiate it from 1900. Extremely expensive and frenetic work was done to avert a disaster that, it was feared, would reset the world's technology infrastructure. No one had thought ahead about this trivial, mundane detail, which later caused a massive headache for everyone around the world. Ideally, aspiring PMs should possess a meticulous eye for detail that accounts for everything from the most critical aspects to the most trivial ones. After all, no one wants a Y3K bug, right?
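The two-digit shortcut behind the bug can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea, not code from any actual pre-Y2K system; the function names and the pivot value are my own assumptions:

```python
def parse_year_naive(yy: str) -> int:
    """Pre-Y2K style: store only two digits and assume the 1900s."""
    return 1900 + int(yy)

def parse_year_windowed(yy: str, pivot: int = 50) -> int:
    """A common remediation: a pivot window.
    Values below the pivot are treated as 20xx, the rest as 19xx."""
    n = int(yy)
    return 2000 + n if n < pivot else 1900 + n

# The naive parser cannot tell 2000 from 1900:
assert parse_year_naive("00") == 1900      # was meant to be 2000
# A windowed parser disambiguates, at the cost of a new cutoff:
assert parse_year_windowed("00") == 2000
assert parse_year_windowed("75") == 1975
```

Note that the windowed fix merely moves the ambiguity to the pivot year rather than eliminating it, which is why the permanent fix was to widen the field to four digits.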

Military business is very serious business: it involves lives and many precious, irreplaceable assets. War is carefully pondered by multiple experts before being declared. First-world superpowers often build intricate, complex defense systems to deter attacks from rival nations. These mechanisms comprise top-of-the-line hardware and the best software coded by the brightest minds. However, like anything man-made, they are not immune to design flaws. One nerve-wracking flaw surfaced in 1983, when a Soviet warning system erroneously detected an American ballistic missile launch. Thanks to immense good fortune, a rational and level-headed duty officer was on post that day; he correctly deduced that the information was erroneous and thus vetoed a Soviet 'counterattack.' Stanislav Petrov is an unsung hero who saved this world from a Mad Max future. Unfortunately, men like Petrov are not in abundance, and project managers assigned to military or defense projects should always bear in mind that human input may help save the world. An autonomous system is good, but a blend of automation and human judgment is better; keeping a human in the loop of machines that can literally cause the apocalypse is not a bad idea.

Cellular communications have become quite the staple of human life; economic activities from business, commerce, and trade to manufacturing rely on them, not to mention the average person calling home every day. They have simply become an invaluable communications asset. Imagine a cellular blackout, a moment when calls, messages, and emails can't be sent, as if someone pressed a global mute button. This is exactly what happened to AT&T in 1990. A whopping 75 million phone calls failed to connect, causing an unimaginable number of headaches. The culprits were a single faulty switch at one facility, which triggered a chain reaction that shut down the entire grid, and a faulty line of code added during a massive system update; these two, and not the hackers some people speculated about, were the primary suspects. A backup system wouldn't have hurt the company in preventing occurrences like this. In a company whose system is literally the backbone of nearly every communications medium today, it is a good idea to build in redundant backup components, especially when it comes to keeping the network up as much as possible.

Basketball is considered one of the most famous sports worldwide; almost every country has diehard basketball fans. To all the basketball fans out there: have you ever watched or played in a game delayed by, of all things, a Windows update? On March 13, 2015, the Paderborn Baskets, a second-division German basketball team, were relegated to a lower division for starting a game late, due to a necessary 17-minute Windows update on the scoreboard's laptop.

The game between the Chemnitz Niners and the Paderborn Baskets was set to
begin as normal, when Paderborn connected its laptop to the scoreboard. According to
Paderborn Baskets general manager Patrick Seidel, as reported in the German newspaper Die Zeit,
the laptop was connected by 6:00 p.m. (meaning 1 hour and 30 minutes before the game),
and was set “as usual.” However, according to Seidel, “As both teams warmed up, the
computer crashed. When we booted it again by 7:20 p.m., it started downloading updates
automatically.” When the computer finished downloading and installing all the updates,
the game finally began at 7:55 p.m.

Well, it's pretty clear that the event managers should have checked their devices and made sure every device was ready and updated to avoid fiascos like this. Albeit harmless, mishaps like this must not be allowed to occur often, for the sake of the production team's professionalism.

When the State Government of Queensland introduced a new payroll system in 2006, the project seemed like it was going to be straightforward. Five years later, however, the project's reviewing commission called it "the worst failure in public administration in Australia's history."

In late 2007, the state of Queensland selected IBM Australia to set up a new
payroll system for the 80,000 employees of Queensland Health (QH). The initial contract
was budgeted for around $6 million, and was expected to go live after six months.

However, it did not turn out as expected. Indeed, the system did not go live until late 2010, with major defects and an additional cost of nearly $25 million. Unfortunately, even though the core system was functioning, QH had to hire another 1,000 employees to process the payroll manually, adding $1.15 billion in costs over eight years.

When the commission completed its report, fault was found at every stage of the
project, including the procurement process, the planning of the contract schedules, and
the vendor's management of the project.

In 1956, a group of computer scientists at IBM set out to build the world's fastest
supercomputer. Five years later, they produced the IBM 7030 -- a.k.a. Stretch -- the
company's first transistorized supercomputer, and delivered the first unit to the Los
Alamos National Laboratory in 1961. Capable of handling a half-million instructions per
second, Stretch was the fastest computer in the world and would remain so through
1964.

Nevertheless, the 7030 was considered a failure. IBM's original bid to Los Alamos
was to develop a computer 100 times faster than the system it was meant to replace, and
the Stretch came in only 30 to 40 times faster. Because it failed to meet its goal, IBM had
to drop Stretch's price to $7.8 million from the planned $13.5 million, which meant the
system was priced below cost. The company stopped offering the 7030 for sale, and only
nine were ever built.

That wasn't the end of the story, however. "A lot of what went into that effort was
later helpful to the rest of the industry," said Turing Award winner and Stretch team
member Fran Allen at a recent event marking the project's 50th anniversary. Stretch
introduced pipelining, memory protection, memory interleaving and other technologies
that have shaped the development of computers as we know them.

Don't throw the baby out with the bathwater. Even if you don't meet your project's
main goals, you may be able to salvage something of lasting value from the wreckage.

The Knight-Ridder media giant was right to think that the future of home
information delivery would be via computer. Unfortunately, this insight came in the early
1980s, and the computer they had in mind was an expensive dedicated terminal.

Knight-Ridder launched its Viewtron version of videotex -- the in-home information-retrieval service -- in Florida in 1983 and extended it to other U.S. cities by 1985. The service offered banking, shopping, news and ads delivered over a custom terminal with color graphics capabilities beyond those of the typical PC of the time. But Viewtron never took off: It was meant to be the "McDonald's of videotex" and at the same time cater to upmarket consumers, according to a Knight-Ridder representative at the time who apparently didn't notice the contradiction in that goal.

A Viewtron terminal cost $900 initially (the price was later dropped to $600 in an
attempt to stimulate demand); by the time the company made the service available to
anyone with a standard PC, videotex's moment had passed.

Viewtron only attracted 20,000 subscribers, and by 1986, it had been canceled. But
not before it cost Knight-Ridder $50 million. The New York Times business section wrote,
with admirable understatement, that Viewtron "tried to offer too much to too many people
who were not overly interested."

Nevertheless, BusinessWeek concluded at the time, "Some of the nation's largest
media, technology and financial services companies ... remain convinced that someday,
everyday life will center on computer screens in the home." Can you imagine?

Sometimes you can be so far ahead of the curve that you fall right off the edge.

Two Western states spent the 1990s attempting to computerize their departments
of motor vehicles, only to abandon the projects after spending millions of dollars. First
was California, which in 1987 embarked on a five-year, $27 million plan to develop a
system for keeping track of the state's 31 million drivers' licenses and 38 million vehicle
registrations. But the state solicited a bid from just one company and awarded the
contract to Tandem Computers. With Tandem supplying the software, the state was
locked into buying Tandem hardware as well, and in 1990, it purchased six computers at
a cost of $11.9 million.

That same year, however, tests showed that the new system was slower than the
one it was designed to replace. The state forged ahead, but in 1994, it was finally forced
to abandon what the San Francisco Chronicle described as "an unworkable system that
could not be fixed without the expenditure of millions more." In that May 1994 article, the
Chronicle described it as a "failed $44 million computer project." In an August article, it
was described as a $49 million project, suggesting that the project continued to cost
money even after it was shut down. A state audit later concluded that the DMV had
"violated numerous contracting laws and regulations."

Regulations are there for a reason, especially ones that keep you from doing
things like placing your future in the hands of one supplier.

Meanwhile, the state of Washington was going through its own nightmare with its
License Application Mitigation Project (LAMP). Begun in 1990, LAMP was supposed to
cost $16 million over five years and automate the state's vehicle registration and license
renewal processes. By 1992, the projected cost had grown to $41.8 million; a year later,
$51 million; by 1997, $67.5 million. Finally, it became apparent that not only was the cost
of installing the system out of control, but it would also cost six times as much to run
every year as the system it was replacing. Result: plug pulled, with $40 million spent for
nothing.

In 1993, FoxMeyer Drugs was the fourth-largest distributor of pharmaceuticals in the U.S., worth $5 billion. In an attempt to increase efficiency, FoxMeyer purchased an
SAP system and a warehouse automation system and hired Andersen Consulting to
integrate and implement the two in what was supposed to be a $35 million project. By
1996, the company was bankrupt; it was eventually sold to a competitor for a mere $80
million.

The reasons for the failure are familiar. First, FoxMeyer set up an unrealistically
aggressive time line -- the entire system was supposed to be implemented in 18 months.
Second, the warehouse employees whose jobs were affected -- more accurately,
threatened -- by the automated system were not supportive of the project, to say the
least. After three existing warehouses were closed, the first warehouse to be automated
was plagued by sabotage, with inventory damaged by workers and orders going unfilled.

Finally, the new system turned out to be less capable than the one it replaced: By
1994, the SAP system was processing only 10,000 orders a night, compared with 420,000
orders under the old mainframe. FoxMeyer also alleged that both Andersen and SAP
used the automation project as a training tool for junior employees, rather than assigning
their best workers to it.

In 1998, two years after filing for bankruptcy, FoxMeyer sued Andersen and SAP
for $500 million each, claiming it had paid twice the estimate to get the system in a
quarter of the intended sites. The suits were settled and/or dismissed in 2004. No one
plans to fail, but even so, make sure your operation can survive the failure of a project.

In January 2007, the Los Angeles Unified School District flipped the switch on a
$95 million system built on SAP software customized by Deloitte Consulting. The system
was intended to replace a mishmash of outdated technology with a streamlined system
for tracking earnings and issuing paychecks for 95,000 teachers, principals, custodians
and other district employees.

But it was doomed from day one, done in by technology glitches, inaccurate and
often conflicting data from the old system, inadequate employee training, and infighting
and lack of internal oversight within the district, among other problems.

The trouble was apparent from the first month the new software went live. Some
teachers were underpaid, some overpaid and some had their names completely erased
from the system. It took a year and another $37 million in repairs for the school district to
work out the kinks. In November 2008, the district and Deloitte settled a dispute over the
work, with the contractor agreeing to repay $8.25 million and forgive $7 million to $10
million in unpaid invoices to put the matter to rest.

According to the article, HR IT projects have a pretty high rate of failure. Michael
Krigsman, who writes the IT Project Failures blog for ZDNet, says, "Depending on the
statistics you read, 30 percent to 70 percent of these projects will be late, over budget or
don't deliver the planned scope."

The Workforce article outlined several reasons for the failure of this particular project. The reasons included not having a high-level executive with IT experience dedicated to the project (the district's original point person was a COO with little computer experience) and the fact that the old system was riddled with errors. Also, HR IT projects are very complex.
But Krigsman made an interesting point when talking about such projects. He said
the reasons for the problems are usually not technical; they're "organizational, political
and cultural in nature in almost every case."

I agree with that statement; I also know that organizational, political, and cultural issues are the hardest to pinpoint and to fix. In my experience, poor communication among stakeholders and project principals is the underlying disease of all bad projects.

Projects of all industries, types, and scales are prone to failure. It is the bane of human design, as everything man-made is susceptible to failure. The role of the project manager is to plan measures and responses to prevent failures from occurring, or to minimize the impact should they inevitably transpire. We manage projects so that they are finished on time with no sacrifice of quality or workmanship.

REFERENCES:
https://www.liquidplanner.com/blog/4-top-project-disasters-time-lessons/
https://www.zdnet.com/article/the-top-10-it-disasters-of-all-time/
https://www.zdnet.com/article/minnesota-healthmatch-a-perfect-storm-for-it-failure/
https://www.exoplatform.com/blog/2017/08/01/5-of-the-biggest-information-technology-failures-and-scares/
https://www.infoworld.com/article/2609011/the-worst-it-project-disasters-of-2013.html
https://www.techrepublic.com/blog/career-management/it-projects-fail-most-often-due-to-organizational-issues/
https://www.computerworld.com/article/2533563/it-s-biggest-project-failures----and-what-we-can-learn-from-them.html
