
W28614

FINRA (A): MOVING FINANCIAL REGULATION TO THE CLOUD

Mike Chapple and Sharif Nijim wrote this case solely to provide material for class discussion. The authors do not intend to illustrate
either effective or ineffective handling of a managerial situation. The authors may have disguised certain names and other identifying
information to protect confidentiality.

This publication may not be transmitted, photocopied, digitized, or otherwise reproduced in any form or by any means without the
permission of the copyright holder. Reproduction of this material is not covered under authorization by any reproduction rights
organization. To order copies or request permission to reproduce materials, contact Ivey Publishing, Ivey Business School, Western
University, London, Ontario, Canada, N6G 0N1; (t) 519.661.3208; (e) cases@ivey.ca; www.iveypublishing.ca. Our goal is to publish
materials of the highest quality; submit any errata to publishcases@ivey.ca.

Copyright © 2022, Ivey Business School Foundation Version: 2022-11-01

On a hot July morning in 2013, Steve Randich sat in his car in the Rockville, Maryland, campus parking
lot of the Financial Industry Regulatory Authority (FINRA), letting the air conditioning keep him cool. As
he looked out over the suburban office complex’s retention pond, he wondered whether his next move
would be heralded as visionary or lambasted as a strategy too risky for the organization’s conservative
culture. After recently suffering an embarrassing failure to meet the investigatory demands of the US
Securities and Exchange Commission (SEC), Randich believed that FINRA needed to adopt a cloud
computing solution to avoid a recurrence of similar problems. Before he could do that, he needed to
convince FINRA’s board of directors to go along with his plan.

STEVE RANDICH

In July 2013, Randich had served in his role as FINRA’s chief information officer (CIO) for less than
four months. He came to the regulator with a strong pedigree, having previously filled the roles of co-CIO
for Citibank NA (Citibank) and CIO of Nasdaq Inc. (Nasdaq). During his time at Nasdaq, Randich worked
closely with Rick Ketchum, who would later become FINRA’s chief executive officer. Ketchum had tried
to recruit Randich to FINRA for years. After a long, successful run as Citibank’s CIO, Randich finally
answered Ketchum’s call and joined FINRA’s executive team.

Nobody doubted that Randich’s credentials had prepared him for the role, but his background alone would
not be enough to make it through his upcoming conversation with the board, during which he needed to sell
the strong business case and risk analysis that his team had spent the last three months putting together.
When he attended his first board meeting in April, Randich had still been enjoying the honeymoon phase
of his new role. The board welcomed Randich’s strong technical background and industry experience and
quickly accepted his proposal to stay the course on the organization’s technology architecture, preserving
its existing investment in agency-owned data centres located in New York City and Ashburn, Virginia.
Randich expected today’s meeting to go differently, as he sought approval to take FINRA’s technology
operations in a new direction. He was about to suggest that the agency scrap the approach that he had
advocated three months ago and instead move one of their most critical business systems to servers operated
by Amazon Web Services Inc. (AWS). He knew this was the right move for FINRA, but he was unsure
whether he would be able to convince the board members.

Randich’s concern turned out to be well founded. As he sat in his car, his phone beeped with a text message
from a long-time board member offering some candid insight:

Look, Steve, I just read through the meeting materials, and I’m not totally comfortable with the
cloud idea. I trust you. You’re a career IT [information technology] guy, and you have a good
reputation. Rick speaks highly of you and what you did at Nasdaq, but I’ve got to tell you, I’m
completely uncomfortable with what you’re proposing.1

FINRA’S ROLE IN THE MARKETS

FINRA regulated securities brokers and brokerage firms in the United States. As a private, not-for-profit
self-regulatory organization, FINRA oversaw the operations of member firms under the oversight of the
SEC, the industry’s government regulatory authority.2 The SEC required that most firms trading securities
maintain membership in FINRA and participate in the authority’s regulatory programs. As of December
2020, FINRA regulated the activities of 3,435 member firms and 617,549 registered representatives.3

FINRA’s mission was to protect investors and ensure the integrity of the US financial markets under the
supervision of the SEC, at no cost to the taxpayer.4 To carry out its mission to protect market integrity,
FINRA engaged in the following five steps: (1) deter misconduct by writing and enforcing the rules; (2)
discipline those who broke the rules; (3) detect and prevent wrongdoing in the US markets; (4) educate and
inform investors; and (5) resolve securities disputes.5

In carrying out these responsibilities, FINRA handled approximately three thousand investor complaints each
year and oversaw a significant and increasing transaction volume (see Exhibits 1 and 2). FINRA’s oversight
activities resulted in hundreds of disciplinary actions against its members annually. These actions covered a
variety of misconduct, ranging from illegal market manipulation to fraud among regulated entities.6

Enforcing market regulations required a massive surveillance operation capable of regularly processing one
hundred billion or more market transactions each day.7 To put this number into perspective, the Visa
payment card network processed 138 billion transactions in 2019.8 FINRA’s systems had to be capable of
processing a daily transaction volume roughly equivalent to Visa’s total annual volume. To carry out
this complex mission, FINRA employed approximately 3,400 individuals spread across twenty offices.9

THE FLASH CRASH

On May 6, 2010, global financial markets experienced one of the most significant short-term shocks in their
history. Over the course of fewer than ten minutes, the Dow Jones Industrial Average lost more than six
hundred points, only to recover that loss over the next six minutes (see Exhibit 3). The market had seen far
worse days, such as the infamous Black Monday crash of 1987 as well as the early days of the 2020 COVID-
19 pandemic, but those events were correlated with other world events—specifically, a global liquidity
crisis and a global pandemic, respectively.10 The Flash Crash was far more frightening because there was
no obvious cause for this sudden and violent market fluctuation. Traders, investors, and regulators had all
been caught by surprise, and their inability to explain this sudden market turmoil left the financial industry
with an uneasy feeling.

In the aftermath of the Flash Crash, regulators and academic researchers put forth several theories that sought
to explain the events leading up to the crash. The Commodity Futures Trading Commission and the SEC
released a joint report on September 30, 2010, that laid the blame for the crash at the feet of “a large
fundamental trader (a mutual fund complex) [that] initiated a sell program to sell a total of 75,000 E-Mini
contracts (valued at approximately [US]$4.1 billion) as a hedge to an existing equity position.”11 They cited
the high-frequency trading algorithm used to execute this trade as the trigger for the crash. Data provided by
FINRA served as one of the sources for this report, and the time required to gather data from all the various
sources played a significant role in the four-month delay between the crash and the release of the report.12

Researchers and other regulators disputed this explanation for the crash. The true cause of the crash became
a subject of debate in the financial and academic communities over the next few years. In 2015, the US
Department of Justice filed criminal charges against Navinder Singh Sarao, a futures trader residing in the
United Kingdom. They accused him of engaging in deliberate acts of market manipulation over the
preceding five years, including activity that “was particularly intense in the hours leading up to the Flash
Crash.”13 In January 2020, Sarao was sentenced to a year of home confinement and time served after being
extradited to the United States and found guilty on related charges.14

This post–Flash Crash environment was where Randich found himself as he walked into his new role at
FINRA in March 2013. SEC market regulators recognized that the four months required to gather data for
the Flash Crash investigation was far too long for events in modern markets. In an early conversation with
regulators, Randich found himself pushed to deliver market data instantaneously during future market
events. “We can’t wait four months for data ever again,” SEC regulators told him. “We have to come up
with a systematic way to gain real-time access to data when something happens in the market.”15

CLOUD COMPUTING IN 2013

With cloud computing, IT services were provided on a pay-per-use basis, similar to the way electricity was
consumed from an available on-demand grid. Electricity was a seemingly infinite service, delivered to
businesses by utility companies, which employed metered billing to charge only for what was actually used.
Consumers benefited from the scale of the supply grid in terms of quality and time saved. They did not need
to concern themselves with the mechanics of power generation; they simply had access to electricity.

The physical technology resources of the cloud were similarly abstracted, accessed via a network connection
to remote data centres. In 2013, the major cloud platform providers included AWS, Google Cloud Platform
(GCP), HP Cloud, IBM Cloud Services, and Microsoft Azure. Cloud computing allowed customers to acquire
resources at any time and to add or remove capacity as business needs evolved.
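
To make the pay-per-use model concrete, the following minimal sketch uses the AWS SDK for Python (boto3) to rent and release a server on demand; the image ID and instance type are placeholder values, not a FINRA configuration.

```python
# Illustrative sketch only: renting compute on demand with boto3.
# The AMI ID and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Acquire a server in minutes rather than the weeks a physical
# procurement could take; billing is metered while the instance runs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="m1.large",          # placeholder instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... run the workload ...

# Release the capacity when the work is done; metered charges stop.
ec2.terminate_instances(InstanceIds=[instance_id])
```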

INFRASTRUCTURE MODELS

There were four main infrastructure models for delivering IT services. Each model was differentiated by
the potential to scale, service delivery speed, level of capital investment, and the people required. When
determining the most appropriate approach for delivering an IT system, the cost model and workload
characteristics needed to be taken into account.

The on-premises operating model used computing equipment that was purchased and operated locally. This
represented a significant commitment in terms of time and resources. Conceptually, this was similar to
building a classroom for every class taught at a university. This approach required land, a purpose-built
facility to house the physical equipment, electricity for power and cooling, and dedicated staff to run and
maintain both the equipment and the facility. This was the least scalable of the operating models, as it was
constrained by fixed, locally available resources.

The private cloud model was based on the premise of hardware virtualization. Hardware virtualization
allowed multiple logical computers to exist on a single physical server, extracting greater efficiency and
value from a fixed resource. This was conceptually similar to having multiple individual classes use the
same classroom during a given day. As virtualization grew in popularity during the early 2000s, it allowed
for denser, more efficient use of the underlying hardware. Private clouds still required land, a building,
electricity, and a dedicated staff. These factors necessarily limited the scaling potential of a private cloud.

The public cloud model was based on the premise that virtual computing resources were made available over
the Internet. The scale of the public cloud was such that available computing resources were virtually limitless.
This allowed for the creation of systems that could dynamically adjust computing power up or down,
mirroring changes in demand. The staff required to use a cloud platform performed less physical work, with
no need to touch computers or networking equipment in order to deliver an IT service. The public cloud
vendors provided a choice and scale of technology services unmatched by any other infrastructure model, all
using metered monthly billing. There was no long-term commitment in the public cloud; that said, public
cloud vendors did offer substantial discounts in exchange for long-term spending commitments.
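
The dynamic scaling described above can be expressed as a simple demand-mirroring rule. The sketch below resizes a hypothetical pool of workers; the group name, sizing formula, and bounds are illustrative assumptions, not any vendor’s or FINRA’s actual policy.

```python
# Illustrative sketch only: resizing a hypothetical worker pool to
# mirror demand. The group name, sizing rule (one worker per fifty
# queued queries), and bounds are assumptions for illustration.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def scale_to_demand(queued_queries: int) -> int:
    """Resize the worker pool in proportion to the query backlog."""
    desired = max(2, min(200, queued_queries // 50 + 1))
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="analytics-query-workers",  # hypothetical group
        DesiredCapacity=desired,
        HonorCooldown=True,  # avoid thrashing between rapid resizes
    )
    return desired
```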

The hybrid cloud model combined local virtualization and public cloud platforms. This offered the benefit
of local resources and the near-infinite computing available in the public cloud. For example, many
universities adopted a hybrid approach to education, combining the traditional classroom model with
teaching online. In this model, the same physical needs in terms of land, a structure, electricity, and staff
were required to operate the local infrastructure. The local portion typically represented a small percentage
of an organization’s overall IT infrastructure, with a commensurate reduction in physical needs.

GAINING STEAM

In 2013, cloud computing was gaining traction in the market. Businesses started to realize that the cloud
provided increased IT flexibility and reduced time to market for new services. At the same time, a
considerable amount of skepticism remained among business and technology leaders. Generally speaking,
technology companies quickly embraced the cloud, and most new technology ventures considered
themselves “born in the cloud.” Cloud adoption across other industries varied, with more conservative
industries, including health care and the financial sector, taking a slower approach driven by security,
privacy, and regulatory concerns.

MARKET REGULATION SCALE CHALLENGE

The order audit trail system (OATS) was the primary computer system that FINRA used to record
information about orders, quotes, and trade-related data.16 At the end of each day, equities trading firms
were required to submit OATS reports to FINRA that explained their market transactions for that trading
day. Each OATS report included the date and time of the transaction, the securities involved, and the identity
of the parties involved in the transaction, among other details.

The reports received by OATS were the basis for FINRA’s market surveillance activities. Each night, the
agency ran hundreds of automated searches against the OATS database to detect known patterns of market
manipulation and fraud. For example, market spoofing was a practice where a trader placed orders to buy
or sell a security at a price not supported by the current market; this was done in the hopes of sending a
signal to other traders that the price was moving. The trader then entered an opposite order, seeking to
benefit from the artificial move they had just created in the market before cancelling the original order.17

By seeking out orders that resembled the characteristics of spoofing orders and were later cancelled, FINRA
could uncover this illegal market manipulation, as it left a clear pattern in the data; however, this could only
be done after other profit-taking trades had been executed successfully.
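
FINRA’s actual surveillance patterns were proprietary, but the spoofing signature described above can be approximated in a simplified sketch; the table layout, column names, and thresholds here are assumptions for illustration, not the real OATS schema or detection logic.

```python
# Illustrative sketch only: a simplified spoofing screen over a
# hypothetical order-event table. Column names and thresholds are
# assumptions; real OATS patterns were far more sophisticated.
import pandas as pd

def flag_possible_spoofing(orders: pd.DataFrame) -> pd.DataFrame:
    """Flag traders who cancelled a large, away-from-market order
    close in time to an execution on the opposite side of the same
    security."""
    # Large orders priced more than 1% away from the market that were
    # later cancelled resemble the spoofing signature.
    cancelled = orders[
        (orders["status"] == "CANCELLED")
        & (orders["size"] >= 10_000)
        & ((orders["limit_price"] - orders["market_price"]).abs()
           / orders["market_price"] > 0.01)
    ]
    executed = orders[orders["status"] == "EXECUTED"]
    # Pair each suspect cancellation with executions by the same trader
    # in the same security.
    pairs = cancelled.merge(
        executed, on=["trader_id", "symbol"], suffixes=("_cxl", "_exe")
    )
    # Keep opposite-side executions within a short window of the cancel.
    pairs = pairs[
        (pairs["side_cxl"] != pairs["side_exe"])
        & ((pairs["timestamp_exe"] - pairs["timestamp_cxl"]).abs()
           < pd.Timedelta("5min"))
    ]
    return pairs[["trader_id", "symbol", "timestamp_cxl", "timestamp_exe"]]
```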

When an OATS surveillance pattern uncovered suspicious activity, the system triggered an alert to an
analyst in FINRA’s Market Regulation department. The analyst assigned to each alert was responsible for
digging into the details of the transactions involved and deciding whether the alert should trigger a formal
investigation or whether it was a false positive triggered by normal market activity. The triaging performed
by the market regulation analysts involved judgment and discretion critical to keeping markets fair without
overwhelming FINRA’s regulatory capacity.

One of an analyst’s first actions after receiving an OATS alert was to pull additional information from the
system about the orders that triggered the alert and any related orders from the same trader or for the same
security. Analysts used a combination of predefined queries that pulled routinely requested information and
ad hoc queries that they had designed based on their contextual knowledge. These routine and ad hoc queries
often required substantial computing power, and it was not unusual for queries to take three minutes or
longer to complete on a typical trading day.

Complex investigations, such as the investigation into the Flash Crash, could require cross-referencing
millions of transactions across many different traders. These complex queries were beyond the ad hoc
capabilities of OATS and required dedicated systems, known as data marts, to process. These data marts
were subsets of the OATS dataset, with processing power dedicated to answering a specific question. These
subsets allowed analysts to conduct their investigations more efficiently, as the analysts were only
manipulating the data relevant to the investigation at hand. The queries to create data marts were complex,
involving many records, and could require six hours to complete.18
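
As a rough illustration of the data mart idea, the sketch below carves a question-specific subset out of the full order table and indexes it for the investigation at hand; the schema and investigation window are hypothetical stand-ins for the real OATS dataset.

```python
# Illustrative sketch only: a data mart as a question-specific subset
# of a much larger order table. Column names are hypothetical.
import pandas as pd

def build_data_mart(oats: pd.DataFrame, symbols: list[str],
                    start: str, end: str) -> pd.DataFrame:
    """Carve out only the rows an investigation needs, so subsequent
    queries touch a focused subset instead of the full dataset."""
    in_window = oats["timestamp"].between(start, end)
    in_scope = oats["symbol"].isin(symbols)
    mart = oats.loc[in_window & in_scope].copy()
    # Index on the fields analysts query most, trading storage for speed.
    return mart.set_index(["symbol", "trader_id"]).sort_index()

# Hypothetical usage: isolate a half hour of trading in two symbols.
# mart = build_data_mart(oats, ["ABC", "XYZ"],
#                        "2010-05-06 14:30", "2010-05-06 15:00")
```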

In some cases, exceptionally large data mart queries never completed at all. After consuming hours of
computing time, they ended with an error, never returning the data requested by the analyst and
setting an investigation back by hours or days. These errors and delays frustrated analysts and were a source
of many complaints that Randich received from other FINRA leaders. The performance limitations of
OATS were already inhibiting the Market Regulation department’s ability to perform the types of analyses
necessary for FINRA to fulfill its regulatory mission.

With yearly trade volumes on the rise, FINRA regularly needed to augment the hardware footprint of OATS
to cope with the associated increase in load. OATS was built around storage appliances designed to support
petabyte-scale databases. Even though FINRA was running multiple instances of the largest appliances
available, daily operations constantly bumped up against storage limitations. In order to perform routine
market surveillance activities, FINRA technologists were forced to move active data from production
servers to archival tape storage to clear capacity for query processing. This work was labour-intensive and
required FINRA to maintain a staff of sixty operators just to keep things running.19

FINRA’s technology team found themselves ordering additional hardware on a regular basis to keep ahead
of market volumes. In 2013, procuring a new server could take between thirty and forty-five days.20 This
long lead time made it essential to forecast overall trade volumes months in advance—a difficult
undertaking in a notoriously unpredictable market. FINRA had several close calls during periods of peak
market activity, but the technology team had thus far managed to keep OATS running, never losing
transactions due to system limitations.21

ANALYZING THE FLASH CRASH

During the Flash Crash, per-minute order volumes spiked to eight times the daily average. Both the daily
trading volume and the number of trades more than doubled the average from the previous three days.22
This led to a commensurate spike in recorded transactions, putting additional strain on OATS. While there
was considerable confusion in the financial industry on the day of the Flash Crash, for FINRA, that day
marked the beginning of months of extraordinary investigative effort.23

In order to help determine the root cause of the Flash Crash, FINRA and SEC analysts had to dig deeply
into a series of related, high-volume trades. To assist with the SEC’s investigation, FINRA analysts needed
data that was fragmented across various applications, siloed by business interest and departments. Using
the SEC’s request for information as an imperative, analysts were able to overcome access-related
challenges within FINRA.24 Apart from the cultural challenges, there were also technical obstacles, as there
was no way that FINRA’s existing hardware would be capable of supporting this unprecedented analysis.
The team needed to order, receive, and install new hardware appliances to support Flash Crash–related
analytic efforts.25

The computational aftermath of the Flash Crash made one thing clear to Randich and his team: something
needed to change—and in a dramatic fashion. As trading volumes continued to increase, Randich and his
team could virtually hear OATS creaking under the strain.

EXPLORING THE OPTIONS

In the aftermath of the Flash Crash, Randich and his team began to explore options for how to scale
FINRA’s systems to stay ahead of market volume. The agency had considerable expertise in designing and
operating massively parallel systems in local data centres. Leveraging this expertise, a coalition within
FINRA believed that the best path forward was to continue to invest in and extend the existing physical
environment. Meanwhile, a small skunk-works group of engineers had started experimenting with the
resources available in the public cloud and came up with the radical notion that cloud computing represented
the most viable path. A culture war was brewing.26

TAKING THE PROVEN PATH

FINRA had successfully operated OATS in its own data centres since 1998.27 As trade volumes increased
over time, the agency continued to add computing and storage capacity to meet the rising demand. This work
required significant investments in people, physical environments, and partnerships.

FINRA had grown an excellent infrastructure and operations staff with a demonstrated ability to build
analytics environments at scale. With over a decade of deep experience with the products being used, the
team knew how to build, operate, and grow the platform. Every aspect of OATS—from performance to
security and scalability—was well understood.

From a financial standpoint, the team was able to clearly articulate the commitment that was needed to
continue to scale its systems. Staying with the on-premises model—a model well understood for acquiring and
depreciating capital investments—promised to leverage existing data centre expenditures. In order to house
its systems, FINRA had two data centres: one in New York City, the other in Rockville, Maryland. These
facilities were separated by 480 kilometres and contained approximately three thousand logical compute
nodes. Some employees within the organization felt that pursuing an alternative strategy would be wasteful
in light of FINRA’s existing investment.

From a technical perspective, FINRA had the knowledge, expertise, and vendor relationships in place to
pursue an on-premises computing strategy. FINRA’s massive analytic environment required substantial
financial investments and, as such, merited leadership attention from top-tier technology vendors. At one
point, FINRA was the largest client of its platform provider.28 With that kind of commitment came access
to the vendors’ top technical talent—a critical asset when considering the expansion of the on-premises
OATS platform.

Culturally, the advocates of taking the proven path were confident that the status quo for operating OATS
was the best path forward. Remaining on-premises was viewed as an extension of existing
responsibilities and an affirmation of existing leadership in running highly scalable systems.29 These
traditionalists valued the stability that came with proven technologies; as such, public cloud options were
met with extreme skepticism. The public cloud was only beginning to gain traction in the financial industry,
and many members of FINRA’s engineering team felt that there was no way that a public cloud provider
could offer the same level of reliability and security as FINRA could.

Confident and capable, the on-premises contingent had garnered great respect among FINRA’s technical
leaders, and it was their proposal that Randich brought to the table the first time that he attended a FINRA
board meeting in April 2013.

MOVING TO THE PUBLIC CLOUD

FINRA was committed to encouraging its talent to stay abreast of technological advancements. It was
through that commitment to professional development that the public cloud first came to FINRA. As the
technology industry began to quickly adopt cloud computing platforms, Greg Wolff, FINRA’s long-time
architect for market regulation technology, took notice and began to encourage his team to explore the
cloud. The team did not have any plans to move FINRA’s technology to the cloud, but Wolff wanted the
team to stay engaged with new technologies. He encouraged his team to use their corporate credit cards to
rent time in AWS, a leading provider of public cloud services.30

As individuals gained cloud knowledge through this experience, they started to wonder what role cloud
computing might play in FINRA’s future. The agency was a data-driven organization, and Wolff deeply
understood that, at its core, FINRA relied on databases to perform its regulatory obligations.

Wolff spent time reflecting on the three core components of those databases: data storage, indexing, and
computational power. When he looked at the cloud, he saw virtually limitless storage and processing power,
which would allow FINRA to focus on the indexing. The cloud might allow them to shift the work of
maintaining the infrastructure to a third-party provider, enabling FINRA’s team to focus on the specialized
analytics of market regulation.31

As Wolff and his colleagues experimented more and more with cloud services, they became increasingly
convinced that cloud computing was the only viable future path for the OATS infrastructure. Over the
course of a few months, they went from being casual cloud experimenters to advocates for cloud computing.
They concluded that to successfully scale to meet growing demand, FINRA would have to either establish
a private cloud infrastructure or start using the public cloud.

In parallel, storm clouds were on the horizon for FINRA’s existing technology. FINRA’s technology vendors,
with whom the organization had enjoyed strong relationships, informed FINRA that they would be discontinuing
support for some of the organization’s older hardware, as it had reached its end of life and needed to be replaced.32
It became clear that the status quo was no longer viable; FINRA needed to make a move.

While there was a core group that had a vision in which the cloud was integral to FINRA’s ability to keep
pace with market volumes, senior management was understandably skeptical.33 The cloud approach was
radically different from what had been successful to date. Wolff knew that getting his supervisor on board
was critical. The opportunity presented itself when his supervisor needed to catch a flight from Washington
Dulles International Airport. Wolff knew that the trip from Rockville to Dulles took about an hour, so he
volunteered to drive his supervisor to the airport. Wolff had two reasons for offering to drive: the first was
that he and his supervisor were on good terms, and Wolff lived relatively close to Dulles; the second, and
far more important reason, was that he knew this would be a rare opportunity to get his boss’s undivided
attention for the hour that they would be in the car together.

As they wound south from Rockville along Interstate 270 to Interstate 495, Wolff was able to go into great
detail about his emerging vision for FINRA’s future in the cloud. He spoke methodically and clearly about
how an application designed for the cloud, rather than merely adapted to run in it, could scale
to FINRA’s current needs and beyond. His supervisor, a seasoned technology executive with a master’s degree
in electrical engineering, picked up on Wolff’s passion and started to see the logic in his reasoning.

When Wolff’s boss returned from his trip, he was still thinking about the conversation he had had with
Wolff in the car, and he convened a series of follow-up meetings with Wolff and other technologists. By
the time Randich assumed the CIO role in March 2013, Wolff and the other technologists were convinced
that the only viable path forward for FINRA was to move to the cloud.34

THE ELEPHANT IN THE ROOM

On his first day in the office, Randich was greeted by discordant perspectives on FINRA’s future technical
direction for market regulation technology.35 FINRA’s Infrastructure and Operations department was
putting the finishing touches on the presentation for the board of directors. The cloud-centric technologists
were diametrically opposed to the approach being proposed by the Infrastructure and Operations team.
Knowing that he needed to act quickly, Randich called a meeting to determine FINRA’s technical direction
shortly after presenting to the board in April. Randich was adamant that people would not leave the meeting
until directional consensus was reached.

Peter Boyle, a technology executive who worked closely with Wolff, believed strongly that FINRA could
not handle future demand without moving to the cloud.36 As they prepared for the direction-setting meeting
with Randich, Boyle and Wolff got together to establish a list of values that were important to FINRA.
Understanding that the cloud conversation was a point of contention, they were careful to craft the values
in terms of business capabilities, regardless of technology choice (see Exhibit 4).

When the day of the meeting came, the conversation quickly became heated. Some argued vehemently
against going with unproven architectures, while others started to debate the technical merits of moving to
the cloud. Wolff had decades of experience influencing technology decisions, and he believed that in order
to resolve conflict, the conversation had to be refocused on value statements and away from the technical
details. Wolff believed that when people agreed on the values that they held dear, the technical decisions
became fairly obvious.37

The assembled vice-presidents worked through the list of values and came to an agreement on the order in
which they were ranked. But just as the conversation was on the precipice of devolving into a debate on
which technical approach was best, Wolff interrupted the gathering with a crystallizing question: “Let’s
talk about the elephant in the room: are we going to build this thing on the premises, or are we going to
move it to the cloud?”

As Randich sat in his car before the July board meeting, the “elephant in the room” meeting was fresh in his
mind. He now needed to take the spirit of that conversation and confidently convey it to the board. He knew that
some members were already familiar with the cloud and were using it in their own organizations, whereas others
would be skeptical of putting FINRA’s crown jewels in AWS’s hands. How would he make his case?

EXHIBIT 1: COMPLAINTS DIRECTLY REPORTED BY INVESTORS TO FINRA

Note: FINRA = Financial Industry Regulatory Authority.


Source: “Statistics,” FINRA, accessed August 31, 2022, https://www.finra.org/media-center/statistics.

EXHIBIT 2: FINRA DAILY REGULATED TRANSACTION VOLUME

Note: FINRA = Financial Industry Regulatory Authority.


Source: Company files.

EXHIBIT 3: THE DOW JONES INDUSTRIAL AVERAGE ON THE FLASH CRASH, MAY 6, 2010

Source: Frank Holmes, “Regulators Dole Out Minor Penalties for Gold Market Manipulation,” Forbes, June 29, 2015,
https://www.forbes.com/sites/greatspeculations/2015/06/29/regulators-dole-out-minor-penalties-for-gold-market-manipulation/?sh=b71ac2d39691.

EXHIBIT 4: FINANCIAL INDUSTRY REGULATORY AUTHORITY’S CORE TECHNOLOGY VALUES

Reuse Business Domain Knowledge: I want to reuse my existing business domain knowledge so that I can
deliver an effective and comprehensive solution.

Operational Excellence: I want to build and maintain a reliable, automated, and fault-tolerant processing
environment so that meeting our service-level requirements is an everyday baseline occurrence.

Schedule: I want to propose a complete and believable solution and schedule that are within the required
timeline so that I can convince the customer that I will be able to deliver on that timeline.

Scalability: I want the ability to dynamically scale, both up and down, my systems capabilities (storage,
processing, number of nodes, etc.) so that I can respond to changing market conditions and regulator
requests in a timely and cost-effective manner.

Flexibility: I want to be able to rapidly and easily incorporate new technologies, financial products, and
market volumes so that I can adapt to dynamic market conditions and regulatory needs.

Disaster Recovery: I would like to have clear abstractions and separations between the infrastructure for
disaster recovery, storage management, data processing, and reporting so that I can leverage best-of-breed
tools and focus our development resources on our core business competencies.

Trust: I want to build and maintain a secure system so that I obtain the trust of the industry.

Industry Cost: I want the system to be easy to integrate with and submit data to so that the barriers to
adoption will be minimized.

Total Cost of Ownership: I want to choose the lowest total cost solution that meets my goals so that the
solution will be cost-effective for the industry.

Long-Term Viability: I want to focus on industry-proven companies and technologies so that I have a highly
reliable, available, maintainable, and extendable solution for a five-year system life cycle.

Reuse Technology: I want to reuse my existing technology and architectures so that I can leverage my
current knowledge and implementations as I build the system.

Source: Company files.

ENDNOTES
1. Steve Randich (chief information officer, Financial Industry Regulatory Authority), in discussion with the case authors, March 2, 2020.
2. “What We Do,” Financial Industry Regulatory Authority, accessed July 1, 2020, https://www.finra.org/about/what-we-do.
3. “Statistics,” Financial Industry Regulatory Authority, accessed June 15, 2021, https://www.finra.org/media-center/statistics.
4. “What We Do,” Financial Industry Regulatory Authority.
5. “Five Steps to Protecting Market Integrity,” Financial Industry Regulatory Authority, accessed July 1, 2020, https://www.finra.org/about/what-we-do/five-steps-protecting-market-integrity.
6. Financial Industry Regulatory Authority, Disciplinary and Other FINRA Actions, November 2010, https://www.finra.org/sites/default/files/DisciplinaryAction/p122448.pdf.
7. “Technology,” Financial Industry Regulatory Authority, accessed July 1, 2020, https://www.finra.org/about/technology.
8. Visa, Annual Report 2019, accessed July 1, 2020, 2, https://s1.q4cdn.com/050606653/files/doc_financials/2019/ar/Visa-Inc.-Fiscal-2019-Annual-Report.pdf.
9. Financial Industry Regulatory Authority, 2018 FINRA Annual Financial Report, 2018, accessed July 1, 2020, 8, https://www.finra.org/sites/default/files/2019-06/2018_Annual_Financial_Report.pdf.
10. Donald Bernhardt and Marshall Eckblad, “Stock Market Crash of 1987,” Federal Reserve History, November 22, 2013, https://www.federalreservehistory.org/essays/stock_market_crash_of_1987; Richard Partington and Graeme Wearden, “Global Stock Markets Post Biggest Falls since 2008 Financial Crisis,” The Guardian, March 9, 2020, https://www.theguardian.com/business/2020/mar/09/global-stock-markets-post-biggest-falls-since-2008-financial-crisis.
11. US Commodity Futures Trading Commission and the US Securities and Exchange Commission, Findings regarding the Market Events of May 6, 2010, September 30, 2010, 2, https://www.sec.gov/news/studies/2010/marketevents-report.pdf.
12. Mike Dillon (senior vice-president, Enterprise Delivery Services, Financial Industry Regulatory Authority), in discussion with the case authors, March 2, 2020.
13. United States District Court for the Northern District of Illinois, United States of America v. Navinder Singh Sarao—Criminal Complaint, February 11, 2015, https://www.justice.gov/sites/default/files/opa/press-releases/attachments/2015/04/21/sarao_criminal_complaint.pdf.
14. Kadhim Shubber and Claire Bushey, “‘Flash Crash’ Trader Avoids Further Jail Time,” Financial Times, January 28, 2020, https://www.ft.com/content/ea94b64c-41e1-11ea-a047-eae9bd51ceba.
15. Randich, discussion.
16. “Order Audit Trail System (OATS),” Financial Industry Regulatory Authority, accessed July 1, 2020, https://www.finra.org/filing-reporting/market-transparency-reporting/order-audit-trail-system-oats.
17. Álvaro Cartea, Sebastian Jaimungal, and Yixuan Wang, “Spoofing and Price Manipulation in Order Driven Markets,” Applied Mathematical Finance 27, no. 1/2 (2020): 67–98.
18. Greg Wolff (technology enterprise software architect, Financial Industry Regulatory Authority), in discussion with the case authors, March 2, 2020.
19. Wolff, discussion.
20. Dillon, discussion.
21. Dillon, discussion.
22. Andrei Kirilenko et al., “The Flash Crash: High-Frequency Trading in an Electronic Market,” Journal of Finance 72, no. 3 (June 2017): 967–998, https://doi.org/10.1111/jofi.12498.
23. Peter Boyle (technology executive, Engineering and Digital Workplace Service, Financial Industry Regulatory Authority), in discussion with the case authors, March 2, 2020.
24. Dillon, discussion.
25. Wolff, discussion.
26. Dillon, discussion.
27. “Order Audit Trail System,” Financial Industry Regulatory Authority.
28. Randich, discussion.
29. Dillon, discussion.
30. Wolff, discussion.
31. Wolff, discussion.
32. Wolff, discussion.
33. Randich, discussion.
34. Wolff, discussion.
35. Dillon, discussion.
36. Wolff, discussion.
37. Wolff, discussion.
