The current issue and full text archive of this journal is available on Emerald Insight at:

https://www.emerald.com/insight/1526-5943.htm

Risk and complexity – on complex risk management

Jan Emblemsvåg
Norwegian University of Science and Technology, Ålesund, Norway
Received 3 September 2019
Revised 12 January 2020
Accepted 27 January 2020

Abstract
Purpose – Industries lament the current situation of approaches that have resulted in huge losses in the face of complex risks. The purpose of this study is therefore to review complexity theory in the context of risk management so that it is possible to research better approaches for managing complex risks.
Design/methodology/approach – The approach is to review complexity theory and highlight those aspects of complexity theory that have relevance for risk management. Then, the paper ends with a discussion on which direction of research will be most promising for the aforementioned purpose.
Findings – The paper finds that the most challenging aspect is to identify the weak signals, and this implies
that the current approaches of estimating probabilities are not going to produce the desired results. Big data
may hold a solution in the future, but with legislation such as the General Data Protection Regulation, this
seems impossible to implement on ethical grounds. Hence, the most prudent approach is to use a margin of
safety as advocated by Graham roughly 70 years ago. Indeed, the approach may be to assume that a disaster
will take place and use risk management tools to estimate the impact for a given object.
Research limitations/implications – The literature review is a summary of a much larger work, and
in so doing, the resulting simplification may run the risk of missing out on important details. However, with
this risk in mind, the review holds rich enough discussion on complexity to be relevant for research about
complex risk management.
Practical implications – The current implication for practice is that the paper strongly supports the notion of using a margin of safety as advocated by Graham and his most famous disciple, Warren Buffett. This follows from the fact that context is king: risk management approaches must be applied in their right domain. There is no one right way. In the future, the goal is to develop a quantitative approach that can help the industry in pricing complex risks.
Originality/value – The main contribution of the paper is to bring complexity theory more into the
domain of risk management with sufficient details that should allow researchers to get conceptual ideas about
what might work or not concerning complex risk management. If nothing else, it would be a significant
contribution of the paper if it could help increase the interest in complexity theory.
Keywords Uncertainty, Emergence, Insurance industry, Enterprise risk management,
Margin of safety
Paper type General review

1. Introduction
The insurance industry, and in particular the reinsurance industry, has learned the lessons of complex risks harder than most. In their paper about managing complex risks, Sachs and
Wade (2013) from MunichRe discuss events that “[. . .] showed how quickly generally
accepted model assumptions and business practices can be overtaken by the complexity of
reality”.
The Journal of Risk Finance, Vol. 21 No. 1, 2020, pp. 37-54. © Emerald Publishing Limited, ISSN 1526-5943. DOI 10.1108/JRF-09-2019-0165

The first event is the 9/11 terrorist attack on the World Trade Center in New York in 2001. This event they describe as a watershed event for the reevaluation of risk. Not only was the scale of the losses surprising – US$32bn – but the aftermath led to insured losses in almost all classes of business. Furthermore, over the past 10 years, only 30 per cent of catastrophe losses were covered while the balance of US$1.3tn was borne by individuals, corporations and governments[1]. Furthermore, The Economist (2019a) states that the central reason for this cautiousness is that the industry is afraid of regulators who may punish them for taking on bad risks.
The second event is the subprime financial crisis of 2008, which clearly illustrated the risks of interdependencies in a global world. In a classic result found in complex systems, which will be discussed later, a set of factors – each being significant, but not disastrous – interplayed to produce a combined effect that far exceeded what traditional risk assessments would indicate. These events are what Taleb (2007) labels Black Swans, or high-impact, low-probability (HILP) events in more technical parlance. These events, and many more, have led The World Economic Forum (2016) and others to comment on the “[. . .] increasing volatility, complexity and ambiguity of the world”.
While the literature is scant on managing complex risks, there is some abstract, high-level but impractical advice offered, as Sachs and Wade (2013) lament. Advice such as “simplify,” “prepare for the next surprise” or “think the unthinkable” is arguably difficult to operationalize. An exception is Nason (2017), who discusses complexity and its impact on risk management. He rightfully argues that the holistic approach of enterprise risk management (ERM) is an improvement over the more traditional, functionally oriented approaches. ERM is discussed further below.
One area where there has been some movement concerns the so-called environmental, social and governance (ESG) risks, which are arguably complex in nature. Despite over US$3tn in institutional assets tracking ESG scores, these scores suffer from inconsistencies and incomparability to the extent that Hester Peirce of the US Securities and Exchange Commission (SEC) described it as “Labelling based on incomplete information, public shaming, and shunning wrapped in moral rhetoric,” according to The Economist (2019b). Hence, finding a reliable way of handling complex risks is growing in importance, not only to prevent disasters but also in regular investing.
Therefore, in this paper, the current state of the art is reviewed. Because the core
of the challenge lies in understanding complexity, the relationship between risks and
complexity is the main focus. Until this is properly understood, it is unlikely that any useful
improvements in risk management tools and approaches can be developed.
Before continuing, it should be noted that using the term complex risk management
(CRM) does not come without possible confusion. The confusion arises because a complex
management of risk is not the same as managing complex risks – indeed, it is the very
antithesis. Complex management is easy to achieve, but it is risky. Indeed, “Complex systems almost always fail in complex ways,” as the Board that investigated the loss of the Columbia space shuttle noted, quoted by the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling (2011). After conducting a thorough review of the Deepwater Horizon disaster, the commission concluded that:
The blowout was not the product of a series of aberrational decisions made by rogue industry or
government officials that could not have been anticipated or expected to occur again.
The report points to the fact that “[. . .] the root causes are systemic and, absent significant
reform in both industry practices and government policies, might well recur.” What we need
is effective management of complex risks, labeled here as CRM.

2. Complex risk management – state-of-the-art


The state-of-the-art can probably be best summarized by Sachs and Wade (2013) when they
in the context of the major events mentioned initially in this paper state that these events
“[. . .] showed how quickly generally accepted model assumptions and business practices
can be overtaken by the complexity of reality.” Since 2013, not much has happened in terms of improving the methods. However, much has been written about the problem. In fact, on the financial crisis alone, a great deal has been written:
Dozens of books and hundreds of articles in esteemed academic journals have been written about
the causes of the financial crisis. The competing interpretations of the causes reflect the growing
difficulty of identifying cause and effect in complex systems (Goldin and Kutarna, 2017).
Furthermore, The Economist (2018) asserts that out of a decade-long process of writing and thinking, only a very crude understanding of complexity and risk has been developed and implemented, and the financial industry has only become modestly safer, as not all lessons have been learned. Recurrence is therefore a real risk.
Clearly, there is significant potential for improved risk management in financial
institutions. Unfortunately, financial risks can easily be triggered by outside events such as
terrorism and so on (Goldin and Kutarna, 2017), which means that we need a broad
approach when managing complex risks. This shows that addressing issues outside the
narrow domain of finance is important.
Indeed, there are two simple and effective ways of addressing some of these financial
risks – without either directly managing the risks or entangling regulators into huge and
administratively complex and expensive codes (which may have loopholes anyway):
 • Limit the size of financial institutions. The Glass-Steagall Act of 1933 did this, but it was repealed in 1999 after years of deregulation. Indeed, Johnson and Kwak (2010) suggest limiting the size of commercial banks to a maximum of 4 per cent of GDP and 2 per cent of GDP for investment banks. This would reduce the size of the biggest banks and prevent them from becoming “too big to fail.” In comparison, the banking sector in the UK constituted almost 10 per cent of GDP at the onset of the financial crisis, Goldin and Kutarna (2017) estimate.
 • Limit the leverage of financial institutions and hence increase their ability to absorb losses before failing. In fact, if bank leverage were capped at 3:1 so that equity constituted 25 per cent of total liabilities, then it would have no impact on banks’ ability to make loans while vastly improving the stability of the financial system, according to Admati and Hellwig (2013).

So much for finance; is the state of the art any better in other domains such as engineering? Unfortunately, it is not, as argued by Emblemsvåg (2008). Repeatedly, we can read about cases where professionals have greatly underestimated probabilities and/or impact. Then we have the problems associated with qualitative approaches as discussed by Emblemsvåg and Kjølstad (2002), and exemplified by Backlund and Hannu (2002), who provide a telling case where three different consulting companies were hired to assess the risks of the same object and reached three very different conclusions. There are some remedies to this, as discussed by Emblemsvåg (2010); however, the case of complex risks remains unsolved.
The fact is that all the traditional risk management approaches – qualitative,
quantitative/probabilistic and statistical alike – were derived from a linear conception of
cause and effect. Here, risk is merely the combination of probability and impact as found in
early guidelines such as the 1992 COSO report in the USA (COSO, 1992) and the 1999
Turnbull Report in the UK (ICAEW, 1999) and in guidelines to this very day.
Building on the 1992 COSO report, however, COSO introduced in 2004 their ERM
framework, with the underlying philosophy that:
[. . .] value is maximized when management sets strategy and objectives to strike an optimal
balance between growth and return goals and related risks, and efficiently and effectively deploys
resources in pursuit of the entity’s objectives (COSO, 2004a).
This is perhaps one of the first frameworks that try to take a holistic view of risk and therefore moves away from the simple, linear conception of risk to one more aligned with the complex nature of many risks. Although this framework has been widely recognized as a respected authority on ERM, as Pierce and Goldstein (2016) and others claim, COSO found in their 2010 survey that corporations are still immature concerning ERM (Beasley et al., 2010). In 2017, COSO (2017) issued an amended framework in response to the increasing complexity of risks, more clearly connecting ERM with strategy and anchoring it to the context of a corporation’s performance.
The Turnbull report is another ERM framework made at the request of the London Stock Exchange (LSE), and it became a mandatory requirement for all corporations listed on the LSE before 2001, Jones and Sutherland (1999) write. These guidelines were critical events in
the standardization and reconceptualization of internal control as risk management,
according to Power (2007). The interested reader is referred to Power (2007) who provides an
excellent overview and discussion of COSO, Turnbull and more.
These frameworks have certainly resulted in improvements because they take a holistic view promoting a corporate-wide approach to risk (Nason, 2017), but given that all the aforementioned disasters in this paper took place well after these publications, it is clear that there is much to amend. One reason is that these guidelines and frameworks are high level and rely on traditional ways of assessing risk (COSO, 2004b), which have the shortcomings discussed earlier.
This means that there exists currently no approach that can estimate or manage complex
risks reliably well – it is much more art than science. As Smith and Irwin (2006) note:
[. . .] in dealing with the implications of complex systems, risk management will need to move
beyond a linear, cause and effect approach [. . .] [. . .] incorporating the complexity literature in the
process.
Hence, a first step in our quest to amend the situation must be to review complexity and risk, which is done next, in Section 3. Based on this discussion, it will become evident what is
theoretically possible and what is realistically achievable. Future research must be to find
the best fit between possibility and realism, as discussed in Section 4.

3. On complexity and risk


To understand complexity and risk, we must not forget that uncertainty must also be
tackled. Thus, we must identify effective definitions of all three terms – complexity, risk and
uncertainty. Because complexity is an inherent property, it makes sense starting there.
“Complexity” is a word we use with great sloppiness. In fact, Kelly (1995) claims that
“We are so ignorant of complexity that we haven’t yet asked the right question about what it
is.” True, and we have great problems in distinguishing “complex” from “complicated” in a
rational fashion. Even Webster’s Encyclopedic Dictionary of the English Language falters
here. Furthermore, in the scholarly literature on complexity, definitions abound. Horgan (1995)
has recorded more than 30 different definitions. The bulk of definitions can, however, be put
into two general clusters of definitions, Maguire and McKelvey (1999) claim:
 • those that relate complexity to information; and
 • those that emphasize “time” and “space.”
This proliferation of definitions is not advantageous as it becomes difficult to discuss complexity.
In this paper, we therefore start from its roots. In Latin, the word complexus means something like “entwined” or “twisted together.” Thus, complexity implies that many elements interact to produce a common output. However, the elements do not merge into one element – they form a network with nodes, as Smith and Irwin (2006) and many others note. Thus, there are no rigid or strong cause-and-effect relations between the elements which would turn them into one. However, there are enough couplings between the elements to produce a common output. The term “coupling” was coined by Karl E. Weick in the context of activities of one part of an organization being more or less independent of the activities of the other parts of that organization (Weick, 1976).
Here, an important point should be made. In traditional theories, cause-and-effect
relations (or causations in short) imply that “if A takes place, then B will happen, which in
turn will result in C” – a chain of events ruled by mechanistic relations, as it were. Hence, it is
believed that if we know the initial state of a system, then we can obtain the next state given
that we know how the various variables of the system interact. In complexity theory,
however, a coupling is not a relation that produces a clear chain of events. There is a
network of events, often circular so that they feed back into each other, and that together
produce the final output. Capra (1996) offers an intuitive and non-technical discussion.
Consequently, the final state cannot be known accurately.
In this paper, this distinction between causations and couplings is merged into one by
simply speaking of the strength of the couplings. Thus, loose couplings will result in a weak
causation while strong couplings will result in strong causations, which is similar to the
traditional cause-and-effect relationship.
The essence of complexity lies in the strength of these couplings. Complexity is therefore
a property that depends on the varying strengths of the couplings within the system
resulting in completely different behavior. Therefore, it cannot be, as some argue, that
complexity essentially arises because of lack of information. It is the other way around.
Complexity is an inherent property of real-life systems which leads to lack of information
because of the uncertainty associated with the possible outputs from such systems.
Therefore, precision and complexity are incompatible with respect to providing meaningful
statements about something. The mathematician Lotfi A. Zadeh formulated this fact in a
theorem called the Law of Incompatibility:
As complexity rises, precise statements lose meaning and meaningful statements lose precision.
This theorem, as quoted by McNeill and Freiberger (1993), has vast implications for
information and knowledge about systems and processes that are complex. However, an
equally crucial side effect of complexity is variation. This variation is not because of lack of
information but an inescapable result of the loose couplings in the system. We can have
complete information about a process, but we cannot completely eliminate the variation in
its output.
Furthermore, from the Law of Incompatibility, we understand that there are limits to how
precise decision-support both can and should be (to avoid deception). In fact, increasing the
uncertainty in decision-support material to better reflect the true uncertainty will lower the
actual risk as shown by Emblemsvåg (2001, 2003).
There are many more implications to understand when going from a traditional outlook
to one based on complexity theory, as exemplified in Table I. Naturally, we cannot discuss
all these in this paper but some will be discussed that are relevant for the purpose of the
paper.
Table I. Changes in outlook

Traditional outlook | Complexity-based outlook
Reductionism | Holism
Linear causality | Mutual causality
Objective reality | Perspective/subjective reality
Observer outside the observation | Observer in the observation
Determinism | Indeterminism
“Survival of the fittest” | Adaptation/coevolution
Linear relationships | Nonlinear relationships
Either/or thinking | Degree of thinking
Focus on directives | Focus on feedback
Newtonian physics perspectives | Quantum physics perspectives
Modern | Postmodern
Hierarchy | Networks
Prediction | Understanding/sensitivity analysis/explanation
Language as action | Language as representation
Logic | Paradox
Equilibrium, stability, structure | Dynamic disequilibrium, pattern, organization
Averages | Variation
Local control | Global control
Behavior specified from top to down | Behavior emerges from below
Focus on results or outcomes | Focus on ongoing behavior
Specialists | Generalists
Narrowly applicable theory | Widely applicable theory
Reversible time | Irreversible time
Generation of symbols | Transmission of symbols
Material over immaterial | Immaterial over material

Source: Adapted from Dent (1999)

So far, we have looked at complexity solely as an inherent property of certain systems, but in our daily lives, complexity also refers to a type of behavior that we can witness. Some complex processes often exhibit simple behavior whereas many simple systems exhibit complex and even chaotic behavior (Ilachinski, 1996). Recall, for example, the Mandelbrot fractal; it is very simple by definition but exhibits extremely complex/chaotic patterns (Mandelbrot, 1980). A sailboat, in contrast, is a complex system but usually exhibits fairly simple behavior.
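The Mandelbrot example can be made concrete with a short sketch (a standard construction, not taken from this paper): the defining rule is a single line of arithmetic, z → z² + c, yet the set it generates has an infinitely intricate boundary.

```python
# Sketch: the Mandelbrot map is defined by one line, z -> z*z + c, yet the
# boundary of the set it generates exhibits extreme complexity at every scale.

def mandelbrot_escape(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| exceeds 2, or max_iter if it never does."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# A coarse ASCII rendering of the set: '#' marks points that never escape.
for im in [x / 10.0 for x in range(10, -11, -2)]:
    row = ""
    for re in [x / 20.0 for x in range(-40, 13, 2)]:
        row += "#" if mandelbrot_escape(complex(re, im)) == 100 else " "
    print(row)
```

The entire "system" is one quadratic update rule, which is exactly the point: simple definitions can produce behavior of arbitrary apparent complexity.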
Because complexity is both a property and a type of behavior, distinguishing between
the two may be confusing in our daily language. To reduce this confusion, Figure 1 attempts
to illustrate the main types of complexity. Complexity as a property refers to the strengths of
the couplings between the various elements that constitute the system, whereas complexity
as a behavior essentially refers to our degree of ignorance about the output from the system.
With this in mind, it becomes useful to define four types of systems where each type refers
to a certain type of behavior according to its inherent complexity.
The simplest case is complicated systems in which the strength of the couplings is so strong
that one might even consider the system as not complex at all – it is merely complicated. Such
systems can be analyzed very accurately and hence the degree of ignorance is very low. Such
systems are “ordered” because the behavior is predictable. A prototypical example is a mechanical clockwork in which there is a huge number of tiny parts, but all the couplings between the parts are known accurately and the output becomes predictable.
Figure 1. The four Cs of complexity

The next type of system is complex systems, that is, systems where the couplings are loose. Maneuvering a sailboat is an example of a complex activity. The reason is that numerous factors such as wind, waves, currents and angles interplay in a loose manner so the result cannot be known accurately. However, the couplings are not as loose as in chaotic systems, or systems of chaos. As a concept, chaos was born in 1892 with Henri Poincaré’s discovery that certain orbits of three or more interacting celestial bodies can exhibit unstable and unpredictable behavior, but the term was coined in 1975 by Li and Yorke to denote the random output of so-called “deterministic mappings” (Ilachinski, 1996).
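The idea of a deterministic mapping with seemingly random output can be illustrated with the logistic map, a standard textbook example (not drawn from this paper). The update rule is fully deterministic, yet for the parameter value r = 4 it produces irregular output and extreme sensitivity to initial conditions, the hallmark of “technical” chaos:

```python
# Sketch (standard textbook example, not from this paper): the logistic map
# x_{n+1} = r * x_n * (1 - x_n) is a deterministic mapping, yet for r = 4 it
# produces seemingly random output and extreme sensitivity to initial conditions.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 30) -> list:
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial difference of one part in a million

# Early on the two trajectories agree; after a few dozen iterations they diverge.
print(f"difference at step  5: {abs(a[5] - b[5]):.6f}")
print(f"difference at step 30: {abs(a[30] - b[30]):.6f}")
```

Nothing random enters the computation; the unpredictability comes entirely from the map itself, which is what Li and Yorke meant by random output of a deterministic mapping.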
A technical note is required because of the different usage of terms in the literature. Note
that chaos in the literature has a very technical interpretation that does not fit into
complexity theory – “technical” chaos derives from successive differential changes that are
deterministic whereas chaos in this paper refers to very loose couplings. Technical chaos is
therefore a special case of complicated systems, whereas chaos in Figure 1 refers more to
chaos as most people will intuitively describe it. Hence, the difference between complex
systems and chaotic systems in this paper is a matter of degree.
A significant finding in the literature is that chaotic systems in a “technical” sense tend to
be intrinsically sensitive to initial conditions of the system. In this paper, this means that
complicated systems can turn chaotic under certain circumstances. Another intriguing
finding noted by Ilachinski (1996) is that a complex system can turn chaotic if the system is
not allowed to relax between events.
Furthermore, as a rule of thumb, complex systems consist of many connected elements but exhibit simple, stable and quite predictable behavior, while chaotic systems typically consist of only a very limited number of elements but produce unpredictable behavior in the long run. In the short run, however, “Given sufficient data, time series analysis permits one to make short-term predictions about a system’s behavior, even if the system is chaotic. Moreover, these predictions can be made even when the underlying dynamics is not known,” according to Ilachinski (1996). The success of The Weather Company is a testimony to this – by using Big Data, it has become big business, according to The Economist (2019a). Most corporations and social systems are complex because they consist of a large number of elements yet exhibit quite stable – albeit somewhat uncertain – behavior.
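Ilachinski's point that short-term predictions of a chaotic series are possible without knowing the underlying dynamics can be sketched with the classic “method of analogues” (a standard technique, not taken from this paper): to predict the next value, find the closest value in the recorded history and use its successor.

```python
# Sketch (standard "method of analogues", not from this paper): one-step-ahead
# prediction of a chaotic series without knowing its dynamics. We look up the
# historical value closest to the current one and predict its recorded successor.

def logistic_series(x0: float, n: int, r: float = 4.0) -> list:
    """Generate n values of the chaotic logistic map as a stand-in data source."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def predict_next(history: list, current: float) -> float:
    """Predict the next value as the successor of the nearest historical analogue."""
    best = min(range(len(history) - 1), key=lambda i: abs(history[i] - current))
    return history[best + 1]

series = logistic_series(0.3, 2000)
history, test = series[:1990], series[1990:]

# One-step-ahead predictions stay accurate even though the series is chaotic;
# it is long-horizon prediction that chaos rules out.
for i in range(len(test) - 1):
    err = abs(predict_next(history, test[i]) - test[i + 1])
    print(f"step {i}: prediction error {err:.5f}")
```

The short-run accuracy depends only on the history being dense enough near the current state; no model of the dynamics is ever estimated.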
Note that “unpredictable” in Figure 1 refers to a state, or condition, in which we know an output will take place but have no idea about the type of output and/or its magnitude. “Uncertain,” on the other hand, refers to a state in which we lack complete information about the type of output and its magnitude, but we know an output will take place and we have some idea about the type and the magnitude. With these definitions of uncertainty and unpredictability, we see that there is a major implication for the management of such systems. From the definition of Joia (2000) that “Information is data with attributes of relevance and purpose,” we understand that although we may have data from chaotic systems, their unpredictability renders them useless for management.
The last type of system – chance systems – comprises systems that produce output randomly, or by chance. This occurs because the couplings are so weak – or simply do not exist – that they do not produce any measurable effect on the macro level. A good example is the static noise in radios. For this paper, we therefore focus on complex and chaotic systems, while keeping in mind that both complicated and chance systems occur in reality and may be necessary to consider at times.
It is crucial to realize that the properties of the systems discussed above are intrinsic
properties of the systems and do not arise because of lack of information. Lack of
information, beyond what is feasible, will further reduce our understanding of these systems
and make them harder to manage. Having as much information as possible is therefore
beneficial, but recall Zadeh’s Law of Incompatibility – there are limits to what can be
actually known regardless of the amount of data we have.
This brings us to complex adaptive systems. According to Ilachinski (1996), complex adaptive systems are “macroscopic collections of simple (and typically nonlinearly) interacting units that are endowed with the ability to evolve and adapt to a changing
environment.” With reference to Figure 1, this means that complex adaptive systems are
essentially open, higher level complex systems, where “open” means that it adapts to the
environment. Coleman (1999) finds that all corporations are complex adaptive systems. For
simplicity, the distinction between complex systems and complex adaptive systems is not
made in this paper.
The properties of complex systems produce some “side effects” that are imperative to
understand in the context of risk management. These include the following:
 • Predictions are possible.
 • Long-term trends exist.
 • Linearity is rare.
 • Everything is connected.
 • Context is king.
 • Adaptation and coevolution is the norm.
 • Behavior is both emergent and resulting.

Obviously, the allotted space in a paper is far too little to discuss all this in detail. However, a
brief discussion of these “side effects” is provided in the next seven sections, but recall that
many of them interrelate. Furthermore, the whole system (macro state) impacts the
constituents of the system (micro states) and vice versa. Therefore, Ilachinski (1996) asserts
that in dealing with complex systems, we must always think of the whole as well as the
pieces – anything less is self-deception. Of course, thinking about everything is not possible
so judgment is called for in striking the right balance between holism and synthesis, on the
one hand, and reductionism and analysis, on the other hand.

3.1 Predictions are possible


Since time immemorial, humans have tried to foresee the future and make plans, and this is in many ways what predictions are all about, as exemplified by practices such as budgeting and strategic planning. Unlike many writers that use complexity theory to condemn such practices (Wheatley, 1999), here it is argued that with the right context these approaches can still be useful.
In relatively complicated systems, such as public administration and mature industries, predictions make sense because the system is relatively stable, and the time-frame of these predictions can therefore be relatively long. Despite the fact that an “[. . .] average billion-dollar company spend as many as 25.000 person-days putting together the budget” (Loren, 2003), such predictions are only ballpark right and often more or less wrong in the details. They therefore quickly become politicized, with a multitude of negative side effects – see Jensen (2001) for an excellent discussion.
More reliable approaches are to identify alternative futures or what the future will not be. For that, life-cycle costing is a well-known approach (Emblemsvåg, 2003), and various types of financial analyses involve Monte Carlo simulations to handle the uncertainty (Emblemsvåg and Bras, 1998). Such approaches are less susceptible to getting trapped in the numbers game – they become more risk-oriented and they do not prescribe a “right” answer but rather a range of possibilities. An approach that can take an even grander outlook is scenario planning.
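The role of Monte Carlo simulation in such analyses can be sketched with a minimal example (hypothetical cost figures, standard technique; not the authors' actual model): uncertain inputs are drawn from distributions so that the output is a range of possibilities rather than a single point estimate.

```python
# Sketch (hypothetical numbers, standard Monte Carlo technique): a life-cycle
# cost is simulated from uncertain inputs so the result is a distribution of
# possible outcomes rather than one "right" answer.
import random
import statistics

random.seed(42)  # reproducible draws for illustration

def simulate_lifecycle_cost() -> float:
    """One draw of a hypothetical 10-year life-cycle cost (monetary units)."""
    acquisition = random.triangular(900, 1300, 1000)  # low, high, mode
    annual_opex = random.triangular(80, 160, 100)
    disposal = random.triangular(30, 90, 50)
    return acquisition + 10 * annual_opex + disposal

costs = sorted(simulate_lifecycle_cost() for _ in range(10_000))

# Report a range, not a point estimate: the median and a 90% interval.
print(f"median cost : {statistics.median(costs):,.0f}")
print(f"5th-95th pct: {costs[500]:,.0f} - {costs[9500]:,.0f}")
```

Triangular distributions are used here only because they require nothing more than a minimum, maximum and most likely value; any distribution supported by the data would do.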
The success of Royal Dutch Shell in anticipating the nature (if not the timing) of the 1973
Oil Crisis by using scenario planning (Wack, 1985) as a strategic tool has been important in
bringing scenario planning forth to a wider audience as a sound tool for strategic thinking
(Schwartz, 1997) and ultimately a source for learning (De Geus, 1988). In fact, surveys in the
early 1980s report that about half of the largest US- and European-based corporations were
incorporating scenario planning in their strategy process (Linneman and Klein, 1983) and
the practice has since proliferated (Hodgkinson et al., 1999; The Economist, 2004).
The most practical way of implementing such predictions in a corporate environment is
by using statistical tools, which has been the choice in financial risk management. However,
complex risks make this approach difficult to apply because of both complexity and lack of
data. From this discussion, we understand that short-term predictions are possible, but that
predictions with longer time frames are time-consuming and difficult to achieve.

3.2 Long-term trends exist


While it may seem no surprise that complicated and complex systems exhibit long-term trends, it may be surprising to learn that the same is true for chaotic systems. Chaos theory, or complexity theory for that matter, has identified a phenomenon called attractors that can explain this. Unfortunately, there are no universally accepted definitions for attractors, according to Ilachinski (1996), but an attractor can be thought of as the long-term behavior of a system. As we shall see, all social systems have attractors. A physical interpretation of this is that most social systems tend to gravitate toward certain states for a given context.
There are three types of attractors:
(1) fixed point attractors;
(2) periodic attractors; and
(3) strange attractors – also known as fractal attractors.

Fixed point- and periodic attractors can serve as rough approximations for some phenomena
in social systems, such as the long cycles in the world economy, but they are rare in real life.
Therefore, strange attractors are the most interesting for this paper. There is nothing
strange per se about these attractors – they just happen to appear in situations we might
consider strange. For example, in chaotic systems such as the weather, we can identify a
behavior that repeats itself – albeit with variations each time. Long-term trends therefore
exist.
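The bounded long-term behavior of chaotic systems can be sketched with the logistic map, a standard textbook example (an illustration, not taken from this paper): trajectories from nearly identical starting points diverge quickly, yet both remain confined to the same attractor.

```python
def logistic_trajectory(x0, r=4.0, steps=200):
    """Iterate the logistic map x -> r*x*(1 - x) and return all visited states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # a minutely different starting point

# Sensitivity to initial conditions: the two trajectories soon diverge widely...
print(max(abs(x - y) for x, y in zip(a, b)) > 0.5)
# ...yet both stay on the same bounded attractor: every state lies in [0, 1].
print(all(0.0 <= x <= 1.0 for x in a + b))  # True
```

The "trend" here is not a repeated sequence of states but a bounded region the system keeps revisiting, which is exactly the sense in which long-term trends survive chaos.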
Strange attractors also appear in systems that are not chaotic, but merely complex.
Usually, strange attractors look geometrically complicated and can take on intricate
shapes, such as the Mandelbrot fractal. However, the really interesting fact for this
paper is that “they tend to be of very low dimensionality, even in a high-dimensional
phase space” (Capra, 1996), where phase space is understood as a mapping of all possible
states the system can have. This means that even though very complex systems consist of
many elements, their behavior typically revolves around long-term trends that are governed
by only a few variables.
A very good example of this is how the invention of containerization revitalized the
marine transportation industry – it reduced the port downtime from 12 days to 12 h
(Flaherty, 1999). Arguably, the whole industry was transformed because of one single
variable – the enormous cut in the unproductive downtime in ports. This story also
exemplifies that dramatic shifts can take place in the phase space. These points are
referred to as bifurcation points, and in real life, they can occur because of, for
example, major innovations or terrorist actions. These events are usually not driven by
what seems obvious at first sight, because small changes in variables to which the complex
system is sensitive can result in enormous changes. In other words, by carefully
understanding the social system, we can reduce the probability of unanticipated
changes, but looking for “the next big thing” is futile. This is obvious because the next
big thing is likely to be initially small and often overlooked by those who concentrate
on the “big picture”.
Unfortunately, this means that trying to predict complex risk events over the long term is
impossible because their roots are normally small things that we ignore. At best, we can
identify their drivers.
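The idea of a bifurcation point can likewise be illustrated with the logistic map (a stand-in illustration, not an example from the paper): a small change in a single control parameter qualitatively changes the system's long-term behavior.

```python
def settle(r, x0=0.5, burn_in=2000, sample=50):
    """Iterate the logistic map x -> r*x*(1 - x) past its transient, then
    collect the distinct long-run states (rounded to 4 decimals)."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    states = set()
    for _ in range(sample):
        x = r * x * (1.0 - x)
        states.add(round(x, 4))
    return states

# Just below r = 3.0 the system settles on a single fixed point...
print(len(settle(2.9)))  # 1
# ...just above, it oscillates between two states: a bifurcation has occurred.
print(len(settle(3.2)))  # 2
```

The qualitative shift comes from a tiny parameter change, not from any dramatic external shock, mirroring the point that the "next big thing" starts small.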

3.3 Linearity is rare


Scores of management and scientific theories are based, directly or implicitly, on linear
approximations. As Bertrand Russell so eloquently expressed it: “Although this may seem a
paradox, all exact science is dominated by the idea of approximation.” Thus, regardless of
how attractive linearizations are, they are only valid for small ranges. In fact, when we
extend linearity outside those ranges, “[. . .] policies often bring about exactly the
effects against which they were trying to guard” (Begun, 1994), and we face the law of
unintended consequences or even abuse.
Hence, linear thinking is potentially very dangerous unless we have a thorough
understanding of the range for which the linearization is valid. The problem with linear
thinking is that it essentially “flattens” the possible outcomes found in reality – the extremes
are ignored and complex relations are oversimplified. In general, useful linearizations are
those that involve practices where “the more, the merrier,” while the potentially harmful
linearizations typically require a more careful approach – not too little, too much, too late,
too soon or too anything.
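A minimal numerical sketch of this point, with entirely hypothetical numbers: a linearization fitted at one operating point is accurate nearby and wildly wrong far away.

```python
import math

def saturating_demand(p):
    """A hypothetical nonlinear response: demand saturating with effort p."""
    return 100.0 * (1.0 - math.exp(-p / 10.0))

# Linearize around p0 = 1 using the local slope (central finite difference).
p0, h = 1.0, 1e-6
slope = (saturating_demand(p0 + h) - saturating_demand(p0 - h)) / (2.0 * h)

def linearized(p):
    """Tangent-line approximation, valid only near p0."""
    return saturating_demand(p0) + slope * (p - p0)

# Near p0 the linear model is accurate to within a fraction of a unit...
print(abs(linearized(1.5) - saturating_demand(1.5)))  # about 0.1
# ...far from p0 it extrapolates straight past saturation and is wildly wrong.
print(linearized(100.0) > 150.0)  # True: the line keeps growing
print(saturating_demand(100.0) < 100.0)  # True: the real response saturates
```

Extrapolating the tangent line is precisely the "flattening" described above: the linear model cannot see the saturation that dominates far from the fitted range.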
There is also another aspect of linear thinking that is highly damaging, and it is well
known in decision-making theory (March, 1994). It turns out that:
Decision makers tend to exaggerate their control of their environment, overweighting the impacts
of their actions and underweight the impact of other factors, including chance. They believe
things happen because of their intentions and their skills (or lack of them) more than because of
contributions from the environment. This tendency is accentuated by success.
Furthermore, there is usually a big gap between what gets measured and rewarded and
what actually drives performance. In fact, 70 per cent of the CEOs Jonathan Low surveyed
admitted that such a gap exists (Chatzkel, 2001). Linear thinking is essentially an arrogant
approach toward reality, and it will, by definition, never work as long as social systems are
complex or chaotic.

3.4 Everything is connected


That “everything is connected” is a worn-out phrase, but it is true. It is worn out because
it has been used too many times in too many contexts and emphasized too much while
often actually being ignored in decision-making. All too often, decision-makers are
forced to take the short view, whereby it is almost impossible to identify complex
risks.
Vivid examples are found in the financial industry. On average, US equities in 2002
were held for less than one year before they were sold, whereas in 1960, the average
holding period was eight years, according to André Perold, quoted by The Economist
(2003). Furthermore, the average tenure of an American CEO is also getting shorter
(Pollitt, 2003; The Economist, 2001). Finally, out of 401 CFOs surveyed, 78 per cent
admit that they would sacrifice economic value to achieve “smooth earnings” (Smith,
2004).
Another interesting aspect of interconnectedness is that:
[. . .] all actors are isomorphic [. . .] [which] does not mean that all actors have the same size but
that a priori there is no way to decide the size since it is the consequence of a long struggle,
according to Callon and Latour (1981).
Thus, what matters in complex systems is not the apparent size of an element, but the
importance of the role the element plays. A small, important element may therefore carry
greater impact and risks than a large, unimportant element.
Failing to notice the interconnectedness of social systems can also be the product of
ignorance and linear thinking, as previously discussed. However, we should always
remember that “everything” is interconnected and that minute changes somewhere can
cause major upheavals elsewhere. Thinking about the big picture is therefore to pay great
attention to the right details that shape the big picture. Too often, however, thinking about
the big picture becomes synonymous with overly aggregated financial numbers and
speculations about cause-and-effect.

3.5 Context is king


According to Davenport et al. (1998), knowledge can be defined as “information
combined with experience, context, interpretation and reflection.” Hence, knowledge
in social systems is contextual. Nothing in a social system can be understood without
its context. Nothing exists by itself – everything depends on a context. The old
worn-out statement that “context is king” is true. While this sounds obvious and
trivial, it is in fact often ignored partially or even completely. The examples are
endless, but to illustrate, we can look at two common issues that are
context-dependent – quality and time.
Concerning quality, the Japanese quality guru Genichi Taguchi “defines quality as
consistency of performance around the target (nominal or average value when
available)” (Roy, 2001); not as “the best,” “perfect” or any extreme measure like that.
Quality can therefore not be judged in isolation – a reference is needed, i.e. context.
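Taguchi's quadratic loss function, L(y) = k(y − m)² for a measurement y against target m, expresses this contextual view of quality numerically; the constant k and the measurements below are hypothetical, chosen only for illustration.

```python
def taguchi_loss(y, target, k=1.0):
    """Taguchi quality loss L(y) = k * (y - target)**2: loss grows with
    deviation from the target, not with crossing a pass/fail threshold."""
    return k * (y - target) ** 2

target = 10.0
# A part exactly on target incurs no loss...
print(taguchi_loss(10.0, target))  # 0.0
# ...and consistent small deviations beat an occasional large one, even if
# every individual part would "pass" a simple tolerance check.
steady = [taguchi_loss(y, target) for y in (9.9, 10.1, 9.9, 10.1)]
erratic = [taguchi_loss(y, target) for y in (10.0, 10.0, 10.0, 10.8)]
print(sum(steady) < sum(erratic))  # True: consistency wins
```

The loss is meaningless without the target, which is exactly the point: quality is judged relative to a reference, never in isolation.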
When time is the scarcest resource, it is more important to do something than to do the
right thing. As a crisis manager who has salvaged companies on the brink of disaster for
almost 30 years, James (2002) has seen what ignoring the time factor can lead to:
In fact, not taking a decision is worse than making the wrong one because it is often easier to
manage your way out of a bad decision than to recover from the consequences of delay.

Again, context is king.

3.6 Adaptation and coevolution are the norm


All complex systems rely on feedback to change and adapt (Ilachinski, 1996). In contrast to
competition, complexity theory points to a concept called coevolution, or co-adaptation. The
difference is that adaptation and coevolution occur in an environment where everybody tries
to adapt to each other as well as the environment, whereas in competition, the situation is
much more static in the sense that everybody has a set of characteristics and the “strongest”
wins. The whole idea of competition is not only static, but even erroneous. In nature, often
the surviving species over time are not the strongest but those most responsive to change –
the agile ones.
This is also the case in society – the fuel is learning and acceptance of change.
Adaptation and coevolution require learning and change, and the way we relate to each other
is more coevolution than sheer competition because we constantly learn from each other and change.
For example, when new technologies arrive, all the corporations in an industry tend to
follow the herd. This herding mentality has become clearer in recent years as we have
entered an age of imitation, according to Bonabeau (2004). The imitation seems to be driven
by four factors:
(1) the need for safety;
(2) the need to conform;
(3) the belief that others know better; and
(4) greed.

It has always been like this, but in an increasingly interconnected world – and therefore a
world with an increasing proliferation of effective feedback loops – our ability to imitate has
become much more effective. Consequently, the static situation of two athletes competing to
finish first does occur, but not as often as we are led to believe and definitely not as
independently as we may believe.
When corporations try to deliver innovative products first, the true winner is rarely
the one that finishes first in the sense of time. In fact, according to Allen Weiss, a
marketing professor at USC’s Marshall School of Business (Bird, 2001), studies show
that 47 per cent of all market pioneers fail and out of those that do not fail, only 11 per
cent maintain a market leader position several years later. Indeed, in some cases, there
is a latecomer advantage. In the steel industry [for example, Gerschenkron (1962)], a
latecomer has the advantage of not having to invest in less productive facilities and
obsolete technology, which pioneers must do to build the industry from scratch. The
latecomer can therefore leapfrog directly to the state-of-the-art in economically sized
facilities. This process has been confirmed by others later in other industries and
countries (Shin, 1995). In some cases, there are clear first-mover advantages, however.
What makes first-mover advantages real are the dynamics of the industry and the
possibility of protecting intellectual property rights (IPR). If the industry requires a critical
mass of some sort and there are substantial switching costs, it can be advantageous to
be a first-mover provided that the critical mass is obtained and switching costs persist,
such as Google and Amazon.
Adaptation and coevolution can also be fostered within corporations. Agile
corporations typically “[. . .] move from economies of scale to economies of scope”
(Sharifi and Zhang, 1999). Toyota, for example, has done this for years; “[. . .]
workplaces are surrounded by charts showing how their output compares with that of
other plants and competitors” (Maguire and McKelvey, 1999). This approach is
classified as adaptive tension, but all adaptation, coevolution and learning are full of
tension. In fact, Edgar Schein views the whole process of learning as filled with “guilt
and anxiety,” “fundamentally coercive” and “[. . .] that learning only happens when
survival anxiety is greater than learning anxiety” (Coutu, 2002). The logical
consequences of this are many, including that the popular notion of learning being fun
is fundamentally flawed.
The purpose here is not to dispel the idea of competition, but to put it in a more correct
context. Head-on competition occurs, but adaptation and coevolution are the norm. The major
lesson for CRM is that we adapt and coevolve with the risks, making them even harder to
identify.

3.7 Resultant or emergent?


“Results” must be one of the most frequently used words in society, and it relates to the
technical term “resultant.” Emergent is another such technical term, for which there is no
societal equivalent as of today. The term emergent is, however, not derived from complexity
theory but was coined by the English philosopher G. H. Lewes in 1875 (Goldstein, 1999). By
building on John Stuart Mill’s earlier differentiation of types of causations, Lewes explained
the difference between a resultant and his new concept of an emergent in the context of
chemistry as:
[. . .] although each effect is the resultant of its components, we cannot always trace the steps of
the process, so as to see in the product the mode of operation of each factor. In the latter case, I
propose to call the effect an emergent. It arises out of the combined agencies, but in a form which
does not display the agents in action [. . .] Every resultant is either a sum or a difference of the co-
operant forces [. . .] [and] is clearly traceable in its components [. . .] the emergent [. . .] cannot be
reduced either to their sum or their difference.
The process of bringing forth an emergent is called emergence, which is defined as “the
arising of novel and coherent structures, patterns, and properties during the process of self-
organization in complex systems” (Goldstein, 1999). Emergence is a hot topic in the
literature, with an entire journal – Emergence – dedicated to the topic. Typically, the more
old-fashioned thinking about results is discarded because in complex systems, most
phenomena are emergent.
The importance of this is hard to overstate when it comes to CRM. Complex risks are by
nature emergent, and these risks lurk below our sensory horizon, as it were. Their
emergence comes from many interactions of small elements that in isolation can be almost
harmless, producing weak signals, but their interaction in sum at the right time can be
cataclysmic.
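Emergence can be illustrated with an elementary cellular automaton such as Rule 30, a standard example from complexity theory (an illustration, not from this paper): each cell follows a trivial local rule, yet the row as a whole develops coherent, complex patterns that cannot be read off from the rule itself – they are "not reducible" to the components, in Lewes's sense.

```python
RULE = 30  # the local update rule, encoded as 8 output bits (Wolfram numbering)

def step(cells):
    """Update each cell from its left/self/right neighbours (wrap-around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # a single "on" cell: the weakest possible signal
for _ in range(12):
    row = step(row)
    print("".join("#" if c else "." for c in row))  # a complex triangular pattern
```

The growing, irregular triangle that appears is an emergent: nothing in the eight-bit rule "displays the agents in action," yet the weak initial signal produces a rich global structure.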
From this discussion, we are now able to make some remarks concerning CRM and how
research on developing satisfactory methods should proceed to take complexity into
account; these are provided in the next section.

4. Some closing thoughts about managing complex risks


It is clear that complexity theory must be the starting point if we are to effectively establish
CRM. Traditional approaches to risk management will not suffice. The insurance industry,
which trades in risks, has so far not been inventive, if we are to believe The Economist
(2019a). No insurer ranks among the top 1,000 public corporations by amount invested in
research and development, but reinsurers have done better, inventing “parametric
insurance.” Instead of compensating for losses after an event, policies pay out when certain
thresholds for certain parameters are reached, similar to derivatives. This has been
available since the 1990s, but still there have been huge losses, so there is room for
improvement.
Fresh thinking is required, and one of the most challenging aspects of complex risks is
that they can emerge unnoticed. This means that we must identify the weak signals, which
is one of the areas where risk management fails today. This is because such approaches rely
on probabilities and impact guesstimates/estimates. The result is that they tend to put a
premium on what is clear and identifiable. Weak signals will probably never appear on such
lists. Thus, we need a different approach.
One promising approach in this regard is the scenario planning approach developed by
Royal Dutch Shell and applied with great success in the early 1970s. They were the only oil
major prepared for the energy crisis of 1973. The strength of this approach is that it focuses
on trends instead of concrete signals of possible events. Trends are normally easier to
forecast, whereas the timing of possible events is impossible to forecast. Furthermore, such
events can be HILP events, and understanding the low probabilities of such events is very
difficult even for experts, as Kunreuther et al. (2001) assert. Unfortunately, with low
probabilities being so difficult to comprehend, there is a real risk that the scenario planning
approach will also fail.
To allow the pricing of risks, as the insurance industry requires, there are certain
requirements irrespective of whether we want to analyze trends or possible events. Indeed,
there is only one language with absolute correctness and that is mathematics (Vygotsky,
1988), so quantitative measures of risk and uncertainty are required for pricing. As argued
in this paper, none of the current quantitative approaches is sufficiently reliable concerning
complex risks, and the qualitative approaches give too many different answers, as
exemplified by the ESG discussion.
It is difficult to see how this can be remedied. With only weak signals to work with, the
data will either be short on relevance or have to be truly massive – we are talking big
data capturing large parts of society. While this will probably become technically possible
within relatively few years, the EU General Data Protection Regulation (GDPR) and similar
legislation will stand in the way. Therefore, any development of quantitative approaches
must look seriously into data security.
Qualitative approaches have the advantage of allowing human intuition, knowledge,
experience, etc. and they can be assisted to ensure logical consistency (Emblemsvåg, 2010).
This approach can serve well in decision-making because the important distinction there is
not quantitative versus qualitative analyses but between absolute and relative measures.
This brings us to the realization that currently, there are no suitable approaches for
managing complex risks. Worse, it is difficult to conceive of a workable approach that can
be developed from the current base of risk management tools. The upside is that there is
room for innovation.
In the meantime, the most prudent approach seems to be to learn another lesson
from nature – use buffers. Indeed, one of the preeminent investors of all time – Warren
Buffett – claims that the three most important words in investing are “margin of safety.”
This is the insight he got from another great investor, Benjamin Graham, according to
Hagstrom (2005); see his seminal books such as Graham (2005) and Graham and Dodd (1951).
Thus, it might be that the whole approach of trying to estimate probabilities of
occurrence is fundamentally flawed for complex risks, as demonstrated in one HILP
case (Emblemsvåg, 2008). Maybe a more useful approach would be to simply assume
that major, destructive events will happen, and then use risk management approaches
to analyze what will happen thereafter? This would essentially be a type of stress-testing
of whatever is to be insured.
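A minimal sketch of this stress-testing idea, with entirely hypothetical figures and function names: instead of estimating the probability of a disaster, assume the worst listed scenario occurs and check whether the object survives with its margin of safety intact.

```python
def survives_stress(asset_value, loss_scenarios, buffer_ratio=0.5):
    """Assume the worst listed disaster scenario occurs (no probabilities
    involved), then require that the remaining value still covers the chosen
    margin of safety (the buffer). All inputs are hypothetical."""
    worst_loss = max(loss_scenarios)
    remaining = asset_value - worst_loss
    return remaining >= buffer_ratio * asset_value

# A hypothetical portfolio worth 100, stressed against assumed disaster losses.
print(survives_stress(100.0, [10.0, 25.0, 40.0]))  # True: 60 left >= 50 buffer
print(survives_stress(100.0, [10.0, 25.0, 70.0]))  # False: 30 left < 50 buffer
```

Note that no probability of occurrence appears anywhere: the only judgments required are the scenario impacts and the size of the buffer, which is the essence of the margin-of-safety approach.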

Note
1. According to Swiss Re Institute, Sigma 2/2019.

References
Admati, A.R. and Hellwig, M. (2013), The Bankers’ New Clothes: What’s Wrong with Banking and What
to Do about It, Princeton University Press, Princeton, NJ.
Backlund, F. and Hannu, J. (2002), “Can we make maintenance decisions on risk analysis results?”,
Journal of Quality in Maintenance Engineering, Vol. 8 No. 1, pp. 77-91.
Beasley, M.S., Branson, B.C. and Hancock, B.V. (2010), COSO’s 2010 Report on ERM: Current State of
Enterprise Risk Oversight and Market Perceptions of COSO’s ERM Framework, NC State
University, Raleigh, NC.
Begun, J.W. (1994), “Chaos and complexity: frontiers of organization science”, Journal of Management
Inquiry, Vol. 3 No. 4, pp. 329-335.
Bird, J.B. (2001), “First-mover advantage: myth or reality?”, The McCombs School of Business
Magazine, The McCombs School of Business at the University of Texas at Austin, Austin,
TX.
Bonabeau, E. (2004), “The perils of the imitation age”, Harvard Business Review, Vol. 82 No. 6,
pp. 45-54.
Callon, M. and Latour, B. (1981), “Unscrewing the big leviathan”, in Knorr-Cetina, K.D. and Mulkay, M. (Eds),
Advances in Social Theory and Methodology, Routledge and Kegan Paul, London, pp. 275-303.
Capra, F. (1996), “The web of life”, First Anchor Books Edition ed, Anchor Books, Doubleday,
New York, NY.
Chatzkel, J. (2001), “A conversation with Jonathan Low”, Journal of Intellectual Capital, Vol. 2 No. 2,
pp. 136-147.
Coleman, H.J. Jr. (1999), “What enables self-organizing behavior in business”, Emergence, Vol. 1 No. 1,
pp. 33-48.
COSO (1992), “Internal control – integrated framework (2 volumes)”, available at: www.aicpastore.com,
Committee of Sponsoring Organizations of the Treadway Commission (COSO).
COSO (2004a), “COSO enterprise risk Management – integrated framework”, available at: www.
aicpastore.com, Committee of Sponsoring Organizations of the Treadway Commission (COSO).
COSO (2004b), “COSO enterprise risk management – integrated framework: application techniques”,
available at: www.aicpastore.com, Committee of Sponsoring Organizations of the Treadway
Commission (COSO).
COSO (2017), “COSO enterprise risk management – integrating with strategy and performance”,
available at: www.aicpastore.com, Committee of Sponsoring Organizations of the Treadway
Commission (COSO).
Coutu, D.L. (2002), “Edgar H. Schein: the anxiety of learning”, Harvard Business Review, Vol. 80 No. 3,
pp. 100-106.
Davenport, T.H., Delong, D.W. and Beers, M.D. (1998), “Successful knowledge management projects”,
Sloan Management Review, Vol. 39 No. 2, pp. 43-57.
De Geus, A.P. (1988), “Planning as learning”, Harvard Business Review, Vol. 66, pp. 70-74.
Dent, E.B. (1999), “Complexity science: a worldview shift”, Emergence, Vol. 1 No. 4, pp. 5-19.
Emblemsvåg, J. (2001), “Activity-based life-cycle costing”, Managerial Auditing Journal, Vol. 16 No. 1,
pp. 17-27.
Emblemsvåg, J. (2003), Life-Cycle Costing: Using Activity-Based Costing and Monte Carlo Methods to
Manage Future Costs and Risks, John Wiley and Sons, Hoboken, NJ.
Emblemsvåg, J. (2008), “On probability in risk analysis of natural disasters”, Disaster Prevention and
Management: An International Journal, Vol. 17 No. 4, pp. 508-518.
Emblemsvåg, J. (2010), “The augmented subjective risk management process”, Management Decision,
Vol. 48 No. 2, pp. 248-259.
Emblemsvåg, J. and Endre Kjølstad, L. (2002), “Strategic risk analysis – a field version”, Management
Decision, Vol. 40 No. 9, pp. 842-852.
Emblemsvåg, J. and Bras, B. (1998), “Financial analysis, critical assumption planning and uncertainty
in product development”, 1998 International Conference on Achieving Excellence in New Product
Development and Management, Atlanta, GA.
Flaherty, J.E. (1999), Peter Drucker: Shaping the Managerial Mind, Jossey-Bass, San Francisco.
Gerschenkron, A. (1962), Economic Backwardness in Historical Perspective, Harvard University Press,
Cambridge, MA.
Goldin, I. and Kutarna, C. (2017), “Risk and complexity”, Finance and Development, pp. 46-49.
Goldstein, J. (1999), “Emergence as a construct: history and issues”, Emergence, Vol. 1 No. 1, pp. 49-72.
Graham, B. (2005), The Intelligent Investor, HarperCollins Publishers, New York, NY.
Graham, B. and Dodd, D.L. (1951), Security Analysis: Principles and Technique, 3rd ed., McGraw-Hill
Book Company, London.
Hagstrom, R.G. (2005), The Warren Buffett Way, John Wiley and Sons, Hoboken, NJ.
Hodgkinson, G.P., Bown, N.J., Maule, A.J., Glaister, K.W. and Pearman, A.D. (1999), “Breaking the frame:
an analysis of strategic cognition and decision making under uncertainty”, Strategic
Management Journal, Vol. 20 No. 10, pp. 977-985.
Horgan, J. (1995), “From complexity to perplexity”, Scientific American, Vol. 272 No. 6, pp. 104-109.
ICAEW (1999), Internal Control: Guidance for Directors on the Combined Code (Turnbull Report),
Institute of Chartered Accountants in England and Wales (ICAEW), London.
Ilachinski, A. (1996), Land Warfare and Complexity, Part I: Mathematical Background and Source Book
(U), Center for Naval Analyses, Alexandria, VA.
James, D.N. (2002), “The trouble I’ve seen”, Harvard Business Review, Vol. 80 No. 3, pp. 42-49.
Jensen, M.C. (2001), “Corporate budgeting is broken – let’s fix it”, Harvard Business Review, Vol. 79
No. 10, pp. 94-101.
Johnson, S. and Kwak, J. (2010), 13 Bankers: The Wall Street Takeover and the Next Financial
Meltdown, Pantheon E-books, New York, NY.
Joia, L.A. (2000), “Measuring intangible corporate assets: linking business strategy with intellectual
capital”, Journal of Intellectual Capital, Vol. 1 No. 1, pp. 68-84.
Jones, M.E. and Sutherland, G. (1999), Implementing Turnbull: A Boardroom Briefing, The Center for
Business Performance, The Institute of Chartered Accountants in England and Wales (ICAEW),
City of London.
Kelly, K. (1995), Out of Control: The New Biology of Machines, Social Systems, and the Economic World,
Perseus Books, New York, NY.
Kunreuther, H., Novemsky, N. and Kahneman, D. (2001), “Making low probabilities useful”, Journal of
Risk and Uncertainty, Vol. 23 No. 2, pp. 103-120.
Linneman, R. and Klein, H. (1983), “The use of multiple scenarios by US international companies - a
comparison study”, Long Range Planning, Vol. 16 No. 6, pp. 94-101.
Loren, G. (2003), Why Budgeting Kills Your Company, BetterManagement.com.
McNeill, D. and Freiberger, P. (1993), Fuzzy Logic, Simon and Schuster, New York, NY.
Maguire, S. and McKelvey, W. (1999), “Complexity and management: moving from fad to firm
foundations”, Emergence, Vol. 1 No. 2, pp. 19-61.
Mandelbrot, B.B. (1980), “Fractal aspects of the iteration of z → λz(1 − z) for complex λ and z”,
Nonlinear Dynamics, Annals of the New York Academy of Sciences, Vol. 357 No. 1, pp. 249-259.
March, J.G. (1994), A Primer on Decision Making: How Decisions Happen, The Free Press, New York, NY.
Nason, R. (2017), It’s Not Complicated: The Art and Science of Complexity in Business, University of
Toronto Press, Toronto.
National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling (2011), Deep Water:
The Gulf Oil Disaster and the Future of Offshore Drilling - Report to the President, United States
Government Printing Office, Washington, DC.
Pierce, E.M. and Goldstein, J. (2016), “Moving from enterprise risk management to strategic risk
management: examining the revised COSO ERM framework”, 14th Global Conference on
Business and Economics, Said School of Business, Oxford.
Pollitt, D. (2003), “How top executives can hit the ground running: making the most of the first 90 days”,
Human Resource Management International Digest, Vol. 11 No. 7, pp. 34-36.
Power, M. (2007), Organized Uncertainty: Designing a World of Risk Management, Oxford University
Press, Oxford.
Roy, R.K. (2001), Design of Experiments Using the Taguchi Approach: 16 Steps to Product and Process
Improvement, John Wiley and Sons, New York, NY.
Sachs, R. and Wade, M. (2013), Managing Complex Risks Successfully, Münchener Rückversicherings-
Gesellschaft (Munich Re), Munich.
Schwartz, P. (1997), The Art of the Long View, John Wiley and Sons, New York, NY.
Sharifi, H. and Zhang, Z. (1999), “A methodology for achieving agility in manufacturing
organizations: an introduction”, International Journal of Production Economics, Vol. 62
Nos 1/2, pp. 7-22.
Shin, J.-S. (1995), Catching up, Technology Transfer and Institutions: A Gerschenkronian Study of Late
Industrialization from the Experience of Germany, Japan and South Korea with Special Reference
to the Iron and Steel Industry and the Semi-Conductor Industry, Darwin College, Cambridge
University.
Smith, E.B. (2004), What Puts the CFO in the Middle of Scandals?, USA TODAY, p. 5.
Smith, D. and Irwin, A. (2006), “Complexity, risk and emergence: elements of a ‘management’ dilemma”,
Risk Management, Vol. 8 No. 4, pp. 221-226.
Taleb, N.N. (2007), The Black Swan: The Impact of the Highly Improbable, Allen Lane, London.
The Economist (2001), “Chief executives: churning at the top”, The Economist, Vol. 358 No. 8213,
pp. 75-77.
The Economist (2003), Tough at the Top: A Survey of Corporate Leadership, The Economist, London.
The Economist (2004), Living Dangerously: A Survey of Risk, The Economist, London.
The Economist (2018), “Briefing; the financial crisis”, The Economist, Vol. 428 No. 9108, pp. 19-21.
The Economist (2019a), “Run for cover; innovation in insurance”, The Economist, Vol. 432 No. 9152,
pp. 55-56.
The Economist (2019b), “Poor scores; ESG investing”, The Economist, Vol. 433 No. 9172, pp. 63-64.
The World Economic Forum (2016), The Global Risks Report 2016, The World Economic Forum, Geneva.
Vygotsky, L.S. (1988), Thought and Language, Kozulin, A. (Ed.), The MIT Press, Cambridge, MA.
Wack, P. (1985), “Scenarios: uncharted waters ahead”, Harvard Business Review, Vol. 63 No. 5,
pp. 73-89.
Weick, K.E. (1976), “Educational organizations as loosely coupled systems”, Administrative Science
Quarterly, Vol. 21 No. 1, pp. 1-19.
Wheatley, M.J. (1999), Leadership and the New Science: Discovering Order in a Chaotic World, Berret-
Koehler Publishers, San Francisco.
Corresponding author
Jan Emblemsvåg can be contacted at: emblemsvag@yahoo.com
