
Complexity measures in decomposable structures

Francesca Gino1

Sant’Anna School of Advanced Studies


Piazza Martiri della Libertà, 33, 56127 Pisa, Italy
fgino@sssup.it

First Draft

Paper prepared for the IInd EURAM (European Academy of Management) Conference on “Innovative Research in
Management”, May 9-11, 2002, Stockholm, Sweden

Abstract

In this paper we present some measures of complexity of decomposable structures. The literature on
complexity is reviewed and then the distinction among different measures of complexity is discussed. First, a broad definition of complex systems is provided and the main characteristics of such systems are
described briefly. In particular, a complex system is defined as a network of many highly interactive and
interrelated elements, each performing its own functions. The elements are combined in such a way that
each contributes to the behaviour of the structure. Moreover, the system components are characterised by
interdependencies that make the contribution of each element to the overall performance (or behaviour)
dependent on the contribution of the others. In general, systems are characterised by different degrees of complexity and also by different degrees of decomposability. We also discuss the possibility of decomposing
such complex systems, recalling the distinction that Simon (1969) made between decomposable, non-
decomposable and nearly decomposable structures. Then, two particular kinds of measures are considered.
The first group is based on information theory and deals with the concept of entropy. This index refers to the
amount of information that we need in order to understand what is happening within a system. We claim
that the entropy measure focuses only on the information complexity of a system without taking account of
the interdependencies existing among the components. The second group of indicators, instead, explicitly
considers this feature. Indeed, the group is composed of indexes - which we call fitness measures - based on
the number of relevant relationships within a system. The basic features of Kauffman’s NK model are briefly
reviewed. Finally, we discuss the usefulness of both entropy and Kauffman’s K value as appropriate
measures of the complexity of decomposable structures.

1 We thank Michele Berardi, Roberto Gabriele, André Lorentz and Sandro Sapio for useful comments and suggestions on

earlier drafts. All remaining errors are the author’s responsibility.

1 Introduction

The concept of complexity appears in various fields, spanning both natural systems (biological, physical, and chemical) and man-made ones (computerised systems or organisational structures).
The aim of this paper is to illustrate the indicators that can be used in order to measure the degree of
complexity of decomposable structures. We will also justify the use of such measures in different domains.
For example, in manufacturing, a measure of the complexity of a production system aims at capturing aspects like interdependencies, co-ordination, specialisation and flexibility. Such a measure is worthwhile both as a guideline in the design of production systems and as an index of the level of order that characterises a production process.
The paper is organised as follows. In Section 2 a broad definition of complex systems is provided and the
main features that make the control of such systems difficult are then briefly described. Indeed, defining complexity is essential before asking how to manage and measure it (Axelrod, 2000). Section 3 discusses the possibility of decomposing such complex systems, recalling the distinction that Simon (1969) made between decomposable, non-decomposable and nearly decomposable structures. “By
separating interdependent elements into different sub-problems, a problem whose complexity is far beyond
available computational resources is reduced to smaller sub-problems which can be handled, but it generally
fails to provide the optimal solution” (Frenken et al., 1999, pg. 3). Section 4 briefly explores the origins of the
study of complexity. Then, in Section 5 two particular kinds of measures are considered. The first group is
based on information theory and deals with the concept of entropy. The latter refers to the amount of
information that we need in order to understand what is happening within a system. We claim that the
entropy measure focuses only on the information complexity of a system without taking account of the
interdependencies existing among the components. The second group of measures, instead, explicitly
considers this feature. It is composed of indexes - which we call fitness measures - based on the number of
relevant relationships within a system. In presenting this second ‘family’ of indicators, we briefly review the
basic features of Kauffman’s NK model and we discuss its main limits and advantages. Despite some
cautions, “Kauffman’s work and complexity models more generally have properties that deserve the
attention of economists interested in organisational issues” (Westhoff et al., 1996, pg. 22). By means of some examples, we shed light on the possible applications of such a model, referring in particular to production systems. Finally, some conclusions are discussed in Section 6.

2 The main features of complexity

A complex system can be defined as a network of many highly interactive and interrelated elements, each
performing its own functions.2 The elements are combined in such a way that each contributes to the
behaviour of the structure and to its performance. “In such systems, the whole is more than the sum of the
parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties
of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole.”
(Simon, 1962, pg. 468).
In general, systems are characterised by different degrees of complexity. There are structures consisting of
only a few components that interact either minimally or linearly (Bechtel, 1993). We call them simple structures. An intuitive example is the following. Imagine a production system in which the co-ordination structure is imposed by technical or organisational constraints and one controls the flow. In this case, we can describe
the events occurring within the system straightforwardly: first we look at what is done by one part and then
how this affects the next. However, there are also much more complex structures in which one component
may affect and be affected by several others. As an example, imagine a production system in which the
structure of the connections among the units is variable. In this case, as the level of complexity of the system
increases it may become more difficult to highlight interdependencies among components. “In such cases,
attempting to understand the operation of the entire machine by following the activities in each component
in a brute force manner is liable to be futile” (Bechtel, 1993, pg. 18).
Let us now briefly illustrate the aspects that make the control of complex systems difficult, drawing examples mainly from complex production processes (Gaio et al., forthcoming). We have to take account of the following features.
Interdependence. The units or the subsystems are connected, that is, the effect of a control action on a unit depends on the simultaneous actions taken by other units to control their own output. Managing
interdependencies can be either difficult or easy since different types of interdependencies exist. A relatively
simple case is the one in which the relations between a unit and a group of others are additive and separable,
for example with a linear relationship. In such cases, the output of the considered unit depends on the
algebraic sum of the levels of a series of input units, each multiplied by a certain coefficient. The end product
changes proportionally according to the share with which every input enters into the finished good.
Therefore, the (activity of the) last production unit in the process depends on the (activity of the) units that
produce components. This linear relation is not difficult to represent. In other cases, the relationships of interdependence among the units are more complex and non-linear. A frequent case in manufacturing is the one in which the output of a unit depends on the input that enters the process in the smallest amount. This could happen, for example, in an assembly line in which fixed proportions of inputs must enter into the output. For the sake of simplicity, imagine that one unit of each input is necessary in order to obtain the assembled product. In such a case, it would not be possible to produce a number of finished goods greater than the smallest quantity among those of the components entering the process. An individual initiative by one production unit to increase its output would be meaningless if not co-ordinated with those of all the others.
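To make the distinction concrete, the following minimal sketch (in Python, with illustrative input quantities and coefficients of our own choosing, not taken from any real process) contrasts the additive, separable case with the fixed-proportions case just described:

```python
import numpy as np

# Illustrative, assumed numbers: output levels of three upstream units and
# the share with which each input enters the finished good.
inputs = np.array([120.0, 90.0, 100.0])
weights = np.array([0.5, 0.3, 0.2])

# Additive, separable (linear) interdependence: the downstream unit's output
# is the weighted algebraic sum of the upstream levels.
linear_output = weights @ inputs

# Fixed-proportions (assembly-line) interdependence: one unit of each input
# is needed per finished good, so the scarcest input binds the output.
assembled_output = inputs.min()

print(f"linear case: {linear_output:.1f}, fixed proportions: {assembled_output:.1f}")
```

In the linear case a unilateral increase in any input raises the output proportionally; in the fixed-proportions case it changes nothing unless all inputs increase together, which is exactly why un-co-ordinated individual initiatives are meaningless there.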

2 A complete definition of what complex systems are is provided by Langlois (1999), who recalls the meaning of

complexity according to both Hayek and Simon.

Number of states that can be assumed by the system units. The higher the number of states the system units can assume, the more difficult it is to represent the system and to build a model of it. Two aspects of the states of the units are often distinguished (Di Bernardo, Rullani, 1990): the variety,
when a production unit is predisposed in order to produce various types of output, and the variability,
when the unit can be predisposed in order to produce various amounts of the same output. These different
aspects of the number of system states can be useful in representing the space of configurations and the ways in which the system adapts to certain environmental conditions. By associating interdependence with the number of states, it turns out that an intervention on a variable controlled by a unit or a subsystem can lead, through a network of interdependencies, to states which can be very far from the initial one. This difficulty in representing the effects of an intervention on systems that can assume several configurations is one of the greater problems that must be faced in the management of complex systems.
Uncertainty. Every system responds to external conditions taking into account its own state. In complex production systems, in most cases neither the external conditions to which the system must respond nor the states themselves are exactly predictable. Indeed, it is difficult to know exactly either the market demand or the machine states before orders are actually collected or the absence of breakdowns has been verified. In other words, not only is it impossible to represent precisely all inputs that cannot be manipulated, but some of them are not even predictable. As a consequence, a complex system is
characterised not only by a great number of interdependencies and possible configurations but also by the
fact that it is not easy to know which configuration is to be preferred before non-manipulable variables show
their own values.
Irreversibility. This last feature means that a cost connected to the change of state exists. Modifying the operating conditions by acting on manipulable inputs has a cost: for example, equipping a machine to complete certain operations requires time and specialised workers, and often also produces reject items when machines are started up again. The role of irreversibility can be better understood in association with uncertainty. When there is no irreversibility, that is, when the times and costs of changing states are null, a production system could wait until it observes the value of all relevant non-modifiable variables before acting on its own control variables. In such a case, if the model of the system were not able to give a complete representation of the system itself, we would still have some elements of complexity. For example, a complete representation may not be attainable because of the interdependencies and the difficulty in knowing the effects of changes in the state of one unit. Nevertheless, the situation would surely be easier than one in which waiting to observe the incoming variables involves a cost that is sometimes very high. Therefore, many decisions have to be made before knowing the conditions to which the system must respond (that is, before discovering the constraints the system has to face), so as to cover a certain time horizon (called the irreversibility horizon). This lets the system respond in time. However, an incorrect prediction of the external conditions would lead the system to be configured in the wrong way and would imply costs in order to bring the system back (or adjust it) to the desired conditions. Obviously, the higher the irreversibility, the greater the discontinuities in the performance of a production system in the presence of uncertainty.3
Degree of decomposability. As Frenken et al. (1999) rightly state, the complexity of a system is also related to its degree of decomposability, which in turn depends on the architecture of the system. Complexity measures should also take account of this aspect and, as a consequence, relate to the decomposability of the specific architecture of the system. The paradigm of decomposability is illustrated in detail in the next section. As for the measures of complexity that have been proposed in the literature with respect to this feature, in one of his works Page (1994) presented two indicators (which he called cover and ascent size) based on the decomposability of the search problem (of maxima) on a fitness landscape.4

3 Decomposable and nearly decomposable structures

The need for decomposition


Decomposition can be defined as a way to manage complexity (Langlois, 2000), that is, a way to govern interdependencies among parts within a system.5 It allows the subdivision of the overall structure (or task) so that the latter becomes manageable and the system intelligible. Through decomposition, reciprocal influences among components (or decision variables, if we refer to the case of a complex problem) are reduced. This result can be obtained directly by separating “sets of variables with the higher internal
reduced. This result can be obtained directly by separating “sets of variables with the higher internal
interdependence and the lower connection with other system control variables: isolated subsystems can be
left to specialised decision units without affecting the final outcome of the system. This is a process of
decomposition in the sense of Simon (1996)” (Devetag and Zaninotto, 2002). In general, the difficulty of
isolating independent contributions of several system components from their co-ordinated output is due to
the interaction among them.
Von Hippel (1990) stated the problem of decomposition (or task partitioning, as he called it) in the case of innovation processes, where problem solving has to be attributed to several specialised research teams in a way that reduces interdependencies among units. One of his examples is very intuitive. Imagine you have to design an aircraft. This innovation process can be decomposed into two component tasks (each to be undertaken by a different firm) in one of the following alternative ways. According to the first partitioning, one firm designs the airplane body and the other designs the engine. According to the alternative partitioning, instead, one firm is responsible for the front half of the aircraft body and engine while the other is responsible for the back half of each. The difference between the two partitionings stems from the interdependence of the specified tasks. In the latter case, a higher level of problem solving is required, since it is not possible to consider the two halves of the aircraft independently; in the former, the design of a complete aircraft engine is much less dependent on the design of a complete aircraft body.

3 What is uncertain is the external environment.


4 See Page (1994) and also Frenken et al. (1999) for further details on these two measures of the complexity of a problem.
5 According to Bechtel and Richardson’s analysis (Bechtel, 1993) decomposition is the attempt to differentiate

components of a system.

As this example shows, the process of decomposition aims at having many interdependencies inside tasks (or components) and none across them.6 The example also highlights the fact that interdependencies among states, components or tasks require a co-ordination activity. Moreover, they are responsible for a great irregularity in the results obtained when a unit independently changes its own state. For example, improving the power of the engine may decrease the overall performance of the airplane, since the other components have not been adjusted for this change (Marengo et al., 2000). This amounts to saying that, in such cases, it is difficult to reach the optimal solution through local and independent adjustments. Some models stemming from biology have recently been proposed and widely adopted to analyse the results of a system composed of interdependent elements. In particular, in Section 5.2 we present one of these models, known as the NK model (Kauffman, 1993).

Complexity and decomposable structures


The notion of decomposability stems from Simon (1962). In his discussion on complex systems, he
distinguishes among decomposable, non-decomposable and nearly decomposable structures.7
As should emerge from what we have argued in the previous paragraph, decomposition assumes that an activity of a whole system is the product of a set of subordinate functions performed in the system (Bechtel, 1993). Decomposable structures (or problems) are those in which components are minimally interactive and interaction can be handled in an additive or perhaps linear way. The components perform independent functions and send their outputs to other parts. Thus, in the presence of decomposability, the value of overall performances (or, respectively, of solutions) depends very little on the interactions among components. This means that by optimising each single component we reach the (global) optimal solution. Therefore, systems are decomposable if they can be divided into subsystems, each of which is independent of the others.8 “The
extent to which the assumption of decomposability is realistic can be decided only a posteriori, by seeing
how closely we can approximate system behaviour by assuming it. We may be led to erroneous
explanations, but it may be the only way to begin the task of explaining and understanding complex
systems. The failure of decomposition is often more enlightening than its success: it leads to the discovery of
additional important influences on behaviour” (Bechtel, 1993).

6 In contrast with the tradition of Smith, Young and Stigler, Devetag and Zaninotto rightly claim that “task

decomposition is artificially generated and there can be potentially infinite patterns of task decomposition for a given
product, some of which may be better than others. This implies two things: first, the design of decomposition patterns
and task separation is a complex activity requiring extensive investment in knowledge, when a firm may not be willing
to make freely disposable to the market. Second, the result of this design strategy, in the presence of the inevitable limits
to knowledge and computational capabilities that they have highlighted, will necessarily be an imperfect decomposition,
with residual interdependencies that will have to be managed and correlated ‘along the way’” (Devetag and Zaninotto,
2002).
7 As will be clear from the content of the following sections, Kauffman’s NK model can be interpreted as a mathematical construction that parameterises Simon’s classification (Nickerson and Zenger, 2001).
8 In Kauffman’s NK model, systems characterised by a low degree of interdependence among the components give rise to relatively smooth fitness landscapes with few peaks.

Nearly decomposable structures,9 instead, are those characterised by weak but not necessarily negligible interactions. In such cases, the value of overall performances depends strongly on the interactions among system components. Moreover, the independent optimisation of each subsystem rarely leads to the global optimum.
Thus described, the concept of decomposability appears to be strictly related to that of modularity.10 We claim that both decomposability and modularity, in turn, are strongly related to the paradigm of complexity. Indeed, as we have already said, they make complexity manageable (Baldwin and Clark, 2000).
What we have said raises some questions. Is it possible to decompose a system? And, in particular, are complex systems decomposable?
Simon (1969) argues that essentially all complex systems share the feature of having a nearly decomposable architecture. Indeed, they are characterised by a structured ordering of successive sets of subsystems, that is, “they are organised into hierarchical layers of parts, parts of parts, parts of parts of parts and so on” (Egidi and Marengo, pg. 6). Bechtel and Richardson (1993), instead, claim that the assumption according to which nature is decomposable and hierarchical is sometimes false.11
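The contrast between decomposable and nearly decomposable architectures can be sketched with a simple interaction matrix; the block sizes and coupling strengths below are illustrative assumptions of ours, not values drawn from Simon:

```python
import numpy as np

# Entry (i, j) is the strength with which element i depends on element j.
strong, weak = 1.0, 0.05
block = strong * np.ones((3, 3))

# Decomposable: two subsystems with no cross-subsystem interaction.
decomposable = np.block([[block, np.zeros((3, 3))],
                         [np.zeros((3, 3)), block]])

# Nearly decomposable: the same subsystems plus weak (but not negligible)
# cross-subsystem couplings.
nearly = np.block([[block, weak * np.ones((3, 3))],
                   [weak * np.ones((3, 3)), block]])

# Share of interaction lying outside the diagonal blocks: zero in the
# decomposable case, small but positive in the nearly decomposable one.
for name, m in [("decomposable", decomposable), ("nearly decomposable", nearly)]:
    off_block = m.sum() - 2 * block.sum()
    print(f"{name}: off-block share = {off_block / m.sum():.3f}")
```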

How can we measure the complexity of decomposable structures?


We suggest that decomposable systems can be characterised and analysed in terms of different parameters, which we aggregate under the term complexity measures. In particular, we consider:
- measures based on information theory;
- measures based on the number of relevant relationships within the system.
In principle, the measures should take account of the behaviour of the system in terms of the functions
performed by its parts and the interactions between these parts (Bechtel, 1993). As we will state in the next
sections, the entropy measure focuses only on the information complexity of a system without taking
account of the interdependencies existing among the components. Interdependencies are instead considered
by the fitness measures.
Before presenting the complexity indicators we have just mentioned, we briefly explore the origins of the sciences of complexity.

9 Near decomposability imposes less stringent limits, as Simon explains: “(1) In a nearly decomposable system, the short-

run behaviour of each of the component subsystems is approximately independent of the short-run behaviour of the
other components; (2) in the long run the behaviour of any one of the components depends in only an aggregate way on
the behaviour of the other components” (Simon, 1969, pg. 100).
10 The latter involves the assembly of products from a set of modules: it is an activity in which the structuring in

independent modules takes place. Each module can be extremely complex internally, but externally it must have a set of clearly defined interfaces, which specify how that module can link to other modules. A module can be defined as a building block with a standardised interface (Mortensen and Andreasen, 2000). Thus, with modularity (or modularisation) we mean the decomposition of a product into building blocks (modules) with specified interfaces, driven by company-specific reasons (Erixon, 1998).
11 “There are clearly risks in assuming complex natural systems are hierarchical and decomposable” (Bechtel, 1993, pg.

27).

4 From entropy to complexity

Entropy is a term that comes from the study of thermodynamics. In the 19th century, scientists like Boltzmann, Carnot, Gibbs and Clausius were all working in this field. One result of their studies is known as the Second Law of Thermodynamics. In very simple and simplifying terms, this law says that everything in the universe seems to tend to move from states of order towards states of chaos. To be more precise, stored energy, whether it is stored as heat or as structure, tends to disperse through a system. Thus, things characterised by a certain level of energy (like heat, motion or structural bonds) will over time transfer their energy to less energetic things.12
All things in systems undergo entropy and become chaotic over time: molecules, people, machines, cities,
markets. In fact, entropy turns out to be a very powerful tool for understanding and predicting physical and
social phenomena.
However, some phenomena remain unexplained. For example, if we observe communities or economies we
notice that they go ‘from formless to form and from form to formless’. How can we explain this change in both
directions? That is: how can we explain the evolution of complex systems?
“Because organisms are not energetically closed systems, there is no way to deduce the direction, much less
the rate, of evolution from classical thermodynamic considerations. (…) In neither case does the entropy
change provide us with a guide to system behaviour” (Simon, 1962, pg. 471).
Simon (1962) clearly explained how complex systems evolve from simple systems, and he gave examples from different fields (from biology to human problem solving). He argued that the evolution from simple to complex is more rapid when there are stable intermediate forms.13 It is the information about the latter that guides the process of evolution, and not “free energy or negentropy from the sun” (Simon, 1962, pg. 473).
In the 20th century, several groups of theorists began exploring new disciplines to find possible answers to
this question. Such disciplines included Chaos Theory, Fractal Mathematics, Quantum Mechanics, biology
and eventually the study of Complexity. Their main task was to figure out what was wrong with other
foundational scientific paradigms such as Newtonian Mechanics and Darwinian Evolution.
One of the things they noticed was that, though systems did undergo entropy and things did tend towards chaos, the results were not necessarily the “bland” chaos that the Second Law of Thermodynamics predicted. Instead, systems began to “complexify”, their chaos manifesting itself as “active and energetic”.
The study of Complexity developed around this phenomenon, and it attempts to bring rigour to the social sciences by applying natural and physical sciences such as quantum physics, fluid dynamics and chaos theory to social and economic systems. What theorists interested in the study of Complexity discovered was that systems (living or social systems in particular) tended towards chaos but, as chaos increased to a certain threshold of complexity, new structures and forms would emerge from nowhere, as if the system spontaneously produced order out of chaos. This appearance of order is called “self organisation” or “emergence”.

12 This explains why, without some outside energy source (like an oven or a refrigerator), ice and apple pies tend towards room temperature over time.

Nature itself provides various examples of such “order for free” that at first seem counter-intuitive.
An important feature of the complexity approach is the acknowledgement that the dynamics of many different types of systems are characterised by self organisation, selection or adaptation, so that a common basic structure emerges even though the systems apparently differ. These similarities in the structure of systems can be exploited in order to transfer methods of analysis and knowledge from one field of interest to another. Therefore, the study of complexity is a multidisciplinary approach, which sheds light both on the general structure and on the behaviour of complex systems.

5 Complexity measures

We can distinguish two main groups of measures of complexity:


- the entropy measures;
- the fitness measures.
The first group of measures refers to the concept of entropy, which can be defined as a measure of the expected value of information. The entropy measure, therefore, focuses on the information side of complexity, that is, on the amount of information necessary for the management of a system.
The measures of fitness are introduced through the description of one of the most important models in complex systems analysis, Kauffman’s NK model (Kauffman, 1993).

5.1 Entropy measures

Entropy measures are the first group of measures of complexity we consider. The entropy indicates the expected amount of information, averaged over time, necessary to describe the state of a system. In other words, the entropy is essentially the amount of information that we need to obtain in order to understand what is happening in a system (for example, in a production process). As emerges from the mathematical properties of the entropy measure that we introduce in the next paragraph, the more complex a system is, the higher the entropy, since a greater amount of information is required.

The formula for entropy and its properties


Shannon (1948) was the first to introduce the measurement of the amount of information through entropy, with his work ‘A Mathematical Theory of Communication’. He borrowed the notion of entropy from thermodynamics in order to use it in the theory of communication and information that he developed. His idea was to use entropy as a measure of the amount of information passed from an emitter to a receiver. Moreover, Shannon refers to entropy as an index of the uncertainty level of a stochastic process.
13 If there are stable intermediate forms, the resulting complex system will be, using Simon’s words, hierarchic.

The theory of information, therefore, as developed by Shannon, provides a measure of how much information must be associated with a certain state of affairs.
Although the theory of information still constitutes a field of intense study and great interest today, its foundations are all contained in Shannon’s work. The entropy concept is defined formally as follows: given a set of events $E = \{e_1, e_2, \ldots, e_n\}$ and the a priori probabilities of these events $P = \{p_1, p_2, \ldots, p_n\}$, where $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$, the entropy function is

$$H = -\sum_{i=1}^{n} p_i \log(p_i) \qquad \text{(equation 1)}$$

where $0 \cdot \log 0 = 0$.
The entropy H defined by equation 1 has the properties of a measure; we recall only two of them.
• If all probabilities $p_i$ are equal ($p_i = 1/n$), then H is a monotonically increasing function of n. This means that, in the presence of equally likely events, as the number of possible events increases the possibilities of choice increase, that is, the uncertainty increases. Therefore, an industrial resource (e.g. a machine) that operates on many different products is characterised by a greater amount of information (or information complexity) than a resource that operates only on a single product.
• Entropy is an additive measure: the entropy associated with a system composed of independent units is the sum of the entropies associated with each unit (e.g. machine).14
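As a minimal illustration of equation 1 and of these two properties, the following sketch (assuming base-2 logarithms and probability vectors chosen only for the example) computes H and checks monotonicity and additivity:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H = -sum(p_i * log2(p_i)), with the 0*log(0) = 0 convention."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Property 1: with equally likely events, H grows monotonically with n.
for n in (2, 4, 8):
    print(f"n = {n}: H = {entropy(np.full(n, 1.0 / n)):.2f} bits")

# Property 2: additivity over independent units. The joint distribution of
# two independent units is the outer product of their marginals.
p_a, p_b = [0.5, 0.5], [0.25, 0.25, 0.25, 0.25]
joint = np.outer(p_a, p_b)
print(entropy(joint), entropy(p_a) + entropy(p_b))  # both equal 3.0 bits
```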

Properties of the entropy measure


In the theory of communication, information is a measure of the freedom of choice one has when a message is chosen. In the simplest cases, the amount of information is determined by the logarithm of the number of possible choices. Indeed, it is convenient to use logarithms. In particular, in Shannon’s formula of entropy the logarithm in base 2 is used, again for convenience (Shannon, 1948). Therefore, when there are only two alternatives, the information is proportional to the logarithm of 2 in base 2, which is equivalent to one unit of information, called a bit. In fact, when the logarithmic base is 2, the resulting units employed to measure information are called binary digits, or bits.15 The digits 0 and 1 represent the two stable positions (closed or open) of a device. The latter can store a bit of information, while N devices can store N bits, since the total number of possible states is $2^N$ and $\log_2(2^N) = N$.
When the entropy formula is used to measure the level of complexity of a production system, the latter can assume a number of stable and definable states greater than 2. As we have already said, the logarithmic base reflects the number of such states. If we use the bit as unit of measure and we are interested in the information complexity of a production system, then the formula has to be corrected by introducing a positive constant K. We obtain therefore:

14 The complexity of a production process can be computed assuming that machines are independent units. This assumption is due to the fact that the computation of such a measure is based on the scheduling of the process, that is, an a priori defined plan which lets one know, at every time t, in which state each machine is.
15 If base 10 is used (common or Briggs’ logarithm), the units are called decimal digits, while if the base is e (natural logarithm), the information units are called natural units.

$$H = -K \cdot \sum_{i=1}^{M} \sum_{j=1}^{S_i} p_{ij} \log_2(p_{ij}) \qquad \text{(equation 2)}^{16}$$

Equation 2 represents the entropy of a system composed of M machines (i.e. production units). Each machine i can assume $S_i$ different states, and a certain probability $p_{ij}$ is associated with every possible state j. In order to apply the formula, both properties mentioned above must hold.

An example: implications of the entropy measure when it is applied to production processes


The entropy measure focuses on the information side of complexity. Why is it used to measure the level of complexity of a production process? In manufacturing, complexity refers to the fact that systems are composed of a more or less high number of interdependent machines (and workers). Such production units are linked by different types of constraints, like the ones determined by sequence or by technology. Moreover, every machine is characterised by a certain number of operations that it is able to carry out. The set of such relations and units gives rise to structured processes with various degrees of complexity. The entropy is used to measure the degree of complexity of the system from the information point of view, that is, it gives an indication of the amount of information necessary to manage and control the system (in our example, different production processes).17 However, the entropy measure takes into account only two dimensions of complexity: the number of states and the probability associated with each state. Interdependence is not captured. Considering the conditional entropy can partially fill this gap: the latter allows us to grasp an aspect of interdependencies, but it does not take into account the difficulty of knowing the conditional probabilities.
Let us illustrate this concept with an example. Imagine a simple production system composed of three machines (production units) that operate sequentially. The system produces only four goods and every machine takes part in the production of each of the four products. This means that each production unit can be in one of four states according to the product it is working on, without taking into account idle times and breakdowns. Therefore, $S_i = 4$ for each machine. We assume that there is stochastic independence among the states and that all states are equally likely.18 Thus, since $p_{ij} = 1/4$, we obtain the following result:
$$H = -\sum_{i=1}^{3} \sum_{j=1}^{4} p_{ij} \log_2(p_{ij}) = -\sum_{i=1}^{3} \sum_{j=1}^{4} \frac{1}{4} \log_2\!\left(\frac{1}{4}\right) = \sum_{i=1}^{3} \sum_{j=1}^{4} \frac{1}{4} \cdot 2 = 3 \cdot 4 \cdot \frac{1}{4} \cdot 2 = 6$$

Thus, the amount of information necessary in order to describe the state of this simple production system is
equal to 6 bits.
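The computation can be verified in a few lines; this is simply the worked example above restated, with the constant K set to 1:

```python
import numpy as np

M, S = 3, 4                      # three machines, four equally likely states each
p = np.full((M, S), 1.0 / S)     # p_ij = 1/4 for every machine and state
H = -np.sum(p * np.log2(p))      # equation 2 with K = 1
print(H)                         # 6.0 bits
```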

16 Mathematically, in order to pass from base a to base b it is sufficient to multiply by $\log_b a$.
17 In the last decade, several measures based on entropy have been proposed in the operations management literature. In particular, it is possible to distinguish four approaches: Deshmukh’s (Deshmukh, 1992; Deshmukh, 1998); Frizelle’s (Frizelle, 1995); Karp and Ronen’s (Karp, 1992; Ronen, 1994; Ronen, 1988; Ronen, 1990; Ronen, 1991); and Yao’s (1985, 1990). A later approach is the one by Efstathiou et al. (Calinescu, 1998), which follows the path opened by Frizelle.
18 For example, because the amount of time required for working a job is the same no matter what the job is and what

machine we consider.

In order to reach this result, strong assumptions have been made. What happens if we remove them? That is: what happens if the states are not equally likely? Is it sensible to assume stochastic independence among the states?
Before exploring possible answers to these questions, we present the second group of complexity measures.

5.2 Fitness measures

In order to analyse the performance of a system composed of several interdependent units, some models deriving from biology have recently spread into economic analysis. The genetic structure of an organism can be thought of as a string of characters assumed by various genes, whose contributions to the final result, or fitness (which in a biological organism is measured in terms of adaptation to the environment), are interdependent. One of the most successful formulations is Stuart Kauffman’s NK model (Kauffman, 1993; Kauffman, 1996). The idea of a fitness landscape was introduced into the biological literature by Sewall Wright in 1932 (Levinthal, 1999). The landscape is simply a function that associates a level of fitness with the genetic structure of an organism. This biological concept has been imported into many disciplines, among which the social and economic sciences, where it has found several applications.
If we refer to organisations, the elements which influence the fitness can be interpreted in various ways and
can be represented by the profit or by a set of variables related to the objectives of the organisation.
Alternatively, the fitness landscape can be interpreted as a function of a set of individuals’ actions and of their collective performance. However, the same structure can also be seen from an individual perspective.
Kauffman (1993) showed how the topology of a fitness landscape is influenced by the degree of interdependence of the contributions of the genes to the fitness. Kauffman characterises fitness landscapes essentially with two structural variables:
- N, which is the number of elements (i = 1, …, N) that characterise the entity;
- K, which is the number of other elements with which each element interacts.
In order to describe the NK model it is therefore necessary, as a first step, to define a system composed of N elements that can assume several states19 (e.g. a production process composed of N machines). Such elements can be characterised by different degrees of interdependence. Thus, the variable K expresses the average number of other elements with which every element is interdependent. Every possible combination of states of the single elements is called a configuration. Finally, it is necessary to define a measure of performance of the system, called fitness.
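The following sketch shows one common way such a model can be implemented, assuming binary states ($A_i = 2$), randomly drawn interdependencies and fitness contributions averaged over the N elements; these implementation choices are standard in the literature but are our own illustrative assumptions rather than prescriptions of Kauffman’s text:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 2  # illustrative parameter values

# Each element i depends on itself and on K randomly chosen other elements.
neighbours = [np.r_[i, rng.choice([j for j in range(N) if j != i], K, replace=False)]
              for i in range(N)]

# Each element's fitness contribution is a random value for every
# configuration of its K+1 relevant elements.
tables = [rng.random(2 ** (K + 1)) for _ in range(N)]

def fitness(config):
    """Average of the N interdependent contributions for one configuration."""
    total = 0.0
    for i in range(N):
        bits = [config[j] for j in neighbours[i]]
        total += tables[i][int("".join(map(str, bits)), 2)]
    return total / N

# The fitness landscape: a fitness value for each of the 2^N configurations.
landscape = {c: fitness(c) for c in itertools.product([0, 1], repeat=N)}
best = max(landscape, key=landscape.get)
print(f"global peak {best} with fitness {landscape[best]:.3f}")
```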
A certain degree of fitness is associated with every possible configuration of the elements of the system. The former depends on the (more or less high) level of exploitation of positive (complementarities) and negative interdependencies.
19 In particular, for each element there exist $A_i$ possible states, called alleles in biology (Frenken et al., 1999).

The set of values of fitness constitutes the fitness landscape. We can use again the example of a production system to clarify the point. In such a case, we can consider the performance as a combination of the time required to complete a production process (a shorter process time means a better performance) and of the cheapness in the use of the resources. In biology, interdependent contributions of the genes to the fitness are called ‘epistatic’ interactions. In particular, when there is no epistasis, the landscape tends to assume a configuration with only one maximum (‘peak’), as in Figure 1.

Figure 1: fitness landscape (I)

As interdependencies grow, the landscape assumes a more rugged configuration, characterised by multiple peaks, like the one represented in Figure 2.

Figure 2: fitness landscape (II)

According to such a model, therefore, in the absence of interdependencies (that is, for K=0), we have a landscape characterised by only one global maximum point. This can be explained by the fact that, if the contribution of each single gene (or agent) to the overall fitness is independent of those of the others, then each gene exhibits

behaviour independent of that of the other genes (agents). This amounts to saying that every gene can improve its own contribution without interfering with the behaviour of the others.
What does this mean in the organisational domain, when we refer to strategic choices or decisions regarding the design of systems or production processes? In such cases, the lack of interdependencies in the fitness landscape implies a situation in which we have only one global optimum. Every agent improves the overall fitness through improvements and local adjustments of its own contribution, in such a way that the global maximum can be reached from any point by ‘walking’ towards adjacent positions that lead to higher payoffs. To sum up, if all the elements are independent, each of them contributes independently to the fitness of the system, and it is sufficient to optimise every element independently in order to reach the global maximum of the fitness surface (that is, the peak of the landscape). Such a peak is unique: the surface, indeed, has a regular and smooth shape (similar to a hill) like the one represented in Figure 1.
As for rugged and irregular landscapes, instead, the multiplicity of peaks is the direct result of the interdependence among a set of agents and choices (strategic decisions, design choices, etc.) (Marengo, 2000). When the level of interdependence is high, a change in a single action can lead to a decrease in fitness, even though a simultaneous change of a larger set of actions could improve the performance. For example, adopting the system known as just in time for the management of inventories can decrease the efficiency of the organisation if other contemporaneous changes in the production and management systems are not made. However, with appropriate adjustments of other elements of the organisation, the change towards a just in time system can lead to relevant benefits in terms of performance. The reason for this effect is easily intuited: as interdependencies grow, the number of ‘circuits’ of elements connected through complementarity relations becomes higher (Levinthal and Warglien, 1999) but, at the same time, the possibilities of conflict and competition among the circuits themselves also increase. This generates a variety of configurations of the system that are locally optimal and cannot be improved by modifying the state of a single element.
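A local adaptive ‘walk’ of the kind just described can be sketched on the landscape generated in the previous example (the fitness function and N are reused from that sketch): one element is changed at a time and the move is accepted only if fitness improves, so the walk halts at a configuration that no single-element change can improve.

```python
def adaptive_walk(start):
    """Climb by one-element changes until no such change raises fitness."""
    config = list(start)
    while True:
        current = fitness(tuple(config))
        better = None
        for i in range(N):
            flipped = config.copy()
            flipped[i] = 1 - flipped[i]
            if fitness(tuple(flipped)) > current:
                better = flipped
                break
        if better is None:
            return tuple(config)   # a local optimum
        config = better

print(adaptive_walk((0, 0, 0, 0)))
```

With K = 0 every such walk ends at the single global peak; with higher K it often stops at one of the many local optima, which is the trapping effect described above.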
Why do these concepts deserve our attention? In the operations management domain, the answer is easily found. Indeed, in order to know the total result or performance of an operating choice, it is necessary to represent correctly the interdependencies among the units of the system. Let us make an example: imagine you have a plan regarding the use of a production resource that must satisfy several jobs. Suppose that for each job it is necessary to produce a certain quantity of parts (pieces) that differs over time. In such a case, it is not possible to take into account only the optimal use of that machine: it is also necessary to consider the effects of our choice on the use of the machines that follow and whose activity is necessary in order to complete the jobs.
What we have just said becomes clearer if we think of the example regarding the design of an aircraft presented in a previous section. In that case, as we have already noticed, the improvement of only one component (e.g. a more powerful engine or larger wings) may decrease the overall performance of the airplane if
the other components have not been adjusted for this change. “At the end, an ‘effective’ solution to the
problem (i.e. a properly flying aircraft) will be one in which a large sets of traits and characteristics have
been coordinated in ways that turn out to be compatible with each other” (Marengo et al., 2000, pg. 759).

5.3 The usefulness of complexity measures

Are entropy and fitness measures appropriate indicators of the complexity of decomposable structures?
In this paragraph we try to answer this question. To do so, it is worth analysing which of the features that make the control of complex systems difficult are taken into account by each measure.
This attempt at evaluating the usefulness of the presented complexity measures is synthesised in Table 1.

Number of states that can be assumed by the system units. Entropy measure: the logarithmic base in the formula of entropy reflects the number of states (i = number of units; j = number of possible states of each unit). Fitness measure: N (the number of elements that characterise the entity) and $A_i$ (the number of possible states for each element).

Interdependence. Entropy measure: need for conditional probabilities (and the difficulty of knowing them). Fitness measure: K (the average number of other elements with which each element is interdependent); different levels of interdependence among components correspond to different fitness landscapes.

Uncertainty. Entropy measure: entropy as an index of the uncertainty level of a stochastic process; the a priori probabilities are assumed known. Fitness measure: implicitly considered through the number of possible configurations.

Degree of decomposability. Entropy measure: entropy is an additive measure; with perfect decomposability the entropy of the system is the sum of the entropies of each unit. Fitness measure: the degree of decomposability is strictly related to the level of interdependence, and it is relevant in the search process.

Table 1: comparison between entropy and fitness measures

Let us make some comments on Table 1.


Shannon (1948) refers to the entropy both as a measure of how much ‘choice’ is involved in the selection of possible events and as an index of uncertainty about the outcome of a stochastic process. As for uncertainty, the main points to stress are the following. First, the entropy is null if and only if all the probabilities are zero except for one, which is equal to one. This means that “only when we are certain of the outcome does H vanish. Otherwise H is positive” (Shannon, 1948, pg. 394). Second, when all events are equally likely (i.e. all $p_i$ are equal, $p_i = 1/n$), we are in the case of highest uncertainty, which corresponds to the situation in which H reaches its maximum value. Mathematically, this can easily be shown by considering the case of two possibilities with probability p and (1 - p). How can we interpret this property of the entropy measure if we want to apply it, for instance, to production processes?
According to Shannon, as the number of possible events increases, the possibilities of choice also increase, that is, the uncertainty increases. Therefore, an industrial resource (e.g. a machine) that operates on many different products is characterised by a greater amount of information (or information complexity) than a resource that operates only on a single product. Moreover, if we assume that units are independent, then the entropy associated with a system is the sum of the entropies associated with each unit. In the formula of entropy, the logarithmic base reflects the number of states that each unit can assume (i is the number of units while j is the number of possible states of each unit).
As we have already noticed, the entropy measure20 takes account of two dimensions of complexity: the number of states and the probability associated with each state. Interdependence, however, is not taken into account. Considering the conditional entropy can partially fill this gap: the latter allows us to grasp an aspect of interdependencies, but it does not take into account the difficulty of knowing the conditional probabilities.
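The following sketch illustrates, under an assumed joint distribution of the states of two machines, how conditional entropy captures the aspect of interdependence that the plain measure misses, using the chain rule H(Y|X) = H(X,Y) - H(X):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Assumed joint distribution: rows are states of machine X, columns of machine Y.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
p_x = joint.sum(axis=1)
p_y = joint.sum(axis=0)

h_y = entropy(p_y)                            # uncertainty about Y alone
h_y_given_x = entropy(joint) - entropy(p_x)   # H(Y|X) = H(X,Y) - H(X)

# H(Y|X) < H(Y): knowing X reduces the information needed to describe Y,
# precisely the effect that independence-based entropy measures ignore.
print(f"H(Y) = {h_y:.3f} bits, H(Y|X) = {h_y_given_x:.3f} bits")
```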
As for the fitness measure, the level of interdependence is reflected by the value of K. Indeed, K expresses the average number of other elements with which each element is interdependent. The first feature presented in Table 1 is also considered: N is the number of elements that characterise the entity and $A_i$ is the number of possible states for each element. The variety of possible configurations is embodied in the type space and fitness landscape. Therefore, the NK model allows us to compare structures characterised by a different number of elements or a different level of interdependence by changing the two structural parameters.

6 Conclusions

In this paper complex systems have been defined as systems whose management and optimisation require the co-ordination of interdependent elements. The functional relations of the components can be only partially understood. As we have argued in Section 3, complex systems are characterised by different levels of decomposability. These levels are lower or higher according to the nature of the interactions among the components of the considered structure. Thus, the concept of interdependence becomes crucial in the analysis of complex systems. It refers to the fact that the contribution of each member to the overall result depends, to varying degrees, on the states assumed by the other components.

20 Simon (1962) stated that the entropy change does not provide us with a guide to system behaviour.

In this article, the problem of measuring the complexity of decomposable structures has been discussed. The two groups of measures we have considered address different characteristics of such structures.
Our attempt was to present the features of the complexity measures and then evaluate their strengths and limits. Future work has to be done in this respect.
We believe that a deeper understanding of complexity measures is also useful for exploring the trade-off between the opportunities and costs of complexity. Indeed, on the one side, the greater the complexity, the more the number of opportunities grows, since the possibilities of exploiting the interdependencies increase. On the other side, however, as complexity increases there are more co-ordination problems, since in order to obtain some improvements with respect to a certain point of the landscape it is necessary to co-ordinate multiple elements. This tension between the possible advantages of interdependence and co-ordination costs or requirements is at the heart of the problem of design in organisations (Levinthal, 1999). In fact, since the first scientific contributions to organisational design, it has emerged that the fundamental problem of organisational architecture is constituted by the nature of the interdependencies among the various activities carried out in an organisation. The theories of organisational design have always implicitly recognised that increasing levels of interdependence create increasing requirements of co-ordination. Consequently, the problem has always been to find ways of keeping the problems of co-ordination within manageable limits, by reducing the interdependencies among the various organisational components. Therefore, design decisions must find a compromise between the opportunities and costs of complexity, by finding out which elements to group and which instead to isolate or separate (Simon, 1994; Thompson, 1967).
Other questions are also of great importance: to what extent do complexity measures deserve the attention of organisational studies? Can these measures be applied empirically? Finding answers to these questions is on the agenda for future work.

7 References

AXELROD, R., COHEN, M.D., 2000, Harnessing Complexity: organizational implications of a scientific frontier,
Free Press.

BALDWIN, C.Y. and CLARK, K.B. 2000, Design Rules, volume 1, The power of modularity, MIT Press,
Cambridge MA.

BECHTEL, W. and RICHARDSON, R.C., 1993, Discovering complexity: decomposition and localization as
strategies in scientific research, Princeton, NJ, Princeton University Press.

CALINESCU, A., EFSTATHIOU, J., SCHIRN, J. and BERMEJO, J., 1998, Applying and assessing two methods for measuring complexity in manufacturing, Journal of the Operational Research Society, 49, 723-733.

CALLEBAUT, W. (Ed.), 2001, Modularity: Understanding the Development and Evolution of Complex Natural
Systems, MIT Press, Cambridge.

DESHMUKH, A.V., TALVAGE, J.J. and BARASH, M.M., 1992, Characteristics of part mix complexity measure for manufacturing systems, IIE Transactions, 2, 1384-1386.

DESHMUKH, A.V., TALVAGE, J.J. and BARASH, M.M., 1998, Complexity in manufacturing systems, Part 1: Analysis of static complexity, IIE Transactions, 30, 645-655.

DEVETAG, G. and ZANINOTTO, E., 2002, “The Imperfect Hiding: Some Introductory Concepts and
Preliminary Issues on Modularity”, paper to be presented at the conference on Innovative Research in
Management, May 9-11, 2002, Stockholm.

DI BERNARDO, B. and RULLANI, E., 1990, Il management e le macchine, Il Mulino, Bologna.

DOSI, G., HOBDAY, M. and MARENGO, L., 2000, “Problem-Solving Behaviours, Organisational Forms and
Complexity of Tasks”, Sant’Anna School of Advanced Studies, Laboratory of Economics and
Management Working Paper.

DOSI, G., LEVINTHAL, D. and MARENGO, L., 2001, “Bridging Contested Terrain: Linking Incentive-Based
and Learning Perspectives on Organizational Evolution”, Sant’Anna School of Advanced Studies,
Laboratory of Economics and Management Working Paper.

EGIDI, M., and MARENGO, L., (????), “Cognition, Institutions, Near Decomposability: rethinking Herbert
Simon’s contribution”, University of Trento, Computable and Experimental Economics Lab Working
Paper.

ERIXON, G., 1998, “MFD – Modular Function Deployment, A systematic method and procedure for
company supportive product modularisation”, PhD Thesis, The Royal Institute of Technology,
Stockholm, Sweden.

FRENKEN, K., 2001, “Modelling the organisation of innovative activity using the NK-model”, paper
presented for the Nelson-and-Winter Conference, 12-16 June 2001, Aalborg.

FRENKEN, K., MARENGO, L., VALENTE, M., 1999, “Interdependencies, nearly-decomposability and
adaptation”, University of Trento, Computable and Experimental Economics Lab Working Paper.

FRIZELLE, G. and WOODCOCK, E., 1995, Measuring complexity as an aid to developing operational strategy, International Journal of Operations and Production Management, 5, 26-39.

GAIO, L., GINO, F. and ZANINOTTO, E., forthcoming, I sistemi di produzione. Manuale per la gestione delle
operazioni produttive, Carocci.

KARP, A. and RONEN, B., 1992, Improving shop floor control: an entropy model approach, International Journal of Production Research, 4, 923-938.

KARP, A. and RONEN, B., 1994, An information entropy approach to the small-lot concept, IEEE Transactions on Engineering Management, 1, 89-91.

KAUFFMAN, S., 1993, The Origins of Order, Oxford University Press, Oxford.

KAUFFMAN, S., 1996, At Home in The Universe: The Search for Laws of Self-organization and Complexity, Oxford
University Press, Oxford.

LANGLOIS, R.N., 2000, “Modularity in Technology and Organization”, paper presented at the conference
on Austrian Economics and the Theory of the Firm, August 16-17, 1999, Copenhagen Business School.

LEVINTHAL, D.A., 1997, Adaptation on Rugged Landscapes, Management Science, Vol. 43, 7, July, 934-950.

LEVINTHAL, D.A., WARGLIEN, M., 1999, Landscape Design: Designing for Local Action in Complex
Worlds, Organization Science, 10, 3, 342-357.

MARENGO, L., ????, Decentralisation and Market Mechanisms in Collective Problem-Solving.

MARENGO, L., DOSI, G., LEGRENZI, P., PASQUALI, C., 2000, The Structure of Problem-solving
Knowledge and the Structure of Organizations, Industrial and Corporate Change, 9, 4, 757-788.

MARENGO, L., PASQUALI, C., VALENTE, M., 2001, “Decomposability and Modularity of Economic
Interactions”, in Callebaut, W. (Ed.).

McCARTHY, I.P. and RAKOTOBE-JOEL, T. (Eds.), 2000, Complexity and Complex Systems in Industry, Conference Proceedings, 19th-20th September 2000.

NICKERSON, J.A., and ZENGER, T.R., 2001, A Knowledge-based Theory of Governance Choice – A
Problem-solving Approach.

MORTENSEN, N.H. and ANDREASEN, M.M., 2000, “The need for proper understanding of modularisation”, February 7-8, 2000.

PAGE, S.E., 1994, Two measures of difficulty, Economic Theory, vol. 8, 321-346.

SANCHEZ, R. and MAHONEY, J.T., 1996, Modularity, Flexibility, and Knowledge Management in Product
and Organization Design, Strategic Management Journal, 17, Winter Special Issue, 63-76.

SHANNON, C.E., 1948, A Mathematical Theory of Communication, The Bell System Technical Journal, Vol.
XXVII, 3, July, 379-423.

SHANNON, C.E., WEAVER, W., 1949, The Mathematical Theory of Communication, The University of Illinois
Press, Urbana.

SIMON, H.A., 1962, The Architecture of Complexity, Proceedings of the American Philosophical Society, 106(6),
December, 467-482.

SIMON, H.A., 1996, The Sciences of the Artificial, MIT Press, Cambridge MA.

THOMPSON, J.D., 1967, Organizations in action, New York, McGraw-Hill.

WESTHOFF, F.H., et al., 1996, Complexity, Organization, and Stuart Kauffman’s The Origins of Order, Journal of Economic Behavior and Organization, Vol. 29, 1-25.

YAO, D.D., 1985, Material and information flows in flexible manufacturing systems, Material Flow, 2, 143-
149.

YAO, D.D. and PEI, F.F., 1990, Flexible parts routing in manufacturing systems, IIE Transactions, 1, 48-55.
