


Cybernetics and Systems

ISSN: 0196-9722 (Print) 1087-6553 (Online) Journal homepage: http://www.tandfonline.com/loi/ucbs20

A COMPLEX SYSTEM ENGINEERING DESIGN MODEL

Mahmoud Efatmaneshnik & Carl Reidsema

To cite this article: Mahmoud Efatmaneshnik & Carl Reidsema (2010) A COMPLEX
SYSTEM ENGINEERING DESIGN MODEL, Cybernetics and Systems, 41:8, 554-576, DOI:
10.1080/01969722.2010.520212

To link to this article: http://dx.doi.org/10.1080/01969722.2010.520212

Published online: 25 Nov 2010.



Cybernetics and Systems: An International Journal, 41:554–576
Copyright © 2010 Taylor & Francis Group, LLC
ISSN: 0196-9722 print/1087-6553 online
DOI: 10.1080/01969722.2010.520212

A Complex System Engineering Design Model

MAHMOUD EFATMANESHNIK and CARL REIDSEMA


The University of New South Wales, Sydney, New South Wales, Australia

Competitive markets and the advanced needs of modern societies


necessitate mastery and successful design of complex products
and systems. The design management of complex products is the
focus of this article. We pay particular attention to the fact that
organizational communication management is central in the
development of complex product design. Any template for complex
systems design must pay particular attention to the coupling
between product, process, and organization structures. This impor-
tant coupling can be managed through the effective simulation of
the product parametric structure. The complex systems design
methodology presented here focuses on organizational structure
design, such that the complexity and structure of the to-be-
designed product determine the essential interactions of the
designers.

KEYWORDS complex systems, complexity, design management,


design methodology, product design, systems engineering

INTRODUCTION

Engineering design often entails the creation of a system whose desired


functionality may be complex, resulting in complex management processes.
Design theory asserts that the scientific method can be used to hypothesize
and develop a design science. The aim of science is to explain as much as
possible the complexity of natural phenomena and their processes, whereas
the designer wants to create complex functionalities in the form of complex
artefacts. Designers and scientists alike seek to find the parameters that
contribute to the behavior of systems and to understand the interplay of these
parameters. Within this perspective, both natural science and design science

Address correspondence to Mahmoud Efatmaneshnik, School of Surveying and Spatial


Information Systems, The University of New South Wales, Anzac Parade, Kensington, Sydney,
NSW 2052, Australia. E-mail: mahmoud.e@unsw.edu.au


are concerned with understanding and explicating problem solving itself.


The problem-solving process is a search and decision-making process:
searching for solutions of a given problem and then opting and deciding
from those solutions for the one that has the highest utility.
Another trend in modeling the design process of complex products
suggests the exploitation of emergence and greater complexity (rather than
less complexity), which may lead to more complex product functions at the
conceptual design level (Weber and Condoor 1998; Van Alstyne and Logan
2007). A stereotypical complex product (e.g., an aircraft, a computer, or a
Mars probe) can be regarded as a system. The common definition of a system
is a set of interacting or interdependent entities, real or abstract, forming an
integrated whole. The entities can themselves be systems, in which case the
term system of systems is used. From the systems thinking perspective, product
design and system design overlap. The term systems engineering is preferred
to product design when the design process requires the application of
engineering and scientific knowledge from several disciplines. ‘‘Systems
engineering, in general, is an interdisciplinary field of engineering that wants
to integrate all disciplines and specialty groups into a team effort forming a
structured development process that proceeds from concept to production
to operation’’ (pg. 1, Keating et al. 2003). Systems engineering methodologies
have been applied to the design of loosely coupled systems successfully by
using traditional systems design templates. The most widely referenced tra-
ditional template comprises the following seven phases (Keating et al. 2003):

1. Stating the problem
2. Investigating alternatives
3. Modeling the system
4. Integrating
5. Launching the system
6. Assessing the performance
7. Reevaluating

Today’s challenges as well as future ones necessitate novel approaches


to understanding, analysing, and synthesizing complex and interconnected
large-scale systems. Human history has demonstrated a relentless progress
toward greater complexity—in man-made products and artefacts (Mina
et al. 2006). The last two centuries in particular can be characterized by a rad-
ical move toward greater complexity, in economical, political, social, and
technological systems (Mina et al. 2006), such that complexity, desired or
otherwise, is now dominating almost every aspect of modern life (Marczyk
1999). For that reason, Ottino (2004) argued that a further understanding
of complex systems science within engineering is an imperative for the future
of scientific evolution and has captured the attention of scientists and engi-
neers alike, from almost every discipline. The design of complex systems

with emergent properties (collective properties absent at the local level of


parts) is almost impossible with traditional engineering tools and methods
(Ottino 2004). ‘‘All the great innovations of the history harnessed emergence:
the tool making of early man; the internet; Gutenberg’s printing press etc.’’
(pg. 2, Van Alstyne and Logan 2007). All of nature’s products harness
emergent functionalities and properties that are the result of cooperation
between large number of microorganisms (Van Alstyne and Logan 2007).
Accordingly, formalizing a template for complex systems engineering would
be central to our ability to design, manufacture, and implement complex
forms, whether large-scale systems from megacities to ballistic missiles or
small nanomachines and products.
Design methods are thus viewed as design tools that lead toward more
holistic solutions and better final products by providing important insights
about the design. For example, a design method might facilitate multidisci-
plinary collaboration amongst design teams, or it can lead to an enhanced
use of brainstorming. In any case, in order to facilitate systematic thinking
about design and to utilize the new design methods there needs to be system
models of the design process and/or design methodologies that represent
the features and characteristics of complexity. System thinking is a design
process methodology (model or template) that, though having potential to
limit creativity to some extent, can prevent the design process from deterior-
ating into chaotic states. Design theory includes several design process
models (or design methodologies) that can be classified as descriptive, pre-
scriptive, computer-based models for distributed design problems (Finger
and Dixon 1989). However, models for handling complex (system) design
problems must be added to this list. This article presents a complex systems
design methodology.

COMPLEXITY IN SYSTEMS ENGINEERING

A generally held view is that the complexity of a system can be fully


represented by size, coupling, and cyclic interactions. The size of a system
is the minimum number of elements that the system can be described with
(Edmonds 1999). Coupling (or connectivity) is the collective dependency
between system elements. The coupling of a system is a strong indicator of
its decomposability (Edmonds 1999). The number of independent cycles is
a basic graph measure sometimes referred to as cyclomatic complexity and
is the number of independent cycles or loops1 in a graph (McCabe 1976).

1. The number of independent cycles is easily calculated by the formula c(G) = m - n + p,
where m is the graph size (number of edges), n is the graph order (number of vertices), and
p is the number of connected components, determined by the multiplicity of zero in the
eigenvalue spectrum of the graph's Laplacian matrix.
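The cyclomatic number c(G) = m - n + p can be computed directly from an edge list. The following is an illustrative Python sketch; the union-find component count stands in for the Laplacian spectral method mentioned in the footnote, and the example graph is invented:

```python
# Sketch: cyclomatic number c(G) = m - n + p of an undirected graph.
# The example node names and edges are illustrative only.

def cyclomatic_number(nodes, edges):
    """c(G) = m - n + p, where m = |edges|, n = |nodes|, and p is the
    number of connected components (counted here with union-find
    instead of the Laplacian eigenvalue spectrum)."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # union the two components

    p = len({find(v) for v in nodes})
    return len(edges) - len(nodes) + p

# A triangle plus a disconnected edge: m = 4, n = 5, p = 2, so c(G) = 1,
# matching the single independent cycle in the triangle.
print(cyclomatic_number(
    nodes=["a", "b", "c", "d", "e"],
    edges=[("a", "b"), ("b", "c"), ("c", "a"), ("d", "e")]))
```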

According to Erdi (2008), simple systems are characterized by:

1. Single cause and single effect
2. A small change in the cause implies a small change in the effects
3. Predictability

In contrast, complex systems have interconnected elements and are charac-


terized by (Erdi 2008):

1. Circular causality, feedback loops, logical paradoxes, and strange loops2
2. A small change in the cause may imply dramatic effects
3. Emergence and unpredictability

A complex system is then potentially one in which a large number of inde-


pendent variables interact. Complex systems intrinsically possess potential
for catastrophic failure because the behavior of complex systems is not pre-
dictable from the knowledge of individual elements, no matter how much we
know about them (Merry 1995). Complex systems require special treatment
because they possess the potential for catastrophic failure (Merry 1995; Marc-
zyk 1999). Indeed, ‘‘complexity in conjunction with uncertainty brings about
a host of new problems and phenomena’’ (pg. 39, Marczyk 1999). Uncer-
tainty, seemingly harmless for simple systems, becomes a fundamental issue
that can lead to fragility for large and complex systems; in other words with
the combination of complexity and uncertainty, catastrophe is always around
the corner (Marczyk 1999).
The integration problem in complex new product development (CNPD)
and its tasks at all levels of the product, process, and organizational (PPO)
system involves a tremendous effort. At the integration level, where the
product components are brought together for interoperability tests, it has
often been observed that complex CNPD processes tend to spiral out of control
(Mihm and Loch 2006). This means that the design process oscillates between
being ahead of and behind schedule. Mihm and Loch (2006) described a
host of these types of oscillating process performance behaviors resulting
in failures within various industries; for example, in the development of
the Microsoft Office software project, in the aeronautics industry for the
Boeing 767-F project, and from the semiconductor industry in Intel’s Itanium
chip design project (Mihm and Loch 2006). Engineers of future complex sys-
tems face an emerging challenge of how to address problems associated with

2. Circular causality in essence is a sequence of causes and effects whereby the explanation
of a pattern leads back to the first cause and either confirms or changes that first cause (Erdi
2008).

integration of (multiple) complex systems (Keating et al. 2003). In response


to this need, a branch of systems engineering is beginning to become known
as complex system engineering.

DESIGN METHODOLOGY REVIEW

Descriptive models of design are cognitive models that explain, simulate,


or emulate the human designer’s underlying mental processes when cre-
ating an artefact or product (Finger and Dixon 1989). The main detailed
descriptive model of the design process, also known as a canonical
design model, was presented by French (1985). The stages were analysis
of the problem, conceptual design, embodiment of schemes, and detail-
ing. Another widely used descriptive design process was presented by
Cross (2000). Prescriptive models for design describe the necessary and
crucial procedures that the design process ought to have (Finger and
Dixon 1989). Jones’s (1970) prescription for the design process was analy-
sis, synthesis, and evaluation. Archer (1984), Pahl and Beitz (1996), and
Kroll et al. (2001) also developed similar prescriptive design models.
The traditional systems engineering template should also be considered
as a prescriptive design model.
Computer-based models of design processes are concerned with how
computers can design or assist in designing (Finger and Dixon 1989). A
computer-based model expresses a method by which a computer may
accomplish a specified task. Computer-based models are generally specific
to a well-defined class of design problems such as conceptual design, con-
figuration design, and parametric design (Finger and Dixon 1989). In each
of these design classes, design automation is achieved by the emulation of
a computer-based design model that is based, more than anything else, on
specific problem representation schemes.
Distributed design processes are aimed at dealing with the organiza-
tional and process aspects of large-scale design problems. For large-scale
design problems such as aircrafts, cars, etc., the design process is carried
out by multiple design teams that are more often than not multidisciplinary
design teams. In multiteam design, a team refers to a collaboration of design
participants that, in a broad sense, can consist of designers, computers, or
even algorithms (Chen and Li 2001). The distributed models generally utilize
two different tools: decomposition and abstraction. Decomposition reduces
the problem to several subproblems that may be distributed among several
distinct design teams. The subproblems need to be integrated back into a
whole system after they are solved. Abstraction simplifies the description
or specification of the system. Abstraction provides a layered view of the sys-
tem. The layers can be thought of as parts or information about parts of the
product that are categorized according to their importance to the product

functions. Several decomposition strategies and abstraction strategies are


respectively discussed in Efatmaneshnik et al. (2009) and Liu et al.
(2003). Consequently, the integration issue is concerned with the coherence
of the product across many abstraction levels or subproblems (distributed
to spatially diverse and possibly multidisciplinary design teams). Coordi-
nation strategies are normally required to handle the coherence problem
and most of the models of distributed design in the literature differ on
how they handle the integration problem. For example, Meunier and Dixon
(1988) described a computer program for hierarchical distributed problem
solving.
Reidsema and Szczerbicki (2001) proposed a model for distributed
planning that had the elements of both abstraction and decomposition. They
described it as a cyclic execution of Generation, Decomposition, Distri-
bution, Integration (GDDI cycle), shown in Figure 1. The Reidsema and
Szczerbicki (2001) model is a gradualist approach to distributed design
problem solving. It uses abstraction layering to gradually unveil and unravel
the design problem. In this article we will adopt and modify this simple
model to suit the characteristics of complex problems.

Complex System Design Models


A model that addresses all or some of the following issues can be regarded as
a model of complex design problems:

1. The relationship between global emergent properties of an engineered
artefact and local properties of its parts.
2. The control, coordination, and cooperation relationships between design
teams at the level of the CNPD organization.
3. The interdependencies of the solutions at different abstraction levels.

FIGURE 1 GDDI cycle or the distributed model of Reidsema and Szczerbicki (2001).

5. The integration problem of many interdependent subproblems at one


abstraction level.
6. The management of complex CNPD projects subject to cost, time, and
quality constraints.

Concurrent engineering (Prasad 1996), spiral design (Boehm 1986), evol-


utionary design (McConnell 1996), and enlightened evolutionary processes
(Bar-Yam 2004) are among the design models that claim ability and utility
in handling complexity.
Concurrent engineering and the spiral design process are not sufficient
to represent complex systems. These processes respectively deal with life
cycle issues and unclear initial product requirements conditions (Prasad
1996; Unger 2003). They are accordingly tackled by means of considerable
overlapping between the tasks of the different stages of the canonical design
model (Prasad 1996) and multiple cross-phase iterations (Unger 2003). Cer-
tainly, these two process models do not adequately represent the complexity
of the design problem and can increase the complexity of representation
when applied to complex design problems. Concurrent engineering and
the spiral design process do not address any of the issues associated with
complex problems, defined here as problems that are large scale, densely
connected (coupled), and have high cyclomatic complexity.
Existing models for complex problem solving are the evolutionary
model of the design process (McConnell 1996) and the enlightened engineer-
ing process (Bar-Yam 2004). The former process is based on prototyping and
testing multiple versions of the product, whereas the latter is based on com-
petition between multiple design teams for higher fitness in their solutions.
Evolutionary processes are also gradualist but not through abstraction or lay-
ering. They take a holistic view toward products as opposed to abstracted
views and see all layers (abstraction levels) as connected. This means that
the integration is performed centrally (inside each unit or team) rather than
in a distributed fashion. Thus, the integration problem is totally removed by
the assumption that there would be no need for interaction between the
design teams at all. The evolutionary model of the process is most suitable
for design problems that are likely to have emergent properties due to their
dense connectivity. However, it is not clear how the evolutionary process can
be helpful when the scale of the product is also large and when cost effec-
tiveness is a design objective. If, for example, the Boeing 747 were to be
designed by evolutionary means, how many prototypes, and at what cost, would
this approach require? Bar-Yam (2004) also proposed overcoming the inte-
gration difficulty of complex systems design through the concept of radical
innovation and discounted the effectiveness of decomposition and abstrac-
tion. This remedy is viable only for problems that are small in size and for
which decomposition would increase their overall complexity (see Efatmaneshnik
et al. 2009); it is not possible to tackle large-scale problems all at once.

In the context of systems engineering there are three main streams in


which complexity can be addressed:

1. Complexity of products
2. Complexity of CNPD processes
3. Complexity of CNPD organizations

In addition to the inherent complexity of the product, process, and organiza-


tion system, there is another source of complexity in CNPD: this complexity
arises from the circular dependency between the product, process, and orga-
nizational structures (Browning 2001). These circular couplings make the
planning for development of even not-that-complex products a complex
task. As such a valid design methodology for complex problems must take
the PPO circular coupling into account. This important consideration
requires a complexity design model that will extend the new design meth-
odologies=design theories of traditional design process models into the realm
of organization structure and management models. The model presented in
this article is an attempt toward this end.

THE PROPOSED MODEL

Figure 2 depicts our proposed simulation-based complex systems design


model that is an evolved version of Reidsema and Szczerbicki’s (2001) model
with an added simulation stage, and significant differences in each stage. The
model is suited for problems that are large scale, densely interconnected, and
have cyclic dependencies. Various stages of this complex systems design

FIGURE 2 Simulation-based model of design process for complex problems.



model, their characteristics, and their relationship to complexity are dis-


cussed in detail next.

Abstraction vs. Layering


The contributing parameters to engineering design problems can be
numerous and have different criticalities/importance toward satisfying the
functional requirements (Aleisa and Lin 2008). Obviously, essential features
of the design problem and less important details cannot be considered sim-
ultaneously lest the design process become ineffective and lengthy. This
issue necessitates the gradual (temporal) introduction of less critical design
parameters, which represent the lower abstraction levels in design. The
abstraction method leads to multiple, often hierarchical design parameter
representation. The common interest in the abstracting strategy is that it must
be performed in a way that does not lead to backtracking the design process
across many abstraction levels (Aleisa and Lin 2008). As Bar-Yam (2004) has
pointed out, this ideal abstraction is not possible for complex systems, simply
because in complex systems each layer is sensitive to variation in other
abstraction layers. To understand this we could consider the skin of the
human body that has tasks that are not completely represented when con-
sidering its basic low-level role in covering up essential internal organs.
The same analogy would hold for the skin of an aircraft. In some aircrafts
such as Boeing airliners the skin is designed to be load bearing. Thus, the
skin material, thickness, and joining methods must be designed along with
the other structural parts. This perception of complex systems led Kuras
(2007) and Ryan (2007) to argue that the role of abstraction in traditional sys-
tems engineering must be replaced by ‘‘scope’’ or ‘‘viewpoint’’ or ‘‘level of
observation’’ in complex systems engineering, asserting that the
criticality-based hierarchical structure of simple systems must be replaced
with seemingly flat, complex structures that have global emergence
properties. This view places an emphasis on the operational as well as
functional criticality of all layers of a complex system.
The layered view as opposed to abstracted view advocates the coher-
ence between design solutions in various layers. As such, complex systems
engineering processes deteriorate into massively iterative processes that har-
ness emergence and employ self-organizing processes in a bottom-up
approach (Van Alstyne and Logan 2007). This required homeostatic relationship
between design and emergence is the condition for the outbreak of inno-
vation (Van Alstyne and Logan 2007). Weber and Condoor (1998) argued for
exploiting synergetic effects of combining design solutions from a morpho-
logical table at various abstraction levels. It seems that innovativeness
remains as the sole common characteristic of integrated solutions to complex
engineering problems. To harness the massive iteration between various
layers, Keating et al. (2003) recommended maintaining distributed focus on

various layers during the design process as opposed to a single focus in tra-
ditional systems engineering. As such, in order to harness massive iteration
between different layers of design, we propose that the design must be car-
ried out by utilizing virtual design teams that can be formed and dismantled
more rapidly in response to low-level product structural knowledge. This a
priori knowledge can be obtained by simulation. See Efatmaneshnik and
Reidsema (2009) for a complete treatment of the issue, where it was argued
that the design processes for various layers may be carried out simul-
taneously by using IMMUNE, which is a conceptual decision support system
based on blackboard and multi-agent structures.

Generation
The process of generating the design problem, often referred to as conceptual
design, is the first stage of the canonical design process. Here we assume that
these concepts are generated in the parametric format as sets of new design
variables. But if this is not the case, the concepts must be parameterized
through extraction from structured and behavioral representation of the pro-
duct before proceeding to the next stage. The task of determining the values
of the design variables constitutes the lowest level problem-solving activity.
Naturally, the variables generated at the initial stages are those belonging
to the higher abstraction levels, which encompass the more intrinsic
characteristics of the product related to its main functions and performance.
In the same way, the lower abstraction levels usually contain the variables
that describe the detailed functionalities of the product. However, these
rules may be set aside. We believe
that it is advantageous to think of the variables in the first abstraction level in
the hierarchy as a seed that all the solutions of all other abstraction levels
depend heavily upon. Given this premise, the seed must have a foretaste
of other problems at other abstraction levels. Thus the seed must not only
reflect the most important and general functions of the product but also those
functions that are low in the functional decomposition chart and still have
large dependencies on the other functions=attributes of the product.
The number of variables/parameters that can be generated at each
abstraction level can be regarded as the termination criteria for each layer.
More variables obviously imply larger search and solution spaces. The stop-
ping criteria should be managed very carefully, though. To allow synergy
between different layers, Liu et al. (2003) proposed that the number of vari-
ables at each stage/layer must increase until about the midpoint in the design
process and by then the global trend (determined by the number of variables)
must be toward convergence (decrease in the number of variables). Liu et al.
(2003) proposed that for complex problems an ideal approach to abstraction
would be to carefully specify the number of possible solutions to be con-
sidered at the divergent stage of each abstraction level in order to have a global

trend of divergence–convergence. The ideal approach of concept generation


should follow multiple divergence–convergence in order to gradually increase
the number of solutions in concept generation, followed by an overall diver-
gent and convergent tendency to detail these concepts with an overall
decrease in the solution number. This way, the design process would benefit
both from synergetic solutions and also a manageable search in the solution
space. Note that the ideal layering approach was presented in an abstraction
context that has a time-linear characteristic. Nevertheless, the technique can
be employed in the context of layering, and it does not essentially
conflict with simultaneous and iterative search.

Simulation
After the generation phase, simulation is required to determine the relationship
between the generated parameters. Referring to the process of systematically
testing ideas early in new product development (NPD) as ‘‘enlightened
experimentation’’ (pg. 1), Thomke (2001), in the article ‘‘Enlightened Exper-
imentation: The New Imperative for Innovation,’’ argued that simulation tech-
nologies enhance the number of design breakthroughs by testing a greater
variety of ideas in a virtual environment. According to Thomke (2001), ‘‘com-
puter simulation doesn’t simply replace physical prototypes as a cost-saving
measure but it introduces an entirely different way of experimenting that invites
innovation (pg. 2).’’ Simulation is the key to resolving performance as well as
operational requirements improvement with sensible development and pro-
duction costs, times, and risks for multidisciplinary systems (Marczyk 1999).
Monte Carlo simulation is often suggested as a means of establishing a design
space because creating high-fidelity simulation models is often expensive.
Monte Carlo simulations can digest information obtained from the
experiments to tune the simulation for higher compatibility with the real
system. Monte Carlo simulation requires the estimation of the conditional
probability distribution of every pair of design variables; for example, from
the design of experiments results or from available models of the artefact/
problem. Figure 3 shows the typical modules that a simulation engine might
contain. Here the simulation is a parametric simulation in the statistical sense
that utilizes the parametric model of the problem/artefact. Thus a simulation
engine must contain a parameterization module to present the problem in
the parametric format.
The outcome of simulation is the fitness landscape or design space at a
given design layer. The fitness landscape is the metamodel of the design vari-
ables and their relations. We propose that the metamodels of the product can
in effect be used to construct a dependency table between the design
variables, which is known as the design structure matrix (DSM). A design
structure matrix is a system modeling and knowledge representation tool that
is useful in decomposition and integration (Browning 2001). A DSM shows

FIGURE 3 Simulation engine module and its integration in the problem-solving procedure.

the relationships and interplay of components of a system as a matrix that has


identical row and column labels. DSM is usually employed in modeling product,
process, and organizational architectures. Browning (2001) presented
the following types of DSM:

1. Parameter-Based DSM (PDSM): represents the product architecture and is
used for modeling low-level relationships between design variables.
2. Activity-Based DSM: models the information exchange and dependencies
between the various tasks of an activity network. This model has been
developed to address and assess high-level estimation of the integration
problem and interdependency issues of low-level design activities such
as parametric design (Kusiak 1999).
3. Team-Based or Organizational DSM: models the organizational structure
in terms of the information exchanges and interactions between the design
players, such as design teams.

Utilizing design space simulation in the design process to estimate the


PDSM of the product/problem as early as possible and upstream of
the design process is the main pillar of our complex engineering problem-
solving model. The simulated PDSM gives important insights about the
couplings in the problem structure and thus tasks and organizational structures.
Therefore, by simulation not only can the performance of the product be esti-
mated but the design process can be managed more effectively and efficiently.

Decomposition
According to Papalambros (2002), decomposition of large-scale design
problems allows for

. utilization of different solution techniques for individual subproblems,
. simultaneous design, modularity, multiobjective analysis, and
. efficient communication and coordination among the diverse groups
  involved in the design process.
566 M. Efatmaneshnik and C. Reidsema

Problem decomposition and partitioning of the self-map of the system fit
within the area of graph partitioning. A bipartitioning of graph G is a division
of its vertices into two sets or subgraphs, P1 and P2. Similarly, a
k-partitioning is the division of the vertices of the graph into k nonempty sets
P = [P1, P2, . . . , Pk]. A graph can be partitioned in many different ways. In the
domain of problem solving, every node or vertex of a graph represents a
variable of the system, and every edge of the graph suggests that two para-
meters of the system are dependent on each other; however, there are sev-
eral other representational schemes (see, for example, Papalambros [2002]
and the references therein; Chen et al. [2005]). In this article, the strength
of the relationship between two variables is the corresponding edge weight.
An undirected graph as the self-map of the system indicates that variables
affect one another mutually and equally. The subgraphs can be regarded
as subsystems, subproblems, or agents (Kusiak 1999). The notion of agency
implies that the subproblems are solved more or less independently of
each other. Each design team has the autonomy to explore the parts of the
solution space that are of interest to its own assigned subproblem (agent).
We suggest that decomposition be taken as the decomposition of the
simulated PDSM.
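The bipartitioning described above can be illustrated with a brute-force minimum edge-cut search over a small weighted self-map. This is a toy sketch with an invented adjacency matrix; practical decompositions of large DSMs use heuristics (spectral methods, Kernighan–Lin, and the like) rather than exhaustive enumeration:

```python
from itertools import combinations

def min_cut_bipartition(weights):
    """Exhaustively find the bipartition {P1, P2} of the vertex set that
    minimizes the total weight of edges crossing the cut.
    `weights` is a symmetric adjacency matrix of the self-map.
    Exponential in n, so only suitable for very small examples."""
    n = len(weights)
    nodes = range(n)
    best = (float("inf"), None)
    # Enumerate nonempty proper subsets containing node 0 to avoid
    # counting each bipartition twice (P1/P2 vs P2/P1).
    for size in range(1, n):
        for subset in combinations(range(1, n), size - 1):
            p1 = {0, *subset}
            # Each crossing edge (i in P1, j in P2) is counted once.
            cut = sum(weights[i][j] for i in p1 for j in nodes if j not in p1)
            if cut < best[0]:
                best = (cut, sorted(p1))
    return best

# Hypothetical 4-variable self-map: nodes 0-1 and 2-3 are tightly coupled.
weights = [
    [0, 5, 1, 0],
    [5, 0, 0, 1],
    [1, 0, 0, 4],
    [0, 1, 4, 0],
]
print(min_cut_bipartition(weights))  # (2, [0, 1]): cut {0,1} from {2,3}
```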
Kuras (2007) portrayed a fundamental difference that traditional systems
engineering and complex systems engineering ought to have: the need for
multiple conceptualizations (evolutionary and dynamic modeling) of the sys-
tem at different scales (layers) compared to one static conceptualization
(unchanged model) of the system in traditional systems engineering. Ring
and Madni (2005) also acknowledged that a static model of the system is
not only insufficient but leads to serious misunderstandings and undercon-
ceptualization of the solution, which opens the way for unintended conse-
quences. We also propose decomposing the PDSM in several modes and
according to its connectivity level. Modal decomposition of various layers
would be aligned with the recommendations of Kuras (2007) and Ring and
Madni (2005). Problem connectivity is the total number of edges in the
self-map of the product/problem divided by the total number of possible
edges, that is, the number of edges of a complete graph with the same num-
ber of nodes. The total number of possible edges in a complete undirected
graph with n nodes or vertices is

 
K = (n choose 2) = n(n − 1) / 2                              (1)

If the self-map of the PDSM has k connections (edges), we define the
problem connectivity as

p = k / K                                                    (2)
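Equations (1) and (2) translate directly into code. The sketch below computes p for a hypothetical five-variable self-map (the DSM itself is an invented example):

```python
def problem_connectivity(dsm):
    """Compute p = k / K from Eqs. (1)-(2): the ratio of actual couplings
    (edges) in the self-map to the K = n(n-1)/2 possible edges."""
    n = len(dsm)
    K = n * (n - 1) / 2
    # Count nonzero entries in the upper triangle of the symmetric DSM.
    k = sum(1 for i in range(n) for j in range(i + 1, n) if dsm[i][j] != 0)
    return k / K

# Hypothetical 5-variable self-map with 4 couplings: p = 4 / 10 = 0.4
dsm = [
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 0, 0],
]
print(problem_connectivity(dsm))  # 0.4
```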

TABLE 1 Decomposition Modes of Self-Map of Problems

Connectivity               Possible or best decomposition strategy
Very low (0–0.02)          Full decomposition
Low (0.02–0.1)             Integrative clustering
Intermediate (0.1–0.2)     Modular clustering
High (0.2–0.3)             Overlap clustering
Very high (0.3–1)          No decomposition

The decomposition modes are brought together in Table 1. Bearing in
mind that it is usually desirable to have subsystems of similar order, the
implementation of some of these decomposition modes (in particular, the full
decomposition and integrative modes) may not always be feasible.
The connectivity values in Table 1 are based on this knowledge and the
authors' experience with randomly generated graphs. For problems
with a denser self-map (higher connectivity), modular clustering and overlap
decomposition can be used. If the problem's map is very dense and the sys-
tem is regarded as highly complex, then it may not be decomposed at all
(Bar-Yam 2004).
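The banding of Table 1 can be expressed as a simple lookup. The band edges below copy the table; how the boundary values themselves are assigned (closed vs. open interval ends) is an assumption of this sketch:

```python
def decomposition_mode(p):
    """Map problem connectivity p (Eq. 2) to the decomposition-strategy
    bands of Table 1. Band edges follow the table; treating each upper
    edge as inclusive is an assumption."""
    if p <= 0.02:
        return "full decomposition"
    if p <= 0.1:
        return "integrative clustering"
    if p <= 0.2:
        return "modular clustering"
    if p <= 0.3:
        return "overlap clustering"
    return "no decomposition"

print(decomposition_mode(0.05))  # integrative clustering
print(decomposition_mode(0.35))  # no decomposition
```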
Each of these decomposition modes brings specific strengths, weak-
nesses, and particularities to the problem-solving process. As an example,
an aircraft with a blended wing body may not be decomposed completely
into separate body and wings with the related design variables being inde-
pendent or loosely dependent. Instead, for systems that have subsystems
with fuzzy boundaries, overlap decomposition may be used. The reason
why much effort in this work has been devoted to problem decomposition
is that clustering is in fact the key to tying top-down, activity-based DSMs
together with bottom-up, parameter-based DSMs (Browning 1999).

Composition and Distribution


The formation of the design teams and distribution of the design tasks among
those teams forms the composition and distribution stage. Integrated NPD
describes how tasks are interconnected and seeks to integrate product, pro-
cess, and organization (Prasad 1996). Organizational partitioning and inte-
gration are tied to the nature of the product decomposition (Gulati and
Eppinger 1996; Browning 1999). It is a valid argument that the efficiency
of the design integration process is somehow dependent on the alignment
of the task structure and organizational structure (Browning 1999). Thus inte-
grated NPD requires that decomposition and integration schemes have
congruence. A successful CNPD project requires that the harmony between
the two structures be deliberately created by synchronization of formation
and distribution tasks. In this way, the background or environment in which
a system is being designed is analyzed and temporarily modified to influence
the complex system’s self-directed development (Kuras 2007). In general,
two main approaches are reported in the literature for the deliberate align-
ment of problem structure and NPD organization structure: bottom-up plan-
ning for flexible organizational structure and top-down planning for fixed
organizational structures.
Bottom-up planning for flexible organizational structure forms the
design teams after product decomposition has taken place. For example,
the aligned organizational architecture in charge of the conceptual and para-
metric design of an aircraft’s body and wings correspondingly constitutes
two different design teams with overlapping boundaries. Tanaka et al.
(2000) took this approach in the context of distributed problem solving
and called it multi-agent system creation. This approach has also been
used in the manufacturing system MetaMorph (Maturana et al. 1999), which
could dynamically change its form to mimic the task structure. In this case,
the number of the subsystems the system is decomposed into may be
maximized for better overall performance. Also, a conceptual design decision
support system IMMUNE, presented in Efatmaneshnik and Reidsema (2009),
is a blackboard database to facilitate the grouping of design agents according
to the product complexity characteristics.
Top-down planning for fixed organizational structures decomposes the
problem/product in a way that suits the organizational structure; this is used
when the organizational structure is fixed and solid, which means that design
teams are formed prior to the introduction of the problem. A greater use of
coordination activities between the design teams and/or the use of integra-
tion teams results. In this case, the number of subsystems is determined
according to the number of design teams.
The two described approaches correspond to using low-level and
high-level knowledge for design planning. In general, in a problem-solving
environment the designer’s actions can be planned or controlled by using
three kinds of knowledge (Reidsema and Szczerbicki 2001):

1. Low-level problem knowledge
2. Medium-level knowledge of the problem-solving process (activities)
3. High-level organizational knowledge

Browning (2001) emphasized that the PDSM, which represents the low-level
product knowledge, has integrative applications. He asserted that design
planning usually is performed in a top-down fashion by giving centrality to
decomposition. This approach prevents the design planners from accounting
for the lowest levels of design activity, which is the determination of individ-
ual design parameters based on other parameters. As such, a bottom-up
approach provides a more integrative analysis of these low-level design
activities by providing process structure insights (Browning 2001).
In product design, the performance and operational requirements of the
product are micro- or low-level parameters, whereas production costs, times,
and risks are macro- or high-level and often emergent properties of the pro-
cess. Macrolevel properties (those that are observed at the organizational
level) dynamically arise from the interactions between the microlevel proper-
ties that are the low-level activities. Thus, in order to resolve the uncertainty
and deal with high-level emergent properties of the process and organiza-
tional operations (that can be chaotic), the low-level knowledge of the prob-
lem must be used to characterize the behavioral rules of individual problem
solvers. Low-level knowledge of the product indeed suggests a bottom-up
approach.
The PDSM that represents the low-level product knowledge is suitable
to be utilized for planning and distributing complex engineering problems.
We propose that low-level knowledge of simulated PDSM be used to derive
team-based DSM. This team-based DSM must not only be used for the forma-
tion of design teams, in terms of team size and expertise, but also
for ensuring adequate team interactions as the plan for the necessary minimal
collaborations. The predicted team-based matrix reflects the likely infor-
mation exchanges as well as the internal complexity of the problems that
the design teams ought to deal with. The direct mapping between organiza-
tional DSM and the simulated PDSM can be used to force the organization
structure to mirror the product architecture. This allows for predefined com-
munication and information exchange channels, in a large and complex
environment, which, per se, should prevent the process from spiraling out
of control by losing track of the necessary communications.
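The derivation of a team-based DSM from the simulated PDSM can be sketched as an aggregation over a given assignment of variables to teams. The team names and the assignment below are hypothetical; the aggregation rule (summing cross-team coupling weights) is one reasonable choice, not the article's prescribed formula:

```python
def team_dsm(pdsm, teams):
    """Aggregate a parameter-based DSM into a team-based DSM.
    `teams` maps each team name to the list of variable indices it owns.
    Entry [a][b] sums the couplings between team a's and team b's variables,
    predicting the required information-exchange intensity between teams."""
    names = list(teams)
    m = len(names)
    out = [[0.0] * m for _ in range(m)]
    for a in range(m):
        for b in range(m):
            if a == b:
                continue
            out[a][b] = sum(pdsm[i][j]
                            for i in teams[names[a]]
                            for j in teams[names[b]])
    return names, out

# Hypothetical: 4 design variables split between two teams.
pdsm = [
    [0.0, 0.9, 0.2, 0.0],
    [0.9, 0.0, 0.3, 0.4],
    [0.2, 0.3, 0.0, 0.1],
    [0.0, 0.4, 0.1, 0.0],
]
names, tdsm = team_dsm(pdsm, {"aero": [0, 1], "propulsion": [2, 3]})
print(names, tdsm)  # off-diagonal entries total the cross-team couplings
```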

Integration
Integration combines the partial solutions of a large problem (Reidsema and
Szczerbicki 2001). The integration problem for complex problems and com-
plex problem-solving environments is a key challenge for problem solvers.
For complex systems, due to coupling between the distributed tasks, inte-
gration may not be performed linearly simply by adding the partial solutions
together. Because the coupled problems tend to be nonlinear (the same as
the coupled differential equations), the solutions may not be achieved by
using the usual concurrent planning, which adds the partial solutions to
obtain the overall solution. The nonlinearity limits the kind of knowledge
being used for planning.
Integration within the design process can be conducted by two major
methods:

1. Supervised integration
2. Unsupervised integration

Supervised problem-solving architectures involve high-level integration
teams and centralized planning (see Figure 4). Eppinger (1997) stated that
at one level integration takes place inside each design team, which is also
known as concurrent engineering. Here a cross-functional team can address
many design and production concerns simultaneously. He asserted that to
ensure that the entire system is coherent and works together, subsystem
design teams must attempt tight cooperation and must work together. Addi-
tional teams are usually assigned for this important issue to supervise the
critical challenge of integrating the subsystems into a globally coherent (over-
all) system. However, for densely coupled and complex problems/systems,
using high-level integration teams as coordinators cannot be effective,
because the high load of coordination complexity would be a greater barrier
to the effectiveness of the integration process.
Unsupervised problem-solving architectures can cope with problems of
more complexity using a distributed planning approach. These include sys-
tems that use

1. low-level integration teams,
2. multi-agent architectures, and
3. information-intensive architectures.

Systems using low-level integrators and multi-agent architectures correspond
to two decomposition patterns that are recognized by Sosa et al. (2000) as
coordination based and modular. Coordination-based decompositions par-
tition the system into several relatively independent subsystems and only
one (or few) strongly connected subsystem(s), namely, the coordination
block(s) (Figure 5(a)). The identification of a coordination block in a system
can be performed through integer programming (Sosa et al. 2000). An
example of a coordination block can be the microprocessor or CPU of a
computer. If a company decides to design the entire computer with all its

FIGURE 4 Integration team acts as a high-level coordinator.



FIGURE 5 Interactions between design teams of low-level integration scheme (a) and
multi-agent design system (b).

modules and submodules from scratch, most of the parts are affected by the
performance, quality, and characteristics of the CPU chip. The coordination
block is an integrative subsystem and the design team in charge of integrative
subsystem design is regarded as a low-level integration team that can
implicitly coordinate the activities of other teams. Obviously, the design of
the integrative subsystem must be much more complex than the other
subsystems. As such, the integration of complex systems with more than a
certain amount of coupling is not desirable with low-level integration
schemes through coordination-based problem decompositions.
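Sosa et al. (2000) identify coordination blocks with integer programming. As a much cruder stand-in, the sketch below merely flags the most strongly coupled element as a candidate coordination block; the computer example mirrors the CPU discussion above but its names and weights are invented:

```python
def coordination_block_candidate(dsm, names):
    """Degree-based heuristic for spotting a coordination-block candidate:
    return the element with the largest total coupling weight.
    Sosa et al. (2000) solve this properly via integer programming;
    this is only a quick proxy."""
    totals = [sum(row) for row in dsm]  # total coupling of each element
    best = max(range(len(names)), key=totals.__getitem__)
    return names[best], totals[best]

# Hypothetical computer-design DSM: the CPU couples to everything else.
names = ["cpu", "memory", "power", "cooling"]
dsm = [
    [0, 3, 2, 2],
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [2, 0, 0, 0],
]
print(coordination_block_candidate(dsm, names))  # ('cpu', 7)
```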
The interactions between the design teams in multi-agent systems are
autonomous and based on agents’ social knowledge. Multi-agent systems
are a relatively complex field of research. The solution to the design pro-
blems in multi-agent systems is formed in a self-organizing fashion that
emerges as a result of the autonomous interaction of the agents (Figure 5(b));
multi-agent systems correspond to modular problem decomposition.
Information-intensive architectures can also be regarded as multi-agent
systems in which the design teams (or coalitions of agents) have overlapping
boundaries. Information-intensive architectures correspond to overlap
decomposition of products/systems in which subsystems are overlapped
and share some of the design variables with each other. Information-
intensive structures facilitate collaborative design for large-scale and complex
design problems. According to Klein et al. (2003), collaborative design is per-
formed by multiple participants representing individuals, teams, or even
entire organizations, each potentially capable of proposing values for design
parameters and/or evaluating these choices from their own particular
perspectives. For large-scale and severely coupled problems, collaborative
problem solving is possible when the design space or problem space is
decomposed in an overlapped manner: the design teams explicitly share
some of their parameters, problems, and tasks. The main characteristic of this
process model is the intense collaboration between coalitions of agents,
making this mode an information- and knowledge-intensive process (Klein
et al. 2003). The impact of new information on the design process in this
integration scheme is relatively high; as such, overlap decomposition and its
corresponding integration scheme are suitable for problems of high
complexity and self-connectivity.
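Overlap decomposition can be made concrete by listing the variables shared between subsystems, since these are the explicit collaboration points between the design-team coalitions. The blended-wing-body split below is a hypothetical illustration:

```python
from collections import Counter

def shared_variables(partition):
    """Given an overlap decomposition (subsystems may share variables),
    list the variables appearing in more than one subsystem; these are
    the explicit collaboration points between design-team coalitions."""
    counts = Counter(v for members in partition.values() for v in members)
    return sorted(v for v, c in counts.items() if c > 1)

# Hypothetical blended-wing-body split: 'wing_root' belongs to both teams.
partition = {
    "wing_team": ["wing_span", "wing_root", "aerofoil"],
    "body_team": ["fuselage", "wing_root", "cabin"],
}
print(shared_variables(partition))  # ['wing_root']
```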
Following the idea of enlightened engineering (Bar-Yam 2004), we
suggest that for problems or subproblems with high levels of connectivity,
several design groups must work in parallel on the same prob-
lem (which has not been decomposed). The design teams in this situation
compete rather than cooperate with each other. Highly connected layers of
a complex product, as a general rule, are not many. It should be noted that
one should not expect many highly dense layers, or irreducible cores, in a
complex system. Complex systems are usually characterized by a few highly
connected hubs (Marczyk and Deshpande 2006).
Although we do not rule out the utilization of the above integration
schemes, we propose that they must be employed according to their adequ-
acy to cope with problems of higher complexity as shown in Figure 6. We
suggest that the overall team configuration must be determined according
to the complexity of the problem at that layer because ‘‘the productivity of
design teams depends to a large extent on the ability of its members to tap
into an appropriate network of information and knowledge flows’’ (pg. 6,
Kratzer et al. 2004). Thus at each round of the design cycle (a design layer)
the design organization must be, in terms of the team configuration, adapted
to the generated tasks. This is indeed an adaptive coordination scheme that is
needed to direct the design process so that a design solution is sought in a
way that accommodates the team interactions.
Adaptive organization structuring, of course, is not new. For example,
utilizing cross-functional teams that adapt the organization structure to the
task structure has been around for many years (Browning 1999). Also, the
idea of embedding different knowledge-sharing patterns among the design
agents and design teams of one system has been to date considered by sev-
eral other systems researchers (Efatmaneshnik and Reidsema [2009] and
references therein). Giddens, in a sociological context, studied ‘‘the

FIGURE 6 Ranking various integration schemes’ capabilities in coping with complexity.


production and reproduction of the social systems through members’ use of
rules and resources in interaction,’’ which is well known as the theory of
structuration (Desanctis and Poole 1994). DeSanctis and Poole (1994)
adopted Giddens’ theory to study the interaction of groups and organizations
with information technology, denoted as adaptive structuration theory. The
theory essentially deals with the evolution and development of groups and
organizations with observable patterns of relationships and communicative
interaction among the people.

CONCLUSION

The presented model of the design process for complex products is distributed,
evolutionary, adaptive, and simulation based, and it has elements of the spiral
process, such as cross-layer information exchange. We emphasized the role of simulation
as central in managing and planning the design process. The model was
based on the parametric representation of the design at various layers that,
due to its simplicity, is a preferred notation to abstraction levels when dealing
with complex systems. The model uses the low-level parametric knowledge
of the problem to plan the development organization, that is, a bottom-up
approach to structuring the design process. An important research ques-
tion in the field of organization design is how to constitute cross-functional
teams. Focusing on this, we suggested using the low-level knowledge of
the design problem to configure and reshape the high-level organizational
structure. The proposed model of design goes beyond traditional models
by blending engineering design with organization design. This approach comes
close to the dual abilities of preventing the process from spiraling out of
control (or falling into chaos) while at the same time harnessing the emergence
of innovation.

REFERENCES

Aleisa, E. E. and Lin, L. 2008. Abstraction hierarchies for engineering design. World
Academy of Science, Engineering and Technology 45: 585–597.
Archer, L. B. 1984. Systematic method for designers. London: Council of Industrial
Design of London.
Bar-Yam, Y. 2004. Making things work: Solving complex problems in a complex
world. NECSI Knowledge Press, USA.
Boehm, B. 1986. A spiral model of software development and enhancement. ACM
SIGSOFT – Software Engineering Notes 11(4): 14–24.
Browning, T. R. 1999. Designing system development projects for organizational
integration. Systems Engineering 2: 217–225.
Browning, T. R. 2001. Applying the design structure matrix to system decomposition
and integration problems: A review and new directions. IEEE Transactions on
Engineering Management 48(3): 292–306.
Chen, L., Ding, Z., and Simon, L. 2005. A formal two-phase method for decomposition
of complex design problems. Journal of Mechanical Design 127: 184–195.
Chen, L. and Li, S. 2001. Concurrent Parametric Design Using a Multifunctional
Team Approach. Paper presented at the Design Engineering Technical
Conferences DETC’01, Pittsburgh, PA.
Cross, N. 2000. Engineering design methods: Strategies for product design, 3rd ed.
John Wiley & Sons., Ltd., New York.
Desanctis, G. and Poole, M. S. 1994. Capturing the complexity in advanced
technology use: Adaptive structuration theory. Organization Science 5:
121–147.
Edmonds, B. 1999. Syntactic measures of complexity. University of Manchester.
Efatmaneshnik, M. and Reidsema, C. 2009. Immune: A collaborating environment for
complex system design. In Computational intelligence: Collaboration, fusion and
emergence, edited by C. Mumford and L. Jain. Springer-Verlag, Berlin/
Heidelberg, pp. 275–320.
Efatmaneshnik, M., Reidsema, C., Marczyk, J., and Balaei, A. 2009. Immune
decomposition and decomposability analysis of complex design problems with
a graph theoretic complexity measure. In Smart information and knowledge
management, edited by E. Szczerbicki and N. T. Nguyen. Springer-Verlag,
Berlin/Heidelberg, pp. 27–52.
Eppinger, S. D. 1997. A Planning Method for Integration of Large Scale Engineering
Systems. Paper presented at the International Conference on Engineering
Design ICED 97, Tampere, Finland.
Erdi, P. 2008. Complexity explained. Springer-Verlag, Berlin/Heidelberg.
Finger, S. and Dixon, J. R. 1989. A review of research in mechanical engineering
design. Part I: Descriptive, prescriptive, and computer-based models of design
processes. Research in Engineering Design 11: 51–67.
French, M. J. 1985. Conceptual design for engineers. London: Design Council.
Gulati, R. K. and Eppinger, S. D. 1996. The coupling of product architecture and
organizational structure decisions. Cambridge, MA: MIT.
Jones, C. H. 1970. Design methods: Seeds of human futures. New York: John Wiley &
Sons.
Keating, C., Rogers, R., Unal, R., Dryer, D., Sousa-Poza, A., Safford, R., Peterson, W.,
and Rabadi, G. 2003. System of systems engineering. Engineering Management
Journal 15(3): 62.
Klein, M., Sayama, H., Faratin, P., and Bar-Yam, Y. 2003. The dynamics of collabora-
tive design: Insights from complex systems and negotiation research. Concur-
rent Engineering Research & Applications 11(3): 201–209.
Kratzer, J., Leenders, R. Th. A. J., and Van Engelen, J. M. L. 2004. A delicate mana-
gerial challenge: How cooperation and integration affect the performance of
NPD teams. Team Performance Management 10(1/2): 20–25.
Kroll, E., Condoor, S. S., and Jansson, D. G. 2001. Innovative conceptual design.
Cambridge University Press, UK.
Kuras, M. L. 2007. An introduction to complex-system engineering. InterJournal.
Manuscript ID 2006.
Kusiak, A. 1999. Engineering design: Products, processes, and systems. Academic Press,
San Diego, US.
Liu, Y. C., Chakrabarti, A., and Bligh, T. 2003. Towards an ideal approach for
concept generation. Design Studies 24(4): 341–355.
Marczyk, J. 1999. Principles of simulation based computer aided engineering. FIM
Publications, Madrid.
Marczyk, J. and Deshpande, B. 2006. Measuring and Tracking Complexity in
Science. Paper presented at the 6th International Conference on Complex
Systems, Boston, MA.
Maturana, F., Shen, W., and Norrie, D. H. 1999. MetaMorph: An adaptive agent-based
architecture for intelligent manufacturing. International Journal of Production
Research 37(10): 2159–2173.
McCabe, T. J. 1976. A complexity measure. IEEE Transactions on Software Engineer-
ing 2(4): 308–320.
McConnell, S. 1996. Rapid development: Taming wild software schedules. Redmond,
WA: Microsoft Press.
Merry, U. 1995. Coping with uncertainty: Insights from the new sciences of chaos,
self-organization, and complexity. London: Praeger.
Meunier, K. and Dixon, J. R. 1988. Iterative Respecification: A Computational Model
for Hierarchical Mechanical System Design. Paper presented at the ASME Com-
puters in Engineering, San Francisco, CA.
Mihm, J. and Loch, C. 2006. Spiraling out of control: Problem-solving dynamics in
complex distributed engineering projects. In Complex engineered systems.
Mina, A. A., Braha, D., and Bar-Yam, Y. 2006. Complex engineered systems: A new
paradigm. In Complex engineered systems.
Ottino, J. M. 2004. Engineering complex systems. Nature 427: 399.
Pahl, G. and Beitz, W. 1996. Engineering design—a systematic approach. New York:
Springer.
Papalambros, P. Y. 2002. The optimization paradigm in engineering design:
Promises and challenges. Computer-Aided Design 34: 939–951.
Prasad, B. 1996. Concurrent Engineering Fundamentals: Integrated Product and
Process Organisation. Prentice Hall, Upper Saddle River, NJ.
Reidsema, C. and Szczerbicki, E. 2001. A blackboard database model of the design
planning process in concurrent engineering. Cybernetics & Systems 32(7):
755–774.
Ring, J. and Madni, A. 2005. Key Challenges and Opportunities in ‘‘System of Systems’’
Engineering. Paper presented at the IEEE International Conference on Systems,
Man and Cybernetics, Waikoloa, Hawaii.
Ryan, A. J. 2007. Emergence is coupled to scope, not level. Complexity 13(2):
67–77.
Sosa, M. E., Eppinger, S. D., and Rowles, C. M. 2000. Designing Modular and Inte-
grative Systems. Paper presented at the DETC2000 International Design Engin-
eering Technical Conferences and Computers and Information in Engineering
Conference, Baltimore, MD.
Thomke, S. 2001. Enlightened experimentation: The new imperative for innovation.
Harvard Business Review 79(2): 66–75.
Unger, D. W. 2003. New product process design: Improving development response to
market, technical, and regulatory risks. MIT.
Van Alstyne, G. and Logan, R. K. 2007. Designing for emergence and innovation:
Redesigning design. Artifact 1(2): 120–129.
Weber, R. G. and Condoor, S. S. 1998. Conceptual design using a synergistically
compatible morphological matrix. In Proceedings of the 28th Annual Frontiers
in Education, FIE98#1: 171–176.
