A Complex System Engineering Design Model
To cite this article: Mahmoud Efatmaneshnik & Carl Reidsema (2010) A COMPLEX
SYSTEM ENGINEERING DESIGN MODEL, Cybernetics and Systems, 41:8, 554-576, DOI:
10.1080/01969722.2010.520212
INTRODUCTION
1. The number of independent cycles is easily calculated by the formula c(G) = m - n + p, where m is the graph size (number of edges), n is the graph order (number of vertices), and p is the number of connected components, determined by the multiplicity of zero in the eigenvalue spectrum of the Laplacian matrix.
2. Circular causality is in essence a sequence of causes and effects whereby the explanation of a pattern leads back to the first cause and either confirms or changes that first cause (Erdi 2008).
FIGURE 1 GDDI cycle or the distributed model of Reidsema and Szczerbicki (2001).
1. Complexity of products
2. Complexity of CNPD processes
3. Complexity of CNPD organizations
various layers during the design process, as opposed to the single focus of traditional systems engineering. As such, in order to harness massive iteration between different layers of design, we propose that the design must be carried out by virtual design teams that can be formed and dismantled rapidly in response to low-level product structural knowledge. This a priori knowledge can be obtained by simulation. See Efatmaneshnik and Reidsema (2009) for a complete treatment of the issue, where it was argued that the design processes for the various layers may be carried out simultaneously by using IMMUNE, a conceptual decision support system based on blackboard and multi-agent structures.
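IMMUNE itself is described only at a conceptual level in the cited work; the blackboard pattern it builds on can be sketched as follows. All class names, the agents, and the toy requirements-to-sizing chain are invented for illustration:

```python
class Blackboard:
    """Shared workspace that design agents read from and post partial results to."""
    def __init__(self):
        self.data = {}

    def post(self, key, value):
        self.data[key] = value

class Agent:
    """A knowledge source: fires when its inputs are on the board, posts its output."""
    def __init__(self, name, needs, produces, fn):
        self.name, self.needs, self.produces, self.fn = name, needs, produces, fn

    def can_fire(self, board):
        return self.produces not in board.data and all(k in board.data for k in self.needs)

    def fire(self, board):
        board.post(self.produces, self.fn(board.data))

def run(board, agents):
    """Control loop: repeatedly let any ready agent contribute until quiescence."""
    progress = True
    while progress:
        progress = False
        for agent in agents:
            if agent.can_fire(board):
                agent.fire(board)
                progress = True

# Toy design chain: requirements -> concept -> sizing. Note the agents are
# listed out of dependency order; the control loop still converges.
board = Blackboard()
board.post("requirements", {"payload_kg": 10})
agents = [
    Agent("sizer", ["concept"], "sizing", lambda d: d["concept"] + "-sized"),
    Agent("conceptor", ["requirements"], "concept", lambda d: "concept-A"),
]
run(board, agents)
print(board.data["sizing"])  # concept-A-sized
```

The point of the pattern is that agents coordinate only through the shared board, so teams (agents) can be added or removed without rewiring the others, which mirrors the rapid forming and dismantling of virtual design teams described above.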
Generation
The process of generating the design problem, often referred to as conceptual design, is the first stage of the canonical design process. Here we assume that concepts are generated in parametric format, as sets of new design variables. If this is not the case, the concepts must be parameterized by extraction from structural and behavioral representations of the product before proceeding to the next stage. The task of determining the values of the design variables constitutes the lowest-level problem-solving activity. Naturally, the variables generated at the initial abstraction levels are those related to the higher abstraction levels, which encompass the more intrinsic characteristics of the product: its main functions and performance. Likewise, the lower abstraction levels usually contain the variables that describe the detailed functionalities of the product. However, these rules may be relaxed. We believe it is advantageous to think of the variables at the first abstraction level of the hierarchy as a seed upon which the solutions at all other abstraction levels depend heavily. Given this premise, the seed must have a foretaste of the problems at the other abstraction levels. Thus the seed must reflect not only the most important and general functions of the product but also those functions that sit low in the functional decomposition chart and yet have large dependencies on the other functions/attributes of the product.
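Under the illustrative assumption that inter-function dependencies are available as an adjacency structure, the seed-selection heuristic described above (prefer functions that are both important and highly coupled) could be sketched as follows; the scoring rule, the example functions, and their weights are all invented:

```python
def rank_seed_candidates(dependencies, importance):
    """Score each function by combining its stated importance with its
    coupling degree (how many other functions it is related to), then
    rank candidates for the seed abstraction level.

    dependencies: dict mapping function name -> set of related functions
    importance:   dict mapping function name -> importance weight in [0, 1]
    """
    scores = {}
    for func, related in dependencies.items():
        degree = len(related)                         # coupling to other functions
        scores[func] = importance.get(func, 0.0) + degree  # simple additive score
    return sorted(scores, key=scores.get, reverse=True)

deps = {
    "propulsion": {"structure", "power", "control"},
    "power":      {"propulsion", "control"},
    "structure":  {"propulsion"},
    "control":    {"propulsion", "power"},
}
weights = {"propulsion": 1.0, "power": 0.8, "structure": 0.5, "control": 0.6}
print(rank_seed_candidates(deps, weights))
```

A highly coupled, highly weighted function ranks first and would be parameterized at the seed level, even if it sits low in the functional decomposition chart.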
The number of variables/parameters that can be generated at each abstraction level can be regarded as the termination criterion for each layer. More variables obviously implies larger search and solution spaces. The stopping criteria should be managed very carefully, though. To allow synergy between different layers, Liu et al. (2003) proposed that the number of variables at each stage/layer should increase until about the midpoint of the design process, after which the global trend (measured by the number of variables) should be toward convergence (a decrease in the number of variables). Liu et al. (2003) also proposed that, for complex problems, an ideal approach to abstraction would be to carefully specify the number of possible solutions to be considered at the divergent stage of each abstraction level in order to have a global
Simulation
After the generation phase, simulation is required to determine the relationships between the generated parameters. Referring to the process of systematically testing ideas early in new product development (NPD) as "enlightened experimentation" (p. 1), Thomke (2001), in the article "Enlightened Experimentation: The New Imperative for Innovation," argued that simulation technologies increase the number of design breakthroughs by testing a greater variety of ideas in a virtual environment. According to Thomke (2001), "computer simulation doesn't simply replace physical prototypes as a cost-saving measure but it introduces an entirely different way of experimenting that invites innovation" (p. 2). Simulation is key to improving performance and meeting operational requirements while keeping development and production costs, times, and risks sensible for multidisciplinary systems (Marczyk 1999). Monte Carlo simulation is often suggested as a means of establishing a design space because creating high-fidelity simulation models is often expensive. Monte Carlo simulations can digest information obtained from experiments to tune the simulation for higher compatibility with the real system. Monte Carlo simulation requires estimating the conditional probability distribution of every pair of design variables, for example, from design-of-experiments results or from available models of the artefact/problem. Figure 3 shows the typical modules that a simulation engine might contain. Here the simulation is parametric in the statistical sense: it utilizes the parametric model of the problem/artefact. Thus a simulation engine must contain a parameterization module to present the problem in parametric format.
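One way to turn Monte Carlo samples of the design variables into a pairwise dependency table is to estimate sample correlations and threshold them into a binary matrix. This is a hedged sketch: the use of linear correlation and the cutoff value are simplifying assumptions for illustration, not the paper's prescription, and the sampled variables are invented:

```python
import numpy as np

def dsm_from_samples(samples, threshold=0.3):
    """Build a binary dependency table (DSM-like) from Monte Carlo output.

    samples: (n_runs, n_vars) array of simulated design-variable values.
    A pair of variables is marked dependent when the absolute sample
    correlation exceeds `threshold` (an illustrative cutoff).
    """
    corr = np.corrcoef(samples, rowvar=False)   # pairwise linear correlations
    dsm = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(dsm, 0)                    # ignore self-dependence
    return dsm

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(scale=0.1, size=1000)  # strongly coupled to x
z = rng.normal(size=1000)                        # independent of both
samples = np.column_stack([x, y, z])
print(dsm_from_samples(samples))
```

Real metamodel-based dependency extraction would use richer measures (for example, conditional distributions or sensitivity indices), but the output has the same shape: a symmetric table of which variables must be considered together.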
The outcome of simulation is the fitness landscape, or design space, at a given design layer. The fitness landscape is the metamodel of the design variables and their relations. We propose that the metamodels of the product can in effect be used to construct a dependency table between the design variables, known as the design structure matrix (DSM). A design structure matrix is a system modeling and knowledge representation tool that is useful in decomposition and integration (Browning 2001). A DSM shows
FIGURE 3 Simulation engine module and its integration in the problem-solving procedure.
Decomposition
According to Papalambros (2002), decomposition of large-scale design
problems allows for
For a PDSM of order n, the maximum possible number of connections between the design variables is

K = n(n - 1)/2    (1)
If the self map of the PDSM has k connections (edges), we define the problem connectivity as

p = k/K    (2)
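Equations (1) and (2) translate directly into code; a minimal sketch for a symmetric binary PDSM, where the example matrix is invented:

```python
import numpy as np

def problem_connectivity(pdsm):
    """Problem connectivity p = k / K.

    k: number of connections (edges) in the symmetric binary PDSM,
       counted once per pair via the strict upper triangle.
    K: maximum possible connections, n(n - 1)/2 for n variables (Eq. 1).
    """
    A = np.asarray(pdsm)
    n = A.shape[0]
    K = n * (n - 1) // 2          # Eq. (1)
    k = int(np.triu(A, 1).sum())  # actual connections, each pair counted once
    return k / K                  # Eq. (2)

pdsm = np.array([[0, 1, 1, 0],
                 [1, 0, 0, 1],
                 [1, 0, 0, 0],
                 [0, 1, 0, 0]])
print(problem_connectivity(pdsm))  # 3 edges out of 6 possible -> 0.5
```

p ranges from 0 (fully decoupled variables) to 1 (every variable coupled to every other), which makes it a convenient normalized measure when comparing decompositions.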
Integration
Integration combines the partial solutions of a large problem (Reidsema and Szczerbicki 2001). For complex problems and complex problem-solving environments, integration is a key challenge for problem solvers. For complex systems, due to the coupling between distributed tasks, integration may not be performed linearly, simply by adding the partial solutions together. Because coupled problems tend to be nonlinear (much like coupled differential equations), solutions may not be achieved by the usual concurrent planning, which sums the partial solutions to obtain the overall solution. This nonlinearity limits the kind of knowledge that can be used for planning.

Integration within the design process can be conducted by two major methods:

1. Supervised integration
2. Unsupervised integration
FIGURE 5 Interactions between design teams of low-level integration scheme (a) and
multi-agent design system (b).
modules and submodules from scratch, most of the parts are affected by the performance, quality, and characteristics of the CPU chip. The coordination block is an integrative subsystem, and the design team in charge of integrative subsystem design is regarded as a low-level integration team that can implicitly coordinate the activities of the other teams. Obviously, the design of the integrative subsystem must be much more complex than that of the other subsystems. As such, for complex systems with more than a certain amount of coupling, integration through low-level integration schemes and coordination-based problem decompositions is not desirable.
The interactions between the design teams in multi-agent systems are autonomous and based on the agents' social knowledge. Multi-agent systems are a relatively complex field of research. The solution to design problems in multi-agent systems is formed in a self-organizing fashion that emerges as a result of the autonomous interaction of the agents (Figure 5(b)); multi-agent systems correspond to modular problem decomposition.

Information-intensive architectures can also be regarded as multi-agent systems in which the design teams (or coalitions of agents) have overlapping boundaries. Information-intensive architectures correspond to overlap decomposition of products/systems, in which subsystems are overlapped and share some of their design variables with each other. Information-intensive structures facilitate collaborative design for large-scale and complex design problems. According to Klein et al. (2003), collaborative design is performed by multiple participants representing individuals, teams, or even entire organizations, each potentially capable of proposing values for design parameters and/or evaluating these choices from their own particular perspectives. For large-scale and severely coupled problems, collaborative problem solving is possible when the design space or problem space is decomposed in an overlapping manner: the design teams explicitly share some of their parameters, problems, and tasks. The main characteristic of this process model is the intense collaboration between coalitions of agents, making it an information- and knowledge-intensive process (Klein et al. 2003). The impact of new information on the design process in this
CONCLUSION