Computational Design and Digital Manufacturing: Panagiotis Kyratsis Athanasios Manavis J. Paulo Davim Editors
Panagiotis Kyratsis
Athanasios Manavis
J. Paulo Davim Editors
Computational Design and Digital Manufacturing
Management and Industrial Engineering
Series Editor
J. Paulo Davim, Department of Mechanical Engineering, University of Aveiro,
Aveiro, Portugal
This series fosters information exchange and discussion on management and industrial engineering and related aspects, namely global management, organizational development and change, strategic management, lean production, performance management, production management, quality engineering, maintenance management, productivity improvement, materials management, human resource management, workforce behavior, innovation and change, technological and organizational flexibility, self-directed work teams, knowledge management, organizational learning, learning organizations, entrepreneurship, sustainable management, etc. The series provides discussion and the exchange of information on principles, strategies, models, techniques, methodologies and applications of management and industrial engineering in the field of the different types of organizational activities. It aims to communicate the latest developments and thinking in what concerns the latest research activity relating to new organizational challenges and changes world-wide. Contributions to this book series are welcome on all subjects related to management and industrial engineering. To submit a proposal or request further information, please contact Professor J. Paulo Davim, Book Series Editor, pdavim@ua.pt
Panagiotis Kyratsis · Athanasios Manavis · J. Paulo Davim
Editors
Computational Design and Digital Manufacturing
Editors
Panagiotis Kyratsis
Department of Products and Systems Design Engineering
University of Western Macedonia
Kozani, Greece

Athanasios Manavis
Department of Products and Systems Design Engineering
University of Western Macedonia
Kozani, Greece
J. Paulo Davim
Department of Mechanical Engineering
University of Aveiro
Aveiro, Portugal
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The present book intends to cover a variety of topics that deal with computational design and digital fabrication. We aimed to follow an approach that combines the design process using high-end digital tools with manufacturing based on new technologies. We have managed to incorporate a number of directions, i.e., CAD-based design automation and applications, parametric and algorithmic design, generative design, additive manufacturing and prototyping, product design applications and methodologies, modern machining, materials, manufacturing automation, topology optimization, augmented reality, and mechatronics.
A series of state-of-the-art chapters was gathered, dealing with the following areas. The first chapter “Generative Design in Architecture: From Mathematical Optimization to Grammatical Customization” provides a methodological overview of generative design in architecture. It highlights the commonalities among mathematical optimization methods for topology optimization, shape optimization, and agent-based design games. The second chapter “Augmented Reality-Driven
Prototyping for Error Elimination During Computational Design and Digital Fabri-
cation” proposes a methodology for small-scale prototyping using computational
design and digital fabrication by applying augmented reality. Its aim is to eliminate
the errors that appear in each case. The third chapter “Design of Gesture-Controlled
Interface for Mechatronic Systems: A Computational Approach” focuses on applying
a computational approach when designing and developing an interface for interacting
with a mechatronic system through gestures. Machine vision techniques are used for
recording and locating the gesture through an image capture system. The fourth chapter “Topology Optimization Utilizing Density-Based Approach for Additive Manufactured Components: A Case Study of an Automotive Brake Caliper” applies the principles and methods of topology optimization to a novel case study: the design of a topology-optimized brake caliper intended to be manufactured using additive manufacturing (design for additive manufacturing).
The fifth chapter “Rethinking the Brick: Developing a File to Fabrication Frame-
work for Mortar-Free, Robotic Masonry Wall Assembly” deals with the creation of a
file-to-fabrication framework for mortar-free robotic assembly of masonry walls. It
focuses on the relationship between the individual brick geometry and the complete
structure under consideration. The sixth chapter “Knowledge-Based Design: A Func-
tion-Knowledge Reasoning Model for Product Conceptual Design” uses a function-
knowledge reasoning model for product conceptual design that constructs a mapping
relationship between subfunctions after decomposition and granular clustered knowl-
edge. It focuses on knowledge-based product conceptual design that helps identify
critical issues and generates new conceptual solutions through configuration.
The seventh chapter “Equation Driven Micro-Milling of 2D Free Form Models and Off-Line G-code Generation for Variable Feed Machining” deals with spiral-type and micro-gear 2D freeform models that are parametrically designed and micro-milled out of brass, achieving improved surface quality, machining time, and process efficiency. The eighth chapter “Study of the Topography of Face Milled Surfaces Using CAD-Based Simulation” presents a simulation model, implemented on a CAD-based simulation platform, that deals with improved machining. The outcome includes
the resulting surface topography as well as the surface roughness metrics. The ninth
chapter “Study on Design and Manufacturing of an Engine Block Using Digital Tools” presents the implementation of digital tools in designing and manufacturing a lightweight four-stroke engine block for kart competition. The tenth
chapter “Automatization of CAD Model Development of Slewing Bearing Using Solid Edge™” presents a novel approach to CAD-based automated development of slewing bearings: 2D sketches and their dimensions are linked to a lookup table, resulting in the automatic creation of the 3D solid models, and all the appropriate operations are linked to the application developed for this purpose. The eleventh chapter “CAD-Based Application in VBA for Tool’s Profiling” presents a Visual Basic for Applications (VBA) application that profiles tools such as rack-gear tools, gear-shaped cutting tools, and rotary cutters, implementing the “virtual pole” method.
The editors acknowledge the aid of Springer Publications and express their grati-
tude for this opportunity and for their professional support. The editors also express
their gratitude to all the chapter authors for their availability and for delivering their
high-quality research work.
Generative Design in Architecture: From Mathematical Optimization to Grammatical Customization
1 Introduction
What is generative design and why does it matter? What problem can it solve for society? This chapter sets out to answer these questions while providing a technical overview of the most important generative design processes at the frontier of the state of the art of generative design technologies. Here, we provide a brief definition of generative design processes given their applications. As shall be illustrated, contrary to common belief, generative design processes are not necessarily meant to automate design processes, except for generative models used for [procedural] content generation in the games and entertainment industry.
One of the main contributions of this piece is to show the commonalities of three
distinct sorts of generative design processes (mathematical derivation, grammatical
itemization, and gamified exploration) in terms of similarities in their methodologies
(algorithms) and representations (data structures). However, the clear distinction
between applications of generative design to design derivation, design customization,
and participatory design problems is key for avoiding the commonplace reduction
of computational design to design automation.
In what follows, we start with a brief introduction to the generative design spectrum; revisit three types of computational design problems (optimization, diversification, and consensus-building) and their inherent complexities; take a glance at the key role of discrete design representation in generative design; revisit some basic definitions and terminological matters essential to understanding the generative design paradigm; and then present in depth three types of generative design methodologies, illustrated in Fig. 1.
The theoretical underpinnings of practice in these three lines of work can be traced
to the core concepts of design sciences [1] and the paradigm of design as a matter of
“rational problem solving” [2, 3].
The mathematical derivation methods can be traced back to topology optimiza-
tion [4] and algebraic shape optimization (e.g. through the Force Density Method
introduced by Schek [5]). Grammatical itemization methodologies can be traced
back to the linguistic analogy in space syntax [6], pattern language [7], and shape
grammars [8], all of which seem to have been inspired by Chomsky’s seminal book
on syntax and formal grammars [9]. Later, Lindenmayer systems [10] and von Neumann's cellular automata, especially those in the style of the [Solitaire] Game of Life of John H. Conway (see a recent compendium [11]) and Stephen Wolfram [12], as well as the ideas of William J. Mitchell [13], Jonathan Cagan [14], and Kristina Shea [15, 16], enhanced the grammatical generative design literature and emboldened
it as an approach for promoting combinatorial design creativity. The idea of gami-
fied exploration as a form of generative design can be traced back to multiple ideas,
namely using multi-actor systems in search of a satisfactory equilibrium through
group decision-making and multi-criteria decision analysis introduced by Herbert
A. Simon [17], the notion of generative sciences put forward by Joshua Epstein [18]
(i.e. complexity sciences driven by simulations), and the idea of Markovian design
machines introduced by Michael Batty [19]. The particular ideas of building “design
games” and “configurators” can be traced back, respectively, to the works of Henry
Sanoff [20] and Yona Friedman [21].
Generative design is a paradigm in computational design rooted in the paradigm of design as rational problem solving based on the ideas of Herbert A. Simon; q.v. the theoretical discussion of design paradigms by Kees Dorst [2, 22].
As an adjective, the term “generative” historically originates from the idea of gener-
ative grammars by Chomsky, on the one hand, and the notion of generating (deriving
or discovering) a definite spatial design (typically for a mechanical structure) from
given design requirements using topology optimization or shape optimization (a.k.a.
form-finding) on the other hand. While the former sense (the grammatical defini-
tion of generative design) has been well-known in the context of architecture since
at least the 1980s, the latter (i.e. the derivational definition of generative design
based on gradients or partial differentials a.k.a. sensitivities, particularly in topology
optimization) has been known in mechanical and structural engineering since the
late 1990s (while differential/numerical shape optimization can be traced back to the 1970s, e.g. the force density method by Schek [5], or even much earlier, to the nineteenth century, if we consider physical or geometric shape optimization methods).
4 P. Nourian et al.
Inter alia, an explicitly navigable picture of the design space (and its dual performance space, a.k.a. performance landscape) is key to a designer's ability to explain and justify design decisions.
Two dual problems can be identified in this regard, revealing the complexity of design endeavours from a mathematical stance: the problem of mapping the multiplex associations of many design choices to their few integral consequences (dubbed here the primal evaluation problem or, in a manner of speaking, the mapping problem), and the more challenging problem of figuring out which choices must be made to attain a specific multitude of performances or quality levels regarded as the eventual consequences of the design choices (dubbed here the dual design problem or the navigating problem) (see Fig. 2).
The navigability of the design spaces requires the explicit modelling of the rela-
tions between one design alternative and another. Such relations can be referred to as
design updates (based on gradient descent), design rules (as in grammatical rules)
or design moves (as in moves of a game) defining how one can get from one design
alternative to another. Navigability also implies that we can have a rigorous sense of
topology in our design space as to which we can measure similarity or dissimilarity
between designs, depending on the number of operations, rules, or moves that set
them apart. In other words, generative design processes are inherently topological
workflows. The importance of this explicit notion of similarity can be understood
by considering the key role of two sorts of similarities between designs that would
provide for clustering or manifold mapping of design spaces for simplifying cogni-
tive decision-making for human designers: similarity between designs in the high-
dimensional design [decision] space and similarity between designs in terms of the
outcomes of interest in the performance space (referred to as the vectors of choices
and consequences in Fig. 2).
Three archetypical generative design systems are identified here to define the poles
of a broad spectrum (as introduced in Fig. 1):
• Design Equations: Design optimization through mathematical derivation, in
search of optima, exemplified in topology optimization and shape optimization
processes;
• Design Grammars: Design customization through grammatical itemization, in
search of idiomata, exemplified in formal grammars, L-systems, and generative
grammars (for shape rewriting or graph rewriting);
• Design Games: Participatory design through gamified exploration, in search of
[Nash] equilibria or another notion of satisfaction of multi-actor multi-criteria
decision analysis problems.
While all of these approaches are feed-forward and gradual as compared to black-
box feedback optimization, it can be noted that the method of reasoning in grammat-
ical design itemization is akin to forward induction and that the method of reasoning
in mathematical design derivation is akin to backward induction (i.e. working our
way backwards from the target objective(s), converging to a particular vector of decision variables, as opposed to the divergent expansion of the design catalogue in the grammatical or rule-based itemization methods).
A similar comparison can be made in terms of the depth-first-search and breadth-
first-search methods being analogous respectively to the design space navigation
methods in mathematical design derivation and grammatical design itemization. In
our proposed framework, gamified design exploration methods can exploit both types
of navigation mechanisms (hereinafter referring to search and induction processes).
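To make the search analogy concrete, consider a toy discrete design space in which designs are tuples of modules and each rule appends a module. The sketch below (entirely our own illustrative construction, not from the chapter) contrasts breadth-first cataloguing, as in grammatical itemization, with a greedy depth-first descent guided by a cost, as in mathematical derivation:

```python
from collections import deque

# A toy "design" is a tuple of modules; each rule is one design move.
rules = [
    lambda d: d + ("A",),  # rule 1: append module A
    lambda d: d + ("B",),  # rule 2: append module B
]

def neighbours(design):
    """All designs reachable by applying a single rule."""
    return [rule(design) for rule in rules]

def bfs_catalogue(start, depth):
    """Grammatical itemization: breadth-first expansion of a design catalogue."""
    catalogue, frontier = [start], deque([(start, 0)])
    while frontier:
        design, d = frontier.popleft()
        if d == depth:
            continue
        for nxt in neighbours(design):
            catalogue.append(nxt)
            frontier.append((nxt, d + 1))
    return catalogue

def dfs_descent(start, cost, steps):
    """Mathematical derivation: greedily follow the locally best move,
    analogous to descending a gradient on a decision tree."""
    design = start
    for _ in range(steps):
        design = min(neighbours(design), key=cost)
    return design

catalogue = bfs_catalogue((), 3)                    # 1 + 2 + 4 + 8 = 15 designs
best = dfs_descent((), lambda d: -d.count("A"), 3)  # a single path through the tree
```

The breadth-first process diverges (the catalogue grows exponentially with depth), whereas the cost-guided depth-first process converges on one design, mirroring the distinction drawn above.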
Even though these three types of generative design methods cover the main applications of generative design, a much newer, rapidly developing technology with excellent potential for performance-based generative design is yet to be explored: generative models. Towards the end of the chapter, we shall reflect
on the utility of generative models for augmenting performance-based generative
design with artificial intelligence.
2 Theoretical Background
logical leap of design [28]. This leap refers to the difficult, unexplainable, and often
unjustifiable transition that can be considered as the intuitive or creative synthesis
process that appears to an external observer as jumping to a conclusion or solution
without even explicitly formulating the problem. This gap of reasoning, concerning
the dichotomy of the form and function of a design, can also be seen in the descriptive
function-behaviour-structure (FBS) design framework by John Gero et al. [29].
This ontological framework identifies some typical representations, from the most abstract design requirements (R) to the most concrete eventual design description (D), and the typical actions leading back and forth among four intermediary representations of a design process: a process of “formulation” of the function (F) and the expected behaviour [performance] (Be) of the system; a process of “synthesis” leading to the proposed/designed structure [form] (S); a process of “analysis” revealing the actual behaviour of the structure (Bs); a process of “evaluation” comparing the expected behaviour with the actual behaviour analysed (simulated) on the structure, to assess how appropriate the structure proposed in a design iteration is; followed by a process of “documentation” detailing the proposed structure [form] into a design description (D). The description also contains three
“reformulation” processes as to which design processes are sometimes referred to as
“co-evolution of problems and solutions” [30].
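For reference, the FBS processes described above can be tabulated in a small data structure (our own rendering of Gero's framework; the labels follow the text):

```python
# Representations in the FBS framework (after Gero): R = requirements,
# F = function, Be = expected behaviour, S = structure (form),
# Bs = actual (analysed) behaviour, D = design description.
fbs_processes = {
    "formulation":   ("R",  "F, Be"),  # requirements -> function & expected behaviour
    "synthesis":     ("Be", "S"),      # expected behaviour -> proposed structure
    "analysis":      ("S",  "Bs"),     # structure -> actual (simulated) behaviour
    "evaluation":    ("Be", "Bs"),     # compare expected against actual behaviour
    "documentation": ("S",  "D"),      # structure -> design description
}

for process, (source, target) in fbs_processes.items():
    print(f"{process:13s}: {source} -> {target}")
```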
Echoing the genuine appeal of Yona Friedman’s “Towards a Scientific Architec-
ture”, we argue that the “Sciences of the Artificial” [31] in their plurality, as the
decision sciences focused on how to change the current state of things and environ-
ments towards better states, are key to such a scientific transition. Contrary to the
reductionist connotation of the term scientific, the non-reductionist notion of “gener-
ative sciences”[18] takes it for granted that the matter of mapping a myriad of choices
to a few consequences is an endeavour in the realm of complexity sciences that can
be dealt with through utilizing simulations, be it simulations based on first princi-
ples often encapsulated in partial differential equations, stochastic or deterministic
simulations of design moves on a decision tree of grammatical rules (exhaustive-
enumerative search for cataloguing or probabilistic approaches such as the wave
function collapse [32]), agent-based simulations, Markov chains, or Markov decision
processes, as well as simulation games that can reveal the emergence of “collective
intelligence” [33].
3 Framework
While there is no general way of proving that a design is the best it could possibly
be, generative design can at least bridge the logical leap of design by methodically
structuring and mapping the design space and its dual performance space, starting
with defining the requirements, defining valid or meaningful design moves (deci-
sions leading to an identifiable change in the discrete state of a design from one to
another), setting explicit quality criteria, and identifying alternatives in the design
space through a systematic navigation process to satisfy the set quality criteria in three
distinct problem settings (optimization, customization, or participation problems as
explained in the Introduction). A generative design framework must explicitly iden-
tify the “purpose” of design or the nature of the design problem, as to which we have
identified three types of purposes, namely: reaching a unique optimum, producing
a plethoric catalogue of idiomatic designs, or reaching a fair/equitable equilibrium
in a multi-actor participatory design process. The framework presented here is an
extension and elaboration of the GoDesign framework for generative design [26].
However, one can trace back the roots of this inherently discrete generative design
framework to the configurational studies of pioneers of computational design such as
Lionel March, Philip Steadman and Raymond Matela on discrete design and configurations with Polyominos, and William J. Mitchell's discrete grammatical approach
[34, 35].
The most obvious common feature of the three types of generative processes rein-
troduced here is their discretized design spaces. While this discreteness is not neces-
sarily carried over to the actual construction of the designed objects (e.g. in topology
optimization of small objects), in the relatively large scale of buildings, the discrete
design variables can correspond one to one to the actual building blocks. Observe
that buildings are too large to be made as monolithic objects, and as such their construction design most probably requires some sort of segmentation (polygonization and polyhedralization). In the discrete architecture paradigm [36], discreteness or
jaggedness of the results is not regarded as a limitation but embraced as a digital way
of building as an aggregation of discrete modules. Discretization not only brings about
advantages for the construction process but also, as a mathematical proviso, provides
a discrete notion of an array of design variables that is key to generative design. As
can be seen in the context of the three archetypical generative design processes, such
similarity of designs in terms of their inner structure, i.e. the similarity of their combi-
nations of choices, is typically harder and depends on the extent of explication of
a design space. In the mathematical sense of topology optimization, the similarity
between decision variables will be a matter of defining a similarity metric, such as an inner product between high-dimensional arrays of density-like decision variables (typically in the form of x ∈ (0, 1]^n). In the most abstract case, when the decisions
are all completely discrete, i.e. in the grammatical design sense, the notion of simi-
larity between graphs or meshes is purely discrete and pertains to the number of
rules that need to be applied to get from one design to another (the number of design
moves or operations in a decision graph). Describing similarity between design states
in tile-based design games is similarly explained in [38].
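As an illustrative sketch (the metric choices and toy designs are ours): the continuous notion of similarity can be computed as a cosine, i.e. a normalized inner product, between density vectors, while the discrete notion counts the fewest rule applications between two designs via breadth-first search:

```python
import math
from collections import deque

def density_similarity(x, y):
    """Cosine (normalized inner product) similarity of density-like vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norms = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norms

def rule_distance(start, goal, rules, max_depth=6):
    """Fewest rule applications (design moves) taking one design to another."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        design, d = frontier.popleft()
        if design == goal:
            return d
        if d < max_depth:
            for rule in rules:
                nxt = rule(design)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
    return None  # not reachable within max_depth moves

# Continuous case: two density fields over the same cell grid.
s = density_similarity((1.0, 0.2, 0.9), (0.9, 0.1, 1.0))

# Discrete case: designs as frozensets of placed modules; each rule adds one.
rules = [lambda d, m=m: d | {m} for m in ("a", "b", "c")]
moves = rule_distance(frozenset(), frozenset({"a", "b"}), rules)  # 2 moves apart
```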
incorporate the gradients in their formulation, such as the force density method
[39]).
However, not all the qualities and constraints are easy to formulate explicitly
as analytical functions of decision variables. The frontier challenges in the art and
science of generative design can be directly bordered by the extent to which one can
formulate the objectives and constraints as functions of the decision variables. In other
words, emblematically speaking, the central question of generative design is how to
navigate in the realm of decision variables with a compass informed by the polari-
ties in the realm of objectives and constraints. In cases where subjective opinions,
intangible cultural values, and aesthetics, are to be considered, we eventually have to
rely on human judgement for the evaluation process, whether directly by involving
humans in the loop or by training neural networks as “function approximators” using
AI.
Architectural design is almost always concerned with multiple validity constraints
and quality criteria as to which the alternatives need to be assessed. Often, such
criteria are not commensurate due to their different physical dimensions. Before
jumping to the conclusion that architectural design is about multi-objective optimiza-
tion, we need to observe that in the mathematical sense, speaking of the optimum
beyond a single objective can be rather absurd in most cases. Instead, in the general
case, the endeavour is directed at finding a satisfactory design that can be char-
acterized either as a hierarchically dissected chain of multiple optimization prob-
lems, a diversification problem concerned with the enumeration of valid designs for
customization (that is not at all about optimization), or satisfaction of decision-
making criteria in a multi-actor and/or multi-criteria setting using multi-criteria
decision analysis (MCDA).
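As a minimal MCDA illustration (the criteria, weights, and values are invented for the example, and weighted sums are only one of many MCDA instruments): min–max normalization makes incommensurate criteria comparable before aggregation:

```python
def normalize(values, benefit=True):
    """Min-max normalize one criterion so incommensurate units become comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if benefit else [1.0 - s for s in scaled]

def weighted_sum(alternatives, weights, benefit_flags):
    """Score design alternatives (rows) across criteria (columns)."""
    columns = list(zip(*alternatives))
    norm_cols = [normalize(col, flag) for col, flag in zip(columns, benefit_flags)]
    return [sum(w * v for w, v in zip(weights, row)) for row in zip(*norm_cols)]

# Three alternatives scored on daylight (a benefit) and on cost and
# structural compliance (both to be minimized).
alternatives = [(0.8, 120.0, 3.2), (0.6, 90.0, 2.1), (0.9, 150.0, 4.0)]
scores = weighted_sum(alternatives, (0.5, 0.3, 0.2), (True, False, False))
chosen = scores.index(max(scores))  # the satisficing alternative under these weights
```

Note that the weights encode a subjective prioritization by the actors involved; changing them changes the chosen alternative, which is precisely why such problems are about satisfaction rather than a single objective optimum.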
As stated in our definition, generative design is all about confidently navigating design
spaces based on a possibly partial but explainable map of the associations between
design choices and their performance consequences. Specifically, however, the navi-
gation strategies differ from one type of generative design to another, primarily
because the nature of the design spaces can be very different from one type to another.
Considering the most general purpose of design processes in architecture and struc-
tural design as forming a particular material distribution or spatial arrangement, we
can observe that discrete design is practically about making changes in some meshes
(hyper-graphs containing 0D vertices, 1D edges, 2D faces, and possibly 3D cells) in
the general form of M = (V, E, F) for discrete 2D manifolds or M = (V, E, F, C) for discrete 3D manifolds. Without loss of generality, in the most common cases,
the faces and cells are respectively triangular–tetrahedral or quadrilateral–hexahe-
dral. Generative design can be described in terms of changes that can be made on
such meshes in terms of the positions of vertices (“shape changes”), attributes such as colours (“data changes”), and connectivity (“topology changes”), as illustrated in Fig. 4.
Fig. 4 Design space navigation for a mesh: changing shape (form), connectivity (topology), and
colour (data)
Fig. 5 Spectrum of design space navigation methodologies and the essential mathematics of
generative design
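The three kinds of changes named in Fig. 4 (shape via vertex positions, topology via connectivity, and data via colour) can be mirrored in a minimal mesh structure. The following is a sketch with our own naming; production mesh kernels use half-edge or comparable structures instead:

```python
class Mesh:
    """Minimal M = (V, E, F): vertex positions, derived edges, coloured faces."""

    def __init__(self, vertices, faces):
        self.V = list(vertices)                            # positions (x, y, z)
        self.F = [list(face) for face in faces]            # faces as vertex indices
        self.colour = {i: 1 for i in range(len(self.F))}   # data attached to faces

    @property
    def E(self):
        """Edge set derived from the face boundaries."""
        edges = set()
        for face in self.F:
            for a, b in zip(face, face[1:] + face[:1]):
                edges.add(tuple(sorted((a, b))))
        return edges

    def move_vertex(self, i, position):   # shape change (form)
        self.V[i] = position

    def recolour_face(self, f, value):    # data change (colour)
        self.colour[f] = value

    def remove_face(self, f):             # topology change (connectivity)
        self.F.pop(f)
        self.colour = {i: 1 for i in range(len(self.F))}  # naive re-indexing

# Two triangles sharing an edge: 4 vertices, 5 edges, 2 faces.
m = Mesh([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], [[0, 1, 2], [0, 2, 3]])
```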
In this section, we explain the foundations of the generative design processes that can
be categorized as being mathematical as to their commonalities in terms of derivation
of forms (topologies and geometries, a.k.a. configurations and shapes).
These methodologies are known as topology optimization and shape optimization
in the computational design literature. In this sense, however, the implication of the
term optimization goes beyond fine-tuning a design or haphazardly generating combi-
nations of parameters to create forms in a genetic or evolutionary fashion. Instead,
the search processes characterized as shape and topology optimization are directly
navigated by the gradients (a.k.a. sensitivities) of the cost (or benefit/utility) func-
tions for converging to forms that have some measurable and differentiable minimal
or maximal properties, and so it is fair to call such approaches form-finding
approaches. Even though the terms shape optimization and topology optimization
originate from structural engineering and refer to notions of performance related
to minimal compliance or minimal materialization of structures under certain load
cases, we can see that it is possible to generalize their definitions to other domain-specific notions of performance, e.g. passive climatic design principles (governing
equations from geometric optics) or even environmental psychology (insofar as the
governing equations can be formulated in terms of the decision variables). We shall
see in detail that these approaches are most suitable for convergent search in a design
space equipped with a clear notion of performance or quality.
Here, we provide two compact and general definitions of these processes:
Shape Optimization Problem: Given a network (graph) or a mesh (hyper-graph)
with fixed topology, it is desired to find a valid embedding (i.e. 3D vertex positions)
for the network/mesh that would minimize a cost function describing how costly
(uncomfortable) the shape is, possibly subject to multiple constraints of validity. The
general idea is to formulate a so-called [virtual] energy function (typically explainable
as a Dirichlet energy functional) that measures the extent to which a shape is far from
being relaxed, optimal, or desirable.
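A minimal numerical illustration of this relaxation idea (our own toy setup with uniform weights, a linear special case related in spirit to Laplacian and force-density methods): iteratively moving each free vertex to the mean of its neighbours decreases a discrete Dirichlet-style energy, here taken as the sum of squared edge lengths:

```python
def dirichlet_energy(positions, edges):
    """Discrete Dirichlet-style energy: sum of squared edge lengths."""
    return sum(sum((positions[a][k] - positions[b][k]) ** 2 for k in range(3))
               for a, b in edges)

def relax(positions, edges, fixed, iterations=100):
    """Move each free vertex to the mean of its neighbours; with uniform
    weights each local update minimizes the energy above locally."""
    pos = [list(p) for p in positions]
    nbrs = {i: [] for i in range(len(pos))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iterations):
        for i, neighbours in nbrs.items():
            if i in fixed or not neighbours:
                continue
            pos[i] = [sum(pos[j][k] for j in neighbours) / len(neighbours)
                      for k in range(3)]
    return [tuple(p) for p in pos]

# A 3-vertex chain with fixed endpoints: the free middle vertex relaxes
# onto the midpoint of the two boundary vertices.
P = [(0.0, 0.0, 0.0), (0.3, 2.0, 0.0), (1.0, 0.0, 0.0)]
E = [(0, 1), (1, 2)]
relaxed = relax(P, E, fixed={0, 2})
```

The fixed vertices play the role of boundary conditions: without them the energy minimizer would trivially collapse all vertices to a single point.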
Topology Optimization Problem: The core idea of topology optimization is about
solving a problem in this general form to find a [black & white] colouring of a
tessellated cell space that would minimize a cost function. The cell colours are
eventually supposed to be integer labels determining whether a cell is in or out
of the ultimate form. The white-coloured cells are effectively removed from the
final configuration. So, the topology of the final configuration effectively alters as
compared to the initial super-graph that is dual to the cellular tessellation of the
design domain, hence the name topology optimization.
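A deliberately simplified caricature of the density-based idea (our own toy, not SIMP or any specific published formulation): continuous cell densities are driven by a cost gradient under a volume budget, then thresholded into a black-and-white layout whose white cells are removed:

```python
def topo_optimize(load, volume_fraction, steps=200, lr=0.1):
    """Toy density-based layout: cells with a high 'load' attract material,
    while a volume budget pushes material away from low-load cells."""
    n = len(load)
    budget = volume_fraction * n
    x = [volume_fraction] * n                       # uniform grey start
    for _ in range(steps):
        # For cost = sum_i load_i * (1 - x_i), the descent direction is +load_i.
        x = [xi + lr * li for xi, li in zip(x, load)]
        x = [min(1.0, max(0.0, xi)) for xi in x]    # box constraint 0 <= x <= 1
        total = sum(x)
        if total > budget:                          # project onto volume budget
            x = [xi * budget / total for xi in x]
    return [1 if xi >= 0.5 else 0 for xi in x]      # threshold to black & white

# Eight cells with load concentrated in the middle and a 50% material budget:
# material survives where the load is; white (0) cells would be removed.
layout = topo_optimize([0.0, 0.1, 0.9, 1.0, 1.0, 0.9, 0.1, 0.0], 0.5)
```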
The history of structural shape and topology optimization can arguably be traced
back some 150 years (see this literature review [42]) but providing an exact timeline
or a comprehensive literature review in this regard falls out of the scope of this
book chapter. In this section, we briefly introduce some critical references to the
existing works on [mostly] structural shape optimization and topology optimization,
specifically those of particular interest in terms of generality and applicability in
the AEC. The generalization of structural optimization approaches to architectural
design remains a challenge that would go beyond the scope of this summary. For
the sake of brevity, as well as our focus on explainability, scalability, reproducibility, and justifiability of generative design methods, we shall restrict our attention to the form-finding or mathematical design derivation approaches that are explicitly and rigorously led by gradients, and shall only hint at alternative approaches such as metaheuristic methods or integer programming, which appear more often in topology optimization than in shape optimization.
Shape Optimization
In the context of discrete generative design, shape optimization refers to methods for
optimizing (typically minimizing) an energy function that takes the shape of a mesh as its input. Thus, the objective function is often in the form of a Dirichlet
energy integral. Such an integral functional, which is typically based on a partial
differential equation governing the dynamics of the movement of the mesh vertices,
can be written as a discrete integral or the sum of magnitudes of the very forces that
could be considered as the gradients of the input function (mostly the positions of
vertices) over the mesh elements (edges, faces, or cells). This generalized defini-
tion points to two types of approaches for solving the shape optimization problem:
those concerned with analytically solving the governing equations in the strong form
(differential form) and thus characterizing the optimality criteria, in which typically
the Laplacian (see two prominent introductions by Olga Sorkine [43] and Bruno
Levy [44]) or generally the Kirchhoff Stiffness matrix operators (see a thorough
mathematical introduction [45]) are used to discretize the governing equations, the
classical example of which is the force density method by Hans-Jörg Schek [39];
and those approaches that can be loosely described as being based on the weak
form (integral form), focused on numerical integration of the energy functional and
iterative updating of the discrete differentials, the classical example of which is the
dynamic relaxation method by Barnes [46, 47]. While the former has the elegance
and appeal of a purely mathematical method, the latter has the flexibility of dealing
with nonlinear optimization problems. For a more detailed comparison of these two
approaches in a broader sense, see a comprehensive literature review [48] with a
thorough mathematical formulation and collation of the methods. While the force
density method already refers to generalizations for possibly nonlinear constrained
problems, it is noteworthy that general methods such as the shape-up method of
constraint projections by Bouaziz et al. [49–51] or the thrust network analysis by
Block and Ochsendorf [52] have been introduced for interactive and constrained
form-finding. Additionally, a class of interactive methods for confidently navigating
the space of optimal shapes (networks) in equilibrium has been proposed based on
graphic statics; see, e.g., an exemplary compendium [53].
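The force density method mentioned above reduces equilibrium form-finding to a linear solve once the force densities are fixed. The following is a minimal sketch, not Schek's original implementation; the three-edge cable, force densities, supports, and loads are made-up illustrative values:

```python
import numpy as np

# Force density method (after Schek): with force densities q fixed, the
# equilibrium positions of the free nodes follow from a linear system.
# Toy network: 4 nodes in a chain, nodes 0 and 3 fixed, 1 and 2 free.
edges = [(0, 1), (1, 2), (2, 3)]
n, m = 4, len(edges)
free, fixed = [1, 2], [0, 3]

# Branch-node (incidence) matrix C, split into free/fixed columns.
C = np.zeros((m, n))
for e, (i, j) in enumerate(edges):
    C[e, i], C[e, j] = 1.0, -1.0
Cf, Cx = C[:, free], C[:, fixed]

q = np.array([1.0, 1.0, 1.0])           # force density per edge
Q = np.diag(q)
xyz_fixed = np.array([[0.0, 0.0, 0.0],  # support coordinates
                      [3.0, 0.0, 0.0]])
p = np.zeros((len(free), 3))            # external loads on free nodes
p[:, 2] = -1.0                          # unit load downwards

# Solve (Cf^T Q Cf) x_free = p - Cf^T Q Cx x_fixed, per coordinate.
D = Cf.T @ Q @ Cf
xyz_free = np.linalg.solve(D, p - Cf.T @ Q @ Cx @ xyz_fixed)
```

For this symmetric cable, both free nodes sag by one unit; changing q changes the sag without re-deriving anything, which is the appeal of the method.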
Topology Optimization
A generalized problem formulation for topology optimization problems can lead to
new frontiers of research in computational design that go much beyond the famous
structural design applications, cf., the formulation of massing and zoning problems
in architectural configuration inspired by structural topology optimization problems
in the GoDesign framework [26]. Referring to the three types of design moves in
generative design as mesh editing procedures (for modification of shape, colour, and
topology), it must be noted that contrary to what the name suggests, the most common
and the most scalable approaches to topology optimization do not work with altering
16 P. Nourian et al.
the topology (by using binary decision variables x = [x_e]_{n×1} ∈ {0, 1}^n) throughout
the optimization process. Even though the ultimate aim of the topology optimiza-
tion process is to end with a particular combination of binary decision variables
defining a distinct topology of a cellular material domain, the ambient colours typi-
cally indicate float density-like variables corresponding to the possibility/probability
of having a cell or not. Thus, it is common to consider an array of float density
variables corresponding to the cellular finite elements in the tessellation, in the form
x = [x_e]_{n×1} ∈ (0, 1]^n. This change of decision variable x presents two relaxations
to the theoretical topology optimization problem. One relaxation concerning
the change of a Boolean decision variable to a float variable brings about straightfor-
ward differentiability, thus resolving the need to compute a topological derivative,
q.v. [54]. Another relaxation is about changing the infimum from zero to a small
nonzero value (e.g. 10^-3) to avoid singularities in the stiffness matrix (as well as to
avoid division by zero in the smoothing/filtering step [55]).
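These two relaxations can be sketched together; the moduli, penalization exponent, and lower bound below are illustrative values in the spirit of the modified SIMP interpolation used in codes such as [55], not prescribed by this chapter:

```python
import numpy as np

# Modified SIMP interpolation: float densities with a small nonzero lower
# bound, so the stiffness matrix stays non-singular, and a penalization
# exponent that pushes intermediate densities towards 0 or 1.
x_min, penal = 1e-3, 3.0
E0, E_min = 1.0, 1e-9   # solid / void Young's moduli (assumed units)

# A random density field, clipped into (0, 1] via the lower bound x_min.
x = np.clip(np.random.default_rng(0).random(100), x_min, 1.0)

# Penalized element stiffness scaling and its derivative w.r.t. density,
# both smooth in x -- the differentiability gained by the relaxation.
E = E_min + x**penal * (E0 - E_min)
dE = penal * x**(penal - 1) * (E0 - E_min)
```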
The most crucial step of any analytical topology optimization process is the
simulation of the cost function and its gradient as a function of the decision vari-
able. In the case of structural topology optimization, this is performed in these
steps: [re]computing the stiffness matrix based on the current density distribution;
computing a numerical solution to the governing equations using the finite element
method; computing a gradient vector showing the sensitivity of the cost function to
perturbations of the density variables; iteratively seeking a Lagrange multiplier (in
the optimality criteria method) for satisfying the volume constraint; multiplying the
sensitivities by the Lagrange multiplier; applying some distance-based filtering or
smoothing to the gradient vector; and lastly multiplying the next-iteration density
vector by the smoothed gradient in a heuristic fashion, assuming that where there is
higher sensitivity, there should be more material.
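The Lagrange-multiplier search and heuristic density update described above can be sketched as follows; this mirrors the optimality-criteria update of the classic 99/88-line codes, with invented toy sensitivities standing in for an actual finite element solve:

```python
import numpy as np

# One optimality-criteria update: given densities x, compliance
# sensitivities dc (<= 0) and volume sensitivities dv (> 0), bisect on the
# Lagrange multiplier of the volume constraint.
def oc_update(x, dc, dv, volfrac, move=0.2, x_min=1e-3):
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # Heuristic fixed-point step: more material where sensitivity is high.
        x_new = x * np.sqrt(-dc / (dv * lmid))
        x_new = np.clip(x_new, np.maximum(x - move, x_min),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > volfrac:   # too much material -> raise multiplier
            l1 = lmid
        else:
            l2 = lmid
    return x_new

x = np.full(64, 0.5)
dc = -np.linspace(1.0, 2.0, 64)   # toy sensitivities: compliance decreases
dv = np.ones(64)                  # with density; uniform element volumes
x_new = oc_update(x, dc, dv, volfrac=0.5)
```

After the update, the mean density still satisfies the volume fraction, while material has shifted towards the elements with higher sensitivity.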
The most common approaches to structural topology optimization, to this date,
are either directly based on or inspired by the Solid Isotropic Microstructure with
Penalization (SIMP) methodology [4, 56, 57]. Notably, several pieces of vectorized
MATLAB code have been published for topology optimization based on the SIMP
methodology, including the famous 99 Lines [4] and 88 Lines [58] for 2D structural
optimization, and a counterpart for 3D structural optimization [55].
Rozvany and Zhou [59] explain in depth why the intuitively appealing and bio-
inspired approaches known as soft-kill, hard-kill, evolutionary structural optimiza-
tion (ESO), or bidirectional evolutionary structural optimization (BESO), popularized
by Mattheck [60, 61] and Yi Min (Mike) Xie [62, 63], are not guaranteed
to find optimal solutions. Thus, given the mathematical orientation of the approaches
proposed in this book chapter, we shall skip these methodologies in favour of those
directly working with the gradients of the objective function.
While the use of a numerical simulation is the core of the methodology, it does
not have to be based on the finite element method; there are alternative formulations
based on other numerical methods such as the discrete element method [64].
Here, the basic concepts of topology optimization methodologies are explained with
a view towards generalization, from the structure of the design space to the different
design space navigation methods.
Tessellation of a Spatial Material Domain
A material domain is modelled as a topological space, tessellated into regular or
irregular cells, e.g., without loss of generality, triangles or quadrilaterals in 2D, or
tetrahedra or hexahedra in 3D. Together, these cells form a [combinatorial] cell
complex, an algebraic-topological structure that can be represented as a graph whose
vertices are dual to the cells of the complex.
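A minimal sketch of such a dual-graph representation, for an assumed regular quad-grid tessellation in which two cells are adjacent when they share an edge:

```python
# Dual graph of a regular quad tessellation: one graph vertex per cell,
# with an edge wherever two cells share a grid edge.
# Minimal sketch for an nx-by-ny grid of quadrilateral cells.
def grid_dual_graph(nx, ny):
    cell = lambda i, j: i * ny + j          # linear cell index
    adjacency = {cell(i, j): [] for i in range(nx) for j in range(ny)}
    for i in range(nx):
        for j in range(ny):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < nx and 0 <= nj < ny:
                    adjacency[cell(i, j)].append(cell(ni, nj))
    return adjacency

dual = grid_dual_graph(3, 3)   # 9 cells: corners have 2 neighbours, centre 4
```

The same dictionary-of-neighbours structure generalizes to irregular tessellations once shared faces are enumerated.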
A Vector Design Variable
To every cell of the cell complex, a virtual density-like parameter is attributed, making
up an n-vector of [binary] decision variables for n cells in the tessellation. The
ultimate goal can be described as deciding, for every cell, whether or not it should
exist as materialized domain in the final design, such that some minimization or
maximization goal is achieved. However, the most prominent approaches to solving
this problem tend to deviate from defining binary decision variables, at least for the
duration of the topology optimization process, for multiple reasons that can be
summarized as differentiability and preservation of topology.
Formulating a Cost Function
A virtual energy function or a cost function is typically formulated as a function of
the vector decision variable to be minimized (hence the term optimization), which
represents something that conceptually defines the extent to which a design can be
assumed to be tense or unsatisfactory (opposite of relaxed or satisfactory); this is
usually a form of the Dirichlet energy, such as the strain energy of a structure known
as the compliance of the structure under a particular load case given some boundary
conditions (supports). More generally, any cost or benefit function that is explicitly a
function of the vector decision variable can be sought to be minimized or maximized
in this regard. Correspondingly, the argmin or argmax will be the optimum design
effectively describing a foggy picture of the minimal or maximal material distribution
over the design domain that is typically snapped to a binary density vector at the end
of the process.
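As a toy illustration of compliance as such a cost function, consider a 1D bar discretized into serial elements (assumed unit lengths, moduli, and load); the compliance c(x) = f^T u is evaluated by assembling a density-dependent stiffness matrix and solving the equilibrium equations:

```python
import numpy as np

# Compliance of a 1D bar as a toy cost function: assemble the stiffness
# matrix from per-element densities, solve K u = f, and evaluate
# c(x) = f^T u = u^T K u (the strain energy, up to a constant factor).
def compliance(x, penal=3.0, load=1.0):
    n = len(x)                       # elements; nodes 0..n, node 0 fixed
    k = x**penal                     # penalized element stiffnesses
    K = np.zeros((n + 1, n + 1))
    for e in range(n):               # assemble 2x2 element matrices
        K[e:e + 2, e:e + 2] += k[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n + 1)
    f[-1] = load                     # point load at the free end
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # fix node 0 (support)
    return f @ u

c_solid = compliance(np.ones(4))     # fully solid bar
c_half = compliance(np.full(4, 0.5)) # half-density bar: much softer
```

Removing material (lowering densities) raises the compliance, which is exactly the tension between the cost function and the volume constraint that drives the optimization.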
Topology Optimization Problem-Solving Method
From the very description of the design variable, it must be noted that even in the case
of the binary decision variables, the search space expands exponentially, in the order
of O(2^n), and thus a naive brute-force search quickly becomes intractable. Further-
more, even after introducing the float variables and the common SIMP heuristic (the
penalization scheme) that makes the cost function easily differentiable, the problem is
almost always non-convex and generally difficult to solve with thousands or hundreds
of thousands of decision variables. The approaches for solving topology optimization
problems bifurcate with respect to at least two major questions: (a) binary decision
variables or float decision variables, and (b) analytical or metaheuristic. As explained
earlier, the metaheuristic approaches fall out of the scope of this summary. The
approach of using binary decision variables is perhaps interesting or appealing for
small educational examples such as LayOpt [65], but it does not scale to
high-resolution applications [54]. As for the analytical approaches, in addition
to the most common optimality criteria method introduced briefly earlier, a notable
alternative is the method of moving asymptotes (MMA) [66] and its newer variant,
the generally convergent method of moving asymptotes (GCMMA) [67], by Krister
Svanberg. However, the optimality criteria method generally remains more accessible
and thus more prevalent in mathematical topology optimization.
Architectural Layout Methods
The GoDesign framework [26] explains how the problem of 2D/3D space layout
in architecture can be broken down into three sub-problems of massing (binary
colouring) [68], zoning (multi-colour spatial allocation or space planning), and
routing (corridor generation) [69]. The problem of architectural space planning,
especially the 3D layout, is arguably the hardest in computer-aided architectural
design especially because it is ill-defined and over-constrained [70, 71], and so to
keep the discussion contained we refer the reader to a few literature reviews [72–
74] and restrict our attention to explicitly discrete problem formulations such as
space allocation [75] or mesh colouring [76], most of which propose mathematical
integer programming (linear or quadratic optimization with integer variables using
operations research solvers) [77, 78].
Fig. 6 Structural topology optimization for designing a discrete compression-only structure, image
credit: Dijk [81]; (a) a 2D toy problem showing the proposed method as compared to the common
isotropic formulation obtained from the Ansys Topology Optimization package; (b) a 3D toy problem
showing the discrete results of the topology optimization process for a compression-only/funicular
structure to be built out of discrete blocks
Fig. 7 Climatic topology optimization for deriving a building mass/envelope of optimum solar
potential, image credit: Florou [82]; (a) a hypothetical binary mass configuration and the changes in
performance from one configuration to another; (b) the proposed analytical solution to the topology
optimization problem for maximizing the solar potential as a level set in the design space
Rule-based design encompasses different applications with design grammars that
can apply to language, form, and structure through different levels of complexity,
starting with formal grammars.
Formal Grammars
In the 1950s, Chomsky introduced generative approaches to language using formal
grammars [9, 83]. A finite set of grammar rules applies modifications to a
finite set of words, enabling the generation of an infinite set of sentences. The design
rules define the language—or design space—to which these sentences—or designs—
belong.
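A minimal sketch of such a generative grammar; the rules and vocabulary below are invented for illustration:

```python
import random

# A toy context-free grammar in the spirit of generative grammars: a
# finite rule set over a finite vocabulary generates an unbounded set of
# sentences. The rules and words here are made up for illustration.
rules = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "A", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["architect"], ["grammar"], ["design"]],
    "A":  [["generative"], ["formal"]],
    "V":  [["shapes"], ["derives"]],
}

def derive(symbol, rng):
    if symbol not in rules:              # terminal word: emit as-is
        return [symbol]
    expansion = rng.choice(rules[symbol])
    return [w for s in expansion for w in derive(s, rng)]

sentence = " ".join(derive("S", random.Random(1)))
```

Because the rule for NP can recurse via VP, repeated application produces ever-new sentences from the same finite rule set, which is the point of the formal-grammar analogy to design spaces.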
Fig. 8 Introducing grammatical generative design: fractal square subdivision of squares by moving
a marker (in pink)
L-Systems
L-systems extend formal grammars to the generation of geometries. In 1968,
Lindenmayer introduced L-systems as a type of formal grammar to algorithmically
describe the growth of plants [10, 85]. A set of rules applies to a string of characters
drawn from an alphabet. The interpretation of the string generates the corresponding
shape in the manner of turtle graphics [86]. The rules are applied iteratively to
generate different levels of growth of the shape. The shape depends on the starting
string, the axiom, and, in the case of parametric rules, on the different parameters.
L-systems can also describe the generation of patterns such as fractals, and can apply
to architectural and structural design as parametrization strategies for the generation
of designs. The combination of L-systems and genetic algorithms provides a means of
topology optimization in the search for statically optimized designs [87, 88].
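A minimal deterministic L-system can be sketched in a few lines; for brevity the classic Koch-curve rewrite rule is used here rather than a plant grammar ('F' = draw forward, '+'/'-' = turn, interpreted by turtle graphics):

```python
# A minimal deterministic L-system: parallel rewriting of a string.
# Characters without a rule (here '+' and '-') are copied unchanged.
def l_system(axiom, rules, iterations):
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Koch-curve rules: every 'F' is replaced by 'F+F-F-F+F' in parallel.
koch = l_system("F", {"F": "F+F-F-F+F"}, 2)
```

Each iteration rewrites all symbols simultaneously, which is what distinguishes L-systems from sequential formal grammars and models the parallel growth of plant parts.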
Shape Grammars
Shape grammars do not modify a chain of characters, as formal grammars do, but a
geometrical design, and have found many applications in the exploration and design
of general geometries. Stiny and Gips introduced shape grammars in 1971 for the
generation of shapes in painting and sculpture [8]. A classification can be found in
[89]. Subsequently, shape grammars found a large set of applications in many fields
of design, architecture, and engineering [14, 84, 90].
Architectural Grammars
Architectural grammars include a variety of applications, such as Palladian villas
[91], Frank Lloyd Wright's prairie houses [92], Queen Anne houses [93], Yingzao
Fashi Chinese buildings [94], or Siza's houses in Malagueira [95]. The grammar
decodes an architectural style or building typology, formalized and structured
into a set of rules to generate multiple other designs that share the same char-
acteristics. Design grammars also apply at scales other than the building. Urban
shape grammars tackle the district, like the Medina of Marrakesh [96], or the city,
like Praia in Cabo Verde [97]. Product shape grammars tackle specific styles of
chairs [98], coffeemakers [99], cars [100], or tableware [101], for instance, allowing
mass customization. An architectural grammar for housing rehabilitation takes into
account varied information, such as new usage requirements, instead of decoding a
design style [102].
Functional, Structural and Force Grammars
Even though architectural grammars focus on the arrangement of shapes, gram-
mars can include functional aspects other than geometry. Shape grammars evolve
into functional and structural grammars to include non-geometrical data related to
structures. This data can include structural-behaviour and construction-technology
requirements. Engineering applications include houses [13], towers [103, 104], halls
[105], bridges [106], trusses [41], or geodesic domes [15, 16]. Some applications
are specific to a fabrication technology, like CNC machines [107, 108], instead of
a structural system. Force grammars describe the organization of forces, as
opposed to that of spaces or elements, which can later materialize into a
structure. Force grammars for graphic statics were introduced by Lee et al. and applied
to 2D edge networks [109] and 3D cell decompositions [110].
Mesh and Graph Grammars
The exploration of the topology of patterns and meshes resulted in the development of
several operations, sometimes also framed as grammar rules, that are not specific
to architecture and structures; by abstracting a design, they generalize the range of
applications. Considering [polygon] meshes as hyper-graphs that can contain higher-
dimensional topological entities known as hyper-edges (more commonly known as
faces and cells), mesh grammars are also considered graph grammars. The extensive
range of applications of meshes sparked the design of very different sets of rules tuned
for specific objectives, mainly for aesthetic purposes. Hansmeyer and Dillenburger
[111] introduce a mesh grammar following a formal grammar approach that modi-
fies the density and the geometry of meshes to generate highly detailed ornamental
shapes. This grammar focuses on shape; indeed, the pattern and its singularities are
not modified. In computer graphics, quad-mesh grammars modify the topology to
improve the regularity of dense and unstructured meshes [112–115]. This regularity
for modelling and representation concerns both geometry and topology. These quad-
mesh grammars consist of a set of local rules that preserve the quad-mesh constraint,
unlike other grammars for triangulated meshes. Another family of quad-mesh gram-
mars does not consist of rules, but of different patterns [116–118]. The patterns feature
different singularities to patch N-sided polygons with a prescribed number of subdi-
visions on each side. Combining these patterns allows for completing the mesh of
different shapes. These rules apply to meshes for the field of computer graphics,
animation and rendering. These rules have little use in the context of architectural
and structural design, where design does not start with a dense unstructured mesh
and where many more objectives come into play. The Conway operators constitute
a grammar developed by John H. Conway for the description of polyhedra [119].
Conway operators can be applied to translate one tessellation into another, and have
already been applied to optimise space-frame structures [120, 121]. Indeed, tessellations present
different structural and fabrication properties worth investigating [122–124]. The
design of a pattern is a trade-off between topological regularity and irregularity.
Oval et al. [125] provide a grammar tailored for the topological design of vertex
singularities in quad meshes. Heisserman [126] presents a grammar on the boundary
of solids represented by meshes for modelling architectural spaces or volumetric
objects. Modifications of the topology result from modifications of the geometry.
The topology is updated when a movement modifies the adjacency or creates an
overlap. This grammar explores the topology of the shape, the underlying mesh
being only a representation for computation.
Fig. 9 Illustrating grammatical generative design: combining shape and connectivity rules to evolve
a series of truss designs
In the middle of the spectrum of generative design methods lie the simulation-
driven games, where the interactivity of the grammatical methods meets the robust
evaluation and assessment capabilities of the mathematical methods. In a broad sense,
gamified generative design processes are design processes in which a gamified
environment allows the players to navigate the design space in a structured way
while the alternatives are evaluated and the corresponding scores are reported
back to the player; hence they are emblematically dubbed play & score mechanisms.
The first element, play, refers to serious gaming: individual or collective
activities for making decisions with high utility under conflicting objectives and
limited resources [127]. The second element, score, pertains to the inherent capa-
bility of evaluating different alternatives based on their integral performance as a
design and their consequences as a decision vector combining many choices. Thus, we
require simulation models that represent the states and dynamic behaviours of real-
world phenomena [128], so we can predict the functional performance and conse-
quences of choices (design moves). To recapitulate, a gamified generative design
process is a serious game that has an incorporated simulation model of the real world,
which interactively informs the players about the consequences of their actions. Such
games provide exploration and experimentation environments for the players to take
actions, learn, and design through formal simulations based on logic, rules, first prin-
ciples, and statistical correlations [129]; and role-play in [group decision] simulation
(games) based on experience, negotiation, and intuition [130].
The main challenge of the simulation game designer is to (1) formulate the main
objective of the game, (2) capture the essential dynamics of the real phenomena in
the simulation engine, and (3) provide the proper interaction channels so players
can understand decision dynamics. These three aspects have been named meaning,
reality, and play by Harteveld [131].
One of the key characteristics of a game is the level of abstraction which has
major consequences on the ease of reasoning about the system [132] and the focus of
the reasoning in the game. For illustration, board games often have a higher level of
abstraction than computer games, as their medium is more constraining. The degree
of abstraction and simplification is essential in the modelling and design processes;
however, in the gamified approach, it is specifically consequential. Games provide
an artificial environment for exploration, assessment, and discussion of decisions.
A highly abstracted game risks omitting the complexities of the original
problem, while a minimally abstracted game risks losing focus on the main
problem. Therefore, the gamified environment needs an adequately abstract
representation of the complexity of the real problem.
In Fig. 3, we have elaborated on different types of complexities present in a built
environment design problem. These complexities demonstrate numerous inputs and
outputs with high levels of inter-dependencies which can impede human cognition in
grasping the details of the problem and taking action accordingly [133]. Simulation
games assist decision makers in understanding, assessing, and exploring these
complex problems; hence, their utilization can also be understood as an approach to
augmenting human cognition.
The earliest use of serious games in urban planning and architectural design goes
back to Abt [127] and Sanoff [20]. During the last decades, the use of games in
design and decision-making has increased. Initially, the main driving force was inte-
grating the simulation in an interactive environment, but later, designers and planners
started to focus on the participatory potential of games as well [134]. In this section,
we provide an overview of different gamification approaches and applications in
generative design.
Games for Education
One of the main advantages of serious games is that they provide a safe environ-
ment to explore a model and learn about its dynamics. The majority of the research
body in serious gaming aims to exploit this pedagogical potential of games by
educating the players through interaction in simulation-driven game sessions [135].
For instance, researchers have utilized simulation games to explore the complex inter-
dependencies of sustainable development of the built environment [135] to allow
first-year architectural students to learn by interacting with such systems [136]. The
foundations of how games can contribute have been established in Constructivist
learning theories since they highlight interaction, curiosity, and social negotiation
as the main driving forces of internalizing knowledge [137]. Furthermore, it is key
to note that even games without a primarily educational goal exploit this
potential for familiarizing the players with the topics and establishing a common
language for discussions and negotiations.
First Principle Simulations
Beyond the educational case, other applications of serious games, such as decision-
making and planning support, have received relatively little attention [138]. However,
these applications are central to the gamified generative design approach, as they
support evidence-based navigation of the design space. The most prominent way to
provide evidence for the comparison of alternatives is first principle simulations and
evaluations. In these approaches, a formal model that is rooted in previously estab-
lished scientific and theoretical work is used to assess the alternatives. A classic
example of this is the original SimCity game [139], in which the authors used
Forrester’s model to replicate urban dynamics. This helped them predict the popu-
lation increase or decrease based on education, unemployment, and growth rates
[140]. More recently, Sanchez focused on the ecological aspect of urban develop-
ment through Block'hood, a simulation-driven voxel placement game for neighbour-
hood development that resembles sandboxes such as Minecraft and SimCity [141].
Specific to urban infrastructure, SimPort-MV2 focuses on land allocation in port-
planning [142]; MATRICES in the ProRail games focuses on the scheduling and
rail infrastructure to predict the consequences of different scenarios [143]; the Train
Fever game utilizes an integrated transportation and land-use model to predict popu-
lation and employment flow based on the rail networks [140]. More recently, there
is increasing interest in games that aim to integrate different domains to achieve
a more holistic representation of urban planning complexities, such as the water-
energy-food-land-climate nexus serious game [144], where the authors embed statis-
tical correlations from different domains to make an integrated inference engine for the
assessment of scenarios. Another interactive multi-player game is the Sustainable
Infrastructure Planning Game (SIPG) which focuses on the inter-dependencies of
different sectors in strategic planning exercises [145].
On the architectural scale, Moloney proposes a serious game for integral sustain-
able design [136]. They provide a set of spatial quality measures based on rules of
thumb and a peer-review mechanism that captures the qualitative views of the players
as a collective. Savov proposes a block-based assembly game for facade design with
a setup similar to Jenga. They conduct light and structural analyses to provide live
feedback to the players about the performance of the design [146]. Lin suggests
integrating VR and BIM to communicate the qualitative requirements of healthcare
buildings to the players [147].
Simulation games sit in between mathematical derivation and grammatical
itemization, mainly because their navigation of the design space merges the two ends
of this spectrum. In this segment, we first elaborate on the similarities and differences
of the gamified approach with each of the mathematical and grammatical approaches
and then explain what makes it a unique methodology in generative design.
The common thread between grammatical and gamified approaches is twofold.
Firstly, they both limit the navigation to a rule set. In simulation games, the player's
actions are predefined by the game rules. In the grammatical approach, the
rules define which design changes can be made. Often, the game
rules are more forgiving than grammar structures, as they provide room for the players'
exploration. However, in both approaches, decisions/actions are defined by
the rule sets and the grammar structures, which determine the possible changes
that can be made. Thus, the grammar creates pathways in
the design space that make it navigable for human agents. This brings us to the
second similarity between gamified and grammatical approaches: they both rely on
human agency as the principal controller of the navigation process. This means
that the decision to apply a rule primarily lies with the human agents. In
some examples, game actions can be delegated to artificial agents or might be based
on the system's recommendations, but the human player remains the authority in the
navigation process.
On the other hand, gamified and mathematical approaches are similar as they
both incorporate evaluation mechanisms that can provide a basis for comparison of
alternative actions/decisions. In simulation games, these evaluations are generally
introduced as indicators that allow the players to have a basis for their compar-
ison. Performing the evaluation relies on formal or role-play simulations that essen-
tially project the consequences of actions and decisions to the identified indicators.
The formal simulations can be embedded in the game as analogue mechanisms in
card games or digital simulations similar to objective functions in the mathematical
approach. Role-play simulations are normally introduced to represent the societal
and organizational complexities of the problem. Thus, they are closely related to the
consensus-building capabilities of games. In all cases, the overlap of the gamified
and mathematical approach is that given two decisions, it is possible to differentiate
them by evaluating indicators. In the mathematical approach, this differentiation is
performed formally through the gradient-based approaches that also help the process
identify the new potential decisions. However, in the gamified approach, the compar-
ison of the alternatives and the specification of the new decisions are performed purely
by the human agent.
In this segment, we elaborate on the EquiCity design game to dive deeper into the
potential of spatial simulation-driven games as generative design methodologies.
EquiCity is a spatial game for planning the redevelopment of a neighbourhood in
Delft, the Netherlands. The site is a historical factory that was an
economic attraction of Delft over the last century. As the factory has ceased operation,
the municipality is aiming to redevelop the area with a mixture of functions
such as residential, commercial, cultural, and public spaces. The main objectives
of the redevelopment consist of three main categories: (1) environmental such as
light and visibility; (2) social such as accessibility; and (3) economic concerning
the intervention extent. The main planning measures pertain to allocating different
functions in various sites within the district. As it is a multi-player game,
participants from various backgrounds were involved in deciding about this district.
On the methodological level, the problem has been formulated based on the intro-
duced framework. As indicated in Fig. 10, the decisions of the players were structured
as the interest matrix X^(t) of size n×m×o. Players would specify their decisions
individually at the beginning of each round. Then the game engine would gather all
of the decisions, construct the interest matrix, and perform opinion pooling and
proportional fitting to achieve a consensual decision on the allocation of functions,
V^(t) of size n×o. As the decisions are on the planning level, the spatial decision on
the configuration of each site is delegated to the agent-based massing process, which
produces the prospective massings {k_j^(t)}. In the next step, the game engine
evaluates the corresponding massings {k_j^(t)}
based on the predefined economic, social, and environmental objective functions.
Finally, individual utility values are combined through multi-criteria decision anal-
ysis (MCDA) to produce the group score, individual score, and achievement badges.
At this point, one iteration of the game is complete; and players can explore their
Generative Design in Architecture: From Mathematical Optimization … 31
Fig. 11 Screenshots of two iterations of game-play in the EquiCity game, the case of the redevel-
opment of a former factory into an urban district featuring a mix of residential, commercial, and
cultural spaces in addition to communal/public spaces. The overlaid screenshots show the infor-
mation on individual and collective scores (right) and the control, interest, and power difference
matrices (left) shown to the players
32 P. Nourian et al.
the stakeholders identifies their individual agenda and how their individual score is
computed. The control matrix of the stakeholders specifies how influential they are in
each site-function pair. With these two matrices, we can conduct the opinion pooling
and iterative proportional fitting process to achieve a consensual decision about the
district. However, this decision might not perform the best for the environmental
and social indicators. Therefore, the stakeholders are motivated to deviate from their
initial interest X (0) and produce new decisions X (t) to reach a better performance.
The translation of this approach in the gameplay is an iterative process where the
stakeholders will negotiate with their peers about their agenda and performance of
the area as a whole and try to form new decisions. At the end of each round, after
submitting their decision, they will get the evaluation of their decision which is the
basis for comparison with the previous decision and is potentially constructive in
guiding them towards a new decision.
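The consensus step described above — pooling the players' interest matrices and fitting the pooled result to the planning targets — can be sketched in a few lines of Python. This is a minimal illustration, not the EquiCity implementation: the function names, the player weights, and the marginal targets (site capacities and function quotas) are all assumptions made for the example.

```python
import numpy as np

def pool_opinions(interest, weights):
    """Linear opinion pool: weighted average of the players' interest
    matrices (players x sites x functions) into one group matrix."""
    return np.tensordot(weights, interest, axes=1)

def fit_proportionally(M, site_targets, func_targets, iters=200):
    """Iterative proportional fitting: rescale rows and columns
    alternately until the pooled matrix matches the target marginals
    (site capacities and function quotas, which share the same total)."""
    M = M.copy()
    for _ in range(iters):
        M *= (site_targets / M.sum(axis=1))[:, None]   # fit site capacities
        M *= (func_targets / M.sum(axis=0))[None, :]   # fit function quotas
    return M

rng = np.random.default_rng(0)
X = rng.random((3, 4, 2))                        # 3 players, 4 sites, 2 functions
pooled = pool_opinions(X, np.array([0.5, 0.3, 0.2]))
V = fit_proportionally(pooled, site_targets=np.full(4, 1.0),
                       func_targets=np.array([3.0, 1.0]))
```

After fitting, V is a consensual allocation whose row sums match the site capacities and whose column sums match the function quotas, while staying as close as possible (in the proportional-fitting sense) to the pooled opinions.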
EquiCity embodies the principles of gamified generative design as it (1) structures
the design space by defining decision variables (planning measures); (2) establishes
a robust evaluation system for differentiating the alternatives based on evidence from
the simulations; (3) incorporates the participatory nature of the problem in the game
and facilitates the participants in reaching consensus; and finally, (4) provides an interactive
environment where the relevant information is accessible to the stakeholder to support
their decision-making process (see Fig. 11).
Nevertheless, EquiCity is just a detailed example of this approach. As it has
been established in the previous subsections, the gamified generative design can
(1) have various sets of decision variables to address different application areas
in the spatial domain; (2) benefit from various evaluation methods such as first
principles, ABM, statistical inference, and even participatory evaluation such as
peer review; (3) incorporate various levels of multi-actor complexity to undertake the
societal and organizational challenges of the spatial problem; and finally (4) utilize
various technological infrastructure such as web-based interfaces, game engines,
VR, etc. to provide an interactive environment for the participants (Fig. 12).
Fig. 12 Top rows show the application of the Technique for Order of Preference by Similarity to
Ideal Solution (TOPSIS) for solving the massing configuration problem, and the bottom row shows
the application of the method of Multi-Actor Multi-Attribute Gradient-Driven Mass Aggregation
(MAGMA) Method, a participatory zoning algorithm based on Fuzzy Aggregation in Configraphics
[153, 164]
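As a concrete illustration of the multi-criteria ranking named in the caption above, a minimal TOPSIS computation can be written as follows. This is a generic sketch of the technique, not the implementation used in the figure; the score matrix, weights, and criterion directions are invented for the example.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank design alternatives by relative closeness to the ideal
    solution. scores: (alternatives x criteria); benefit marks the
    criteria where higher values are better."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    V = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # in [0, 1], higher is better

# Three hypothetical massing alternatives scored on daylight, access, and cost.
scores = np.array([[0.9, 0.7, 120.0],
                   [0.6, 0.9, 100.0],
                   [0.4, 0.5, 80.0]])
closeness = topsis(scores, weights=np.array([0.4, 0.3, 0.3]),
                   benefit=np.array([True, True, False]))
```

The resulting closeness vector gives one scalar per alternative, which is what allows the game engine to differentiate alternatives and compose group and individual scores.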
let alone generating designs of high quality with respect to multiple quality criteria.
Thus, it is somewhat pointless to discuss the performance of the generative capabilities
of generative models for generating realistic pictures in the styles learnt from a
corpus of human-made works (such as the 2D or 3D images typically generated with generative
adversarial networks or DALL·E). Regardless of whether it is a utopian or dystopian
future for AI models to generate architectural designs, the technological possibility
for generating a design is already available.
The more critical question of interest regards the capability of AI for solving hard
problems of performance-based generative design [165], where mapping or navigating
the design space is intractable due to the difficulty of formulating the associations
of choices and consequences; see a comprehensive review of the applications
of deep generative models in design [166].
One particular class of models of interest thus comprises models that can help make
these high-dimensional “bipartite design spaces” intuitively understandable and
tractable for humans; a particularly relevant example is a one-layer deep Bayesian
belief network [167] that can be trained to work in both directions, i.e. both from the
design space to the performance space and from the performance space to the design
space. Considering the difficulty of simulating some aspects of performance related
to subjective and cognitive matters such as ergonomics, the need for building function
approximators capable of associating performance levels with spatial configurations
can be fulfilled with trainable neural networks, provided a systematic data collec-
tion campaign is conducted; see a connective perspective on spatial computing for
design [168]. In general, however, there are two major types of problems that can be
meaningfully addressed with AI: mapping or associating low-dimensional perfor-
mance data points with high-dimensional input design variables (encoding spatial
configurations) and the inverse or dual problem of navigating this bipartite space
in the opposite direction (see Fig. 2: Duality of mapping and navigation problems
in generative design). Two major lines of work can be identified as relevant in this
regard: one that uses neural networks as function approximators for regression aimed
at interpolation and extrapolation between data points; and another that is concerned
with manifold learning, dimensionality reduction (encoding), signal reconstruction
(decoding), and sampling (as in design of experiments), see a recent example for
making an explainable encoder–decoder network architecture [169]. A particular
class of generative models with special relevance to the dual problems of mapping
and navigation are probabilistic graphical models (PGM), which are of two general
types [170, 171]: Bayesian (directed acyclic graphs) and Markovian (bidirected or
undirected). Examples of such models include Bayesian belief networks, restricted
Boltzmann machines [171], flow-based models, Markov random fields, and diffusion
models. In short, generative models equipped with AI can augment the three types
of generative design processes where there is a complication related to the difficulty
of navigation or mapping in high-dimensional bipartite design spaces. However, the
use of generative models for content generation, no matter how interesting for the
entertainment industry, does not introduce a new type of generative design relevant
to AEC, unless one adopts a reductionist approach to GD as design automation
(automating the task of a human, not necessarily supposed to produce better results
than those made by humans).
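The bidirectional use of a probabilistic model sketched above — mapping from design choices to performance, and navigating back from a performance target to promising designs — can be illustrated with a toy discrete belief network. All numbers below are invented for the example; this is not the model of reference [167], only a minimal demonstration of inference in both directions via Bayes' rule.

```python
import numpy as np

# Toy one-layer belief network linking a discrete design choice D
# (3 options) to a performance level P (low / mid / high).
p_D = np.array([0.25, 0.25, 0.50])           # prior over design options
p_P_given_D = np.array([[0.7, 0.2, 0.1],     # row i: P(P | D = i)
                        [0.2, 0.6, 0.2],
                        [0.1, 0.3, 0.6]])

# Mapping direction: from the design space to the performance space.
p_P = p_D @ p_P_given_D                      # marginal distribution over P

# Navigation direction: condition on observing "high" performance and
# invert with Bayes' rule to obtain a posterior over the design space.
HIGH = 2
p_D_given_high = p_D * p_P_given_D[:, HIGH] / p_P[HIGH]
```

The same conditional table supports both queries, which is the sense in which such a network "works in both directions" between design and performance spaces.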
5 Conclusion
Here, we look back at the rhetorical questions posed at the beginning and provide
succinct answers. Cedric Price in 1966 expressed the provocation “Technology is
the answer, but what was the question?”. When it comes to the application of digital
technologies in AEC, sadly, this rhetorical question seems to be relevant more than
ever before. The AEC industry as a whole has yet to learn from other Sciences
of the Artificial for forming a transparent culture of listing and formulating prob-
lems. Before claiming to have invented yet another solution, one needs to ask rigor-
ously what the problem is. The most pressing confusion about “the problem of the
design problem” seems to be about the notion of automated design. We hope to
have presented compelling reasons to think beyond automation, optimization, or
even problem solving by presenting the essentially different problem settings where
the purpose of the design process can be the gradual and explainable derivation
of design, participation in design for the sake of ensuring inclusivity and equity,
or mass customization and formation of rich design languages capable of forming
diverse designs, not only for the sake of comprehensively reaching better performance
levels but also for enriching design cultures as integral parts of human cultures. In this
sense, we hope to have adequately shown that it is time to think beyond the clichéd
wicked-problem mindset and go about formulating design problems rigorously. The
following paragraphs summarize the moral of the paradigm of generative design.
Methodological Commonalities
As we have illustrated in the “three plus one” methodologies, generative design is
not an agglomeration of isolated and distinct approaches but a coherent spectrum of
methodologies centred on the principles of formulating the problem, structuring
the design space, and navigating the design space based on evidence, knowledge,
rules (design principles), or policies (control or game-playing strategies). Despite the
differences in the three approaches, there are commonalities in these methodologies;
most importantly, they all have explicit discrete decision variables on which the
design space and an explicit formulation of the problem are based. This discreteness
allows the design solutions to be countable, whether finite in the case of mathematical
optimization or infinite in the grammatical approach. The enumerability of the design
space is a prerequisite of a structured navigation strategy.
The three methodologies have explicit and explainable navigation strategies. In the
grammatical approach, the grammar describes the applicability of the rule-set alterna-
tives, which collectively exhibit pathways through the idiomatic expansion of the design
space. In the gamified approach, the navigation pathways are looser to allow for the
creative exploration of the players. Nevertheless, the scoring mechanism helps the
players learn about the dynamics of the game and reach a consensus collectively.
In the mathematical approach, navigation is delegated to the gradient-based
search to have the utmost objectivity in the design derivation process.
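The idea that a grammar's rule-set alternatives open explicit, replayable pathways through the design space can be made concrete with a toy rewriting grammar. The rules and the choice encoding below are hypothetical, chosen only to show how a sequence of rule choices acts as an explicit navigation strategy.

```python
# Toy nondeterministic rewrite grammar: at each derivation step every
# symbol with applicable rules is replaced, and a list of choice indices
# records which alternative was applied — the navigation strategy.
RULES = {"A": ["AB", "BA"], "B": ["A"]}

def expand(word, choices):
    """Apply one rule per rewritable symbol, guided by choice indices."""
    out, it = [], iter(choices)
    for symbol in word:
        options = RULES.get(symbol)
        out.append(options[next(it) % len(options)] if options else symbol)
    return "".join(out)

step1 = expand("A", [0])        # choose the first alternative for "A"
step2 = expand(step1, [1, 0])   # choose per symbol in the new word
```

Because every derivation is a finite sequence of discrete choices, the reachable designs are countable and any derivation can be replayed or compared — the enumerability and explainability that the text attributes to grammatical navigation.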
The last common theme is explicit validation and evaluation which appears in
all of the methods. In the mathematical approach, constraints ensure validity, and the
objective function evaluates the performance of the design, which we aim to
maximize. In the grammatical approach, the validation is ensured by the
limitations of the grammar and the evaluation is based on the subjective opinion
of the designer. In the gamified approach, the game rules ensure the validity of the
design and the feedback provided by the simulations enables evaluation.
Customization Versus Automation
More unfortunate than the prevailing false dichotomy of automation and customiza-
tion is the preconception that automation is integral to generative design. The
mapping of the generative design methodologies in this paper clearly illustrates how
generative design ultimately provides for customization of solutions; customization
through itemization in the rule-based grammatical approach; customization through
the informed exploration to reach consensus in gamified approaches; and customiza-
tion for the satisfaction of the specific needs of a particular problem in the mathematical
approach. The generative design paradigm aims to formulate the problem, structure
the design space, and devise navigation strategies to facilitate the customization of
the design to the requirements and preferences of the future inhabitants.
Future Research
We can envisage two main directions for future research in generative design: devel-
opment of methodologies and rigorous evaluation of methodologies. The existing
methodologies under the umbrella of generative design are rooted in disparate fields
References
1. Simon HA (2008) The sciences of the artificial, 3rd edn [Nachdr.]. MIT Press, Cambridge,
MA
2. Dorst K (2003) The problem of design problems. Expert Des 135–147
3. Simon HA (1973) The structure of ill-structured problems. Artif Intell 4:181–201
4. Sigmund O (2001) A 99 line topology optimization code written in Matlab. Struct Multidisc
Optim 21:120–127
5. Schek H-J (1974) The force density method for form finding and computation of general
networks. Comput Methods Appl Mech Eng 3(1):115–134
6. Hillier B, Hanson J (1984) The social logic of space. https://doi.org/10.1017/CBO9780511597237
7. Alexander C (1977) A pattern language: towns, buildings, construction. Oxford University
Press
8. Stiny G, Gips J (1971) Shape grammars and the generative specification of painting and
sculpture. In: Proceedings of the congress international federation for information processing
1971:1460–1465
9. Chomsky N (1957) Syntactic structures. Mouton, The Hague
10. Lindenmayer A (1968) Mathematical models for cellular interactions in development i.
Filaments with one-sided inputs. J Theor Biol 18(3):280–299
11. Adamatzky A (2010) Game of life cellular automata. Springer
12. Wolfram S (1997) New kind of science
13. Mitchell WJ (1991) Functional grammars: an introduction. In: Reality and virtual reality:
association for computer aided design in architecture conference proceedings 1991. University
of California at Los Angeles, pp 167–176
14. Cagan J (2001) Engineering shape grammars: where we have been and where we are going.
In: Formal engineering design synthesis. Cambridge University Press, pp 65–92
15. Shea K, Cagan J (1997) Innovative dome design: applying geodesic patterns with shape
annealing. Artif Intell Eng Des Anal Manuf 11:379–394
16. Shea K, Cagan J (1999) The design of novel roof trusses with shape annealing: assessing the
ability of a computational method in aiding structural designers with varying design intent.
Des Stud 20:3–23
17. Simon HA (1997) Administrative behavior. Simon & Schuster
18. Epstein JM (2006) Generative social science: studies in agent-based computational modelling.
Princeton University Press, Princeton
19. Batty M (1974) A theory of markovian design machines. Environ Plann B Plann Des. https://doi.org/10.1068/b010125
20. Sanoff H (1979) Design games. W. Kaufmann
21. Friedman Y (1975) Toward a scientific architecture. First American. MIT Press, Cambridge,
MA
22. Dorst K, Dijkhuis J (1995) Comparing paradigms for describing design activity. Des Stud
16:261–274
23. Veloso P, Krishnamurti R (2020) An academy of spatial agents: generating spatial configura-
tions with deep reinforcement learning
24. Ligtenberg A, Bregt AK, Van Lammeren R (2001) Multi-actor-based land use modelling:
spatial planning using agents. Landsc Urban Plan 56:21–33
25. König R (2011) Generating urban structures: a method for urban planning supported by
multi-agent systems and cellular automata. Przestrzeń i Forma, 353–376
26. Azadi S, Nourian P (2021) GoDesign: a modular generative design framework for mass-
customization and optimization in architectural design. In: Towards a new, configurable
architecture. CUMINCAD, Novi Sad, Serbia, pp 285–294
27. Harding JE, Shepherd P (2017) Meta-parametric design. Des Stud 52:73–95
28. Kroes P, Meijers A (2006) The dual nature of technical artefacts. Stud Hist Philos Sci 37:1–4
29. Gero JS (1990) Design prototypes: a knowledge representation schema for design. AI Mag
11:26
30. Maher ML, Poon J (1996) Modeling design exploration as co-evolution. Comp-Aided Civil
Infrastruct Eng 11:195–209
31. Simon HA (2019) The sciences of the artificial: reissue of the third edition with a new
introduction by John Laird. MIT Press
32. Gumin M (2016) Wave function collapse algorithm
33. Azadi S, Nourian P (2021) Collective intelligence in generative design: a human-centric
approach towards scientific design. BouT: Periodical Build Tech Generative Design 76:7–16
34. March L, Matela R (1974) The animals of architecture: some census results on n-Omino
populations for n = 6, 7, 8. Environ Plann B Plann Des 1:193–216
35. March L (1998) [8+ (6)+ 11] = 25 + x. Environ Plann B Plann Des 25:10–19
36. Retsin G (2019) Discrete: Reappraising the digital in architecture. Wiley
37. Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller M (2013)
Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602
38. Bai N, Azadi S, Nourian P, Roders AP (2020) Decision-making as a social choice game. In:
Proceedings of the 38th eCAADe Conference, p 10
39. Linkwitz K (2014) Force density method: design of a timber shell (Chapter 6). In: Adriaenssens
S, Block P, Veenendaal D, Williams C (eds) Shell structures for architecture. Routledge, pp
73–84
40. Cooper S, Khatib F, Treuille A, Barbero J, Lee J, Beenen M, Leaver-Fay A, Baker D, Popović
Z, Foldit Players (2010) Predicting protein structures with a multiplayer online game. Nature
466:756–760
41. Shea K, Cagan J, Fenves SJ (1997) A shape annealing approach to optimal truss design with
dynamic grouping of members. J Mech Des 119:388–394
42. Lógó J, Ismail H (2020) Milestones in the 150-year history of topology optimization: a review.
Comp Assis Methods Eng Sci 27:97–132
43. Sorkine O (2005) Laplacian mesh processing. Eurographics (State of the Art Reports) 4
69. Peng C-H, Yang Y-L, Bao F, Fink D, Yan D-M, Wonka P, Mitra NJ (2016) Computational
network design from functional specifications. ACM Trans Graph 35:131:1–131:12
70. Arvin SA, House DH (1999) Making designs come alive: using physically based modeling
techniques in space layout planning. In: Augenbroe G, Eastman C (eds) Computers in building.
Springer, US, Boston, MA, pp 245–262
71. Zawidzki M, Tateyama K, Nishikawa I (2011) The constraints satisfaction problem approach
in the design of an architectural functional layout. Eng Optim 43:943–966
72. Lobos D, Donath D (2010) The problem of space layout in architecture: a survey and
reflections. Arq 6:136–161
73. Liggett RS (2000) Automated facilities layout: past, present and future. Autom Constr 9:197–
215
74. Liggett RS, Mitchell WJ (1981) Optimal space planning in practice. Comput Aided Des
13:277–288
75. Saha PK, Borgefors G, di Baja GS (2016) A survey on skeletonization algorithms and their
applications. Pattern Recogn Lett 76:3–12
76. Zawidzki M (2016) Discrete optimization in architecture: architecture & urban layout. https://doi.org/10.1007/978-981-10-1106-1
77. Wu W, Fan L, Liu L, Wonka P (2018) MIQP-based layout design for building interiors. Comp
Graph Forum 37:511–521
78. Hua H, Hovestadt L, Tang P, Li B (2019) Integer programming for urban design. Eur J Oper
Res 274:1125–1137
79. Xie YM (2022) Generalized topology optimization for architectural design. ARIN 1:2
80. Hofmeyer H, Schevenels M, Boonstra S (2017) The generation of hierarchic structures via
robust 3D topology optimisation. Adv Eng Inform 33:440–455
81. van Dijk F (2020) Topology optimization as architectural form finding: using structural
topology optimization to generate architectural geometry. MSc Thesis in Building Technology,
TU Delft
82. Florou A (2021) Generative solar-climatic configuration: a model for feed-forward opti-
mization of building envelopes as to solar energy potential. MSc Thesis in Building
Technology
83. Chomsky N (1956) Three models for the description of language. IRE Trans Info Theory
2:113–124
84. Stiny G (2006) Shape: talking about seeing and doing. MIT Press
85. Prusinkiewicz P, Lindenmayer A (2012) The algorithmic beauty of plants. https://doi.org/10.1007/978-1-4613-8476-2
86. Goldman R, Schaefer S, Ju T (2004) Turtle geometry in computer graphics and computer-aided
design. Comput Aided Des 36:1471–1482
87. Kobayashi MH (2010) On a biologically inspired topology optimization method. Commun
Nonlinear Sci Numer Simul 15:787–802
88. Bielefeldt BR, Akleman E, Reich GW, Beran PS, Hartl DJ (2019) L-system-generated
mechanism topology optimization using graph-based interpretation. J Mech Robot 11:020905
89. Garcia S (2017) Classifications of shape grammars. In: Design computing and cognition’16.
Springer, pp 229–248
90. Knight T, Stiny G (2015) Making grammars: from computing with shapes to computing with
things. Des Stud 41:8–28
91. Stiny G, Mitchell WJ (1978) The Palladian grammar. Environ Plann B Plann Des 5:5–18
92. Koning H, Eizenberg J (1981) The language of the prairie: Frank Lloyd Wright's prairie houses.
Environ Plann B Plann Des 8:295–323
93. Flemming U (1987) More than the sum of parts: the grammar of queen anne houses. Environ
Plann B Plann Des 14:323–350
94. Li AI et al (2001) A shape grammar for teaching the architectural style of the Yingzao Fashi.
PhD thesis, Massachusetts Institute of Technology
95. Duarte JP (2005) A discursive grammar for customizing mass housing: the case of Siza's
houses at Malagueira. Autom Constr 14:265–275
96. Duarte JP, Ducla-Soares G, Caldas LG, Rocha J (2006) An urban grammar for the medina of
Marrakech. In: Design computing and cognition'06. Springer, pp 483–502
97. Beirão JN, Duarte JP, Stouffs R (2011) Creating specific grammars with generic grammars:
towards flexible urban design. Nexus Netw J 13:73–111
98. Knight TW (1980) The generation of Hepplewhite-style chair-back designs. Environ Plann B
Plann Des 7:227–238
99. Agarwal M, Cagan J (1998) A blend of different tastes: the language of coffeemakers. Environ
Plann B Plann Des 25:205–226
100. McCormack JP, Cagan J, Vogel CM (2004) Speaking the Buick language: Capturing,
understanding, and exploring brand identity with shape grammars. Des Stud 25:1–29
101. Costa EC e, Duarte JP (2013) Mass customization of ceramic tableware through digital
technology. Green Design, Materials and Manufacturing Processes, 467–471
102. Eloy S, Duarte JP (2011) A transformation grammar for housing rehabilitation. Nexus Netw
J 13:49–71
103. Baldock R, Shea K, Eley D (2005) Evolving optimized braced steel frameworks for tall
buildings using modified pattern search. Computing Civil Eng. https://doi.org/10.1061/40794(179)60
104. Baldock R (2007) Structural optimisation in building design practice: case-studies in topology
optimisation of bracing systems. PhD thesis, University of Cambridge
105. Geyer P (2008) Multidisciplinary grammars supporting design optimization of buildings. Res
Eng Design 18:197–216
106. Mueller CT (2014) Computational exploration of the structural design space. PhD thesis,
Massachusetts Institute of Technology
107. Sass L (2006) A wood frame grammar: a generative system for digital fabrication. Int J Archit
Comput 4:51–67
108. Ertelt C, Shea K (2009) Generative design and CNC fabrication using shape grammars.
In: ASME 2008 international design engineering technical conferences and computers and
information in engineering conference. American Society of Mechanical Engineers Digital
Collection, pp 25–34
109. Lee J, Mueller C, Fivet C (2016) Automatic generation of diverse equilibrium structures
through shape grammars and graphic statics. Int J Space Struct 31:147–164
110. Lee J, Meled TV, Block P (2016) Form-finding explorations through geometric transforma-
tions and modifications of force polyhedrons. In: Proceedings of the annual symposium of
the international association for shell and spatial structures 2016
111. Hansmeyer M, Dillenburger B (2013) Mesh grammars. In: Stouffs R, Janssen P, Roudavski
S, Tunçer B (eds) Conference on computer-aided architectural design research in Asia, pp
821–829
112. Daniels J, Silva CT, Shepherd J, Cohen E (2008) Quadrilateral mesh simplification. Assoc
Comp Mach Trans Graph 27:148
113. Daniels J II, Silva CT, Cohen E (2009) Localized quadrilateral coarsening. Comp Graph
Forum 28:1437–1444
114. Tarini M, Pietroni N, Cignoni P, Panozzo D, Puppo E (2010) Practical quad mesh
simplification. Comp Graph Forum 29:407–418
115. Peng C-H, Zhang E, Kobayashi Y, Wonka P (2011) Connectivity editing for quadrilateral
meshes. Assoc Comp Mac Trans Graph 30:141
116. Nasri A, Sabin M, Yasseen Z (2009) Filling n-sided regions by quad meshes for subdivision
surfaces. Comp Graph Forum 28:1644–1658
117. Takayama K, Panozzo D, Sorkine-Hornung O (2014) Pattern-based quadrangulation for N-
sided patches. In: Proceedings of the symposium on geometry processing 2014. Eurographics
Association, pp 177–184
118. Peng C-H, Barton M, Jiang C, Wonka P (2014) Exploring quadrangulations. Assoc Comp
Mac Trans Graph 33:12
119. Conway JH, Burgiel H, Goodman-Strauss C (2016) The symmetries of things. CRC Press
147. Savov A, Tessmann O, Nielsen SA (2016) Sensitive assembly: gamifying the design and
assembly of façade wall prototypes. Int J Archit Comput 14:30–48
148. Lin Y-C, Chen Y-P, Yien H-W, Huang C-Y, Su Y-C (2018) Integrated BIM, game engine
and VR technologies for healthcare design: a case study in cancer hospital. Adv Eng Inform
36:130–145
149. Raghothama J, Hauge JB, Meijer S (2022) Curating player experience through simulations
in city games. Urban Planning. https://doi.org/10.17645/up.v7i2.5031
150. Hauge JB, Carretero MR, Kodjabachian J, Meijer S, Raghothama J, Duqueroie B (2016)
ProtoWorld a simulation based gaming environment to model and plan urban mobility. In:
Lecture notes in computer science. Springer International Publishing, pp 393–400
151. Chakraborty N, Haworth B, Usman M, Berseth G, Faloutsos P, Kapadia M (2017) Crowd
sourced co-design of floor plans using simulation guided games. In: Proceedings of the tenth
international conference on motion in games. https://doi.org/10.1145/3136457.3136463
152. Khoury M, Gibson MJ, Savic D, Chen AS, Vamvakeridou-Lyroudia L, Langford H, Wigley S
(2018) A serious game designed to explore and understand the complexities of flood mitigation
options in urban–rural catchments. Water 10:1885
153. Lim SJ, Vasilatou V, Wuu SH (2020) The use of CA to generate informal architectural systems.
In: Proceedings of the 11th annual symposium on simulation for architecture and urban design,
pp 1–8
154. Soman A, Azadi S, Nourian P (2022) DeciGenArch: a generative design methodology for
architectural configuration via multi-criteria decision analysis. In: Proceedings of eCAADe
2022. Education; research in Computer Aided Architectural Design in Europe, p forthcoming
155. Veloso P, Krishnamurti R (2019) Multi-agent space planning: a literature review (2008–2017). In: Lee
J-H (ed) Hello, Culture! 18th international conference, CAAD Futures 2019, proceedings.
ISBN 978-8-89453-05-5. Daejeon, Korea, pp 52–74
156. Savov A, Tessmann O (2017) Introduction to playable voxel-shape grammars. ACADIA
proceedings. https://doi.org/10.52842/conf.acadia.2017.534
157. Savov A, Buckton B, Tessmann O (2016) 20,000 blocks: can gameplay be used to guide non-
expert groups in creating architecture? ACADIA Proceed. https://doi.org/10.52842/conf.acadia.2016.024
158. Kelly G, McCabe H (2006) A survey of procedural techniques for city generation. ITB J
14:342–351
159. Parish YIH, Müller P (2001) Procedural modeling of cities. In: Proceedings of the 28th annual
conference on computer graphics and interactive techniques. SIGGRAPH '01. https://doi.org/10.1145/383259.383292
160. Duering S, Chronis A, Koenig R (2020) Optimizing urban systems: integrated optimization
of spatial configurations. In: Proceedings of the 11th annual symposium on simulation for
architecture and urban design, pp 1–7
161. Lu SC-Y, Elmaraghy W, Schuh G, Wilhelm R (2007) A scientific foundation of collaborative
engineering. CIRP Ann 56:605–634
162. Yenardi A, Janssen P (2021) Mass participatory design on the web: a voxel-based 3D
modelling approach
163. Nourian P (2016) Configraphics: graph theoretical methods for design and analysis of spatial
configurations. https://doi.org/10.7480/isbn.9789461867209
164. Regenwetter L, Ahmed F (2022) Towards goal, feasibility, and diversity-oriented deep
generative models in design
165. Regenwetter L, Nobari AH, Ahmed F (2022) Deep generative models in engineering design:
a review. J Mech Des 144:071704
166. Conti ZX, Kaijima S (2021) Explainable ML: augmenting the interpretability of numerical
simulation using Bayesian networks. The Routledge companion to artificial intelligence in
architecture. Routledge, pp 315–335
167. Bhatt M, Freksa C (2015) Spatial computing for design—an artificial intelligence perspec-
tive. In: Gero JS (ed) Studying visual and spatial reasoning for design creativity. Springer,
Netherlands, Dordrecht, pp 109–127
168. Marin R, Rampini A, Castellani U, Rodolà E, Ovsjanikov M, Melzi S (2021) Spectral shape
recovery and analysis via data-driven connections. Int J Comput Vis 129:2745–2760
169. Pearl J (1988) Probabilistic reasoning in intelligent systems: networks of plausible inference.
Morgan Kaufmann
170. Koller D, Friedman N (2009) Probabilistic graphical models: principles and techniques. MIT
Press, Cambridge, MA
171. Montavon G, Orr GB, Müller K-R (eds) (2012) Neural networks: tricks of the trade, 2nd edn.
https://doi.org/10.1007/978-3-642-35289-8
Augmented Reality-Driven Prototyping
for Error Elimination During
Computational Design and Digital
Fabrication
Odysseas Kontovourkis
O. Kontovourkis (B)
Department of Architecture, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus
e-mail: kontovourkis.odysseas@ucy.ac.cy
1 Introduction
lead to major geometrical errors in the execution phase of the process and production
of physical prototypes.
Augmented Reality (AR) technology has been increasingly applied in the fields of
architecture and the construction industry over the last few decades, especially in the
context of the transition to Industry 4.0, which refers to the introduction of advanced
technologies and digitization in the construction sector. The ability of AR technology
to allow real-time interaction between the physical and digital environment and
in particular its ability to project digital information that is visible to the users in
the physical space [1] allows the emergence of a range of advantages in terms of
architectural design and construction processes.
Among the other advantages that this technology can bring to the construction
industry is its easy and low-cost use in the fabrication of complex shapes, an
issue under consideration in the modern integrative processes from digital design
to automated construction. More specifically, AR technology offers an alternative
way of using digital construction media towards more efficient, low-cost and faster
processing of complex structures. This might offer an alternative and productive inte-
gration of computational design and digital fabrication tools and at the same time
might allow the active and efficient contribution of users during the process.
Within this context, the next section of this chapter provides a literature review of
the latest advances in this area of investigation and more specifically discusses
the integration of AR technology within the framework of computational design to
digital fabrication, which can be expanded into different directions and objectives.
The theoretical investigation provides a generic overview of the latest developments
in this direction of research and assists in the formulation of the methodology
together with its application in the specific case study scenario of an undergraduate
course.
2 Relevant Works
natural user interface (NUI), cloud computing and mobile devices, aiming to assist
AR technology towards specific architectural, engineering and construction tasks.
Since then, significant advances have been made as regards the introduction of AR
tools and technologies in the AEC domain, with applications mainly focused
on visualization, inspection, and monitoring on job sites, allowing decisions to be
made [4], but without neglecting other implementations such as the perception of 3D
exterior and interior environments and collaborative, user-centric design at various
scales.
Recently, significant work has been done towards the implementation of AR technology to enhance manufacturing and construction activities, which might allow a more productive and efficient involvement of unskilled users/workers in various construction tasks. This is due to the maturity of AR technology, which has reached a stage where digital platforms linked with AR devices are widely available and open to the public for investigation. Special focus has been given to complex and non-standard designs, where the required accuracy in the implementation of predefined tasks and the processing of large amounts of information often lead to errors and deviations during fabrication.
In a recent work by Song et al. [5], the application of AR for digital fabrication in architecture was examined based on a literature review and an analysis of the respective works and publications. Briefly, in this work, AR applications were classified into three main categories: AR 3D holographic instruction, AR data sharing and AR for human–computer interaction. The first category, that of 3D holographic instruction, was found to be the most widely published in recent years, with two sub-categories distinguished: AR 3D holographic instruction for assembly and AR 3D holographic instruction for fabrication.
Regarding the assembly process, several works attempt to apply AR to bricklaying procedures, especially in the case of complex wall structures that can be implemented by unskilled workers. Fazel and Izadi [6] introduced an affordable AR procedure for constructing free-form modular surfaces based on accessible devices, which included two cameras, two markers and one head-mounted display (HMD) based on the Google Cardboard model. Several real-scale walls and columns were used as case studies for experimental execution, which included the projection of virtual guidelines augmented on the HMD screen and the user's adjustment and bonding of bricks. These experiments showed that the majority of errors were smaller than 6 mm and less than 2°
for orientation. Similarly, Jahn et al. [7] demonstrated an AR approach for the holographic construction of a parametric wall in the form of a double-curved geometry, based on the parametric environment of Grasshopper [8] with the use of Fologram [9], a plug-in for Grasshopper that allows viewing and sharing of information in the mixed reality environment, and the Microsoft HoloLens [10], a commercially available HMD. In this case, various planar courses of bricks were created, and information related to the outline of the top face of each brick was overlaid on the construction site so that the bricklayers could view each course separately, making the necessary corrections to the brick placement. The physical wall result showed that the use of the holographic model reduced construction time compared with time-consuming manual methods of measuring and setting out brick locations.
48 O. Kontovourkis
Also, in terms of accuracy, the result was scanned and compared with the digital model, showing average deviations per brick course ranging between 5 and 21 mm, due to dimensional differences between the bricks.
Other examples using different components and materials can be found in similar directions, involving the implementation of AR technology for the assembly of structural parts and modules through the projection of digital information or guidelines for their accurate placement. In the work by Jahn et al. [11, 12], a method for steam-bent timber elements was introduced, assisted by AR technology, specifically the Fologram plug-in for Grasshopper with the HoloLens HMD. According to the suggested approach, temporary formworks were placed, and an approximate method of forming timber was conducted based on AR. The results show that the use of AR for forming and assembling was a fast and reliable method for fabricating the bent elements, due to the ability of this technology to allow easy positioning of formworks while eliminating their production cost.
Similarly, in the work by Jahn et al. [13], the Fologram AR plug-in with HoloLens hardware was applied for bending and then assembling, in this case, mild steel tubes. In both cases, although deviations and errors might occur due to the available materials and the skills of the workers, the importance and benefits of the method for the construction of complex structures were recognized.
Examples of digital information and sequences of instructions that drive assembly with the application of AR can also be observed in the literature. In the work by Lharchi et al. [14], a framework for architectural applications was demonstrated based on an augmented assembly process using AR devices. In particular, a proposed Assembly Digital Model (ADM) was accessed by different stakeholders through a web-based platform with a 3D viewer for on-site mobile, smartphone and tablet usage. The users were able to assemble complex structures without previous knowledge or experience, only by extracting step-by-step information and guidelines regarding the assembly process, again through the HoloLens. Also, the work of
Wu et al. [15] presented a method combining bamboo material analysis, machine learning and an augmented reality application. In this case, the system allowed the users to interactively create their own space and, after scanning the available bamboo poles, taught them how to assemble the bamboo components in a sequence of steps using smartphone devices, resulting in an effective and fast construction of complex structures. The application of AR technology as an information-driven mechanism that directs the assembly sequence can also be observed in the work by Kontovourkis et al. [16]. In this work, an AR-driven methodology for assembling a modular shading device was presented, stressing the necessity for such an application especially in cases where structures consist of a large number of elements and complex assembly tasks. In this work, construction time issues were investigated by measuring the time required by skilled and unskilled users/workers, showing that their assembly time performance can be improved with the use of AR, especially during the first layers of assembly.
A large part of AR applications in digital fabrication is associated with the inte-
gration of AR technology into robotic manufacturing processes [17], which can be
classified under the category of AR for human–computer interaction according to the
work by Song et al. [5]. In this case, the need for seamless processes that incorporate
Augmented Reality-Driven Prototyping for Error Elimination During … 49
AR and robotic technology for fabrication might lead towards a fully automated and enhanced manufacturing process. An example in this direction was presented in the work by Johns [18], where the user was able to draw on the floor the trace of a complex wall, which was then projected in front of them using custom-made augmented reality glasses. The digital information could then be transferred to an industrial robot to build the wall by picking and placing brick components.
On the one hand, the increased cost of using robotic mechanisms in digital manufacturing processes and, on the other, the need to introduce new and economically beneficial technologies that reduce manufacturing costs have led to AR-based application solutions, as stressed and demonstrated in several works [15].
But beyond the cost of automated mechanisms, an important reason that has influenced the application of AR technology, discussed in many research examples, is its ability to achieve precision in the construction of complex structures, along with the easy and effective guidance it provides to unskilled users, workers, bricklayers, etc. towards the development of those structures. Furthermore, the available AR technology can provide a fast and reliable platform for parametric design and control, visualization and inspection, eliminating in this way the time required for programming and controlling digital structures before their actual fabrication.
Nowadays, the literature emphasizes that the rapid development and application of AR technology has been achieved thanks to widely available tools, but also to the development of cloud computing and smartphones. In addition, parametric associative design environments like Grasshopper, a plug-in for Rhino, and the ability of developers to make AR technology plug-ins, like Fologram [9], available to the broader community have helped in this direction. Finally, the rapid development of HMDs for AR, such as the HoloLens [10], has also benefited this direction of research.
The above literature review, regarding the application of AR technology in several examples of computational design and digital fabrication as well as the currently available technologies, reveals their great potential as mechanisms to enhance users' intervention in the workflow, offering advantages over processes where the user operates without the aid of technology. However, this implementation is still at an early stage of adoption in the AEC industry, with a number of issues still unresolved and requiring further exploration and consideration. An important aspect is the accuracy of the physical results obtained with the available AR technology, which in the future might lead to wide adoption in real-scale construction scenarios.
Towards this direction, the current research work proposes a specific computational design and fabrication workflow integrating AR technology as a mechanism to increase the accuracy of physical prototypes produced when users/workers are involved. The integration of AR technology into a specific digital design and construction workflow aims to demonstrate the advantages that this technology can bring in terms of geometrical accuracy, based on quantitative and qualitative criteria, and discusses advantages and disadvantages over users' intervention without AR assistance.
The following section describes the methodology adopted for this research work, outlining the two basic steps undertaken: the computational design and digital fabrication process based on manual user/worker intervention, and the AR-assisted one. In both cases, the parametric design and control algorithms introduced are explicitly analysed and discussed, together with the mechanism for their comparison. Then, results are analysed and discussed, and finally, concluding remarks are provided.
3 Methodology
In order to achieve this, the study initially analyses the geometrical accuracy of results obtained from a computational design to digital fabrication workflow. The design results and physical prototypes were developed as part of an undergraduate course taught by the author in the Department of Architecture at the University of Cyprus during Fall 2021.
Within the framework of the above-mentioned course, although a CNC router was introduced at the fabrication stage for formwork development, human involvement was an indispensable part of the process towards the physical execution of prototypes. Inevitably, this led to numerous deviations of the physical outcomes compared to the digital ones. The results were analysed using an algorithm that compares the digital design prototypes with the physical ones after their scanning using a structured light scanner. The deviations that occurred due to the difference between digital and physical prototypes are presented and comparatively discussed. Then, the same digital prototypes are selected according to quantitative and qualitative criteria for the second part of the process, that of physical prototyping with the application of Augmented Reality (AR) technology. To achieve this, the formworks produced through CNC milling are incorporated into a custom-made mechanism for AR implementation, in order to specify the formwork position for casting accurately.
The final part of the process includes the scanning of the physical prototypes developed with AR and their comparison with regard to the deviations and errors that occurred during physical production. The aim is to provide an indication of the differences between the two fabrication approaches: the manually driven one, where formworks are positioned by students without the assistance of AR technology, and the AR-driven one, where AR technology is introduced as part of the fabrication process.
The current study begins with the development of a series of 3D digital models and their physical implementation, a process that took place in the context of the undergraduate course focused on introducing students to computational design and digital fabrication during Fall 2021. The procedure referred to specific and distinct implementation steps, which the students were aware of and followed in order to implement their design solutions as well as their physical prototypes. Briefly, the steps of the proposed process were divided into two main stages, computational design and digital fabrication. In the first stage, that of computational design, the following steps were undertaken:
. Topology Optimization diagrams
Diagrammatic results of analysis based on Topology Optimization (TO) principles were provided to the 28 students enrolled in this course. A structural beam was used as the case study under investigation, subjected to one specific boundary condition and six different loading conditions. These diagrams were produced using TopOpt [19], a TO plug-in for Grasshopper [8] in the Rhino 3D NURBS modelling software [20], developed based on the work by Sigmund [21]. The diagrams were initially produced for a similar concept as part of the same course held during Fall 2019 and have been presented in previous work by the author [22]. The respective TO diagrams, together with grids consisting of 100 cells and 2D domains, were provided as the starting point for the development of the free-form beam shapes, following the suggested computational design to digital fabrication process. The TO analysis itself is not presented in this chapter since it is out of the scope of the current research work; however, the analytical description of the geometrical boundary conditions, input loading conditions and parameters used for the TO analysis can be found in previous work published by the author [22]. Figure 1 demonstrates the six diagrams developed through TO analysis in the form of contour curves.
Fig. 1 Six diagrams of TO analysis of case studies (CSs), demonstrating the material distribution
according to different conditions and parameters
. Variable values
According to the number of participants in the course held during Fall 2021, 28 digital 3D models in the form of design cases (DCs) were expected to be produced. As already mentioned, the objective of this computational design exercise was to provide an understanding of parametric design principles by defining design variations according to the distance of each cell from the contour lines of the TO diagrams. Thus, the mass generated and distributed in the overall shape is varied according to the distance of cells from the contour curves. Specifically, cells further from the contour curves minimize their mass, while cells closer to the contour curves maximize it. The accumulated distribution of mass according to distance leads to the generation of the overall shape and hence the development of the free-form beam structures. This process distributes material in the areas where it is needed according to the TO analysis results, similar to functionally graded structures [23–25].
The allocation of the loading and mass distribution diagrams to the students was followed by the three-dimensional development of their structures, defining a list of variable values in order to achieve design variety in accordance with the 100-cell grid and the specific 2D domain provided. The perpendicular distances between cell centres and contour curves in the XY plane were recorded and presented in the form of tables (Fig. 2).
In a subsequent stage, the recorded distances are converted into height values in the Z direction through an inversely proportional relationship, determined by simple mathematical equations defined for each design case, so that the mass increases or decreases depending on whether the distance is smaller or larger. Finally, points are created at a distance from the centres of the cells equal to the Z height values.
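The distance-to-height conversion can be sketched as follows. The actual equations differed per design case and are not reproduced in the text, so the linear inverse mapping and the z_min/z_max limits below are hypothetical, chosen only to illustrate the principle that cells closer to a contour curve receive a larger Z value (more mass):

```python
# A minimal sketch of the distance-to-height mapping (assumed form): cells far
# from a TO contour curve receive a small Z value (less mass), near cells a
# large one. The linear inverse form and the z_min/z_max limits are
# hypothetical; the actual per-design-case equations are not reproduced here.

def height_from_distance(d, d_max, z_min=5.0, z_max=50.0):
    """Inversely proportional mapping: distance d in [0, d_max] -> Z height."""
    t = min(max(d / d_max, 0.0), 1.0)   # normalized, clamped distance
    return z_max - t * (z_max - z_min)  # d = 0 -> z_max, d = d_max -> z_min

# Example: perpendicular distances (mm) of four cell centres to a contour curve
distances = [0.0, 10.0, 25.0, 50.0]
heights = [height_from_distance(d, d_max=50.0) for d in distances]
```

Any monotonically decreasing function of distance would serve the same purpose; the linear form simply keeps the example easy to verify.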
Fig. 2 Diagram indicating the initial measurements as regard the distance between cells centres
and contour curves (Student Alina Neophytou)
Fig. 3 Design of basic lines or curves based on the height values (Students David Pereira, Christos
Anthimou and Rafail Sarmallis)
. Curves configuration
The introduction of intuitive design criteria by each student was considered an important step in the design development stage, together with more objective ones. Specifically, the main design objective was to achieve a balance between the free-form shape of the structure and the formwork, in order to reduce material while maintaining structural and construction efficiency. At this stage, the design process was initiated through the definition of basic lines or curves, with a view to the subsequent development of the three-dimensional structure. Figure 3 demonstrates different basic line or curve proposals.
Fig. 4 Main closed polysurface development (Students Styliani Ioakeim and Ana Maria Ioannou)
Fig. 5 a Final digital prototypes and b respective formworks (Students Alina Neophytou and
Christos Anthimou)
In the second stage, that of digital fabrication, the following steps were undertaken:
. CNC toolpath generation
In order to produce the physical prototypes through the gypsum casting process, the two free-form formworks designed in each design case (DC) proceeded to production using a CNC milling machine. The toolpaths for formwork CNC milling were calculated using DeskProto Version 7.1 [26]. The formworks were milled at scale 1:20 based on the parameters indicated in Table 1. Also, Fig. 6a shows toolpaths and Fig. 6b simulation results throughout the process. The parameters and calculation process lead to the creation of an NC-program file (.iso) that is sent to the CNC machine for execution.
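For orientation, a CAM package such as DeskProto computes toolpaths from the formwork geometry and the milling parameters; the raster (zig-zag) pass below is only a hypothetical, minimal stand-in for that calculation, with invented stepover and clearance values rather than those of Table 1:

```python
# Hypothetical minimal zig-zag (raster) roughing pass over a heightfield,
# illustrating the kind of toolpath a CAM package such as DeskProto computes.
# The stepover and clearance values are invented, not those of Table 1.

def zigzag_toolpath(height, x_step, y_step, nx, ny, clearance=1.0):
    """Return a list of (x, y, z) tool positions covering an nx-by-ny grid.

    `height(x, y)` gives the target surface Z; the tool is kept `clearance`
    mm above it, as a stand-in for a roughing offset.
    """
    path = []
    for j in range(ny):
        xs = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)  # alternate direction
        for i in xs:
            x, y = i * x_step, j * y_step
            path.append((x, y, height(x, y) + clearance))
    return path

# Flat 3 x 2 demo surface at Z = 0
path = zigzag_toolpath(lambda x, y: 0.0, x_step=2.0, y_step=2.0, nx=3, ny=2)
```

A post-processor would then translate such point lists into machine-specific NC code, analogous to the .iso file mentioned above.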
Fig. 6 CNC milling calculation toolpaths and simulation of a characteristic DC (Student Alina
Neophytou)
Fig. 7 CNC milling procedure and formwork results (Students Christos Anthimou and Styliani
Ioakeim)
Fig. 8 Formworks adjustment and gypsum casting (Students Styliani Ioakeim and Katerina Pavlou)
In order to detect the geometric deviations and errors that arise in the fabrication process of the manually driven fabrication studies (M-FSs), this part of the methodology describes an accuracy evaluation approach for the production of their physical prototypes, comparing the results obtained with the digital ones.
Initially, the physical prototypes are scanned using a structured light scanner. The process involves placing the physical prototype on a turntable at a set distance from the projector and cameras, which capture consecutive images of the prototype. The main settings refer to 20 segments that make up a complete rotation around the axis of the prototype, which are then integrated with the help of the software to create the final 3D prototype in mesh form (Fig. 10).
Fig. 9 Selected physical prototypes based on manually driven fabrication (Students Christos Anthimou, Rafail Sarmallis, Panayiotis Anastasiou)
Fig. 10 3D scanning of physical prototypes using the structured light scanning procedure
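The turntable integration step can be illustrated with a simplified sketch: with 20 segments per full rotation, each segment is offset by 18° about the table axis, and rotating its points back by that amount brings all segments into a common frame. The scanner's own software additionally refines the alignment, which is omitted here:

```python
import math

# Simplified sketch of turntable registration: with 20 segments per full
# rotation, segment i is captured after the table has turned i * 18 degrees,
# so rotating its points back by -i * 18 degrees about the (assumed vertical)
# table axis brings all segments into one frame. Real scanner software also
# refines this alignment; that refinement is omitted here.

SEGMENTS = 20
STEP = 2 * math.pi / SEGMENTS  # 18 degrees in radians

def to_common_frame(points, segment_index):
    """Rotate a segment's (x, y, z) points back about the Z (table) axis."""
    a = -segment_index * STEP
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in points]

# A point captured in segment 5 (table turned 90 degrees) maps back accordingly
merged = to_common_frame([(1.0, 0.0, 0.5)], segment_index=5)
```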
For the accuracy evaluation of the results, a visual algorithm developed in the parametric design environment of the Grasshopper plug-in is proposed. The purpose of the algorithm is mainly to provide quantitative and qualitative results regarding the geometrical differences between the mesh model of the physical prototype and the polysurface model of the digital one.
Specifically, the digital file of each physical prototype is imported into the parametric environment as mesh geometry and checked in terms of its total volume (mm3). Also, in the same algorithm, the digital file of each generated model, in the form of a polysurface produced through the computational design procedure, is imported and checked with regard to its total volume (mm3). The two volumes can then be compared, indicating the physical/digital percentage of volume differentiation (%).
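The volume check itself is performed with Grasshopper components, but it can be reproduced in a few lines: for a closed, consistently oriented triangle mesh, summing signed tetrahedron volumes (a divergence-theorem identity) yields the enclosed volume. A minimal sketch, verified on a unit tetrahedron:

```python
# Standalone sketch (not the actual Grasshopper components): the enclosed
# volume of a closed, consistently oriented triangle mesh is the sum of the
# signed volumes of tetrahedra spanned by the origin and each face.

def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh given as vertex-index triples."""
    vol = 0.0
    for a, b, c in faces:
        (x1, y1, z1) = vertices[a]
        (x2, y2, z2) = vertices[b]
        (x3, y3, z3) = vertices[c]
        # scalar triple product v1 . (v2 x v3) / 6
        vol += (x1 * (y2 * z3 - z2 * y3)
                - y1 * (x2 * z3 - z2 * x3)
                + z1 * (x2 * y3 - y2 * x3)) / 6.0
    return abs(vol)

# Unit tetrahedron: expected volume 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
v = mesh_volume(verts, faces)
```

The physical/digital percentage then follows directly as the ratio of the two volumes multiplied by 100.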
Regarding the calculation of the deviation of all surfaces resulting from the physical and digital prototypes, the algorithmic steps are as follows. Firstly, due to the arbitrary positioning of the scanned mesh file in the digital environment relative to the digital model, an accurate alignment of the mesh model of the physical prototype with the polysurface model of the digital one is necessary. To achieve this, a single-objective Genetic Algorithm (GA) is introduced that calculates the exact transformation (rotation) angles about the three axes (X, Y and Z) of a bounding box enclosing the mesh geometry. The minimum box volume is used as the fitness value in the Galapagos evolutionary solver in the Grasshopper plug-in [8]. The optimized rotation angles of the mesh bounding box and the orientation of the digital model's bounding box are used to re-orient the mesh geometry so that it accurately aligns with the digital prototype. Secondly, the centre points of the faces of each mesh geometry are extracted and, by using a parametric component that finds the closest point on a BREP geometry, the distance (mm) between each centre point and the closest point on the polysurface is calculated. Due to the large number of faces in each mesh, and hence the large number of distance values obtained, the calculation yields high-resolution results over the entire surface coverage of the models.
Fig. 11 Colour gradation of the minimum and maximum distance deviation between the physical and the digital prototype of M-FS 10
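The alignment step can be sketched outside Galapagos. The stand-in below is a minimal (1+1) evolutionary search, not the Galapagos solver itself: it mutates the three rotation angles and keeps a candidate whenever the axis-aligned bounding-box volume (the fitness value) decreases:

```python
import math
import random

# Stand-in for the Galapagos step: a minimal (1+1) evolutionary search over
# rotation angles (rx, ry, rz) minimizing the axis-aligned bounding-box
# volume of a point set. This is NOT the Galapagos solver itself, only a
# sketch of the single-objective optimization it performs here.

def rotate(p, rx, ry, rz):
    """Rotate point p about the X, then Y, then Z axes (angles in radians)."""
    x, y, z = p
    y, z = y * math.cos(rx) - z * math.sin(rx), y * math.sin(rx) + z * math.cos(rx)
    x, z = x * math.cos(ry) + z * math.sin(ry), -x * math.sin(ry) + z * math.cos(ry)
    x, y = x * math.cos(rz) - y * math.sin(rz), x * math.sin(rz) + y * math.cos(rz)
    return x, y, z

def aabb_volume(points):
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs)) * (max(ys) - min(ys)) * (max(zs) - min(zs))

def align_by_bbox(points, iters=800, seed=1):
    """Search for rotation angles whose bounding-box volume is near-minimal."""
    rng = random.Random(seed)
    best, best_vol, sigma = (0.0, 0.0, 0.0), aabb_volume(points), 0.4
    for _ in range(iters):
        cand = tuple(a + rng.gauss(0.0, sigma) for a in best)
        vol = aabb_volume([rotate(p, *cand) for p in points])
        if vol < best_vol:   # keep the fitter candidate
            best, best_vol = cand, vol
        sigma *= 0.996       # slowly narrow the search
    return best, best_vol

# Corners of a 2 x 1 x 1 box, "scanned" at an arbitrary 30-degree yaw
corners = [(x, y, z) for x in (0, 2) for y in (0, 1) for z in (0, 1)]
scanned = [rotate(p, 0.0, 0.0, math.radians(30)) for p in corners]
angles, vol = align_by_bbox(scanned)  # vol should approach the true 2.0
```

The recovered angles play the role of the optimized rotation described above: applying them re-orients the scanned points so that their bounding box matches that of the digital model.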
Also, the algorithm provides measurable information regarding the minimum and maximum distance (mm) of each face centre from the digital surface, as well as the average distance value (mm) obtained over all faces. Finally, through the colour gradation from the minimum to the maximum distance, it is possible to qualitatively investigate the results and detect the areas with the least and greatest error (Fig. 11).
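The reporting and colour-gradation stage can likewise be sketched independently of Grasshopper; the sample distances below are invented, and the blue-to-red ramp merely stands in for whatever gradient component the actual definition uses:

```python
# Sketch of the statistics and colour-mapping stage. The distances are
# invented sample values; a real run would use one distance per mesh face
# centre, and the blue-to-red ramp stands in for the gradient component.

def deviation_report(distances):
    """Min, max and average deviation (mm) over all face-centre distances."""
    return min(distances), max(distances), sum(distances) / len(distances)

def gradient_colour(d, d_min, d_max):
    """Map a distance onto a blue (min) -> red (max) gradient as 8-bit RGB."""
    t = 0.0 if d_max == d_min else (d - d_min) / (d_max - d_min)
    return int(round(255 * t)), 0, int(round(255 * (1 - t)))

dists = [0.2, 1.5, 3.1, 0.9]
lo, hi, avg = deviation_report(dists)
colour_worst = gradient_colour(hi, lo, hi)  # largest error shows as pure red
```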
In the next step, a number of AR-driven Fabrication Studies (AR-FSs) are selected to repeat the fabrication process, this time applying AR technology. The difference between this process and the previous one lies in the way the two formworks of each prototype are set up for gypsum casting. The steps regarding the computational design and digital fabrication of formworks using the CNC milling process remain the same. The errors occurred in the phase of adjusting the position of the formworks for casting, and for this reason, this step is taken into further consideration.
As mentioned in the second part of this chapter, the latest developments in Augmented Reality (AR) technology, with open-source tools and plug-ins widely available for well-known parametric design platforms (for instance, the Fologram plug-in), allow seamless and uninterrupted workflows to emerge within the same digital environment. Together with the flexibility offered for real-time modification of parametric variables through parametric design platforms (for instance, the Grasshopper plug-in in the Rhino 3D NURBS modelling software), this offers new opportunities to consider alternative solutions using AR technology. Also, the Fologram plug-in allows the projection and viewing of digital/parametric results through simple devices such as mobile phones and tablets. This demonstrates additional advantages, such as reducing the cost of purchasing dedicated devices by replacing them with widely available devices used on a daily basis; the cost of purchasing HMDs could be a deterrent to the application of the technology in some cases.
The present study uses the above-mentioned software due to the possibilities of integrating them within a unified parametric design framework, building on the previous investigation regarding the evaluation of the geometric accuracy of the physical prototypes. Hence, in the digital environment of Grasshopper, and based on the algorithm developed above, Fologram components are incorporated to handle the AR process. Specifically, the goal is to place the digital hologram in the real space and overlay it within a proposed mechanism, for the highest possible accuracy in setting the position of the formwork in each case. The existing algorithm incorporates additional parametric components for AR-driven formwork adjustment.
More specifically, the polysurface file produced during computational design is used as the starting point. This prototype is appropriately rotated to facilitate the casting of the material as well as the projection of the hologram in the physical environment, based on the specific structure and function of the proposed formwork mechanism. Two main surfaces of the prototype appear as holograms, the upper and the lower, in order to overlap and align them with the boundaries of the formworks produced through CNC milling. For the projection of the two surfaces, the Fologram component that deals with synchronized geometry is used, controlled by a number slider according to the surface selected for holographic projection. In order for the users/workers to select and control which of the two surfaces is used during the fabrication process, the number slider controlling the projection of the upper or lower surface is connected with the Fologram plug-in through the sync parameters component. The surface holograms can be viewed using the Fologram application on smartphone devices. Initially, a QR code is generated in the Rhino software and scanned by the mobile device. Then, through the Fologram app, the surface for projection is scanned and a specific position for hologram placement is defined by the users/workers. The mobile device is then placed on a specially designed support system that, together with the formwork containers, constitutes the basic structure of the proposed mechanism. This system allows easy adjustment of the formwork position according to the holographic projection of the prototype surface (Fig. 12). This is followed by locking the formwork position and casting the gypsum, which hardens and is then removed from the formworks (Fig. 13).
As a final step in the AR-driven fabrication process, the physical prototypes are scanned using the structured light scanner and imported into the previously presented algorithm for a new accuracy evaluation. The results obtained from the accuracy evaluation based on manual adjustment and those based on AR-driven adjustment are compared and discussed.
In this section, the results from both the first phase of experimentation, i.e. the analysis of the accuracy in the production of the prototypes with manual adjustment of formworks, and the results from the second phase, i.e. the analysis of the accuracy of the results produced with the application of AR technology, are presented and discussed. Specifically, the first part presents the results obtained from the analysis of 10 out of the 28 physical prototypes produced. The second part presents the results obtained from the analysis of 3 examples selected from the 10 prototypes examined in the first phase. The listing and comparison of the results leads to important conclusions regarding the usefulness of incorporating the proposed process into the suggested computational design to digital fabrication workflow. Undoubtedly, the results are based on a number of assumptions, which can vary from case to case; thus, the evaluation of the process requires consideration of the parameters and criteria set in each case.
From the 28 case studies originally produced during the undergraduate course, based on the suggested computational design to digital fabrication workflow with manual adjustment of formworks, 10 samples of M-FSs are selected for further study by checking the accuracy of their geometry, in an effort to cover solutions with different design criteria. The following paragraphs summarize the quantitative and qualitative results of this comparison based on the algorithm presented in the previous sections of the chapter.
Table 2 summarizes the results obtained from the first evaluation phase. As can be observed, the 10 M-FSs, together with measurable information regarding deviations
Fig. 15 Volume deviation tendency. Digital prototypes with large volume have the smallest percentage of physical/digital volume deviation
The deviations observed in the above results of the 10 prototypes are re-examined through their re-fabrication using AR technology. From the 10 M-FSs examined in the previous phase, 3 are selected for further processing; the 3 new results are called AR-driven Fabrication Studies (AR-FSs). The specific examples are selected based on the quantitative and qualitative deviation results observed during the previous investigation. Specifically, the samples selected are M-FS 3, M-FS 5 and M-FS 8, in an attempt to explore the possibility of reducing their relatively large deviations in relation to the other examples.
Fig. 16 Comparison between distance and physical/digital volume deviations of the 10 M-FSs
Fig. 17 Colour intensity of distance deviation between physical and digital surfaces in the samples
of the M-FS 10 and M-FS 8
As mentioned in the previous section, the 3 selected cases go through the same CNC milling procedure for formwork development, with the only difference being the formwork adjustment, which involves AR technology operated by the user/worker. The physical prototypes produced through the AR-driven approach are scanned again using the structured light scanner settings introduced previously. Then, using the accuracy evaluation algorithm already introduced in the methodology part of the chapter, deviation results are obtained.
Table 3 provides the deviation results of the selected M-FSs from the previous phase of the research and the results of the respective AR-FS prototypes produced during the AR-driven fabrication procedure.
Figure 18 demonstrates the percentage physical/digital volume deviation of the AR-FSs produced based on AR-driven adjustment of formworks, in comparison with the results of volume deviation obtained in the previous phase
66 O. Kontovourkis
Table 3 Accuracy evaluation results of the 3 AR-FSs in comparison with the results of the respective M-FSs

A/A | Case    | Min (mm)   | Max (mm) | Average (mm) | Digital (mm³) | Physical (mm³) | Physical/Digital (%)
1   | M-FS 3  | 1.2707E−06 | 9.319831 | 2.061577     | 181793.46     | 241870.665     | 133.04
    | AR-FS 1 | 4.81E−07   | 7.944864 | 1.346931     | 181793.46     | 178605.87      | 98.24
2   | M-FS 5  | 1.1165E−06 | 7.845024 | 1.480337     | 203255.06     | 241870.665     | 118.99
    | AR-FS 2 | 2.61E−06   | 5.582048 | 0.940000     | 203255.06     | 208350.12      | 102.5
3   | M-FS 8  | 2.6095E−06 | 11.36906 | 3.604892     | 247651.52     | 372097.324     | 150.25
    | AR-FS 3 | 4.13E−06   | 9.979444 | 2.154686     | 247651.52     | 284182.24      | 114.75

Min, Max and Average refer to distance deviation; Digital and Physical refer to volume.
Fig. 18 Physical/digital volume deviation in percentage and comparison between M-FSs and AR-
FSs
Fig. 19 Comparison between average distance deviation of selected M-FSs and AR-FSs
In AR-FS 1, the average distance deviation was reduced from 2.06 to 1.34 mm. Similarly, in AR-FS 2 and AR-FS 3, the average distance deviations of 1.48 and 3.60 mm were reduced to 0.94 and 2.15 mm, respectively.
Figure 20 demonstrates the qualitative results obtained from the comparison between physical prototypes developed through manually driven formwork adjustment and AR-driven ones. Specifically, the qualitative results concern the colour gradations of the perpendicular distance deviations between the surfaces of the physical and digital prototypes. Figure 20a shows the distance deviations observed in the case of M-FS 3 and AR-FS 1. In this case, the minimum deviation is 1.27E−06 and 4.81E−07 mm, respectively. The maximum distance deviation is 9.31 and 7.94 mm, respectively, with the average deviation at 2.06 and 1.34 mm, respectively. Figure 20b demonstrates the distance deviation for M-FS 5 and AR-FS 2. In this case, the minimum deviation is 1.16E−06 and 2.61E−06 mm, respectively. The maximum distance deviation is 7.84 and 5.58 mm, respectively, with the average deviation at 1.48 and 0.94 mm, respectively. Figure 20c compares the deviations of M-FS 8 and AR-FS 3. The minimum, maximum and average distance deviations for M-FS 8 are 2.60E−06, 11.36 and 3.60 mm, respectively, while for AR-FS 3 they are 4.13E−06, 9.97 and 2.15 mm, respectively.
The colour intensity ranges from green, showing areas with minimum distance deviation, to red, showing areas with maximum distance deviation. The comparison between the pairs of cases examined above shows an improvement in the deviation measurement indicators in the second column, where the prototypes are the result of adjusting the position of the formworks using AR technology. This indicates the usefulness of the proposed procedure, even though the results do not achieve a 100% correlation of the physical with the digital prototypes.
Fig. 20 Qualitative results of geometrical deviations observed in physical prototypes produced based on manually driven and AR-driven adjustment of formworks. The colour intensity shows minimum and maximum perpendicular distance between the surfaces of physical and digital prototypes
5 Conclusions
The research presented in this chapter has focused on the development of a methodology for the production of small-scale prototypes using a computational design to digital fabrication process. Emphasis has been given to evaluating the accuracy of physical prototypes versus digital ones, based on formworks produced through the CNC milling process and subsequently on their adjustment for material casting. In the last part of the process, two different production methods are proposed. Initially, prototypes are produced based on manual positioning of the formworks by the user/worker, and subsequently through AR-driven adjustment of the formworks.
Acknowledgements The results of the manually driven Fabrication Studies (M-FSs), which were
presented in this chapter, were developed within the context of the undergraduate course ARH-220
Digital Architectural Communication Media, taught by the author during the Fall semester of 2021.
The results of AR-driven Fabrication Studies (AR-FSs) have been produced at a later stage by the
author. I would like to sincerely thank all the students who attended the course, whose works are presented in this chapter. Also, I would like to sincerely thank George Vessiaris and Dimitris
Stylianou from the FabLab UCY of the Department of Architecture, UCY, for their great help and
support during all phases of the process.
The paper focuses on designing and developing an interface for interacting with a mechatronic system through gestures. Using machine vision techniques, a methodology is developed that records and locates the gesture through an image capture system. Once the gesture is isolated and recognized, the appropriate control commands for the system are determined. This makes it possible to control the mechatronic system through a dictionary of movements, called a gesture vocabulary. The methodology was applied to a real mechatronic system and produced significant results. The efficiency and effectiveness of the interface were assessed with a satisfactory sample of users: it proved quite easy to use, but also presented some basic difficulties in its application. The conclusion of the research is that controlling mechatronic systems through gestures can be a very important interaction interface.
1 Introduction
3 Methodology
The implementation of the system requires the use of a simple computer video camera and a suitable toolbox and library of robotic functions for driving the robotic equipment. The interface for the wheeled vehicle was created in the MATLAB™ environment, using the GUIDE tool.
76 A. Tsagaris et al.
When testing various values for video resolution, it was observed that the higher
the value of the resolution, the lower the performance of the system, in terms of
computing power, resulting in a noticeable delay in the response of the graphical
environment. For this reason, the resolution of 320 × 240 pixels was preferred.
Design of Gesture-Controlled Interface for Mechatronic Systems: … 77
After the video recording is completed, the recorded video is split into the individual digital images of which it is composed, each of which is then subjected to multiple processing steps until the hand shape recognition is successfully performed. The first step in recognizing the shape is to successfully identify the skin area. Next, the extraction of the object that constitutes the hand is required, and finally, according to the previous analysis and with the help of an appropriate algorithm, the areas of the object that correspond to fingertips are located (Fig. 2).
The skin color model looks simple visually, but can become quite complicated when it comes to programming it. Important factors to consider in this process are: (a) the device used for video recording, (b) the lighting, which often distorts the colors of the images, (c) the movement of objects in an image, which can cause blurring of the area of interest, (d) the fact that skin color differs between people, and (e) the color space used for analysis. Below are the characteristics of the RGB and normalized rg color spaces tested, with full reference to the technique chosen for skin detection.
The RGB color model consists of a combination of three color components: red, green and blue. These three components are combined to reproduce more colors. RGB includes the brightness component which, depending on its intensity, causes a corresponding color alteration. In fact, the blue component does not provide any valuable information in the case of hand recognition, as the skin color area is made up of red and green components mixed with brightness information.
To study the RGB color space [11] (Fig. 3), the SIdb (Skin Image Database) library
from the Cobris website was used, consisting of 357 images with differentiated skin
colors [12]. Based on this, a relevant model was extracted to separate the elements
of an image that constitute a skin area from the areas that constitute its background
(Kovac model). This model is divided into four sub-rules [12]:
1. R > 95 and G > 40 and B > 20
2. Max(R, G, B) − Min(R, G, B) > 15
3. |R − G| > 15
4. R > G and R > B.
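As a sketch, the four Kovac sub-rules can be applied directly to a pixel's RGB triple (the function name is illustrative, not from the original source):

```python
def is_skin_kovac(r, g, b):
    """Return True if an RGB pixel satisfies all four Kovac sub-rules."""
    return (r > 95 and g > 40 and b > 20           # rule 1
            and max(r, g, b) - min(r, g, b) > 15   # rule 2
            and abs(r - g) > 15                    # rule 3
            and r > g and r > b)                   # rule 4
```

A warm, reddish pixel such as (220, 160, 120) passes all four rules, while a neutral grey fails rule 2.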
This model was later used to create a new method based on mixing and creating
new colors.
The rule of this method is characterized by the equation [11]:

0.0 ≤ (R − G)/(R + G) ≤ 0.5 and B/(R + G) ≤ 0.5
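The ratio rule above can be sketched as a second per-pixel test (function name illustrative; the zero-sum guard is an added safety check, not part of the published rule):

```python
def is_skin_ratio(r, g, b):
    """Skin test based on the (R-G)/(R+G) and B/(R+G) ratio rule."""
    s = r + g
    if s == 0:          # guard against division by zero on black pixels
        return False
    return 0.0 <= (r - g) / s <= 0.5 and b / s <= 0.5
```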
RG Normalized Model
This model is a variant of RGB. The theory behind it is to simplify the components of the colors red, green and blue, in order to eliminate the component of brightness that affects their true value. The mathematical model that describes the normalization of RGB space is understood from the relations shown in Fig. 4. The normalization of the blue component is not particularly important, as it is not used by this model.
Fig. 4 Mathematical equations as a function of the color components
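The standard rg normalization divides each channel by the pixel's total intensity; this common formulation is assumed to match the relations shown in Fig. 4 (which are not reproduced in the text):

```python
def normalize_rg(r, g, b):
    """Map an RGB pixel to the normalized rg plane. The b channel is
    dropped, since r + g + b always sums to 1 after normalization."""
    total = r + g + b
    if total == 0:      # guard: black pixel carries no chromatic information
        return 0.0, 0.0
    return r / total, g / total
```

Because only ratios remain, a pixel and a darker copy of it map to the same (r, g) point, which is exactly the brightness invariance described above.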
To determine the area of interest of the rg shot, a ready-made image library
consisting of skin parts was used [3]. The conclusion from the library study is that the
color components occupy a very small area of color with little scattered points that
could geometrically look like a rectangle. Based on this geometric shape, the block
of the spaces for the r and g axes is determined in order to extract the mathematical
reference needed in the writing of the code. Having defined the area of interest, the
maximum and minimum intervals of the red and green components for rg can now
be determined [3]:
In practice, the normalized model is more complex than RGB, as rg encoding acts as a filter, smoothing the colors and removing existing brightness and shading. According to research by Tsagaris et al., the use of the normalized RGB color space (rg) has many advantages in gesture recognition applications [13], offering image scale and rotation invariance. Therefore, it makes the application even more robust and efficient [14, 15].
δλ = θλ + χλ

where:
• δλ is the binary mask resulting from the frame through skin detection,
• χλ is the binary mask that includes only the hand figure,
• θλ is the noise included in δλ.

Therefore, the hand figure component can be modeled as the difference function [3]: χλ = δλ − θλ.
Noise Reduction
The noise encountered in such applications is often in the form of white areas or
spots that do not belong to the gesture, or black spots that appear on the skin and are
recognized as non-dermal areas. Also, the ‘trembling’ that is often created around
the figure of the hand is a form of noise. For effective recovery of the shape, noise
removal is necessary. This is achieved by adding special filters that correct the math-
ematical morphology of the binary mask. The combination of the following four
morphological filters, applied in serial alternation, effectively isolates the noise [3]:
• Dilation—Expansion of objects,
• Erosion—Contraction of objects,
• Opening—Smooths the closed line (border) of an object, breaks narrow channels
and eliminates fine areas,
• Closing—Merges narrow fractures and long thin cavities, eliminates small holes
and fills gaps.
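A minimal sketch of the dilation/erosion pair on a binary mask (nested lists of 0/1) using 4-connectivity; a real implementation would use a library such as OpenCV, and the opening/closing compositions below follow the standard definitions rather than any specific code from this work:

```python
def dilate(mask):
    """Grow white regions by one pixel (4-neighbour dilation)."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def erode(mask):
    """Shrink white regions: keep a pixel only if all 4 neighbours are white."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and all(
                    0 <= ny < h and 0 <= nx < w and mask[ny][nx]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))):
                out[y][x] = 1
    return out

def opening(mask):  # erosion then dilation: removes small white specks
    return dilate(erode(mask))

def closing(mask):  # dilation then erosion: fills small black holes
    return erode(dilate(mask))
```

Opening a mask discards isolated white noise pixels while preserving the core of the hand region, which is the behaviour the serial alternation of filters relies on.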
The choice of filters is related to the way the user chose to locate the fingertips. In the present work, the K-Curvature method was used, which aims to detect the fingertips using overlapping vectors; this method is explained in the next section. The morphological filters selected in relation to the K-Curvature method are dilation and erosion. Their combination results in the recovery of the hand figure just as it was previously captured by the digital camera, without any distortion.
Finally, a binary fill filter was used. As mentioned earlier, there is a possibility of black spots appearing on the gesture. This indication is incorrect when bare skin is examined. The fill filter, in effect, 'fills' the black areas that are points of noise.
After editing and simplifying an image, it is possible to retrieve more than one
object, which is not actually a gesture. This is because of the presence of objects
of the same color area of interest in the background. The process of disposing of
unnecessary objects is carried out taking into account that the shape of the hand
prevails in most of the image. Therefore, using appropriate commands, the objects in
the image are detected, and then, by comparing the area of these objects, the object
with the largest area is selected, while the rest are deleted. In this way, the image
under study is freed from the extra noise.
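The largest-area selection described above amounts to a connected-component pass over the binary mask; the following is a hedged, dependency-free sketch (a production system would use a library labelling routine):

```python
def largest_component(mask):
    """Keep only the largest 4-connected white region of a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # iterative flood fill collecting one component
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) > len(best):   # keep the component with the largest area
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```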
Finding the fingertips requires the use of an appropriate geometric algorithm. Thus,
two fundamental methods of Euclidean geometry are used, the Convex Hull method
and the K-Curvature method.
Assuming that in a random space S there is a set of scattered points, the Convex
Hull of S is the smallest convex polygon containing all the points of this space
[16]. In the case of locating the fingertips, all the points that make up the outline
of the hand figure are considered as points of random space and the Convex Hull is
drawn around it, so that all the points of the outline are inside it, forming a closed
polygon (Fig. 6). The corners of this convex polygon are the fingertips. The Convex Hull algorithm gives reliable results when the hand is open with the fingers extended, while in the case where the fingertips are gathered or only some of them are extended, detection errors are observed. As the Convex Hull method does not prove to be completely reliable for the needs of this work, the K-Curvature method was chosen (Fig. 6).
The K-Curvature method is based on the calculation of the vector product between
the points that make up the outline of the gesture, in order to locate curves on the
contour line (Fig. 7).
First, it is necessary to take into account that the outline of the gesture is represented by a discrete set of points P(i) = (x(i), y(i)) [16], where P(i) is a point on the outline and (x(i), y(i)) are its coordinates in the Cartesian coordinate system.
Each point P(i) emits two vectors that start from it and end at points located a distance k from P(i), to its right and left, forming an angle whose value is taken into account later. These vectors in the Cartesian space are [P(i − k), P(i)] and [P(i), P(i + k)] [16], where k is a selected integer. In areas where the angle between these two two-dimensional vectors is sharper and, at the same time, the vector product of the two vectors is positive, the fingertips are detected.
During the calculation of the outline of the hand figure, an appropriate table is
created with all the coordinates that make up the outline of the area of interest. Due
to the large volume of data stored in this table and, at the same time, taking into
account the best performance of the K-Curvature algorithm, it was decided to select
seventy-five elements from the contour of the area of interest, which are equidistant
from each other, to save computing power.
Starting from the first point and working in the 2D image space, vectors are created that start from this point and end at the points four positions before and after the reference point. Following this procedure, the smallest angle formed
between the two vectors is calculated. At this point, it should be noted that it is
necessary to define a threshold for the angle. Since sharper angles are more likely to correspond to a fingertip, the threshold is used to check whether the angle is sharp enough to be retained by the algorithm. Forty degrees (40°) was chosen as the maximum value for the threshold; therefore, any angles measured greater than forty degrees are discarded (Fig. 8).
Then, the vector product of the two vectors is calculated. The algorithm that runs to locate the fingertips checks the product of the vectors: if it is positive, it concludes that the point is a vertex (fingertip candidate), while if it is negative, the point corresponds to a valley and is rejected. The coordinates of the points with a positive vector product are stored in a new table, which contains only those points detected as fingertips.
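The angle-plus-cross-product test can be sketched as follows; this is a reading of the description above, with a counter-clockwise contour assumed for the sign convention and all names illustrative:

```python
import math

def fingertip_candidates(contour, k=4, max_angle_deg=40.0):
    """K-Curvature sketch: a contour point is a fingertip candidate if
    the angle it forms with the points k steps before and after it is
    at most max_angle_deg AND the cross product marks a convex peak
    (counter-clockwise contour assumed)."""
    n = len(contour)
    tips = []
    for i in range(n):
        px, py = contour[i]
        qx, qy = contour[(i - k) % n]   # P(i-k)
        rx, ry = contour[(i + k) % n]   # P(i+k)
        u = (qx - px, qy - py)          # P(i) -> P(i-k)
        w = (rx - px, ry - py)          # P(i) -> P(i+k)
        nu, nw = math.hypot(*u), math.hypot(*w)
        if nu == 0 or nw == 0:
            continue
        cosang = max(-1.0, min(1.0, (u[0] * w[0] + u[1] * w[1]) / (nu * nw)))
        angle = math.degrees(math.acos(cosang))
        # cross of [P(i-k)->P(i)] x [P(i)->P(i+k)]: positive at convex peaks
        cross = (px - qx) * (ry - py) - (py - qy) * (rx - px)
        if angle <= max_angle_deg and cross > 0:
            tips.append((px, py))
    return tips
```

On a smooth closed contour no point passes the 40° test, while a single sharp protrusion (a finger-like spike) is flagged.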
• If the difference between Ay and By is greater than 30 and Ay is less than By (|Ay − By| > 30 and Ay < By), then the vehicle turns right.
• If the difference between Ay and By is greater than 30 and Ay is greater than By (|Ay − By| > 30 and Ay > By), then the vehicle turns left.
To define the rules representing the velocities of these two movements, it is
assumed that the fingers A and B are joined by an imaginary line. For each rotational
movement, the hand needs to move with a maximum of 90°, with the left move-
ment in the range 0–89° and the right movement in the range 90–179°. By relating
the vector belonging to the fingertips to the vector of the first vertical column of
the image and then measuring the angle formed between the two vectors (Fig. 10),
the velocity rules for the intervals of left and right motion are defined, as shown
in Table 1.
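The turn rules and the angle of the imaginary line AB can be sketched together; this is a hypothetical reading of the description (the threshold, coordinate convention and function name are illustrative, and the Table 1 velocity lookup itself is not reproduced):

```python
import math

def turn_command(a, b, threshold=30):
    """a, b: (x, y) image coordinates of the two distal fingertips A and B.
    Returns a direction plus the angle of line AB against the vertical
    image axis, folded into 0-179 degrees for a velocity lookup."""
    ay, by = a[1], b[1]
    if abs(ay - by) <= threshold:
        return "straight", None
    angle = math.degrees(math.atan2(b[0] - a[0], b[1] - a[1])) % 180
    direction = "right" if ay < by else "left"
    return direction, angle
```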
A wheeled vehicle made with the help of Lego components was used for the appli-
cation. The wheeled vehicle consists of two engines—MOTOR A and MOTOR
B—located at the rear, an engine at the front—MOTOR C—and the NXT™ Brick,
which is the computing unit of the vehicle, and its purpose is to convert incoming
commands to the corresponding actions assigned to it. The morphology allows the
vehicle to move in all directions (straight, right turn, left turn, reverse). Figure 11
shows the parts of the vehicle, according to the corresponding legend.
the reverse motion of the motor. Because the construction of the vehicle mounts the motors opposite to the direction in which we want the vehicle to move, the gear variables were given negative values. The values given to the motors for the three types of drives are shown in detail in Table 2.
Having completed the task, before the user exits the application, the code snippet that shuts down the interface is executed. This is performed in two parts. Initially, the motors are instructed to stop any work they are currently performing; the motors are shut down with the command Move.Stop('off') [10]. Once all motors have been switched off, the interface connection line is closed.
4 Application
A prerequisite for the user’s interaction with the robotic vehicle is the creation of a
graphical environment, which offers an interactive experience. The interface created
(Fig. 12) consists of a suitable virtual field, which captures the recording of live
video, so that the operator can control the movements of his hand while using the
application. At this point, it was possible to start and end the video recording, with
the help of a mode toggle button. In addition, an appropriate field was added to the
interface which captures the state of the binary mask of the gesture in video format,
so that through it the possible incorrect recognition of the movement due to non-ideal
lighting conditions can be perceived.
A table of the Cartesian coordinates of the detected fingers was created, which consists of two columns, x and y, corresponding to specific points on the Cartesian plane, and four rows, one for each detected finger. Also included were indications of the coordinates corresponding to the two distal fingers of the detected hand, points A and B.
The interface also included a real-time display of the gesture, enriched with dots
where the fingertips are detected. Finally, the application contains appropriate fields
showing the number of fingers detected at any given time and the corresponding
movement status of the vehicle.
The control of the mechatronic system through the interface described above was
tested and evaluated in three cases of movement. Users were asked to perform the
following three movements with the help of gestures (Fig. 13).
In all three cases of control, it was proved that effective control can be performed. The application was tested by 20 people, 10 men and 10 women. Initially, the participants were introduced to the functions of the application and familiarized themselves with it for a few minutes. Each user was then asked to perform the series of tasks described in Fig. 13, and the results of their actions were recorded.
5 Conclusions
During task execution, some unconscious mistakes were observed that were made quite often by the users. The majority of the participants had difficulty with the movements concerning the two turns (right and left), because as they twisted the palm of their hand, they hid the distal fingers, so that the movement was not visible and was recognized by the system as unspecified. The cause of this problem, in some cases, may be that users had to watch the video recording and the movement of the robotic vehicle at the same time, losing focus and control of their movement. Some of these participants recognized this mistake and corrected it in time, and as the work progressed their movements showed improvement. A second group of participants who made this mistake showed difficulty in understanding how to perform the movement. Although the proportion of these individuals was very small, they did not show improvement in their movements even after an explanation by the mediator of the evaluation.
Another problem observed concerns the differences between users whose dominant hand is the right and those who use their left hand more. This problem applied to almost all participants in the study and affected both categories of individuals to the same degree. More specifically, it was observed that right-handed users had difficulty making the right turn, while, on the contrary, left-handed users had difficulty with the left movement. This resulted in a negative rating of the respective movements in terms of their degree of difficulty and the intensity of the feeling of fatigue. There were also times when the difficulty of the corresponding movement led the participants to use the right hand exclusively to make the left turn and the left hand to make the right turn. However, the result was not satisfactory, as users noticed more fatigue and therefore returned to the original way of moving.
This research, of course, also included participants who found it difficult to accept the way the system recognizes movements, either because they had difficulty performing the movements or because they had used a similar application in the past and had thus adopted their own preference as to how such an application should be used.
References
1 Introduction
Table 1 Additive manufacturing categories [2]

AM category                | Definition
Material extrusion         | The material is extruded through a heated nozzle
Material jetting           | The material is jetted through an inkjet print-head
Binder jetting             | Deposition of bonding agent droplets onto material powder
Sheet lamination           | Sheets of material are bonded/welded together
Vat photopolymerization    | Selective curing of liquid photopolymer
Powder bed fusion          | Selective thermal fusion of material powder
Directed energy deposition | Melting of material deposited through a nozzle
Cold spraying              | Material powder is blown at high speed to adhere to the part
Topology Optimization Utilizing Density-Based Approach for Additive … 93
mathematical formulation of the optimization problem. On the other hand, for the discrete/truss-based approach, the architected materials were introduced, accompanied by their detailed classification. In addition, the unique properties of the architected materials were presented, coupled with the most promising and cutting-edge applications. Finally, in the last section of this chapter, a case study of topology optimization was conducted for an automotive application, more specifically a Brembo® F 50 224 brake caliper, showing the significant mass reduction that can be achieved in order to obtain lightweight final objects.
The density-based approach, also known as generative design, employs the afore-
mentioned conditional density method in order to remove material from a designed
94 N. Kladovasilakis et al.
object. The three most widespread algorithms for density-based topology optimization are SIMP, ESO, and BESO. The ESO and BESO algorithms have both been studied and developed intensively in recent years. ESO is based on the simple concept of gradually removing inefficient material from the structure's volume domain. The resulting structure of the ESO method evolves toward its optimal shape and topology, although it is not guaranteed that such an evolutionary procedure will always produce the best solution. Nevertheless, the ESO technique provides a useful tool for engineers interested in exploring structurally efficient forms and shapes during the conceptual design stage of a project [11]. The algorithm can be used to optimize large-scale structures as well as micro-scale and nano-scale designs. The ESO method first appeared in 1992, introduced by Huang and Xie [12]. ESO is called a hard-kill technique, as it repeatedly removes or adds a finite amount of material, using heuristic criteria that can be based on well-defined sensitivity information. Thus, ESO is relatively easy to implement, which is an advantage for topology optimization problems involving complex physical processes. Utilizing
FEA, the ESO evaluates the load level in an arbitrary section of the object [9]. Inef-
ficient use of the material is determined by a low level of load (or deformation)
in the individual section of the object. Ideally, the load level in the object should
be the same all over the object, close to the limit but at a safe value. From the above condition arises the principle of material removal, in which insufficiently loaded material is removed, resulting in the removal of individual elements from the finite element model. Comparing the element's von Mises stress σ_e^vm with the critical or maximum value σ_max^vm of the object determines the stress level of that element. If, as a result of the finite element analysis, the element satisfies the following condition

σ_e^vm / σ_max^vm < RR_i
where RR_i is the limit value (subtraction rate), then this element is removed, and the process starts a new control cycle. The testing of the individual elements is performed in iterations until a steady state is reached, i.e., the state in which no other elements meet this subtraction limit. According to the specified evolution rate H_i, the subtraction rate can then be increased by the equation:

RR_(i+1) = RR_i + H_i
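The removal criterion and the RR update can be sketched as a minimal hard-kill loop; the `stress_fn` callback stands in for a full finite element re-analysis, and all names are illustrative rather than taken from any particular solver:

```python
def eso_iterate(stress_fn, elements, rr0=0.01, h=0.01, rr_max=0.20):
    """Hard-kill ESO sketch: at each rejection ratio RR, repeatedly
    remove elements whose stress ratio falls below RR until a steady
    state, then raise RR by the evolution rate H (RR_(i+1) = RR_i + H_i)."""
    rr = rr0
    while rr <= rr_max:
        changed = True
        while changed and elements:          # iterate to steady state at this RR
            stresses = stress_fn(elements)   # stand-in for an FEA re-analysis
            s_max = max(stresses)
            keep = [e for e, s in zip(elements, stresses) if s / s_max >= rr]
            changed = len(keep) < len(elements)
            elements = keep
        rr += h
    return elements
```

With a toy stress field proportional to the element index, the loop strips the least-loaded elements first and then stabilizes, mirroring the steady-state behaviour described above.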
The above procedure can be performed with an increased subtraction ratio, and the analysis is repeated until a new steady state is reached. The iterative procedure continues until the required result is achieved, for example, until all the material is removed from those areas where the stress level does not exceed 20% of the maximum material stress. A quantitative estimate of the change in stiffness of the structure as a result of the subtraction of the i-th finite element is the elasticity index, defined for the mean elasticity as shown in the following equation [5], where u_i is the nodal displacement vector of element i and K_i is the element's stiffness matrix.
α_i^e = (1/2) u_i^T K_i u_i
The sensitivity function shows the decrease of the mean stiffness as a result of the
removal of the i-th element, which is equal to the elemental deformation energy of
the i-th element. To maintain the stiffness by removing the elements, it is necessary
to remove elements with the minimum value of the sensitivity factor [5]. The mathe-
matical formulation of the ESO algorithm is equally applicable for two-dimensional
(2D) and three-dimensional (3D) problems as it is quite simple and clear; at the same
time, the application of the software does not require complex programming. The
subtraction of elements is performed by assigning a zero value to their coefficient
of the equation, and as a result, they are ignored during the subsequent repetitions.
This iterative removal of elements reduces the number of equations, thus lowering the computational demands of the problem, which is particularly important for 3D problems. A major disadvantage of the ESO method is that it does not allow removed material to be recovered, even though that material may prove efficient in subsequent iterations. Summarizing the above, it is evident that the ESO method does not always provide the optimal solution; this disadvantage is eliminated by
the BESO method. The BESO algorithm allows the simultaneous removal and addition of material to the design volume. The fundamental difference between these two methods is that, in BESO, the sensitivity index of the void elements is determined by linear extrapolation of the displacement field obtained from the finite element analysis [13]. After that, the solid elements with the lowest sensitivity index values are removed from the structure, and the void elements with the highest sensitivity values are filled with material. The numbers of elements removed and added at
each iteration are determined by two independent parameters: the RR subtraction
ratio and the RI inclusion ratio [14]. Although the ESO/BESO methods are quite
simple to implement, there is virtually no application of the ESO method to solve
production-oriented topology optimization problems [5].
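A minimal, FEA-free sketch of the ESO loop described above (the stress values are hypothetical; a real implementation would re-run the finite element analysis after every removal pass and update the stress field):

```python
import numpy as np

def eso_pass(stress, active, RR):
    """One ESO pass: deactivate elements whose stress is below RR * max stress."""
    smax = stress[active].max()
    return active & (stress >= RR * smax)

def eso(stress, ratios=(0.05, 0.10, 0.15, 0.20)):
    """Iterate each subtraction ratio to steady state, then raise it
    (RR_{i+1} = RR_i + H_i, here with a fixed evolution rate of 0.05)."""
    active = np.ones_like(stress, dtype=bool)
    for RR in ratios:
        while True:
            keep = eso_pass(stress, active, RR)
            if keep.sum() == active.sum():   # steady state: nothing removed
                break
            active = keep                    # (re-run FEA here in practice)
    return active

stress = np.array([10.0, 50.0, 90.0, 100.0, 3.0, 25.0])
active = eso(stress)   # elements below 20% of max stress end up removed
```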
The third and most widespread method is Solid Isotropic Material with Penalization, or simply the SIMP algorithm. It is worth mentioning that the majority of commercial design software utilizes this topology optimization algorithm, and it was also employed in this research for the brake caliper case study. The fundamental idea of the SIMP method is to create a virtual density field that is proportional
to the actual characteristics of the object under study [15].
96 N. Kladovasilakis et al.
This method aims to reduce the compliance (flexibility) of the structure by redistributing the material within the examined design domain under specific boundary conditions. The result of using the SIMP method is that the object maintains the required stiffness within the problem under consideration. This method is widely used in studies of additive manufacturing constructions, as additive manufacturing enables the creation of objects with highly complex shapes and geometries. The material density is utilized as a design variable in the optimization. Thus, the optimal structure, within the design domain, is achieved by redistributing the material based on the criteria given during the optimization. The SIMP method is again based on dividing the examined volume into
voxels (finite elements), as mentioned above. The material properties are constant within each element and depend on its relative density $x_i$. After completion of the optimization, the relative density of each element must equal either one or zero; intermediate relative densities are suppressed via the penalization factor $p$. The relative densities of the elements are taken as the design variables, and the mean compliance is chosen as the objective function. The topology optimization problem for minimum compliance can be written as follows:
$$
\begin{aligned}
\text{Find:} \quad & X = \{x_1, x_2, \ldots, x_n\}^T, \quad i = 1, 2, \ldots, n \\
\text{Min:} \quad & C(X) = F^T U = U^T K U = \sum_{i=1}^{n} u_i^T k_i u_i = \sum_{i=1}^{n} (x_i)^p \, u_i^T k_0 u_i \\
\text{Subj. to:} \quad & K U = F, \qquad V = f_0 V_0 = \sum_{i=1}^{n} x_i v_i \\
\text{with:} \quad & 0 < x_{\min} \le x_i \le x_{\max} \le 1
\end{aligned}
$$
where
• C is the objective function, defined as the mean compliance.
• X is the vector of design variables.
• F is the load vector.
• U is the global displacement vector.
• K is the global stiffness matrix.
• V is the material volume.
• $f_0$ is the prescribed volume fraction.
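The penalization exponent p is what drives intermediate densities toward 0 or 1: the element stiffness is interpolated as $k_i = x_i^p k_0$, so a half-dense element still costs half its volume but contributes far less than half its stiffness. A minimal sketch (the small Emin floor is a common numerical safeguard in SIMP implementations, not part of the formulation above):

```python
def simp_modulus(x: float, p: float = 3.0, E0: float = 1.0, Emin: float = 1e-9) -> float:
    """SIMP interpolation: E(x) = Emin + x^p * (E0 - Emin)."""
    return Emin + x**p * (E0 - Emin)

E_half = simp_modulus(0.5)   # ~12.5% of full stiffness at 50% density (p = 3)
E_full = simp_modulus(1.0)   # full stiffness at x = 1
```

Because a 50%-dense element delivers only about an eighth of the stiffness for half the volume cost, the optimizer is pushed toward a crisp solid/void layout.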
The final and most effective density-based algorithm is the ESO-SIMP method, which combines the ESO and SIMP methods and aims to compensate for the disadvantages of both [16]. To solve the optimization problem, the relative densities are used as the design variables and the mean compliance is selected as the objective function. The optimization problem for the minimum mean compliance based on the ESO-SIMP algorithm is as follows:
$$
\begin{aligned}
\text{Find:} \quad & X = \{x_1, x_2, \ldots, x_n\}^T, \quad i = 1, 2, \ldots, n \\
\text{Min:} \quad & C(X) = U^T K U = \sum_{i=1}^{n} u_i^T k_i u_i = \sum_{i=1}^{n} (x_i)^p \, u_i^T k_0 u_i \\
\text{Subj. to:} \quad & K U = F, \qquad V = \sum_{i=1}^{n} x_i v_i \le f_0 V_0 \\
\text{with:} \quad & 0 < x_{\min} \le x_i \le x_{\max} \le 1
\end{aligned}
$$
The difference between the ESO-SIMP and SIMP methods lies in the volume limitation, which in ESO-SIMP is an inequality constraint. During each iteration, elements whose relative density is less than or equal to the rejection factor are removed from the design volume, and all other elements are carried into the next iteration. In practice, this hybrid method proves more suitable than ESO or SIMP separately in terms of efficiency and reliability.
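The hybrid step can be sketched as a hard filter applied between SIMP iterations; the rejection threshold, density values, and element volumes below are illustrative assumptions:

```python
import numpy as np

def eso_simp_filter(x, rejection=0.1):
    """Hard-delete elements whose relative density has dropped to or below
    the rejection factor; the rest continue into the next SIMP iteration."""
    alive = x > rejection
    return x * alive, alive

def volume_satisfied(x, v, f0, V0):
    """ESO-SIMP volume constraint (inequality): sum(x_i * v_i) <= f0 * V0."""
    return float(x @ v) <= f0 * V0

x = np.array([0.05, 0.4, 0.9, 0.08, 1.0])   # relative densities after an iteration
x_new, alive = eso_simp_filter(x)
ok = volume_satisfied(x_new, np.ones(5), f0=0.5, V0=5.0)
```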
Lattice structures have attracted scientific interest since ancient times, when humans observed cellular materials in nature, in structures such as corals and foams, and tried to imitate them in artificial structures such as honeycombs [17]. Nowadays, interest in artificial lattices has been revived by additive manufacturing techniques that allow the production of complex lattices at relatively small sizes. All lattice structures consist of architected materials with numerous shapes and topological characteristics. Currently, there is a vast number of different architected materials, whose main physical characteristic is the applied relative density [18–20]. The applied relative density is the percentage of the volume that is filled
with the employed architected material, which is defined by the designer/engineer
and the ratio of strut thickness to unit cell length. Owing to the ability to modify the relative density of architected materials, and therefore their mass and mechanical properties, architected materials are employed in topology optimization processes via the discrete/truss-based approach. It is worth mentioning that when the relative density of an architected material is ultra-low (<5%), it is termed a foam, while an architected material with a relative density between 10 and 60% is referred to as a lattice structure. For architected materials with relative densities above 70%, the unit cells overlap each other, yielding a discontinuous structure with air gaps between the designed parts [21].
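The relative-density bands quoted above can be captured in a small helper; the handling of the unclassified gaps (5–10% and 60–70%) is our assumption:

```python
def classify_by_relative_density(rho: float) -> str:
    """Classify an architected material by its applied relative density (0..1)."""
    if rho < 0.05:
        return "foam"                    # ultra-low density (<5%)
    if 0.10 <= rho <= 0.60:
        return "lattice structure"       # 10-60% relative density
    if rho > 0.70:
        return "overlapping unit cells"  # >70%: unit cells overlap, air gaps remain
    return "transitional"                # bands the text does not classify

label = classify_by_relative_density(0.35)   # -> "lattice structure"
```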
This plethora of architected materials has necessitated their classification based on geometry, according to the existing literature [22, 23]. The first level of classification focuses on the periodicity of the structure and divides architected materials into three categories: stochastic, periodic, and pseudo-periodic. Stochastic lattice structures are observed mainly in nature, or they can be designed through random seeds and special algorithms, such as the Voronoi algorithm. Periodic structures have a basic element, namely a unit cell, which is repeated in all three dimensions. On the other hand, pseudo-periodic structures consist of periodic lattice structures that are interrupted by, or interact with, the boundaries of the overall structure, such as conformal and non-conformal lattice structures.
The second level of classification divides the stochastic structures into open-cell and closed-cell lattice structures, and the periodic structures into 2.5D (honeycombs, prismatic, etc.) and 3D lattices. The classification of architected materials can
be performed at higher levels depending on the depth of the research. Through this
process, special categories of architected materials are derived, such as triply periodic
minimal surface structures (TPMS) and 3D strut-based structures. Figure 2 presents
a detailed classification of architected materials according to the existing literature
[23].
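A stochastic cell partition of the kind generated from random seeds can be sketched with a nearest-seed (discrete Voronoi) assignment over a voxel grid, here in 2D with plain NumPy (seed count and grid resolution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
seeds = rng.random((12, 2))          # random seed points in the unit square

# Assign every voxel of a coarse grid to its nearest seed; the resulting
# regions are the (discrete) Voronoi cells of a stochastic lattice.
n = 50
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
voxels = np.stack([xs.ravel(), ys.ravel()], axis=1)            # (n*n, 2)
dist = np.linalg.norm(voxels[:, None, :] - seeds[None, :, :], axis=2)
cell_id = dist.argmin(axis=1)        # Voronoi cell index per voxel
```

Thickening the boundaries between cells (or their edges, in 3D) would yield the struts of an open-cell stochastic lattice.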
Furthermore, lattice structures offer a series of comprehensive advantages to the
objects in which they are applied. The first advantage is the lightweight structure that
can be adjusted with the proper trade-off between the applied relative density and
the desired mechanical behavior of the structure. Due to this, lattice structures are
employed in topology-optimized applications with significant mass reduction such
as sandwich-like structures [24] and lightweight components [25] for the automotive and aeronautic industries. In contrast with the density-based approach, the discrete/truss-based approach offers additional advantages owing to the unique geometric characteristics of architected materials. One of these advantages
is the increased porosity of lattice structures, which raises the thermal insulation of the object, enabling the application of architected materials in sophisticated industries, such as aerospace [26]. In addition, the pores in closed lattices provide the extra advantage of buoyancy, making them suitable materials for lightweight boats, canoes, etc. Furthermore, in the same manner, high-porosity lattice structures are utilized in acoustic insulation applications, operating as sound-absorbing material [27]. Combining the high porosity with the high surface-area-to-volume ratio that
some architected materials offer, like the TPMS structures, the architected materials
have high potential in biomechanical applications. According to the existing litera-
ture [28–30], the high porosity of a structure facilitates the diffusion of blood and
nutrients and, coupled with the high surface area, creates the proper circumstances
to accelerate the tissue regeneration processes.
It is obvious that the discrete/truss-based optimization methods are superior to the density-based approach, as they offer multiple advantages beyond lightweighting. However, the discrete/truss-based approach is a very complex and time-consuming process with increased manufacturing difficulty, even for additive manufacturing methods. This is because the density-based approach constitutes the first step of the discrete/truss-based approach, used to identify the regions that could be replaced with lattice structures. The next steps involve the implementation of the proper architected materials, the calculation of the optimum trade-off
between the relative density and the mechanical response of the lattices, and the ability to fabricate the selected lattice structures with the available additive manufacturing equipment (dimensional accuracy, micro-porosity, support generation, etc.).
Thus, owing to its rapid topology optimization process and lower complexity, the density-based approach is usually employed in applications whose main objective is the mass reduction of components, such as the use case of the following section.
absorbers but also by the unsprung/sprung mass ratio, as a lower ratio leads to a lower transfer of loads. Thus, the driving characteristics of a vehicle are affected by the unsprung/sprung mass ratio. Hence, it is clear that lightweight components in the unsprung-mass region enhance the overall vehicle performance. Car manufacturers have therefore pursued the reduction of unsprung masses by abandoning, where possible and economically justified, steel and cast iron in favor of light and reliable alloys, among which the most important place is occupied by aluminum, which is increasingly used in suspension parts. The next step was the lightweighting of the components of the unsprung mass through the modification of component shape and geometry via topology optimization
procedures. Hence, the use case of the current study focuses on the mass reduction
via topology optimization of the brake caliper parts, which belong to the unsprung
mass region of a vehicle. Figure 3 portrays indicative images of automotive brake
calipers.
Fig. 4 Indicative images of a Brembo® F 50 224 brake caliper with a four-piston configuration consisting of three pieces
Fig. 5 3D CAD models of the components of brake calipers: a base part and b side part
Fig. 6 Contours of the initial design of brake caliper for: a factor of safety (FOS) and b equivalent
von Mises stresses
base part. On the other hand, the hydraulic pressure is applied to the piston’s holes
on the side part with a value of 6.5 MPa. The next step was the development of a
computational mesh and the discretization of the brake caliper into finite elements.
The final fine mesh was derived after a mesh-sensitivity analysis based on the maximum von Mises stress results, during which different types of mesh elements were evaluated. In detail, the final mesh consisted of 305,798 tetrahedral elements with element sizes ranging between 1.7 mm and 8.9 mm. The static FEA of the designed brake caliper showed that the screw regions of the assembly concentrate the highest stresses, with a maximum stress of 340 MPa, i.e., a factor of safety of 1.486 for the aluminum parts. Figure 6a and b illustrate the factor of safety (FOS) and equivalent von
Mises stress concentration contours for the assembled brake caliper, respectively. It
is worth mentioning that the rest of the structure revealed significantly lower stress concentrations, leading to the conclusion that mass removal and topology optimization of the original design could be performed.
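The reported factor of safety follows directly from the ratio of material strength to the maximum developed stress (the 505 MPa strength of the construction material is quoted later in the chapter):

```python
def factor_of_safety(strength_mpa: float, max_stress_mpa: float) -> float:
    """FOS = material strength / maximum developed von Mises stress."""
    return strength_mpa / max_stress_mpa

fos_initial = factor_of_safety(505.0, 340.0)   # ~1.485, consistent with the reported 1.486
```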
The simulation of the initial brake caliper design via finite element analysis was
employed in order to understand the mechanical response of the items and highlight
the regions where the topology optimization process could be performed. Thus, in this
section, the topology optimization process is described. The topology optimization
was performed in SolidWorks™ design software, employing the SIMP algorithm.
The finite element model of the previous section was utilized as the initial model for
this process coupled with the construction materials. Furthermore, the same mesh was
applied in this process in order to achieve mesh-independent results. The objective
of the optimization process was to minimize the mass of the objects while maintaining the factor of safety above one. It is worth noting that the final design was achieved after 50 iterations, and the whole process was completed in almost 2 h. Figure 7 shows a rendered image of the final design of the topologically optimized brake caliper. The final mass of the component was 1297 g for the base of the brake caliper and 438 g for each side part. Table 3 lists the exact initial and final mass of each part, coupled with the percentage mass reduction. To conclude, the final design of the brake caliper has an overall mass of 2173 g, achieving a total mass reduction of 1022 g, i.e., 32%.
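The reported totals are easy to cross-check from the per-part figures (one base part, two side parts):

```python
base_g, side_g = 1297, 438            # final masses of the base and each side part
final_total = base_g + 2 * side_g     # overall optimized mass
reduction_g = 1022                    # reported total mass reduction
initial_total = final_total + reduction_g
reduction_pct = 100 * reduction_g / initial_total   # ~32%, as reported
```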
After the topology optimization process, a final FEA was performed in order to
verify the mechanical strength of the proposed design. Figure 8a and b illustrate the
factor of safety (FOS) and equivalent von Mises stress concentration contours for
the topologically optimized brake caliper, respectively. It is evident that the stress concentrations increased, especially in the aluminum parts. However, the maximum developed stress was 383 MPa, well below the 505 MPa maximum strength of the construction material.
Fig. 8 Contours of the topologically optimized brake caliper for: a factor of safety (FOS) and b
equivalent von Mises stresses
4 Conclusions
References
1. Gibson I, Rosen DW, Stucker B (2010) Additive manufacturing technologies, 1st edn. Springer,
New York
2. Kladovasilakis N, Charalampous P, Kostavelis I et al (2021) Impact of metal additive manu-
facturing parameters on the powder bed fusion and direct energy deposition processes: a
comprehensive review. Prog Addit Manuf 6:349–365
3. Frazier WE (2014) Metal additive manufacturing: a review. J Mater Eng Perform 23:1917–1928
4. Frazier WE (2010) Digital manufacturing of metallic components: vision and roadmap. Solid
free form fabrication proceedings. University of Texas at Austin, Austin TX, pp 717–732
5. Tyflopoulos E, Flem DT, Steinert M, Olsen A (2018) State of the art of generative design and
topology optimization and potential research needs. In: DS 91: Proceedings of norddesign,
Linköping, Sweden
6. Kladovasilakis N, Tsongas K, Tzetzis D (2020) Finite element analysis of orthopedic hip
implant with functionally graded bioinspired lattice structures. Biomimetics 5:44
Topology Optimization Utilizing Density-Based Approach for Additive … 105
7. Bendsøe MP, Sigmund O (2003) Topology optimization: theory, methods, and applications.
Springer, Berlin, New York
8. Ehrgott M (2005) Multicriteria optimization. Springer, Berlin
9. Querin OM, Steven GP, Xie YM (2000) Evolutionary structural optimisation using an additive
algorithm. Finite Elem Anal Des 34:291–308
10. Xie YM, Steven GP (1997) Evolutionary structural optimization. Springer-Verlag, London
11. Lee S, Shim T, Cho B (2006) Development of a brake system for lightweight vehicle. In:
Proceedings of the ASME 2006 international mechanical engineering congress and exposition.
Design engineering and computers and information in engineering, Parts A and B. ASME, pp
229–238
12. Huang X, Xie M (2010) Evolutionary topology optimization of continuum structures methods
and applications. Wiley
13. Olmeda E, Garrosa M, Sánchez SS, Díaz V (2020) Development and characterization of a
compact device for measuring the braking torque of a vehicle. Sens 20:4278
14. Hardwick AP, Outteridge T (2015) Vehicle lightweighting through the use of molybdenum-
bearing advanced high-strength steels (AHSS). Int J Life Cycle Assess 21:1616–1623
15. Tcherniak D (2002) Topology optimization of resonating structures using SIMP method. Int J
Numer Meth Eng 54:1605–1622
16. Jiao H, Zhou Q, Fan S, Li Y (2015) A new hybrid topology optimization method coupling ESO and SIMP method. In: Proceedings of china modern logistics engineering. Lecture notes in electrical engineering, vol 286. Springer, Berlin, Heidelberg
17. Gibson LJ (1989) Modelling the mechanical behavior of cellular materials. Mater Sci Eng A
110:1–36
18. Kladovasilakis N, Tsongas K, Kostavelis I, Tzovaras D, Tzetzis D (2021) Effective mechanical properties of additive manufactured strut-lattice structures: experimental and finite element study. Adv Eng Mater 24(3):2100879
19. Kladovasilakis N, Tsongas K, Kostavelis I, Tzovaras D, Tzetzis D (2022) Effective mechanical properties of additive manufactured triply periodic minimal surfaces: experimental and finite element study. Int J Adv Manuf Technol
20. Deshpande VS, Fleck NA, Ashby MF (2001) Effective properties of the octet-truss lattice
material. J Mech Phys Solids 49(8):1747–1769
21. Gibson LJ, Ashby MF (1997) Cellular solids structure and properties. Cambridge University
Press, Cambridge, UK
22. Pei E, Kabir I, Breški T, Godec D, Nordin A (2022) A review of geometric dimensioning and
tolerancing (GD&T) of additive manufacturing and powder bed fusion lattices. Prog Addit
Manuf
23. Kladovasilakis N, Tsongas K, Karalekas D, Tzetzis D (2022) Architected materials for additive
manufacturing: a comprehensive review. Mater 15:5919
24. Kladovasilakis N, Charalampous P, Tsongas K, Kostavelis I, Tzetzis D, Tzovaras D (2021)
Experimental and computational investigation of lattice sandwich structures constructed by
additive manufacturing technologies. J Manuf Mater Process 5:95
25. Li C, Kim IY, Jeswiet J (2015) Conceptual and detailed design of an automotive engine cradle
by using topology, shape, and size optimization. J Struct Multidiscip Optim 51:547–564
26. Ferro CG, Varetti S, De Pasquale G, Maggiore P (2018) Lattice structured impact absorber with
embedded anti-icing system for aircraft wings fabricated with additive SLM process. Mater
Today Commun 15:185–189
27. Wu PK, Lee CW, Sun WH, Lin CL (2022) Biomechanical analysis and design method for
patient-specific reconstructive implants for large bone defects of the distal lateral femur. Biosens
12:4
28. Heinl P, Müller L, Körner C, Singer RF, Müller FA (2008) Cellular Ti–6Al–4V structures with
interconnected macro porosity for bone implants fabricated by selective electron beam melting.
J Acta Biomater 4:1536–1544
29. Kantaros A, Chatzidai N, Karalekas D (2016) 3D printing-assisted design of scaffold structures.
Int J Adv Manuf Technol 82:559–571
1 Introduction
Masonry walls are among humanity's oldest construction methods, with some of the most famous ancient structures in Mesopotamia, Egypt, Ancient Greece, and Rome, as well as the Americas, Africa, and Asia. All sorts of building types were built using blocks made of mud, clay, stone, or marble. According to Fiala et al. [1], bricks have been the main construction material for the last 10,000 years, with their history reaching back to 8,300 BC. Masonry structures have continued to be used widely up to the present day in all sorts of buildings and functions, incorporating new
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 107
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_5
108 A. Agkathidis et al.
materials and techniques such as concrete, glass, and metals as well as new fabri-
cation and assembly technologies including robotic fabrication. The first attempts
to produce bricks on an industrial scale came with the industrial revolution in 19th-
century England. Most early brick-making machinery development was focused on
the process of moulding the bricks, the task considered to be the most difficult and
required the most skill [2]. Early attempts at machine-laid masonry came around
the turn of the 19th to the twentieth century by different individuals inventing brick-
laying machines, such as John Thomson and John Knight [3]. According to Potter
[3], ‘these machines would (theoretically) run along the top of a wall, set down a
layer of mortar, and place bricks one at a time. They couldn’t sense anything about
their environment, or measure where a brick needed to go-they simply extruded a
layer of mortar and mechanically placed a brick at regular intervals’. Most of these
machines remained at a conceptual level; very few were actually built, and none of those built worked properly. Similar concepts arose in the following decades, the 1960s and 1970s, with some of these machines being built and able to operate.
The first attempts at automated bricklaying based on robotic technology appeared in the late 1980s and early 1990s. As Potter [3] describes, 'instead of mindlessly
repeating the same motions, these arms would combine a high degree of freedom;
robotic arms with sensors and control systems to “see” the brick, see where it needed
to go, and deliberately grab it and place it there'. Slocum and Schena [4] were among the first to publish their work on the robotic construction of cement block walls in 1988.
However, the use of mortar and other adhesives complicates automated assembly processes, as their application often requires human labour to apply them to the bricks or blocks, making the entire process less efficient. Furthermore, in an era of continuing climate crisis, mortar contains a high amount of cement, whose production causes high carbon emissions; reducing its use in masonry wall construction therefore supports the reduction of carbon emissions for the entire process.
Interlocking bricks also appeared in the early 1900s inspired by toy systems such
as Meccano and Lego [5], with Lego being the system having a direct reference to
a construction brick. According to Kintingu [5], 'since the 1970s the interlocking mortarless bricks/blocks for house construction, made from sand-cement, stabilised soil and burnt/baked soil, have been pioneered in Africa, Canada, the Middle East and India'. Today, they exist in various shapes and formats, solid or hollow, but somehow they never managed to become more popular than conventional bricks. Furthermore, the potential of combining interlocking mortarless brick design with the possibilities arising from digital design methods and robotic fabrication techniques has not been sufficiently explored.
Our paper investigates the possibilities arising from incorporating algorithmic tools, robotic fabrication techniques, and interlocking brick technology in order to develop a file-to-fabrication framework that can enable the robotic assembly of mortar-free masonry structures. In particular, our paper answers the following research questions:
Rethinking the Brick: Developing a File to Fabrication Framework … 109
The study began by looking into the related work of other researchers in order to
inform our research of the latest developments in the field. Rihani and Bernold [7]
described some early attempts at robotic masonry brick wall assembly in 1996. They
examined basic pre-fabricated wall structures using a pneumatic gripper in either
global control or local control systems.
In their ‘R-O-B’ masonry wall installation presented at the Venice Architecture
Biennale in 2008, Gramazio and Kohler [8] were among the first researchers to
explore the robotic assembly possibilities of brickwork. Their curvilinear wall was assembled by a mobile robotic unit placed in a container, which can be transported on-site and is capable of putting together standard bricks rotated in different directions, using adhesives. The same applies to their research in the 'Programmed Column'
project [9], where they examined the design and fabrication of complex geometric
brick columns, whereby the robot would pick and apply adhesive glue, before placing
the bricks.
Helm et al. [10] explored a similar approach in 2012, utilising a robotic arm mounted on a vehicle, allowing the assembly of complex wall structures from standardised bricks to take place in situ in human–robot collaboration. Furthermore, the robot is equipped with a camera sensor, allowing it to operate in unknown environments and recognise human movement and obstacles.
Oliveira and Sousa [11] experimented with the robotic assembly of brick columns
inspired by the work of Raul Hestnes Ferreira. They have utilised a vacuum gripper
mounted on a Kuka robotic arm and have programmed pauses into the pick and place
process to allow the addition of adhesive glue.
Willmann et al. [12] explored the assembly of a brick wall using a swarm of aerial drones. The drones are equipped with vacuum grippers and are capable of moving freely, overcoming the size limitations of robotic arms. The construction system was broken down into four subsystems: the blueprint, the foreman, the crew system, and the pickup station. 'The blueprint, which contains a list of sequential
placement instructions; the foreman, which manages the overall construction process
by interpreting the blueprint, issuing build orders, and tracking the construction
progress; the crew system, which is responsible for executing the foreman-issued
build orders to fabricate the structure; and the pickup station, which provides building
elements to the crew. These processes run on an external computer’. However, the
drones have only a limited time of flight capacity as they need to be recharged after
several flights.
Fingrut et al. [13] explored the complexities of automated bricklaying. They
have used a cable-driven parallel robot to assemble masonry walls including openings, using mortar. Their software palette includes Rhinoceros/Grasshopper as well as 'Python' [14], in which they simulated the structural stability of the masonry
structures. The paper summarises possible reasons for bricklaying failures, including uneven application of the mortar layer, inconsistent mortar mixture, too little or too much mortar, poor placement of control joints, inadequate accommodation for drainage, improper structural calculations, overextension of corbelling, and poor or inconsistent materials selection.
Ariza and Gazit [15] demonstrated a method for the robotic assembly of digitally generated, double-curved, self-supported structures made of interlocking male–female components. The components were fabricated by a 3-axis CNC router and assembled by a Kuka robotic arm equipped with a vacuum gripper. However, the structures consisted of various types of components, which increased the complexity of the pick-and-place and feeding assembly process.
Parascho et al. [16] described the robotic assembly of a complex vault geom-
etry made of standardised glass and concrete bricks without using falsework. The
fabrication method included two robots alternating between placement and support,
starting with the assembly of a stable central arch based on an interlocking diagonal
brick sequence.
Rossi and Tessmann [17] presented a series of studies regarding the robotic assembly of discrete elements into aggregated structures. They employed 'Python' and Rhinoceros/Grasshopper, as well as the Grasshopper plugin 'Wasp' [18], to implement their design and fabrication algorithms. They examined standardised interlocking units with different geometries, not necessarily seeing them as 'bricks' but as entities, which they call digital materiality. Their design process appears to be additive and strongly linked to the geometry and properties of the units. They do not seem to include any simulation software in their file-to-fabrication framework.
Finally, in a similar attempt, Duballet et al. [19] investigated the assembly of polyhedral standardised 'bricks' utilizing two robotic arms, whereby one of them picks and places the blocks while the other extrudes mortar around them. The set of edges then forms a space truss of great mechanical efficiency. 'Printable' mortar is crucial to the system, as the blocks could not withstand the hydrostatic pressure of fresh mortar without additional formwork.
By summarising the projects presented above, it becomes obvious that most of the research conducted in the field focuses on non-interlocking bricks, which require mortar or other adhesives for their assembly. To resolve the issue of adding adhesives, the researchers either required human labour or utilised other mechanisms, such as additional robotic arms or mortar addition stations, to complete these tasks, increasing
the complexity of the entire process. Most researchers have developed their fabri-
cation workflows by using Rhinoceros/Grasshopper, in some cases including aggre-
gation plug-ins like ‘Wasp’ or the scripting and programming platform ‘Python’.
The use of stability simulation software is mentioned only by Fingrut et al. [13].
They, however, investigated the assembly of non-interlocking bricks. Rossi and Tessmann [17], who experimented with interlocking blocks, were not directly referring to bricks, nor did they use stability simulation software. Most researchers have
used Kuka robotic arms, while others used ABB and Universal Robots. Additional
hardware such as robotic gripper end-effectors, cameras, and CNC routers have also
been applied.
and presented in phase four. The brick feeding process was programmed to take place
from piles of bricks positioned within the reach radius of the UR10e, aiming for a
completely automated process. All physical assembly experiments were conducted
with mockup bricks made of styrofoam and ignored the physical properties of the
clay bricks, as they focused only on the geometry-related properties.
Phase C, our verification process, includes three design experiments, each of which was built upon the feedback received from the previous experiment, escalating in degree of complexity. In particular, design experiment one investigated a plain brick and a curvilinear wall; design experiment two included an interlocking brick assembled into a curvilinear wall; while design experiment three tested an interlocking brick and a twisted column.
Our first experiment was conducted with a plain, non-interlocking brick and the
design of a curvilinear parametric brick wall structure (variant 01), modelled in
Rhinoceros/Grasshopper as described in Fig. 1. The wall's design, curvature, height,
and width could be changed and adjusted to the user’s requirements. The main aim of
design experiment 1 was to test and inform our methodology before proceeding with
more complex, interlocking bricks and wall geometries. Some of the wall geometries
failed the stability simulation and collapsed as shown in Fig. 2.
We repeated the physical assembly of the same wall with the robotic arm, where the
wall collapsed during the assembly process, verifying the simulation results deriving
from PhysX as shown in Fig. 3.
We modified the wall geometry until the simulation indicated a stable structure.
We then proceeded with the robotic assembly of the wall which again remained stable
during the assembly process and was completed successfully as shown in Fig. 4.
Our findings from experiment one indicated that the stability simulation, the
assembly simulation, and the actual physical assembly with 'Robots' all worked well.
The feedback loops informing our design phase A proved valuable, as they helped
us optimise the wall's design and prevent it from collapsing during and after the
fabrication process. The final wall was able to stand and support its own weight;
however, the lack of adhesives compromised its overall stability.
Our second design experiment was conducted with an interlocking brick equipped
with a tenon-mortise interlocking mechanism enabling the brick's rotation in the x,y
plane by up to 60 degrees, as shown in Fig. 5.
The design and stability simulation phases were adjusted to correspond to the
brick's geometry, producing a curvilinear wall generated by the layer-by-layer
rotation of each brick, varying from 0 to 60 degrees in the x,y plane.
Having successfully passed the assembly simulation, we proceeded with the physical
assembly, only to find that the process suddenly stopped. The experiment failed because
placement tolerances added up with each assembled layer, leaving the upper brick's
tenon out of position to fit into the mortise of the lower brick.
The experiment was repeated after adding a total of 8 mm of tolerance, 4 mm on the
tenon plus 4 mm on the mortise, and it proved successful (variant 2). We then
progressively reduced the tolerances until we found the minimum value at which
the assembly could still be completed successfully (variant 3), which was a total of 4 mm,
2 mm on the tenon plus 2 mm on the mortise (Fig. 6). The smaller tolerance led to a
more stable wall structure; thus variant 3 was more stable than variant 2.
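The tolerance behaviour observed here can be illustrated with a toy error-accumulation model. This is our own sketch, not the chapter's method: the assumption is that placement errors accumulate layer by layer, the assembly stalls once the accumulated error exceeds the joint clearance, and the interlocking joint partially re-centres each brick. All numbers and the self-centring factor are invented.

```python
# Sketch (illustrative only): why a zero-clearance interlocking assembly stalls
# while a small joint clearance succeeds. The error model and all numeric
# values are assumptions, not measurements from the experiments.

def assembly_succeeds(layers, per_layer_error_mm, clearance_mm):
    """Return True if the tenon still fits the mortise on every layer,
    assuming placement errors accumulate layer by layer."""
    cumulative = 0.0
    for _ in range(layers):
        cumulative += per_layer_error_mm
        if cumulative > clearance_mm:
            return False          # tenon misses the mortise: assembly stops
        # an interlocking joint re-seats the brick, resetting part of the error
        cumulative *= 0.5         # assumed self-centring effect of the joint
    return True

# zero clearance fails immediately; 2 mm per side (4 mm total) succeeds
print(assembly_succeeds(layers=10, per_layer_error_mm=0.8, clearance_mm=0.0))  # False
print(assembly_succeeds(layers=10, per_layer_error_mm=0.8, clearance_mm=2.0))  # True
```

The sketch also reproduces the trade-off noted in the text: the clearance must exceed the steady-state accumulated error, but no more, since larger clearances reduce the structure's stability.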
The findings deriving from design experiment two highlighted that the addition
of the joint brought forward the tolerance issue, which was not predicted by
the simulation processes. By adding the right amount of tolerance, in this
case 4 mm, the final product was completed successfully. Overall, the experiment
proved that the joint geometry can enable the assembly of either a standard straight
wall or a complex curvilinear structure, both of them being structurally stable.
Our third design experiment examined the design and fabrication of a twisted column
made out of interlocking bricks, as described in Fig. 7. Their mortise-tenon type joint
allows interlocking to take place in both horizontal and vertical directions. Four
slightly modified bricks were designed for the column’s final layer, where the vertical
interlocking mechanism was left out.
The column design passed both stability and assembly simulations successfully.
The bricks were fabricated with a tolerance of 4 mm in total, that is, 2 mm on the tenon
and 2 mm on the mortise, the same tolerances utilised in design experiment two, and
the physical robotic assembly was successful, as shown in Fig. 8.
The findings deriving from experiment three indicate that our framework operates
successfully even with the construction of more complex interlocking structures,
where bricks interlock in horizontal and vertical directions. The column is
very stable and appears to be more stable than the wall constructed in experiment
two; however, the column's geometry is bound to the geometry of the brick and does
not allow the assembly of differently shaped columns.
5 Findings
Our findings regarding the three design experiments can be summarised as follows:
(a) the proposed design-to-fabrication framework operated successfully, being
able to design and assemble a variety of interlocking brick geometries, ranging from a
simple straight wall up to curvilinear walls with rotating bricks and twisted columns;
(b) the incorporation of tolerances into the interlocking brick geometry is crucial
for the successful completion of the robotic assembly and was determined at 4 mm
in total (2 mm on the tenon and 2 mm on the mortise); (c) the smaller the tolerances,
the more stable the brick structure; (d) the geometries of brick joints, interlocking
structures and their stability are highly linked to each other: structures built out of
custom-made bricks (e.g. in experiment three) are more stable than structures built
from universally shaped bricks, i.e. bricks capable of being used for a wider range
of structures (e.g. in experiment two); (e) the brick feeding process is sensitive and
can influence the tolerances required for a successful assembly.
6 Conclusions
Our conclusions focus on answering our research questions. First, how can we
develop an algorithmic file-to-fabrication framework for the robotic assembly of
mortar-free masonry structures? The proposed parametric design-to-fabrication
framework (Fig. 1), developed in a Rhinoceros/Grasshopper environment
incorporating the PhysX and Robots plug-ins, proved fully operative, capable
of designing and fabricating a wide range of geometries, from very simple straight
walls to twisted columns.
How does the geometry of the brick influence the overall geometry of the masonry
structure? The brick design, and particularly its joint geometry, has a strong impact
on the brick structure's shape and stability. As evident in design experiment two,
a universal interlocking brick mechanism is possible, although it is still bound to a
limited range of wall geometries. As evident in experiment three, if the
complexity of the column/wall geometry increases, a specifically customised brick
geometry is required to describe it.
What are the tolerances required for the accomplishment of the entire design-to-
fabrication process? The robotic assembly of interlocking brick elements requires
tolerances in order to be accomplished successfully. In both design experiments two
and three, 4 mm of tolerance was the minimum necessary. However, in both
experiments, the structures could be assembled manually without any tolerances.
Tolerances are influenced by external parameters such as the assembly plane
and the feeding mechanism: a less accurately levelled assembly plane requires higher
tolerances than a perfectly levelled base, and the same applies to the base of the brick-
feeding stockpiles. Overall, the higher the tolerances, the lower the stability of the
interlocking structure.
One of the most important limitations of this research was that the experiments
were conducted with mockup bricks made of Styrofoam, which weigh significantly
less than bricks made of clay. The additional weight will have a significant impact
on the structure's overall stability, which, however, already appears to be high due to
the interlocking mechanism. Further investigation and additional experiments with
clay bricks are planned in our future research. During those experiments, other
parameters, such as the compatibility of the tenon-mortise interfaces applied in our
experiments with the properties of clay, require further investigation as well.
Finally, further optimisation techniques, such as topology optimisation and
voxelization could be embedded into our form-finding algorithm, aiming to minimise
the required amount of bricks in order to achieve the desired form.
Acknowledgements We would like to express our gratitude and acknowledgement to Afshana Ali,
Zefeng Zhuang and Chang Zhang for executing design experiment one, Jinbi Jiang and Tianyang
Li for executing design experiment two and Pattanan Inharwararak, Aman Mahmood Khan and
Dengwei Tao for executing design experiment three. Finally, our warm thanks go to the entire team
of the Creative Workshop of the Liverpool School of Architecture without whose efforts and support
this research would not have been possible.
118 A. Agkathidis et al.
References
1. Fiala J, Mikolas M, Junior JF, Krejsova K (2019) History and evolution of full bricks of other
European countries. In: IOP conference series: materials science and engineering, vol 603, no
3. IOP Publishing, p 032097
2. Potter B (2021) Bricks and the industrial revolution. In: Construction physics. Retrieved from
https://constructionphysics.substack.com/p/bricks-and-the-industrial-revolution. Accessed in
July 2022
3. Potter B (2021) Where are the robotic bricklayers? In: Construction physics. Retrieved
from https://constructionphysics.substack.com/p/where-are-the-robotic-bricklayers. Accessed
in July 2022
4. Slocum A, Schena B (1988) Blockbot: a robot to automate construction of cement block walls.
Rob Auton Syst 4(2):111–129. ISSN 0921-8890. https://doi.org/10.1016/0921-8890(88)90020-6
5. Kintingu SH (2009) Design of interlocking bricks for enhanced wall construction, flexibility,
alignment accuracy and load bearing. Ph.D. thesis, University of Warwick
6. Retrieved from https://www.grasshopper3d.com/. Accessed in July 2022
7. Rihani RA, Bernold LE (1996) Methods of control for robotic brick masonry. Autom Constr
4(4):281–292
8. Gramazio F, Kohler M (2008) Retrieved from https://www.dezeen.com/2009/09/16/pike-loop-
by-gramazio-kohler/. Accessed in July 2022
9. Gramazio F, Kohler M (2010) Retrieved from https://gramaziokohler.arch.ethz.ch/web/e/lehre/
175.html. Accessed in July 2022
10. Helm V, Ercan S, Gramazio F, Kohler M (2012) Mobile robotic fabrication on construction
sites: DimRob. In: 2012 IEEE/RSJ International conference on intelligent robots and systems.
IEEE, pp 4335–4341
11. Oliveira R, Sousa JP (2016) Building traditions with digital research-reviewing the brick
architecture of Raúl Hestnes Ferreira through robotic fabrication
12. Willmann J, Augugliaro F, Cadalbert T, D’Andrea R, Gramazio F, Kohler M (2012) Aerial
robotic construction towards a new field of architectural research. Int J Archit Comput
10(3):439–459
13. Fingrut A, Crolla K, Lau D (2019) Automation complexity-brick by brick. In: CAADRIA,
Wellington, pp 93–102. https://doi.org/10.52842/conf.caadria.2019.1.093
14. Retrieved from https://docs.microsoft.com/en-us/windows/python/scripting. Accessed in July
2022
15. Ariza I, Gazit M (2015) On-site robotic assembly of double-curved self-supporting structures.
In: SiGraDI, Florianópolis, SC, Brasil pp 746–753
16. Parascho S, Han IX, Walker S, Beghini A, Bruun EP, Adriaenssens S (2020) Robotic vault: a
cooperative robotic assembly method for brick vault construction. Constr Robot 4(3):117–126
17. Rossi A, Tessmann O (2017) Collaborative assembly of digital materials. In: ACADIA,
Cambridge, MA, pp 512–521. https://doi.org/10.52842/conf.acadia.2017.512
18. Retrieved from https://www.food4rhino.com/en/app/wasp. Accessed in July 2022
19. Duballet R, Baverel O, Durrenberger J (2019) Space truss masonry walls with robotic mortar
extrusion. Struct 18:41–47. ISSN 2352-0124. https://doi.org/10.1016/j.istruc.2018.11.003
20. Retrieved from https://www.food4rhino.com/en/app/physxgh. Accessed in July 2022
21. Retrieved from https://www.food4rhino.com/en/app/robots. Accessed in July 2022
Knowledge-Based Design:
A Function-Knowledge Reasoning Model
for Product Conceptual Design
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 119
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_6
120 Y. Jiang et al.
Product conceptual design is an early stage of design, and the main role of product
conceptual design is to generate the conceptual scheme (CS) of the product that meets
the user requirements [1–3]. The quality of the formed CS plays a decisive role in the
final quality of the product, so this is why 70% of the total cost of the product life cycle
is determined by the product conceptual design stage [4]. With the advancement of
technology and society, the difficulty of product conceptual design is also increasing.
It is mainly reflected in the difficulty of capturing the user requirements that exist in
the market.
A user is someone who wants to buy and use a new product, and a requirement
expresses any one characteristic that a user expects from a product [5, 6]. The ability to
understand and meet user requirements is the key to the success of product conceptual
design. Usually, there are certain differences among user groups: different users
use products differently, and their purposes of use also differ. Therefore, it is
necessary to understand the differences between user requirements. The types of
user requirements are as follows [6, 7]:
• Direct requirements: When users are asked about their requirements, such require-
ments can be directly expressed by users. Such requirements can be obtained
directly through some methods, such as interviews with users or questionnaires.
• Latent requirements: Although users have clear desires, they cannot express these
requirements explicitly. Such requirements are difficult to obtain directly from
users, yet most consumption is driven by these kinds of requirements.
As Harvard Business School Professor Theodore Levitt said:
‘People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!’
1.4 Knowledge
Knowledge is a broad, complex, abstract, and even vague concept that has different
definitions from various perspectives [5]. Rather than seeking a uniform definition,
its essence can be understood from the following aspects: (1) Knowledge
is the summation of human experience gained in practice about society, thinking
phenomena, and the natural world. (2) Information is the carrier of knowledge, and
a part of it needs to be preserved and circulated using material carriers. (3) From a
static point of view, knowledge is represented as a knowledge product with a certain
structure; from a dynamic point of view, knowledge is generated, transmitted and
used in a continuous flow.
Knowledge plays a key role in product conceptual design, and product conceptual
design proceeds with the flow and application of knowledge. The basis of product
conceptual design depends on the amount and the degree of utilization of knowledge.
Because product conceptual design is a complex and iterative process, the knowledge
used in it involves multiple disciplines and fields, which is tedious and complex.
To sum up, the knowledge used in product conceptual design has the following
characteristics, as shown in Table 1.
Product conceptual design also needs a knowledge support system, which can
provide designers with the methods and knowledge they need during the design
process, that is, knowledge reasoning. Knowledge-based design (KBD) is a kind of
knowledge-based system (KBS) that provides designers with the methods and knowledge they need during the
design process [13]. The key technologies in KBS are knowledge retrieval and knowl-
edge reasoning. Knowledge retrieval is the way of “people looking for knowledge”,
while knowledge reasoning is like “knowledge looking for people”. Through KBS,
more knowledge can be reasoned to stimulate the creative thinking of designers. It
can guide designers to integrate and transfer multi-domain and multi-disciplinary
knowledge and improve designers’ design capabilities.
To better carry out the product conceptual design process, it needs to be supported
by design methods. The design methods need to be derived from basic laws and prin-
ciples. This section will introduce some typical design methods, including Quality-
function Deployment (QFD), Axiomatic Design (AD), and the Function-Behavior-
Structure model (FBS).
1. Requirements acquisition and transformation-QFD
The core content of QFD is to acquire and transform requirements; in addition,
it also includes analysis tools such as part configuration and process planning.
QFD takes house of quality (HoQ) as a typical model, as shown in Fig. 2, which can
intuitively connect user requirements with product performance.
The structure shows the correlation between different user requirements (Fig. 2A):
each junction in the ceiling is marked '+', '−' or blank to indicate a positive
correlation, a negative correlation or no correlation, respectively. The basic
building blocks of the HoQ are shown in Fig. 2B: the 'left wall' of the HoQ holds the user
requirements and requirement weights; the 'ceiling' holds the technical characteristics;
the 'roof' is the correlation matrix; the 'room' is the relationship matrix;
the 'right wall' is the feasibility assessment matrix, and the 'floor' is the technical
assessment matrix. These six parts make up the HoQ model for product planning. This
model takes user requirements as input and translates discrete, vague requirements
into explicit engineering and technical characteristics.
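The HoQ 'room' computation that turns weighted user requirements into technical-characteristic importance scores can be sketched as follows. The requirements, weights, and the 9-3-1 relationship scale are illustrative assumptions, not taken from the chapter:

```python
# Illustrative HoQ sketch: translating weighted user requirements into
# technical-characteristic importance scores. All inputs are invented.

requirements = ["easy to grip", "durable", "lightweight"]
weights      = [0.5, 0.3, 0.2]          # 'left wall': requirement weights

characteristics = ["handle diameter", "wall thickness", "material density"]

# 'room': relationship matrix, rows = requirements, cols = characteristics
# (9 = strong, 3 = medium, 1 = weak, 0 = none, a common QFD convention)
relationship = [
    [9, 0, 1],
    [1, 9, 3],
    [0, 3, 9],
]

# 'floor': technical assessment, the weighted importance of each characteristic
importance = [
    sum(w * relationship[i][j] for i, w in enumerate(weights))
    for j in range(len(characteristics))
]
for name, score in zip(characteristics, importance):
    print(f"{name}: {score:.1f}")
```

Ranking the characteristics by these scores tells the design team which technical parameters deserve the most attention, which is exactly the translation from vague requirements to engineering characteristics the text describes.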
2. Functional decomposition and solution-AD
AD divides the design world into four domains: user domain, functional domain,
physical domain and process domain. Each domain has its elements, namely user
requirements, product functions, design parameters and process variables. AD carries
out product conceptual design by performing zigzagging mapping transformation
between two adjacent design domains and uses design axioms to judge the rationality
and optimization of the design during the mapping process. Compared with other
design theories, AD does not simply complete the design within each domain in
isolation; it performs top-down mapping transformations between adjacent design
domains, fully considering their mutual relationships. The structure and mapping
principles of AD are shown in Fig. 3.
In AD, the feature vectors move from the domains on the left to the domains on
the right during the product conceptual design process. This process is iterative, as
the feature vectors can return to the left domain through the ideas generated by the
designer in the right domain. The domain on the left represents 'what we want
to achieve' relative to the domain on its right, while the domain on the right represents
the solution, that is, 'how we propose to meet the requirements specified by the
domain on the left'.
The mapping from a functional domain to a physical domain will be used as
an example to illustrate the mapping relationship of each domain in AD. Designers
should first determine what kind of functions the product should have, in order to
define the overall product function (PF0); then, starting from PF0, determine the overall
design parameter (DP0). PF0 is then decomposed into sub-product functions according
to DP0, and the design parameters of this level are determined according to the
sub-product functions. When every sub-product function of a given level is fully
satisfied, the next level of sub-product functions is decomposed. After the zigzagging mappings,
the hierarchical structure tree of functional hierarchy and design parameters, as well
as the relationship between design parameters and product functions can be obtained.
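The zigzagging decomposition above can be sketched as a small tree structure. The ballpoint-pen PF/DP labels below are invented purely for illustration and are not from the chapter:

```python
# Toy sketch of AD's zigzagging decomposition: each product function (PF)
# is mapped to a design parameter (DP), and sub-PFs are derived in the
# context of the parent DP. The pen example is illustrative only.

class Node:
    def __init__(self, pf, dp):
        self.pf, self.dp, self.children = pf, dp, []

    def zigzag(self, sub_pairs):
        """Decompose this PF into sub-PF/DP pairs (one zigzag step)."""
        self.children = [Node(pf, dp) for pf, dp in sub_pairs]
        return self.children

root = Node("PF0: write on paper", "DP0: ballpoint pen")
root.zigzag([
    ("PF1: deliver ink",  "DP1: ink cartridge"),
    ("PF2: control line", "DP2: rolling ball tip"),
    ("PF3: enable grip",  "DP3: barrel geometry"),
])

def show(node, depth=0):
    """Print the hierarchical PF -> DP structure tree."""
    print("  " * depth + f"{node.pf}  ->  {node.dp}")
    for child in node.children:
        show(child, depth + 1)

show(root)
```

Repeating `zigzag` on each child yields the hierarchical structure tree of functions and design parameters that the text describes.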
3. Mapping from function to structure-FBS
In FBS, there are three types of variables: Function variables, behavior variables and
structure variables [17]. These three types of variables’ characteristics are as follows:
• Function variables express the purpose of a product being designed, i.e. what a
product is for.
• Behavior variables express the attributes of a product being designed. These
attributes are abstracted or expected to be abstracted from the structure variables.
• Structure variables express the parts and their relationships to a product being
designed.
The FBS provides an efficient means for the product conceptual design process. It
explores the product conceptual design space by using the behavior variables of the
product, which act as a bridge between function and structure variables. Therefore, it
can realize the transformation from the required function to the product structure and
can reduce the abstract thinking of the designer, which is beneficial to the acquisition
and transformation of the original understanding.
Product functions, as the embodiment of a product conceptual design goal, need
to be satisfied in the final generated conceptual scheme. Various product functions
in product conceptual design are realized through corresponding knowledge. The
methods described above effectively assist designers in the product conceptual design
stage and have been widely used. To further advance the application of knowledge
in product conceptual design, a knowledge deployment method to help designers
acquire knowledge in the product conceptual design stage is needed.
scheme generation process again, and the generated scheme will be evaluated again
until the design requirements are met.
$$K_i = f_{(i,j)} \oplus S_i \quad (i \in \mathbb{N})$$
where $S_i$ represents the structural parameters by which the product function $f_{(i,j)}$
will be implemented, and $K_i$ represents the knowledge required for product conceptual design.
Semantic constraints ($SC_f$) and single-function predicates are used in the expression
of product functions: the scope, object or mode of action of a functional predicate is
supplemented by $SC_f$. Nouns and their constraints ($C_S$) represent the structure.
Sub-product functions are independent after decomposition based on AD, and they
have the following relationship with the constrained knowledge model ($C_K$):
$$\begin{cases}
K_1 \cap K_2 \cap \cdots \cap K_n = \emptyset \\
K_1 \cup K_2 \cup \cdots \cup K_n = \sum K \\
C_K = \left\{ f_i\{\exists C_f\} \oplus S_j\{\exists C_S\} \right\} \quad (i, j \in \mathbb{N})
\end{cases}$$
$$Sn(f, K) = \frac{1}{|a - b| + 1}$$
where Sn( f, K ) represents the similarity between knowledge and a sub-product func-
tion, a represents the similarity value between a sub-product function and product
attribute, and b represents the similarity value between knowledge and product
attribute. The smaller the difference between the values of $a$ and $b$, the more similar
the new sub-product function and the knowledge are in this product attribute.
When a product attribute is represented by character values, then:
$$Sc(f, K) = \begin{cases} 1, & a = b \\ 0, & a \neq b \end{cases}$$
where $Sc(f, K)$ represents the similarity between knowledge and a sub-product
function for character-valued attributes. For discrete-valued attributes, a similarity
$Sd(f, K)$ is defined analogously, where $c$ represents the largest of the discrete
values; again, the smaller the difference between the values of $a$ and $b$, the more
similar the new sub-product function and the knowledge are in this product attribute.
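The local similarity measures for numeric and character attributes transcribe directly into code; the attribute values used below are invented examples:

```python
# Local similarity measures from the text, for a single product attribute.
# Example attribute values are illustrative only.

def sn(a, b):
    """Numeric-attribute similarity: 1 / (|a - b| + 1)."""
    return 1.0 / (abs(a - b) + 1.0)

def sc(a, b):
    """Character-attribute similarity: 1 if the values match, else 0."""
    return 1.0 if a == b else 0.0

print(sn(5, 5))              # 1.0 (identical attribute values)
print(sn(5, 9))              # 0.2
print(sc("steel", "steel"))  # 1.0
print(sc("steel", "brass"))  # 0.0
```

Both functions return values in [0, 1], so they can be weighted and combined uniformly in the global similarity computation.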
The global similarity comparison method will be used in the model to realize
knowledge reasoning after determining the similarity between each sub-product func-
tion and knowledge [19, 20]. The similarity between product function and knowledge
can be shown as follows:
$$Sim\big(f_{(i,j)}, K\big) = \frac{p}{m + n + p} \sum_{i=1}^{p} \left( \omega_i \times S\big(f, K_{(i,j)}\big) \right)$$
[Figure: knowledge-based system architecture, comprising a knowledge source layer (scientific effect, applied, tacit and innovative method knowledge, contributed by domain experts and designers), a user layer (knowledge entry and knowledge search through the user interface), and a storage layer (OWL document model library, ontology document source library, MySQL and HTML documents), connected via the knowledge management platform supporting issue analysis, issue solving and conceptual scheme generation against conceptual design knowledge standards and engineering feature ontologies.]
When a designer starts a conceptual design process, he or she first needs to determine
'what to do'. After this, the designer must further clarify 'how to do it'.
Finally, the designer needs to evaluate and manage the schemes he or she has
designed. The function modules and their relationships are shown in Fig. 7.
The role of each function module is (1) Task analysis module: Including user
requirements analysis and design issue definition. It is an activity to identify new
user requirements to improve the competitiveness of existing products and identify
design issues for new products for the subsequent design process. Its basic point lies
in user expectations and requirements; (2) Scheme design module: Including issue
analysis and issue solving. It is a design activity for new products, and the focus of
the design needs to be centred on solving issues and generating conceptual schemes.
The function-knowledge reasoning model will be applied in this module; (3) Scheme
management module: It contains the functions of storage and evaluation for initial
design schemes. These initial schemes will be optimized or used in future product
conceptual design work; (4) Knowledge management module: Its main function is
to provide knowledge support for the aforementioned function modules. It mainly
realizes the function by calling the knowledge management platform. In addition, this
module can also directly operate the knowledge base in the knowledge management
platform.
When designers use the system for product conceptual design, they need to have
a specific operation process as a guide.
[Figure: system operation flowchart: after login and authentication, the designer creates a new project and confirms the design tasks; schemes are generated by calling the knowledge management platform component and, if the designer is not satisfied, the loop repeats; once satisfied, a scheme generation report is recorded and the designer logs out, while the system administrator maintains the knowledge management module and the platform modification component.]
system, designers can call various knowledge resources at any time during the design
process.
management modules are used to guide designers through the entire conceptual
design process. The report module is used to present past design tasks, task flows,
and conceptual proposals. On the right side of the interface is the workspace window,
which is used for specific operations on the projects. At the bottom of the interface
is the thinking record window, where designers can record creative ideas during the
design process in the input pane, including but not limited to pictures, hand-drawn
information, text, and so on.
The figure shows the functional decomposition of a friction and wear tester using
the product conceptual design system. Using the QFD and AD in the function-
knowledge reasoning model, we can divide the friction and wear tester into several
function modules: the power module, transmission module, driver module, control
module, etc., which are sub-product functions. Then the mapping between the decom-
posed sub-product functions and knowledge is implemented in the system using the
function-knowledge reasoning model. The power module is mapped to a switching
power supply, adjustable power supply, battery supply and machine tool power
supply. The transmission module is mapped to hydraulic transmission, mechanical
transmission, and pneumatic transmission. The driver module is mapped to servo
motor, stepping motor, linear motor, and steering gear. The system will recommend
schemes with higher similarity according to the similarity calculation. When clicking
the knowledge reasoning component extension button on the right, the combination
plan will be displayed, as shown in Fig. 10.
On the left side of the knowledge reasoning window, the relationship between the
knowledge required for different product sub-functions obtained by the similarity
algorithm is displayed. On the right side of the knowledge reasoning window, the
two most relevant knowledge combination plans are pushed by the system according
to the knowledge relationships. Plan A uses a combination of PLC, servo motor, fixed
nozzle and oil-based lubricating fluid, and plan B uses a combination of FPGA, servo
motor, rotary nozzle, and water-based lubricating fluid. The system will also intro-
duce the specific application of knowledge, such as PLC and FPGA for control, servo
motor for driving, fixed nozzle and rotary nozzle for diversion, oil-based lubricating
fluid and water-based lubricating fluid for lubrication. Designers can assist them-
selves in conceptual scheme design according to the plan provided by the system to
improve design efficiency and feasibility of conceptual schemes.
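How the system might rank such combination plans can be sketched as follows. The candidate items and similarity scores are invented, and the mean-similarity scoring is our assumption rather than the system's documented algorithm:

```python
# Hypothetical sketch: enumerate one knowledge item per sub-function and
# keep the best-scoring combinations. All names and scores are invented.

from itertools import product

candidates = {
    "control":     [("PLC", 0.92), ("FPGA", 0.88)],
    "driving":     [("servo motor", 0.95), ("stepping motor", 0.81)],
    "lubrication": [("oil-based fluid", 0.84), ("water-based fluid", 0.80)],
}

plans = []
for combo in product(*candidates.values()):
    names = [name for name, _ in combo]
    score = sum(s for _, s in combo) / len(combo)   # mean similarity (assumed)
    plans.append((score, names))

# push the two best-scoring combination plans, mirroring plan A / plan B
for score, names in sorted(plans, reverse=True)[:2]:
    print(f"{score:.3f}  {' + '.join(names)}")
```

A real system would additionally filter combinations using the knowledge relationships (e.g. compatibility between items), rather than score items independently as this sketch does.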
5 Conclusion
Knowledge plays an important role in the product conceptual design stage. In this
chapter, a function-knowledge reasoning model to help knowledge acquisition and
application during the product conceptual design stage is introduced. The main
contents are as follows: (1) The concepts and characteristics of product concep-
tual design, user requirements, product function and knowledge are introduced. In
addition, the concepts and functions of KBS and KBD are also explained; (2) The
process of product conceptual design and the knowledge used in each step in the
process is articulated. Some design methods that support product conceptual design
are described, including QFD, AD, and FBS; (3) A function-knowledge reasoning
model is introduced, including its components and the role of each part. The model
realizes the mapping from product function to knowledge. The similarity algorithm
of the function-knowledge reasoning model is also reported; (4) A knowledge-based
product conceptual design system is proposed and explained, including its system
architecture, knowledge management platform, function modules of the system and
its operational process. The system interface is presented, and the functions of the
modules and components on the interface are detailed.
References
1 Introduction
Ch. Tzivelekis
School of Engineering, Newcastle University, Stephenson Building, Newcastle NE1 7RU, UK
e-mail: babis.tzivelekis@newcastle.ac.uk
A. A. Krimpenis (B)
Core Department, National and Kapodistrian University of Athens, Psacha Eyvoias, 34400
Athens, Greece
e-mail: akrimpenis@uoa.gr
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 141
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_7
142 Ch. Tzivelekis and A. A. Krimpenis
few microns in size and average surface roughness at the nanometre scale [1]. For
this reason, micromachining is popular in rapid tool manufacturing for casting or
light patterning processes [2–4]. However, removing material at the microscale is
not simply a downscaling of traditional machining.
The fundamental orthogonal cutting mechanism that dominates at the macro scale
only partially represents the cutting process at the micro scale, as the cutting edge of
the tool is not perfectly sharp but presents a radius [5]. When the uncut chip thickness
becomes comparable to that radius, plastic deformation occurs and material is not
removed. The radius of the cutting edge results in a negative rake angle that is
associated with an increase in slipping and ploughing forces, along with power
consumption, specific cutting energy and temperature [6]. These factors significantly
affect surface quality by inducing material softening due to increased temperature [7].
The minimum chip thickness (MCT), below which a chip is not formed, is suggested
to be 30% of re [5], where re is the cutting-edge radius, or 10–25% in the case of
ductile metals [6]. Considering the minimum chip thickness constraint, the manufacturing
process can be programmed in CAM software similarly to conventional milling and
be post-processed to extract the respective G-code. As in conventional machining, in
micromachining, critical cutting parameters, such as cutting depth and width, spindle
speed, tool diameter and geometry and cutting feed rate, influence at a great extent
the produced surface quality and must be carefully considered, according to part
material and geometries to be produced. Literature in micro-milling process moni-
toring and optimization shows that depth of cut, feed rate and spindle speed are the
main parameters that affect surface quality [8, 9].
Sufficient cooling of the machining area and efficient chip removal should compensate for temperature-induced deformations and prevent cut chips from engaging with the tool and/or the workpiece. Vortex tubes have also been suggested for machining under freezing temperatures, where plastic deformation is minimal; this has proven very efficient for machining thermoplastic polymers and prolonging tool life [10]. Technically, the cutting width is not maintained constant but does not exceed the user-defined value [11]. In roughing toolpaths, a 40% cutting width can contribute to increased stability of the process; however, it might increase machining time. In finishing toolpaths, such a percentage can optimize surface quality, as the remaining 60% of the cutting tool's diameter will re-machine the previous cutting pass. Spindle speed and tool diameter determine the cutting velocity, which represents the rate at which material is removed. Cutting velocity generates the shear force required to effectively remove material, producing a clean cut and consistent chips. Recommended cutting velocity values for specific materials are available in the literature, particularly for metals; however, they should be optimized for specific setups and material grades. Cutting velocity is proportional to tool diameter and spindle speed. Therefore, machining hard materials with
micro-endmills requires high spindle speed to generate adequate cutting force, effi-
ciently remove material and prevent material chips from clogging cutting flutes of
the tool. For this purpose, air bearings are commonly used in high-speed spindles
¹ The ratio of the cutting force acting on the tool to the formed chip section.
Equation Driven Micro-Milling of 2D Free Form Models and Off-Line … 143
for CNC micro-milling and support spindle speeds up to 200 krpm, while compen-
sating for tool vibrations and deflections, which can critically affect part dimen-
sions, surface quality and texture. The high cost of air bearing spindles significantly
contributes to increased costs for a micromachining setup. Theoretically, any solid
material can be machined using appropriate cutting parameters and specific cutting
tool type. However, very hard materials would require unrealistic spindle speeds
and/or expensive cutting tools, such as cubic boron nitride (CBN) tools, and there-
fore non-conventional processes, such as Electrical Discharge Machining (EDM) or
Electro-chemical Machining (ECM), might be more appropriate.
Cutting feed rate is an additional decisive factor for the produced surface quality
and optimal values are dependent on material, cutting tool, and spindle speed. Main-
taining an acceptable constant chip load per cutting tooth to limit excessive cutting
forces or maintaining a certain Material Removal Rate (MRR) are well-established
methods of feed rate variation. Curvature-based variation of feed rate also has been
shown to increase machining efficiency, in terms of both machining time and surface
quality. Geometry-based variation of feed rate during machining can be implemented based on the parametric representation of a toolpath. Parametric curves such as Simpson, Bézier or NURBS splines can be used for the parametric design of common engineering components and for off-line variable feed rate G-code generation for complex geometries that can be mathematically represented with one or more curves [12–15].
Such methods can be easily implemented in parametric design and manufacturing
applications [16, 17]. Some modern CAM systems incorporate feed variation in post-processed toolpaths; however, costly licensing does not allow for simple experimentation with such methods. Custom developed applications interfaced with CAD
software through its Application Programming Interface (API) enable parametric
design, and off-line toolpath generation for machining of engineering components.
Parametric design and manufacturing have particularly interesting applications in
high volume manufacturing of customized engineering components [8, 18].
This study describes the equation-driven design and manufacturing methodology of parametric 3D models and considers the process planning for micro-milling miniature components made of brass. Cutting conditions are summarized and their impact on surface quality and material removal rate (MRR) is discussed. The background of curvature-based feed rate variation is explored and briefly implemented. The rest of the article is structured as follows: Sect. 2 presents the mathematical background and application of designing micro-parts through parametric equations, as well as the methodology to create the G-codes for their CNC manufacturing. In Sect. 3, the specifics of applying the proposed methodology of Sect. 2 are given, and results are presented and commented on in Sect. 4. Section 5 discusses the effectiveness and applicability of the proposed methodology and introduces concepts for its further development. Finally, the study concludes with its findings in Sect. 6.
An impeller’s blade can be described using Simpson’s rule for a parametric curve
[16]. The angle of each vector tangent to the curve is given by Simpson equation
Eq. (1).
$$d\varphi = \frac{dr}{r\,\tan\beta(r)} \tag{1}$$

$$\varphi(r) = \frac{180}{\pi} \int_{r_1}^{r_2} \frac{1}{r\,\tan\beta(r)}\, dr \tag{2}$$

$$\beta(r) = \beta_1 + \frac{\beta_2 - \beta_1}{r_2 - r_1}(r - r_1) \tag{3}$$
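Substituting the linear blade-angle distribution of Eq. (3) into Eq. (2), the wrap angle φ(r) can be evaluated numerically. The sketch below is a minimal Python illustration using composite Simpson integration; all numeric values in the example call are assumptions for demonstration, not values from the chapter.

```python
import math

def beta(r, r1, r2, b1, b2):
    # Linear blade-angle distribution between r1 and r2 (Eq. 3); angles in radians
    return b1 + (b2 - b1) / (r2 - r1) * (r - r1)

def wrap_angle_deg(r, r1, r2, b1, b2, n=200):
    # Composite Simpson integration of Eq. (2): phi(r) = (180/pi) * integral of 1/(r tan beta(r)) dr
    if n % 2:  # Simpson's rule needs an even number of intervals
        n += 1
    h = (r - r1) / n
    f = lambda x: 1.0 / (x * math.tan(beta(x, r1, r2, b1, b2)))
    s = f(r1) + f(r)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(r1 + i * h)
    return (180.0 / math.pi) * s * h / 3.0

# Illustrative (assumed) values: blade from r1 = 2 mm to r2 = 10 mm,
# blade angle varying linearly from 60 deg to 25 deg
phi = wrap_angle_deg(10.0, 2.0, 10.0, math.radians(60), math.radians(25))
```

For a constant blade angle of 45°, the integrand reduces to 1/r and the result can be checked against the closed form (180/π)·ln(r/r₁).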
Fig. 1 a Inserting Curve Points in the CAD model, b micro-impeller 3D CAD model, c spur
micro-gear 3D CAD model
The involute (parametric) curve of a spur gear (Fig. 2c) can be expressed by Eqs. (4) and (5).

$$x_{inv} = \frac{D_b}{2}(\cos t + t \sin t) \tag{4}$$

$$y_{inv} = \frac{D_b}{2}(\sin t - t \cos t) \tag{5}$$

with

$$0 \le t \le \sqrt{\frac{R_o^2}{R_b^2} - 1} \tag{6}$$
where Ro is the outer circle radius and Rb the base circle radius.

Fig. 2 a The process of obtaining the involute curve for a gear; C is the circular pitch. b Mirroring the involute curve for the other face of the gear teeth. c Trimming the sketch and patterning circularly according to the teeth number. d The respective 3D CAD model after extrusion equal to the gear width

To parametrically design a spur gear, the user inputs should determine the pitch diameter of the gear and the module number to determine the circular pitch and therefore the number of teeth. The base circle diameter can then be determined as
$$D_b = D_{pi}\cos a \tag{7}$$

where $D_b$ is the base circle diameter, $D_{pi}$ the pitch circle diameter, and $a$ the pressure angle (typically 14.5° or 20°). In practice, the involute curve for specific gears is designed by a series of steps. Initially, the circular pitch is defined as $P = m\pi$, where $m$ is the module of the gear, defined as the ratio of the pitch circle diameter to the number of teeth.
To design a spur gear in practice, the root, pitch, base and outer circle diameters have to be defined. Then, the involute curve coordinates can be approximated by Eqs. (4) and (5) by looping variable t, as defined in Eq. (6). From an arbitrary point
on the base circle (Fig. 2), an arc is drawn counterclockwise with length equal to
one circular pitch. From this point, a tangent line is drawn with length equal to 1xP.
From this new point, an arc of length of one circular pitch is drawn and from this new
point a tangent line to the base circle is drawn with length equal to 2xP. The process
is repeated for 6 tangents, with lengths up to 6xP. The endpoints of those lines, if
interpolated, represent the involute curve of the designed gear, which is mirrored to
obtain the other side of the tooth. The involute curve is trimmed at root and outer
diameter to form the tooth. After the involute curve has been designed, the sketch is extruded to a thickness equal to the desired gear face width value and patterned circularly according to the number of teeth N defined, with an angular distribution equal to 360°/N.
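The coordinate loop described above, Eqs. (4)–(7), can be scripted directly. The following Python sketch generates the points of one involute flank; the module, tooth count and the addendum-equals-module convention used for the outer radius are illustrative assumptions, not values taken from the chapter.

```python
import math

def involute_points(module, teeth, pressure_angle_deg=20.0, steps=50):
    """Approximate one involute flank by looping t per Eqs. (4)-(6)."""
    Dpi = module * teeth                                    # pitch circle diameter
    Db = Dpi * math.cos(math.radians(pressure_angle_deg))   # base circle diameter, Eq. (7)
    Rb = Db / 2.0
    Ro = Dpi / 2.0 + module       # outer radius (assumes addendum = module)
    t_max = math.sqrt((Ro / Rb) ** 2 - 1.0)                 # upper bound of t, Eq. (6)
    pts = []
    for i in range(steps + 1):
        t = t_max * i / steps
        x = Rb * (math.cos(t) + t * math.sin(t))            # Eq. (4)
        y = Rb * (math.sin(t) - t * math.cos(t))            # Eq. (5)
        pts.append((x, y))
    return pts

pts = involute_points(module=0.1, teeth=20)  # illustrative micro-gear values
```

By construction the first point lies on the base circle and the last point on the outer circle, which gives a simple sanity check before mirroring and trimming the flank in CAD.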
The generalized flowchart for designing and creating the toolpath for the machining
of a parametric curve micro-part, to perform variable feed machining, is depicted in
Fig. 3. The proposed methodology can be developed using any typical programming language, but Visual Basic (VB) or one of its variations is usually preferred, since it is supported by the API of most CAD/CAM software, offering a direct link between the software and the application.
The consecutive steps of the proposed methodology, as performed sequentially
by the developed application are:
Step 1. Define parametric curve equation.
Step 2. Create curve control points from equation at desired resolution.
Step 3. Calculate curve coordinates.
Step 4. Use CAD software to design the curves and model the part.
Step 5. Define desired chord error, sampling time-step, tool radius.
Fig. 3 Off-line G-code generation flowchart for variable feed rate machining of parts defined by
parametric curves
Step 6. For each curve segment, calculate the radius of curvature, chord error, and feed values.
Step 7. Calculate toolpath points.
Step 8. Generate G-code and write NC file.
In the next section, we apply this methodology to an example B-spline part with all the relevant details.
$$C(u) = \sum_{i=0}^{k-1} P_i R_{i,n}(u) \tag{8}$$

$$R_{i,n}(u) = \frac{N_{i,n}(u)\, w_i}{\sum_{j=0}^{k-1} N_{j,n}(u)\, w_j} \tag{9}$$

$$N_{i,k}(u) = \frac{u - u_i}{u_{i+k-1} - u_i} N_{i,k-1}(u) + \frac{u_{i+k} - u}{u_{i+k} - u_{i+1}} N_{i+1,k-1}(u) \tag{10}$$

where $k$ is the number of control points of the curve, $u$ is the curve parameter, $u_i$ the knots, $P_i$ the control points of the curve, $w_i$ the corresponding weights of the control points, and $N_{i,n}$ the basis functions of the spline.
For method validation and demonstration purposes, two B-spline curves on a 20 mm disk model were chosen, each defined by four control points (Pi): P1: {9.7, 3}, P2: {11.5, 1.5}, P3: {11.5, −1.5}, P4: {9.7, −3} for the right curve and P1: {−9.7, 3}, P2: {−11.5, 1.5}, P3: {−11.5, −1.5}, P4: {−9.7, −3} for the left curve (Fig. 4a).
The two main curves were offset in both directions to generate sets of coordinates: points and toolpaths for different cutting passes, which are performed at different heights depending on the user-defined cutting depth. To test the curve
coordinates against the CAD model (Fig. 4b), a VB application was created. The
application calculates the curve coordinates, inserts them as individual points in the
CAD model and saves them into a text file. Based on the generated coordinates and
the user-defined parameters of cutting depth and width, desired chord error, feed rate,
and spindle speed, a linearly interpolated G-code was produced and saved into a text
file.
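The G-code writing step can be sketched in Python as below. The helper name, the header/footer codes and the reduced plunge feed are illustrative assumptions, not the chapter's actual VB implementation.

```python
def points_to_gcode(points, feed, spindle_rpm, z_cut, safe_z=1.0):
    """Hypothetical helper: emit a linearly interpolated (G1) toolpath for
    one cutting pass at depth z_cut, as a single G-code text block."""
    lines = [
        "G21 G90",                           # metric units, absolute coordinates
        f"S{spindle_rpm} M3",                # spindle on, clockwise
        f"G0 Z{safe_z:.3f}",                 # rapid to safe height
        f"G0 X{points[0][0]:.4f} Y{points[0][1]:.4f}",
        f"G1 Z{z_cut:.4f} F{feed / 2:.1f}",  # plunge at reduced feed (assumed)
    ]
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.4f} Y{y:.4f} F{feed:.1f}")
    lines += [f"G0 Z{safe_z:.3f}", "M5", "M30"]
    return "\n".join(lines)

gcode = points_to_gcode([(0, 0), (1.0, 0.5), (2.0, 0.0)],
                        feed=60, spindle_rpm=40000, z_cut=-0.1)
```

Writing a per-segment F word on every G1 block, as here, is also what makes the variable feed workflow of the next section possible: the constant `feed` argument is simply replaced by a per-segment value.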
Fig. 4 a B-spline control points, b B-spline points calculated out of the user defined control points,
inserted through API in CAD software
To vary the machining feed rate according to curvature, the variation objective and the upper and lower boundaries must be set. The feed rate of the cutting tool in the linear segment between two consecutive cutter locations $C(u_i)$ and $C(u_{i+1})$ of a B-spline toolpath, neglecting acceleration and deceleration, is expressed by Eq. (11) (constant feed rate along a linear segment).

$$f = \frac{L}{t_S} \tag{11}$$

where $f$ is the feed rate in mm/min, $L$ the length of the linear segment in mm, and $t_S$ the sampling time of the CNC loop control in seconds.
The chord error of the toolpath in a segment of the parametric curve is described as the vertical distance between the actual and nominal tool location and depends on the feed rate and the local radius of curvature [14], given by Eq. (12).

$$\varepsilon_i = \rho_i - \sqrt{\rho_i^2 - \left(\frac{L_i}{2}\right)^2} \tag{12}$$

where $i$ is the integer index of the point, $\varepsilon_i$ the chord error in mm, $\rho_i$ the radius of curvature of the parametric spline, and $L_i$ the linear segment length² in mm.
Curvature-based feed rate variation can be implemented for improving MRR,
while aiming at similar or better surface quality. Considering circular approximation,
by substituting Eq. (12) into Eq. (11) and by taking into consideration the radius of
curvature of the tool’s cutting edge, feed can be adjusted based on the user defined
maximum allowable chord error and the local radius of curvature as in Eq. (13).
² In case the knot vector of the spline is uniform, the spline is reduced to a uniform rational B-spline and thus L is constant.
$$f = \frac{2}{t_S}\sqrt{(\rho_i - r_e)^2 - \big((\rho_i - r_e) - \varepsilon_i\big)^2} \tag{13}$$
where the local radius of curvature of the parametric curve is given by Eq. (14).

$$\rho_i = \frac{\left(\dot{C}_x^2 + \dot{C}_y^2\right)^{3/2}}{\left|\dot{C}_x \ddot{C}_y - \dot{C}_y \ddot{C}_x\right|} \tag{14}$$

where $\dot{C}_x$ and $\dot{C}_y$ are the x and y components, respectively, of the first derivative of $C$, and $\ddot{C}_x$ and $\ddot{C}_y$ are the x and y components, respectively, of the second derivative of $C$.
Based on this method, the upper and lower feed rate limits are determined only by the curvature and the average feed rate value set according to the material properties.
Considering the minimum chip thickness criterion [13], the lower feed rate limit should be kept above a value that satisfies Eq. (15) (minimum chip thickness criterion for micromachining).

$$\frac{f}{z\,S} \ge 0.3\, r_e \tag{15}$$

where $f$ is the feed rate in mm/min, $z$ the number of teeth (cutting flutes), $S$ the spindle speed in rpm, and $r_e$ the cutting edge radius in mm.
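Eqs. (11)–(13) and the MCT bound can be combined into a per-segment feed calculator. The sketch below is a Python illustration; the clamping scheme and the 0.3·re feed-per-tooth bound are assumptions based on the MCT values quoted in the Introduction, not the chapter's exact implementation.

```python
import math

def segment_feed(rho, re, eps_max, t_s, f_min, f_max):
    """Curvature-based feed for one linear segment (Eq. 13), clamped to
    user limits. rho: local radius of curvature (mm), re: cutting-edge
    radius (mm), eps_max: max allowable chord error (mm), t_s: sampling
    time (s)."""
    R = rho - re                           # effective radius seen by the tool tip
    f = (2.0 / t_s) * math.sqrt(max(R * R - (R - eps_max) ** 2, 0.0))
    return min(max(f, f_min), f_max)       # clamp to [f_min, f_max]

def mct_lower_feed(re, z, S, k=0.3):
    # Assumed form of the minimum-chip-thickness bound: feed per tooth
    # f/(z*S) must stay above k*re (k ~ 0.3 per the cited MCT literature)
    return k * re * z * S

# Illustrative numbers close to the chapter's settings
f = segment_feed(rho=5.0, re=0.0005, eps_max=0.0001, t_s=1e-3,
                 f_min=50.0, f_max=58.0)
```

With these settings the unclamped value exceeds the upper limit, so the segment is machined at the maximum allowed feed, which is consistent with the narrow 50–58 mm/min range reported below.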
In the developed application, the maximum allowed feed was calculated for
each linear segment of the variable feed toolpath, tested against maximum and
minimum allowed feed values, and implemented in the G-code. The user-defined chord error, sampling time and tool tip radius were set as 0.0001 mm, 10⁻³ s and 0.0005 mm, respectively. Under these conditions, the resultant feed rate values ranged between 50 and 58 mm/min.
In order to assess and compare machining results for constant and variable feed machining, as proposed in Sect. 2, two workflows were implemented. The first workflow utilizes a well-known CAM software package, proven to work well on any 2D or 3D machining process, which applies constant feed for G-code creation. The second workflow involves the creation of variable feed G-code through the methodology presented in Sect. 2.3, which makes use of part coordinates and minimum chord error for feed calculations.
Fig. 5 For extracting the G-code from the CAM software, the two models, micro-impeller and micro-gear, were combined. a Micro-impeller after CAM simulation, b Micro-gear after CAM simulation (not shown in proportion to a, but in detail). Stock material for the micro-gear was the cylinder left in the center of the micro-impeller, as seen in a
After applying the methodology in Sect. 2.3, we obtained variable feed toolpaths
(Fig. 6) and created the respective ISO G-codes for execution in the same machine
as before. This model was machined out of a cylindrical brass block of 25 mm
diameter and 15 mm height (Fig. 7). The same tools were used, as in the constant
Table 1 Machining Time and parameter values for constant feed micromachining of combined
demonstration model. Legend: MT = Machining Time in hh:mm:ss, D = Cutting Tool Diameter in
mm, F = Cutting Feed rate in mm/min, S = Spindle speed in 1000 rpm, ap = Cutting depth in mm,
ae = Cutting width in mm, ft = feed per tooth in 10⁻³ mm/tooth, Vc = Cutting Velocity in m/min
Combined model MT D F S ap ae ft Vc
A: micro-gear
B: micro-impeller
Roughing A 01:32:58 0.5 80 30 0.15 0.2 1.33 47.1
Finishing A 00:02:03 0.5 60 40 0.1 0.15 0.75 62.8
Roughing B 01:59:48 1 120 25 0.2 0.4 2.4 78.5
Finishing B 00:30:53 1 80 35 0.1 0.4 1.43 109.9
Total 04:05:42
Retract rate: 400 mm/min, Plunge rate: 40 mm/min, cutting tools are uncoated carbide flat end mills,
Part material: brass
feed machining method. Figure 6 shows the correlation between the local radius of
the part and the feed adaptation, calculated by the developed application.
By comparing Tables 1 and 2, we can see that the application of the variable feed rate methodology leads to a significant reduction in machining time. Specifically, there is a reduction of 15.7% and 8.9% in the roughing and finishing time, respectively, of the micro-gear feature, while the roughing and finishing times of the micro-impeller feature were reduced by 14.4% and 15.0%, respectively. This is a significant improvement in productivity, since the feed rate values calculated by the proposed methodology are in the same vicinity as those of the constant feed runs. What remains is to compare the surface quality of the parts produced with the two workflows. This is presented in Sect. 4.
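These percentages can be checked directly from the phase times listed in Tables 1 and 2, for instance with the short Python snippet below (only the per-phase rows and the listed totals are compared):

```python
def hms_to_s(hms):
    # Convert an "hh:mm:ss" machining time to seconds
    h, m, s = (int(x) for x in hms.split(":"))
    return 3600 * h + 60 * m + s

# Machining times copied from Table 1 (constant feed) and Table 2 (variable feed)
pairs = {
    "Roughing A":  ("01:32:58", "01:18:25"),
    "Finishing A": ("00:02:03", "00:01:52"),
    "Roughing B":  ("01:59:48", "01:42:36"),
    "Finishing B": ("00:30:53", "00:26:15"),
}
reductions = {k: round(100 * (1 - hms_to_s(v) / hms_to_s(c)), 1)
              for k, (c, v) in pairs.items()}
overall = round(100 * (1 - hms_to_s("03:29:48") / hms_to_s("04:05:42")), 1)
# reductions -> {'Roughing A': 15.7, 'Finishing A': 8.9,
#                'Roughing B': 14.4, 'Finishing B': 15.0}; overall -> 14.6
```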
Table 2 Machining Time and parameter values for variable feed micromachining of the combined demonstration model, based on the proposed methodology. Legend: MT = Machining Time in hh:mm:ss, D = Cutting Tool Diameter in mm, F = Cutting Feed rate in mm/min, S = Spindle speed in 1000 rpm, ap = Cutting depth in mm, ae = Cutting width in mm, ft = feed per tooth in 10⁻³ mm/tooth, Vc = Cutting Velocity in m/min
Combined model MT D F S ap ae ft Vc
A: micro-gear
B: micro-impeller
Roughing A 01:18:25 0.5 71–82 30 0.15 0.2 1.18–1.37 47.1
Finishing A 00:01:52 0.5 49–63 40 0.1 0.15 0.61–0.79 62.8
Roughing B 01:42:36 1 99–125 25 0.2 0.4 1.98–2.50 78.5
Finishing B 00:26:15 1 73–84 35 0.1 0.4 1.04–1.20 109.9
Total 03:29:48
Retract rate: 400 mm/min, Plunge rate: 40 mm/min, cutting tools are uncoated carbide flat end mills,
Part Material: Brass
Non-contact white light interferometry and focus variation microscopy were used for evaluating the surface quality of the machined features. Combined measurements were performed on a Zyngo Newview 5000 and an Alicona Infinite Focus SL. To compare the constant and variable feed workflows, both MRR and surface quality were addressed. Table 4 depicts the obtained values for the two workflows on the demonstration part. The Ra values presented in the table are average values obtained at ten different areas of the part.
From Table 3, Figs. 8 and 9, two basic conclusions can be drawn regarding the proposed methodology: 1. The machining time for the demonstration part micromachining was improved both for each separate machining phase (about 9–15%) and as a total (14.6%). 2. The measured average Ra for both the side and the bottom micro-machined surfaces, when the proposed variable feed methodology was implemented, was improved by about 10%.
The average surface roughness measured with white light interferometry at the bottom of the machined surface of the combined model was slightly better than that measured on the sides. Analysis of the individual micro-parts that
Fig. 8 a Micro-gear and micro-impeller combined model machined out of brass. b Height map and c respective magnified surface image as acquired in the Zyngo white light interferometer for the side-milled blades' surface
Fig. 9 a Height map and b respective optical microscopy image as acquired in Zyngo white light
interferometer for the side-milled gear’s surface
comprise the combined part reveals that, although higher cutting velocity values were used for the micro-impeller, a similar average surface roughness but rougher texture was obtained for the finished surface of the spur micro-gear part, which was machined at lower cutting velocity values.
This level of surface quality is suitable for micro-components addressed to several applications and is achieved at a decent productivity rate. The surface quality of the bottom traces of machining of the curved toolpaths under constant and variable feed was evaluated on the Alicona. The overall surface texture and average roughness value were affected by varying the feed according to curvature in this specific experiment; however, further processing of the results and more extensive experimentation are required in order to verify that the 10% improvement in Ra is significant for this type of micro-parts. For more complicated toolpaths and high-volume production, such a level of machining time reduction significantly improves productivity.
5 Discussion
The application of the proposed methodology for obtaining variable feed G-code
based on the parametric equations of parts can be summarized in the following:
(A). Reduced time from design to manufacturing with equation-derived
designs.
Equation-derived designs can be instantly modified according to user-
defined parameters, by exploiting built-in routines in CAD software or custom
routines developed through CAD API in generic programming languages.
Reducing design times through automation can accelerate and improve exper-
imentation, eliminate design errors and ultimately improve both quality
and productivity for various manufacturing methods and particularly CNC
applications.
(B). Cutting conditions for parametrically designed models and their impact
on surface quality and MRR.
Cutting conditions clearly play the most important role in determining the machining time and surface quality levels of the produced parts. The spur gear and micro-impeller combined model was machined out of brass using two individual finishing toolpaths initially computed in CAM software with constant feed rates. The former was machined at significantly lower cutting velocity values than the latter (63 m/min and 110 m/min, respectively). This resulted in an improvement in MRR, but also in surface smoothness, for the impeller model. Obviously, specific speeds and feeds, and consequently MRR values, correspond to an optimum level of achievable surface quality at a given setup. When machining outside the recommended feed and speed boundaries for each material, machining can become unstable, resulting in low surface quality, excessive tool wear, energy and material waste, and additional computational time.
Variable feed machining according to curvature was successfully imple-
mented computationally for a set of simple parametric parts and ISO NC
code was programmatically generated and modified for feed variation using
VB.NET. A set of curved toolpaths were machined out of brass using curvature-
based feed variation. The same computationally derived curved toolpaths were
machined using constant feed.
The Ra values obtained using focus variation microscopy for the variable and constant feed machined curves were different, with an improvement under the variable feed rate philosophy. Different curve shapes that result in a wider range of curvature values, and an increase in curve thickness, could help better evaluate this method in terms of surface quality. Curvature-based feed variation resulted in a reduction of machining time for a short toolpath, a fact that hints that such methods can have a very significant practical impact on machining time for highly curved models. The variable feed rate method can be successfully implemented for all toolpaths that can be parametrically described.
The overall performance of machining is satisfactory, with an indicative average Ra value not exceeding 1.3 μm in the case of brass micro-milling. Higher spindle speed values, and therefore cutting velocity values, could have resulted in an even greater increase in surface quality. It is noticed that a decrease in cutting tool diameter should be compensated by an increase in spindle speed values, which justifies the need for a high-speed spindle in micro-machining. Machining time can be greatly affected by the cutting tool diameter and machining strategy, along with the significant effect of the machining feed rate. A constant machining feed rate is set according to the material hardness and geometry, typically lower than the maximum possible within a defined tolerance value, i.e., the allowable deviation of the actual toolpath. Varying the feed rate according to curvature can slow down the feed when a highly curved surface is machined and increase it when machining occurs along a linear trajectory. For highly curved components, curvature-based feed variation may result in improved surface quality and smoother machining. Varying the feed within a range determined by the material and cutting tool may further reduce machining
6 Conclusions
References
1. Dhanorker A, Liu X, Özel T (2007) Micromilling process planning and modeling for micromold
manufacturing. In: Proceedings of the ASME 2007 international manufacturing science and
engineering conference. Atlanta, Georgia, USA. October 15–18, pp 759–769. https://doi.org/10.1115/MSEC2007-31070
2. Filiz S, Xie L, Weiss L, Ozdoganlar O (2008) Micromilling of microbarbs for medical implants.
Int J Mach Tools Manuf 48:459–472
1 Introduction
The quality of machined surfaces plays an instrumental role in the functional perfor-
mance of manufactured components. Face milling is one of the processes used
to reference surfaces in order to allow for high accuracy assemblies. Establishing
N. Tapoglou (B)
Department of Industrial Engineering and Management, International Hellenic University, Sindos,
57400 Thessaloniki, Greece
e-mail: ntapoglou@iem.ihu.gr
C. Efstathiou
Department of Production Engineering and Management, Technical University of Crete, Chania,
Greece
A. Tzotzis · P. Kyratsis
Department of Product and Systems Design Engineering, University of Western Macedonia, Kila,
50100 Kozani, Greece
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 159
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_8
160 N. Tapoglou et al.
process models offers a means to analyse, visualize, and optimize the cutting process
in order to achieve the best balance between time, cost, and quality of the manu-
facturing process investigated. Simulation models offer a cost-effective means to
perform such optimization tasks as they avoid lengthy experimental trials. The simu-
lation of manufacturing processes, therefore, has gained a lot of traction in industry.
Researchers have used a series of techniques in order to predict key performance
indicators of manufacturing processes. Analytical, geometrical, experimental and
machine learning approaches have been used in order to simulate the cutting process
and predict metrics of the process such as the surface roughness, cutting forces, tool
wear and residual stresses on the component [1–4].
In the field of face milling simulation researchers have performed machining trials
and developed robust simulation models in order to optimize the process. Kim and
Ehmann [5] were amongst the first to introduce a cutting force model for the face
milling operation. Their model was based on the analytical description of the cutting
edge and all the motions of the process and the results included verified cutting forces.
Felho et al. [6] developed a method for predicting surface roughness values of face
milled components using the analytical description of the cutting edge. Franco et al.
[7] focused on the role of tool runout on the resulting surface quality characteristics.
Their model used analytical equations in order to simulate the effect of runout on
surface roughness. Tapoglou and Antoniadis [8] presented a CAD-based approach
on the simulation of face milling. Their model included the calculation of cutting
forces and the resulting surface quality metrics. Arizmendi and Jimenez [9] used a
combination of analytical equations and a regularly spaced grid of points in order
to model the cutting process and predict the surface characteristics, including multi
pass cutting.
Extending the research presented above, the model presented in this work focuses on the integration of runout errors, both axial and radial, in a CAD-based simulation approach. In an effort to further understand the issues that result from runout, the developed model is used to generate process capability maps in which the acceptable limits for tool runout are presented.
The remainder of the paper is structured as follows: Sect. 2 presents the architecture of the developed simulation platform as well as the structure and results of the algorithms implemented. Section 3 presents the validation of the simulation platform, while Sect. 4 presents the development of the process maps. Section 5 introduces the concluding remarks and future work.
The kinematics of face milling is well established and has been analysed in detail by many researchers. The primary cutting motion is the rotation of the cutting tool around its axis. The primary feed motion is along a vector that lies on the plane normal to the axis of rotation. The axial and radial depths of cut are defined in relation to the axis of rotation of the tool. The kinematics of the process is presented in Fig. 1.
Study of the Topography of Face Milled Surfaces Using CAD-Based … 161
In multi-insert tools, which are commonly used in industry due to the high productivity they offer, the position of the cutting edges with respect to each other can introduce deviations in the final surface generated and contribute to the deterioration of the cutting edge and a decrease in the final surface quality. These deviations can occur in the axial and radial directions. The so-called axial and radial runout of the tool are also presented in Fig. 1.
In order to simulate the cutting process, a novel simulation platform was developed. The simulation platform is based on a CAD modelling kernel in order to leverage the best-in-class accuracy of such platforms. The flowchart of the simulation process is presented in Fig. 2.
The inputs to the simulation platform include geometrical as well as numerical
data. One of the primary inputs is the geometric definition of the cutting edge that
is imported by the user as a 2D profile. The numerical data of the process can be
divided into three categories, namely, cutting tool data, workpiece data and process
parameter data. Once all the input variables have been correctly defined the simulation
algorithm uses the geometrical core to generate the trajectory of each cutting edge
during machining. The surfaces created are used to split the volume of the initial
workpiece into two solids, the chip and the workpiece after that machining step. The
process is repeated until all the teeth have fully machined the surface. After the end of the simulation, the final topography of the workpiece can be further analysed to obtain the surface roughness values and the profile on section planes along or perpendicular to the feed direction. Moreover, the chip geometry can be further analysed to extract the cutting forces expected during machining.
Figure 3 presents in more detail the tooth trajectory calculation algorithm. As can
be observed the first step in the process is the location of the axis of rotation of the tool
in the 3D space. As the tool rotates the centre of the tool also advances with a value
proportional to the feedrate used. The axis of the tool is used to position the cutting
edge profile in 3D space with the tool rake and side rake angles that are defined by
the user. At this stage, the axial and radial runout values for each individual tooth
are applied to the tooth profile by adjusting its position. As the tool rotates a series
of profiles in 3D space are created and are used to create a 3D surface that includes
the full kinematic chain of the process as well as the tool axial and radial runouts.
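The trajectory generation described above — tool rotation combined with feed advance and per-tooth radial runout — can be illustrated with a minimal 2D sketch. This is a simplified planar version with hypothetical parameter values; the platform itself works on full 3D CAD surfaces and also handles axial runout and rake angles.

```python
import math

def tooth_trajectory(radius_mm, n_teeth, rpm, feed_mm_min, radial_runout_mm,
                     tooth_index, n_steps=360):
    """Sample the path of one insert tip in the XY plane (tool axis along Z).

    The tool centre advances along X at the programmed feed while the
    insert rotates about the axis; the per-tooth radial runout is added
    to the nominal tool radius.
    """
    omega = 2 * math.pi * rpm / 60.0             # spindle speed, rad/s
    feed_mm_s = feed_mm_min / 60.0               # table feed, mm/s
    phase = 2 * math.pi * tooth_index / n_teeth  # angular offset of this tooth
    r = radius_mm + radial_runout_mm             # effective cutting radius
    points = []
    for i in range(n_steps + 1):
        t = i * (60.0 / rpm) / n_steps           # one full revolution
        theta = omega * t + phase
        x = feed_mm_s * t + r * math.cos(theta)  # trochoidal path in X
        y = r * math.sin(theta)
        points.append((x, y))
    return points

# Two inserts of a hypothetical 50 mm face mill; tooth 1 carries 0.02 mm runout.
path0 = tooth_trajectory(25.0, 2, 3000, 600.0, 0.00, 0)
path1 = tooth_trajectory(25.0, 2, 3000, 600.0, 0.02, 1)
```

Sweeping such per-tooth curves over successive revolutions produces the surfaces that the platform uses to split the workpiece volume.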
3 Validation
After the development of the algorithm, several cases were simulated in order to
test the robustness of the developed approach. The first step of the validation process
included trials in which only one insert was used. This was selected in order to validate
the results of the simulation process before runout errors are integrated into the
process. The simulation results were compared with experimental trials performed
by Franco et al. [7]. The results of this validation step are presented in Fig. 4. As can be observed, there is good agreement between the machined and simulated surfaces, with the lowest error observed at lower cutting feeds. This deviation has been reported by other researchers and is attributed to stochastic phenomena during the machining process.
The second step of the validation included cases where multiple inserts were employed in machining a surface. The simulation results were compared with experimental results from Arizmendi and Jimenez [9]. The cutting parameters as well as the axial and radial runout values were identical to those of the machining trials performed. The validation results are presented in Fig. 5. As can be observed, the simulation results closely follow the experimental results. The effect of the runout can be observed in all three cases investigated.
4 Parameter Investigation
Fig. 6 Effect of radial (top) and axial (bottom) runout on surface topography
deteriorates at higher, more productive feedrates. At the highest feedrate, the surface roughness increases sevenfold when the runout of the tool is increased by 12 µm.
5 Conclusions
Face milling is one of the most commonly employed methods for generating reference surfaces on large components. Its use in the automotive, aerospace and general manufacturing sectors is of key importance to the successful manufacture of engineering components. In an effort to provide an increased understanding of the process, a simulation platform was developed that is able to model the movement of the tool and incorporate the axial and radial runout of the tool. The simulation model was validated against experimental data, both for single-insert trials, where runout is not applicable, and for multi-insert trials incorporating the tool runout values. The simulation model was used to develop a series of process maps that provide an understanding of the effect of runout on the final surface characteristics.
References
Abstract The chapter presents how digital tools are used in the process of design and manufacturing of an engine block for a lightweight four-stroke engine powering a competition kart. The combustion engine still has a future in the automotive industry, because greener combustion solutions based on Liquefied Natural Gas (LNG) and, more recently, hydrogen are growing. Different digital environments, including CATIA™, provide a large number of design tools for obtaining the best constructive solution for a product and later manufacturing it. Design for disassembly is implemented starting at the concept level of the engine block. For the collision check, the CATIA™ DMU Space Analysis tool is applied to the 3D engine block model. All parts of the engine assembly are moved into their functional positions using the digital tools of the Assembly Design module. Another possible issue, wrong access for assembly tools, is validated using 3D models of the tools' safety spaces. To check the most important designed parts in the real world, the 3D files are generated and the physical parts are manufactured using two additive manufacturing systems. The G-code for multi-axis machining of the engine block is generated with the CATIA™ Machining module, and the CNC files are also simulated for validation in the virtual environment.
1 Introduction
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 167
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_9
168 S.-A. Haba and G. Oancea
In today's industrial environment, terms like design for manufacturability, design for manufacturing and assembly, design for additive manufacturing, design for disassembly and digital manufacturing [2, 3] are in common use. The design and manufacturing of automotive parts have been directly changed by the era of digital tools. These digital tools are also implemented in many other domains, including the motorsport industry.
This book chapter shows the reader how digital tools are used in the process of designing and manufacturing an engine block for a lightweight four-stroke engine. It is a multi-purpose lightweight four-stroke engine; the engine design is aligned with green-environment requirements, and the digital tools ensure that the construction parameters are updated in real time.
Gradually, the main stages of the work are described, together with the concepts and digital tools used: design for manufacturability, design for assembly and disassembly, virtual assembling of the engine block, digital collision check tools, the tool accessibility stage, virtual engineering for the engine block and the additive manufacturing systems used as a bridge between the digital environment and real life.
the outer chamfer (2) is used to ensure simple assembly of the cylinder casing with the corresponding engine block hole.
In the bottom area of the cylinder casing, Fig. 2b, there are the inner assembling chamfer (1), which supports the insertion procedure during engine maintenance, and the outer chamfer (2), used for assembling the base of the cylinder casing into the corresponding hole in the crankcase.
Similarly, the engine block is designed with self-centring chamfers (Fig. 3): an inner chamfer (1), an outer chamfer (2) and guide holes with chamfers (3) and (4). The semi-finished engine block is manufactured through a pressure die-casting process from lightweight alloys.
In order to assure an optimal manufacturing process for this engine block, several radii are provided (Fig. 4). Radii (1) and (4) are designed to assure the uniform flow of the molten material in the mould tool, and radii (2) and (3) ease the extraction of the part from the mould tool.
The engine block and cylinder casing cutting operations are grouped into two main groups; in this way the fixing errors are reduced.
All the standard added geometry for the engine block (assembly base for the exhaust manifold, intake-manifold surfaces for the seal strip and so on) is included in external parts; in this way the auxiliary tooling needed in the manufacturing process is reduced.
The main design objective for all modern systems is to embed maximum functionality in minimum space (minimum box for maximum features). This concept is adopted in order to reduce the dimensions and weight of the devices, but it leads to the following technical issues:
. disassembly procedures become too complicated;
. the time allocated for disassembly increases;
. the price of dedicated tools increases;
Study on Design and Manufacturing of an Engine Block Using Digital … 171
For total engine maintenance, the air intake and fuel admission area (3) is dismounted through a translation movement along the OX axis; thanks to the digital tools it is possible to check that the real movement occurs without any collision (Fig. 6).
In order to check for possible clashes between engine parts, the CATIA™ Clash Detection digital tool is used. Because there are many parts in the engine structure, multiple selections are made from the specification tree. The interferences are coloured in red [6].
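As a rough illustration of what an interference check does, the sketch below tests overlap of axis-aligned bounding boxes — a simplified broad-phase stand-in for the exact surface-based check that CATIA™ Clash Detection performs. All part names and dimensions are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of a part: corners (xmin, ymin, zmin)..(xmax, ymax, zmax)."""
    name: str
    lo: tuple
    hi: tuple

def boxes_clash(a: Box, b: Box) -> bool:
    """True when the two bounding boxes overlap on all three axes."""
    return all(a.lo[i] < b.hi[i] and b.lo[i] < a.hi[i] for i in range(3))

# Engine block vs. a manifold, before and after a translation along OX.
block    = Box("engine block", (0, 0, 0), (100, 80, 120))
manifold = Box("manifold", (90, 10, 30), (140, 40, 60))        # still overlapping
assert boxes_clash(block, manifold)
manifold_moved = Box("manifold", (110, 10, 30), (160, 40, 60))  # moved clear along OX
assert not boxes_clash(block, manifold_moved)
```

A production tool refines such broad-phase hits with exact surface intersection tests, which is how clash, contact and clearance cases are distinguished.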
The exhaust manifold is in the opposite position (Fig. 7). To increase the dismounting possibility, this exhaust manifold is designed with detour geometry.
In order to assure an easy and cheap dismounting process for the engine block, only three disassembly directions, along the OX, OY and OZ axes, are implemented; in any other case, the disassembly direction increases the effort of the human operator in service.
To assure the extraction of the engine block from the engine assembly, the safety space for dismounting the nuts from the cylinder head is established in the design stage; the dismounting direction is along the OZ axis (Fig. 9).
From the design stage, according to the principle of disassembly, the following facilities need to be implemented:
. safety space between the engine wiring and the dismounting tools;
. safety space for extracting the engine ignition spark plug;
. extraction pockets for tools;
. reduced residual engine oil losses when the cylinder head assembly is moved along the OZ direction (Fig. 10);
. elimination of the possibility of screws and nuts escaping from the tools and falling into inaccessible places of the engine;
. simple dismounting trajectories that are easy for the human operators to follow;
. occupational safety and health for the human operators.
Fig. 9 Dismounting direction for nuts from cylinder head
Fig. 10 Dismounting direction for cylinder head
Because the piston bolt is in the bottom position of the engine assembly (Fig. 11), from the design stage it is necessary to position the external bolt surface above the cylinder base level. The dismounting direction is established as a translation along the OY axis, so the bolt cannot drop under the piston area.
The green parts (1) presented in Fig. 12 are designed with a groove to position the extraction tools without the possibility of accidental movement. An important aspect is that accidental movement of the dismounting tools can injure the human operator.
Figure 13 shows the areas in the left and right positions (rounded in red) for two dismounting tools. The difference between the diameters of the cylinder head and the engine block creates two steps. These two steps are used for positioning the extraction tools, and for the disassembly step it is only necessary to rotate the tools through a small angle.
Fig. 11 Dismounting direction for piston bolt
In order to assure a correct assembling process in the factory and a suitable assembling time, similarly to plastic parts, all the parts of the engine must be designed using the rules of Design for Assembly (DfA) [7]:
. decrease the number of parts in an assembly;
. design parts for optimum automatic or manual handling;
. decrease the assembling time spent.
Virtual assembling of the engine block is done in the CATIA™ digital environment, using the 3D models of the engine parts at 1:1 scale. The assembling is governed by a complete set of geometric constraints imposed by the functional roles [8].
To realize the digital assembling of complex systems, digital tools are the best solution, because it is possible to move all the components into their functional positions using real mounting trajectories. In this case, CATIA™ provides the Smart Move digital tool, included in the Assembly Design module.
In the development process of a complex project, the assembly stages are grouped in a document generally called the assembly concept. Finally, using this assembly concept, the manufacturing process design team creates the assembly frameworks.
For this engine block assembly created in the digital environment, the next steps
to be followed are briefly described below:
. pressing the guides for engine block base;
. inserting the piston assembly;
. pressing the piston bolt;
. screwing up the four mounting long studs;
Fig. 14 Direction for pressing the guides for engine block base
long studs (Fig. 17). This action is done at the step of screwing up the four mounting long studs.
In order to move several parts together, the following possibilities exist:
. creating selection sets;
. creating groups;
. selecting all the parts together;
. moving the entire product together with its environment.
The next step of the digital assembling is positioning the engine block (Fig. 18); in this case the engine block product is moved down (the product consists of the engine block and the cylinder casing).
The digital moving tool allows parts without a common area or contact between them to be selected and moved down along the necessary mounting trajectory (Fig. 19). For example, in the CATIA™ environment, the property named check connectivity is left unchecked by the user.
The last step of the digital assembling is positioning the cylinder head assembly (Fig. 20); for moving it down along the OZ axis, the digital tool uses the enlarged flat surface marked by the axis system.
The engine assembly studied in the digital tools context in this chapter is designed to be used on a lightweight competition kart. Figure 21 shows the engine assembly positioned in the kart chassis in the virtual assembly.
This competition kart is designed in accordance with green-environment requirements: a high-revving four-stroke engine with an exhaust filtering system is used instead of a two-stroke engine, to decrease the toxicity of the exhaust gases.
The kart chassis components are numbered as follows:
. 1—rear bumper (from body kit);
In this case of digital collision check, in the CATIA™ digital environment, the user applies the analysis in kinematic mode, with the collision detection option active. The system generates, in real time, the swept spaces, using the 3D models of the previously maintained parts combined with the functional joints. This swept space is checked against the static parts in the immediate vicinity.
For example, another area to study for collisions or insufficient safety space for movement is the joint between the piston assembly and the conrod (Fig. 23, the area of interest is rounded). The possibility of a clash exists between the small end of the conrod and the piston barrel.
This is an interesting analysis because the conrod describes a rotational movement while the piston describes a linear movement, and the digital collision check is able to perform the analysis between components moving along different trajectories.
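The combined rotation/translation described here is the classical slider-crank mechanism, whose kinematics can be written in closed form and used to drive such a check. A small sketch with hypothetical crank and conrod dimensions (not values from the chapter):

```python
import math

def piston_position(crank_angle_rad, crank_radius, conrod_length):
    """Distance from the crank centre to the piston pin along the cylinder axis
    for a slider-crank mechanism: x = r*cos(theta) + sqrt(l^2 - r^2*sin^2(theta))."""
    r, l = crank_radius, conrod_length
    return r * math.cos(crank_angle_rad) + math.sqrt(
        l**2 - (r * math.sin(crank_angle_rad))**2)

def conrod_angle(crank_angle_rad, crank_radius, conrod_length):
    """Swing angle of the conrod relative to the cylinder axis."""
    return math.asin(crank_radius * math.sin(crank_angle_rad) / conrod_length)

r, l = 22.5, 90.0                       # hypothetical crank radius / conrod length, mm
tdc = piston_position(0.0, r, l)        # top dead centre: r + l
bdc = piston_position(math.pi, r, l)    # bottom dead centre: l - r
stroke = tdc - bdc                      # equals 2*r
```

Sampling both functions over a full revolution gives the linear piston trajectory and the conrod swing simultaneously, which is exactly the pair of motions the clash analysis has to cover.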
The digital tools for collision and wrong-distance analysis cannot by themselves distinguish between the collision classes. From the assembly point of view, there are three possibilities:
. collisions corresponding to interference fits—technological collisions;
This report shows a long clash between two static components (presented in Fig. 24); after studying the situation, it is established that this clash is a correct interference fit, because there is a shrink fit between the engine casing and its engine block. Finally, these collision analysis reports are attached to the digital project in the project Product Lifecycle Management (PLM) database.
The report of the clash analysis digital tool is not enough on its own to identify the situation in the 3D digital prototype, so the system attaches a detailed image to each analysed entity. For example, on the clash line between the engine block and the cylinder casing, the correct clash for the interference fit is shown with a direction mark and the common distance, and the common surfaces are highlighted (Fig. 25).
The complete rules of the digital collision check are shown in Fig. 26. All categories of situations are presented with their relevant status (a relevant item means the item was verified by the user):
. clearance (3 mm imposed safety space for analyses);
. clash (for interference fits);
Fig. 25 Clash line between the engine block and cylinder casing
The collision check digital tools also play the role of validating the assembly concept, in which parts with contact and interference fits are specified. Figure 27 details the items from the check clash report:
. clash—interference fit: between the engine block (2) and its cylinder casing (3);
. contact—in the top area:
between the guides for the cylinder head (rounded highlight) and the engine block (2);
between the engine block (2) and the top seal (1);
. contact—in the bottom area: between the guides for the crankshaft case (rounded highlight) and the engine block (2).
6 Tool Accessibility
Tool accessibility is the bridge between Design for Manufacturing/Design for Manufacturing and Assembly (DfM/DfMA) and Design for Disassembly (DfD). To assure this procedure, the engineers have two possibilities:
. analogue procedures—using real tools (cumbersome and time-consuming);
. digital procedures—using the digital environment (it works in real time, with low costs and high-performance results).
The tool accessibility check procedure is an important part of the project database. To perform this extremely important stage of work on the engine digital prototype, the following three main items are needed:
. digitized 3D models of all the tools used (the tool set for the factory, and the tool set for servicing);
. a digital prototype of the studied system (in this case, the engine system);
. a digital environment (with 3D moving capabilities).
All the digitized 3D models of the tools used must be accompanied by the spaces necessary for work when the tools are in position, the so-called safety spaces. For this stage, the designers must collaborate with the tool makers to obtain information about the dimensions and safety spaces associated with each tool.
From the safety space point of view, industrial rotary tools require less space than standard tools. For example, a solid wrench needs more safety space to work than an electric screwdriver.
In the tool accessibility check procedures, it is extremely important to be able to simulate all the scenarios:
. the industrial assembling process by a robotic arm;
. the industrial assembling process by a human operator;
. service disassembling by a human operator (in service and repair shops there are still manual workers nowadays, and the disassembling/assembling time is longer than with the robotic systems used in factories).
Figure 28 shows the safety space for the tools (1). The tool models are marked in the 3D environment as cones, because these cones include the entire working area of the tool, and the assembling tools have the four mounting long studs and the engine piston in their vicinity.
Regarding this figure, the following remark is made about the differences between the assembling procedure and the disassembling procedure:
. industrial assembling process (by robotic arm or by human operators)—pressing the engine block guides into their corresponding holes:
assembling tool type: pneumatic pressing tool;
electromechanical device;
. service disassembling process by human operators—gripping and pulling the guides out of their corresponding holes:
disassembling tool type: manual extraction grip.
Figure 29 shows the safety space for the tools (1). The tool models are marked in the 3D environment as cones; here the level of access is higher than in the figure above, and the trajectory of the assembling tools is easier.
Figure 30 shows another task for digital simulation, the assembling of the complex system into its mechanical environment, with the following items to check:
. tool accessibility in the mechanical environment (in this case, a competition kart chassis);
. mounting trajectory (in this case a translation along the OY axis; changing the trajectory to the OZ axis would involve mounting the kart chassis on a rotating table);
. human hand safety space (in order to assure healthy and safe working conditions);
[11]. The engine manufacturing trends are aligned with the virtual engineering (VE) strategy in order to reach high-tech goals:
. create environment-friendly products;
. decrease the time spent on manufacturing;
. reduce the energy consumption;
. cut engineering costs;
. reduce technological failures.
In order to reach the virtual engineering goals, the machining processes are the big challenge in today's industrial environment governed by Industry 4.0. The machining process is one of the most important components of virtual engineering, and it has to be used for complete and correct 3D prototypes of complex systems.
A machining process is described by complex and nonlinear relationships between input and output parameters. For example, in the case of a cutting process, the input variables include the cutting tool material, workpiece material, speed, feed rate, depth of cut, etc., and the output parameters are tool life, tool wear, tool wear rate, cutting forces, material removal rate, etc. [12].
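Some of these input/output relationships can be written in closed form. For instance, the standard milling formulas link feed per tooth, number of teeth and spindle speed to the table feed, and the table feed with the depth and width of cut to the material removal rate (the numeric values below are hypothetical):

```python
def table_feed(f_z_mm, n_teeth, rpm):
    """Table feed v_f = f_z * z * n [mm/min] for a milling cutter."""
    return f_z_mm * n_teeth * rpm

def material_removal_rate(a_p_mm, a_e_mm, feed_mm_min):
    """MRR = depth of cut a_p x width of cut a_e x table feed v_f [mm^3/min]."""
    return a_p_mm * a_e_mm * feed_mm_min

# Hypothetical cutter: 4 teeth, 0.1 mm/tooth, 3000 rpm, 2 mm deep, 40 mm wide cut.
vf = table_feed(0.1, 4, 3000)                 # table feed in mm/min
mrr = material_removal_rate(2.0, 40.0, vf)    # removal rate in mm^3/min
```

Outputs such as tool wear or cutting forces, by contrast, have no such simple closed form, which is why they are the targets of the simulation models discussed here.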
Virtual manufacturing (VM) technology can be considered one of the most important parts of manufacturing systems [12, 13]. The main components of machining error come from clamping error and machine error; for this reason, studies on simulation models of the machining process based on virtual modelling technologies have been carried out [13].
The digital model used for the engineering and manufacturing simulations of the engine block relies on three main entities:
. the digital prototype of the engine block;
. the digital machining environment (tools and trajectories);
. the clamping devices.
Figure 33 shows the 3D prototype of the engine block ready for the manufacturing simulations. The prototype is characterized by the following items:
. part body—3D model of the engine block;
. material—aluminium alloy;
. design table—for parametric changes;
. construction—geometrical set for surface construction.
In order to assure a proper cutting cost for the manufacturing process in the case of small-batch and one-off production, the main rule is to group the machining phases into a minimum number of operations, to reduce the number of machine tools and clamping devices. For this reason, the engine block is machined in two machining operations:
. engine block in its functional position: machining the top and inner areas;
. engine block rotated by 180 degrees: machining the bottom surfaces and long holes.
The CATIA™ digital environment allows the manufacturing simulation using a digital assembly composed of two major parts (Fig. 34):
. the semi-finished 3D model of the engine block (1);
. the 3D model of the modular clamping device (2).
The time to change the production type is also short, because the modular clamping devices assure a quick reconfiguration. The modular devices are designed in a contextual environment; the digital prototype of the part is the core of the input data, and the minimum and maximum overall dimensions of the parts family are also considered [14].
The digital tools dedicated to surface machining offer many functions for machining complex bodies by milling or turning.
Figure 35 shows the assembly composed of the engine block semi-finished part and the clamping devices. The collision check sequences are shown, using the surface machining digital tool from CATIA™ V5. This is an example of a virtual simulation for combined clamping systems (a base one and a second one on the top area of the semi-finished part in the working position). The tool clamping dimension is also simulated against collisions with the environment. The rounded highlighted area shows the upper modular clamping segment, together with the tool approach trajectories and working trajectories, which are free of collisions.
To perform the collision check task, the user must set the resource list:
. machine type (including the postprocessor emulator);
. home point of the system;
. tools list (generally imported from the software tool databases).
The digital environment provides the user, on demand, with the intermediate positions of the milling tool, giving visual information about the safety level with respect to the clamping device (Fig. 36).
The approach trajectory and the direction along the OZ component are shown in Fig. 37. At this level of simulation, the main task is to determine the cutting path that reduces the machining shock, because this protects the guideways, reduces machine wear and increases tool life.
Another situation for machining simulation is shown in Fig. 38, where the assembly is composed of the following components:
. semi-finished part (1), general settings opacity 50%;
. final part (2), general settings opacity 100%;
. clamping device (3), general settings opacity 100%.
The area to be machined is free of clamping devices; the central engine block hole is used for an elastic self-centring device.
The machining simulation in the digital environment (Fig. 39) provides the user with a detailed representation of the tool for the current step (1) and of the surfaces that are not machined (2).
Figure 40 shows an intermediate stage after the milling process; the long drill (1) is shown in the working position, and the first two fully machined holes are rounded. It is important to note that the user can directly check the cutting depth with a digital measurement tool. Similarly, the user can check, by measurement on the virtual model, the final diameter of the pre-drilled holes.
As mentioned above, the first contact between the designers and the result of their digital work comes from having real physical parts, obtained by using additive manufacturing systems. The additive manufacturing systems create the bridge between the digital world and the real world.
Parts of the project under construction manufactured using additive processes allow the designer to perform:
. general shape analysis;
. surface design comparison;
. assembling tests;
. tests of the required embedding spaces;
. tool access checks (with rapid prototypes of the tools used);
. human operator positioning (for real tests of the mounting trajectories).
It is well known that the main standard additive manufacturing processes are the following (grouped by working principle) [15, 16]:
. sheet lamination: sheets of material are bonded to form an object; the processes are ultrasonic additive manufacturing (UAM) and laminated object manufacturing (LOM);
. binder jetting: a liquid bonding agent acts as an adhesive between layers to join
powder material;
. material extrusion: the material is distributed through a nozzle where it is heated and then deposited layer-by-layer (Fused Deposition Modelling—FDM, or Fused Filament Fabrication—FFF);
. material jetting: droplets of build material (photopolymer and wax) are selectively deposited in a manner similar to a two-dimensional inkjet printer;
. directed energy deposition: different energy sources like laser, electron beam or
plasma arc are used to melt the materials;
. vat photopolymerization—liquid photopolymer in a vat is selectively cured by
UV light polymerization;
. powder bed fusion: thermal energy selectively fuses regions of a powder bed;
the processes are: Direct metal laser sintering (DMLS), Electron beam melting
(EBM), Selective heat sintering (SHS), Selective laser melting (SLM) and
Selective laser sintering (SLS).
Nowadays, fused deposition modelling (FDM) is used at large scale in different
industries, including automotive, with more and more applications such as [17]
testing models, lightweight tools and final functional components.
The powder bed fusion processes produce complex 3D shapes from powder material, including metal powder, in a layer-by-layer manner. These processes have the potential to be more flexible, to obtain a wider range of shapes with different working parameters, and to use more challenging materials [18, 19].
All additive manufacturing systems work according to the following principles: the parts are built with a solid external surface and a low-density honeycomb core, in order to keep the consumption of allocated resources low; in this case the printing process time is strongly reduced.
Figure 44 presents the cylinder casing obtained by using the FDM process and ABS filament. Another task for the designer was the layer-closing alignment; in the presented case, the linear deformation along the OZ axis was eliminated using the digital option of closing each layer along a helical path.
In order to obtain real sections through the engine block and cylinder casing, several parts were made in different configurations (Fig. 45): the engine block sectioned in the OZY plane (1), the entire cylinder block (2) used for assembly testing, and the cylinder casing (3). The real sectioned assembly of the engine block and its cylinder casing is shown in Fig. 46.
Fig. 45 Engine block and cylinder casing—parts were made in different configurations
Fig. 47 Final assembly—manufacturing by FDM process [2]
Figure 47 presents the final assembly of the engine block and its cylinder casing; it can be said that this is the first physical result of the development of the considered digital project.
Regarding the assembly of parts made by the FDM process, attention has to be paid when preparing the 3D models dedicated to additive manufacturing: these parts have to be designed with geometrical allowances equal to or bigger than the FDM system's processing tolerances (otherwise, all assembly allowances check out in 3D, but the printed models cannot be assembled).
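This allowance rule can be expressed as a simple design check. The factor of two below (one tolerance band per mating surface) and the numeric values are assumptions for illustration, not figures given in the chapter:

```python
def printable_clearance(nominal_clearance_mm, printer_tolerance_mm):
    """A mating pair of FDM parts can be assembled only if the designed
    clearance covers the printer's dimensional tolerance on both mating
    surfaces (hence the factor of two, assumed here for illustration)."""
    return nominal_clearance_mm >= 2 * printer_tolerance_mm

# Hypothetical values: 0.3 mm designed gap, +/-0.1 mm per printed surface.
assert printable_clearance(0.3, 0.1)        # gap survives both tolerance bands
assert not printable_clearance(0.15, 0.1)   # fits in CAD, jams when printed
```

Running such a check over every mating pair in the 3D model catches exactly the failure mode described above: assemblies that pass in CAD but cannot be assembled once printed.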
9 Conclusions
This chapter presents the application of digital technologies in the design and manufacture of an engine block for a four-stroke competition kart engine. In parallel with the large-scale use of digitization in design and manufacturing, other activities are also presented: industrial frameworks, tool access and possible issues, the mounting concept, assembling trajectories, safety and health at work, and some differences between robotic assembly systems and human workers. All of these are implemented using specific tools from the CATIA™ environment and the concepts of design for manufacturability, design for assembly and disassembly, virtual assembling of the engine block, digital collision check tools, tool accessibility, virtual engineering for the engine block and additive manufacturing.
The first section presented how the main rules of design for manufacturability and assembly are applied in the engine design process, and the second section showed the use of the design for disassembly tools to obtain a product that meets the objective of decreasing service technical requests.
The third section presents the application of the digital environment in order to assure the virtual assembling of the engine block. Virtual assembling is done in the CATIA™ digital environment, using the 3D models of the engine parts. The digital features for moving all the components into their functioning positions and the simulation of all real mounting trajectories are also presented.
Using the CATIA™ kinematic mode in the digital environment, the fourth section showed the complete procedure for checking the digital prototype and determining the collisions and safety spaces for the engine assembly.
The fifth section allows the readers to see the check procedure associated with tool accessibility for complex 3D models (in this case, the engine assembled on the chassis).
The sixth section presented the virtual engineering used for the engine block, focused on the manufacturing simulation. Three main entities are used for these activities: the digital prototype of the engine block, the digital machining environment and the clamping devices.
The last section, section seven, presents how the additive processes are used in this case to obtain real prototypes of the engine block and its associated cylinder casing, at different scales, to check the designed shapes and the assembling details.
References
1. Cevikcan E, Ustundag A (2018) Industry 4.0: managing the digital transformation. Springer series in advanced manufacturing. Springer. ISBN 978-3-319-57870-5, 978-3-319-57869-9
2. Haba SA (2013) Digital manufacturing of single-cylinder engine block. PhD thesis, Transylvania University of Brasov, Romania
3. Anderson D (2014) Design for manufacturability: how to use concurrent engineering to rapidly develop low-cost, high-quality products for lean production. Taylor & Francis Group. https://doi.org/10.1201/b16501. ISBN 9780429255588
4. Anderson D (2020) Design for manufacturability: how to use concurrent engineering to rapidly develop low-cost, high-quality products for lean production, 2nd edn. Taylor & Francis Group. https://doi.org/10.4324/9780429285981. eBook ISBN 9780429285981
5. Mital A, Desai A, Subramanian A, Mital A (2014) Design for assembly and disassembly. In: Product development, pp 159–202. Elsevier. https://doi.org/10.1016/B978-0-12-799945-6.00007-7
6. Ghionea IG, Tarba IC, Cukovic S (2021) CATIA V5 parametric design and programming applications. Printech Publishing, Bucharest
7. Kent R (2016) Design quality management. In: Quality management in plastics processing, pp 227–262. Elsevier. https://doi.org/10.1016/B978-0-08-102082-1.50008-3
8. Haba SA, Oancea G (2012) Virtual assembling of an engine block using CATIA environment. Acad J Manuf Eng 10(3):74–79
9. Haba SA, Oancea G (2015) Digital manufacturing of air-cooled single-cylinder engine block. Int J Adv Manuf Technol 80(5):747–759. https://doi.org/10.1007/s00170-015-7038-x
10. *** (2021) Solidworks—collision detection. https://solidworks.com/2021/English/SWConnected
11. Assad F, Konstantinov S, Rushforth EJ, Vera D, Harrison R (2021) Virtual engineering in the support of sustainable assembly systems. Proc CIRP 97:367–372. https://doi.org/10.1016/j.procir.2020.05.252
12. Adane TF, Nicolescu M (2018) Towards a generic framework for the performance evaluation of manufacturing strategy: an innovative approach. J Manuf Mater Process 2(2):23. https://doi.org/10.3390/jmmp2020023
Study on Design and Manufacturing of an Engine Block Using Digital … 203
13. Liu Y, Peng F, Liu H (2017) Final machining of large-scale engine block with modularized fixture and virtual manufacturing technologies. Hindawi J Eng 3648954. https://doi.org/10.1155/2017/3648954
14. Panc N (2021) Devices—conception and design. U.T. Press Publishing, Romania. ISBN 978-606-737-515-2
15. *** (2015) Standard ISO/ASTM 52900—additive manufacturing—general principles—terminology
16. *** (2022) https://www.lboro.ac.uk/research/amrg/about/the7categoriesofadditivemanufacturing/. Accessed 9 Jul 2022
17. Gao X, Yu N, Li J (2020) Influence of printing parameters and filament quality on structure and properties of polymer composite components used in the fields of automotive. Woodhead Publ Ser Compos Sci Eng 2:303–330. https://doi.org/10.1016/B978-0-12-819535-2.00010-7
18. Buican GR, Oancea G, Lancea C, Pop MA (2015) Influence of layer thickness on internal structure of parts manufactured from 316-L steel using SLM technology. Appl Mech Mater 809:369–374. https://doi.org/10.4028/www.scientific.net/AMM.809-810.369
19. Nandy J, Sarangi H, Sahoo S (2019) A review on direct metal laser sintering: process features and microstructure modelling. Lasers Manuf Mater Process 6:280–316. https://doi.org/10.1007/s40516-019-00094
Automatization of CAD Model
Development of Slewing Bearing Using
Solid Edge™
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 205
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_10
206 R. Gella-Marín et al.
1 Introduction
features, and other elements in Standard for Exchange of Product (STEP) files. Li et al. [15] propose a method to exchange design data between shipyards and equipment suppliers who use different CAD systems. Reddy and Rangadu [16] present a parametric CAD modelling system for spur gear design using the AGMA standards, which provides a faster and simpler gear design system compared to conventional design systems.
This chapter proposes a methodology for the parametrization of the slewing bearing CAD design. Using the 2D dimensions set in a fictitious commercial brochure for a wide range of models, a 2D sketch is created. Next, the sketch dimensions are linked to a spreadsheet. Then, the 2D sketch is transformed into a 3D model. When the dimensions in the spreadsheet are changed, using the brochure dimensions to create another slewing bearing, the CAD model is updated. Using this methodology, the designer can create CAD models focusing on the dimensions and not on manually updating the model, providing a faster answer to the customer and minimizing errors in the CAD development.
2 Methodology
Parametric CAD design involves creating a model in which the model dimensions are linked to constraints. If the value of a constrained dimension is changed, the model auto-updates and it is not necessary to change the design manually. This makes it possible to automate repetitive tasks, which usually occur in families of products.
As we can see, parametrization is a very useful tool when the designer must perform slight variations on a core design. This supports design teams that must modify models on a regular basis. It also makes it easier to see how the model will behave when something is changed.
There are several benefits to developing a parametric model. The first one is that it provides an easy definition and automatic creation of families of products. It also allows for integration with manufacturing processes, resulting in a decrease in production time.
The main drawback is that the development of the 3D CAD parametric model takes longer, as the designer must link the constraints to the models. The designer must also make sure that the constraints do not interfere with the model when they are changed, which would create an incorrect model.
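The parametric principle described above can be sketched in a few lines of code. The following Python snippet is a minimal illustration only, not part of the chapter's Solid Edge™ workflow; the class name, field names and dimension values are invented for the example. Changing one driving dimension and regenerating updates all derived geometry, mimicking a constrained CAD model:

```python
from dataclasses import dataclass

@dataclass
class SlewingBearingModel:
    """Driving dimensions (mm), as taken from one brochure row."""
    outer_diameter: float
    inner_diameter: float
    height: float

    def regenerate(self):
        """Recompute the derived geometry from the driving dimensions,
        mimicking the auto-update of a constrained CAD model."""
        ring_width = (self.outer_diameter - self.inner_diameter) / 2
        if ring_width <= 0:
            raise ValueError("outer diameter must exceed inner diameter")
        return {
            "outer_radius": self.outer_diameter / 2,
            "inner_radius": self.inner_diameter / 2,
            "ring_width": ring_width,
            "height": self.height,
        }

# Changing one driving dimension updates the whole model on regeneration.
model = SlewingBearingModel(outer_diameter=500.0, inner_diameter=400.0, height=60.0)
print(model.regenerate()["ring_width"])   # 50.0
model.outer_diameter = 520.0              # the customer picks another brochure entry
print(model.regenerate()["ring_width"])   # 60.0
```

The same pattern scales to any family of products: the designer maintains one template and a table of driving dimensions instead of one model per variant.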
In this chapter, we will present the methodology to develop a parametric model of a slewing bearing for a commercial company. We will show how to create a 2D sketch with the slewing bearing profile. We will link the dimensions to constraints. Then, we will revolve the 2D sketch to create the 3D model. Next, we will create the bore holes in the outer and inner rings, and finally, we will link them to constraints. This will generate a parametric 3D model. To show its benefits, we will demonstrate how modifying the constraints, using the bearing dimensions taken from the brochure of a commercial company, creates a 3D model equal to the one in the brochure.
3 Development of Application
bearing from the brochure. "Solid Edge™ 2D sketch input variables" is the table that we will link to the CAD model.
Using the VLOOKUP function in Excel™, we will select each one of the variables from the "Brochure" spreadsheet and copy it to the "Brochure import" spreadsheet (Fig. 3). Since the dimensions in the table are diameters, we will convert them to radii, because the 2D Solid Edge™ sketch works with radii. These values will be copied to the "Solid Edge™ 2D sketch input variables" spreadsheet and will be linked to the Solid Edge™ model.
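The lookup-and-convert step can also be expressed outside Excel™. The sketch below is an illustrative Python stand-in for the VLOOKUP step; the bearing names and dimension values are invented. It fetches one brochure row and halves each diameter to obtain the radii the 2D sketch expects:

```python
# Fictitious brochure table: model name -> diameters in mm.
brochure = {
    "SB-500": {"outer_diameter": 500.0, "inner_diameter": 400.0,
               "bolt_circle_diameter": 450.0},
    "SB-650": {"outer_diameter": 650.0, "inner_diameter": 520.0,
               "bolt_circle_diameter": 585.0},
}

def import_row(model_name):
    """Like VLOOKUP: fetch one brochure row, then convert diameters to
    radii, because the 2D sketch is dimensioned from the symmetry axis."""
    row = brochure[model_name]
    return {key.replace("diameter", "radius"): value / 2
            for key, value in row.items()}

sketch_variables = import_row("SB-500")
print(sketch_variables["outer_radius"])  # 250.0
```

Selecting a different model name plays the role of the dropdown list in Fig. 3: the downstream sketch variables change without touching the model itself.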
First, we will create a 2D sketch with the slewing bearing profile. Since we will revolve the 2D sketch to create the slewing bearing, we must create a symmetry axis. All the radial dimensions will be linked to this axis (Fig. 4).
After linking all the radial dimensions to the axis of revolution, we will revolve the 2D sketch around this axis to create the 3D model of the two rings.
Fig. 3 Dropdown list to select the slewing bearing from the brochure
Next, we will create a sketch of a bolt hole in the inner ring and in the outer ring. We will include the bolt diameters and the bolt circle diameter. Then, we will create the holes by extruding the bolt hole sketches. Next, we will create a circular pattern for the bolt holes in the inner ring and in the outer ring.
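The geometry behind such a circular pattern is simple to state. The following Python sketch (an illustration of the pattern math only, with an assumed bolt circle diameter and hole count, not Solid Edge™ API code) computes the hole centres that the pattern feature places on the bolt circle:

```python
import math

def bolt_hole_centres(bolt_circle_diameter, hole_count):
    """Centres (x, y) in mm of a circular pattern of bolt holes,
    equally spaced on the bolt circle."""
    r = bolt_circle_diameter / 2
    return [
        (r * math.cos(2 * math.pi * k / hole_count),
         r * math.sin(2 * math.pi * k / hole_count))
        for k in range(hole_count)
    ]

centres = bolt_hole_centres(450.0, 12)
print(len(centres))  # 12
```

Because the centres depend only on the bolt circle diameter and the hole count, both can be driven from the same spreadsheet as the other sketch variables.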
We have created a full 3D model. Now, we have to link the slewing bearing component dimensions to the spreadsheet. If we right-click, we can select "Variables". This opens a spreadsheet within Solid Edge™ showing all the variables in the 3D model (Figs. 5 and 6).
After this, we can copy each dimension value from the fictitious brochure spreadsheet and paste them into the Solid Edge™ spreadsheet. This also pastes the fictitious brochure spreadsheet file path (Fig. 7).
The process consists of copying the value of each variable in the brochure spreadsheet (Fig. 8) and pasting it as a link in the Solid Edge™ spreadsheet (Fig. 9).
Fig. 4 2D sketch
Now, we have linked the variables from the fictitious brochure spreadsheet to the CAD model. If we select a different model from the dropdown list, the CAD model will auto-update. We can see in Figs. 10 and 11 how the CAD model is updated when each of the different slewing bearings is selected from the brochure.
Automatization of CAD Model Development of Slewing Bearing Using … 213
4 Conclusions
This chapter shows how to link the dimensions of a CAD model to a spreadsheet.
This allows the designers to create CAD models of different slewing bearings from a
brochure faster than creating each model from scratch. This methodology can also be
extended to CAD models of other products. The process will start with the creation
Fig. 9 Paste as link of the variable value in the Solid Edge™ variables spreadsheet
of a CAD model template. Then, we will develop a spreadsheet with the dimensions
of the different models that will be available in the brochure. Finally, we will link
the dimensions from the spreadsheet to the CAD model. This method automates the
CAD model development and reduces errors that can happen when developing a new
model.
References
1. Gujarathi GP, Ma YS (2011) Parametric CAD/CAE integration using a common data model.
J Manuf Syst 30(3):118–132
2. Roller D (1991) An approach to computer-aided parametric design. Comput Aided Des
23(5):385–391
3. Myung S, Han S (2001) Knowledge-based parametric design of mechanical products based on
The issue of generating surfaces which form an ordered curl of surfaces, using profiled tools, is a current concern of international research teams. The performance of generation by machining depends on the cutting tool's geometry, and 3D modelling allows a simple and rigorous analysis of the actual geometry of the tool's cutting edges. Initially, the issue of surface generation by enwrapping had a graphical approach and subsequently an analytical one. This analytical way of studying the generating-by-enwrapping processes is still frequently used by researchers.
A fundamental contribution was made by F. Litvin, who approached the modelling of the toothed wheel, in analytical form, as the basis of the general design issue for this type of part [1]. Obviously, the analytical approach to surface generation is very important in this domain, but it can and has to be complemented or, where the methodology allows, replaced with alternative methods. Sometimes, these methods are easy to use and induce minimum errors from a technical point of view.
The continuous development of graphical design environments allows a return to the graphical or grapho-analytical approach for surface enwrapping problems, using the capabilities offered by AutoCAD, CATIA, Solid Edge or other graphical design programs.
At “Dunărea de Jos” University of Galați, in the Department of Manufacturing Engineering, a research team, of which the authors of this chapter are also part, developed applications in the graphical design environment, solving tool-profiling problems for the generation of: ordered curls of surfaces associated with
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 217
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing,
Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6_11
218 V. G. Teodor et al.
1.1 Introduction
For generating pieces with symmetry as an ordered curl of surfaces, in both cases of manufacturing by plastic deformation or by machining, enwrapping surface generation is largely used.
This generating method has some considerable advantages, such as: increased productivity; dimensional precision of the obtained surfaces; and correctness of the obtained shape. These last two advantages are in close connection with the way the surface is obtained, this surface not being a mere copy of the counterpart of the tool but being obtained by a large number of cuts or positionings of the generating tool.
However, this way of obtaining the surface also has a disadvantage, due to the necessity to calculate with increased precision the shape of the active surface of the generating tool. Often, the calculations needed for the tool's profiling are difficult and make significant use of the available computing resources.
Over the years, some methods were established for surface generation by enwrapping, among which can be mentioned [2]: the Olivier theorems, which allow the study of linear and single-point contact between the enwrapping surfaces; the Willis theorem, also known as the "theorem of normals", for the study of in-plane enwrapping between profiles; and finally, the Nikolaev method for the study of reciprocal enwrapping between revolution or cylindrical surfaces and a cylindrical helical surface with constant pitch.
Subsequently, in order to simplify the calculation effort for profiling the previously mentioned tools, some complementary methods were developed, characterized by a limited applicability, but having the advantage of a simpler form for the enwrapping condition, while keeping the mathematical rigour [3]. Among the complementary theorems, developed at “Dunărea de Jos” University of Galați by a research team led by the respected professor Ph.D. eng. Nicolae Oancea, we can mention [4]: the method of "minimum distance"; the method of "trajectories"; the method of "substituting circles"; the method of "in-plane generating trajectories"; etc.
CAD-Based Application in VBA for Tool's Profiling 219
The large-scale use of computer-assisted design techniques determined the development of methods which allow the user to take advantage of these techniques when it is necessary to determine the profiles of tools which generate by enwrapping. For this purpose, profiling algorithms were conceived and published which can be used with various CAD programs, such as: AutoCAD™, CATIA™, Solid Edge™, etc. [5–10].
The script presented at the end of this chapter is part of this effort. It was developed in the VBA language and takes advantage of the CATIA™ design environment to profile tools of the following types: rack-gear, gear shaped cutter or rotary cutter, designed to generate by enwrapping, using the rolling method, profiles known in analytical or discrete form.
Generating helical surfaces or an ordered surface curl needs the determination of the active surface of the tool. A particularity of generation by enwrapping is that both the generated surface and the generating one are moving surfaces and, moreover, each of them is the envelope of the family generated by the other surface during the relative movement regarding the envelope. For this reason, it is possible to imagine some specific methods for determining the geometrical locus of the contact points between the two mentioned surfaces.
In the established vocabulary, this geometrical locus of the contact points is named the "characteristic curve" and constitutes the starting element for identifying the form of the enveloping surface. As we previously mentioned, both considered surfaces must be in a situation of reciprocal enwrapping. This allows determining the mathematical expression of a relation between the surfaces and determining a kinematical enwrapping condition between the generated and generating surfaces.
Regarding the analytical expression of the generated surface and the movement between the two surfaces, determining the kinematical enwrapping condition assumes three approaches: the case when the family of surfaces to be generated depends on one parameter; the case when the family of surfaces to be generated depends on two independent parameters; and the third case, when the movement between the two surfaces is a complex motion.
If the generated surface is denoted by Σ and its movement is analysed with regard to a global reference system, it is possible to demonstrate that the movement equation has the form:
$$x = \alpha^T \cdot X + A, \qquad (1)$$

where $x = (x, y, z)^T$ represents the matrix obtained with the coordinates of a point, M, belonging to the Σ surface in the global reference system xyz; $X = (X, Y, Z)^T$ is the matrix obtained with the coordinates of the same point in a mobile reference system, XYZ, joined with the Σ surface; $\alpha = \left(\alpha_{ik}(\tau)\right)$ is the orthogonal transformation matrix between the unit vectors of the two systems, XYZ and xyz; and $A = \left(a_i(\tau)\right)$ is the matrix associated with the position vector of the point O, the vector $\vec{r}_O$, see Fig. 1.
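Equation (1) is a rigid-body coordinate change, which is easy to check numerically. The sketch below is an illustrative verification in Python; the rotation angle and the offsets are arbitrary choices, not values from the chapter. It applies $x = \alpha^T \cdot X + A$ for a rotation about the z axis and checks that the orthogonal matrix $\alpha$ preserves distances:

```python
import numpy as np

# Orthogonal transformation matrix alpha(tau): here a rotation of
# angle tau about the z axis (an arbitrary example).
tau = np.pi / 6
alpha = np.array([
    [np.cos(tau),  np.sin(tau), 0.0],
    [-np.sin(tau), np.cos(tau), 0.0],
    [0.0,          0.0,         1.0],
])
A = np.array([2.0, -1.0, 0.5])   # position of the mobile origin in xyz

X = np.array([1.0, 0.0, 0.0])    # point M in the mobile system XYZ
x = alpha.T @ X + A              # Eq. (1): the same point in the global system

# alpha is orthogonal, so the transformation preserves distances:
assert np.isclose(np.linalg.norm(x - A), np.linalg.norm(X))
```

The distance check is the practical meaning of orthogonality here: the mobile system moves rigidly, so only the origin offset A changes lengths measured from it.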
Presuming the analytical equation of the Σ surface, in the reference system joined with it, is known:
F(X, Y, Z ) = 0, (2)
we can write the family of surfaces generated in the movement given by the τ parameter:
F(x, y, z, τ ) = 0. (3)
Definition The geometrical locus of the contact points between the surfaces of the Σ family, generated in the movement (3), and the envelope of this family is named the "characteristic curve".
The characteristic curve, C, onto the Σ surface is given by the equation system:

$$C: \begin{cases} F(x, y, z, \tau) = 0; \\ F'_\tau = F'_x \cdot \dfrac{dx}{d\tau} + F'_y \cdot \dfrac{dy}{d\tau} + F'_z \cdot \dfrac{dz}{d\tau} = 0; \\ \tau = \text{const.} \end{cases} \qquad (4)$$
The second condition in (4) may be regarded as the scalar product of two vectors: the normal to the Σ surface, drawn from the M point,

$$\vec{N}_\Sigma = \left[ F'_x, F'_y, F'_z \right], \qquad (5)$$

and the velocity vector of the M point, in the absolute motion of the Σ surface:

$$\vec{v} = \left[ \frac{dx}{d\tau}, \frac{dy}{d\tau}, \frac{dz}{d\tau} \right]. \qquad (6)$$
In this way, we obtain the Gohman theorem, according to which: "from the kinematic point of view, a point onto the Σ surface belongs to the characteristic curve only if, at this point, the normal to the Σ surface is perpendicular to the velocity vector in the absolute movement of the surface".
Regarding (5) and (6), the equation system (4) can be written as:

$$C: \begin{cases} F(x, y, z, \tau) = 0; \\ \vec{N}_\Sigma \cdot \vec{v} = 0; \\ \tau = \text{const.} \end{cases} \qquad (7)$$
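System (7) can be checked on a simple configuration. The Python sketch below is an illustrative example that is not from the chapter: a sphere of radius R translating along the x axis. Every surface point then moves with velocity (1, 0, 0), and the points satisfying both conditions of (7) form the circle x = τ, y² + z² = R², the characteristic curve whose sweep is the enveloping cylinder:

```python
import numpy as np

R, tau = 2.0, 0.7

def F(p, tau):
    """Family of surfaces: sphere of radius R centred at (tau, 0, 0)."""
    x, y, z = p
    return (x - tau) ** 2 + y ** 2 + z ** 2 - R ** 2

def normal(p, tau):
    """Gradient of F, i.e. the normal vector N of Eq. (5)."""
    x, y, z = p
    return np.array([2 * (x - tau), 2 * y, 2 * z])

v = np.array([1.0, 0.0, 0.0])   # velocity of every surface point (translation)

# Points of the predicted characteristic curve: x = tau, y^2 + z^2 = R^2.
for theta in np.linspace(0.0, 2 * np.pi, 12, endpoint=False):
    p = np.array([tau, R * np.cos(theta), R * np.sin(theta)])
    assert abs(F(p, tau)) < 1e-12            # on the surface
    assert abs(normal(p, tau) @ v) < 1e-12   # N . v = 0, second line of (7)

# A surface point off that circle fails the perpendicularity condition:
p_off = np.array([tau + R, 0.0, 0.0])
assert abs(F(p_off, tau)) < 1e-12 and abs(normal(p_off, tau) @ v) > 1e-9
```

The same two-condition test, with the appropriate F and velocity field, is what the profiling methods of this chapter evaluate point by point.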
$$F(x, y, \alpha, \beta) = 0. \qquad (8)$$
In Eq. (8), the two independent parameters on which the family of surfaces depends are α and β. Similarly to the previous case, the envelope of the family of surfaces described by (8) is given by the equation system:
$$P: \begin{cases} F(x, y, \alpha, \beta) = 0; \\ F'_\alpha = F'_{\alpha x} \cdot \dfrac{dx}{d\alpha} + F'_{\alpha y} \cdot \dfrac{dy}{d\alpha} + F'_{\alpha z} \cdot \dfrac{dz}{d\alpha} = 0; \\ F'_\beta = F'_{\beta x} \cdot \dfrac{dx}{d\beta} + F'_{\beta y} \cdot \dfrac{dy}{d\beta} + F'_{\beta z} \cdot \dfrac{dz}{d\beta} = 0. \end{cases} \qquad (9)$$
By analogy with Eqs. (5) and (6), the system (9) can be brought to the form:
$$P: \begin{cases} F(x, y, \alpha, \beta) = 0; \\ \vec{N}_\Sigma \cdot \vec{v}_\alpha = 0; \\ \vec{N}_\Sigma \cdot \vec{v}_\beta = 0. \end{cases} \qquad (10)$$
In Eq. (10), $\vec{v}_\alpha$ and $\vec{v}_\beta$ represent the absolute velocities in the movements determined by the parameters α and β. In these two movements, two characteristic curves are determined onto the Σ surface, and their intersection point is called the characteristic point. At the characteristic point, the normal to the Σ surface is common with the normal to the envelope of the family of surfaces.
$$\vec{N}_\Sigma \cdot \vec{v}_\beta = 0. \qquad (11)$$
In this case, the surface’s characteristic curve can be determined from the equation
system:
$$C: \begin{cases} F(x, y, \alpha, \beta) = 0; \\ \vec{N}_\Sigma \cdot \vec{v}_\alpha = 0; \\ \alpha = \text{const.} \end{cases} \qquad (12)$$
The case of generation by enwrapping presents practical interest because the absolute movements of the two surfaces are rotations around two axes joined with these surfaces. In practice, the known elements are the surface to be generated, Σ, and the absolute
(point O₁), $\vec{b}$ for the position vector of the origin of the mobile reference system ξηζ in the global reference system (the point O₂), and $\vec{c}_1$ and $\vec{c}_2$ for the normals drawn from the origin of the fixed reference system (point O) to the rotation axes I and II, respectively.
The matrix expressions of the vectors $\vec{a}$, $\vec{b}$, $\vec{c}_1$ and $\vec{c}_2$ (the projections of these vectors onto the axes of the global reference system) are:

$$\vec{a} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}, \quad \vec{b} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}, \quad \vec{c}_1 = \begin{pmatrix} c_{11} \\ c_{12} \\ c_{13} \end{pmatrix}, \quad \vec{c}_2 = \begin{pmatrix} c_{21} \\ c_{22} \\ c_{23} \end{pmatrix}. \qquad (14)$$
$$X = \alpha \cdot (x - a), \qquad (15)$$

$$\xi = \beta \cdot (x - b), \qquad (16)$$

X, ξ and x being the matrices obtained with the coordinates of the current point in the corresponding reference systems:

$$X = \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}, \quad \xi = \begin{pmatrix} \xi \\ \eta \\ \zeta \end{pmatrix}, \quad x = \begin{pmatrix} x \\ y \\ z \end{pmatrix}. \qquad (17)$$
The absolute movement of the Σ surface, joined with the moving XYZ system, can be described by the transformation:

$$x = \alpha^{-1} \cdot \omega_{n1}^{-1}(\varphi_1) \cdot [X - \alpha \cdot (c_1 - a)] + c_1, \qquad (18)$$

$\omega_{n1}(\varphi_1)$ being the notation for the rotation matrix around the axis with unit vector $\vec{n}_1$, a rotation of angle $\varphi_1$.

Similarly, the absolute movement of the S surface, joined with the reference system ξηζ, can be defined:

$$x = \beta^{-1} \cdot \omega_{n2}^{-1}(\varphi_2) \cdot [\xi - \beta \cdot (c_2 - b)] + c_2, \qquad (19)$$

the significance of the $\omega_{n2}(\varphi_2)$ matrix being similar to that of the $\omega_{n1}(\varphi_1)$ matrix.
From Eqs. (18) and (19), the relative motions between the mobile reference systems can be deduced. These movements represent the movement of the Σ surface with regard to the S surface:
$$\xi = \omega_{n2}(\varphi_2) \cdot \beta \cdot \left\{ \alpha^{-1} \cdot \omega_{n1}^{-1}(\varphi_1) \cdot [X - \alpha \cdot (c_1 - a)] + c_1 - c_2 \right\} + \beta \cdot (c_2 - b), \qquad (20)$$
with u a variable parameter, the family of surfaces will have, in principle, the form:

$$(\Sigma)_{\varphi_1}: \begin{cases} \xi = \xi(u, \varphi_1); \\ \eta = \eta(u, \varphi_1); \\ \zeta = \zeta(u, \varphi_1). \end{cases} \qquad (23)$$
$$\vec{N}_\Sigma \cdot \vec{R}_{\varphi_1} = 0. \qquad (24)$$

In Eq. (24), $\vec{N}_\Sigma$ represents the normal to the Σ surface, drawn at the current point, and $\vec{R}_{\varphi_1}$ represents the direction of the tangent to the trajectory of this current point, in its relative motion with regard to the ξηζ reference system.
By identifying the enwrapping condition, the equations of the S surface can be written. In this case, the equations will have the form:
$$S: \begin{cases} \xi = \xi(u, \varphi_1); \\ \eta = \eta(u, \varphi_1); \\ \zeta = \zeta(u, \varphi_1); \\ \varphi_1 = \varphi_1(u). \end{cases} \qquad (25)$$
The contact surface [3] can be defined as: "the geometrical locus, in the global reference system, of the contact points between the two conjugated surfaces, in the ensemble of their movements".
The contact surface’s equations can be identified if the enwrapping condition is
associated with the absolute movement of the Σ surface:
$$x = \alpha^{-1} \cdot \omega_{n1}^{-1}(\varphi_1) \cdot [X - \alpha \cdot (c_1 - a)] + c_1. \qquad (26)$$
Machining through generation of ordered curls of surfaces may be done using tools such as: rack-gear, gear shaped cutter or rotary cutter. All these three types of tools generate by the rolling method, which means that their profiling can be regarded as an in-plane enwrapping issue. In the specialized literature, various methods are presented for solving this type of issue, such as: the method of "minimum distance"; the method of "in-plane generating trajectories"; the method of the "family of substitutive circles"; the Willis method, also known as the "theorem of normals"; etc. Each of these methods is based on determining the enwrapping condition using one of the properties of reciprocally enwrapping curves, as they were defined in the previous section.
Regardless of the method used, the profiling algorithms presented in the literature assume the following stages: writing the equations of the profile to be generated; writing the equations of the absolute movements of the piece and the tool; determining the relative motions between the piece and the tool, based on the previously determined absolute movements; finding the family of curves generated in this relative motion; and identifying the enwrapping condition, which allows selecting the points belonging to the enveloping curve from the multitude of points belonging to the family. Generally, besides identifying the conjugated curve (the curve which is the profile of the generating tool), finding the contact curve between the two enwrapping profiles is also of interest.
Definition: The contact curve represents the geometrical locus, in a fixed reference system, of the tangency points established in the ensemble of movements of two reciprocally enwrapping curves [4].
The method of the "virtual pole" represents a reinterpretation of the method of normals and uses the property of reciprocally enwrapping curves to admit, at the contact point, a common normal which passes through the gearing pole. According to the proposed method, for each point on the profile to be generated, one "virtual pole" is identified, defined as the intersection point between the normal to the profile, drawn at the current point, and the centrode associated with this profile, see Fig. 3. In certain conditions, the virtual pole may become the gearing pole. Considering this observation, the enwrapping condition can be found by identifying the conditions under which the virtual pole becomes the actual gearing pole.
These conditions can be identified by finding the position vector of the virtual pole (the point Pᵥ in Fig. 3), which allows establishing the movement which brings the virtual pole into the gearing pole. Once this condition is established and applied to the generated profile, the position is obtained where the current point on this profile is in contact with the generating tool. If the two conjugated profiles are in contact, their tangency point belongs to the contact curve too, according to the definition. Therefore, applying the absolute motion necessary for the virtual pole to overlap the gearing pole, the current point on the generated profile is brought onto the contact curve.
Since the absolute motion of the generating profile is also known, the coordinates of the contact point belonging to this profile can be identified too. This approach to the in-plane issue makes it unnecessary to write the relative motions between the generated and generating profiles, as well as the equations of the profile family generated in these relative motions. In this way, a major source of errors is avoided and the calculation effort is reduced significantly.
In Fig. 3 are represented the C₁ and C₂ centrodes, associated with the two profiles: Σ, the profile to be generated, and S, the generating profile. The absolute motions of the two centrodes, and consequently of their associated profiles, are given by the rotations ϕ₁ and ϕ₂, carried out around the points O₁ and O₂, respectively. The virtual pole is represented by the point Pᵥ in its initial position and by P'ᵥ in the rotated position, when it is overlapped with the gearing pole (point P in Fig. 3). The vector $\vec{r}$ represents the position vector of the current point M, and the vector $\vec{N}_\Sigma$ is the normal to the generated profile, drawn through the current point. The position vector of the virtual pole was denoted in Fig. 3 by $\vec{r}_{P_v}$.
In Fig. 3, the positions of the elements after the motions carried out to bring the virtual pole into the gearing pole are represented with dashed lines, and the notations of these elements are marked with the symbol ' (prime).
The comparison between the "virtual pole" method and a classical method (in this case, the Willis theorem) is presented in Table 1. The comparison is made for profiling the rack-gear designed to generate a shaft-type piece with a square section, see Fig. 4.
In Table 1, the following notations were used:
– Σ—profile to be generated (piece);
– a—half-side of the square section;
– u—variable parameter which describes the piece's profile;
– ϕ—rotation angle of the reference system joined with the profile to be generated;
– ϕᵤ—particular value of the ϕ rotation angle for which, in the absolute motion of the piece, the virtual pole becomes the gearing pole. The value depends on the considered point on the profile;
– M—current point on the piece's profile (profile to be generated) which, by intersection with the centrode associated with the piece (the circle with radius Rᵣₚ), defines the gearing pole, P;
– X(u), Y(u)—parametric equations of the profile, which for a certain value of u give the coordinates of the current point, M;
– ξ, η—axes of the mobile reference system joined with the tool's profile;
– x, y—axes of the global (fixed) reference system;
– X, Y—axes of the mobile reference system, joined with the piece's profile;
– Rᵣₚ—rolling radius of the piece;
– ω₃(ϕ)—rotation matrix;
– ω₃ᵀ(ϕ)—transpose of the rotation matrix;
– A—translation matrix obtained with the coordinates of the mobile reference system's origin, in the global reference system;
– Ȧ—derivative of the translation matrix;
Table 1 Comparison between the "virtual pole" method and a classical method

Crt. No. 1. Profile to be generated, Σ [3]:

$$\Sigma: \begin{cases} X(u) = -a; \\ Y(u) = u. \end{cases}$$

Method of the "virtual pole":

$$\vec{r}_{P_v} = \vec{r} + \vec{N}_\Sigma = X(u) \cdot \vec{i} + Y(u) \cdot \vec{j} + \lambda \cdot \left( \dot{Y}_u \cdot \vec{i} - \dot{X}_u \cdot \vec{j} \right);$$

$$C_1: \begin{cases} X = -R_{rp} \cdot \cos\varphi; \\ Y = R_{rp} \cdot \sin\varphi; \end{cases} \qquad \vec{N}_\Sigma \cap C_1 = P_v;$$

$$\varphi_u = \arcsin\left( \frac{u}{R_{rp}} \right).$$

Method of normals (Willis' theorem) [3]:

$$P: \begin{cases} X = -R_{rp} \cdot \cos\varphi; \\ Y = R_{rp} \cdot \sin\varphi; \end{cases} \qquad P \in \vec{N}_\Sigma;$$

$$\left[ -R_{rp} \cdot \cos\varphi - X(u) \right] \cdot \dot{X}_u + \left[ R_{rp} \cdot \sin\varphi - Y(u) \right] \cdot \dot{Y}_u = 0;$$

$$\varphi = \arcsin\left( \frac{u}{R_{rp}} \right).$$
– in the tool's reference system (rack-gear tool's profile) [3]:

$$M_{\xi\eta}: \begin{cases} \xi_M = x_M + R_{rp}; \\ \eta_M = y_M + R_{rp} \cdot \varphi_u. \end{cases}$$
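The coordinates above, together with $\varphi_u = \arcsin(u/R_{rp})$ from Table 1, chain into a complete computation. The following Python snippet is an illustrative sketch of that chain for the square-section shaft; the rotation convention used for ω₃ and the numeric values of a and Rᵣₚ are assumptions, not values from the chapter. For each point X(u) = −a, Y(u) = u of the piece profile, it rotates the point by ϕᵤ and transfers it into the rack tool's reference system:

```python
import numpy as np

a, R_rp = 20.0, 40.0   # half-side of the square and rolling radius (mm), assumed

def rack_profile_point(u):
    """Virtual-pole chain for the flank X(u) = -a, Y(u) = u."""
    phi_u = np.arcsin(u / R_rp)      # angle at which the virtual pole
                                     # overlaps the gearing pole
    # Absolute movement of the piece: rotate M by phi_u
    # (assumed plane-rotation convention for w3^T).
    c, s = np.cos(phi_u), np.sin(phi_u)
    x_M = c * (-a) - s * u
    y_M = s * (-a) + c * u
    # Transfer to the rack tool's reference system, as in M_xi_eta:
    xi_M = x_M + R_rp
    eta_M = y_M + R_rp * phi_u       # rolling condition delta = R_rp * phi
    return xi_M, eta_M

# Sample the flank from u = -a to u = +a:
for u in np.linspace(-a, a, 5):
    xi, eta = rack_profile_point(u)
    print(f"u = {u:6.1f} -> xi = {xi:7.3f}, eta = {eta:7.3f}")
```

A quick sanity check on this sketch: at u = 0 the computed point lies on the rolling line (η_M = 0) at ξ_M = R_rp − a, and the flank comes out symmetric about it, as expected for the square section.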
– $\vec{N}_\Sigma$—normal vector to the Σ profile;
– $\vec{R}_\varphi$—velocity vector;
– $\vec{r}$—position vector of the current point;
– λ—scalar parameter representing the modulus of the normal vector $\vec{N}_\Sigma$.
Next, the particular cases for profiling the rack-gear, the gear shaped cutter and the rotary cutter will be presented.
The generation of surfaces by rolling, using rack-type tools, is a particular case of the application of the Willis theorem. According to this theorem: "profiles that transmit rotational motion between two parallel axes admit, at the point of contact, a common normal that passes through the gearing pole" [3].
When using the rack tool, only one of the profiles has a rotational movement, namely the profile to be generated. The other, the generating profile, has a translational motion. However, the Willis theorem remains valid if we consider that the translational motion can be seen as a particular case of rotation, in which the centre of rotation is located at an infinite distance.
So, in the case of generation with the rack tool, it can be considered that the rolling radius of the tool is infinite, which causes its centrode to transform from a circle into a straight line. Usually, two mobile reference systems are used for profiling the rolling generating tools, one joined with the generated profile, Σ, the other joined with the generating profile, S, together with a fixed reference system.
The three reference systems, as well as the centrodes associated with the studied profiles, are shown in Fig. 5. In this figure, the elements related to the piece are represented in red, those related to the tool in blue and those related to the fixed system in green.
In connection with Fig. 5 are defined: the fixed reference system, xOy, whose origin coincides with the centre of the centrode of the generated profile, C₁, which is a circle of radius Rᵣₚ; the mobile reference system, XOY, having the same origin as the fixed system and rotating with the generated profile, being joined with it; the angular parameter of rotation of the generated profile, ϕ; the mobile reference system joined with the generating profile, ξO₁η, having, at the initial moment, the ξ axis overlapped on the x and X axes and the η axis overlapped on the centrode of the generating profile, C₂, which is a line; and the translation parameter of the generating profile, δ.
In Fig. 5, the positions of the elements and of the axes after the execution of the movement that brings the virtual pole overlapped with the gearing pole are represented with a dashed line.
The figure also marks the gearing pole, P, representing the tangency point between the two centrodes, and the virtual pole, Pv, defined as the intersection point between the normal to the generated profile, $\vec{N}_\Sigma$, drawn through the current point M, and the centrode associated with this profile, the circle C1. The position of the current point is given by the position vector $\vec{r}$.
Admitting as known, in its own reference system, the parametric equations of the generated profile, Σ, in the form:
\[
\Sigma:\;\begin{cases} X = X(u); \\ Y = Y(u), \end{cases} \tag{27}
\]
232 V. G. Teodor et al.
with u a variable scalar parameter, it is possible to determine the position vector of the current point M for a certain value of u.
The normal to the generated profile, drawn through the current point, will have the direction given by the unit vector $\vec{n}_\Sigma$:
\[
\vec{n}_\Sigma = \dot{Y}_u \cdot \vec{i} - \dot{X}_u \cdot \vec{j}. \tag{29}
\]
In Eq. (29), $\dot{X}_u$ and $\dot{Y}_u$ represent the derivatives of the X(u) and Y(u) functions with respect to the variable u.
Denoting with λ the modulus of the normal vector, its expression becomes:
\[
\vec{N}_\Sigma = \lambda\left(\dot{Y}_u \cdot \vec{i} - \dot{X}_u \cdot \vec{j}\right). \tag{30}
\]
where ϕ represents the angular parameter which describes this circle; putting the condition that the end of the $\vec{r}_{P_v}$ vector belongs to this centrode, it is obtained:
\[
\begin{cases} X = -R_{rp}\cos\varphi = X(u) + \lambda\,\dot{Y}_u; \\ Y = R_{rp}\sin\varphi = Y(u) - \lambda\,\dot{X}_u. \end{cases} \tag{33}
\]
\[
\delta = R_{rp}\cdot\varphi. \tag{36}
\]
If, in relation (36), the ϕ angular parameter takes the particular value ϕu, giving the current point the absolute movements of the generated profile, the coordinates xM and yM can be determined. Now, the current point is in contact with the generating profile. Developing relation (37), where the matrices x and X are given by the coordinates of the M point in the corresponding reference systems, xOy and XOY, respectively, and $\omega_3^T(\varphi)$ represents the matrix of orthogonal transformation between the unit vectors of the above-mentioned reference systems, we obtain:
\[
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\varphi_u & -\sin\varphi_u \\ \sin\varphi_u & \cos\varphi_u \end{pmatrix} \cdot \begin{pmatrix} X_M \\ Y_M \end{pmatrix}, \tag{38}
\]
resulting in:
\[
\begin{cases} x_M = X_M\cos\varphi_u - Y_M\sin\varphi_u; \\ y_M = X_M\sin\varphi_u + Y_M\cos\varphi_u. \end{cases} \tag{39}
\]
We have to mention that, in this position, the M point also lies on the contact curve.
Keeping in mind the rolling condition, the value of the δ parameter for which the generated profile is tangent to the generating profile at the point M is obtained (see Fig. 5):
\[
\delta_u = R_{rp}\cdot\varphi_u, \tag{40}
\]
\[
x = \xi + A, \tag{41}
\]
allows identifying the coordinates of the contact point between the two profiles in the reference system of the rack-gear, ξO1η.
In relation (41), the meaning of the x matrix is identical to that from relation (37), the matrix ξ is given by the coordinates of the current point in the reference system joined with the rack-gear, ξO1η, and the matrix A is given by the coordinates of the tool reference system's origin in the global reference system. Thus, developing relation (41), we obtain:
\[
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \xi \\ \eta \end{pmatrix} + \begin{pmatrix} -R_{rp} \\ -\delta \end{pmatrix}, \tag{42}
\]
which allows finding the coordinates of the contact point between the two conjugated profiles in the reference system of the rack-gear tool, in the form:
CAD-Based Application in VBA for Tool’s Profiling 235
\[
\begin{pmatrix} \xi_M \\ \eta_M \end{pmatrix} = \begin{pmatrix} x_M \\ y_M \end{pmatrix} - \begin{pmatrix} -R_{rp} \\ -\delta_u \end{pmatrix}, \tag{43}
\]
namely:
\[
\begin{cases} \xi_M = x_M + R_{rp}; \\ \eta_M = y_M + \delta_u. \end{cases} \tag{44}
\]
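The chain of relations (27)–(44) can be checked numerically. The Python sketch below is an illustrative stand-in, not part of the chapter's VBA application: the profile Σ, the rolling radius Rrp and the root-selection rule are assumed values. It intersects the normal through the current point M with the circular centrode (Eq. (33)), keeps the intersection closest to M (the same criterion the script applies later with "AddNewNear"), then applies Eqs. (39), (40) and (44).

```python
import math

Rrp = 50.0  # rolling radius of the piece centrode (illustrative value)

def profile(u):
    """Hypothetical generated profile Sigma: a straight radial flank."""
    X, Y = -45.0, -5.0 + 10.0 * u      # X(u), Y(u)
    dX, dY = 0.0, 10.0                 # derivatives with respect to u
    return X, Y, dX, dY

def rack_tool_point(u):
    X, Y, dX, dY = profile(u)
    # Virtual pole Pv: intersect the normal through M, direction (dY, -dX),
    # with the piece centrode x^2 + y^2 = Rrp^2 -- Eq. (33)
    a = dX ** 2 + dY ** 2
    b = 2.0 * (X * dY - Y * dX)
    c = X ** 2 + Y ** 2 - Rrp ** 2
    sq = math.sqrt(b * b - 4.0 * a * c)
    lam = min(((-b + sq) / (2 * a), (-b - sq) / (2 * a)), key=abs)
    xv, yv = X + lam * dY, Y - lam * dX     # intersection closest to M
    # angular parameter phi_u bringing Pv onto the gear pole (-Rrp, 0)
    phi = math.atan2(yv, -xv)
    # absolute motion of the piece -- Eq. (39)
    xM = X * math.cos(phi) - Y * math.sin(phi)
    yM = X * math.sin(phi) + Y * math.cos(phi)
    # rolling condition (40) and transfer into the rack system -- Eq. (44)
    return xM + Rrp, yM + Rrp * phi         # (xi_M, eta_M)
```

For u = 0.5 the current point lies on the x axis and the sketch returns (5, 0): a point 5 mm away from the rack's rolling line, as expected for a flank placed 5 mm inside the centrode.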
Mainly, gear shaped tools are intended to generate internal surfaces such as internal teeth, profiled holes (K-type holes), polygonal bushes, etc. It is also not uncommon to use this type of tool to generate exterior surfaces; the use examples are similar to those presented above. Generation with gear shaped tools is characterized by the fact that the axes of the two axodes are parallel and at a finite distance from each other. As a result, the absolute motions associated with both centrodes are rotational.
Unlike the previous case, four reference systems are needed this time, as presented in Fig. 6: two fixed reference systems, xOy and x1O1y1, and two mobile systems, XOY and ξO1η, respectively.
As mentioned before, the gear shaped tool can generate both external and internal surfaces, these cases being presented in Fig. 6a and 6b.
Fig. 6 Reference systems at generating with gear shaped tool. Conjugated centrodes
The xOy reference system is selected so that its origin coincides with the centre of the C1 centrode, the circle of radius Rrp. The mobile XOY reference system is joined with the piece and has, at the initial moment, its axes overlapped with those of the xOy system. As mentioned before, the piece has a rotational movement, so the XOY system rotates with it. The generated profile is defined in this system.
The rotation of the XOY system and, simultaneously, of the piece is described by the angular parameter ϕ1. The x1O1y1 reference system is fixed, chosen so that its origin coincides with the centre of the C2 centrode, the centrode joined with the generating profile. This centrode is also a circle, its radius being Rrt in Fig. 6. The distance A12 between the origins of the two reference systems depends on the values of the rolling radii by the formula:
\[
A_{12} = R_{rp} \pm R_{rt}, \tag{45}
\]
the sign "+" corresponding to the generation of external surfaces and the sign "−" to internal surfaces.
The fourth reference system is the mobile system ξO1η, in which the generating profile is defined. It is chosen so that its origin coincides with that of the x1O1y1 system and, initially, its axes are overlapped with those of that system. Being joined with the generating profile, the ξO1η system rotates with this profile, the angular parameter being ϕ2.
The generated profile, Σ, is defined in the XOY system by equations of the form (27). For a certain value of the u parameter, a current point on the Σ profile is obtained, and the position vector of this point is given by Eq. (28). When u goes through its whole definition domain, the point defined by this parameter becomes the current point, describing the entire generated profile. The normal to the Σ profile is described by Eq. (30).
As in the previous case, by summing the vectors $\vec{r}$ and $\vec{N}_\Sigma$, the position vector of the virtual pole, $\vec{r}_{P_v}$, is obtained, with the expression given by (31). From the condition that the end of the vector $\vec{r}_{P_v}$ belongs to the centrode joined with the piece, the enwrapping condition can be deduced according to Eq. (35).
Starting from the known position of the current point M, with the pair of coordinates XM, YM, and the value of the angular parameter ϕ1 which brings the virtual pole into the gearing pole, the position of point M in the global reference system can be determined.
For this, we have to keep in mind the rolling condition, meaning that the C1 and C2 centrodes roll on each other without slipping:
\[
R_{rp}\cdot\varphi_1 = R_{rt}\cdot\varphi_2 \;\Rightarrow\; \varphi_2 = \frac{R_{rp}}{R_{rt}}\cdot\varphi_1. \tag{46}
\]
Usually, the notation $R_{rp}/R_{rt} = i$ is used, which brings Eq. (46) to the form:
\[
\varphi_2 = i\cdot\varphi_1. \tag{47}
\]
resulting in:
\[
\begin{cases} x_M = X_M\cos\varphi_{1u} - Y_M\sin\varphi_{1u}; \\ y_M = X_M\sin\varphi_{1u} + Y_M\cos\varphi_{1u}. \end{cases} \tag{50}
\]
We have to mention that, for various positions of the M point, Eq. (50) describes the contact curve.
Taking into account the rolling condition, the value of the ϕ2 parameter for which the generated profile is tangent to the generating profile at point M is obtained, which allows identifying the coordinates of the contact point between the two profiles in the tool's reference system, ξO1η.
In relation (52), the value A12 has the meaning given by relation (45), the matrix ξ is given by the coordinates of the current point in the reference system joined with the tool, ξO1η, and the matrix A is given by the coordinates of the origin of the generating profile's system in the fixed reference system x1O1y1. Hence, developing these relations allows finding the coordinates of the contact point between the two conjugated profiles in the reference system of the gear shaped tool, in the form:
\[
\begin{pmatrix} \xi_M \\ \eta_M \end{pmatrix} = \begin{pmatrix} \cos\varphi_2 & \pm\sin\varphi_2 \\ \mp\sin\varphi_2 & \cos\varphi_2 \end{pmatrix}\begin{pmatrix} x_{1M} \\ y_{1M} \end{pmatrix}; \quad \begin{pmatrix} x_{1M} \\ y_{1M} \end{pmatrix} = \begin{pmatrix} x_M \\ y_M \end{pmatrix} + \begin{pmatrix} -A_{12} \\ 0 \end{pmatrix}, \tag{55}
\]
meaning:
\[
\begin{cases} \xi_M = x_{1M}\cos\varphi_2 \pm y_{1M}\sin\varphi_2; \\ \eta_M = \mp x_{1M}\sin\varphi_2 + y_{1M}\cos\varphi_2. \end{cases} \tag{56}
\]
As before, in Eq. (56) the sign “+” corresponds to external surfaces and the sign
“−” to internal surfaces.
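As a numeric cross-check of Eqs. (45)–(47), (50), (55) and (56), the short Python sketch below maps a contact point from the piece system into the tool system of the gear shaped tool. It is an illustrative stand-in, not the chapter's VBA script; the radii and the test points are assumed values.

```python
import math

def gear_shaped_tool_point(XM, YM, phi1, Rrp, Rrt, external=True):
    """Map the current point from the piece system XOY into the
    tool system xi-O1-eta of the gear shaped tool."""
    s = 1.0 if external else -1.0
    A12 = Rrp + s * Rrt                  # centre distance -- Eq. (45)
    phi2 = (Rrp / Rrt) * phi1            # rolling condition -- Eqs. (46)-(47)
    # absolute motion of the piece -- Eq. (50)
    xM = XM * math.cos(phi1) - YM * math.sin(phi1)
    yM = XM * math.sin(phi1) + YM * math.cos(phi1)
    # translate into x1O1y1, then rotate by phi2 -- Eqs. (55)-(56)
    x1, y1 = xM - A12, yM
    xi = x1 * math.cos(phi2) + s * y1 * math.sin(phi2)
    eta = -s * x1 * math.sin(phi2) + y1 * math.cos(phi2)
    return xi, eta
```

For ϕ1 = 0 and a point lying in the gearing pole (XM = Rrp, YM = 0), the mapping returns (−Rrt, 0) for external generation: the pole seen from the tool centre, lying on the tool centrode, as expected.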
The usage of the rotary cutter for generating profiles by rolling can be regarded as the inverse of generation with the rack-gear tool. In this case, the tool has a rotational movement, and the generated profile has a translational movement.
Mostly, the generated surface is a helical surface, and the translation of the generated profile is obtained from a helical movement around an axis parallel to the piece's centrode.
Usually, for profiling tools which generate by rolling, two mobile reference systems are used: one joined with the generated profile, Σ, the other joined with the generating profile, S, together with a fixed reference system.
The three reference systems, as well as the centrodes joined with the studied profiles, are presented in Fig. 7. In this figure, the elements linked with the tool are represented in red, those linked with the piece in blue and those linked with the fixed reference system in green.
In connection with Fig. 7, the following are defined: the fixed reference system, xOy, whose origin coincides with the centre of the generating profile's centrode, C2, which is a circle of radius Rrt; and the mobile reference system, XOY, in which the generated profile is defined. The X axis of this system is initially overlapped with the x axis of the fixed reference system, and the Y axis coincides, throughout the entire movement, with the centrode of the generated profile, the line C1, the translation parameter being δ.
where δ represents the linear parameter which describes the line; putting the condition that the end of the $\vec{r}_{P_v}$ vector belongs to this centrode, it is obtained:
\[
\begin{cases} X = \delta = X(u) + \lambda\,\dot{Y}_u; \\ Y = 0 = Y(u) - \lambda\,\dot{X}_u. \end{cases} \tag{58}
\]
\[
\delta = R_{rt}\cdot\varphi, \tag{60}
\]
\[
x = X + A, \tag{62}
\]
the coordinates xM and yM, where the current point is in contact with the generating profile, can be determined. Developing relation (62), where the matrices x and X are given by the coordinates of the M point in the corresponding reference systems, xOy and XOY, respectively, and A is given by the coordinates of the piece reference system's origin in the fixed system:
\[
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} X_M \\ Y_M \end{pmatrix} + \begin{pmatrix} -\delta \\ -R_{rt} \end{pmatrix}, \tag{63}
\]
resulting in:
\[
\begin{cases} x_M = X_M - \delta; \\ y_M = Y_M - R_{rt}. \end{cases} \tag{64}
\]
Taking into account the rolling condition, the value of the ϕ parameter for which the generated profile is tangent to the generating profile at the point M is obtained (see also Fig. 7):
\[
\varphi_u = \frac{\delta_u}{R_{rt}}, \tag{65}
\]
which allows identifying the coordinates of the contact point between the two profiles in the tool's system, ξO1η.
In relation (66), the meaning of the x matrix is identical to that from relation (62), the matrix ξ is given by the coordinates of the current point in the reference system associated with the tool, ξO1η, and the matrix $\omega_3^T(\varphi)$ represents the orthogonal transformation matrix between the unit vectors of the reference systems x and ξ. Thus, developing relation (66), it is obtained:
\[
\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}\cdot\begin{pmatrix} \xi \\ \eta \end{pmatrix}, \tag{67}
\]
which allows finding the coordinates of the contact point between the two conjugated profiles, in the reference system of the tool, in the form:
\[
\begin{pmatrix} \xi_M \\ \eta_M \end{pmatrix} = \begin{pmatrix} \cos\varphi_u & \sin\varphi_u \\ -\sin\varphi_u & \cos\varphi_u \end{pmatrix}\cdot\begin{pmatrix} x_M \\ y_M \end{pmatrix}, \tag{68}
\]
namely:
\[
\begin{cases} \xi_M = x_M\cos\varphi_u + y_M\sin\varphi_u; \\ \eta_M = -x_M\sin\varphi_u + y_M\cos\varphi_u. \end{cases} \tag{69}
\]
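Equations (58)–(69) can again be condensed into a short numeric sketch. The Python fragment below is illustrative only: the tool radius and the profile are assumed values, and the piece centrode is taken as the line Y = 0 of the XOY system, as written in Eq. (58). For a straight flank parallel to the centrode at depth 3 mm, the conjugate points on the rotary cutter must all lie on a circle of radius Rrt − 3, which the sketch reproduces.

```python
import math

Rrt = 40.0   # rolling radius of the rotary cutter (illustrative value)

def profile(u):
    """Hypothetical generated profile: a flank parallel to the centrode."""
    X, Y = -5.0 + 10.0 * u, 3.0        # X(u), Y(u)
    dX, dY = 10.0, 0.0                 # derivatives with respect to u
    return X, Y, dX, dY

def rotary_cutter_point(u):
    X, Y, dX, dY = profile(u)
    # virtual pole on the straight piece centrode (Y = 0) -- Eq. (58)
    lam = Y / dX                        # from Y(u) - lam * dX = 0 (dX != 0 here)
    delta = X + lam * dY                # translation parameter of the piece
    phi = delta / Rrt                   # rolling condition -- Eq. (65)
    # coordinates in the fixed system -- Eqs. (63)-(64)
    x, y = X - delta, Y - Rrt
    # rotation into the tool system -- Eqs. (68)-(69)
    xi = x * math.cos(phi) + y * math.sin(phi)
    eta = -x * math.sin(phi) + y * math.cos(phi)
    return xi, eta
```

All returned points satisfy ξ² + η² = (Rrt − 3)², i.e. the generating profile is an arc of radius 37 mm around O1, the expected conjugate of a line parallel to the centrode.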
Several modules have been developed in the Visual Basic for Applications (VBA) language to profile the types of tools mentioned above. The application was implemented in the CATIA™ program and is structured in three stages:
– input data;
– calculation of the generating profile, depending on the profile to be obtained and
the type of tool;
– optionally, export the coordinates of the points calculated for the contact curve
and for the generator profile, respectively.
To simplify the use of the application, its commands have been gathered in a toolbar called "Profiling". The appearance of the toolbar is shown in Fig. 8.
The data entry step is common to the three types of tools and consists of a form that allows choosing the desired type of tool, setting the generation parameters and the name of the working file. The form is shown in Fig. 9.
After choosing the respective elements, the characteristic elements used for the generation are created: the reference systems, the origins of the reference systems (points O and O1), the centrodes of the part and of the tool, respectively, and the gear pole (point P).
The design tree after the creation of the respective elements is shown in Fig. 10.
Part of the feature generation script is shown below.
First, the variables that will be used further are defined:
Dim A12 As Double
Dim Rrs As Double
Dim Rrp As Double
Dim semn As Integer
Dim reference As Reference
For the gear shaped tool, depending on the type of generation (external or internal), the sign needed to calculate the distance between the centres of the centrodes, the value A12, is chosen.
If cbTipGenerare Then
    semn = -1
Else
    semn = 1
End If
Rrs = CDbl(tbRrs.Text)
Rrp = CDbl(tbRrp.Text)
A12 = Rrp + semn * Rrs
The origins of the reference systems are generated, positioned according to the
selected tool type:
’Generation of origin points
Dim origineFix As HybridShapePointCoord
Set origineFix = elemente.AddNewPointCoord(0, 0, 0)
...
Else
Set polAngrenare=elemente.AddNewPointCoord(-Rrp,0,0)
End If
polAngrenare.Name = polAngrenare.Name & ".P"
corp.InsertHybridShape polAngrenare
Depending on the type of tool chosen, the centrodes are drawn. For the rack tool, the tool's centrode is a line and the workpiece's centrode is a circle; for the gear shaped tool, both centrodes are circles; and for the rotary cutter tool, the tool's centrode is a circle and the piece's centrode is a line.
The names of the respective elements in the CATIA™ file have been changed, keeping in the name the type of the element ("line" or "circle") and changing the code: C1 for the workpiece centrode and C2 for the tool centrode.
Below is the script portion for drawing the piece's centrode; the tool's centrode is drawn in a similar way.
’Centrodes drawing
Dim reference1 As Reference
Dim linii As HybridShapeFactory
Set linii = reper.HybridShapeFactory
Dim directie As HybridShapeDirection
Set directie = linii.AddNewDirectionByCoord(0#, 1#, 0#)
Dim cercuri As HybridShapeFactory
Set cercuri = reper.HybridShapeFactory
Dim plane As OriginElements
Set plane = reper.OriginElements
Dim planReferinta As HybridShapePlaneExplicit
Set planReferinta = plane.PlaneXY
Dim reference2 As Reference
Set reference2 = reper.CreateReferenceFromObject(planReferinta)
Dim startCentroida As Double, endCentroida As Double
Dim centroidaLinie As HybridShapeLinePtDir
Dim centroidaCerc As HybridShapeCircleCtrRad
'Generating the piece's centrode
If obCutitRotativ Then
Set formaPunct = forme.Item("Point.O1")
Set reference1 = reper.CreateReferenceFromObject(formaPunct)
startCentroida = -Rrs
endCentroida = Rrs
Set centroidaLinie = linii.AddNewLinePtDirOnSupport(reference1, directie, reference2, startCentroida, endCentroida, False)
corp.InsertHybridShape centroidaLinie
reper.InWorkObject = centroidaLinie
centroidaLinie.Name = "Line.C1"
Else
Set formaPunct = forme.Item("Point.O")
Set reference1 = reper.CreateReferenceFromObject(formaPunct)
Set centroidaCerc = cercuri.AddNewCircleCtrRad(reference1, reference2, True, Rrp)
corp.InsertHybridShape centroidaCerc
reper.InWorkObject = centroidaCerc
centroidaCerc.Name = "Circle.C1"
End If
.........
Some lines have been added at the end of the script that allow changing the file path and selecting the option to save the file. If the path to that file is incorrect, an error-handling procedure (FisierInexistent) has been introduced so that the application does not crash.
Dim cale As String
cale = tbCale.Text
Dim numeComplet As String
numeComplet = cale + "Profilare.CATPart"
If cbSaveFile Then
On Error GoTo FisierInexistent
fisier.SaveAs (numeComplet)
End If
frmDateIntrare.Hide
FisierInexistent:
Err.Clear
End Sub
.......
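The same guard can be phrased in any host language. As a rough illustration, with hypothetical helper names that are not part of the chapter's application, a Python analogue of the "On Error GoTo FisierInexistent" pattern would be:

```python
from pathlib import Path

def save_part(path_text, save_enabled, save_as):
    """Try to save the generated part; on a bad path, skip the save
    instead of crashing -- mirroring the VBA error handler."""
    if not save_enabled:
        return False
    full_name = Path(path_text) / "Profilare.CATPart"
    try:
        save_as(full_name)        # stand-in for fisier.SaveAs
        return True
    except OSError:
        return False              # FisierInexistent: clear the error, continue
```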
The calculation for the chosen tool type can then be made. The user must have previously created the profile to be generated, so that it can be selected in the next step. The application recognizes line profiles, circular arcs and spline curves.
An excerpt from the profile selection script is shown below.
...
Set selectie = fisierPiesa.Selection
selectie.Clear
ReDim strArray(2)
strArray(0) = "Line"
strArray(1) = "HybridShapeCircle"
strArray(2) = "HybridShapeCurveExplicit"
Dim sStatus As String
sStatus = selectie.SelectElement2(strArray, "Select piece's profile", True)
frmNrPctDisc.Show vbModal
...
Note: the profile to be generated must have been previously created as a wireframe element. In particular, a spline curve, introduced with the idea of profiling curves known in discrete form, must have been isolated before its selection.
Immediately after selecting the profile, a form opens that allows choosing the number of points into which the profile to be generated is discretized, i.e. the number of points in which the calculation is made.
The appearance of this form is shown in Fig. 11.
By default, the discretization value is 10 points, but this number can be changed by the user. However, it should be noted that as the number of points increases, the time required to calculate the tool profile also increases.
After choosing the discretization value, the calculation starts. A portion of the tool profile calculation script is shown below.
The variables inc and poz, of real type, establish the increment with which the current point moves along the profile curve to be generated. The current point is obtained as a point on the curve, positioned at a certain percentage of the curve's length.
...
Dim inc As Double, poz As Double
Dim numarPunct As Integer
inc = 1 / frmNrPctDisc.tbNrPcte.Value
poz = 0
numarPunct = 1
’ Profile calculus
Do While poz <= 1
Set pctCurent = puncte.AddNewPointOnCurveFromPercent(referintaSigma, poz, False)
pctCurent.Name = "Point.Current"
Set setCurent = hybridBodies1.Item("ContactCurve")
setCurent.AppendHybridShape pctCurent
part1.Update
Set referintaCurent = part1.CreateReferenceFromObject(pctCurent)
At the current point, the normal to the profile, the "normala" line, is generated. Its length is 10 mm (the parameter 10# of the "AddNewLineAngle" statement below).
Dim normala As HybridShapeLineAngle
Set normala = linii.AddNewLineAngle(referintaSigma, referintaPlSuport, referintaCurent, False, 0#, 10#, 90#, False)
corp.InsertHybridShape normala
normala.Name = "Line.normala"
...
For the rack tool, the centrode of the workpiece, in this case the "Circle.C1" circle, is chosen. This element is to be intersected with the normal to the profile.
’ For rack gear tool
Set normala = forme.Item("Line.normala")
Dim referintaN As Reference
Set referintaN = part1.CreateReferenceFromObject(normala)
Dim centroida As HybridShapeCircleCtrRad
Set centroida = forme.Item("Circle.C1")
Dim referintaC1 As Reference
Set referintaC1 = part1.CreateReferenceFromObject(centroida)
Because it is difficult to assess the length needed for the normal to ensure the intersection with the centrode, when the intersection was created we chose the option to extend the normal up to the intersection with the circle (intersectie.ExtendMode = 1).
setCurent.AppendHybridShape intersectie
part1.InWorkObject = intersectie
part1.Update
The intersection obtained represents the virtual pole of the gearing. Since, by intersecting the normal with the circle-type centrode, it is possible to obtain two points of intersection, it is necessary to choose between these two points. The selection criterion is the distance from the current point, the intersection point closest to the current point being chosen.
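The "Near" choice is simply a nearest-point selection. A plain-Python stand-in (hypothetical, outside CATIA) for what "AddNewNear" does here:

```python
import math

def nearest_point(candidates, current):
    """Pick, among the intersection points, the one closest to the
    current point M -- the same criterion CATIA's AddNewNear applies."""
    return min(candidates, key=lambda p: math.dist(p, current))

# the normal through M can cut the circular centrode in two points:
pv = nearest_point([(-49.75, 5.0), (49.75, 5.0)], current=(-45.0, 5.0))
# pv is (-49.75, 5.0), the intersection on the same side as M
```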
Set polVirtual = linii.AddNewNear(referintaInters, referintaCurent)
polVirtual.Name = "Near.Pv"
setCurent.AppendHybridShape polVirtual
part1.InWorkObject = polVirtual
part1.Update
To determine the angle between the position vector of the virtual pole and the axis of the coordinate system, an auxiliary point is introduced, named "pctMasPv" in the script.
Dim pctMasPv As HybridShapePointCoord
Set pctMasPv = linii.AddNewPointCoord(0#, 0#, 0#)
Dim referintaPctOrig As Reference
Set referintaPctOrig = part1.CreateReferenceFromObject(polVirtual)
pctMasPv.PtRef = referintaPctOrig
setCurent.AppendHybridShape pctMasPv
part1.InWorkObject = pctMasPv
part1.Update
Dim punct As Variant
Dim coordonate(2) As Variant
Set punct = pctMasPv
The auxiliary point coordinates are extracted using the "GetCoordinates" method and stored in two real variables, "Xcoord" and "Ycoord". Using these two variables, the value of the angle by which the part must be rotated so that the virtual pole overlaps the gear pole is calculated. The value of this angle is stored in the real variable "angle".
punct.GetCoordinates coordonate
Xcoord = CDbl(coordonate(0))
Ycoord = CDbl(coordonate(1))
'Point on the contact curve
angle = Atn(Ycoord / Xcoord)
Set punct = pctCurent
punct.GetCoordinates coordonate
XpctCurent = CDbl(coordonate(0))
YpctCurent = CDbl(coordonate(1))
Using the equations of absolute motion of the part (37), the coordinates of the current point in the fixed reference system are determined at the moment when the virtual pole coincides with the gear pole. These coordinates are stored in the real variables "xFix" and "yFix".
xFix=XpctCurent * Cos(angle) + YpctCurent * Sin(angle)
yFix=-XpctCurent * Sin(angle) + YpctCurent * Cos(angle)
coordonate(0) = xFix
coordonate(1) = yFix
A point is inserted at the calculated coordinates "xFix" and "yFix". The z coordinate of this point has always been considered 0.
Dim pctContact As HybridShapePointCoord
Set pctContact = linii.AddNewPointCoord(coordonate(0), coordonate(1), coordonate(2))
setCurent.AppendHybridShape pctContact
part1.InWorkObject = pctContact
part1.Update
The contact point has been inserted in the previously created "ContactCurve" geometric set, and the point name is changed to "Point.Cc#", where "#" represents the point index in the set. Thus, the points "Point.Cc1", "Point.Cc2", etc., are obtained.
pctContact.Name = "Point.Cc" & numarPunct
By applying Eq. (41), the coordinates of the contact point in the tool system can
be determined, the values of these coordinates being stored in the real variables “csi”
and “ita”.
’Point onto tool’s profile
csi = xFix + Rrp.Value
ita = yFix - Rrp.Value * angle
coordonate(0) = csi
coordonate(1) = ita
Dim pctScula As HybridShapePointCoord
Set pctScula = linii.AddNewPointCoord(coordonate(0), coordonate(1), coordonate(2))
pctScula.PtRef = referintaOrigine
pctScula.RefAxisSystem = referintaSR
Set setCurent = hybridBodies1.Item("Tool")
setCurent.AppendHybridShape pctScula
The point calculated on the tool profile has been inserted in the geometric set
“Tool” created previously. For that point, the reference system “SR reference”
originating in point O1 was considered as the coordinate system.
Like the points on the contact curve, the names of the points on the generating profile are changed to the format "Point.S#", where "#" corresponds to the serial number of the point in the geometric set "Tool".
part1.InWorkObject = pctScula
part1.Update
Figure 12 shows the result of running the program for profiling the rack tool
intended to generate a straight segment profile.
The differences from the generation with the rack tool appear in the absolute movements performed by the part and the tool, respectively.
Equation (48) is used for the movement of the part, resulting in the following way of calculating the coordinates of the contact point in the fixed reference system:
...
’Point on the contact curve
angle = Atn(Ycoord / Xcoord)
Set punct = pctCurent
punct.GetCoordinates coordonate
XpctCurent = CDbl(coordonate(0))
YpctCurent = CDbl(coordonate(1))
xFix=XpctCurent * Cos(angle) + YpctCurent * Sin(angle)
yFix=-XpctCurent * Sin(angle) + YpctCurent * Cos(angle)
coordonate(0) = xFix
coordonate(1) = yFix
Dim pctContact As HybridShapePointCoord
Set pctContact = linii.AddNewPointCoord(coordonate(0), coordonate(1), coordonate(2))
setCurent.AppendHybridShape pctContact
part1.InWorkObject = pctContact
part1.Update
pctContact.Name = "Point.Cc" & numarPunct
The observations regarding the geometric set in which the point is inserted and
the change of the name of the respective point are similar to those in the case of the
generation with the rack tool.
The absolute motion of the tool is given by Eq. (53), resulting in the corresponding
modification of the script.
...
'Point on the tool's profile
x0Fix = xFix + A12.Value
y0Fix = yFix
csi = x0Fix * Cos(unghi * rapTransm * semn) + y0Fix * Sin(unghi * rapTransm * semn)
ita = -x0Fix * Sin(unghi * rapTransm * semn) + y0Fix * Cos(unghi * rapTransm * semn)
coordonate(0) = csi
coordonate(1) = ita
Dim pctScula As HybridShapePointCoord
Set pctScula = linii.AddNewPointCoord(coordonate(0), coordonate(1), coordonate(2))
pctScula.PtRef = referintaOrigine
pctScula.RefAxisSystem = referintaSR
Set setCurent = hybridBodies1.Item("Tool")
setCurent.AppendHybridShape pctScula
part1.InWorkObject = pctScula
part1.Update
pctScula.Name = "Point.S" & numarPunct
The observations regarding the geometric set in which the point is inserted and
the change of the name of the respective point are similar to those in the case of the
generation with the rack tool.
Figure 13 shows the result of running the program to generate a circular arc profile.
In the case of the rotary cutter tool, the centrode of the workpiece is a straight line, and the centrode of the tool is a circle.
The intersection of the normal with the centrode of the part will result in only one point; therefore, attempting to choose the virtual pole as the result of a selection based on the "Near" option would lead to a calculation error.
The script must be modified as follows.
Fig. 13 Generating a circle arc profile with the gear shaped tool
...
polVirtual.Name = "Intersect.Pv"
Dim referintaInters As Reference
Set referintaInters = part1.CreateReferenceFromObject(polVirtual)
setCurent.AppendHybridShape polVirtual
part1.InWorkObject = polVirtual
part1.Update
...
Equation (62) is used for the movement of the part, resulting in the following way of calculating the coordinates of the contact point in the fixed reference system.
In this case, the parameter that defines the movement of the part is a linear displacement parameter, denoted below by "delta". Depending on its value, the value of the tool rotation parameter, "angle", is calculated.
...
’Point on the contact curve
delta = Ycoord
angle = delta / Rrs.Value
Set punct = pctCurent
punct.GetCoordinates coordonate
XpctCurent = CDbl(coordonate(0))
YpctCurent = CDbl(coordonate(1))
xFix = XpctCurent
...
The observations concerning the geometric set in which the point is inserted and
the change of the name of that point are similar to those in the cases presented above.
The absolute motion of the tool is given by Eq. (66), resulting in the corresponding
modification of the script.
...
’Point on the tool’s profile
csi = xFix * Cos(angle) + yFix * Sin(angle)
ita = -xFix * Sin(angle) + yFix * Cos(angle)
coordonate(0) = csi
coordonate(1) = ita
Dim pctScula As HybridShapePointCoord
Set pctScula = linii.AddNewPointCoord(coordonate(0),
coordonate(1), coordonate(2))
pctScula.PtRef = referintaOrigScula
pctScula.RefAxisSystem = referintaSS
Set setCurent = hybridBodies1.Item("Tool")
setCurent.AppendHybridShape pctScula
part1.InWorkObject = pctScula
part1.Update
pctScula.Name = "Point.S" & numarPunct
The observations concerning the geometric set in which the point is inserted and the change of the name of that point are similar to those in the previous cases. Figure 14 presents the generated profile, given in discrete form, and the generating profile.
So that the generated auxiliary elements do not unnecessarily load the drawing
and the Part file, it was decided to delete them when they are no longer needed.
This operation is performed at the end of the calculation step for each tool, the
area of the script that ensures the deletion being detailed below.
...
'Deleting the auxiliary elements
Dim deSters As Selection
Set deSters = fisierPiesa.Selection
deSters.Clear
deSters.Add normala
deSters.Add polVirtual
deSters.Add pctCurent
deSters.Add pctMasPv
deSters.Delete
Fig. 14 Generating a known profile in discrete form, using the rotary cutter tool
The steps presented for each tool are repeated until the current point traverses the entire profile.
If desired, the points calculated on the contact curve or on the tool profile can be exported to an Excel file, using the "ModuleExportCoord" module.
The script for this module is shown below.
Sub CATMain()
Dim oExcel
Set oExcel = CreateObject("Excel.Application")
Dim fisier As PartDocument
Set fisier = CATIA.ActiveDocument
Dim reper As Part
Set reper = fisier.Part
Dim corp As Body
Set corp = reper.Bodies.Item("PartBody")
Set selectie = fisier.Selection
selectie.Clear
ReDim strArray(0)
strArray(0) = "Point"
Dim sStatus As String
MsgBox "Select points!"
sStatus = selectie.SelectElement3(strArray, "Select points!", False, CATMultiSelTriggWhenUserValidatesSelection, True)
Dim punct As Object
Dim coordonate(2) As Variant
oExcel.Visible = True
oExcel.workbooks.Add
oExcel.Cells(1, 1) = "Start"
Dim iCount As Integer
iCount = selectie.Count
Dim oK As Boolean
Dim nrPunct As Integer
nrPunct = 0
If iCount > 0 Then
For i = 1 To iCount
If selectie.Item(i).Type = "Point" Then
nrPunct = nrPunct + 1
Set punct = selectie.Item(i)
punct.Value.GetCoordinates (coordonate)
oExcel.Cells(i + 1, 1) = coordonate(0)
oExcel.Cells(i + 1, 2) = coordonate(1)
oExcel.Cells(i + 1, 3) = coordonate(2)
End If
Next
oK = True
Else
MsgBox "No selected points!"
oK = False
End If
selectie.Clear
oExcel.Cells(i + 1, 1) = "End"
End Sub
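For readers without Excel automation available, the same export layout can be sketched with a plain CSV file. The Python fragment below is an illustrative equivalent of the module's output ("Start" marker, one x, y, z row per point, "End" marker); it is not part of the original application.

```python
import csv

def export_points(points, path):
    """Write the selected point coordinates in the same layout the
    VBA module produces in the Excel sheet."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Start"])
        for x, y, z in points:
            writer.writerow([x, y, z])
        writer.writerow(["End"])
```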
3 Conclusions
In this chapter, a new complementary method for profiling tools such as the rack-gear, gear shaped or rotary cutter tools was presented. This method is called the "virtual pole" method. For the above-mentioned types of tools, the method's application algorithm was devised and in-house calculus programs were developed, which allow the numerical determination of the profile. The "virtual pole" is defined as the intersection point between the normal to the profile to be generated and the centrode associated with the piece.
The method can be applied for profiling tools which generate by enwrapping, by the rolling method, and is based on a re-interpretation of the Willis theorem. This states that, for the Willis theorem to be satisfied, when a normal to the generated profile passes through the gearing pole, that normal belongs both to the generated and to the generating profile.
This variant for determining the enwrapping condition has the advantage of avoiding the need to write, in explicit form, the equations of the relative movement of the piece. At the same time, the enwrapping condition remains rigorous from the mathematical point of view.
Consequently, in the "virtual pole" method, three curves are simultaneously in contact: the generated profile, the generating profile and the contact curve. By definition, the contact curve is the geometrical locus of the tangency points between the two mentioned profiles. The coordinates of the contact point depend on the absolute movements performed by the tool and the piece. In this way, we can determine, in succession, the points on the tool's profile which will generate known points on the piece's profile. Using the "virtual pole" method, the calculus needed for profiling the above-mentioned tool types is simplified, keeping in mind that determining the relative movements between tool and piece can be a source of major errors.
We must highlight that the new algorithm does not eliminate the influence of the relative motions on the profiling process; it only avoids the need to write these movements explicitly.
Index

C
CAD-based, 159, 160
Caliper, 91, 93, 95, 99–104
CNC, 21, 50, 54–56, 58, 59, 62, 65, 68, 69, 110, 111, 141, 143, 149, 151, 155, 168
Collision, 167, 168, 171, 180, 182–184, 186, 193, 201, 202
Computational design, 1–3, 6, 8, 13, 15, 45, 46, 49–51, 55, 57–59, 61, 68, 69
Computer Aided Design (CAD), 46, 101, 143–146, 148, 149, 151, 155, 157, 161, 205–209, 212–215, 219
Customization, 1–3, 5, 8, 11, 21, 34, 35, 206

D
2D model, 157, 206
3D model, 52, 143, 167, 176, 183, 184, 187, 190, 192, 198, 201, 202, 205–207, 209, 211
Design experiment, 107, 109, 111, 112, 114–117
Design for Assembly (DfA), 168–170, 176, 201

E
Enwrapping, 217–219, 222, 223, 225–227, 229, 233, 236, 240, 258
Equation driven, 144, 146

F
Fabrication, 21, 22, 45–51, 54–57, 58–61, 64, 65, 68, 69, 92, 108–111, 113, 115, 117, 197
Fabrication framework, 107–111, 116
Feature fingertips, 84
Finite Element Analysis (FEA), 91, 93–95, 101–104
Freeform parts, 146

G
Games, 2–5, 8–10, 12, 13, 20, 25–32, 35
Gamified design, 6, 9
Generative design, 2–6, 8–15, 20, 23–27, 29, 30, 32–36, 93
Gesture, 74–76, 79–82, 84, 87, 88
Grammatical design, 6, 9, 10, 12
Graphical User Interface (GUI), 206

I
Industry 4.0, 46, 167, 191
Interferences, 168, 171, 183, 184, 186
Interlocking, 107, 108, 110–112, 114–117

K
Knowledge-Based Design (KBD), 122, 123, 137
Knowledge-Based System (KBS), 122, 123, 137
Knowledge management, 131–136, 138

M
Machine vision, 73
Mechatronics, 73, 87, 88
Micro milling, 141–143, 154, 156
Milling, 50, 54–56, 58, 59, 62, 65, 68, 69, 142, 159–162, 165, 193, 194, 196

O
Optimization, 1–3, 5, 6, 8–11, 13, 15–17, 19, 25, 34, 35, 91, 93, 94, 96, 97, 99, 102–104, 119, 125, 142, 157, 160

P
Parametric curves, 144, 146, 147, 149, 157
Polysurface model, 57, 58
Product conceptual design, 119–134, 136–138
Prototyping, 45, 50, 198

Q
Quality Function Deployment (QFD), 119, 124, 127, 137, 138

R
Reasoning, 6, 7, 12, 26, 119, 123, 127–130, 132, 134, 137, 138
Robotic, 48, 49, 74, 75, 84, 86, 87, 89, 107–113, 115–117, 187, 188, 201
Roughness, 142, 154, 155, 160, 161, 164, 198

S
Segmentation, 8, 80
Shape optimization, 1, 3, 5, 9, 10, 12–15
Simulation, 3, 4, 7, 8, 13, 16, 19, 25–30, 32, 35, 54, 55, 100, 102, 110–115, 151, 159–166, 182, 187, 188, 190–193, 195, 196, 202, 205, 206
Surface, 47, 53, 57–59, 62, 65, 67, 68, 98, 99, 101, 141–143, 149, 152–157, 159–161, 163–166, 170, 175, 177–179, 181, 182, 184, 192, 193, 196, 197, 199, 217–220, 223–226, 231, 235, 236, 238

T
Tool profile, 247, 250, 251, 255
Topography, 159, 161, 164, 165
Topology, 1, 3–5, 8–10, 12–24, 28, 91–100, 102–104, 117, 127

V
Variable feedrate, 143, 146, 147, 152, 156, 157
Virtual Engineering (VE), 168, 190, 191, 201, 202
Visual Basic (VB), 144, 146, 148
Visual Basic for Applications (VBA), 218, 219, 241
Visual Studio, 205

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
P. Kyratsis et al. (eds.), Computational Design and Digital Manufacturing, Management and Industrial Engineering, https://doi.org/10.1007/978-3-031-21167-6