
Meshing, Geometric Modeling and Numerical Simulation 3

Geometric Modeling and Applications Set


coordinated by
Marc Daniel

Volume 4

Meshing, Geometric Modeling and Numerical Simulation 3

Storage, Visualization and In Memory Strategies

Paul Louis George


Frédéric Alauzet
Adrien Loseille
Loïc Maréchal
First published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com

© ISTE Ltd 2020


The rights of Paul Louis George, Frédéric Alauzet, Adrien Loseille and Loïc Maréchal to be identified as
the authors of this work have been asserted by them in accordance with the Copyright, Designs and
Patents Act 1988.

Library of Congress Control Number: 2020942932

British Library Cataloguing-in-Publication Data


A CIP record for this book is available from the British Library
ISBN 978-1-78630-609-8
Contents

Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Chapter 1. Data and Basic Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1. Basic data structures and basic techniques . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1. Basic data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2. Basic techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.2. Internal data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.2.1. What should be stored in an internal structure and how? . . . . . . . . . . . . 29
1.2.2. Internal structures, method by method . . . . . . . . . . . . . . . . . . . . . . 31
1.3. External data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.4. Data structures and memory access . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Chapter 2. Mesh Transformations, Patching, Merging and Immersion . . . . . . . . 39


2.1. Geometric transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.1.1. Conventional geometric transformations . . . . . . . . . . . . . . . . . . . . . 40
2.1.2. Cutting a non-simplicial element into simplices . . . . . . . . . . . . . . . . 46
2.1.3. Decomposition into simplices of a non-simplicial mesh . . . . . . . . . . . . 48
2.1.4. Decompositions for a complying connection . . . . . . . . . . . . . . . . . . 56
2.1.5. Decomposition of a high-degree element . . . . . . . . . . . . . . . . . . . . 57
2.2. Reconnection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.3. Merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.4. Immersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

Chapter 3. Renumbering and Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . 81


3.1. Vertex and node renumbering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.1.1. Numbering and storage of a matrix in profile mode . . . . . . . . . . . . . . 82
3.1.2. Numbering and algorithm performance . . . . . . . . . . . . . . . . . . . . . 84
3.1.3. Some methods for node renumbering . . . . . . . . . . . . . . . . . . . . . . 86
3.2. Renumbering of the elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

3.2.1. Motivation examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92


3.2.2. Some methods for renumbering elements . . . . . . . . . . . . . . . . . . . . 92
3.2.3. Renumbering and mesh partition . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3. Some examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

Chapter 4. High-Degree Mesh Visualization . . . . . . . . . . . . . . . . . . . . . . . . 105


4.1. Geometric operators and topological operators . . . . . . . . . . . . . . . . . . . . 106
4.1.1. Geometric operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.1.2. Topological operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2. Representation of curved meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.3. Quick introduction to OpenGL and to the design of a graphics software program . 116
4.4. Some examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Chapter 5. Visualization of a Solution Field Related to a High-Degree Mesh . . . . . 145


5.1. Element recursive subdivision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
5.2. Recursive subdivision of a solution field . . . . . . . . . . . . . . . . . . . . . . . . 152
5.3. Classic or adaptive tessellation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.4. Toward the design of graphic software based on OpenGL . . . . . . . . . . . . . . . 156
5.4.1. Palette definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.4.2. Cut definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.4.3. “Pixel-exact” or “almost pixel-exact” representation . . . . . . . . . . . . . . 169
5.4.4. Normals and shading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5.4.5. Level lines and surfaces and application to “wireframe” plots . . . . . . . . . 171
5.4.6. Representation of non-scalar functions . . . . . . . . . . . . . . . . . . . . . . 176
5.4.7. Simplified scheme for a graphic software program . . . . . . . . . . . . . . . 177
5.5. Some examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

Chapter 6. Meshes and Finite Element Calculations . . . . . . . . . . . . . . . . . . . . 185


6.1. From continuous formulation to discrete notation . . . . . . . . . . . . . . . . . . . 187
6.2. Calculation of an elementary matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 189
6.2.1. The special case of the first-degree triangle . . . . . . . . . . . . . . . . . . . 190
6.2.2. A generic notation for all elements . . . . . . . . . . . . . . . . . . . . . . . . 191
6.2.3. The generic notation for the four chosen elements and heat equation . . . . . 197
6.2.4. Lagrange triangle of degree 1 with three nodes . . . . . . . . . . . . . . . . . 197
6.2.4.1. Lagrange quadrilateral of degree 1 × 1 with four nodes . . . . . . . . . 201
6.2.4.2. Straight-sided Lagrange triangle of degree 2 with six nodes . . . . . . . 209
6.2.4.3. Lagrange isoparametric (curved) triangle of degree 2 with six nodes . . 218
6.2.4.4. In practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6.2.5. The generic notation for the four chosen elements, elasticity equation . . . . 233
6.2.6. Lagrange triangle of degree 1 with three nodes . . . . . . . . . . . . . . . . . 233
6.2.6.1. The other three elements . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.2.6.2. In practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.3. Matrix or right-hand side assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Chapter 7. Meshes and Finite Volume Calculation . . . . . . . . . . . . . . . . . . . . . 243


7.1. Presentation of the finite volume method with a first-order problem . . . . . . . . 243
7.1.1. Time discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.1.2. Spatial discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
7.2. Finite volume methods for two-dimensional Euler equations . . . . . . . . . . . . 249
7.2.1. Spatial discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
7.2.1.1. Finite volume cell definition . . . . . . . . . . . . . . . . . . . . . . . . 251
7.2.1.2. Calculation of upwind conservative fluxes . . . . . . . . . . . . . . . . 252
7.2.2. Time discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
7.3. From theory to practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.3.1. Data structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
7.3.2. Resolution algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
7.4. Numerical examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260

Chapter 8. Examples Through Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . 265


8.1. Reading, writing and manipulating a mesh . . . . . . . . . . . . . . . . . . . . . . 266
8.2. Programming a hashing algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
8.3. One point insertion operator per cavity, application to image compression . . . . . 272
8.4. Retrieving a connected component . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
8.5. Exercises on metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

Chapter 9. Some Algorithms and Formulas . . . . . . . . . . . . . . . . . . . . . . . . . 293


9.1. Bernstein polynomials and Bézier forms . . . . . . . . . . . . . . . . . . . . . . . . 293
9.1.1. Bernstein polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
9.1.2. Bézier forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
9.1.3. Formulas (lengths, surface areas and volumes) for curved elements . . . . . 296
9.2. Localization problems in a curved mesh . . . . . . . . . . . . . . . . . . . . . . . . 301
9.2.1. Current point parameter values . . . . . . . . . . . . . . . . . . . . . . . . . . 301
9.2.2. Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
9.3. Space-filling curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
9.3.1. A Z-curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
9.3.2. A Hilbert curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

Conclusion and Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Foreword

The “Geometric Modeling and Applications” set is made up of five volumes of research on
geometric modeling. The act of geometric modeling is an age-old one. To look at a few key
points in its history: geometers in antiquity devoted themselves to it, though, of course, it had
a different form from that which we know today; Descartes, in the mid-17th Century, worked
on partitioning of planes; and Voronoi established concepts relating to this toward the end of the
19th Century.

But geometric modeling, as we understand it now, can be considered to be linked to the use of
computers, especially since this tool has become ubiquitous. We are tempted to fix the scientific
recognition of this activity to the date of the “First Conference on Computer Aided Geometric
Design”, which took place in March 1974 at the University of Utah. This was organized by R.
Barnhill and R. Riesenfeld, whose presentations may be found in the book “Computer Aided
Geometric Design, Academic Press, 1974”, which clearly associates geometric modeling with
the computer.

Since then, many detailed and high-quality books have been published in this field. One of
the pioneering works was, undoubtedly, that authored by I.D. Faux and M.J. Pratt, “Computational
Geometry for Design and Manufacture, Ellis Horwood Publishers, 1979”. Some books went
through several editions, taking note of the advances made in research in the field. In this context,
bringing out a new general book on the subject would mark no significant scientific progress.

Moreover, for any object or entity that is to be modeled, there is no single geometric model.
Instead, there are several geometric models, each being adapted to the processes to be carried
out. For example, the model to finely analyze the shape of an object is necessarily different
from models that are used for mechanical computations on this same object. We have, therefore,
chosen to focus on specific points of research in fields that we find particularly important. Thus,
the five references given here, written by specialists, bring you up to date on advances in research
in five domains:
• “Constraints and geometric modeling”, by Dominique Michelucci, Pascal Schreck and
Pascal Mathis, looks at geometric problems that can be approached by describing the constraints,
geometric or more complex, that a solution must verify, rather than directly giving the geometric
definition of such a solution.

• “Geometric modeling of fractal forms for CAD”, by Christian Gentil, Dmitry Sokolov and
Gilles Gouaty, discusses the definition of fractal forms: this formalism makes it possible to reuse
the smooth forms of today’s classic geometric models to define forms that have not been
explored until now.
• “Meshing, geometric modeling and numerical simulation”, in three volumes, by
Houman Borouchaki, Paul Louis George, Frédéric Alauzet, Patrick Laug, Adrien Loseille and
Loïc Maréchal, details advanced methodologies for the construction of meshes and geometric
modeling for numerical simulation with applications, especially in mechanics.
• “Geometric and Topological Mesh Feature Extraction for 3D Shape Analysis”, by
Jean-Luc Mari, Franck Hétroy-Wheeler and Gérard Subsol, studies the analysis of three-
dimensional-meshed shapes via the extraction of characteristics, both geometric and topological.
The fields of application are also detailed to illustrate the transdisciplinary nature of this work.
• “Relevant Triangulations and Samplings for 3D Shapes”, by Raphaëlle Chaine and Julie
Digne, discusses sampling, meshing and compression of free forms from a large quantity of real
data from a virtual, interactive deformation process.

These are books that present recent research. The reader can find, in the general books on
geometric modeling that were mentioned before, elements to fill any gaps they may have and to
enable them to read the books in this series.

Happy reading.
Marc DANIEL
Introduction

Triangulations and, more precisely, meshes, with the subtle difference that separates these
two entities, lie at the heart of any number of problems that arise from varied scientific disci-
plines. A triangulation or a mesh is a discrete representation, using simple geometric elements
(triangle, quadrilateral, tetrahedron, etc., any arbitrary polygon or polyhedron), of a domain that
may be an object or a region of which we want a discrete, spatial description. There are, thus,
many applications, including numerical simulations of any kind of physical problem, though not
restricted to these. In particular, a discrete representation of a (volume) object or a surface may
simply be seen as a geometric modeling problem as is. This book adopts a double point of view,
as indicated by its title, and we will look both at the use of meshes in numerical simulations (the
finite element method, especially) with, of course, the underlying constraints, as well as the use
of these meshes for the (discrete) modeling of geometry.

The literature on triangulations and meshes may be classified in two chief categories: one
more purely mathematical and geometric, the other more oriented toward industrial applications
(numerical simulations) with, of course, though not always, relations between these categories.

The first point of view is covered by the computational geometry community, which stud-
ies (among others) Delaunay triangulations from every angle: definitions, properties, construction al-
gorithms and the complexity of these algorithms. They also study some applications of these
triangulations. Nonetheless, relatively recently, mesh generation problems have also begun to be stud-
ied, but from a more theoretical angle, generally relating to situations that allow for the use of
Delaunay triangulations, for which robust construction methods as well as interesting geometric
properties have been known for a long time. This (somewhat monotheistic) philosophy necessar-
ily imposes limits on the nature of the problems that can be worked on. The first reference book on
these subjects is that of Preparata and Shamos [Preparata, Shamos-1985] published in 1985. This
was followed by several others, among which we cite two by Edelsbrunner [Edelsbrunner-1987]
and [Edelsbrunner-2001], that of Boissonnat and Yvinec [Boissonnat, Yvinec-1997], by Dey
[Dey-2007] and winding up the list with the book by Cheng et al. [Cheng et al. 2012], which
was published in 2012. With a few exceptions, the orientation chosen by these references is not
always guided by the preoccupations of mathematicians, engineers and the world of numerical
simulation in general.

It is this very need for numerical simulations, in particular (and historically) in solid or
fluid mechanics, that has led to the emergence of a “meshing” community among mathemati-
cians and engineers. Without fear of contradiction, we can state that the very first book on
meshes that was seen from this point of view is that of Thompson et al. [Thompson et al.
1985], which was also published in 1985. This book essentially discusses structured meshes
or grids1 while the first truly generalist book, discussing all types of meshes, structured or not,
is that by George [George-1991], which dates back to 1991. Over the years and with the evo-
lution of computers and methods as well as the increasingly complex nature of the meshes to
consider, other books have been published. Let us mention books by George and Borouchaki
[George, Borouchaki-1998], published in 1997, which revisits meshing methods based on De-
launay algorithms; by Carey [Carey-1997], also in 1997, which gives a very pedagogic pre-
sentation of methods; by Topping and co-authors [Topping et al. 2004], in 2004, cited in order
to be exhaustive; by Frey and George [Frey, George-2008], in 2000, with a second edition in
2008, which returns to the general view and incorporates newly appeared techniques; by Löhner
[Löhner-2008], in 2002, with a second edition that came out in 2008, which abounds in inno-
vative details; up to the recent book by Lo [Lo-2015], published in 2015, which, among others,
sheds new light on some problems. In this context, we may wonder what motivated the writing
of this book. We will attempt to answer this question (and also try, by doing so, to arouse the
reader’s curiosity).

The first observation is that the few books on this subject are already either a little old or rather
classic in the way they discuss the approaches and the methods concerning the many questions
linked to meshes. The second remark is that, evidently, new questions have appeared relatively
recently, e.g. everything related to (finite) elements or patches of higher degrees, metrics and their
links to interpolation errors, geometric modeling by meshes, numerical simulation via advanced
methods (adaptation of meshes) and so on. This remark immediately calls forth another: we
will resolutely adopt a double point of view by considering finite elements (or the constitutive
elements of a mesh) as geometric patches and vice versa, thus establishing a link between the
Lagrange world of finite elements and the Bézier world of CAD. This choice, as we will see,
noticeably changes the manner in which we perceive the problems and, consequently, the manner
in which we try to find solutions. In this spirit, this book in three volumes, beyond a set of
subjects that we may qualify as classic (and, thus, essential), approaches many subjects that are
very recent or even completely novel. Moreover, and this may be more surprising (and certainly
rare), some methods (which we find in the literature) have been deliberately left out as they were
deemed to not be of great interest and, a contrario, some methods are described in a manner
that is, at the very least, nuanced and even critical. Indeed, it seemed pertinent to share our
perception, even though this is subjective, up to a point, of these methods, perhaps to prevent
the reader from going down paths that we think are dead ends. Having specified this, this book
provides the necessary basis for a course on meshes and related problems at the masters level
and it may serve as a technical reference for engineering students in science and, more generally,
engineers using numerical simulations. We will give a brief description of the content of the
chapters in both volumes.

1. The grid-type meshes are generated by solving partial differential equations, for example elliptic equa-
tions. It must be noted that other, more recent books on this theme followed this pioneering work, but have
not been mentioned here.

 Volume 1:

We first give a chapter-wise overview of the first volume of this book. In the first three
chapters, this volume introduces the basic concepts related to finite elements seen through their
shape functions either as finite elements or as patches. In addition to the classic expression
via Lagrange polynomials (complete, reduced or rational) (Chapter 1), we give the equivalent
formulation based on Bézier forms (Chapter 2), which makes it possible to easily find the con-
ditions for the geometric validity of these finite elements (or patches) whether they are straight
or curved and whatever their degree (Chapter 3). Triangulation problems are the subject of the
next three chapters, where we specify vocabulary and the basic concepts that make it possible to
describe the different construction methods for any triangulation (Chapter 4), Delaunay triangu-
lation (Chapter 5) as well as the triangulation (of a domain) with constraints (Chapter 6). In the
next two chapters, we use the concepts introduced earlier to discuss the vast problem of geomet-
ric modeling of a domain in its various aspects. Geometric modeling, from our point of view
(meshes and numerical simulations), consists of constructing a discrete representation from a
continuous representation and vice versa. Different methods are described in Chapter 7, while
Chapter 8 gives several significant examples for the application of these methods. Chapter 9
brings together some basic algorithms and formulas related to finite elements (patches) and tri-
angulations, to persuade the reader to go beyond a theoretical viewpoint, to a practical viewpoint,
which we believe is essential.

 Volume 2:

We now describe the contents of Volume 2 of this book, for which four new authors have joined
us. This volume (finally) approaches mesh problems and begins with a detailed description of
the concept of metric, a concept which will be seen to be fundamental in all that follows. The
first two chapters, Chapters 1 and 2, thus focus on metrics. We introduce the concept of metric,
we demonstrate their properties and their relations with the geometry of elements of a mesh
and how to control interpolation errors during the resolution of a problem (by finite elements).
The construction methods of meshes and their optimization are the focus of the following five
chapters: Chapter 3 (mesh for a curve), Chapter 4 (simplicial meshes), Chapter 5 (non-simplicial
meshes), Chapter 6 (meshes of higher degree) and Chapter 7 (optimizing meshes). We will then
discuss the large subject of the adaptation of meshes, controlling the solutions via error estimates
and corresponding interpolation metrics in Chapter 8. The use of a certain dose of
parallelism (in its multiple forms) is the subject of Chapter 9. Chapter 10 illustrates, using
concrete examples, a series of applications of the methods and methodologies introduced and
described through the different chapters of both volumes of this book. As in Volume 1, the
final chapter, Chapter 11, gives practical observations on some of the algorithms from Volume 2,
especially how to use and manipulate metrics.

 Volume 3:

After completing the first two volumes, it seemed obvious that a third volume had
to be written: a significant number of issues remained to be addressed. Among these, some2 can
be seen as relating to basic knowledge, not necessarily as unpublished novelties, while others are
clearly in line with the dynamics of the previous volumes, involving a large degree of innovation.
Chapter 1 focuses on data structures by considering three levels, elementary structures, internal
and external structures. Chapter 2 describes the usual geometric transformations used to manip-
ulate (parts of) meshes and indicates how to quickly join (merge) two meshes. Chapter 3 revisits
the methods for renumbering the nodes of a mesh, for example to minimize the bandwidth of the
matrices (of the finite elements-type) built from such a mesh. The next two chapters, Chapters 4
and 5, discuss the broad topic of the visualization of meshes and of the solutions associated with these
meshes, with particular attention paid to high-degree meshes and solutions. The two following
chapters, Chapters 6 and 7, show how to use a mesh, how to compute and assemble a matrix
within the context of using a finite element or finite volume method. Chapter 8, based on a few
selected examples, invites readers to carry out some exercises themselves. For this purpose,
snippets of software programs will be available to allow the reader to perform these exercises.
As with the other volumes, some formulas and algorithms are listed in the last chapter, namely
Chapter 9.


∗ ∗

Acknowledgements

We are especially grateful to the then-doctoral students who willingly helped in develop-
ing this book. Nicolas Barral helped us through his expertise with Maple to solve systems that
we encountered during the construction of Serendipity elements. Loïc Frazza, Rémi Feuillet
and Julien Vanharen have helped us regarding high-order elements and finite volume methods.
Victorien Menier is the author of most of the figures that illustrate this book. We also wish
to thank Peha for his magnificent free-hand images, illustrating the impossibility of triangulat-
ing certain polyhedra without adding internal vertices. And, of course, we would also like to
thank everyone in the former common Inria3 and UTT4, Gamma3 team: researchers, engineers
and doctoral students, whose work contributed to the development and consolidation of various
subjects discussed in this book.

2. Although they have already been described in other references, such as [Frey, George-2008], they may
have undergone developments and, in any case, it would be of interest to find them here, together with the
rest of the book.
3. Institut National de Recherche en Informatique et en Automatique.
4. Université de Technologie de Troyes.
Chapter 1

Data and Basic Techniques

The concept of data structure has several meanings. Within the context of this book, three
meanings will be examined. We shall therefore examine what basic data structures are, and how
abundantly they are used in the development of any algorithm and, also, as the very components
of other higher level structures. As indicated, algorithms rely on these structures, but make ex-
tensive use of so-called basic techniques that we are also going to discuss. Internal and external
structures will also be considered. Internal structures depend specifically on the algorithm and
are only known to it, whereas external structures are used as interfaces between algorithms (soft-
ware programs) and are thus known to (and usable by) all players of the computational chain
(meshers, solvers, visualization tools, etc.). Hereafter, the way that these structures are imple-
mented is not in question, but it will be seen that, more often than not, one or more arrays (a
particularly simple structure) are employed.

Concerning basic structures, rather than giving an abstract description, which can be found
in any good reference, we shall make an effort to consider the problem from a practical point of
view, linking to questions about meshes and their utilization in numerical simulations. Within
our specific context, a good understanding of what these structures are and what they are capable
of doing when coupled with a few basic techniques can provide ideas for different contexts (in
fact, this is somewhat the purpose of the presentation, to show that the structures and techniques
used here with positive results can, with the same results, be used in other areas).

For external data structures, we shall briefly present the structures that we use to store meshes
and solution fields, indicating that there is an existing library capable of manipulating them,
accessible to all.

1.1. Basic data structures and basic techniques

These structures range from a totally basic level (a simple single-index array) to significantly
higher levels of sophistication. Operations with the structures are based on a number (quite small)
of basic techniques. The algorithms are going to be built with the best use of these structures and
techniques.

1.1.1. Basic data structures

It is virtually impossible to be exhaustive and there are numerous references on this subject,
but we thought it might be useful to describe some of the basic structures intensively used in
algorithms, starting with the simplest, the array. In a formal way, given a set of values (mainly
integers and floating-point numbers1), a data structure is an organization in the memory allowing
these values to be stored. With this organization, and according to its nature, access methods
are associated (how the sixth value contained in an array can be accessed, how the next value is
accessed, etc.) as well as manipulation methods (how a value is removed from or added to a list, etc.).

Before proceeding, let us indicate that the values stored in any such structure are often integers
that characterize objects (a point, an edge, a face or a mesh, etc.); one should not think only of
arrays of coordinates or arrays of solutions.

• Array

An array is a contiguous portion of memory in which values are stored successively one after
the other. Accessing a value is direct, via its index (single-index array) or via its indices (vectors,
matrices, etc.). In the case of a single-index array, the user sees the ith value of the array Tab,
denoted by vi, as vi = Tab(i), and the following value is given as vi+1 = Tab(i + 1). In
memory, if the array starts at address adr and if each value occupies b memory words, then vi,
the ith value, starts at address adr + (i − 1)b.
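The address formula can be checked concretely. The following sketch (ours, not from the book) uses Python’s standard array and ctypes modules to read each value back directly from its computed address adr + (i − 1)b:

```python
from array import array
import ctypes

# A simple single-index array of floating-point values.
Tab = array('d', [1.5, 2.5, 3.5, 4.5, 5.5, 6.5])

adr, n = Tab.buffer_info()   # base address and number of stored values
b = Tab.itemsize             # memory words (here, bytes) per value

# With 1-based indexing as in the text, the ith value starts at
# address adr + (i - 1) * b; read it back directly from memory.
for i in range(1, n + 1):
    v = ctypes.c_double.from_address(adr + (i - 1) * b).value
    assert v == Tab[i - 1]   # Tab(i) in the text's notation
```

The loop succeeding for every i is exactly the statement that access by index is direct: a single multiplication and addition, with no traversal.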

The operations associated with an array are very simple and only relate to the reading of a
value whose index is given, and the writing of a value V at a given index j, that is, Tab(j) = V.
The choice to carry out these readings and writings with any one such strategy will make it
possible to build more elaborate data structures (see, immediately below, stacks, queues, etc.).

Before being used, an array must be dimensioned by allocating a size to it. In what follows, it is
also necessary to know the starting index2 (denoted start) and the end index (denoted end), or
their equivalents in the case of several indices (this case will be revisited when the use of such
arrays on vector processors or in parallel is mentioned). In many cases, an upper bound of the
size can be estimated; in other cases, this is impossible and may lead to overflows3 (therefore to
more or less fatal errors) when a value is added to the array. This topic concerning sizing will
be discussed further on.

1. A set of triangles, for example, will be described, for each triangle, by a set of integers and, for the
coordinates of the vertices (nodes), by a set of floating-point numbers.
2. The first index is often set to 0 or 1 depending on the cultural habits of users and the language employed.
3. In other words: end − start + 1 > size.
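The sizing discussion above can be sketched as follows; FixedArray is a hypothetical helper (not the book's code) showing the start/end bookkeeping and the overflow condition end − start + 1 > size:

```python
# A minimal sketch (hypothetical helper, not the book's code): a preallocated
# single-index array with start/end bookkeeping and an explicit overflow
# check mirroring the condition end - start + 1 > size.
class FixedArray:
    def __init__(self, size, start=1):
        self.size = size
        self.start = start
        self.end = start - 1            # empty: no value stored yet
        self._mem = [None] * size       # contiguous storage

    def _slot(self, i):
        # the i-th value lives at offset (i - start) from the base address,
        # the analog of address adr + (i - 1)b for a 1-based array
        return i - self.start

    def get(self, i):
        return self._mem[self._slot(i)]

    def append(self, value):
        if self.end - self.start + 1 >= self.size:   # adding would overflow
            raise OverflowError("array is full, a larger allocation is needed")
        self.end += 1
        self._mem[self._slot(self.end)] = value

tab = FixedArray(size=4)
for v in (10, 20, 30):
    tab.append(v)
assert tab.get(2) == 20
```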

• List

In a list, each value is associated with one (or two) pointers, which indicate(s) the index of
the next (or previous) value or the indices of the next and previous values. This will then be
referred to as a linked or doubly linked list. In practice, a list can be seen as an array with two or
three fields (value, next link and previous link) or several arrays, one per field, which is an easier
solution. This choice is the one retained and we denote by Tab the array of values, by Prev the
array of the “previous” link and by Next that of the “next” link; then, if the ith value is at index
j, that is vi = T ab(j), we have vi−1 = T ab(P rev(j)) and vi+1 = T ab(N ext(j)).

The operations associated with a list are more elaborate and concern reads (to read a value)
and writes (to add a value), but also deletions (to "remove" a value). As before, it is assumed
that the start and end indices are known, as well as the size allocated to the list, and one sets
P rev(start) = 0 (there is no predecessor) and N ext(end) = 0 (there is no next value).

Sequentially traversing the values is no longer as trivial as in an array: starting from index j of
value vi , one writes vi+1 = T ab(N ext(j)), the process being initiated with i = start and
j = start. This traversal is used to determine whether a given value is in the current list (and,
if not, whether it should be added to it). The list is iterated through its N ext(.) link field;
if the value is found, the answer is obtained; if an index with N ext(.) = 0 is reached and its
value is not the value sought, then the value is not in the list.

To add a value, one examines the last value of the list, the one whose "next" link does not exist
(by convention, its N ext(.) is 0); let t be its index. If V is the new value to be added, it is stored
at the first free position and it is successively written that end = end + 1, N ext(t) = end,
N ext(end) = 0, P rev(end) = t, T ab(end) = V .

The reverse operation, removing a value from the list, consists of breaking its links. Let j be
the value index, it is successively written that N ext(P rev(j)) = N ext(j) and P rev(N ext(j)) =
P rev(j). The value is still there but no longer accessible when iterating through the links.

We shall see below (for example, for hashing) a clever and effective use of this list structure
(then with a single link, N ext(.) to point to the next value).

Finally, to conclude on lists, we now give an illustration (indicating the three fields, the value
or data, the two links and showing their evolution). Consider the following list (end = 8, there
are eight values):

Index 1  2  3  4  5  6  7  8
Value v1 v4 v7 v2 v3 v8 v5 v6
Prev  0  5  8  1  4  3  2  7
Next  4  7  6  5  2  0  8  3

In this structure, a value is added at the end: the new value v9 naturally comes to reside at
position 9, in other words at position end + 1:

Index 1  2  3  4  5  6  7  8  9
Value v1 v4 v7 v2 v3 v8 v5 v6 v9
Prev  0  5  8  1  4  3  2  7  6
Next  4  7  6  5  2  9  8  3  0

The value v4 is then “removed”; thus it is no longer accessible when the linked list is traversed:

Index 1  2  3  4  5  6  7  8  9
Value v1 .. v7 v2 v3 v8 v5 v6 v9
Prev  0  5  8  1  4  3  5  7  6
Next  4  7  6  5  7  9  8  3  0
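The add and remove operations just described can be sketched with three parallel arrays reproducing the illustration above (hypothetical helper code, assuming 1-based indexing as in the text):

```python
# A sketch of the doubly linked list as three parallel arrays Tab, Prev and
# Next (index 0 unused; the value 0 plays the role of "no link"). The initial
# layout is the book's illustration: logical order v1..v8, scattered storage.
Tab  = [None, "v1", "v4", "v7", "v2", "v3", "v8", "v5", "v6"] + [None] * 8
Prev = [0,    0,    5,    8,    1,    4,    3,    2,    7   ] + [0] * 8
Next = [0,    4,    7,    6,    5,    2,    0,    8,    3   ] + [0] * 8
start, end, tail = 1, 8, 6          # position 6 (v8) has Next == 0: the tail

def append(value):
    # store at the first free position and hook it after the current tail
    global end, tail
    end += 1
    Tab[end] = value
    Next[tail] = end
    Prev[end] = tail
    Next[end] = 0
    tail = end

def remove(j):
    # break the links around position j; the value remains but is unreachable
    Next[Prev[j]] = Next[j]
    Prev[Next[j]] = Prev[j]         # harmless write to Prev[0] if j is the tail

def traverse():
    out, j = [], start
    while j != 0:
        out.append(Tab[j])
        j = Next[j]
    return out

append("v9")                        # lands at position 9, after v8
remove(2)                           # position 2 holds v4
assert traverse() == ["v1", "v2", "v3", "v5", "v6", "v7", "v8", "v9"]
```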

• Stacks and queues

These structures are nothing more than arrays whose manipulation (reading and writing)
follows a particular strategy. A stack is a LIFO (last-in, first-out) structure, whereas a queue is
a FIFO (first-in, first-out) structure. In practice, what is stored in one of these structures is
integers that point to objects (points, edges, etc.), and the difference resides in
the way that the structure is filled up and emptied; in other words, on the one hand, the entities
(hidden behind the integer indices) are not processed in the same order and, on the other hand,
memory is not managed in the same way.

 For a stack, the index of the last element is conventionally not denoted end but top, as the
top of the stack. We thus have a set of integers of allocated length (size), stored from 1 to top.
Operations on this type of structure consist of extracting the value of the integer placed at index
top and of processing the corresponding object (to this integer). The process performed may
result in introducing one or more values (integers) into the stack or, conversely, not adding any
value. Depending on the case it will follow that top = top − 1 (nothing is added) or a value is
added to the index top (top is not altered) and, if further values are still to be added, they will
be inserted at index top = top + 1, while the loop continues4. The purpose is to process every
object described in the stack, and processing ends when top = 0, that is, when the stack is
empty. The steps of the operation (it is again assumed that the first index is start = 1; choosing
0 only changes the terminal condition) are thus as follows:

4. Making sure that top ≤ size.



Use of a stack [1.1]


(1) Process index top:
– do top = top − 1;
– if one or several values are to be added:
- do while: top = top + 1, insert the value at this index;
– otherwise, if top = 0, END;
– go to (1).

Let us give a simple example, a stack of 10 elements, denoted 1, ..., 10.

1 2 3 4 5 6 7 8 9 10
10 is processed and 11 and 12 are added, then:
1 2 3 4 5 6 7 8 9 11 12
12 is processed and nothing is added, then:
1 2 3 4 5 6 7 8 9 11
11 is processed and nothing is added, then:
1 2 3 4 5 6 7 8 9
etc.

In the end, 1 is processed and, if nothing is added, the operation is completed. A simple example
concerns an edge subdivision algorithm. The indices of the edges of a mesh deemed too long
(the edges themselves being described in an ad hoc structure) are stored in the stack. The
operations on the stack are then started by taking the last edge; if it is deemed too long,
it is cut in half. The length of the two edges constructed is then calculated and, according to the
decision test, we pop (top = top − 1) or push by adjusting top as required (if one edge or both
edges are to be stacked).

 For a queue, the index of the first element is obviously referred to as start and that of the
last by end (this is the end of the queue). We thus have a set of integers of allocated length (size),
stored from 1 (or 0) to end. Operations on this type of structure consist of extracting the value
of the integer placed at index start, then start + 1, etc., and of processing the corresponding
object (to this integer). The process performed may result in introducing one or more values in
the queue (integers) or, conversely, in not adding any values. The values to be added are placed
at the end, therefore at index end = end + 1, as necessary. The aim is to process all the objects
described in the queue and processing completes when the index start reaches the index end.
The steps of this operation are as follows, assuming start = 1:

Use of a queue [1.2]


(1) Process index start:
– do start = start + 1;
– if one or more values are to be added:
- do while: end = end + 1, insert the value at this index;
– otherwise, if start > end, END;
– go to (1).

Let us give a simple example, again, a queue of 10 elements, denoted 1, ..., 10.

1 2 3 4 5 6 7 8 9 10
1 is processed and 11 and 12 are added, then:
2 3 4 5 6 7 8 9 10 11 12
2 is processed and nothing is added, then:
3 4 5 6 7 8 9 10 11 12
3 is processed and nothing is added, then:
4 5 6 7 8 9 10 11 12
etc.

In the end, there is only one element left; it is processed and, if nothing else is added, the process
is complete. If we return to the previous example of the edge subdivision algorithm, the indices
of the edges deemed too long (described in an ad hoc structure) are stored in the queue. The
operations on the queue are then started by taking the first edge, which is
cut in half. The length of the two edges thus constructed is then calculated and, according to the
decision test, the next edge is addressed, or one or two of the new edges are added at the end of
the queue before moving on to the next edge.

It is important to see that the choice of structure, to address the same algorithm, most cer-
tainly has an influence on the final result, on the cost of the process and the necessary memory
resource5.
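To make the difference concrete, here is a small sketch (hypothetical code, not from the book) replaying the two toy runs above and showing that the same insertions yield different processing orders:

```python
# A sketch contrasting the two disciplines on the toy runs above: items 1..10,
# where processing item 10 (stack case) or item 1 (queue case) adds the two
# new items 11 and 12, and no other item adds anything.
def process(structure, lifo):
    order = []
    while structure:
        item = structure.pop() if lifo else structure.pop(0)
        order.append(item)
        if item == (10 if lifo else 1):       # this item spawns 11 and 12
            structure.extend([11, 12])
    return order

stack_order = process(list(range(1, 11)), lifo=True)    # LIFO discipline
queue_order = process(list(range(1, 11)), lifo=False)   # FIFO discipline
assert stack_order == [10, 12, 11, 9, 8, 7, 6, 5, 4, 3, 2, 1]
assert queue_order == list(range(1, 13))
```

The same twelve entities are processed in both cases; only the order (and the peak memory occupancy) differs.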

• Grid

A grid6 is a spatial structure in one, two, three, etc., dimensions that, in our case, essentially
enables the localization of geometric data such as points and triangles. A grid, as seen below, is
a cover of a domain by a set of cubes. The purpose is to distribute the data into these cubes,
addressing only those deemed relevant, in order to accelerate a given process.

5. In a stack, the top (top) is used to add a value, while in a queue, the end index is simply incremented
(end = end + 1).
6. This is also referred to as a “bucket”.

 An immediate application thereof is to quickly know which entities are close to a given
entity. The simplest grid is a uniform grid (Figure 1.1), which covers the area of interest with
cubes aligned with the axes; the cubes have a given size δ in each direction ("small" segments,
squares or cubes depending on the dimension of the space). In practice, either δ is given and a
number of cubes n is inferred, or this number (budget) n is given and δ is inferred. For some
problems, it may prove interesting to build cubes of different sizes depending on the direction;
one then obtains one size per direction, δx , δy , etc., and therefore budgets nx , ny , etc.
Regardless of the choice, it is very easy to find the cube that contains a given entity. We consider
three dimensions and denote by cube(i, j, k) the cube whose index is the triplet (i, j, k). The
extrema of the grid are determined: (xmin , ymin , zmin ) denotes the bottom left corner of the
grid and (xmax , ymax , zmax ) its top right corner (the cube thus defined being slightly dilated).
One then has to find which cube contains a point of coordinates (x, y, z) (assumed to be within
the grid, as it has been built for this purpose, see hereafter). It is trivially found (with integer,
i.e. truncated, divisions) that:

    i = (x − xmin )/δx ,   j = (y − ymin )/δy   and   k = (z − zmin )/δz .
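A minimal sketch of this index computation (the function name and the unit-cube example grid are illustrative assumptions):

```python
# A sketch of the cube-index computation: one integer per direction,
# i = floor((x - xmin)/delta_x), etc.
from math import floor

def cube_index(p, pmin, delta):
    # works in any dimension: zip pairs each coordinate with its extremum
    return tuple(floor((c - cmin) / d) for c, cmin, d in zip(p, pmin, delta))

# a 10 x 10 x 10 grid over the unit cube: delta = 0.1 in each direction
assert cube_index((0.25, 0.51, 0.99), (0.0, 0.0, 0.0), (0.1, 0.1, 0.1)) == (2, 5, 9)
```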
To deepen the understanding of what can be done with a grid, we are going to consider a specific
problem: given a cloud of points (say a million points, to fix ideas), how can a structure be built
that makes it easy to find the points close to a given point? First, the points will be stored using
a grid of a given size. This size, for example 10 × 10 × 10, necessarily implies that a given
cube may contain several (possibly a significant number of) points. However, we can only write
cube(i, j, k) = v; in other words, only one value v can be stored (here the index of a point Pv )
per cube. The solution is to couple the grid with a (linked) list. The value v is therefore just the
entry point into this list, which will serve to "contain" all the points of the cube under consideration.
The resource required is therefore the grid, 1,000 cubes in our example, and the list whose size is,
still in this example, 1,000,000, one link per point (the equivalent of the array N ext(.) seen above).
With this device, all the points can be stored.

We set cube(i, j, k) = 0 for all the indices and initialize the list, denoted N ext(.), to 0 (there
is no next); the points are then taken one by one. For a given point, of index w, its triplet (i, j, k)
is found and the value cube(i, j, k) is examined. If this value is zero, the cube is empty and one
sets cube(i, j, k) = w. If v = cube(i, j, k) is not zero, the list is iterated, starting at v, in
the following way:

Insert an index w in a list [1.3]


Start: ind = v.
(1) If N ext(ind) = 0, then N ext(ind) = w, END;
– otherwise, ind = N ext(ind), go to (1).
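Scheme [1.3] can be sketched as follows; a dictionary stands in for the three-index array cube(i, j, k) (an illustrative choice, not the book's storage):

```python
# A sketch of scheme [1.3]: each grid cell stores only the index of an entry
# point; further points of the same cell are chained through Next (0 = none).
n_points = 8
cube = {}                        # (i, j, k) -> index of the first point
Next = [0] * (n_points + 1)      # 1-based point indices, as in the text

def insert(w, ijk):
    if ijk not in cube:          # empty cell: w becomes the entry point
        cube[ijk] = w
        return
    ind = cube[ijk]              # otherwise walk the chain to its last link
    while Next[ind] != 0:
        ind = Next[ind]
    Next[ind] = w                # and hook w at the end

for w, cell in [(1, (0, 0, 0)), (2, (0, 0, 0)), (3, (1, 0, 0)), (4, (0, 0, 0))]:
    insert(w, cell)

# cell (0, 0, 0) now chains 1 -> 2 -> 4, cell (1, 0, 0) holds 3 alone
assert cube[(0, 0, 0)] == 1 and Next[1] == 2 and Next[2] == 4 and Next[4] == 0
```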

We now have the grid cube(i, j, k) and the array N ext(.); we can thus come back to our
subject, which is to find the points close to a given point for a given distance threshold, ε. The
principle is simple: we find the triplet (i, j, k) associated with the point being examined and
v = cube(i, j, k), the index of a vertex whose link fields will be iterated. In this way, the cube
(i, j, k) is examined, but it is possible that it is empty (v = 0) and/or that there is a solution in

Figure 1.1. A uniform grid (here in two dimensions) and the associated linked list

a nearby cube. Therefore, we shall also look into, according to ε and the sizes δx , δy , etc., the
neighboring cubes of the original cube that are likely to be relevant. To this end, the first index
will a priori vary between i − Δi and i + Δi , and the same is done for the other indices, with,
as value for Δi , the appropriate integer calculated based on δx and ε, that is, Δi = ε/δx . Since
the index, here the first one, must be comprised within the admissible range, one has to ascertain
that i − Δi and i + Δi are within this range. The first possible index is 0: for x = xmin , we have
(x − xmin )/δx = (xmin − xmin )/δx = 0, whereas the last one is (xmax − xmin )/δx − 1,
where xmax is an upper bound for x. We thus set imin = 0 and (with an integer calculation)
imax = (xmax − xmin )/δx − 1, and we have the constraint:

    i − Δi ≥ imin and i + Δi ≤ imax ,

from which the two values are deduced:

    Δi⁻ = min (i, Δi ) and Δi⁺ = min (imax − i, Δi );

in conclusion, the first index will vary7 between i − Δi⁻ and i + Δi⁺. The same occurs for the
other indices. The following scheme can thus be proposed:

7. One could also have simply written a variation between max (i − Δi , 0) and min (i + Δi , imax ).
Data and Basic Techniques 9

Find points in the neighborhood of a given point [1.4]


Build the grid cube(i, j, k) and its N ext(.) link fields.

Calculate the triplet (i, j, k) of point P = (x, y, z) under examination:

– for indi = i − Δi⁻, ..., i + Δi⁺ ; indj = j − Δj⁻, ..., j + Δj⁺ ; indk = k − Δk⁻, ..., k + Δk⁺ :
 - v = cube(indi , indj , indk ) → Pv the point of index v;
 (1) if ||P − Pv || ≤ ε, Pv is a solution:
 - v = N ext(v), if v ≠ 0, take Pv and go to (1);
– end for.

The points searched for are the Pv discovered in this algorithm.
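Scheme [1.4] can be sketched as follows, in two dimensions for brevity (helper names and the toy point set are assumptions):

```python
# A sketch of scheme [1.4]: collect every stored point lying within distance
# eps of P, scanning only the grid cells that may contain one.
from math import floor, ceil, dist

def build_grid(points, delta):
    cube, Next = {}, [0] * (len(points) + 1)   # cell -> entry index; links
    for w, p in enumerate(points, start=1):
        ij = (floor(p[0] / delta), floor(p[1] / delta))
        if ij not in cube:
            cube[ij] = w
        else:
            ind = cube[ij]
            while Next[ind]:
                ind = Next[ind]
            Next[ind] = w
    return cube, Next

def neighbors(P, eps, points, cube, Next, delta):
    i, j = floor(P[0] / delta), floor(P[1] / delta)
    d = ceil(eps / delta)                      # cells to scan on each side
    found = []
    for ii in range(i - d, i + d + 1):
        for jj in range(j - d, j + d + 1):
            v = cube.get((ii, jj), 0)
            while v:                           # walk the cell's linked list
                if dist(P, points[v - 1]) <= eps:
                    found.append(v)
                v = Next[v]
    return found

pts = [(0.05, 0.05), (0.12, 0.08), (0.9, 0.9)]
cube, Next = build_grid(pts, delta=0.1)
assert sorted(neighbors((0.1, 0.1), 0.1, pts, cube, Next, 0.1)) == [1, 2]
```

Index clamping to [0, imax] is omitted here because the dictionary simply returns "empty" for out-of-range cells; with a true array, the Δ⁻/Δ⁺ bounds above are needed.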

 Another utilization of the same structure is the implementation of a quick filter. Given a
number of points, find out whether a (new) point is too close to a point already stored in
the structure. If that is the case, do not retain it (filter effect); otherwise, encode it into the
structure. Note that the distance threshold, ε, depends on the point analyzed, P , and on each
of the points (the Pv already coded in the structure) against which it will be compared. With
this in mind, ε should be written as ε(P, Pv ). The distance between P and a Pv is ideally
computed within an underlying metric field (every point is associated with a size; this is an
isotropic case) and the threshold of the filter can be expressed on the basis of a "unit" length
(relative to this field). For instance, if the sizes at P and Pv are denoted by h and hv , it will be
possible to calculate the distance8 in the field as

    dh,hv (P, Pv ) = 2 d(P, Pv ) / (h + hv )   or   d²h,hv (P, Pv ) = d²(P, Pv ) / (h hv ),

and a filtering criterion dh,hv (P, Pv ) ≤ ε or d²h,hv (P, Pv ) ≤ ε² will be employed with, now,
a fixed threshold, for example √2/2.
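These approximate metric distances can be sketched as follows (hypothetical helper names; d_metric = 2 d(P, Pv)/(h + hv) and d2_metric = d(P, Pv)²/(h hv), with h, hv the target sizes at P and Pv in the isotropic case):

```python
# A sketch of the two approximate distances in a size field.
from math import dist

def d_metric(P, Pv, h, hv):
    # distance normalized by the mean of the two local sizes
    return 2.0 * dist(P, Pv) / (h + hv)

def d2_metric(P, Pv, h, hv):
    # squared distance normalized by the product of the two local sizes
    return dist(P, Pv) ** 2 / (h * hv)

# two points whose separation equals the (common) local size are one
# "unit" apart in the field
assert abs(d_metric((0.0, 0.0), (0.5, 0.0), 0.5, 0.5) - 1.0) < 1e-12
assert abs(d2_metric((0.0, 0.0), (0.5, 0.0), 0.5, 0.5) - 1.0) < 1e-12
```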

An algorithm very similar to the previous one will be used with, in addition, a more subtle
path, ideally in spiral, around the initial cube. The purpose of this is, in the event of a rejection,
to detect more quickly a (the first) conflict situation and, thus, to stop exploring the points of the
structure. Since the likelihood of finding a conflict decreases as one moves away from the cube in
which the analyzed point falls, we shall start by analyzing the points identified in this cube before
“revolving” around (unlike the previous case) by gradually moving away from it. It remains to
be clarified which cubes should be analyzed, namely those likely to contain a conflicting point. It
is assumed that the size h of the point P that is being filtered is known. The points Pv potentially
in conflict are progressively discovered, but their size hv comes to be known when such a point is
discovered. As a result, how far the search should be extended remains a priori unknown
(namely, the number of cubes to explore in each direction). Therefore, we shall rely only on the
h of point P , with a safety coefficient, to assess the width of the area to be explored. On the
other hand, the approximated computation of distances will be correct in accordance with the
sizes involved.

8. It has been seen, on many occasions, in previous volumes, how a distance in a field can be calculated
(approximated); we retain here an inexpensive and reasonable approximate formula for the present case.

Figure 1.2. A uniform grid (here in two dimensions) and the associated linked list.
The point P (black circle) to be filtered, with its size h, and the point Pv acting as entry point
into the linked list of the (supposedly non-empty) cube whose indices are determined by P

The indices (i, j, k) (therefore integers) of point P of coordinates (x, y, z) are calculated, as
well as (ci , cj , ck ), the relative coordinates of P . In more detail, we have ci = (x − xmin )/δx ,
while i = int(ci ) is the integer immediately less than or equal to ci . The half-width at i is
estimated as lx = h/δx . It follows that the first index at i to be explored is
mini = int(max(0, ci − lx )) and that the last one is maxi = int(min(ci + lx , imax )). For the
other directions (at j and k), we have exactly the same approach. It now remains to determine
how the set of cubes should be traversed in order to minimize the number of computations. The
trivial solution, which is the loop from mini to maxi (likewise at j and k), is obviously ruled out.

The list in the example in Figure 1.2 focuses merely on the relevant cubes. Figure 1.3 shows
two strategies, among many others, of possible paths. The strategy to be retained must both
minimize costs and, obviously, be relatively easy to “program” (think of the three dimensions).
choice. On the left, ordering the cubes by their distance to the initial cube seems optimal. We
first look into the cube (i, j) and then its four neighbors by edge (the cubes (i − 1, j), (i + 1, j),
(i, j − 1) and (i, j + 1)). Then, we consider the four neighbors by corner (the cubes
(i − 1, j + 1), (i − 1, j − 1), (i + 1, j − 1) and (i + 1, j + 1)); as such, we have just examined
the crown of rank 1. At the next rank, the same idea is applied (at j, the two cubes involved are
(i − 2, j) and (i + 2, j), etc., for rank 2). It is clear that it is not easy to build this path (imagine
the three dimensions). To the right of the figure, the trajectory is more simply defined, even if it
is not entirely optimal. We again look first at cube (i, j) and then at its neighbors in the first
crown, rank 1 (starting with (i − 1, j), for example, and then rotating). Next, the adjacent
(active) cubes of the second crown are taken, rank 2 (starting with (i − 2, j), for example, and
then rotating). The following crowns are then addressed if they are relevant. This strategy
seems to be much easier to program9.

Figure 1.3. Two possible paths around the cube containing point P
(two-dimensional case)

In short, the starting cube is located. If it is non-empty, we find the index v of the point that it
contains. The point P is filtered with Pv ; if there is conflict, the process terminates and P is set
aside. Otherwise, the links associated with v are iterated through and P is filtered with the points
encountered; any collision stops the filter and P is set aside. Then, if the process continues, the
surrounding cubes are traversed following the pathway strategy defined. For each cube, if it is
non-empty, the index v that it contains is found and the same process is repeated.

If, at the end of visiting all of the cubes deemed relevant a priori, the point P has not been
filtered out, it is inserted in the grid, which amounts either to registering it in its own cube (if
that cube was empty) or to adding it at the end of the list of that same cube. The following
scheme synthesizes what has just been discussed:

9. And this will be left to really interested readers!



Filter a point [1.5]


Build the filtering grid cube(i, j, k) and its N ext(.) link fields.

Compute the triplet (i, j, k) of point P = (x, y, z) being examined. Set v = cube(i, j, k).

If v ≠ 0, take point Pv :

(1) if ||P − Pv || ≤ ε, Pv is in conflict, END;

– v = N ext(v), if v ≠ 0, take Pv and go to (1);

End if v ≠ 0.

"Spiral" path following the crowns. Set rank = 0;

(2) rank = rank + 1.

Do for the indices i, j and k of the crown of rank rank:

Take v = cube(i, j, k), if v ≠ 0 take Pv ;

(3) if ||P − Pv || ≤ ε, Pv is in conflict, END;

– v = N ext(v), if v ≠ 0, take Pv and go to (3).

End Do for the indices.

Go to (2) while not completed.

Insert P into the filtering grid.
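A simplified sketch of scheme [1.5] in two dimensions (helper names are assumptions; the crown of rank r is taken as the cells at Chebyshev distance r, and the exploration width is passed in as max_rank rather than derived from h):

```python
# Reject a new point if it conflicts with a stored one, scanning the cells
# crown by crown so that a conflict is detected as early as possible; if no
# conflict is found, insert the point into the grid structure.
from math import floor, dist

def crown(i, j, r):
    if r == 0:
        return [(i, j)]
    return [(i + di, j + dj)
            for di in range(-r, r + 1) for dj in range(-r, r + 1)
            if max(abs(di), abs(dj)) == r]

def filter_point(P, eps, points, cube, Next, delta, max_rank):
    i, j = floor(P[0] / delta), floor(P[1] / delta)
    for r in range(max_rank + 1):
        for cell in crown(i, j, r):
            v = cube.get(cell, 0)
            while v:                       # walk the cell's linked list
                if dist(P, points[v - 1]) <= eps:
                    return False           # conflict: P is filtered out
                v = Next[v]
    points.append(P)                       # no conflict: insert P
    w = len(points)
    Next.append(0)
    if (i, j) not in cube:
        cube[(i, j)] = w                   # P's cube was empty
    else:
        ind = cube[(i, j)]                 # otherwise append to the chain
        while Next[ind]:
            ind = Next[ind]
        Next[ind] = w
    return True

pts, cube, Next, delta = [(0.05, 0.05)], {(0, 0): 1}, [0, 0], 0.1
assert filter_point((0.06, 0.05), 0.05, pts, cube, Next, delta, 1) is False
assert filter_point((0.25, 0.25), 0.05, pts, cube, Next, delta, 1) is True
assert len(pts) == 2
```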

It should be noted that this type of filter is perfectly adequate for Delaunay-based triangulation
(meshing) algorithms. It makes it possible to rule out (not to insert) a point too close to a vertex
and it controls the size of the (current) edges of the mesh under construction. For a frontal-type
method, such a filter avoids proposing a point that is too close to a vertex existing during the
advance of the front. For tree-based methods, the tree itself serves as a filter (see below), so
that resorting to a grid is, in general, superfluous.

 For one last use, think of a meshing or remeshing algorithm that needs to quickly find a
(mesh) element containing a given point. The grid is then used to initialize a localization
algorithm. The triplet (i, j, k) associated with the point is addressed and the index of a (current)
mesh vertex, v = cube(i, j, k), is obtained. Each vertex of the mesh simply has to be associated
with an element of the mesh to initiate the localization algorithm (as seen in Volume 1 for
simplicial meshes). The use of the grid makes it possible, in principle, to start relatively close to
the solution and thus enhances the performance of the search.

Most of what has just been seen extends, more or less directly, to the anisotropic case. The
geometry of the cubes is not really adaptable but, on the other hand, lengths can be evaluated
in accordance with the anisotropy.

Many other uses, in our field or in others, are possible. In conclusion, let us indicate that
a grid, as described above, is by nature uniform (cubes with equal sizes defined according to
directions) and that there is the risk that one (or more) cube(s) “contain(s)” a significant number
of points (here), while others, if not most of the other cubes, are seldom filled up or even empty. If
the distribution of points into the cubes is poorly balanced, most of the advantages of the grid will
be lost. As a matter of fact, in such a case, the performance gain is reduced, the mere presence
of a single cube containing almost all of the points (and this case is real) may lead to quadratic
behavior (Figure 1.4). As a result, a more flexible and adaptive structure is to be considered, such
as a quadtree- or octree-type tree (see below).

Figure 1.4. The separation potential of a grid with 25 cubes and of a tree with the same number
of cubes. The smallest cube of the grid (all of them, in fact) has a size equal to 1/5,
while that (or those) of the tree has (have) a size equal to 1/64. This discrepancy is,
naturally, the greatest possible; thus, in practice, it is not actually achieved.
However, the value reached is better than that related to a grid

 Hash grid: We may choose not to actually build the grid (hence a two- or three-index
array, which is memory-hungry) but to describe it via a single index by way of hashing. One
virtually has a grid as before and a point P is associated with its triplet (i, j, k) in the
conventional manner. Then, a hashing function is used to build a single index, denoted ijk: for
example, ijk = i + j + k up to a modulus. In this way, the size of the grid is controlled while
the size of its cubes may be made extremely small, giving a good degree of separation and an
excellent resolution; the price to pay is the increased length of the linked lists, which take
longer to iterate through. This aspect (the resolution) will be revisited in Chapter 2 when mesh
reconnection and merging problems are addressed.
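The hash-grid idea can be sketched as follows (the table size and helper names are illustrative; a per-slot Python list stands in for the linked list):

```python
# The triplet (i, j, k) is folded into a single index by a hash function,
# here i + j + k modulo the table size as in the text, so arbitrarily fine
# virtual cells fit a fixed memory budget.
TABLE_SIZE = 7                            # deliberately small: collisions

def hash_cell(i, j, k):
    return (i + j + k) % TABLE_SIZE

table = [[] for _ in range(TABLE_SIZE)]   # per-slot list plays the chain role

def insert(w, ijk):
    table[hash_cell(*ijk)].append((ijk, w))

def points_in_cell(ijk):
    # a slot may mix several cells (collisions): keep only exact matches
    return [w for cell, w in table[hash_cell(*ijk)] if cell == ijk]

insert(1, (0, 0, 0))
insert(2, (7, 0, 0))                      # collides with (0, 0, 0): same slot
insert(3, (0, 0, 0))
assert points_in_cell((0, 0, 0)) == [1, 3]
assert points_in_cell((7, 0, 0)) == [2]
```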

• Tree

Of all the possible types of trees, two can be distinguished, namely quadtree- and octree-type
trees, which will be seen here as spatial structures (similarly to the grids discussed above). The
goal is the same: to store, localize or filter points. The use of such a structure, more
sophisticated than a simple grid, will ensure, for a reasonable budget, that no cubes (the
quadrants or octants) are abnormally filled. Furthermore, in regions where the geometry is fine,
the cubes are refined; otherwise, larger cubes are kept. It should also be noted that the
conventional rule for balancing the tree, applied when meshing is the purpose (Volume 2,
Chapters 4 and 5), is not required here. Behaviors of a quadratic nature will be avoided by
controlling the depth of the tree so as to control the maximal number of points linked to a
given cube. In practice, this maximal number of points is chosen, and the maximal depth is
deduced therefrom, as well as the maximal number of cubes in the tree (which gives the
necessary memory budget).

To further describe this technique, two dimensions are considered, thus adopting a quadtree,
and we follow the very clear presentation given in [Löhner-2008]. A list of possible variations
of this process will also be proposed. The tree is seen as a two-index array, namely an array
denoted quad(., .). The second index is that of the quadrant. The first index can take several
values and, according to these, it allows the history of the quadrant to be described and the
points it contains to be referenced. Therefore, if no more than four points are tolerated10 per
quadrant, the array is written as quad(1 : 7, i) and the precise meaning of quad(j, i) is as
follows:
– v7 = quad(7, i) indicates that the quadrant has been subdivided (v7 < 0), is empty (v7 = 0)
or gives the number v7 of points that it contains (v7 > 0);
– v6 = quad(6, i) indicates the position of the quadrant i inside its parent v5 ;
– v5 = quad(5, i) gives the index of the parent quadrant (v5 > 0);
– quad(1 : 4, i) is defined according to the value of v7 :
- if v7 > 0, quad(1 : 4, i) gives the indices of the points contained in the cube; a zero
index indicates that the cube is not already saturated;
- if v7 < 0, quad(1 : 4, i) gives the indices of the 4 children of quadrant i.

Besides this array (of integers), an array (of real numbers) bounds(., .) will be defined that
associates with each quadrant (octant), aligned with the axes, the minimum and maximum
abscissa (and ordinate, and height) of its vertices. In two dimensions, we thus have four values,
denoted xi , yi , Xi , Yi for quadrant number i, that is to say, xi = bounds(1, i), etc. In three
dimensions, there will be six such values. These bounds will be used to localize a given point
by finding the cube (cell or leaf) in which it is contained.

10. A natural value since the quadrants are likely to be subdivided into four, but another value may be taken
for example 10. The array will then be organized as quad(1 : 13, i). For indices 13, 12 and 11, we find
once more the analog of values 7, 6 and 5 as they are defined in the text and for indices 1 to 10, either at
most the 10 points stored are found or only the four children. In three dimensions, for an octree, the natural
value is obviously 8.

All of this information will be used to build (with the given points) the tree and then to use
it to carry out operations related to the considered process (localization of a point, insertion of a
new point and filter of a point relatively to the existing points in the tree).

Let us indicate that there are other ways to define a tree, in particular, by storing only what
is strictly necessary in the structure itself and by adding pointers. Examples of this alternative
construction method will be discussed below.

 Localization of a point

Consider the structure with its two arrays. To find the cube containing a point, the tree
has to be iterated, starting from its root and descending through its various levels down to the
depth corresponding to the solution cube. The array bounds(., .) is used to know whether
the point being examined is inside a given cube. The array11 quad then makes it possible, if
necessary (thus if v7 < 0), to travel downwards in the structure via one of the values quad(1 :
4, .) to find the cube at the terminal level containing the point. This gives a particularly simple
algorithm: starting from the root, it searches in which of the (four) children the point is located,
and the process iterates as long as it is incomplete. This gives the following steps, denoting
P = (x, y) the coordinates of the point being analyzed.

Localization of a point [1.6]


iel = 1
– (1) if quad(7, iel) < 0, search among the four children, quad(1 : 4, iel), for the one that
contains P :
 - recover xiel = bounds(1, iel), Xiel = bounds(2, iel),
   yiel = bounds(3, iel) and Yiel = bounds(4, iel), then;
 - if xiel ≤ x < (xiel + Xiel )/2, two cubes are discarded;
 - if yiel ≤ y < (yiel + Yiel )/2, a single cube, i, is retained; do iel = i and go to (1).
– The solution is cube iel, END.

It should be immediately noted that testing the range of x and y requires only two operations
(three in three dimensions). We also observe that a problem of inaccuracy in the computation
does not really have any consequence; in the worst case, the wrong cube is found.

 Insertion of a point

To insert a point in the structure, it has first to be localized: this is the previous algorithm.
Let i be the quadrant index found, and the insertion of the point (denoted e in Figure 1.6) is done
according to the following scheme.

11. Or its three-dimensional equivalent.



Figure 1.5. A cube with its bounds, its four children with the new bounds
and the local numbering of its children. On the right, the four points “fall” into the first child

Insertion of a point [1.7]


– (1) if quad(7, i) = 4 (the quadrant is saturated):
 - subdivide the quadrant into 4, place the 4 points in the appropriate children;
 - localize the point e in the appropriate child, let i be its index, go to (1).
– Otherwise quad(7, i) = quad(7, i) + 1 and quad(quad(7, i), i) = e, END.

Figure 1.6 shows how to insert into the structure a point that falls into a saturated quadrant that
will have to be subdivided; this is the case quad(7, i) = 4 of the scheme, where i is the index of
this quadrant.

The memory is intended for m cubes and, at a minimum, quad(7, 1 : m) = 0 was initialized in
the array quad during the construction phase of the tree (see hereafter). It is assumed that n cubes
have already been created; the first free cube is thus that of index n + 1. The cube, of index i,
containing the point is saturated, as such it must be subdivided before attempting to insert the
point.

The children will therefore be stored at the first free indices, that is n+1, n+2, n+3 and n+4.
The cube of index i should forward to these indices, namely quad(1, i) = n+1, ..., quad(4, i) =
n+4 and should marked as subdivided, that is quad(7, i) = −1. In the reverse direction, children
must point to their parent, namely quad(5, n + 1) = i, ..., quad(5, n + 4) = i. Then, we localize
points a, b, c and d in the desired children. In the example in the figure, the cube n + 1 contains
a and therefore quad(7, n + 1) = 1, the next cube is empty, the next one contains b and c, thus
quad(7, n + 3) = 2, etc. The array bounds(., .) is updated in parallel. Once done, the point e is
then inserted, it falls into the cube n + 2 which is unsaturated, and it can thus accommodate it.
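The insertion scheme with its subdivision step can be sketched as follows, under the same illustrative conventions (0-based root at index 0, quad[i][6] holding the point count or −1, children created at the first free indices); capacities and the demo_build driver are assumptions for the sketch, not the authors' code.

```c
#define MAXC 64                 /* cube capacity, illustrative */
#define MAXP 32                 /* point capacity, illustrative */

static int    quad[MAXC][7];    /* [0..3] points or children, [4] parent,
                                   [6] point count, or -1 once subdivided */
static double bnds[MAXC][4];    /* xmin, xmax, ymin, ymax */
static double ptx[MAXP], pty[MAXP];
static int    ncube;            /* cubes created so far */

static int locate(double x, double y)
{
    int i = 0;
    while (quad[i][6] < 0) {
        double xm = 0.5 * (bnds[i][0] + bnds[i][1]);
        double ym = 0.5 * (bnds[i][2] + bnds[i][3]);
        i = quad[i][(y < ym ? 0 : 2) + (x < xm ? 0 : 1)];
    }
    return i;
}

static void insert(int p);      /* forward: subdivision re-ranks points */

static void subdivide(int i)
{
    int c, k, saved[4];
    double xm = 0.5 * (bnds[i][0] + bnds[i][1]);
    double ym = 0.5 * (bnds[i][2] + bnds[i][3]);
    for (k = 0; k < 4; k++) saved[k] = quad[i][k];
    for (c = 0; c < 4; c++) {   /* children at the first free indices */
        int j = ncube++;
        quad[i][c] = j;
        quad[j][4] = i;         /* child points back to its parent */
        quad[j][6] = 0;
        bnds[j][0] = (c & 1) ? xm : bnds[i][0];
        bnds[j][1] = (c & 1) ? bnds[i][1] : xm;
        bnds[j][2] = (c & 2) ? ym : bnds[i][2];
        bnds[j][3] = (c & 2) ? bnds[i][3] : ym;
    }
    quad[i][6] = -1;            /* i is now an intermediate cube */
    for (k = 0; k < 4; k++) insert(saved[k]);   /* rank the 4 points again */
}

static void insert(int p)       /* the insertion scheme */
{
    int i = locate(ptx[p], pty[p]);
    while (quad[i][6] == 4) {   /* saturated: subdivide, then relocate */
        subdivide(i);
        i = locate(ptx[p], pty[p]);
    }
    quad[i][quad[i][6]++] = p;
}

static int demo_build(void)     /* 5 points: the 5th forces a subdivision */
{
    static const double px[5] = { 0.1, 0.9, 0.9, 0.1, 0.6 };
    static const double py[5] = { 0.1, 0.1, 0.9, 0.9, 0.6 };
    int p;
    ncube = 1;                  /* the root */
    quad[0][4] = -1; quad[0][6] = 0;
    bnds[0][0] = 0.0; bnds[0][1] = 1.0; bnds[0][2] = 0.0; bnds[0][3] = 1.0;
    for (p = 0; p < 5; p++) { ptx[p] = px[p]; pty[p] = py[p]; insert(p); }
    return ncube;
}
```

Building the whole tree is then just a loop over the point cloud calling insert, as the construction algorithm below states.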

 Construction of a tree

With the two operations above (localization and insertion), it will be shown how to build a
tree to store a cloud of given points.
Data and Basic Techniques 17

Figure 1.6. Updated array quad(., .) related to the attempt to insert a point, denoted e, in a
saturated quadrant, of index i (already containing points of index a, b, c and d)

The root (the first quadrant (octant)) is defined from extrema (xmin , ymin , zmin ) and
(xmax , ymax , zmax ) slightly dilated. This root might be a square (cube) or a rectangle (paral-
lelepiped). The quadrants (octants) will therefore be squares (cubes) or not. For the subdivision
of cubes, we shall rely on the middle of the edges. The construction of the tree consists of
defining its cubes down to the desired level and associating them with their bounds (the arrays
quad(., .) and bounds(., .)), while satisfying the constraint: no more than four points per cube.
We start by setting quad(7, i) = 0 for all the expected values for i (from 1 to m). The root is
initialized:
– i = 1, quad(1 : 7, i) = 0;
– bounds(1, i) = xmin , bounds(2, i) = xmax , bounds(3, i) = ymin , bounds(4, i) = ymax .

Then, the insertion algorithm is sufficient to build the tree, that is simply:

Construction of the tree [1.8]


– Do for all points of the cloud:
- Insert the point using Algorithm [1.7].
– End do.

The index of the quadrants is found during subdivisions by a simple increment of 1 in the
numbering order seen above (Figure 1.5 in the middle).

 Filtering of a point

Similarly to, and for a comparison with, a grid, it will be shown how to filter a point relatively
to a set of points stored in a tree structure. With each point a size h is associated and the relevant
quadrants of the tree will be visited to find out if the point under examination P = (x, y) is too
close to a given point, for a threshold ε. Except for a few details, the same method12 allows the
points in a certain neighborhood of a given point to be found. The idea is, as with a grid, to
locate the cube containing the point being analyzed and then to examine, if necessary, the
neighboring cubes in a given neighborhood. The cubes neighboring a given cube are of several types
depending on their degree of “kinship”. Siblings will be found (derived from the subdivision of
the same cube) as well as more or less distant cousins (derived from the subdivision of siblings
or cousins of the parent). In other words, it will be necessary to travel through the tree, a priori
in both directions, downwards (moving from a child to the child of a child, etc.) or upwards
(moving from a parent to the parent of a parent, etc.). Stepping downwards or upwards into the
tree is trivial via either quad(1 : 4, i) or quad(5, i).

Filter a point or find the points located within a given neighborhood [1.9]
– Locate the point P in the tree, that is, i the index of the cube found.
– Examine cube i and depending on the case:
- for a filter:
. for the Pv discovered in the cube, if ||P − Pv || ≤ ε, the point is filtered, END;
- to find a neighborhood:
. for the Pv discovered in the cube, if ||P − Pv || ≤ ε, the point Pv is a solution.
– Move to the relevant neighboring cube(s), repeating until finished.

The same remark as for a grid applies: the h of the point being analyzed is known, but those
of the points Pv discovered in the tree are not known until the time of their discovery, hence
a safety coefficient should be provided. Nevertheless, the tree was built from supposedly
consistent points (in terms of their size h) and, in addition, the risk of having a cube too full
is prevented by the subdivision rule for the saturation criterion and, even if several neighboring
cubes are impacted, the number of points (and therefore of computations) to be examined is
lower than in the case of a grid.

Figures 1.7 and 1.8 try to illustrate the utilization of a tree for filtering a point given with
respect to the points stored in the structure. Geometrically, the domain surrounds a central object.
The outer boundary is very coarsely meshed while the boundary of the central object is finely
meshed. The result is the tree shown in Figure 1.7 (on the left). A magnification of the central
region is shown on the right side of the figure. The point P to be filtered can be seen, and is

12. If ε defines the neighborhood, the same algorithm is used by fixing h = 1 for every point.

represented by a red circle. Its size h determines the area to be analyzed by the red circle, the
encompassing box of this circle was also added (aligned with the axes).

Figure 1.7. On the left, in blue, the boundary of the domain with its two connected components
and the associated tree. On the right side, the central object is enlarged, the point to be filtered
(red circle) is the center of the circle of radius h

There is a case where, to travel from a cube to one of its neighbors, it is necessary to go back
to the root and then move downwards into another branch of the tree, but this (long) journey
can be avoided using a simple trick. To this end, the depth p of the tree has to be known (see
immediately below). This depth makes it possible to calculate the size(s) of the smallest cube in
the tree, namely a factor 1/2^p of xmax − xmin and ymax − ymin. The two increments are then
defined as δx = (xmax − xmin)/2^(p+1) and δy = (ymax − ymin)/2^(p+1). From the bounds of the cube and
its position in its parent, in the middle of Figure 1.5, rather than moving up and down in the tree,
a location query will be initiated on a fictitious point. For example, to find the cube to the right
of the cube numbered 2 of bounds (xi , Xi , yi , Yi ), we shall build as a fictitious point the point
(Xi + δx , yi + δy ). Similarly, to find the left neighbor of the cube numbered 1, we build as a
fictitious point the point (xi − δx , yi + δy ), etc.
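The increments and the fictitious probe point are elementary to compute; a sketch, with illustrative names:

```c
/* delta: half the side of the smallest cube along one axis,
   (hi - lo) / 2^(p+1), with p the depth of the tree */
static double delta(double lo, double hi, int p)
{
    return (hi - lo) / (double)(1 << (p + 1));
}

/* right_probe_x: abscissa of the fictitious point used to reach the
   right-hand neighbor of a cube whose right bound is Xi; the ordinate
   is built the same way from the bottom bound and delta_y */
static double right_probe_x(double Xi, double xmin, double xmax, int p)
{
    return Xi + delta(xmin, xmax, p);
}
```

Localizing this fictitious point from the root then yields the neighbor directly, avoiding the up-and-down traversal.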

Before continuing, let us figure out how to calculate the depth of the tree. It is necessary to
associate an additional value13 to the cubes, their depth. Therefore, let pi = quad(8, i) be the
depth of the cube i. We initialize p1 = 0 for the root and set p = 0. At each subdivision, we have
pchild = pparent + 1 and the depth p is adjusted as p = max(p, pchild ).

In the case of a grid, to determine which cubes to examine and how to traverse them,
the width of the relevant area had been calculated and a spiral trajectory constructed (or almost).

13. For example, by giving the array a size of eight values in the first field.

Figure 1.8. A real example (the tree of which would be close to that of the previous figure)
where a large number of points (from the boundary of a domain) is located inside a very small
part of the volume. For a grid, one (or a few) cube(s) is (are) largely full,
and the others (although of the same size) being almost empty. For a tree,
the cubes really utilized (and with adapted size) will be in the neighborhood of
this half-plane (right-side enlargement) in its cube (visible on the left)

The regular structure (the grid) had been used to define this trajectory. With a tree, we are going
to look at how these operations are implemented. The non-regular structure of the tree, the
existence of (geometrically) neighboring quadrants, but of different sizes (different levels) that
are not topologically neighbors (sharing no common edges), makes the spiral path more difficult
around a given quadrant. It should be possible to navigate the tree upwards, travel through
a neighbor, at this level, and move downwards. In order to avoid these operations, we shall
propose to rely only on the localization algorithm. A quadrant is considered and it is virtually
associated with a circle whose center is the barycenter of the quadrant and radius, a length in
accordance with the size of the quadrant. By increasing this radius, the circle will intersect the
quadrants in the vicinity of the initial quadrant. The circle should simply be sampled with a few
points. Its points are then localized in the tree and the quadrant that contains them is determined;
this quadrant is intended to be processed (by referring to the vertices that it points to). To avoid
addressing the same quadrant multiple times, a coloring is used. Indeed, the fact that the radius
increases does not mean that the affected quadrants are always different, a large quadrant could
be found for several radius values. It should be noted that this method has the advantage of
solving the problem of finding neighboring cubes and that neighborhood information between
sibling quadrants does not need to be stored.

Bear in mind that the encompassing box (aligned with the axes) of the circle (sphere) being
searched for can be defined and associated with it. This can accelerate the computations: a point
in an a priori relevant cube can be discarded with a single, simpler computation, the difference
in x (in y) with this box.
 Filtering or insertion of a point

In Delaunay-based methods or frontal methods, points are proposed in order to be inserted.


These points have to be filtered (discarded) or selected as candidates for the insertion. We then
follow the algorithm above for this specific purpose.

In Delaunay-based methods, the insertion of a point needs to quickly find the element in
which it is situated. Its localization in a tree cube makes it possible to find a vertex of the mesh
being built. An element should then be associated with every vertex to initialize the search
process with the element associated with the vertex found; this is the notion of seed which we
will revisit.
 Anisotropy
Similarly to a grid, a tree is inherently isotropic. On the other hand, distance calculations can
be carried out by taking into account an anisotropic field.
 Immediate or more advanced variants for the definition of a tree

The definition of a quadtree, as seen above, can be made significantly more compact and
therefore less memory-greedy, thus making it possible to store on a laptop computer a tree
containing dozens or hundreds of millions of cells.

We consider the situation where only downwards searches are performed in the tree. This
avoids having to memorize parents. Next, the coordinates of the cell vertices are not stored (the
array bounds(., .)); they will be calculated on the fly, at the expense of redundant calculations
but with a significant memory gain. For the purpose of further compacting the structure, a
minimum amount of information will be stored in every quadrant. Quadrants are of two types,
intermediate quadrants and terminal quadrants. An intermediate quadrant should only allow access to
its children. Only one index is needed that points to the first child, the others being consecutively
stored in memory. Terminal quadrants (which do not have children) allow effective access to
the entities (for example, vertices) stored in the tree via, once more, a single index. In the event
that several entities are to be stored in the tree, they will be stored consecutively in the memory
starting from that first index. A single index can therefore carry all of the information: if
the index is positive, the quadrant is terminal and the index points to where the entities are stored; if
the index is negative, the quadrant is intermediate and the absolute value of the index points to
the first child of the cell.
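This single-signed-index convention fits in three one-line helpers; the 0-based offset convention used here is an assumption for illustration.

```c
/* one signed integer per quadrant:
   idx > 0 : terminal quadrant, entities stored from slot idx - 1
   idx < 0 : intermediate, four children stored consecutively from -idx - 1
   (the offset of one keeps the value 0 available to mean "empty") */
static int is_terminal(int idx)  { return idx > 0; }
static int first_child(int idx)  { return -idx - 1; }
static int first_entity(int idx) { return idx - 1; }
```

With four children stored consecutively, first_child plus a local number 0..3 reaches any child without further storage.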

Everything that has just been seen in two dimensions can be extended to three dimensions.

This section ends with a problem already raised for the arrays, but common
to all these structures, which is the case where their initial sizing proves to be insufficient. For

example, an array has been sized to store up to 1,000 values and storage for another value is
required. This point will be discussed in section 1.1.2 where several solutions will be proposed.

1.1.2. Basic techniques

It is impossible to be exhaustive, here too, but it is useful to describe some of the basic
techniques14 used intensively in algorithms (here rather destined to meshes). In our experience,
we have observed the role played by a reduced number, all in all, of techniques which, for the
most part, are surprisingly simple and, in principle, (should be) known to everyone.
• Coloring and dynamic coloring
The purpose of coloring is to quickly know the status of an entity (a point, an element, etc.)
vis-a-vis a given situation. To find out if a point has undergone any processing (a move, etc.) or
if an element belongs to a certain set (a cavity, a sub-domain, etc.), it can be assigned a color, c,
via an array (a marker) of colors. In these two examples, a Boolean number (0 or 1, true or false,
yes or no) can be used, more precisely, two colors only; but it then remains to be known whether
the marker has been updated when the marked entity is modified. To clarify, reconsider the example of the
construction of a queue, Algorithm (1.2), by stipulating a few rules. The aim is to find a list
of elements in a mesh that do not verify Delaunay’s criterion vis-a-vis a point. One starts by
identifying the element, iel, containing this point and then builds, per neighborhood, the queue
sought for, the array T ab(.). Thereby we set start = end = 1 and T ab(start) = iel and the
following algorithm is unwound:

Construction of a queue of elements [1.10]


(1) Process the element of index start:
– do start = start + 1;
– if one or more neighbors are to be added:
- do while: end = end + 1, insert the element at this index;
– otherwise, if start > end, END;
– go to (1).

A subtlety is hidden in this algorithm. Since it is proceeding by neighborhoods, a neighbor


of an element may already be in the queue. It is therefore necessary to detect this case and not
to store such elements in the queue again15. The simplest method is to color the elements to
indicate if they are or are not already in the queue. First method, a Boolean, the elements already
taken into account are marked with the value 1, the others take the value 0. By simply looking
into this value (color), it can be seen if a particular neighbor must be considered. However, at
the end of the construction and to achieve another one, the situation has to be to re-established,

14. Some of which have already been described in Volumes 1 and 2.


15. Except for compressing the queue, if it has not burst in the meantime, during postprocessing to eliminate
duplicates.

namely to reset the elements that have been marked with a 1, to 0. Thus, a color array is needed
(0 or 1) and an array that memorizes the numbers of the elements whose color has been modified.
We call M ark(.) and M odi(.) these two arrays, set i = 0 and initialize M ark(.) to 0; the
above algorithm then becomes (with, as above, start = end = 1 and T ab(start) = iel):

Construction of a queue of elements with boolean coloring [1.11]


(1) Process the element of number iel and index start in the queue and mark it:
M ark(iel) = 1, then do i = i + 1 and M odi(i) = iel.
– Do start = start + 1.
– If one or more neighbors are to be added:
- do while for iel the number of the neighbor: if M ark(iel) = 0, end = end + 1, insert
the element at this index.
– Otherwise, if start > end, END.
– Go to (1).
– Reset M ark(.): Do for j = 1, i, M ark(M odi(j)) = 0.

The lines in red indicate how coloring is achieved. The algorithm is correct, but has an
obvious weakness, the need to manage the array M odi(.). To tackle this problem, we shall resort
to dynamic coloring. The same algorithm unfolds by initially setting c = 0 and M ark(.) = c,
where c is seen as a color, then the following algorithm is obtained:

Construction of a queue of elements with dynamic coloring [1.12]


– Do c = c + 1.
(1) Process the element of number iel and index start in the queue and mark it:
M ark(iel) = c.
– Do start = start + 1.
– If one or more neighbors are to be added:
- do while for iel the number of the neighbor: if M ark(iel) ≠ c, end = end + 1, insert
the element at this index.
– Otherwise, if start > end, END.
– Go to (1).

Again, the lines in red indicate how the coloring is used. Resetting the array M ark(.) is implicit,
therefore unnecessary; the array M odi(.), on the other hand, has become useless; the algorithm
is simpler, and therefore faster.

With this example, which can be applied to many situations, we have shown how a simple
technique, dynamic coloring, can be beneficial.
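The dynamic-coloring queue can be sketched in C over a triangle neighborhood graph. Here every neighbor is accepted (the Delaunay test is replaced by a stub) and elements are marked when queued rather than when processed, which also suppresses duplicates; these simplifications, and all names, are illustrative.

```c
#define NELT 16                 /* element capacity, illustrative */

static int mark[NELT];          /* never reset between constructions */
static int color = 0;           /* the dynamic color */

/* build_queue: grow a queue from seed through the neighbor graph
   (neigh[i][k] = k-th neighbor of element i, or -1 on the boundary);
   returns the number of elements queued */
static int build_queue(int seed, const int (*neigh)[3], int *queue)
{
    int start = 0, end = 0, k;
    color++;                    /* fresh color: mark[] is "clean" again */
    queue[end++] = seed;
    mark[seed] = color;
    while (start < end) {
        int iel = queue[start++];
        for (k = 0; k < 3; k++) {
            int v = neigh[iel][k];
            if (v >= 0 && mark[v] != color) {  /* not yet in the queue */
                mark[v] = color;
                queue[end++] = v;
            }
        }
    }
    return end;
}

/* demo_count: four triangles in a strip, i neighboring i-1 and i+1 */
static int demo_count(void)
{
    static const int neigh[4][3] = {
        { -1, 1, -1 }, { 0, 2, -1 }, { 1, 3, -1 }, { 2, -1, -1 },
    };
    int queue[4];
    return build_queue(1, neigh, queue);
}
```

Calling build_queue a second time works without any reset of mark[], which is precisely the point of the dynamic color.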

• Resorting to randomness

This is still a very simple technique that, when used, enables the performance to be improved
(speed and/or efficiency) in many algorithms. A random choice is applied to choose in which or-
der16 the entities (for example points) whose indices are contained in an array will be addressed.
This technique also allows choosing (randomly therefore) the process to apply in case there are
several possible choices.

In the first case, it must be taken into account that access to memory can penalize performance
since we are going to “hit” the memory almost everywhere and certainly not in a sequential man-
ner. As such, a good tradeoff has to be found between the expected improvement and possible
cache misses that might occur.

Finally, it should be kept in mind that, although they comprise of random aspects, algorithms
remain deterministic.
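A common way to obtain such a random processing order, while keeping the run deterministic, is to shuffle the index array once with a fixed seed; a sketch (Fisher-Yates, with rand() used for brevity):

```c
#include <stdlib.h>

/* shuffle: Fisher-Yates permutation of idx[0..n-1]; with a fixed seed
   the order looks random yet is reproducible from one run to the next */
static void shuffle(int *idx, int n, unsigned seed)
{
    int i;
    srand(seed);
    for (i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
}

/* demo_sum: shuffling must keep the set of indices intact */
static int demo_sum(void)
{
    int idx[10], i, s = 0;
    for (i = 0; i < 10; i++) idx[i] = i;
    shuffle(idx, 10, 42u);
    for (i = 0; i < 10; i++) s += idx[i];
    return s;                   /* a permutation preserves the sum */
}
```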

• Sorting

The sorting that will take place in our domain mainly concerns lengths (of edges) and qualities
(of elements). Sorting therefore involves floats referred to by an array of indices: the point is
to order these indices so that the corresponding floats are ranked, for the chosen criterion, from the
smallest to the largest or the other way around. Thereby, the first index points to the smallest
value (actually to the corresponding entity), and the last to the largest, or vice versa. There are a
number of sorting methods and references on this subject are numerous (any computer science-
related course should be a good start). Rather than paraphrasing any such source, we would
rather give in extenso a sorting program that we use frequently, in Fortran then in C; the sorting
is carried out in ascending order for the given criterion (array CRITER or criter depending on
the case and notation conventions).

We thus give these two programs and the reader is left17 to understand the hidden mechanics
of this particular sort that exchanges values. It should be noted that the array (of indices) as well
as that of the values are updated during the process.

 A sorting program written in Fortran

SUBROUTINE TRIRE3(CRITER,ARRAY,N)
C +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
C GOAL: SORT THE ARRAY ARRAY(1:N) ACCORDING TO CRITER(1:N)
C ---
C +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INTEGER N,ARRAY(N)
REAL CRITER(N)
INTEGER I,L,R,J,TAB
REAL CRIT

16. So to speak, since there is none.


17. Pencil in hand!

C
IF ( N .EQ. 1 ) RETURN
C
L = N / 2 + 1
R = N
2 IF ( L .LE. 1 ) GO TO 20
L = L - 1
CRIT = CRITER(L)
TAB = ARRAY(L)
GO TO 3
20 CRIT = CRITER(R)
TAB = ARRAY(R)
CRITER(R) = CRITER(1)
ARRAY(R) = ARRAY(1)
R = R - 1
IF ( R .EQ. 1 ) GO TO 10
3 J = L
4 I = J
J = 2 * J
IF ( J .LT. R ) THEN
GO TO 5
ELSE IF ( J .EQ. R ) THEN
GO TO 6
ELSE
GO TO 8
END IF
5 IF ( CRITER(J) .LT. CRITER(J+1) ) J = J + 1
6 IF ( CRIT .GE. CRITER(J) ) GO TO 8
CRITER(I) = CRITER(J)
ARRAY(I) = ARRAY(J)
GO TO 4
8 CRITER(I) = CRIT
ARRAY(I) = TAB
GO TO 2
10 CRITER(1) = CRIT
ARRAY(1) = TAB
END

 The same sorting program written in C

/* note: arrays are indexed from 1 to siz, mirroring the Fortran version;
   pass pointers offset by one (e.g. heap(n, criter-1, array-1)) */
void heap(int siz, double *criter, int *array)


{
int i,j,r,l,tab;
double crit;

if ( siz <= 1 ) return;

l = siz/2+1;
r = siz;

while ( r != 1 ) {
if ( l > 1 ) {
//--- save state l
l = l-1;
crit = criter[l];
tab = array[l];
}
else {
//--- save state r and put 1 in r
crit = criter[r];
tab = array[r];
criter[r] = criter[1];
array[r] = array[1];
r = r-1;
if ( r == 1 ) break;
}
j = l;
i = j;
j = 2*j;
while ( j <= r ) {
if ( j != r ) {
if ( criter[j] < criter[j+1] )
j = j+1;
}
if ( crit < criter[j] ) {
criter[i] = criter[j];
array[i] = array[j];
i = j;
j = 2*j;
}
else
break;
}
criter[i] = crit;
array[i] = tab;
}
criter[1] = crit;
array[1] = tab;
return;
}

One might ask how a sort can be parallelized, in order to see how the aforementioned program
must be transformed, with what effort and at the cost of which complexification of the algorithm
(which is already not very clear sequentially).

• Renumbering
Some methods for renumbering nodes or mesh elements are presented in Chapter 3, where
the goal is to optimize certain properties in matrices built to perform calculations, using the finite
element method. It should not be considered as a basic technique; this is an algorithm in its own right.

On the other hand, renumbering methods based on filling curves are indeed basic techniques
that can be used in many algorithms. Chapters 4 and 5 of Volume 1 and Chapter 9 of Volume 2
to which we refer the reader cover their relevance. Nonetheless, we shall return to this point in
Chapter 3 of this volume.

One should already recall the fact that the latter type of renumbering can be used to obtain at
the same time a random part and a sequential part (via the underlying sorting) in processing the
data thus renumbered.

• Dynamic threshold

Making use of sorting allows data to be processed in a specific order. If the sorting criterion
is a length, think of edges, sorting will provide a means to consider the edges from the smallest
to the largest or conversely. If the sorting criterion is a quality (in elements), it will be possible
to address the elements from the worst to the best. In this example as in the previous case, the
use of a dynamic threshold is an alternative (which, as a result, avoids a sorting).

The principle is simple; if ε is the target threshold, the entities will be processed in several
stages with an initial threshold set to α ε with α > 1, then, over the stages, the value of α will be
reduced until it reaches α = 1. This strategy often pays off, in terms of efficiency and time. With
regard to the criterion, the worst entities are addressed first and, overall, the gain is significant,
more significant than if the entities were considered from the first to the last.
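The multi-stage principle can be sketched in a few lines; the linear decrease of α, the capacity, and the counting stand-in for the real treatment are illustrative assumptions.

```c
/* sweep: process entities in nsweep passes with a threshold decreasing
   linearly from alpha0 * eps down to eps; the worst entities (largest
   criterion) are thus handled first. The "processing" is reduced to
   counting, as an illustrative stand-in for the real treatment. */
static int sweep(const double *crit, int n, double eps,
                 double alpha0, int nsweep)
{
    static int treated[16];     /* capacity is illustrative */
    int s, i, done = 0;
    for (i = 0; i < n; i++) treated[i] = 0;
    for (s = 0; s < nsweep; s++) {
        double alpha = alpha0 - (alpha0 - 1.0) * s / (nsweep - 1.0);
        for (i = 0; i < n; i++)
            if (!treated[i] && crit[i] >= alpha * eps) {
                treated[i] = 1; /* handled during this stage */
                done++;
            }
    }
    return done;                /* entities meeting the target threshold */
}

/* demo_sweep: three of the four criteria exceed the target eps = 1 */
static int demo_sweep(void)
{
    static const double crit[4] = { 2.5, 1.2, 0.8, 3.0 };
    return sweep(crit, 4, 1.0, 2.0, 3);
}
```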

• Hashing

A recurring question is to know whether a given entity, an edge (a segment), a face (a triangle,
etc.), is present in an array (of entities) of edges, faces, etc. Since an edge is defined by a couple
of indices and a face by such a triplet, the question becomes: is a given couple (triplet) present
in an array of couples (triplets)? A direct method is to compare the couple (triplet) with every
couple (triplets). This method is quadratic and therefore inadmissible. The hashing provides a
solution which, in practice, is somewhat linear. Another application (already seen in Volume 1)
is to perform a hashing to quickly construct the array of edges (faces) of a mesh, which is a
broader problem than the previous one, but that includes it. The hashing can also be used as a
quick means to establish the neighborhood relationships (by edges or faces, depending on the
space dimension) between the elements of a mesh by constructing the edge (faces) array for this
purpose18. Beside these examples, there are obviously many other applications for hashing19.

Hashing has been extensively described in Volume 1 (Chapter 4), consequently only a quick
description will be carried out here. Consider the example of the construction of the array of
edges in a mesh. The point is therefore to manipulate pairs of indices (those of the ends of each
edge). The question is to find out if a given couple is already in the array or needs to be added,
without having to iterate the whole array and compare, one by one, the pairs of indices.

18. In fact, a bit more information has to be memorized than for the simple construction of this array.
19. A widespread idea in teams is that anything and everything can be hashed to full advantage; this will be
seen further on.

To do this, any pair (i, j) will be associated with two values, one key and a failsafe value. The
simplest key is keyij = i + j; a possible failsafe value is the integer min(i, j). Two identical
couples (i1 , j1 ) and (i2 , j2 ), with therefore i1 = i2 and j1 = j2 , or i1 = j2 and j1 = i2 (if
we have the notion of orientation), necessarily have the same key (and the same failsafe value).
On the other hand, two couples having the same key are not necessarily the same, we just have
i1 + j1 = i2 + j2 . As a consequence, in order to find identical couples, all of the couples with
the same key have to be iterated by verifying the value of the failsafe value at the same time. As
several couples may have the same key, it is necessary to define a linked list. The key is the entry
point into this linked array and iterating over the links allows the decision. If the couple is found,
the edge already exists in the array; otherwise, it has to be added to the array (it should
be noted that, in this way, we are counting the edges).

The sizing of the arrays requires knowing the number of edges (or an upper bound) for the
edge array strictly speaking and depends on the key function for the link array. The choice of
the key function depends equally on the size of the array, and on its filling rate as well as on the
number of collisions (thus the length of the chaining for a given key value).
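The find-or-add operation can be sketched as follows, with the key (i + j) mod HSIZ and a walk along the chain comparing both endpoints (which subsumes the failsafe value); the table sizes and names are illustrative.

```c
#define HSIZ 64                 /* number of key values, illustrative */
#define MAXE 128                /* edge capacity, illustrative */

static int head[HSIZ];          /* entry point per key; 0 means empty */
static int nxt[MAXE + 1];       /* chaining; edge indices are 1-based so
                                   that 0 can mean "end of chain" */
static int e1[MAXE + 1], e2[MAXE + 1];
static int nedg;                /* edges stored so far */

/* edge_index: find edge (i, j) or add it; returns its 1-based index */
static int edge_index(int i, int j)
{
    int key = (i + j) % HSIZ, e;
    for (e = head[key]; e != 0; e = nxt[e])
        if ((e1[e] == i && e2[e] == j) || (e1[e] == j && e2[e] == i))
            return e;           /* the couple is already stored */
    e = ++nedg;                 /* append a new edge... */
    e1[e] = i; e2[e] = j;
    nxt[e] = head[key];         /* ...and link it at the chain head */
    head[key] = e;
    return e;
}
```

Colliding keys, e.g. (3, 7) and (2, 8), land in the same chain but remain distinct edges.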
• Resizing a resource (now insufficient)
To unroll an algorithm, the necessary memory resources must be allocated, namely by prop-
erly sizing the arrays (regardless of how, in the end, they are implemented) that are used. For
some arrays, a good estimate can be obtained, namely an upper bound or (we know) the exact
value of the necessary size. For others, this is not the case. However, even if in general, we
have a more or less accurate idea of the necessary resource, there is always the possibility of
overshooting what was predicted. What can be done concerning the way arrays are allocated?
Two schools can be identified: one, slightly old perhaps, in which a single array, known as
a super-array, is allocated via the allocation system available in the language being used
(malloc in C) and the useful arrays are then managed by “hand” (which is tantamount to defining
their sizes and (starting) addresses in the super-array); and the other, in which each array is
allocated via the allocation system. Independent of the method for managing memory, any program
normally constituted must detect any overflow, avoid it and properly terminate20 by notifying the
user (by means of a message or an error code).

It should be noted that the perception of a problem of insufficient sizing is not the same for
everyone and depends on the context in which it occurs, a context that conditions the nature of
the solution to be provided.

In super-array mode, the size of the latter is either a default value21, a value specified by
the user or, indeed, all of the available memory. In the latter case, it may have an adverse effect
on the cost. For example, if arrays (thereby of very large sizes) are to be initialized, this (useless)
time may not be marginal. Otherwise, we see systems that are only concerned with the parts

20. Certainly not with a segmentation fault or any other funny name announcing the thing or the likely effect
of the thing.
21. Decided at best by software designers based on what they know about what users most often do and
what is the usual size of their applications.

of the arrays actually used and the fact that arrays are oversized is transparent. Still in super-
array mode, if, for a prescribed size, the algorithm fails due to insufficient size after having “lost” a
reasonable amount of time (a few minutes), one trivial solution consists of restarting the process
by increasing the size. A few minutes (of computation) were wasted but the solution only took
a handful of seconds and, typically, the algorithm designer did nothing and the algorithm is
necessarily simpler. As a matter of fact, developing an automatic system (transparent to the user)
capable of resizing the resource and then of replacing the current data back into that new resource
to be able to continue the process is not trivial. For some arrays, they merely have to be recopied
(the copied values remaining relevant), for others, for pointers for example, the fact of copying
in itself can result in these pointers being wrong in the new memory environment. Therefore, the
re-creation of a consistent system is a difficult (programming) operation22.

In the other mode where several arrays are allocated (malloc in C), if an array overflows,
it is reallocated (realloc in C), for example, to twice its current size23. Two cases may
then arise. The new array may simply be the previous one enlarged in place; it then
starts, in memory, at the same location and its contents do not need to be copied. The
other way around, this copying is necessary. It should be noted that all of this is automatic (the
system knows how to do it) but the issues due to the presence of pointers (which would become
erroneous) remain to be solved.
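A minimal doubling scheme in C; the dynarr type and the push function are illustrative, and note that realloc may move the block, which is exactly why raw pointers into the old storage become stale.

```c
#include <stdlib.h>

typedef struct { int *val; int siz, max; } dynarr;

/* push: append v, doubling the capacity on overflow; returns 0 on
   allocation failure so the caller can terminate properly */
static int push(dynarr *a, int v)
{
    if (a->siz == a->max) {
        int  nmax = a->max ? 2 * a->max : 4;
        int *p    = (int *)realloc(a->val, nmax * sizeof(int));
        if (p == NULL) return 0;   /* report the overflow, do not crash */
        a->val = p;                /* realloc copied the contents if moved */
        a->max = nmax;
    }
    a->val[a->siz++] = v;
    return 1;
}

/* demo_push: 1,000 values through successive doublings */
static int demo_push(void)
{
    dynarr a = { NULL, 0, 0 };
    int i, s = 0;
    for (i = 0; i < 1000; i++)
        if (!push(&a, i)) return -1;
    for (i = 0; i < a.siz; i++) s += a.val[i];
    free(a.val);
    return s;
}
```

The doubling keeps the amortized cost per insertion constant, which is what footnote 23 alludes to.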

1.2. Internal data structures

An internal structure is specific to an algorithm and, as a result, depends on the nature of


the latter. It is based on a set of elementary structures known as basic structures (array, list,
stack, queue, pointer, grid, tree, etc.) and on the techniques associated (access, updating, etc.)
with these components. As the name suggests, an internal structure is not known to the exterior.
Moreover, this will fuel the classic debate about whether a quantity must be stored in the struc-
ture (at the cost of increased memory space) or computed on the spot; this debate identifies the
difference in cost between a memory access and a computation.

1.2.1. What should be stored in an internal structure and how?

Whatever the situation, the internal data structure contains a certain number of compulsory arrays, essentially an array of coordinates (those of the vertices) and an array of elements (the indices of their vertices), although there are ways to do it differently. There are two other kinds of necessary arrays. Some are ephemeral and will be of no interest here. The others are used throughout the course of the algorithm; these are the ones we shall discuss, noticing that some resources are common to several methods.

22. It should be noted, nonetheless, that this technique for recovering on the spot a lack of resources can be
seen in some software programs.
23. A “small” resource will not be allocated on the spot every time it is requested. The doubling of the
current resource makes this approach very effective.

The definition of an internal structure is paramount and, in general, not unique. Several possible solutions will therefore be considered to store a particular entity when a choice has to be made. In Volume 1, regarding triangulations, it is taken as self-evident that such a triangulation (of simplices) is described as a list of elements and, for each of them, a list of vertices, their coordinates being stored in another array. In Volume 2, with regard to meshes, other elements appear (including high-order elements) but the same form of organization is kept: a list of elements and, per element, a list of vertices and nodes (non-vertices, if any) whose coordinates are in another array. This choice of organization, natural and particularly simple as it is, can be discussed; that is what follows.

• How can a triangle be defined

From a geometric point of view alone, a triangle (of the first-degree in two or three dimen-
sions) is completely defined by the data of its three vertices, thus there are two ways to describe
it:
– via three integers that are the indices of its vertices and a global array of coordinates. The
index i therefore refers to the coordinates coor(1 : 2, i);
– via three pairs of coordinates (and here, there is no need for a global array of coordinates),
therefore we just have the set (x1 , y1 ), (x2 , y2 ) and (x3 , y3 ).

The first method has the advantage of being compact in memory because the coordinates of a
vertex shared by several elements are only stored once. Conversely, in the second method, the
coordinates are duplicated as many times as a given vertex appears in an element. In addition,
in the first method, any change to a coordinate is implicitly propagated to all the elements con-
cerned. In the other method, the coordinates should be changed in all the elements involved,
in order for the mesh to remain consistent. However, for the benefit of this method, there is no
indirection, which can speed up some algorithms (for example, for visualization).
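The two descriptions can be sketched in C as follows (type and field names are ours; note that C indexing is 0-based, unlike the 1-based notation coor(1 : 2, i) used in the text):

```c
#include <assert.h>

/* First method: a triangle stores three vertex indices into a
   global coordinate array shared by the whole mesh. */
typedef struct { int v[3]; } TriByIndex;

/* Second method: a triangle stores its own three coordinate pairs;
   shared vertices are duplicated in every triangle that uses them. */
typedef struct { double x[3], y[3]; } TriByCoord;

/* Signed area computed from the indexed representation:
   one indirection per vertex through the global array coor[i][0..1]. */
double area_indexed(const TriByIndex *t, const double coor[][2])
{
    const double *a = coor[t->v[0]], *b = coor[t->v[1]], *c = coor[t->v[2]];
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]));
}

/* Same computation without any indirection. */
double area_embedded(const TriByCoord *t)
{
    return 0.5 * ((t->x[1] - t->x[0]) * (t->y[2] - t->y[0])
                - (t->x[2] - t->x[0]) * (t->y[1] - t->y[0]));
}
```

Moving a vertex in the first layout means updating one entry of coor; in the second, every TriByCoord sharing that vertex must be updated.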

A third method, close to the second and well adapted to vector processing, consists of defining
six arrays, two per vertex, one storing the first coordinate, the other the second, that is:
– an array for the x-coordinate of the first vertex (s1) of each of the (ne) elements: [x_{s1,1}, x_{s1,2}, ..., x_{s1,ne}];
– an array for the y-coordinate of the first vertex: [y_{s1,1}, y_{s1,2}, ..., y_{s1,ne}];
– ditto for the second vertex (s2), namely [x_{s2,1}, x_{s2,2}, ..., x_{s2,ne}] and [y_{s2,1}, y_{s2,2}, ..., y_{s2,ne}];
– ditto for the third vertex.

All geometric operations involving the coordinates of the elements can therefore be carried
out globally with these six large vectors, and a vector machine will process the values by groups
of 64 as quickly as a scalar machine would process a single one. See also below, in a more
abstract way, the link between memory (reading and writing) and the organization of the infor-
mation stored in a structure (internal, external or arbitrary).
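A sketch of this layout (array names are ours): each of the six arrays is accessed with unit stride, which is exactly what vector (SIMD) processing requires:

```c
#include <assert.h>

#define NE 4   /* number of elements, for the example */

/* One array per coordinate per vertex: six arrays in two dimensions. */
double xs1[NE], ys1[NE], xs2[NE], ys2[NE], xs3[NE], ys3[NE];

/* Compute twice the signed area of every triangle in one pass;
   the loop body only reads consecutive entries of each array,
   so a vectorizing compiler can process many elements per instruction. */
void twice_areas(double out[NE])
{
    for (int i = 0; i < NE; i++)
        out[i] = (xs2[i] - xs1[i]) * (ys3[i] - ys1[i])
               - (xs3[i] - xs1[i]) * (ys2[i] - ys1[i]);
}
```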

• How can a mesh be defined

At a minimum, the internal mesh data structure contains the coordinates of the vertices and the list of the vertex indices of every element. However, to be effective, it optionally contains other information, possibly in large amounts. Some of it is of general interest, the rest is more specific and inherent to a particular method, as will be seen further on. Of general interest, we find the array of neighbors (by edge in two dimensions, by face in three dimensions) and a germ (or seed) indicating, for every vertex, an element of its ball. For a high-degree mesh, it is the list of nodes that will be considered, together with their coordinates.

• A brief conclusion

Let us retain that there is no single way to define an internal mesh structure, which, we recall, is known only to the meshing algorithm (construction, modification, visualization, etc.) under consideration, and depends on it. Let us point out that, apart from the natural information (coordinates, list of elements), the structure contains all the information (values, arrays, etc.) enabling the algorithm to be efficient (robust and fast). Finally, the external mesh structure preserves only what is strictly necessary and is visible to external access; it is the one that operates as an interface, for example between the mesh world and the computation world.

1.2.2. Internal structures, method by method

We shall now look, method by method (meshing or optimization), at the information specific to each of them and the way24 it is structured.

• Frontal-type method

The frontal method, in its conventional version (Volume 2), consists of starting from a front (the discretized boundary of the domain) and building elements by relying on the items (edges or faces, depending on the dimension) of the front and introducing new vertices. The front thus evolves (toward the interior of the domain) and the method terminates when the front is empty. The crucial operation is to know whether the point being added or the element being built is valid. This is determined by detecting the absence of intersection between the current mesh and the items one seeks to build. It is therefore essential to locate the point (the element under consideration) with respect to existing items (in principle, the region ahead of the front is empty). A grid is therefore an element of the answer: the points are coded inside the grid and the potential conflicts are quickly known (filter effect). With each point (already inserted), a germ (an element of the ball, open or closed, of the point) is associated and, from this germ, the balls are found, which requires knowledge of (the array of) the neighbors as well as coloring. Next, the front is going to be manipulated (and developed); the most flexible structure possible thus has to be implemented to enable insertions (a new item to be stored) or deletions (a front item that became inactive), so thinking of a queue is quite legitimate.

24. Following the previous discussion.



• Delaunay-like methods
The analysis of a Delaunay-like method and of its various steps makes it possible to identify the useful data as well as the basic techniques to be implemented. The Delaunay criterion (inherent to these methods) involves the balls circumscribed about the elements; it is thus not surprising that every element is associated with the center of its ball and the radius of the latter25. The construction of cavities (Volume 1) being done by looking at the neighbors of a (starting) element, the array of neighbors, per element, will naturally be found. Localization problems (finding out in which element a given point is located) lead to the use of a grid (or a tree) as well as, for every vertex, the data of a germ (an element of the ball of the point). A grid is also useful for filtering a point with respect to the existing vertices. A point marker (used to avoid losing one during insertion) and an element marker (useful for building cavities), therefore two arrays of colors, are also implemented. Moreover, throughout the process, it is necessary to quickly know whether an edge (face) is a boundary one or not (and thereby constrained or not). This information is obtained by storing these entities in an array accessed through hashing.
• Tree-based method
In the tree-based method (quadtree or octree, depending on the dimension), seen not as a mere localization structure but as a mesher (Volume 2, Chapters 4 and 5), two types of entities are used: vertices and cells (quadrants or octants). During the construction of the tree, the cells are subdivided in accordance with the mesh of the domain boundary. Each cell contains a pointer to its parent and 4 (respectively 8) pointers to its children, in order to be able to traverse the tree by moving upwards and downwards. Terminal cells, or leaves, point to a linked list that contains the geometric entities (vertices, edges, triangles or quadrilaterals) of the surface intersecting these cells. In practice, a mesh is composed of the array of vertex coordinates and of the index of the root cell, the latter pointing recursively to all the others. The structure is thus not required to store the exhaustive list of all its elements (cells), but only the first one. When inserting a triangle into the tree, the list of terminal cells intersecting this triangle is found and the triangle is added to the linked list of each such cell. During the refinement and balancing phases, the cells are subdivided according to geometric criteria (two vertices far too close inside the same cell) or to respect the balancing rules. Once the tree is complete, the mesh elements are obtained by subdividing the terminal cells and are stored in a conventional way.
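A minimal sketch of such a cell in C (names are ours): each cell points to its parent and children, and the leaves carry a linked list of surface entities:

```c
#include <assert.h>
#include <stddef.h>

/* Linked list of surface entities (here just an index) stored in a leaf. */
typedef struct Entity {
    int            idx;   /* index of a vertex, edge or triangle */
    struct Entity *next;
} Entity;

/* A quadrant: parent link, four children (NULL for a leaf),
   and the entity list, meaningful only for leaves. */
typedef struct Cell {
    struct Cell *parent;
    struct Cell *child[4];
    Entity      *entities;
} Cell;

/* A leaf is a cell with no children. */
int is_leaf(const Cell *c)
{
    return !c->child[0] && !c->child[1] && !c->child[2] && !c->child[3];
}

/* Depth of a cell, found by moving upwards to the root. */
int depth(const Cell *c)
{
    int d = 0;
    while (c->parent) { c = c->parent; d++; }
    return d;
}
```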
• Optimization method (local)
An important part of a local mesh optimization algorithm (Volume 2) consists of processing the balls of the vertices (for point moves) and the shells of the edges (for flips). The construction of these topological entities relies on the neighborhood relationships (per face) between the elements; the array of neighbors is thus necessarily accompanied by a coloring of the elements, to avoid inserting the same element several times in the array being built (ball or shell). Optimization strategies are based on an a priori quality criterion involving the elements. A quality criterion is also defined involving the vertices (taking into account the quality of the elements of

25. Which could be calculated on the spot at the expense of an extra time cost, but with immediate gain
in memory or that could be replaced by a calculation of determinants, the latter being a solution that we
consider less robust.

their ball). Keeping and maintaining these two quality values avoids recomputing them on the spot (a relatively expensive operation) and spending too much time on a vertex or an element that is already of good quality. Coloring can be used for the same purpose. In addition, sorting the quality array allows the algorithm to be directed (with or without the use of dynamic thresholds, as seen above).

• Internal structures

We list here the different structures (essentially seen as arrays) whose relevance was shown in the analysis of the methods. The reader will easily see, for a given array, which methods are involved.

We denote by ne the number of elements, np the total number of vertices (and/or nodes),
noe the number of nodes per element, nf the number of faces of an element and by dim the
dimension of the space.

 Coor(1 : dim, 1 : np), the array of the coordinates of the vertices (and non-vertex nodes).
 Elem(1 : noe, 1 : ne), the array of the elements giving, for each one, the list of its nodes.
 Neighbor(1 : nf, 1 : ne), the nf neighbors of a given element. A zero value indicates a boundary face, across which there is no neighbor. Note that, for mixed meshes, the number of faces per element is not constant.
 Grid(0 : ., 0 : ., 0 : .), a three-index (three-dimensional) array complemented by its link array: the localization grid, used to localize items but also for filtering purposes. Here, the indices obviously start at the value 0.
 Tree(.), the tree itself, according to the method retained to build it.
 Germ_element(0 : np), the index of an element (germ or seed) for each vertex. The elements are numbered from 1 to ne, and the fact that the array starts at 0 allows a color to be found for a virtual or non-existent element (which, precisely, has index 0, see the neighbors array above) without having to explicitly verify that the number is zero.
 Color_vertex(1 : np), the dynamic coloring of the vertices.
 Color_element(0 : ne), the dynamic coloring of the elements. This coloring is useful for the construction of cavities but also, in plain words, in the search for the (topological) balls of the vertices and the shells of the edges. The index starts at 0 for the reason given above.
 Center(1 : dim, 1 : ne), the coordinates of the center of the element ball.
 Radius(1 : ne), the radius of the (geometric) element ball.
 Front(1 : dim, .), the array of the faces of a front. This array can be seen as a queue, sorted or not.
 Cont_edge(1 : 2, .) and Cont_face(1 : 3, .), the arrays containing the constrained edges and faces (here, triangular), accompanied by a hash.
 Sub-Dom(1 : ne), a value per element indicating membership in a sub-domain (a connected component). This information can be reduced by giving only one germ per sub-domain, the elements being obtained by neighboring (as long as a boundary is not crossed).
 Edge_edge(1 : 2, .) and Face_face(1 : 3, .), the arrays of the edges (of the triangular faces), if necessary only.
 Etc., depending on the specific needs of the algorithm.

A few remarks on these arrays. The grid can be replaced by a tree in order to balance the filling of the cells. The arrays storing the centers and the radii of the balls may either be grouped into a single one (CenRad(1 : 4, 1 : ne)) or not exist at all, the centers and radii being evaluated on the spot. This is the debate of storage versus computation (memory and access versus computational time).

It is clear that these arrays (apart from the first two) either are of no interest to the exterior or are easy26 to recreate, and therefore are not intended to be found in the external data structure.
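Gathered into a single internal structure, these arrays could be sketched as follows in C (0-based indexing and names of our choosing; only a subset of the arrays above is shown):

```c
#include <assert.h>
#include <stdlib.h>

/* A sketch of an internal (in-memory) simplicial mesh structure in
   three dimensions: the two compulsory arrays plus a few optional ones. */
typedef struct {
    int     np, ne;      /* number of vertices and of tetrahedra      */
    double *coor;        /* 3 * np : vertex coordinates               */
    int    *elem;        /* 4 * ne : vertex indices of each element   */
    int    *neighbor;    /* 4 * ne : neighbor across each face, -1 on
                            the boundary (0 plays this role with the
                            1-based numbering used in the text)       */
    int    *germ;        /* np : one element of each vertex's ball    */
    int    *color_elem;  /* ne : dynamic coloring of the elements     */
} IntMesh;

/* Allocate the arrays for np vertices and ne tetrahedra. */
int int_mesh_alloc(IntMesh *m, int np, int ne)
{
    m->np = np; m->ne = ne;
    m->coor       = malloc(3 * np * sizeof(double));
    m->elem       = malloc(4 * ne * sizeof(int));
    m->neighbor   = malloc(4 * ne * sizeof(int));
    m->germ       = malloc(np * sizeof(int));
    m->color_elem = malloc(ne * sizeof(int));
    return m->coor && m->elem && m->neighbor && m->germ && m->color_elem;
}
```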

• Visualization algorithms

For this type of algorithm, the internal structures concern the organization chosen to describe a mesh (list of elements, list of vertices per element and array of coordinates) and a solution carried by this mesh, but also structures more directly related to the underlying graphics aspect. Given this specificity, these issues will be addressed in Chapters 4 and 5.

1.3. External data structures

An external structure is the means of communication with the exterior. To begin with, we give in full an ASCII example of a mesh file (a small mesh of two tetrahedra and five pyramids).

MeshVersionFormatted 2
Dimension
3

Vertices
9
0. 0. 0. 0
1.8 0. 0. 0
1.8 1.8 0. 0
0. 1.8 0. 0
0. 0. 1. 0
1.8 0. 1. 0
1.8 1.8 1. 0
0. 1.8 1. 0
.9 .9 .5 0
Tetrahedra
2
1 2 6 9 2

26. Think of the arrays of neighbors, its storage in the external structure unnecessarily puts an additional
burden on the latter, its re-creation, via a hash, remains a reasonable operation.

1 6 5 9 2
Pyramids
5
1 2 3 4 9 1
7 6 5 8 9 1
3 4 8 7 9 1
2 3 7 6 9 1
1 5 8 4 9 1
End

We then spell out the underlying philosophy: keywords, fields, etc. The philosophy of external storage is to avoid any redundancy and to be as compact as possible, while containing enough information to reconstruct all the useful information (which, for some of it, was in the internal structure but was not saved, partly for memory reasons) for the algorithm under consideration.

The MESHB file format27 (see the example above) is the one we have defined to store meshes. The main idea is to rely on a set of keywords, each used to describe a mesh entity. A keyword is a simple code (a string of characters) that identifies the type of data that is going to be provided (Vertices, Tetrahedra, Pyramids, Edges, etc.). After the keyword come the number of entities to be described and then one "line" per entity; see above, and below for the description of the edges.

Edges
5
1 2 0
1 5 100
6 7 0
6 2 0
3 4 100

example in which five edges are described by the indices of their extremities and a reference.
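The keyword principle can be illustrated by a minimal reader (a simplified sketch of ours, not the libMeshb API): scan the stream for a keyword, then read the announced number of entities.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Scan an ASCII mesh stream for a keyword, then read the entity count
   that follows it. Returns the count, or -1 if the keyword is absent.
   Simplified sketch: a real reader also handles comments, references,
   the binary mode, etc. */
long find_keyword(FILE *f, const char *kw)
{
    char token[256];
    rewind(f);
    while (fscanf(f, "%255s", token) == 1)
        if (strcmp(token, kw) == 0) {
            long n;
            if (fscanf(f, "%ld", &n) == 1)
                return n;   /* stream is now positioned on the first entity */
            return -1;
        }
    return -1;
}
```

Because the keywords may appear in any order, such a reader can rewind and scan for each keyword it needs, as done here.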

The file is made up of a succession of keywords whose order does not matter. Each keyword is associated with a field of values. Readers are invited to visit the website of the libMeshb library for a comprehensive description of the format and of the freely usable programs handling this type of file. In addition to a mesh, the format proposes an organization to store solutions in a SOLB file, as in the example hereafter. The format, which evolves along with technologies and novelties, maintains backwards compatibility.

MeshVersionFormatted 2
Dimension 3
SolAtVertices
5

27. B file for binary.



1 2 % 1 = A single field, 2 = Vector-type field


1. 6. 8.
2. 4. 8.
3. 2. 6.
4. 5. 6.
5. 2. 6.
End

Files can be stored in ASCII or in binary format. The binary format is preferred in terms of performance: current flash memory (SSD) reaches speeds of several GB per second, while the ASCII format cannot exceed a few tens of MB per second because of the time taken to interpret the data. The result is a factor of 100 in speed in favor of binary and, given that the continuing acceleration of SSDs and networks does not benefit the ASCII format, this factor is expected to increase further. In addition, reading and writing the same file in ASCII are very complex operations to parallelize, because of the unpredictable length of the data, whereas parallel access is trivial in binary. For example, the number 2.5 in binary will always take 8 bytes if stored in double precision, whereas in ASCII it could perfectly well be written as 2.5000000, or as 2.5 followed by a series of white characters for the purpose of aligning the data in columns. Consequently, if an ASCII file is split into two equal parts, with the same number of bytes, in order to be processed in parallel by two processors, the task receiving the second block has no idea where its data start: they could begin right in the middle of a line or of a real number. This would require post-processing to re-assemble, at least partially, the data decoded in parallel.
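The fixed-size property of binary storage, as opposed to the variable length of ASCII, can be checked directly (a small sketch; it assumes the usual 8-byte IEEE double):

```c
#include <assert.h>
#include <stdio.h>

/* Number of bytes produced when a double is written in ASCII with a
   given format: it depends on the formatting chosen, so record
   boundaries become unpredictable. In binary, the same value always
   occupies sizeof(double) bytes. */
size_t ascii_len(double v, const char *fmt)
{
    char buf[64];
    return (size_t)snprintf(buf, sizeof(buf), fmt, v);
}
```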

1.4. Data structures and memory access

In general, the organization of data in memory should not be solely a function of the requirements of a particular algorithm; it must also take into account the physical constraints of the hardware. It is therefore necessary to dive a little into the technical aspects, because the hardware and its architecture (multi-core, GPU, but also simply sequential) have a preponderant influence on performance.

The performance of the computer main memory (DRAM) depends on two main factors: the clock speed in Hz, that is, the number of words that can be read or written in a second, and the width of these memory words in bytes. These values should not be confused with the frequency and bandwidth of the memory bus, which connects the DRAM to the CPU; the latter figures are often put forward by manufacturers but hide a much more complex reality. The core memory frequency, on the other hand, has not changed for several decades and stagnates at around 100 MHz. To improve performance, memory manufacturers have no other solution than to widen the memory word. This has a non-negligible impact on how memory is managed, because a system based on DDR4-2133 modules (as in a simple laptop) is capable of transferring 133 million words of 128 bytes per second. As a result, when the system accesses memory to read one byte or 128 bytes, the access time is the same. This indicates the importance of grouping memory accesses consecutively and, therefore, of storing the information about an entity in an appropriate way.

As has just been said, the cost of an access is constant up to 128 bytes. As such, it may be useful to store the data related to an entity in a consecutive fashion. As seen above, simply for the coordinates, one can have a single array containing all the coordinates of each vertex, or one array per coordinate. In the first case, access to the x-component of a vertex triggers the reading of the neighboring memory cells, which contain the y- (and z-) components.

The reasoning can be taken a step further by storing the vertices in the form of an array of structures (AOS), each structure containing all the data related to one vertex: coordinates, metric, material reference, germ, degree (number of elements of its topological ball), etc. The underlying idea is that, again, access to all this data is done simultaneously, therefore at the same cost as access to a single data item. Such a structure is optimal when the algorithm needs all of the data. On the other hand, if the algorithm traverses the structure to retain only one integer value, it triggers the reading of 128 bytes to keep only 4 of them (the integer value). It is then advantageous to store the data in the form of a single structure containing multiple arrays: an array for each coordinate, a reference array, etc. This is then referred to as a structure of arrays (SOA).

What layout (AOS or SOA) should be chosen? Neither is optimal in all situations, and the designer is left with the choice of organizing the data according to the access patterns of his or her algorithms. In some simple cases, one of the layouts is clearly more efficient for most algorithms. In other cases, the access patterns change and it is impossible for a single storage method to be optimal. It is then necessary to change the type of layout during execution, using AOS ↔ SOA conversion routines, when the program enters a phase where the nature of the data accessed is recognized as being of one type rather than the other. It should be noted that this change of data organization, called transposition, is not transparent from the programmer's point of view. For example, in the C language, mesh->vertex[i].ref in AOS mode becomes mesh->vertex.ref[i] in SOA mode.
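A sketch of the two layouts and of a transposition routine between them (type names are ours):

```c
#include <assert.h>

#define NP 3   /* number of vertices, for the example */

/* AOS: one structure per vertex, all its data contiguous in memory. */
typedef struct { double x, y, z; int ref; } VertexAOS;

/* SOA: one array per field, giving unit-stride access to a single field. */
typedef struct { double x[NP], y[NP], z[NP]; int ref[NP]; } VerticesSOA;

/* AOS -> SOA transposition: one pass over the vertices. */
void aos_to_soa(const VertexAOS *in, VerticesSOA *out)
{
    for (int i = 0; i < NP; i++) {
        out->x[i]   = in[i].x;
        out->y[i]   = in[i].y;
        out->z[i]   = in[i].z;
        out->ref[i] = in[i].ref;
    }
}
```

Scanning only the references touches every byte of the AOS array, but only the compact ref array in the SOA layout, which is the point made above.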

This discussion repeats itself if we now take the case of a square matrix: how should it be stored, by row, by column, or even by meta-row (block of rows), etc.? This non-trivial question falls outside the scope of this book.


∗ ∗

In this chapter, we have tried to show how to use basic, and thus conventional, data structures (arrays, lists, linked lists, trees, grids, etc.), as well as the basic techniques (sorting, hashing, coloring, etc.), to develop efficient algorithms, not only for the purpose of speed but also from the point of view of optimizing the necessary memory space. Performance is achieved when, for the most part, linear (or near-linear) complexities are obtained, whereas, in some cases, a naive (perhaps simpler) implementation would rather be quadratic. However, it should be stressed that performance, in terms of complexity, is not really obtained if memory management is not finely tuned so as to limit the cost of memory accesses. Naturally, this potential negative impact is not visible if we are just addressing small-sized cases or, at the very least, cases with a size not

hampering the operations related to accesses. To properly take these issues into account, we saw that a minimum of knowledge of how reads and writes occur is necessary, sometimes going so far as to look at how many bytes are affected by a given operation and whether it is possible to influence this factor by organizing the data structures used within an algorithm (the internal structures) differently. Regarding the external data structures, which act as interfaces between a given algorithm and the outside world, we saw the limitations of reading and writing in ASCII mode and all the benefits brought by reading and writing in binary mode. In addition to speed, the necessary memory resource is significantly reduced. This mode is therefore the only one really suitable for processing large-sized problems.

We took the liberty of promoting the mesh and solution-field formats that we developed, pointing out the existence of a library of software programs for their processing (writing, reading, converting, etc.).
Chapter 2

Mesh Transformations, Patching, Merging and Immersion

Numerical simulations involve objects (domains) of varied geometric nature, but many of them are, in practice, assemblages of sub-objects (sub-domains), some of which can be deduced from others by a simple geometric transformation: symmetry, translation, rotation, etc.

Automatic meshers are capable of processing arbitrary geometries but, in the case where the geometry under consideration has special properties (such a part is the symmetrical counterpart of another), the result of an automatic mesher does not, in general, reproduce this particularity. In other words, the automatic mesher would not be able to do what a human being would naturally do. To achieve this result, the only method is to decompose the domain into different sub-domains, defined by taking into account the existing properties of geometric repetitiveness, to mesh only the strictly necessary parts, to infer therefrom, by geometric transformation, the other parts and, in the end, to join all parts together in order to build the mesh of the entire domain. In this way, this mesh automatically acquires the repetitiveness properties of the full domain.

If, for some reason – for instance due to symmetry – the solution to a problem is only calcu-
lated on part of the domain, it can prove interesting to visualize this solution (Chapter 5) over the
full domain. This is tantamount to saying that the solution must be transported from the initial
geometry to the geometry resulting from the transformation.

First, the different geometric transformations of a mesh are presented, followed by the decom-
position into simplices of non-simplicial elements, and of higher degree elements into elements
of the same degree. Then, a quick and robust method is described to join two meshes presenting
a common area (border) and to join together the solutions associated with each one. Finally, we
indicate how to merge two meshes that present a common (arbitrary) area and the operation of
immersion of one mesh into another is described.

Meshing, Geometric Modeling and Numerical Simulation 3: Storage,


Visualization and In Memory Strategies, First Edition. Paul Louis George,
Frédéric Alauzet, Adrien Loseille and Loïc Maréchal.
© ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

2.1. Geometric transformations

We examine conventional geometric transformations, the simplex decomposition of a non-simplicial element, the conversion of a non-simplicial mesh into a simplicial mesh, the splitting of an element to be used for joining, as well as the decomposition of high-degree elements.

2.1.1. Conventional geometric transformations

To simplify, we first look at meshes whose elements are of type P1. In this type of element, the nodes coincide with the vertices and there are no other nodes (on edges, on faces or inside elements).

Given such a mesh, the goal is to build a new mesh by applying a classic geometric transformation (isotropic or anisotropic symmetry, translation, rotation, dilation (contraction) or, more generally, any transformation described by a matrix whose form is explicitly given [Rogers, Adams-1989]).

If the transformation behaves like a positive isometry, it only affects the position of the points
(here the vertices) of the mesh; the connectivity of the elements (the local enumeration of their
vertices is identical to that of the initial mesh) remains unchanged.

Conversely, a transformation that behaves like a negative isometry affects both the position of the mesh points (the vertices) and the numbering (connectivity) of the vertices of the elements, so as to preserve positive surfaces (in two dimensions) or positive volumes (in three dimensions), given that surface meshes do not have this constraint. To achieve this result, a permutation of the list of the element vertices must be performed (for example, the triangle with vertices (1, 2, 3) will be transformed into the triangle with vertices (1, 3, 2), or any other odd permutation thereof, see below).
The different transformations of interest here are as follows:
– symmetries with respect to a line or plane;
– translations (shifts) of a given vector;
– isotropic or anisotropic dilations around a dilation center with given dilation coefficients;
– rotations of a given angle around a point or axis (in three dimensions);
– general transformations (data explicitly in the form of a matrix).

In addition, any combination of several of these operators can be used to define a new transformation, which brings us back to the last case.

Formally, a geometric transformation can be described through a matrix Tra. If P is a vertex of the initial mesh, the corresponding vertex after transformation, P', is defined as:

P' = Tra(P). [2.1]

However, in order to describe a transformation in this form, a system of homogeneous coordinates has to be considered (that is, in two dimensions, the vertex of coordinates (x, y) is

seen as the triplet (x, y, 1)). The matrix is then a (d + 1) × (d + 1), where d is the dimension
of the space. Therefore, for example, in two dimensions, a symmetry with respect to the line
Ax + By + C = 0 is written as:
⎡ ⎤
1 + A2 F ABF ACF
−2
Tra = ⎣ ABF 1 + B 2 F BCF ⎦ with F = 2
A + B2
0 0 1
Analogously:
\[
T_{ra} = \begin{bmatrix} 1 & 0 & T_x \\ 0 & 1 & T_y \\ 0 & 0 & 1 \end{bmatrix}
\]
is the translation of vector (Tx, Ty). A dilation1 of coefficients (αx, αy) and center (Cx, Cy) is characterized by the matrix:
\[
T_{ra} = \begin{bmatrix} \alpha_x & 0 & C_x(1 - \alpha_x) \\ 0 & \alpha_y & C_y(1 - \alpha_y) \\ 0 & 0 & 1 \end{bmatrix}
\]
and a rotation of angle α around the point P = (Px, Py) is defined by:
\[
T_{ra} = \begin{bmatrix} \cos\alpha & -\sin\alpha & P_x \\ \sin\alpha & \cos\alpha & P_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & -P_x \\ 0 & 1 & -P_y \\ 0 & 0 & 1 \end{bmatrix}.
\]
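The homogeneous 2D machinery can be sketched in C (function names are ours): build the symmetry matrix about a line following the matrix given in the text, apply it, and observe that a reflected triangle needs two vertices swapped to recover a positive area, as discussed above:

```c
#include <assert.h>

/* Apply a 3x3 homogeneous transformation to the 2D point p. */
void apply3(const double T[3][3], const double p[2], double q[2])
{
    q[0] = T[0][0] * p[0] + T[0][1] * p[1] + T[0][2];
    q[1] = T[1][0] * p[0] + T[1][1] * p[1] + T[1][2];
}

/* Symmetry with respect to the line Ax + By + C = 0,
   with F = -2 / (A^2 + B^2) as in the text. */
void reflection2(double A, double B, double C, double T[3][3])
{
    double F = -2. / (A * A + B * B);
    T[0][0] = 1. + A * A * F; T[0][1] = A * B * F;      T[0][2] = A * C * F;
    T[1][0] = A * B * F;      T[1][1] = 1. + B * B * F; T[1][2] = B * C * F;
    T[2][0] = 0.;             T[2][1] = 0.;             T[2][2] = 1.;
}

/* Twice the signed area of the triangle (a, b, c): it becomes negative
   after a negative isometry, and positive again after a vertex swap. */
double twice_area(const double a[2], const double b[2], const double c[2])
{
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]);
}
```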
In three dimensions, one has (with the same notations):
\[
T_{ra} = \begin{bmatrix} 1 + A^2F & ABF & ACF & ADF \\ ABF & 1 + B^2F & BCF & BDF \\ ACF & BCF & 1 + C^2F & CDF \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad \text{with} \quad F = \frac{-2}{A^2 + B^2 + C^2}
\]
for a symmetry with respect to the plane Ax + By + Cz + D = 0. One has:
\[
T_{ra} = \begin{bmatrix} 1 & 0 & 0 & T_x \\ 0 & 1 & 0 & T_y \\ 0 & 0 & 1 & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]
for a translation, and:
\[
T_{ra} = \begin{bmatrix} \alpha_x & 0 & 0 & C_x(1 - \alpha_x) \\ 0 & \alpha_y & 0 & C_y(1 - \alpha_y) \\ 0 & 0 & \alpha_z & C_z(1 - \alpha_z) \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]
for a dilation.
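These 4 × 4 matrices act on points written in homogeneous coordinates (x, y, z, 1); a quick check in C of the dilation matrix above (function names are ours):

```c
#include <assert.h>

/* Apply a 4x4 homogeneous transformation to a 3D point. */
void apply4(const double T[4][4], const double p[3], double q[3])
{
    for (int i = 0; i < 3; i++)
        q[i] = T[i][0] * p[0] + T[i][1] * p[1] + T[i][2] * p[2] + T[i][3];
}

/* Dilation of coefficients (ax, ay, az) around the center (cx, cy, cz),
   following the matrix given in the text. */
void dilation3(double ax, double ay, double az,
               double cx, double cy, double cz, double T[4][4])
{
    double Z[4][4] = {
        { ax, 0., 0., cx * (1. - ax) },
        { 0., ay, 0., cy * (1. - ay) },
        { 0., 0., az, cz * (1. - az) },
        { 0., 0., 0., 1.             }
    };
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            T[i][j] = Z[i][j];
}
```

As expected, the dilation center is a fixed point, and any other point is moved away from (or toward) it by the given coefficients.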

1. Contraction if the coefficients are strictly smaller than 1. These operations will be used extensively for
mesh visualization (Chapter 4).

For a rotation around the x-axis, one will have:
\[
T_{ra} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad \text{or} \quad
\begin{bmatrix} 1 & 0 & 0 & P_x \\ 0 & \cos\alpha & -\sin\alpha & P_y \\ 0 & \sin\alpha & \cos\alpha & P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & -P_x \\ 0 & 1 & 0 & -P_y \\ 0 & 0 & 1 & -P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]

depending on whether the rotation is made around the origin or a point P . For rotations around
the other axes, we find similar expressions; in other words (with P as the origin):
\[
T_{ra} = \begin{bmatrix} \cos\alpha & 0 & \sin\alpha & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\alpha & 0 & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad \text{and} \quad
\begin{bmatrix} \cos\alpha & -\sin\alpha & 0 & 0 \\ \sin\alpha & \cos\alpha & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\]

around the y- and z-axes, respectively. The matrix Tra defined as:
\[
T_{ra} = \begin{bmatrix}
CC_2 a - Sd + CS_2 g & CC_2 b - Se + CS_2 h & CC_2 c - Sf + CS_2 i & 0 \\
SC_2 a + Cd + SS_2 g & SC_2 b + Ce + SS_2 h & SC_2 c + Cf + SS_2 i & 0 \\
-S_2 a + C_2 g & -S_2 b + C_2 h & -S_2 c + C_2 i & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]

corresponds to a rotation of angle α, of axis A = (Ax , Ay , Az ) around the point P = (0, 0, 0).
In this matrix (see Figure 2.1), one has:
– if Ax ≠ 0: φ = arctan(Ay/Ax) and θ = arctan(Az/√(Ax² + Ay²)), then C = cos φ and S = sin φ, C2 = cos θ and S2 = sin θ;
– otherwise, if Ay ≠ 0: θ = arctan(Az/√(Ax² + Ay²)), then C = 0, S = 1 and C2 = cos θ, S2 = sin θ;
– otherwise: C = 0, S = 1 and C2 = 0, S2 = 1.

and the values of a, b, ..., i of the matrix are:

a = C2C, b = C2S, c = −S2 and d = −cos α S − sin α S2C,
e = cos α C − sin α SS2 and f = −sin α C2 and finally
g = −sin α S + cos α S2C, h = sin α C + cos α SS2 and i = cos α C2.
As an exercise, it can be verified that the matrix above is none other than the composition of matrices:
\[
Tra(\varphi, \vec{Z}) \circ Tra(\theta, \vec{Y}) \circ Tra(\alpha, \vec{X}) \circ Tra(-\theta, \vec{Y}) \circ Tra(-\varphi, \vec{Z}),
\]
where Tra(angle, vector) represents the matrix associated with the rotation of angle angle and axis vector (Z for the z-axis, etc.).
Mesh Transformations, Patching, Merging and Immersion 43

Figure 2.1. Rotation around the axis A = (Ax , Ay , Az ). The angles φ and θ allow a return to
a rotation of α around the x-axis

We look at the case where Ax ≠ 0. Therefore, by noting only the (3 × 3) sub-matrices corresponding to the rotation, the expression above is tantamount to calculating the product of the following five matrices:
\[
\begin{bmatrix} C & -S & 0 \\ S & C & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} C_2 & 0 & S_2 \\ 0 & 1 & 0 \\ -S_2 & 0 & C_2 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & c\alpha & -s\alpha \\ 0 & s\alpha & c\alpha \end{bmatrix}
\begin{bmatrix} C_2 & 0 & -S_2 \\ 0 & 1 & 0 \\ S_2 & 0 & C_2 \end{bmatrix}
\begin{bmatrix} C & S & 0 \\ -S & C & 0 \\ 0 & 0 & 1 \end{bmatrix},
\]

by repeating the above notations (where cα designates cos α and sα designates sin α). This operation can then be written in the form:
\[
T_1 T_2 T_3 T_4 T_5.
\]

The product T1 T2 is calculated, that is:
\[
\begin{bmatrix} CC_2 & -S & CS_2 \\ SC_2 & C & SS_2 \\ -S_2 & 0 & C_2 \end{bmatrix}.
\]

The product T3 T4 T5 is denoted by:
\[
\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}.
\]

The above matrix Tra is recovered by carrying out the product of these two matrices. Next, we
express T3 T4 T5 as a function of T3 and of the product T4 T5 , which gives the above coefficients.

The rotation center is supposed to be at the origin; in order to deal with the general case, a
translation just has to be added by taking into account the desired point P = (Px , Py , Pz ). This
amounts to first applying a vector translation t (−Px , −Py , −Pz ) and then, after application to
the result of the matrix Tra above, to performing the opposite translation.
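This construction is easy to check numerically. The Python sketch below is our own transcription of the composition Tra(φ,Z)∘Tra(θ,Y)∘Tra(α,X)∘Tra(−θ,Y)∘Tra(−φ,Z): the sign conventions of the elementary rotations are chosen so that the product is verifiable as written and may differ from the book's sub-matrices.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rotation_about_axis(axis, alpha):
    """Rotation of angle alpha about an axis through the origin: bring the
    axis onto the x-axis, rotate by alpha around x, then map back."""
    ax, ay, az = axis
    phi = math.atan2(ay, ax)                   # angle of the axis in the xy-plane
    theta = math.atan2(az, math.hypot(ax, ay))
    to_x = mat_mul(rot_y(theta), rot_z(-phi))  # sends the axis onto x
    back = mat_mul(rot_z(phi), rot_y(-theta))  # inverse mapping
    return mat_mul(back, mat_mul(rot_x(alpha), to_x))

def apply(m, p):
    return tuple(sum(m[i][k] * p[k] for k in range(3)) for i in range(3))
```

Using atan2 rather than arctan lifts, in passing, the modulo-π ambiguity discussed below; the axis itself is left invariant by the resulting matrix, which is a convenient sanity check.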

In the case of a rotation, the definition of the angles φ and θ by an arctangent gives a determination modulo π. When such an operator is programmed, this ambiguity must be lifted using the sine and cosine of the angles, in order to determine those angles exactly. Furthermore, the coefficients C, S, C2 and S2 are directly calculated depending on Ax, Ay and Az to avoid this indeterminacy.

In practice, the formal notation (via a matrix) can most often be replaced by a simpler notation. For example, for a translation, P′ = Tra(P) is simply expressed as P′ = P + T with T = (Tx, Ty). The implementation will therefore be achieved either generically, by virtue of the appropriate definition of the matrix, or, on a case-by-case basis, by the specific desired formula. This thus yields, depending on the nature of the transformation, the two schemes that follow.

Algorithm for a “positive” transformation [2.2]

Definition of Tra.
Loop over the mesh vertices:
– P′ = Tra(P)
End loop.

In this case, only the coordinates are affected by the transformation. A “negative” transfor-
mation thus leads to the following scheme:

Algorithm for a “negative” transformation [2.3]

Definition of Tra.
Loop over the mesh vertices:
– P′ = Tra(P)
End loop.
Loop over the mesh elements:
– inversion of the list of the element vertices.
End loop.

Here, the coordinates are affected by the transformation, as well as the enumeration order of the vertices of the elements, which is modified by a process known as inversion. To cover the five usual elements (triangles, quadrilaterals, prisms, pyramids and hexahedra), an inversion matrix Inv can be built2, allowing the proper ordering of the mesh vertices resulting from the transformation. First, we introduce two matrices (morally, a 3 × 3 matrix, denoted Tria, for processing a triangle and a 4 × 4 matrix, denoted Quad, to process a quadrilateral). Thereby, we set:
\[
Tria = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}
\quad \text{and} \quad
Quad = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}.
\]

2. There are other existing solutions to build this matrix; we present just one of them.
If (1 2 3) refers to the numbering of the vertices of a triangle3 (the first vertex is 1, etc.), then the vertices of the transformed triangle are obtained through the product Tria (1 2 3). The ordered list (3 2 1) is then found (Figure 2.2); the first vertex is the third vertex of the initial triangle, and this transformed triangle is therefore well oriented; as such, Inv = Tria is the inversion matrix. For a quadrilateral, Quad (1 2 3 4) will be the new numbering, that is Inv = Quad.

Figure 2.2. Numbering of vertices before and after symmetry

It now remains to build the inversion matrices relative to the tetrahedra, prisms, pyramids and hexahedra. For Inv, it is successively found that:
\[
\begin{bmatrix} Tria & 0 \\ 0 & 1 \end{bmatrix}, \quad
\begin{bmatrix} Tria & 0 \\ 0 & Tria \end{bmatrix}, \quad
\begin{bmatrix} Quad & 0 \\ 0 & 1 \end{bmatrix}
\quad \text{and finally} \quad
\begin{bmatrix} Quad & 0 \\ 0 & Quad \end{bmatrix},
\]
where 0 refers to a row or column of (3 or 4) zeros, or still a 3 × 3 or 4 × 4 zero matrix, depending on the case. This notation, of a certain elegance, can be more prosaically replaced with
\[
Tria = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}
\quad \text{and} \quad
Quad = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix},
\]
which simply indicates that, the first vertex being fixed (1), the following vertex of the transformed element is the previous vertex of the initial element, etc. This being set, the matrices Inv are expressed as before.
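In code, these inversion matrices reduce to index permutations. The sketch below (Python, 0-based indices; the table layout and names are ours) applies them after a "negative" transformation and checks, on a tetrahedron, that the sign of the volume is restored.

```python
# Inversion permutations (0-based): the first vertex is kept fixed and the
# remaining ones are enumerated backwards, as with the matrices Tria and Quad.
INV = {
    "triangle": (0, 2, 1),
    "quadrilateral": (0, 3, 2, 1),
    "tetrahedron": (0, 2, 1, 3),
    "pyramid": (0, 3, 2, 1, 4),
    "prism": (0, 2, 1, 3, 5, 4),
    "hexahedron": (0, 3, 2, 1, 4, 7, 6, 5),
}

def invert(elem_type, vertices):
    """Re-order the vertex list of an element after a 'negative' transformation."""
    return tuple(vertices[i] for i in INV[elem_type])

def mirror_x(p):
    """A 'negative' transformation: symmetry with respect to the plane x = 0."""
    return (-p[0], p[1], p[2])

def signed_volume(a, b, c, d):
    """Six times smaller than the determinant: positive for a well-oriented tetrahedron."""
    u, v, w = ([q[i] - a[i] for i in range(3)] for q in (b, c, d))
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0])) / 6.0
```

The symmetry flips the sign of the volume; applying the permutation flips it back, which is precisely what Algorithm [2.3] requires.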

3. Of the plane; let us again recall that this problem does not occur for a non-oriented surface.

• Attribute transformation

In general, the data structure (Chapter 1) contains – besides the geometric data (the position
of the nodes) and the topological data (list of elements described by means of the list of their
nodes, which hides the definitions of the edges and faces) – attributes related to the application
considered subsequently. These attributes, which are integers, are values that serve to indicate
the nature of the corresponding item: a node, an edge, a face or an element. Here, since the nodes
are vertices, the attributes are carried by the vertices, edges, faces and the elements only. The
mesh obtained as a result of the geometric transformation of an initial mesh inherits the attributes
of the latter, modified or unchanged. Since the local enumeration of the nodes may have changed, the initial attributes must follow accordingly (for a “negative” transformation).

The attribute(s) of the element do not change. On the other hand, those of the nodes, edges
and faces must be permuted adequately. Except in two dimensions, it does not seem possible
for this transformation to be written formally (using a matrix); therefore, in practice, a tabulation
will be used, which will indicate the correspondence between the local number of an initial edge
(face) and its new local number after transformation.

It should be noted that here the attributes are transferred without changing the value; it is only
their position in the list that changes. It is clear that it might be desirable to change the values,
for example attribute 1 transforms into attribute 2. In this case, the location of the attribute in the
list has to be found first and then its value changed.
• Arbitrary degree mesh

It is considered here that there is at least one node per edge (so the degree is at least 2). The transformation will affect the position of all the nodes and, unless it has no effect on it (as for a translation or a rotation), the local enumeration (element by element) of the nodes as well. As mentioned previously, the presence of attributes is to be taken into account.

2.1.2. Cutting a non-simplicial element into simplices

The elements considered here are of degree 1 in the sense that their edges are straight line
segments. Any element in this category can be sliced, topologically speaking, into simplices.
There may be different solutions depending on the choices made. Geometrically speaking, a
topological cut may prove to be false because there is at least one negative element (triangle or
tetrahedron). The Schonhardt polyhedron is famously known (Volume 1, Chapter 6), for which no partition into tetrahedra is even possible. Therefore, its decomposition into such elements requires the introduction of a point (the so-called Steiner point) in its interior (specifically, in the visibility center of its faces), finding not three elements but eight. The difficulty comes from the fact that the (triangulated) faces of a prism form a topological constraint that can lead to this impossibility, depending on how they connect from a quadrilateral face to its neighbors. In practice, the method that will be used later makes it impossible to encounter this situation (because two face diagonals will necessarily be incident to a vertex).

In the following, we denote by [ijk...] the local indices of the vertices (denoted [Si, Sj, ...]) of the element being considered. Except for quadrilaterals (in a plane), which are treated exhaustively, the solutions for the other elements are only topological, in terms of the list of the indices of the vertices. The geometric issues that can arise will be covered later, when the validity of the elements of a slicing is looked into, as well as the issue of knowing whether the initial geometry is actually preserved during cutting. For the elements under consideration, the following (topological) cuts are successively found:
– quadrilaterals: [1234] =⇒ [123] ∪ [134] is one of the two topologically possible slicings
into two triangles, the other being [124] ∪ [234]. The solution is based on the choice of one of
the diagonals of the quadrilateral. For the aficionados of Delaunay-based meshes, the shortest
diagonal will be chosen if the two triangles created are positive. In the case of a surface element, the diagonal will be chosen so as to avoid the emergence of a twist. In fact, as mentioned, the question arises as to whether the geometry of the surface is preserved when the elements are cut out; this is an issue that also prevails for the faces of solid elements;
– pyramids: [12345], whose (quadrilateral) base face is [1234], =⇒ [1235] ∪ [1345] is one of the two possible cuts into two tetrahedra, the other being [1245] ∪ [2345], choosing the other diagonal of the base;
– pentahedra: [123456], whose (triangular) base face is [123], =⇒ [1234] ∪ [25634] =⇒ [1234] ∪ [2564] ∪ [2634] is one of the 12 possible cuts4 into three tetrahedra. A corner tetrahedron is built, here [1234], on the base, and a pyramid remains whose base is the quadrilateral face opposite to the corner vertex, here [1]; therefore, there are two possibilities per corner and there are six possible corners. This trick avoids having to look at other, suspicious, cases and having to manipulate a tetrahedron whose four vertices are those of a (thereby quadrilateral) face of the prism. In extenso, one finds:
[1234] ∪ [2564] ∪ [2634] or [1234] ∪ [2345] ∪ [3456],
or [1235] ∪ [1365] ∪ [1645] or [1235] ∪ [1345] ∪ [3645],
or [1236] ∪ [1526] ∪ [1564] or [1236] ∪ [1426] ∪ [2456].

– hexahedra: [12345678], with base face [1234], =⇒ [1236] ∪ [1348] ∪ [1568] ∪ [3678] ∪ [1386] (four corner tetrahedra and an internal tetrahedron) is one of the two possible decompositions into five tetrahedra, the other being [2347] ∪ [2415] ∪ [2675] ∪ [4785] ∪ [2457];
– hexahedra: [12345678], with base face [1234], =⇒ [123567] ∪ [134578] =⇒ [1235] ∪ [2675] ∪ [2735] ∪ [1345] ∪ [3785] ∪ [3845] is one of the (36) possible decompositions into six tetrahedra. We start by cutting the hexahedron into two prisms (more specifically, there are six possibilities), and these are then each decomposed into three tetrahedra.

Again, except for (plane or warped) quadrilaterals, no consideration was given to the geometric criteria, which here amount to verifying that the areas (plane cases) or volumes (solid cases) of the simplices are positive (it is known that the Jacobian of these straight simplices is, up to a factor, their area or their volume) and that the geometry is preserved (see below).

4. In fact, there are only six solutions due to redundancies. If a corner tetrahedra is chosen, another will be
present in the decomposition, which amounts to saying that only three corners have to be considered among
the six.
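For the quadrilateral case, the geometric criterion can be made explicit. The Python sketch below implements the plane-case choice described above: keep only the diagonals whose two triangles have positive area and, among the valid ones, take the shortest (the Delaunay-flavored choice mentioned above); indices are 0-based and the function names are ours.

```python
def signed_area(a, b, c):
    """Positive for a counterclockwise triangle in the plane."""
    return ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def cut_quadrilateral(q):
    """q: four 2D points [1234]; returns two triangles as local index triples."""
    candidates = []
    for (i, j), tris in (((0, 2), [(0, 1, 2), (0, 2, 3)]),
                         ((1, 3), [(0, 1, 3), (1, 2, 3)])):
        # Keep the diagonal only if both resulting triangles are positive.
        if all(signed_area(q[a], q[b], q[c]) > 0 for a, b, c in tris):
            length2 = (q[i][0] - q[j][0]) ** 2 + (q[i][1] - q[j][1]) ** 2
            candidates.append((length2, tris))
    if not candidates:
        raise ValueError("no valid diagonal: degenerate quadrilateral")
    return min(candidates)[1]  # shortest valid diagonal
```

For a convex quadrilateral both diagonals are admissible; for a non-convex one, only the diagonal issued from the reflex vertex survives the positivity test.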

It should be observed that decomposing non-simplicial (finite) elements gives a result in which the underlying polynomial space is not, a priori, equivalent. To illustrate this, let us take the simple case of a quadrilateral seen as a finite element or as a geometric patch. The basic monomials (Volume 1, Chapters 1 and 2) are 1, x, y and xy. If it is sliced here into two triangles of the same degree, the basis will be the monomials 1, x and y, and therefore a different space. In order to be able to capture the monomial xy, it is necessary to shift to the second degree; we shall then have the basis 1, x, y, xy, x² and y². A wise choice of control points allows for canceling the contributions of x² and y², to recover the same space as for the quadrilateral. Note that, in this way, having to make a choice of one of the diagonals to cut a quadrilateral face is avoided. This issue will be revisited in the chapters dedicated to visualization.

2.1.3. Decomposition into simplices of a non-simplicial mesh

We have seen how to decompose into simplices a non-simplicial element taken individually, the decomposition being purely topological. The problem now is to decompose all the elements of a conformal non-simplicial mesh (the faces of an element correspond exactly to those of its neighbors, whether they are triangles or quadrilaterals) while ensuring the conformity of the result. This conformity requires that the faces resulting from the decomposition be common from one element to its neighbors. In two dimensions, since the edges of the quadrilaterals remain unchanged, conformity is automatically verified. In three dimensions, the decomposition of an element involves cutting its quadrilateral faces, and conformity will only be ensured if it is guaranteed that this decomposition is the same in the two elements sharing the common initial face, and that such a solution exists. There are virtually no references to this problem; the only one that we have been able to find is [Dompierre et al. 1999], which proposes a solution based on a particular choice of decomposition (among all of the individually possible solutions seen above) based on the numbers of the vertices of the quadrilateral faces of the elements, with these numbers being taken in a certain order. It should be noted that this strategy does not explicitly utilize the neighboring relationships between the elements. The proposed solution is topologically valid. The authors, however, indicate the possibility of creating negative tetrahedra, therefore geometrically invalid (and, rather than introducing an internal point, they propose to use a (tetrahedral) mesh optimization method to straighten them, without being able to show that another decomposition, precisely this one, could be used). This calls for a few remarks.

With the current view on the notion of validity of a non-simplicial element, is it possible to decompose a valid hexahedron (consider this example)5 and nonetheless form a non-valid tetrahedron (negative volume)? The answer is affirmative. Let us go back to a simple triangulation problem of a plane cavity (precisely chosen as being concave): there exist topologically valid triangulations but with negative elements. The only way to mitigate this defect is to explicitly verify the signs of the areas (volumes) of the elements or, at least on paper, to build the cavity applying the Delaunay criterion. This is what is missing here, and each element can

5. It has been seen that the Jacobian of a hexahedron of degree 1 × 1 × 1 is positive if its eight corner tetrahedra are positive, but that this is only part of the conditions to be satisfied (Volume 1, Chapter 3 and Volume 2, Chapter 5); thereby, this test alone does not detect certain fake elements!

be seen as an arbitrary cavity (the element, whose cavity is valid), but the topological criterion does not see the sign of the volumes of the elements constructed.

Let us reconsider the method of [Dompierre et al. 1999], which gives a way of decomposing the quadrilateral faces and the corresponding solid elements in a consistent manner.

We denote again by [1, 2, 3, ...] the local indices of the vertices and by [S1 , S2 , ...] their global
indices (in the mesh) and we look at the three types of elements that can occur, starting with the
pyramids then moving on to prisms and finally to hexahedra.

• Pyramids: the element is written as [12345] such that its quadrilateral face is the one whose vertices correspond to the first four local indices, that is the list [1234]. We examine this face and the global numbers of its vertices, that is [S1, S2, S3, S4]. We search in this list for S = min_i S_i. The diagonal issued from S is chosen to cut the face into two triangles. The two solution tetrahedra are formed with these triangles and the vertex of local index 5, that is S5.
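A minimal sketch of this rule (Python, 0-based indices; the argument S holds the five global vertex numbers and the function name is ours):

```python
def cut_pyramid(S):
    """S: the five global vertex indices, the quadrilateral base being S[0..3].
    The base is cut along the diagonal issued from its vertex with the
    smallest global index; each triangle is joined to the apex S[4]."""
    m = min(range(4), key=lambda i: S[i])
    if m in (0, 2):  # diagonal S1-S3
        return [(S[0], S[1], S[2], S[4]), (S[0], S[2], S[3], S[4])]
    else:            # diagonal S2-S4
        return [(S[0], S[1], S[3], S[4]), (S[1], S[2], S[3], S[4])]
```

Since the rule depends only on the global numbers of the four face vertices, two pyramids sharing the same quadrilateral face necessarily select the same diagonal.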

• Pentahedra: consider the global indices of the vertices, namely the list [S1, S2, S3, S4, S5, S6]. One calculates S = min_i S_i and the element is rewritten by formally placing S in the first position. According to S, the initial element [S1, S2, S3, S4, S5, S6] is unchanged if S = S1, that is to say that the element is written as [SI1, SI2, SI3, SI4, SI5, SI6] with [I1 I2 I3 I4 I5 I6] = [1, 2, 3, 4, 5, 6]. Otherwise, we have one of the following cases:

[S2 , S3 , S1 , S5 , S6 , S4 ] that is [I1 I2 I3 I4 I5 I6 ] = [2, 3, 1, 5, 6, 4],

[S3 , S1 , S2 , S6 , S4 , S5 ] that is [I1 I2 I3 I4 I5 I6 ] = [3, 1, 2, 6, 4, 5],


[S4 , S6 , S5 , S1 , S3 , S2 ] that is [I1 I2 I3 I4 I5 I6 ] = [4, 6, 5, 1, 3, 2],
[S5 , S4 , S6 , S2 , S1 , S3 ] that is [I1 I2 I3 I4 I5 I6 ] = [5, 4, 6, 2, 1, 3],
[S6 , S5 , S4 , S3 , S2 , S1 ] that is [I1 I2 I3 I4 I5 I6 ] = [6, 5, 4, 3, 2, 1].
A pyramid is defined by linking this vertex S to the opposite quadrilateral face, the one of which it is not a vertex, and the complementary tetrahedron of this pyramid is formed. Therefore, the element [SI1 SI2 SI3 SI4 SI5 SI6], whose (triangular) base face has [I1 I2 I3] as local vertex indices, is written with the elements whose vertices have [I1 I5 I6 I4] and [I2 I5 I6 I3 I1] as local indices. The list [I1 I5 I6 I4] corresponds to the tetrahedron relative to the corner of local index I4, whose edges are those of indices [I1 I4], [I1 I5] and [I1 I6]; the last two edges originate from the decomposition of the (quadrilateral) faces incident to the vertex of local index I1, cut via their diagonals issued from this vertex. Once done, we consider the pyramid whose vertices have the indices [I2 I5 I6 I3 I1], and this pyramid is sliced, as indicated above, by choosing the diagonal of the quadrilateral face issued from the vertex with the smallest global index, that is, the diagonal of local indices [I2 I6] or [I3 I5]. According to this choice, one infers the local indices of the vertices of the elements of both solutions:

[I1 I5 I6 I4] ∪ [I1 I2 I3 I6] ∪ [I1 I2 I6 I5],
or [I1 I5 I6 I4] ∪ [I1 I2 I3 I5] ∪ [I1 I5 I3 I6],



namely the only possible solutions (among all choices seen above) due to the fact that diagonals
issued from I1 have been retained. To conclude, since the indices Ii can take six values, there
are six solutions of this form.
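The whole rule can be sketched as follows (Python, 0-based; ROTATIONS is the 0-based version of the six re-enumerations listed above, and the names are ours):

```python
# 0-based version of the six re-enumerations listed above.
ROTATIONS = [(0, 1, 2, 3, 4, 5), (1, 2, 0, 4, 5, 3), (2, 0, 1, 5, 3, 4),
             (3, 5, 4, 0, 2, 1), (4, 3, 5, 1, 0, 2), (5, 4, 3, 2, 1, 0)]

def cut_prism(S):
    """S: the six global vertex indices; returns three tetrahedra (global indices)."""
    I = min(ROTATIONS, key=lambda p: S[p[0]])  # global minimum in first position
    tets = [(I[0], I[4], I[5], I[3])]          # corner tetrahedron [I1 I5 I6 I4]
    # Remaining pyramid [I2 I5 I6 I3] with apex I1: its base is cut along the
    # diagonal issued from the vertex with the smallest global index.
    m = min((I[1], I[4], I[5], I[2]), key=lambda i: S[i])
    if m in (I[1], I[5]):                      # diagonal [I2 I6]
        tets += [(I[0], I[1], I[2], I[5]), (I[0], I[1], I[5], I[4])]
    else:                                      # diagonal [I3 I5]
        tets += [(I[0], I[1], I[2], I[4]), (I[0], I[4], I[2], I[5])]
    return [tuple(S[i] for i in t) for t in tets]
```

Every choice depends only on global vertex numbers, so the result does not depend on the initial enumeration of the element and two neighbors cut a shared quadrilateral face identically, which is the conformity argument of the text.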

• Hexahedra: we consider the global indices of the vertices, namely the list [S1, S2, S3, S4, S5, S6, S7, S8]. One calculates S = min_i S_i and the element is rewritten by formally placing S in the first position. According to S, the initial element [S1, S2, S3, S4, S5, S6, S7, S8] is unchanged if S = S1, that is to say that the element is written as [SI1, SI2, SI3, SI4, SI5, SI6, SI7, SI8] with [I1 I2 I3 I4 I5 I6 I7 I8] = [1, 2, 3, 4, 5, 6, 7, 8]. Otherwise, we have one of the following cases (noting that this is just one of the possible notations):

[S2 , S3 , S4 , S1 , S6 , S7 , S8 , S5 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [2, 3, 4, 1, 6, 7, 8, 5],


[S3 , S4 , S1 , S2 , S7 , S8 , S5 , S6 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [3, 4, 1, 2, 7, 8, 5, 6],
[S4 , S1 , S2 , S3 , S8 , S5 , S6 , S7 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [4, 1, 2, 3, 8, 5, 6, 7],
[S5 , S8 , S7 , S6 , S1 , S4 , S3 , S2 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [5, 8, 7, 6, 1, 4, 3, 2],
[S6 , S5 , S8 , S7 , S2 , S1 , S4 , S3 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [6, 5, 8, 7, 2, 1, 4, 3],
[S7 , S6 , S5 , S8 , S3 , S2 , S1 , S4 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [7, 6, 5, 8, 3, 2, 1, 4],
[S8 , S7 , S6 , S5 , S4 , S3 , S2 , S1 ] that is [I1 I2 I3 I4 I5 I6 I7 I8 ] = [8, 7, 6, 5, 4, 3, 2, 1].
Since the first vertex SI1 is that of the smallest global index, the three faces incident to this vertex are correctly characterized: this vertex also bears the smallest global index of each of them. We therefore know which diagonals will be chosen for the decomposition, namely [SI1, SI3], [SI1, SI6] and [SI1, SI8].

It then remains to determine how the other three faces will be decomposed and which decomposition of the element itself derives therefrom. For each face, there are a priori two possibilities, which gives a total of eight possible configurations. In the reference mentioned above, it is shown that the analysis can be restricted to only four situations. Here, we shall directly analyze the eight cases. The three face diagonals that are involved will therefore be examined. The combinatorics is thus the following:
- [SI5 SI7] or [SI6 SI8], denoted 57 or 68 for short;
- with [SI2 SI7] or [SI3 SI6], denoted 27 or 36;
- the whole with [SI3 SI8] or [SI4 SI7], denoted 38 or 47.
This gives the following eight cases:

(57 27 38) or (57 27 47) or (57 36 38) or (57 36 47) or


(68 27 38) or (68 27 47) or (68 36 38) or (68 36 47).

All we have to do is to look at which decomposition of the element has, on its faces, the edges 13, 16, 18 and the three edges of each of the possibilities listed above. The analysis consists of looking whether there is a decomposition into two prisms (six tetrahedra) or not (five tetrahedra). To find a prism, one examines the faces opposite to the three faces incident to 1, to see if one of them contains the desired diagonal (therefore corresponding to that at 1). The diagonals opposite to 13, 16, 18 are 57, 47, 27. We discuss the eight cases by considering the number of diagonals incident to vertex 7, which is the vertex opposite to vertex 1.
at vertex 7, which is the opposite vertex, to vertex 1.

CASE 1.– Diagonals 13, 16, 18, 68, 36, 38: there is no diagonal issued from 7 and none of the diagonals corresponds, on the opposite face, to the diagonals issued from 1. The solution is one of the decompositions into 5 tetrahedra, and the constraint on the diagonals issued from 1 indicates that it is [1, 2, 3, 6] ∪ [1, 3, 4, 8] ∪ [1, 5, 6, 8] ∪ [3, 6, 7, 8] with the central tetrahedron [1, 3, 8, 6].
CASE 2.– Diagonals 13, 16, 18, 68, 27, 38: the presence of 27 alone brings out the two-prism solution, therefore two solutions with six tetrahedra. The two prisms are identified, either (let us not forget that i must be read Ii):

[2, 7, 3, 1, 8, 4] ∪ [2, 6, 7, 1, 5, 8], or, with 1 at the front, [1, 4, 8, 2, 3, 7] ∪ [1, 8, 5, 2, 7, 6].

Let us reconsider the decompositions of [1, 2, 3, 4, 5, 6], that is (see above):

[1234] ∪ [2564] ∪ [2634] or [1234] ∪ [2345] ∪ [3456],

or [1235] ∪ [1365] ∪ [1645] or [1235] ∪ [1345] ∪ [3645],


or [1236] ∪ [1526] ∪ [1564] or [1236] ∪ [1426] ∪ [2456],
and the indices are permuted to apply these decompositions to [1, 4, 8, 2, 3, 7] and [1, 8, 5, 2, 7, 6].
For the first prism, the correspondence between indices is as follows:

1 ≡ 1, 2 =⇒ 4, 3 =⇒ 8, 4 =⇒ 2, 5 =⇒ 3, 6 =⇒ 7,

which gives, a priori, the following elements:

[1482] ∪ [4372] ∪ [4782] or [1482] ∪ [4823] ∪ [8237],

or [1483] ∪ [1873] ∪ [1723] or [1483] ∪ [1823] ∪ [8723],


or [1487] ∪ [1347] ∪ [1372] or [1487] ∪ [1247] ∪ [4237],
but since segments [24] and [47] are not permitted, only two decompositions remain:

[1483] ∪ [1873] ∪ [1723] or [1483] ∪ [1823] ∪ [8723],

For the second prism, the correspondence between indices is as follows:

1 ≡ 1, 2 =⇒ 8, 3 =⇒ 5, 4 =⇒ 2, 5 =⇒ 7, 6 ≡ 6,

which gives, a priori, the following elements:

[1852] ∪ [8762] ∪ [8652] or [1852] ∪ [8527] ∪ [5276],

or [1857] ∪ [1567] ∪ [1627] or [1857] ∪ [1527] ∪ [5627],


or [1856] ∪ [1786] ∪ [1762] or [1856] ∪ [1286] ∪ [8276],
but since segments [25] and [57] are not permitted, only two decompositions remain:

[1856] ∪ [1786] ∪ [1762] or [1856] ∪ [1286] ∪ [8276].



To find the complete solutions, a diagonal simply has to be chosen on the face [1278] common
to both prisms, thus [17] or [28]. This enables the two possible solutions to be established:

[1483] ∪ [1873] ∪ [1723] with [1856] ∪ [1786] ∪ [1762],

or [1483] ∪ [1823] ∪ [8723] with [1856] ∪ [1286] ∪ [8276].


Remember that the notation [1483] refers in short to the tetrahedron of local indices [I1 I4 I8 I3 ]
and global indices [SI1 SI4 SI8 SI3 ].

CASE 3.– Diagonals 13, 16, 18, 57, 36, 38: the presence of 57 alone makes it possible to exhibit the two-prism solutions based on the previous case, after having re-enumerated the hexahedron with vertex 5 in lieu of vertex 2. The indices of the enumeration [1, 2, 3, 4, 5, 6, 7, 8] are replaced by [1, 5, 6, 2, 4, 8, 7, 3]; we reuse the two solutions above with the permutations6 (which can be seen as a rotation of 120° around the virtual axis [17]):

1 ≡ 1, 2 =⇒ 5, 3 =⇒ 6, 4 =⇒ 2, 5 =⇒ 4, 6 =⇒ 8, 7 ≡ 7, 8 =⇒ 3.

It directly follows:

[1236] ∪ [1376] ∪ [1756] with [1348] ∪ [1738] ∪ [1785],

or [1236] ∪ [1356] ∪ [3756] with [1348] ∪ [1538] ∪ [3578].

CASE 4.– Diagonals 13, 16, 18, 47, 36, 38: the presence of 47 alone makes it possible to exhibit the two solutions with two prisms starting from the same case, after having re-enumerated the hexahedron with vertex 4 in lieu of vertex 2. The indices of the enumeration [1, 2, 3, 4, 5, 6, 7, 8] are replaced by [1, 4, 8, 5, 2, 3, 7, 6], and we reuse the two solutions of Case 2 with the permutations (which can be seen as a rotation of 240° around the virtual axis [17]):

1 ≡ 1, 2 =⇒ 4, 3 =⇒ 8, 4 =⇒ 5, 5 =⇒ 2, 6 =⇒ 3, 7 ≡ 7, 8 =⇒ 6.

It directly follows:

[1568] ∪ [1678] ∪ [1748] with [1623] ∪ [1763] ∪ [1734],

or [1568] ∪ [1648] ∪ [6748] with [1623] ∪ [1463] ∪ [6473].

CASE 5.– Diagonals 13, 16, 18, 57, 36, 47: the presence of 57 and 47 makes it possible to exhibit two directions along which to define two decompositions into prisms, each prism being decomposable into tetrahedra in two different ways. Since [27] does not exist, the edge [36] is present. The conformal connection between the meshes of these prisms will be ensured by choosing the same diagonal ([17] or [35]) for the common quadrilateral face; that choice is a priori one degree of freedom.

For the first direction (Figure 2.3, on the left), the decision is made according to [13] and [57]; the solutions for each prism are formed by a tetrahedron and either of the decompositions of the remaining pyramid.

6. Easy to achieve by substitutions in a text editor.

Figure 2.3. The hexahedron and its decomposition into two prisms according to the first or second possible direction

For the first prism, the tetrahedron [1236] is exhibited and there remains the pyramid pointing to vertex [6], that is:

[1236] ∪ [1356] ∪ [3567] or [1236] ∪ [1567] ∪ [1376].

For the second prism, the tetrahedron [1347] is exhibited and only the pyramid pointing to vertex [7] remains; but as edge [18] is imposed, there is only one solution left, that is:

[1347] ∪ [1487] ∪ [1578].

Since this decomposition possesses edge [17], only the following can be retained as a complete
solution:
[1236] ∪ [1567] ∪ [1376] with [1347] ∪ [1487] ∪ [1578].
For the second direction (Figure 2.3, on the right) the decision is made according to [16] and
[47]. If the prism is observed from “above”, the only possible edge for the sole free quadrilateral
face is [17] because [18] exists and the solution is obtained with the tetrahedron [1567] and the
pyramid that points to [7] with [18] therefore:

[1567] ∪ [1487] ∪ [1578].

For the bottom prism, edge [17] ought to be considered as chosen for the connection and, since edge [13] is fixed, as is [63], only one solution is derived, that is:

[1236] ∪ [1637] ∪ [1473].

The complete solution is:

[1236] ∪ [1637] ∪ [1473] with [1487] ∪ [1567] ∪ [1578],

which is the only possible solution (the two ways of making a decision leading to this result).

CASE 6.– Diagonals 13, 16, 18, 57, 27, 38: the presence of 57 and 27 enables two decompositions into two prisms to be exhibited. In order to find the solutions, the case above will be considered and the required index permutation carried out. In this case, the 240° rotation seen above is used. The solution is directly derived:

[1483] ∪ [1387] ∪ [1578] with [1567] ∪ [1237] ∪ [1276].



CASE 7.– Diagonals 13, 16, 18, 68, 27, 47: the presence of 27 and 47 allows two decompositions into two prisms to be exhibited. In order to find the associated solutions, case 5 will be considered and the required index permutation carried out. For this case, the 120° rotation seen above is used. The following solution is directly derived:

[1568] ∪ [1867] ∪ [1276] with [1237] ∪ [1487] ∪ [1473].

Figure 2.4. Decompositions for cases 5, 6 and 7 (from left to right)

CASE 8 (LAST POSSIBLE CASE).– Diagonals 13, 16, 18, 57, 27, 47: the presence of 57, 27 and 47 allows three decompositions into two prisms to be exhibited (one per direction). Thereby, there would a priori be six possible decompositions in this case; this hypothesis will be contradicted.
– First direction ([13] and [57], from bottom to top). We expect to find two solutions, but the presence of two Schonhardt polyhedra (due to the fact that there are now three constraints, and as such less flexibility in the choices), one per prism, means that there is only one solution left, that is:

[1567] ∪ [2671] ∪ [2317] with [1347] ∪ [1857] ∪ [1748].
– Second direction ([16] and [47], from left to right). We expect to find two solutions, but the presence of two Schonhardt polyhedra (due to the fact that there are now three constraints, and therefore less flexibility in the choices), one per prism, means that there is only one solution left, that is:

[1276] ∪ [1237] ∪ [1347] with [1675] ∪ [1748] ∪ [1785].
– Third direction ([18] and [27], from the front to the back). We expect to find two solutions, but the presence of two Schonhardt polyhedra (due to the fact that there are now three constraints, and therefore less flexibility in the choices), one per prism, means that there is only one solution left, that is:

[1237] ∪ [1874] ∪ [1347] with [1276] ∪ [1567] ∪ [1785].

To conclude, in this case, there are only three possibilities but a more detailed analysis shows
that the three solutions are identical, which is not surprising given the number of constraints.

In short, a solution was built, thus proving its existence. Consistency is ensured by construc-
tion, each quadrilateral face being cut along its diagonal from its vertex with the smallest global
index.
Mesh Transformations, Patching, Merging and Immersion 55

Diagonals through 7     List of tetrahedra

 (none)         [1236] [1348] [1568] [3678] [1386]

 27             [1348] [1387] [1237] [1568] [1678] [1276]
 27             [1348] [1238] [2387] [1568] [1286] [2678]

 47             [1568] [1678] [1487] [1236] [1376] [1347]
 47             [1568] [1486] [4678] [1236] [1346] [3746]

 57             [1236] [1376] [1567] [1348] [1387] [1578]
 57             [1236] [1356] [3567] [1348] [1385] [3578]

 27 47          [1568] [1867] [1276] [1237] [1487] [1473]
 27 57          [1348] [1387] [1578] [1567] [1237] [1276]
 47 57          [1236] [1376] [1347] [1487] [1567] [1578]

 27 47 57       [1567] [1276] [1237] [1347] [1578] [1487]

Table 2.1. Hexahedra decomposition into tetrahedra

For geometric validity, it must be effectively verified that all the volumes (of the tetrahedra) are
positive, because the decomposition is only topological. If there are several possible solutions
and one of them is not valid, another one can be tried. If negative elements remain, one solution
is to apply a mesh optimization process or, more simply, to introduce a point in the volume and
to connect it to all of the faces, now triangulated.
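This positivity test can be sketched as follows (an illustrative sketch, not the book's code): the signed volume of a tetrahedron is the determinant of the three edge vectors issued from its first vertex, divided by 6.

```python
# Illustrative sketch: checking that a (topological) decomposition into
# tetrahedra is geometrically valid, i.e. every tetrahedron is positive.

def signed_volume(a, b, c, d):
    """Signed volume of the tetrahedron (a, b, c, d) = det(b-a, c-a, d-a)/6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
           - u[1] * (v[0] * w[2] - v[2] * w[0])
           + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return det / 6.0

def decomposition_is_valid(coords, tets):
    """True if every tetrahedron of the decomposition is positively oriented."""
    return all(signed_volume(*(coords[s] for s in t)) > 0.0 for t in tets)
```

A negatively oriented tetrahedron is detected by a negative signed volume; on a valid decomposition the volumes moreover sum to the volume of the decomposed element.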

To conclude on hexahedra, all cases are grouped7 in the same table (Table 2.1). The overall
process is as follows:

i) permute the enumeration of the element so as to put in first position the vertex with the
smallest global index;

ii) the diagonal from 1 is therefore imposed on the three incident faces;

7. The tetrahedra are eventually listed in a different manner (compared to above) since a lexicographic order
was used for the first two indices.
56 Meshing, Geometric Modeling and Numerical Simulation 3

iii) the diagonals of the other three faces are imposed and those that pass through vertex 7 are
counted. Depending on this number, one of the cases reported in Table 2.1 applies and the
corresponding tetrahedra are defined. In the presence of multiple choices, the first valid one is
taken or the others are explored. In the event of failure, an internal point is created and is linked
with the 12 triangular faces of the hexahedron.

It should be noted that some cases could be avoided by permuting the enumeration again
and applying the only remaining case. For example, case 27 is retained as a pattern and, for the
case of 47, case 27 can be recovered by a proper rotation of the indices. Any of the decompositions
is then applied for this case.

Now follows the synthetic diagram of the decomposition process of a (compliant) mesh into
tetrahedra (also compliant):

Decomposition algorithm [2.4]


Loop over the elements:
– if the element is a tetrahedron, move to the next one;
– if the element is a pyramid, decompose its quadrilateral face starting from its vertex
with the smallest global index and build the two associated tetrahedra;
– if the element is a pentahedron:
- find its vertex of lowest index and permute the enumeration to
place it in the first position;
- build the tetrahedron defined on this vertex;
- cut the remaining pyramid as above;
– if the element is a hexahedron:
- find its vertex of lowest index and permute the enumeration to
place it in the first position;
- define the diagonals of non-incident faces at this vertex;
- count those incident to vertex 7 and apply the decomposition of the corresponding case.
End Loop over the elements.

We shall see an application of this decomposition method in Chapter 5, with the observation
already made about the nature of the underlying polynomial spaces.
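The pyramid and pentahedron branches of this scheme can be sketched as follows. The local numbering conventions (quad base s1 s2 s3 s4 and apex s5 for the pyramid; bottom triangle s1 s2 s3 and top triangle s4 s5 s6, s4 above s1, for the prism), as well as the assumption that the prism has already been permuted so that its smallest global index comes first, are ours:

```python
# Minimal sketch of algorithm [2.4] for pyramids and prisms (hexahedra follow
# Table 2.1). Local numbering conventions are assumptions, to be adapted to
# the data structure at hand.

def split_pyramid(v):
    """Cut the base quad along the diagonal issued from its smallest global
    index and return the two resulting tetrahedra."""
    s1, s2, s3, s4, s5 = v
    if min(s1, s3) < min(s2, s4):                  # diagonal s1-s3
        return [(s1, s2, s3, s5), (s1, s3, s4, s5)]
    return [(s2, s3, s4, s5), (s2, s4, s1, s5)]    # diagonal s2-s4

def split_prism(v):
    """v already permuted so that s1 is the smallest global index: one
    tetrahedron on s1 and the top face, then the remaining pyramid
    (base s2 s5 s6 s3, apex s1) is cut as above."""
    s1, s2, s3, s4, s5, s6 = v
    assert s1 == min(v)
    return [(s1, s4, s5, s6)] + split_pyramid((s2, s5, s6, s3, s1))
```

Because each quad is always cut from its smallest global index, the triangles produced on a face shared by two elements match, which is precisely what ensures the conformity of the result.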

2.1.4. Decompositions for a complying connection

The point is to connect, in a consistent manner, a non-simplicial mesh with a simplicial


mesh. The most common method employed consists of building a pyramid on each of the
quadrilateral faces concerned of the non-simplicial mesh by defining, as best as possible, a (fifth)
vertex in front of the face to be dealt with. In this way, a domain must be meshed (between the
two meshes) whose boundary is composed of triangles. A general-purpose automatic mesher is
then a solution (see also, below, the immersion operation, which exhibits a few similarities with
this problem).

Conversely, decomposition of the pyramids, the pentahedra and the hexahedra under consid-
eration can be utilized, based on what has been discussed above. A pyramid is sliced into two
tetrahedra by choosing the desired diagonal. For a prism, a point is defined in the interior,
for instance its barycenter, and it is connected to the five faces of the element. The result is two
tetrahedra and three pyramids. Of the latter, the one (or those) that must ensure the junction
is (or are) cut into two tetrahedra (Figure 2.5, on the left). Similarly, a point is introduced
inside a hexahedron to be addressed and this point is joined to the six faces of the element. Six
pyramids result thereof. Of the latter, the one (or those) that must ensure the junction is (or are)
cut into two tetrahedra (Figure 2.5, on the right).

Figure 2.5. Decomposition of a quadrilateral face of a prism or of a hexahedron to ensure a
consistent transition. The other quadrilateral faces are not sliced

It should be noted that this method for decomposing the elements to be connected, while
conceptually simple, results, on the one hand, in a significantly larger number of pyramids than
the conventional approach and, on the other hand, in the removal of the prisms and hexahedra
that have been cut to incorporate the transition.

2.1.5. Decomposition of a high-degree element

The question of mesh and solution fields will be revisited in Chapter 5. Let us indicate that
an element of any degree has simply to be decomposed into subelements of the same degree.

The idea is to replace the Lagrange notation with a Bézier notation, then to rely on
De Casteljau subdivision algorithms and, at the end, to come back to the Lagrange nota-
tion.

Applying a De Casteljau subdivision algorithm, depending on the geometric nature of the
element being addressed, directly or not, produces the desired result. Consequently, introducing
the point of parameters (1/2, 1/2) in a quadrilateral builds, as such, the four subelements sought.
For a triangle, introducing the point of parameters (1/3, 1/3, 1/3) allows for decomposing the
initial triangle into three sub-triangles, but the initial edges are not sliced. Thereafter, in
Chapter 5, the expression of the element will be worked on directly, its edges will be sliced at
their midpoints and, thus, four sub-elements will be built.

2.2. Reconnection

As seen above, reconnecting two meshes (sharing, in principle, a portion of common bound-
ary meshed in an identical fashion in each of the initial meshes) is a useful operation. The re-
connection is therefore an eminently geometric operation designed to identify common entities,
coupled with a topological aspect related to the numbering of common entities (other vertices
and nodes if any), which will have to be compacted.

We shall also see the possibility of creating cracks or fissures in the presumed reconnection
area. In other words, two common entities (geometrically speaking) will not necessarily be
merged in this case.

In principle, we make8 the assumption that the reconnection involves boundary entities (ver-
tices, nodes, edges, and faces); as a result, the search for common entities will only focus on
boundary entities and not on all the entities of the two meshes being considered. More often than
not, the reconnection area follows a particular geometry such as a symmetry plane. Moreover, it
is assumed that common vertices (nodes) either actually coincide or are only very slightly differ-
ent (due to possible rounding errors). The other cases that do not verify these assumptions will
instead be seen as merging or immersions, as discussed later.

Finding, for a (boundary) node of the first mesh, the possible common (boundary) node of the
second mesh by iterating over the latter's nodes leads to a quadratic algorithm. To avoid this
deleterious effect, an accelerating structure will be used. Finding two common entities is quite
similar to a localization problem. It is thus not surprising to resort to a grid9 (Chapter 1) to
simplify this search and obtain a fast method.
• Definition and use of a grid for identifying common pairs
There are two ways to define a search structure, one through the construction of a grid (or box)
addressing both meshes and the other including a grid involving only one of them (Figure 2.7 and
Chapter 1). In any of the cases, the extrema (corners) of the grid are computed from the extrema
of the mesh(es) and the result is then slightly expanded. If xmin , xmax , etc., are the extrema (of
the expanded box) and if Δx = xmax − xmin , etc., we calculate Δ = max (Δx , Δy ) and, for
a given precision threshold ε, δ = 2 ε Δ is defined, which is the size of the cubes of the grid
(virtual or not) (Figure 2.6) and represents the resolution capacity of the grid. It should be noted
that δ can be seen as δ = Δ/n, where n is the number of grid cubes in one direction, that is to say
that n = 1/(2ε); the reason for the factor 2 will be seen below.
 Definition of a grid encompassing the two meshes
The grid is (virtually) composed of n × n cubes of size δ (we could have one size per direction)
and its lower left corner, C, has as coordinates xmin and ymin . With a given point of coordinates
x and y is associated the pair of (integer) indices i and j with i = ⌊(x − xmin )/δ⌋ and
j = ⌊(y − ymin )/δ⌋.

8. Otherwise, a few details of the algorithm will have to be adapted.


9. One could also look at whether using a tree-based structure brings additional advantages.
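The computation of the grid parameters and of the cube indices might be sketched as follows (two-dimensional case; the expansion factor of the box and the helper names `build_grid` and `cell_index` are our choices, not the book's):

```python
# Sketch of the grid set-up of this section, in two dimensions.

def build_grid(points, eps, expand=0.01):
    """Expanded bounding-box corner and cube size delta = 2 * eps * Delta
    (hence n = 1/(2 eps) virtual cubes per direction)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    xmin, xmax = min(xs), max(xs)
    ymin, ymax = min(ys), max(ys)
    dx, dy = xmax - xmin, ymax - ymin
    xmin -= expand * dx                        # slight expansion of the box
    ymin -= expand * dy
    big = (1.0 + 2.0 * expand) * max(dx, dy)   # Delta of the expanded box
    return (xmin, ymin), 2.0 * eps * big

def cell_index(p, corner, delta):
    """Pair (i, j) of the (virtual) cube of size delta containing p."""
    return (int((p[0] - corner[0]) // delta),
            int((p[1] - corner[1]) // delta))
```

With eps = 0.05, for instance, the unit square is covered by 10 virtual cubes per direction.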

If the grid is actually built, thus seen as an array with two indices, the array Grid(., .), the
point P (its number #P ) will be coded in the cube of index (i, j), either directly or indirectly
via an entry point and a linked list. In detail, if the cube is empty, one sets Grid(i, j) = #P ;
otherwise, the index Grid(i, j) (which is the index of the first point that was stored in this “cube”)
is recovered and then the list is iterated up to a free value at which #P is stored (Chapter 1).

Figure 2.6. Construction of the grid (virtual or not) and definition of its parameters,
in particular δ

Figure 2.7. A grid including both meshes (on the left) and a grid including only one of the two
meshes (on the right), with the choice of the latter and the corresponding grid (black or red)

If the grid is not actually built and this solution is preferred, a single-index array will be
defined, still denoted Grid(.). With point P are associated the two indices i and j and, by hashing,
a single index denoted ij, for example ij = i + j, up to a modulus. The point P (its number #P )
will be coded in the cube of index ij, either directly or indirectly via an entry point and a linked
list. In detail, if the cube is empty, one sets Grid(ij) = #P ; otherwise, the index Grid(ij) is

recovered (which is the index of the first point that was stored in this cube) and then the list is
iterated up to a free value at which #P is stored. The structure of the hashed grid has already
been described in Chapter 1. Resorting to hashing and the effective non-construction of the grid
(in the form of an array with two indices) allows for simulating very thin-sized cubes, δ, and
thus gives a very significant resolution capacity. On the other hand, hashing induces the fact that
the points discovered during the traversal of the linked list can be very close to the point being
examined or, on the contrary, far apart (therefore quickly discarded).
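A minimal sketch of such a hashed grid, with an entry point per key and a linked list chaining the points sharing that key (the hash function and the table size here are illustrative choices, not the book's):

```python
# Minimal sketch of the hashed (virtual) grid of Chapter 1.

class HashedGrid:
    def __init__(self, corner, delta, size=1 << 16):
        self.corner, self.delta, self.size = corner, delta, size
        self.head = [0] * size      # Grid(.): first point per key, 0 = empty
        self.link = [0]             # link[p]: next point with the same key
        self.points = [None]        # 1-based coordinates of the stored points

    def key(self, x, y):
        i = int((x - self.corner[0]) // self.delta)
        j = int((y - self.corner[1]) // self.delta)
        return (769 * i + 3079 * j) % self.size   # hashed index ij

    def insert(self, x, y):
        """Store the point, chaining it in front of the list of its cube."""
        self.points.append((x, y))
        p = len(self.points) - 1
        k = self.key(x, y)
        self.link.append(self.head[k])   # previous entry point becomes next
        self.head[k] = p
        return p

    def cube_members(self, x, y):
        """Numbers of all the points stored under the key of (x, y)."""
        out, p = [], self.head[self.key(x, y)]
        while p != 0:
            out.append(p)
            p = self.link[p]
        return out
```

Note that, as discussed above, two points returned by `cube_members` may be close to each other or, because of hash collisions, arbitrarily far apart; the distance test remains necessary.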

The first step of the method consists of “inserting” in the grid and its associated linked list
the vertices (of the boundary) of the first mesh Ω1 . Once done, the boundary of the second mesh
Ω2 will be explored to find identical pairs of vertices. In principle, there is only one solution
and since the two points coincide with one another, the key – the couple (i, j) or the hashed
value ij – is identical and the distance between the two points is zero. In practice, and because
of possible rounding errors, it is possible, on the one hand, to have several reasonable solutions
(Figure 2.8, on the right-hand side) (and to have to decide for a single one, the right one!) and,
on the other hand, not to have the same key for the two points examined (Ω1 and Ω2 , which, for
example, are not (virtually) in the same cube, as such it is possible to find the indices (i, j) and
(i + 1, j) and, in the event of hashing, a different starting index), whereas the distance between
the two points found in the end is not exactly zero. To avoid this initial error and to examine
all the relevant cubes, the analysis of a point P of coordinates (x, y) is accompanied by that
of eight (virtual) points inferred by a translation (with a step slightly less than half the size of
the cubes, shift ≈ 0.499 δ). These points are therefore defined by the couples (x, y ± shift),
(x ± shift, y), (x − shift, y ± shift) and (x + shift, y ± shift). In this way, a possibly relevant
(starting) cube cannot be missed. In Figure 2.8 (on the left), the four relevant cubes are shown:
the one where P is located but also the other three impacted by one of the virtual translations
of P .
 Definition of a grid including only one of the two meshes

The principle is the same, and the benefit is that the analysis of some specific points of the
second mesh is immediately deemed unnecessary if the point is not even in the (virtual) grid of
the first mesh, which significantly speeds up the search algorithm. To minimize work, the mesh
with the smallest number of elements will be “stored” in the grid (more specifically the smallest
number of boundary vertices).

 Utilization of the grid to identify the points to be reconnected

The solution with hashing is retained. Two identical points have the same index in the hash-
ing or an index deduced from a small translation (the aforementioned shift) which, in terms of
(effective) cubes, corresponds to a cube neighboring the initial cube. In contrast, two points of
the same index are not necessarily identical. To identify pairs of candidates, the starting box(es)
will be examined and the corresponding links traversed. The difficulty resides in cases where
there are, for a given vertex, several plausible solutions (if we rely only on the distance between
the points), some of which may lead to topological aberrations (and therefore to a wrong result).

The search for pairs must be fast, and so must the rejections be. If B, of coordinates (x, y), is a
vertex of the second mesh Ω2 and if x > xmax or x < xmin , etc., B is not in the grid of the first


Figure 2.8. On the left-hand side, to analyze the point denoted P , one looks at the eight
(virtual) points inferred via a translation according to the eight possible directions (to the
right, to the left, etc.), allowing for the possible examination of other cubes.
On the right-hand side possible ambiguities are shown (the distance between the two meshes
has been deliberately exaggerated). For the reconnection of Ω2 with Ω1 (one strives to
compare the points of Ω2 based on the grid of Ω1 ), the point B3 seems linked to
A1 or A4 , B4 with A4 or A5 , to be decided

mesh Ω1 and therefore it is not a candidate for reconnecting. Moreover, as already seen, hashing
induces the fact that the points discovered through the traversal of the link fields can be very
close or, on the contrary, very distant (and as such quickly discarded). Furthermore, consider B
a vertex of the second mesh Ω2 , its index (hashed) is calculated, the latter pointing to a vertex
of the first mesh Ω1 , a vertex named A1 . If the distance dist(B, A1 ) > δ, the cube found to
analyze point B is not the right one (it is the biased result of hashing) and this point B is not a
candidate for reconnection. If dist(B, A1 ) ≤ δ, A1 is a possible candidate or is the entry point
into the chain of points Ai associated with its cube, as potential candidates. Under reasonable
assumptions and an adequate choice of ε, thus of δ, the possibility of finding two points to
associate with a single one (of the other mesh) is excluded and the search proceeds very simply,
as follows.

For later use, an array whose length is np2 , the number of vertices of Ω2 , is initialized to 0,
that is number(1 : np2 ) = 0, and then the vertices (of the boundary) of this mesh are looped over:
– for a vertex such as B 10, find the index (the couple (i, j)) of the virtual cube containing it and deduce
therefrom the hashed index;
– if A is the vertex of Ω1 corresponding to this index and is at a distance greater than δ, move
to the following vertex B;
– if A is the vertex of Ω1 corresponding to this index and is at a distance dist shorter than δ:

10. Again, B designates a vertex or its index.



- if dist = 0, B is (exactly) coinciding with A, number(B) = A, END (move to the next


vertex);
- otherwise, traverse the linked list (starting from A or from a cube found from an index related
to the slight shift applied to B) and select the point Ai closest to B, number(B) = Ai , END (move
to the following vertex). If this solution is adopted, the vertex considered common is not strictly
identical to the two vertices thus identified; it may be either one (or, why not, the midpoint). In
other words, the elements of the (reconnected) ball are not exactly the elements of the two initial
“semi”-balls, which for their part are deemed correct. It would therefore be necessary, in principle,
to verify that the result remains valid (elements positive in the right sense: surface, volume or
Jacobian depending on the nature of the element involved).
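The identification loop, with the examination of the cube of B and of the cubes of its eight shifted copies, might be sketched as follows (a dict-based grid stands in here for the hashed structure described above; the names are ours):

```python
# Sketch of the identification loop building the array number(.):
# for each (boundary) vertex B of the second mesh, examine the cube of B and
# those of its eight shifted copies, and keep the closest vertex within delta.
import math

def cell(p, corner, delta):
    return (int((p[0] - corner[0]) // delta),
            int((p[1] - corner[1]) // delta))

def match_vertices(points1, points2, corner, delta):
    """points1, points2: dicts {vertex number: (x, y)}. Returns number(.)
    with number[b] the matching vertex of the first mesh, or 0."""
    cells = {}
    for a, p in points1.items():          # insert the first mesh in the grid
        cells.setdefault(cell(p, corner, delta), []).append(a)
    shift = 0.499 * delta
    number = {}
    for b, (x, y) in points2.items():
        best, best_d = 0, float("inf")
        seen = set()
        for sx in (-shift, 0.0, shift):   # B and its eight shifted copies
            for sy in (-shift, 0.0, shift):
                k = cell((x + sx, y + sy), corner, delta)
                if k in seen:
                    continue
                seen.add(k)
                for a in cells.get(k, ()):
                    d = math.hypot(x - points1[a][0], y - points1[a][1])
                    if d <= delta and d < best_d:   # reject beyond delta
                        best, best_d = a, d
        number[b] = best
    return number
```

As in the text, a vertex falling outside every occupied cube, or farther than δ from all its candidates, simply receives number 0 and is not reconnected.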

 Concatenate the two meshes into a single one

The array number(.) defined and filled above will enable the concatenation of the two meshes
into a single one. This operation concerns the coordinate array (removal of duplicates and global
numbering of vertices) and the array of elements (update the list of vertices (nodes) taking into
account the reconnected vertices and the overall numbering).

Let ne1 , ne2 , np1 and np2 denote the numbers of elements and vertices of the two initial
meshes; we denote by list1K (.), list2K (.) and listK (.) the list of the vertices of the element K of
an initial mesh or of the resulting mesh. We set np = np1 . We start (we have to start) with the
array of coordinates, with coor1 (., .) and coor2 (., .) the arrays relative to Ω1 and Ω2 and coor(., .)
the concatenated array. The vertex coordinates of the first mesh are carried forward and then
those of the second mesh, taking into account the common vertices (therefore already transferred).
Incidentally, we finish filling out the array number(.):
– Loop over the vertices of Ω1 :
coor(., 1 : np1 ) = coor1 (., 1 : np1 );
– end Loop over the vertices of Ω1 ;
– np = np1 ;
– Loop over the vertices S of Ω2 ;
– if number(S) = 0, do np = np + 1, number(S) = np, coor(., np) = coor2 (., S);
– end Loop over the vertices of Ω2 ;

The array of elements can then be updated. The list related to an element of Ω1 is carried forward
as such, and the list related to an element of Ω2 takes into account the common vertices whose
new number is the one they had in the first mesh, via the array number(.):
– Loop over the elements K of Ω1 :
listK (k) = list1K (k), for k = 1, ..., nbs;
– end Loop over the elements of Ω1 ;
– Loop over the elements K of Ω2 :
listK+ne1 (k) = number(list2K (k)), for k = 1, ..., nbs;
– end Loop over the elements of Ω2 ;

with nbs the number of vertices of an element.
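The two concatenation loops above can be sketched as follows (1-based indices as in the pseudo-code; the function name and the Python containers are our choices):

```python
# Sketch of the concatenation step, driven by the array number(.) filled by
# the identification loop (number[s] = 0 when vertex s of the second mesh is
# not a common vertex).

def concatenate(coor1, coor2, elems1, elems2, number):
    """Returns (coor, elems) for the merged mesh; `number` is completed so
    that every vertex of the second mesh receives its global index."""
    coor = list(coor1)                 # vertices of the first mesh, as is
    np = len(coor1)
    for s in range(1, len(coor2) + 1):
        if number[s] == 0:             # not a common vertex: new global index
            np += 1
            number[s] = np
            coor.append(coor2[s - 1])
    elems = [list(e) for e in elems1]  # connectivities of the first mesh
    for e in elems2:                   # renumber those of the second mesh
        elems.append([number[s] for s in e])
    return coor, elems
```

The same completed array number(.) then serves, verbatim, for transferring a solution field, as shown at the end of this section.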

The algorithm and its various stages are synthesized through the following scheme:

Reconnection algorithm [2.5]


– Extract the boundary vertices of mesh Ω1 .
– Extract the boundary vertices of mesh Ω2 .
– Compute the extrema of mesh Ω1 , deduce thereof the virtual grid.
– Insert in this grid the boundary vertices of Ω1 with a hashed index and a linked list.
– Loop over the boundary vertices of Ω2 , identify the coinciding vertices and establish
the match number(.).
– Concatenate the two coordinate arrays by completing number(.).
– Concatenate the two arrays (nodes) of elements using number(.).

Extracting the boundary vertices gives an immediate gain in time11 but is not strictly neces-
sary. This extraction requires the establishment of neighborhood relationships through the edges
or faces according to the dimension (Chapters 4 and 9 of Volume 1 and Chapter 1 of this volume);
a neighbor with number zero means that we are on the mesh boundary. Both ends of the edge
or the vertices of the corresponding boundary face are marked as boundary items. Moreover,
neighborhood relationships allow for finding the (usual) topological entities of the mesh, such as
balls, shells, etc., which may be necessary for validating the result.
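In two dimensions, this extraction can be sketched by counting the occurrences of each edge: an edge seen in a single triangle has a neighbor with number zero, hence is a boundary edge, and both of its ends are boundary vertices. A minimal sketch:

```python
# Sketch of the boundary-vertex extraction in two dimensions.

def boundary_vertices(triangles):
    """triangles: iterable of triples of vertex numbers. Returns the set of
    vertices lying on the mesh boundary."""
    count = {}
    for t in triangles:
        for k in range(3):
            e = tuple(sorted((t[k], t[(k + 1) % 3])))   # order-free edge key
            count[e] = count.get(e, 0) + 1
    # an edge shared by two triangles is internal; seen once, it is boundary
    return {v for e, n in count.items() if n == 1 for v in e}
```

In three dimensions, the same counting applies to the (triangular or quadrilateral) faces instead of the edges.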

Before continuing, we shall give a real example (Figure 2.9). This is a problem of homog-
enization, and is part of a study being conducted at UPEC12 that provided the geometry. The
simulation is based on a cubic cell in which there is a gyroid-type structure. This part (its
surface) is meshed and then the encompassing cube is defined by its meshed faces. The trace (the
edges) of the internal structure is easily identified in these meshes, to which, at some places, it is
exactly tangent. The images at the top of the figure show the part and three faces of the cube.
The bottom image, on the left, shows (through cross-sectioning) the reconnected surface while,
on the right side, the solid mesh of the object can be seen. Incidentally, this tetrahedral mesh
indicates that the reconnection of the different surfaces is correct: otherwise, the tetrahedral
mesher would have failed and detected errors. Regarding the general algorithm given above and
since these are surfaces, it is possible to extract the boundary vertices of the gyroid, but all the
face vertices (of the cube) must be considered. This reconnection situation could then be seen as
a case of merging (which will be discussed below).

 Are there any guarantees about the result?

11. And in robustness, it is indeed impossible to identify a boundary vertex of Ω2 with an internal vertex of
Ω1 , even if it is the closest.
12. Paris-Est Créteil University.

If the distances between the vertices of the mesh Ω1 are at least 2 ε, the algorithm is correct.
Otherwise, a vertex of Ω2 could be identified with two vertices of Ω1 and taking the
nearest one is not a guarantee. Only a topological verification makes it possible to decide. In
other words, the vertices are identified but so must the edges be13, which shows again, unsurprisingly,
the fundamental role of the edges in a mesh.

Figure 2.9. On the top left, the gyroid structure that will be immersed in a cube.
On the top right, three of the cube faces; the faces are meshed and carry the trace of the
meshes of the gyroid tangent to them. At the bottom left, the surface obtained by joining
(cross-cutting). At the bottom right, the cube is meshed with tetrahedrons (cross-cutting)

 Purely geometric or physical-geometric reconnection

• In a purely geometric reconnection, common vertices are determined as before. Common


edges are then those whose extremities are common, and common faces are those whose vertices
are common. Considering both common edges and faces, and not only common vertices, allows
easy verification of Euler’s formulas (Volume 1, Chapter 6 and Volume 2, Chapter 5) and as such,
a way of verifying the accuracy of the resulting mesh is obtained.

13. In two dimensions but also in three dimensions where this identification is sufficient.

• In a physical-geometric reconnection, in addition to the geometric aspect, a physical aspect


is taken into account (related to the problem). Including information of a physical nature in a
mesh is done by associating physical attributes with the entities of this mesh. Then, two vertices
will be said to be common if, on the one hand, they are geometrically common and if, on the
other hand, their two physical attributes are identical. Similarly, the common edges and faces
will be defined with these two aspects. The introduction of physical criteria is therefore a simple
way of defining cracks. In reality, two edges at the same position exist at a given time and, due to
the nature of the problem being addressed, move away from each other at a later stage: a crack
appears.

 High-degree mesh reconnection


Finally, we comment on the case where the meshes addressed have not only vertices but also
nodes on their edges (their faces or in their interior). Several questions addressed in Chapter 6 of
Volume 2 related to the management of non-vertex nodes will be revisited.

As in the standard case, (only) the coinciding vertices will be searched for and the edges (and
then the faces) will have to be looked at explicitly. The fact of looking at the edges will strengthen
the guarantee of the validity of the result since a topological side is added to the process. Once
identified, the edges will be processed in order to ensure a unique definition of an element (of
the first mesh) to its neighbor (of the second mesh) sharing such an edge. This appears in the
description of the elements (via a list of their vertices and nodes which, let us recall, gives an
implicit definition of the edges and faces).

 Reconnection of solutions. Denote by sol1 and sol2 the initial solutions and by sol the
reconnected solution.

With the array number(.) defined previously, we have:


– Loop over the vertices of Ω1 :
sol(1 : np1 ) = sol1 (1 : np1 );
– end Loop over the vertices of Ω1 ;
– np = np1 ;
– Loop over the vertices S of Ω2 :
if number(S) > np1 , do np = np + 1, sol(np) = sol2 (S);
– end Loop over the vertices of Ω2 ;

2.3. Merging

Unlike reconnection and although close to it (at first glance), the merging operation14 consists
of creating a single mesh, Ω, from two meshes, Ω1 and Ω2 , presenting an area potentially deemed
common. This area can be part of the boundary but not necessarily (therefore a “slight” overlap or

14. Which does not have the same meaning as the concept of merging in divide-and-conquer methods.

a really substantial overlap). If this is part of the boundary, and unlike a simple reconnection, we
shall look at the case where the two entities to be identified are not meshed in the same way (con-
sistency problem). Another interesting situation is the existence of an area where both meshes
are close enough to decide to join them (at best). Apart from this particular case, in a merging
problem, it is the nature of the intersections that differentiates this problem from a simple
reconnection. The intersection between the two meshes is not limited to common or very close
and topologically identical entities, but might result in points (vertices of one mesh lying in the
other, or intersections between edges and faces of one mesh and edges and faces of the other).
Thereafter, the merging of two meshes is a singularly more complicated operation (but
unfortunately rich in really critical applications15 and thus very useful) than a simple
reconnection. In particular, in this sense, only
one mesh can be considered, which presents pathologies (for example, self-intersections and/or
overlaps) and merging will then consist of making it right; this will be referred to here as mesh
cleaning. Figure 2.10 is an attempt to give an indication of some of the situations that we are
concerned with. One last operation, close enough through certain aspects, is the immersion of
one mesh into another, and the mesh to be immerged should remain unchanged, which makes it
the particularity of this operation. This point will be mentioned at the end of the section where we
shall see the similarities and especially the differences between the more conventional merging
problems, which are discussed below.

Figure 2.10. From left to right, two meshes with a “real” or a “light” overlap,
two meshes with a boundary area that can be considered as being common,
although not exactly coinciding; two lines (boundary of one or two meshes)
geometrically coinciding but topologically non-compliant

The difficulties encountered increase when moving from situations in the plane to volumes or,
even more so, to surfaces. The first question is to characterize and then to quickly detect
a pathology that could (should) be treated, and it is natural to find localization problems in this
phase. A subsidiary question, not without value, is to identify the cause of any such pathology
(see below). The next question is to find out how the pathology can be removed, once again
quickly, by using any of the many tools we have at our disposal, or by developing specific al-
gorithms. In the conventional toolkit, generally very effective in the case of simplicial meshes
and, more often than not, much less effective otherwise, one will find: the insertion of points
(local insertion by a simple decomposition or insertion via a cavity algorithm), moving vertices,

15. Which are precisely the ones that are of interest in practice.

edge flipping, edge removal by merging their two ends, vertex, edge or face reconnection (in
the sense seen above) and the removal of an element or part of an element (found twice
in the mesh, as in an overlapping case). Among generic tools, we will essentially have those that
allow for detecting intersections (edge–edge, edge–face, face–face which, in fact, is detected via
the edge–face case) and, if necessary, those relating to the actual calculation of intersection points.

• Origins of deleterious pathologies


Two kinds of pathologies will be distinguished. One is related to simple (rounding) errors in
the geometric manipulation of the mesh (for example, after a rotation, the mesh obtained does
not exactly coincide with the initial mesh at the level of the boundary area presumed to be
common, presenting either a slight overlap or, on the contrary, a slight spacing). The other case,
substantially more problematic, is linked to a real construction or even design failure.

In the first case, slight offsets have merely to be managed (thus mainly proximity problems),
the topology being correct.

In the second case, in addition to a problem of geometric coincidence, a topological
inconsistency will be encountered. A notable cause behind these phenomena is that the design phase,
based on a CAD system, was not conducted in accordance with the meshing process (and its con-
straints) but uniquely with geometric concerns (or even simply aesthetic ones). In other words,
the patches built do not always verify, themselves, the classic properties of meshes, namely com-
pliance and therefore a topological joining of one patch to another, tightness and thus geometric
patching of one patch to the other and consistency in terms of size16 of one patch to another.
These recurring issues appear either directly at the patch level or through the meshing of
these same patches. Directly processing the patches (of the CAD) within the system itself and
before meshing is the main topic but falls beyond the scope of this book. We are just going to
propose to see, if at the mesh level (reputedly correct for each patch taken individually), solutions
can be found to the existing issues. In addition, it should be noted that the time spent to correct
a CAD at its own level can be very significant (or even preponderant) compared to the meshing
time of the object thus defined. This rather manual correction process is obviously tedious and
any more automatic solution is preferable, even if it does not deal with every case.

In the following, through a few situations (sometimes very simple or even slightly simplistic),
we will try to make readers aware of the various difficulties expected.

• The simple case of a pure two-dimensional or three-dimensional situation (volumes)
without compliance issues

Here, we recognize the case of a reconnection, but with the added difficulty that the vertices
to be merged do not exactly coincide. The solution is then to reuse the patching method, with
some adjustments. The grid used may involve only one of the two meshes or, in fact, both
meshes. Unlike a reconnection, the (virtual) step, δ, of the grid is larger (to capture pairs even
in the presence of rounding errors and slight differences) and all the vertices (not just those on the
boundary) are coded: first those of the first mesh, unconditionally, then those of the second
mesh17 if they are potentially close to a vertex of the first mesh and deserve analysis. Beyond
this geometric aspect, it is useful to have information of a topological nature, in other words, to
be able to analyze certain balls or shells (typically those related to a vertex that is a potential
candidate for patching).

16. Neighboring patches of disparate sizes will, after meshing, give rise to elements of disparate sizes as
well, thus, in most cases, of poor quality.

68 Meshing, Geometric Modeling and Numerical Simulation 3
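As an illustration, the grid coding just described can be sketched in a few lines (a hedged 2D sketch, not the authors' code; the names `cell` and `near_pairs` and the tuple-based data layout are assumptions of ours). Scanning the 3×3 block of cells around each query point plays the role of the light shift of footnote 17.

```python
# Sketch: bucket the vertices of the first mesh into a coarse virtual
# grid of step delta, then, for each vertex of the second mesh, scan its
# own cell and the neighboring cells so that pairs split by rounding
# errors across a cell boundary are still captured.

def cell(p, delta):
    # Integer cell key of point p for a grid of (virtual) step delta.
    return tuple(int(c // delta) for c in p)

def near_pairs(verts1, verts2, delta):
    grid = {}
    for i, p in enumerate(verts1):
        grid.setdefault(cell(p, delta), []).append(i)
    pairs = []
    for j, q in enumerate(verts2):
        cx, cy = cell(q, delta)
        # The "light shift": examine the 3x3 block of neighboring cells.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in grid.get((cx + dx, cy + dy), []):
                    p = verts1[i]
                    if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= delta * delta:
                        pairs.append((i, j))
    return pairs
```

A real implementation would, of course, hash the cell keys into a fixed-size table rather than use a dictionary, as described in Chapter 1.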

To find these sets (balls or shells), neighborhood relationships are needed. In this way, we
shall be able to detect deleterious pathologies and remove them. Edge analysis is indeed paramount
to ensure the topological validity of the result. In two dimensions, a vertex reconnected without
at least one of its incident edges being so is a situation that we consider invalid. In three
dimensions, a vertex reconnected without at least one of its incident edges being connected is
likewise deemed invalid.
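The incident-edge criterion can be tested mechanically once the identified vertex pairs are known. The sketch below uses a hypothetical data layout (edges as vertex-id pairs, `matched` mapping second-mesh vertices onto first-mesh vertices) and flags every matched vertex none of whose incident edges is matched.

```python
def invalid_merges(edges1, edges2, matched):
    # matched maps vertices of the second mesh onto their identified
    # vertices of the first mesh.
    edge_set1 = {frozenset(e) for e in edges1}

    def image(e):
        # Image of a second-mesh edge under the identification, or None
        # if one of its ends was not identified.
        return frozenset(matched[w] for w in e) if all(w in matched for w in e) else None

    bad = []
    for v2 in matched:
        incident = [e for e in edges2 if v2 in e]
        # A matched vertex whose incident edges are all unmatched
        # betrays a topologically invalid reconnection.
        if incident and not any(image(e) in edge_set1 for e in incident):
            bad.append(v2)
    return bad
```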

Intersections may be present, for example when a vertex of the second
mesh is contained in an element of the first mesh or belongs to the boundary of the latter. Once
an intersection (at least one) is detected, two situations are met:
i) the intersection disappears when the two vertices involved are identified (Figure 2.11, top
and bottom left). The topology of the result is valid, and the common edges have been
connected together. The result is geometrically valid if the balls of the connected vertices remain
valid. Indeed, since B2 does not exactly coincide with A2 , the identification induces a displacement of this
vertex as an indirect consequence of the reconnection, thus requiring an a posteriori effective
validation and a possible correction (again a displacement);
ii) the intersection also concerns an element that is not directly on the boundary. In this case, in-
tersections must be explicitly addressed (due to the presence of an overlap) (Figure 2.11, bottom
right). In the example shown, the vertex closest to B2 is not A2 but A5 ; patching B2 with A5
leaves the vertex A2 orphaned, along with the edges [A1 A2 ] and [A2 A3 ]; consequently,
the result is not valid. Two ways of dealing with this case are: merging A2 with A5 (merging
of an edge of Ω1 ), which restores a situation similar to the previous ones, or the insertion into each
mesh of the vertices of the other that it contains, followed by the calculation and insertion of the in-
tersections between the edges of both meshes (at this stage) and finally the removal of the duplicate
elements necessarily built by these manipulations. This intersection issue, which also occurs for
surfaces, will be covered in detail.

The other case to be considered is that where the area deemed common is actually a slit; the
two meshes are very close but do not “touch” each other. The two meshes will have to be joined
together by closing the underlying slit (Figure 2.12). Pairs of closest vertices are identified, one of
Ω1 , the other of Ω2 . These vertices are merged, just like the initial boundary edges,
enabling the topological validity of the result to be controlled. Given that the vertices identified
were not identical, the operation hides a movement of vertices (of at least one of them); it is
therefore necessary to validate the newly formed balls.
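In two dimensions, this geometric validation of the newly formed balls reduces to a signed-area (positivity) test on each triangle of the ball; a minimal sketch, with `ball` and `coords` as assumed data structures:

```python
# Sketch: after a merge moves a vertex, every triangle of its ball must
# keep a positive signed area, otherwise the merge must be corrected
# (by another move or by topological operators).

def signed_area(a, b, c):
    # Twice-halved cross product of (b - a) and (c - a).
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def ball_is_valid(ball, coords, eps=1e-12):
    # `ball` lists the triangles (vertex-id triples) around the moved vertex;
    # `coords` maps a vertex id to its position.
    return all(signed_area(*(coords[v] for v in t)) > eps for t in ball)
```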

17. With the same tactic as for standard reconnection, a light shift making it possible to capture a neighbor-
ing (virtual) cube, which thus participates in the analysis.
Mesh Transformations, Patching, Merging and Immersion 69

Figure 2.11. On top and bottom left, the pairs (Ai , Bi ) for i = 1, . . . , 4 are deemed identical. The
vertex B2 of the mesh Ω2 closest to A2 is on the boundary edge,
in a boundary element or in a different but neighboring element; the reconnection ensures the
existence and the uniqueness of the edges whose ends are connected together, and the
numerical inaccuracy has disappeared. At the bottom right, the pairs (Ai , Bi ) for i = 1, . . . , 4 are
deemed identical except the pair (A2 , B2 ). The vertex B2 is close to the non-boundary vertex
A5 ; a direct reconnection would produce a topological error and the edge [A1 A2 ] would
remain dangling

Figure 2.12. The closest pairs of vertices are merged,
and the incident edges (boundaries) are merged to ensure the topological validity
of the result

Topological verifications rely on the identified edges (faces) and the possible detection of
created orphaned edges (faces). Geometric verifications (positivity) concern the balls built
around the merged vertices. Relying on the nearest vertices requires that there be no ambiguity;
barring an ambiguous case (as in Figure 2.11, bottom right), the nearest vertex is the (right) solution.
Otherwise, it is necessary to implement a more or less complicated choice strategy. To finish with this case, the choice
of the nearest vertex (actually masking a move for one of the two vertices thus identified) may
lead to the creation of elements with negative surface (volume, Jacobian) that will either need to be
straightened (again with a move) or removed (through topological operators, flips, merging,
etc.).
• A pure two-dimensional or three-dimensional situation (volumes) with a potential
problem of compliance

The previous case is met once more, here with a new difficulty: the two initial meshes are
not really compatible (a consistency problem) at the level of their boundaries deemed common, and
the reason is not simply an accuracy problem. The idea is in line with the concept of
intersection and will consist of projecting onto the boundary of mesh Ω1 the concerned vertices
of the boundary of mesh Ω2 and vice versa. This will ensure the compliance property. The
affected elements of the initial meshes are decomposed to take into account18 all the vertices
inserted. This decomposition is immediate for simplicial meshes because the vertex opposite
the affected boundary edge (face) sees all of the entities of the partition (edges or triangles)
of the boundary entity being addressed (edge or triangle). On the other hand, it is extremely difficult
(if not impossible) to correct compliance defects in the case of non-simplicial meshes (unless
simplicial elements are introduced, among others, or the relevant elements are sliced at the level of their
faces deemed common, an operation described above).

Figure 2.13. Vertex B2 is introduced on the edge [A1 A2 ]. Vertex OP13 , mesh Ω1 , sees the
edges of the partition, [A1 B2 ] and [B2 A2 ]; the decomposition of the element concerned
is trivial and establishes compliance

When the coincidence is not exact, the projections of the (non-compliant) vertices will
require a geometric validation prior to insertion.
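The projection-then-split operation of Figure 2.13 is trivial for a simplicial mesh; a sketch of the split, under the assumption that elements are plain vertex-id triples:

```python
# Sketch of the trivial simplicial split: a vertex p is inserted on the
# boundary edge (a, b) of triangle (a, b, c); since the opposite vertex
# c sees both sub-edges, the element is replaced by two triangles and
# compliance is restored.

def split_on_edge(tri, edge, p):
    a, b = edge
    # The vertex of the triangle not on the split edge.
    c = next(v for v in tri if v not in edge)
    return [(a, p, c), (p, b, c)]
```

For a quadrilateral element no such visible opposite vertex exists, which is exactly why the text restricts this correction to simplicial meshes.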

• The case of a surface19 with self-intersections or covers

This situation is the most complicated because it mixes all the difficulties and, in particular,
the compliance that the method will have to restore. To simplify the discussion, we are only
going to consider meshes composed of triangles, reverting to this case, by decomposition, if
the elements are quadrilaterals. Issues emerge about the detection of intersections, the calculation
of intersection points between the edges and faces, the insertion of these two types of points to
restore compliance and the construction of a single mesh. To solve this problem, we are going to
propose a surprisingly simple method. Two situations will be distinguished depending on the nature

18. And the quality of the result can therefore be affected.


19. This situation may (even if it seems slightly artificial) exist in the pure two-dimensional case and is
addressed in the same way.

of the intersection detected that may involve two non-coplanar faces (simple case) (Figure 2.14)
or two coplanar faces (more difficult case) (Figure 2.15):

Figure 2.14. Intersection between two non-coplanar faces. On the left, two edges of the green face cut
the other face. On the right, one edge on each face cuts the other face

Figure 2.15. Intersection between coplanar faces corresponding to a cover. Some of the
possible patterns, showing 0–3 edges of one face cut
by the other face

i) the two faces involved are not coplanar. Two patterns are essentially found (so-called clean
cases), along with the associated borderline cases. The intersection concerns two edges of one of the
two faces (clean case) or one edge of each face (clean case), which are similar situations. But
degenerate situations exist: two edges of a face and one edge of the other, two edges of a face
and a vertex of the other and even a vertex of one of the faces lying on an edge of the other or
inside the other face, reducing the intersection to one point;
ii) the two faces involved are coplanar. There are many possible patterns.
Given one face, the other can cut it along 0, 1, 2 or 3 edges. In the first case, one of the faces is
entirely contained in the other; in the other cases, the edges cut one another and/or one or two
vertices of one of the faces are contained in the other. Unlike the previous case, there is a cover of
all or part of a face by all or part of the other.

It remains to be shown how to quickly detect intersections and how they should be resolved to
obtain a valid and compliant mesh.

 Intersection detection

With every vertex of either mesh, an element (a germ) of its ball is associated. The
vertices of one of the meshes, for example Ω1 , are classified in a grid (or a tree); this operation
should now be familiar. Consider the vertices of the other mesh, Ω2 . For a given vertex
of Ω2 , we look in the grid to see whether there is a relatively close vertex of Ω1 . If such a vertex
is found, starting from the associated germ, its ball is examined. We look for an
element of this ball (in Ω1 ) that cuts an element of the ball of the vertex being examined (in Ω2 ), thus
defining one (or two) intersection points. When an intersection is detected, the next one can be
directly searched for by returning to the grid or, more finely, by neighborhood.
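The ball-versus-ball comparison can be organized as below (a 2D sketch with names of our choosing; `balls1`/`balls2` map a vertex to the triangles of its ball, already recovered from the germ and the neighborhood relations, and a coarse bounding-box overlap stands in for the exact triangle–triangle intersection predicate):

```python
def bbox(tri, coords):
    # Axis-aligned bounding box of a triangle given as vertex ids.
    xs = [coords[v][0] for v in tri]
    ys = [coords[v][1] for v in tri]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(b1, b2):
    return b1[0] <= b2[2] and b2[0] <= b1[2] and b1[1] <= b2[3] and b2[1] <= b1[3]

def candidate_intersections(balls1, coords1, balls2, coords2, v1, v2):
    # Compare the two balls element by element; a real implementation
    # would follow this coarse filter with the exact intersection
    # computation between edges and faces.
    out = []
    for t1 in balls1[v1]:
        for t2 in balls2[v2]:
            if boxes_overlap(bbox(t1, coords1), bbox(t2, coords2)):
                out.append((t1, t2))
    return out
```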

 Decomposition and re-establishing consistency

If the two faces involved are not coplanar, each of them just has to be sliced to reveal the
intersection (a segment). The decomposition does not present any difficulty. Two conventional
situations will be distinguished: the intersection point is on an element edge or inside an element.
If the intersection point is on an edge, the decomposition consists of connecting this intersection
point to the vertex opposite the edge that carries it, and this in the two elements involved (for
manifold surfaces) or in all relevant elements (for non-manifold surfaces).
Four or more triangles will then be formed, two per element. Depending on the order in which
the intersection points are dealt with, one solution among several will be obtained; the solution is
thus not unique. If the intersection point is internal to an element, the decomposition consists of
connecting it to the three vertices of the element, yielding three triangles. The main
strength is that the decomposition does not introduce any new intersections. In addition, the result
is necessarily compliant and will comprise all elements originating from the decomposition,
instead of the two (or more) initial elements.
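For an intersection point internal to an element, the decomposition is a single function (a sketch, with the element as a vertex triple and `p` the id of the new point):

```python
# Sketch: the new point sees the three vertices of the face, so the
# element is replaced by three triangles; no new intersection can be
# created because each new edge ends at the inserted point.

def split_interior(tri, p):
    a, b, c = tri
    return [(a, b, p), (b, c, p), (c, a, p)]
```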

The case where the two faces involved are coplanar is more complicated because the (direct)
decomposition motivated by an intersection can reveal new intersections. This phenomenon can
be observed in most situations depicted by simply referring to Figure 2.15. Consequently, the
merging problem will be formulated differently for covers existing in a plane.

A so-called master mesh, denoted by Ω1 , and a slave mesh, denoted by Ω2 , are defined. The
aim is to build a mesh Ω that covers the domain formed by the union of the domains covered
by the two initial meshes. Five regions will be distinguished (Figure 2.16): a region formed
by the elements of Ω1 without intersection with mesh Ω2 ; a region formed by the elements of Ω2
without intersection with mesh Ω1 ; a region composed of the elements that are at the same
time in Ω1 and Ω2 . Finally, for the last two regions, we find the region formed by the elements of Ω1
having an intersection with elements of Ω2 that are not in Ω1 , and the region formed by the
elements of Ω2 that are not in Ω1 and have an intersection with elements of Ω1 . These last two
regions are two bands (one consisting of elements of Ω1 , the other of elements of Ω2 ) or strips
that partially overlap.

The goal is to retain certain elements of Ω1 and Ω2 and to modify the mesh of the two bands
in order to ensure the uniqueness of the resulting mesh (no covering) and the consistency of this
mesh.

Figure 2.16. On the left, the meshes Ω1 and Ω2 that partially overlap. In the middle,
the master mesh is Ω1 (in red) and the different regions with connecting
bands (hatched) can be discerned. On the right, the master mesh is Ω2 (in red)
and the corresponding regions can be seen, including the two connecting bands

The treatment is very simple (on paper). We essentially rely on the conventional point in-
sertion operators and those for forcing edges by subdivision, rather than defining, around the
connection area (by deleting elements), a cavity to be re-meshed (an approach that will
be retained in the case of an immersion, see below).

Consider that Ω1 is the master mesh; an element is said to be in Ω1 (respectively, Ω2 ) if its
three vertices are in Ω1 (respectively, Ω2 ).

The regions are defined as follows:


– [region 11]: elements of the mesh Ω1 without intersection with Ω2 , these elements are in
the resulting mesh, unchanged;
– [region 22]: elements of the mesh Ω2 without intersection with Ω1 , these elements are in
the resulting mesh, unchanged;
– [region 1+2]: elements of the mesh Ω1 also in Ω2 , these elements are in the resulting mesh,
unchanged;
– [region 2+1]: elements of the mesh Ω2 also in Ω1 , these elements are not part of the
solution.
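The classification of the slave elements by vertex coloring can be sketched as follows (illustrative names of our choosing; `verts_in_1` is the set of Ω2 vertices located in Ω1 , obtained by the localization step):

```python
# Sketch: a slave element is dropped when fully covered by the master
# (region 2+1), kept unchanged when fully outside (region 22), and sent
# to the slave band (to be remeshed) otherwise.

def classify_slave(elems2, verts_in_1):
    kept, band, dropped = [], [], []
    for t in elems2:
        inside = sum(1 for v in t if v in verts_in_1)
        if inside == len(t):
            dropped.append(t)      # region 2+1: covered by the master
        elif inside == 0:
            kept.append(t)         # region 22: unchanged in the result
        else:
            band.append(t)         # slave band: to be remeshed
    return kept, band, dropped
```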

Next, we address the two bands (Figure 2.17), the master band and the slave band, in the following
way:

i) insertion of only the boundary vertices (namely those on the boundary of Ω1 ) of the master band
into the elements of the slave band. A significant side effect of these insertions is that it will be
impossible to find a master edge strictly inside a slave element; every edge of the boundary of
Ω1 is “hooked” by an element of Ω2 , which clearly simplifies the problem.

There are then two ways of proceeding with the process. The first consists of performing the
following steps (ii) to (v):

ii) computing the intersection points, with the master edges, of the slave edges emanating from a vertex of Ω2 not located inside
Ω1 . Only the intersection point closest to the vertex not located in Ω1 is considered (when the
slave edge cuts more than one master edge);

iii) inserting these points into the slave band;

iv) removing any element resulting from this decomposition contained in Ω1 ; the overlap is
thus deleted. This suggests a trick: it is not really necessary to mesh the portion of an element of
Ω2 that will end up in Ω1 and is thus destined to be destroyed;

v) inserting these same points into the master band; consistency is thereby established.

The second method is a conventional method for forcing an edge in a mesh (Chapter 6 of
Volume 1), namely:

ii)bis master edge forcing20 into the slave mesh;

iii)bis deletion of any element from this forcing contained in Ω1 .

The result of this process mechanically meets the original objectives: no more overlaps, and
consistency ensured by construction. On the other hand, the quality of the elements built is not a
criterion taken into account; it will therefore be worthwhile to optimize the result
(at least within this area).
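Step (ii) of the first method rests on a plain parametric segment–segment intersection, keeping only the cut closest to the slave-edge end outside the master mesh; a sketch (function names are ours, and the degenerate parallel case is simply skipped here):

```python
def seg_intersection(p, q, a, b):
    # Parametric intersection of [p, q] and [a, b]; returns the
    # parameter t along [p, q] and the point, or None when the
    # segments do not cross.
    rx, ry = q[0] - p[0], q[1] - p[1]
    sx, sy = b[0] - a[0], b[1] - a[1]
    den = rx * sy - ry * sx
    if den == 0.0:
        return None                      # parallel (coplanar case apart)
    t = ((a[0] - p[0]) * sy - (a[1] - p[1]) * sx) / den
    u = ((a[0] - p[0]) * ry - (a[1] - p[1]) * rx) / den
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return t, (p[0] + t * rx, p[1] + t * ry)
    return None

def closest_cut(p_out, q, master_edges):
    # p_out is the slave-edge end not located in the master mesh; keep
    # only the intersection point closest to it.
    hits = [h for h in (seg_intersection(p_out, q, a, b)
                        for a, b in master_edges) if h]
    return min(hits, default=None, key=lambda h: h[0])
```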

In practice, the vertices of the two meshes are colored in order to classify the elements. This
is therefore a localization problem. The insertion algorithms are as crude as possible, and this
appears to be the best solution. Inserting a vertex of Ω1 into an element of Ω2 is done by slicing it
in three, or by slicing in half the two elements that share the edge on which the point is located, if
this is the situation we face. Inserting an intersection point, therefore located on an edge, is done
by connecting it to the vertex opposite this edge. In the case where there are several points to
be inserted, the insertion order has no real impact on the result (even if the latter is not identical).

In more detail, critical configurations (for example, when almost coinciding points are found)
will have to be considered.

20. With the particular case of an edge of which one end is not in the slave mesh; this is a case that must
be specifically addressed by inserting the intersection point.

• For two plane meshes21 with self-intersections or covers.

Figure 2.17. The elements of the two strips, the master strip (in red) and the slave strip (in
black). On top (the left side), the initial state. On top (the right side), insertion into the slave
elements of the vertices, denoted as 1, 2 and 3 of the master strip. It should be noted that a
slave (hatched) element becomes completely contained in the master and therefore will be
removed. At the bottom (the left side), there is insertion into the slave strip of the intersections
between the desired edges. Note that (hatched) slave elements located “below” the boundary
of the master are completely contained in the master and therefore will be removed. At the
bottom (the right side), there is insertion into the master strip of the same intersection points.
The result is consistent with no overlapping

In fact, what has just been described to address coplanar faces with self-intersections or
covers applies here as is. The techniques to be used are therefore localization, insertion
and the calculation of edge–edge intersections.
• For two solid meshes22 including self-intersections or covers
Exactly the same process as in the plane case will be followed, with a master mesh and a slave
mesh. The processing of regions 11, 22, 1+2 and 2+1 is identical. The elements neither classified nor deleted
at this stage form one (or several) strip(s). The elements of a strip have 1, 2 or 3 vertices in one region
and the other vertex (vertices) in the other. Processing a strip will make it possible to remove
the overlap and establish consistency. The latter is no longer achieved through edges
but through triangular faces. These triangles are derived from the decomposition of the faces of the
master mesh, a decomposition caused by the intersections between a master element and a
slave element. On the slave side, the insertion of the master vertices and of the edge–edge or
edge–face intersections will define two regions formed by the tetrahedra built during these insertions.
One of these regions is contained in Ω1 , so its elements will be deleted and there is no need to

21. Which can be seen as a Boolean operation, as the union of two meshes.
22. Which can be seen, again, as a Boolean operation, as the union of two meshes.

create these items. The other region will constitute the connection between (what remains of) Ω1
and Ω2 . To restore consistency, the “boundary” faces of Ω1 will be sliced by inserting the same
intersection points, and this is done analogously. Finally, the tetrahedra of Ω1 whose face
has been sliced are themselves cut to take this face decomposition into consideration. Again,
since the vertex opposite the sliced face sees this face, tetrahedra simply have to be formed
with this vertex and each triangle of the face decomposition.
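Rebuilding a tetrahedron from its opposite vertex and the decomposed face is a one-liner (a sketch; orientation bookkeeping omitted):

```python
# Sketch: the tetrahedron whose face was sliced is rebuilt by joining
# its opposite vertex (which sees the whole face) to each triangle of
# the face decomposition.

def retetrahedralize(apex, face_tris):
    return [(apex, a, b, c) for (a, b, c) in face_tris]
```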

The main point to ensure is that the decomposition of the faces be unique, whether seen from Ω1
or from Ω2 (at least from what remains of it). It will be seen that this issue is not difficult at all. It is amusing
to observe that the algorithm proposed above (for a surface or a plane) applies virtually
unchanged. In more detail, the process follows these steps:

i) insertion of only the boundary vertices (namely those on the boundary of Ω1 ) of the master band
into the elements of the slave band. As in the previous case, these insertions ensure that it will
be impossible to find a master edge or face strictly inside a slave element; every boundary edge and
face of Ω1 is thus “hooked” by an element of Ω2 . This simple detail clearly simplifies
the problem. The process proceeds using the intersection insertion method23 ;

Figure 2.18. The different types of intersection between the two strips. For a given master
face, a notion very similar to that of a pebble is met again, a pebble being the set of shells of
tetrahedra wrapped around an edge that pierces a master face. From left
to right: first, slave tetrahedra going through a master vertex; an edge–face intersection
(slave edge, master face); a face–edge intersection (slave face, master edge); and three intersection cases with
one, two or three vertices on either side of the master faces

ii) computation of the intersection points of the slave edges emanating from a vertex of Ω2 not located
inside Ω1 . The intersection concerns a master face or one of its edges. Only the intersection
point closest to the vertex not located in Ω1 is considered (when the slave edge cuts more than
one master edge or face);

23. The conventional method (Volume 1, Chapter 6) is considered too technical in three dimensions.

iii) insertion of these points into the slave band;

iv) computation of the intersection points of the master edges with the faces of the band
of Ω2 of which a vertex is not located in Ω1 ;

v) insertion of these points into the slave band;

vi) removal of any element from this decomposition contained in Ω1 ; the overlap is thus
eliminated. This suggests the same trick as before: it is not really necessary to mesh the portion
of an element of Ω2 that will end up in Ω1 and is thus destined to be destroyed;

vii) insertion of every intersection point into the master band; consistency is established.

The insertion algorithms utilized here are as crude as possible, and this appears to be the
best solution. Inserting a vertex of Ω1 into an element of Ω2 is done by slicing it in four if the point is
internal, by cutting in half the elements sharing the edge (the shell) if the point lies on an edge
or, finally, by cutting into six the two elements that share the common face upon which the point
is located.
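Two of these crude 3D operators, sketched with tetrahedra as vertex-id quadruples (orientation bookkeeping deliberately omitted):

```python
# Sketch: a point interior to a tetrahedron replaces it by four; a
# point on a face replaces each of the two elements sharing that face
# by three (six new elements in total).

def split_tet_interior(tet, p):
    a, b, c, d = tet
    # Replace each vertex in turn by the new point.
    return [(p, b, c, d), (a, p, c, d), (a, b, p, d), (a, b, c, p)]

def split_tet_on_face(tet, face, p):
    # Vertex of the tetrahedron not on the split face.
    opp = next(v for v in tet if v not in face)
    a, b, c = face
    return [(opp, a, b, p), (opp, b, c, p), (opp, c, a, p)]
```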

Inserting an intersection point in Ω2 , therefore located on an edge of Ω2 , is done by slicing
each element of the shell of this edge in two. Inserting an intersection point located on a face is
done by connecting it to the vertex opposite this face in each of the elements sharing the face.
In the event that there are several points to be inserted, the insertion order has no real impact on
the result (even if the latter is not identical).

In more detail, critical configurations (points almost coinciding, quasi-coplanarity situations,
etc.) will have to be considered. This is, moreover, the real difficulty to be solved.

Hardly any references on merging methods can be found. Nevertheless, in
[Lo-2015], a slightly different presentation of a very similar problem is proposed, illustrated
with a few ad hoc examples. Note also that this subject – Boolean operations – is addressed
in the computational geometry community, but the examples are most often of a geometric nature
quite distant from that of our objects (in solid mechanics, fluids, etc.).

2.4. Immersion

The immersion of one mesh into another is an operation that enables a (meshed) detail to be
embedded in a given mesh or, for example, even a structured mesh24 in an unstructured mesh or
vice versa. In fact, the nature of the existing meshes is not called into question, which, at worst and
in three dimensions, only affects the way in which consistency is established. In this operation,
the immersed mesh is not modified.

In two dimensions and especially in the plane, the immersion of a mesh into another is con-
ceptually easy, even if, in practice, this operation is quite technical. In three dimensions, the

24. Or having a structure different from the grid type, think of a radial mesh.

problem is significantly more complicated. The situation in which two meshes overlap (overlapping
meshes, the Chimera method) is excluded here; a solution is sought where there is no
cover but a consistent connection. Let Ω1 denote the mesh destined to receive mesh Ω2 ; it is
obviously assumed that the latter is strictly included in the first.
• Two-dimensional immersion
The elements of Ω1 are triangles and/or quadrilaterals, as are those of Ω2 , and it will be seen
that possible mixtures do not raise any difficulty at the consistency level. Indeed, consistency will be
ensured through edges, and therefore independently of the geometric nature of the neighboring
elements.

The idea of the method is very simple: Ω2 , and more specifically its boundary Γ2 , is localized25
in the mesh Ω1 . The elements of Ω1 localized inside Γ2 are destroyed. Therefrom, a cavity
is defined, which will then be meshed into triangles. This cavity is the area between the
boundary Γ2 and the elements of Ω1 facing Γ2 . To facilitate the meshing of
this cavity, certain elements of Ω1 deemed too close to Γ2 will have to be destroyed in order to
provide a given distance (or sufficient space) between the new boundary of Ω1 and Γ2 . The
elements of Ω2 are unchanged, and those of Ω1 not destroyed during the definition of the cavity
also remain unchanged. We are therefore facing a conventional mesh construction problem
for the freed area, and the result will automatically be conformal once the mesh elements of the cavity
are added to the elements already present (this result can be seen as the reconnection of
three meshes). It is to prevent the area to be meshed from being too constrained that a few
elements of Ω1 not intersected by Γ2 were destroyed, over a distance in accordance with the size of the
neighboring elements, both those of Ω2 and those (that have become boundary) of Ω1 . Moreover,
destroying a few more elements allows for a more regular boundary, therefore easier
to process. The space thus spared also makes it possible to obtain elements of acceptable quality.

The meshing of the cavity is done by any of the conventional methods, extensively described
in Volume 2 of this book. Although the area to be meshed is generally small, the complexity
of the immersion is that of a mesher, which, in two dimensions, remains reasonable.

However, by observing that the cavity is most often similar to a crown, the use of a
method that connects by neighborhood the elements of the two boundaries facing each other
might be considered. We start by linking two vertices (one from each boundary, for example the
nearest ones) by an edge, and then the triangles covering the crown are built
step by step. This approach was discussed in Chapter 7 of Volume 1.
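A greedy variant of this crown-filling idea can be sketched as follows (our own simplification: the two boundaries are given as open, consistently oriented polylines; closed loops would additionally need the wrap-around):

```python
# Sketch: starting from an edge linking the two boundaries, triangles
# are built step by step, each time advancing along the boundary whose
# next vertex is closest to the current front edge.

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def fill_crown(outer, inner):
    # outer, inner: lists of points forming the two facing boundaries;
    # returns the triangles covering the crown as point triples.
    tris, i, j = [], 0, 0
    while i < len(outer) - 1 or j < len(inner) - 1:
        adv_outer = j == len(inner) - 1 or (
            i < len(outer) - 1 and
            dist2(outer[i + 1], inner[j]) <= dist2(outer[i], inner[j + 1]))
        if adv_outer:
            tris.append((outer[i], outer[i + 1], inner[j]))
            i += 1
        else:
            tris.append((outer[i], inner[j + 1], inner[j]))
            j += 1
    return tris
```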
• Three-dimensional immersion
Exactly the same approach can be reused: a cavity is created that is then meshed, except
in special cases, into tetrahedra. Depending on the nature of the mesh to be immersed and of
the host mesh, the faces of the (two) boundaries of the cavity are triangles (this is the case if
the elements are tetrahedra) or quadrilaterals (this is the case if the elements are hexahedra).

25. Think immediately about using an acceleration and neighborhood relations grid for this localization
problem.

The presence of quadrilaterals will complicate the way in which a consistent resulting mesh is
established.

To resolve the conflict between a quadrilateral face and triangular faces, the two classic approaches
are used once again. In the first, a layer of pyramids is inserted in order to ensure
the transition between tetrahedra and hexahedra, and the triangular faces of these pyramids
are the ones that form the boundary to be considered. It is then necessary to plan for a cavity
large enough to accommodate these pyramids. In the second, the aforementioned decomposition
method is used, slicing the element itself in order to lay out triangular faces on the boundary
of the cavity.

Once again, it should be noted that the complexity of the immersion is, in this case, that of a
three-dimensional mesher and, as such, far from trivial.

An exotic application close to the immersion problem is not to embed a (meshed) detail in
a mesh, but to eliminate one or even to insert a hole. In the first case, a cavity is built that is
subsequently meshed. In the second case, a cavity is again defined and meshed as previously,
and then the mesh (which acted as the immersed mesh) is destroyed.


∗ ∗

In this chapter, we have chosen to present some methods that, although of significant practical
value, are rarely documented or not documented at all.

For the construction of a mesh using the geometric transformation of another, we have seen
that beyond the pure geometric aspect (the way in which the vertex coordinates are modified), the
process must or may be accompanied by a set of treatments concerning the proper orientation of
the elements (therefore the proper enumeration of their vertices) and the possibility of modifying
all or part of the (physical) attributes of the vertices, edges, faces and elements themselves.

The conversion, by decomposition, of a non-simplicial mesh into a simplicial mesh allowed
us to look at how an element should be cut, taken individually, and then at how to cut a mesh
while ensuring the consistency of the result, a complicated, purely topological point, while also
maintaining the “positivity” of the elements. The partial decomposition of an element with quadrilateral
faces, to replace one of them with triangles, allowed us to propose an alternative to the conventional
approach of connecting non-simplicial elements with simplices. The decomposition
of high-degree elements (but also of degree 1) allowed us to address some issues related to the
preservation of the underlying geometry.

Finally, by looking at how a reconnection algorithm for (two) meshes can be developed, it
was again shown how to use a set of basic techniques (grid, tree, hashing, linked list, filter,
etc.) in a real situation and, at the same time, to see the potential of these techniques. For the
neighboring problems of merging or of immersing (two) meshes, we tried to provide some
indications.

Again, we confirmed that processing simplicial meshes is much easier and more flexible than
processing non-simplicial meshes and/or mixed (multi-element) meshes.

In principle, the methods are accessible and, as such, could be the subject of many graduate
internships, from the simplest to the most demanding. Indeed, we did not hide the fact that it is
the handling of borderline situations that makes the difference between a simple mock-up and a
truly robust tool.
Chapter 3

Renumbering and Memory

Numerical simulations based on finite elements use a mesh as spatial support. A matrix
(a so-called elementary matrix)1 is constructed for every element and the assembly of these
elementary quantities yields the corresponding global matrix. The fill of this matrix, that is,
the positions of its a priori non-zero entries, depends on the numbering of the mesh nodes. The
pattern of this fill, visible by looking at the distribution of the coefficients, has, depending on the
methods used to solve the corresponding systems, an impact on several points: obviously
the matrix bandwidth and, also, the memory size needed to store it. Another effect (most often
extremely perverse) is that accessing the coefficients can trigger cache misses when moving from
one coefficient to another whose memory locations are far apart. It should also be noted
that the numbering of the vertices and/or of the elements (of a mesh) may also have a non-
negligible effect on the performance of a particular algorithm (for example, in visualization, but
also in meshing algorithms themselves, as well as for the parallelization of certain methods
in which there is a link between mesh renumbering and partitioning). These two questions will
therefore be looked into: how can vertices, and more generally nodes, and/or mesh elements be
renumbered so as to optimize a given criterion?

It is interesting to point out that conventional (and somewhat old) renumbering methods
largely, although not always, keep some relevance and that some of them are experiencing renewed
interest (consider methods based on space-filling curves, already mentioned in Volume 2 regarding
cache misses and parallelism, among others).

The numbering of vertices and mesh elements depends on the construction method employed:
one of the conventional methods (such as frontal, Delaunay or tree-based methods), methods
based on mesh modification (for example, during a mesh adaptation loop) or the partitioning
of a pre-existing mesh. For high-degree meshes, the numbering also concerns the
(non-vertex) nodes.

1. Or several, think of a mass matrix and a stiffness matrix (see Chapter 6).


The numbering (of vertices and elements) resulting from a Delaunay construction is totally
unpredictable and, therefore, is optimal for no particular criterion. In a tree-based
method, the numbering follows the construction (recursive decomposition) of the tree. For a frontal
method, and depending on how the front is managed, this advancing-front aspect will be found
in the numbering. In computation loops including mesh adaptation, the modification of a mesh
(from one iteration to the next) has a priori no optimality character.

The link between numbering and memory is twofold. It concerns the memory itself, namely
its size, and the access to memory. In the first case, the objective is to minimize the memory
resources needed. In the second case, the goal is time performance, therefore corresponding
essentially to the time of accessing a data item and the minimization of cache misses. This last
aspect becomes the important point since the problem of memory (size) is less critical today.

3.1. Vertex and node renumbering

Two different aspects that can motivate the renumbering of mesh vertices (nodes) are examined
first. Next, a few renumbering methods are presented. As mentioned, although these methods
have been well established for many years, it seemed useful to describe them (albeit quickly) in
the context of this book.

3.1.1. Numbering and storage of a matrix in profile mode

In Chapter 6 of this volume, it is shown how to calculate the coefficients (entries) of the
matrices and right-hand sides that emerge when solving a problem formulated by partial
differential equations using a finite element method. This presentation is carried out through some
examples of equations and for a few examples of elements, and therefore of meshes. The global
matrix, that of the system to be solved, is obtained by assembling the elementary matrices of
every element of the mesh. A coefficient of the global matrix is a priori non-zero only if there is
a relationship between the nodes whose indices are those of the coefficient under consideration.
If Aij denotes a coefficient, it only exists if the nodes of index i and j belong to the same
mesh element. In the specific case of first-degree simplexes, this simply means that there is an
edge whose extremities have these indices. For the other elements, it is membership of
the same element that allows the coefficient to exist in the matrix.

The matrix, depending on the problem being addressed, is sparse but can be symmetrical
or non-symmetrical. The memory needed to store it depends on the number of a priori
non-zero coefficients. Some resolution methods, typically direct methods (Cholesky,
Crout), consider matrices stored in profile form. It should be noted that, for other resolution
methods (iterative methods, conjugate gradient, GMRES, etc.), a better adapted and more compact
way of storing matrices is the morse (compressed sparse) mode, in which only topologically
non-zero coefficients are stored. For a given row, each coefficient is described by its value and
the index of its column.
Unlike the profile mode, a renumbering has no influence here on the necessary memory2. The
choice of storage mode depends on how the coefficients of the matrix are processed: it is thus the
chosen resolution method that guides this choice.

2. Let us consider the ball of a vertex of a two-dimensional simplicial mesh. We assume that this ball has
six elements. If the vertex is 7, line 7 contains seven values; if the vertex is 1,000, line 1,000 possesses

For a symmetrical matrix, profile storage consists (if the lower part of the matrix is kept)
of storing the coefficients row by row, from the first non-empty column to the diagonal. This is
what is shown in the diagram below. The (toy) matrix corresponds to a mesh of noe = 6 nodes;
the “x” indicate the existing coefficients, on the left, and the order in which they are stored is
shown on the right. This storage is sequential. The required memory resource is 17 values, plus
a pointer to the diagonal coefficients, namely seven values, instead of 36 (the whole matrix) or
even 21 (taking symmetry into account). This example is so small that the magnitude of the gain
in space that this mode of storage brings cannot be perceived.

x . . . . .        1
x x . . . .        2  3
x x x . . .        4  5  6
. x x x . .        7  8  9
. . x x x .       10 11 12
. x x x x x       13 14 15 16 17

Indeed, if, for the row of fixed index i (i starts at 1), we denote by indj (i) the index of the
column j of the first coefficient3 Aij a priori non-zero, the number of coefficients in row
i is δi = i − indj (i) + 1 and is called the half-bandwidth. The memory requirement, for merely
storing the coefficients, is thus the sum of the δi for i = 1 to noe. To minimize this value, it is
ideally necessary to minimize the various δi or, at least, the largest of them. The goal is
therefore to increase the number of empty spaces at the beginning of a row or, in other words, to “pack”
the coefficients toward the diagonal. This is where a renumbering algorithm intervenes, striving
to reduce, for every mesh element, the maximal difference between the indices of its
vertices (nodes).
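To make the computation of the δi concrete, here is a small sketch (ours, not the book's): given the element connectivity of a mesh with 1-based node indices, it computes indj(i) for every row and sums the half-bandwidths, which is exactly the profile size discussed above. The function name and the toy four-triangle strip are assumptions of this illustration.

```python
def profile_size(elements, nnodes):
    """Memory needed by symmetric profile storage: sum of the
    half-bandwidths delta_i = i - ind_j(i) + 1 over all rows.

    elements: tuples of 1-based vertex indices; two nodes are
    connected as soon as they belong to the same element.
    """
    ind = list(range(nnodes + 1))  # ind[i] = i: diagonal only, initially
    for elem in elements:
        lo = min(elem)             # smallest column reached by this element
        for v in elem:
            ind[v] = min(ind[v], lo)
    return sum(i - ind[i] + 1 for i in range(1, nnodes + 1))

# The same 4-triangle strip (6 nodes) under two numberings:
good = [(1, 3, 2), (3, 2, 4), (3, 5, 4), (5, 4, 6)]  # indices packed per element
bad = [(1, 6, 5), (6, 5, 3), (6, 2, 3), (2, 3, 4)]   # same mesh, scattered indices
print(profile_size(good, 6), profile_size(bad, 6))   # 15 versus 18
```

Minimizing, for every element, the spread between its vertex indices (the role of a renumbering algorithm) directly shrinks this sum.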

For a non-symmetrical matrix, profile storage is significantly different. We are going to take
the same matrix (seen now as non-symmetrical) and show the corresponding diagram.

1,000 values, of which only seven are non-zero. In morse mode, there are only 14 values regardless of the
indices of the vertices.
3. This index is easily deduced from the successive diagonal indices. For example, in the diagram, line 4,
the coefficient 7 is in column 2 = 4 − (9 − 6) + 1.

x + + . . .        1  3  7
x x + + . +        2  4  8 12 24
x x x + + +        5  6  9 13 17 25
. x x x + +       10 11 14 18 26
. . x x x +       15 16 19 27
. x x x x x       20 21 22 23 28

The sequential storage follows a slightly different logic. For a given index i, we store in row i
the coefficients of the columns comprised between the first a priori non-zero coefficient and the
one preceding the diagonal, then in column i the coefficients of the rows comprised between the
first a priori non-zero coefficient and the one preceding the diagonal, and finally, the diagonal
coefficient is stored.

The memory size (as previously) is given by the index of the last diagonal coefficient. To
minimize this value, the number of empty spaces at the beginning of each row and of each column
needs to be increased, therefore again packing the coefficients toward the diagonal. This goal is
achieved, as before, by reducing, for every element of the mesh, the maximal difference between
the indices of its vertices (nodes).

3.1.2. Numbering and algorithm performance

After seeing the relationship between numbering and memory (resources), we now give an idea
of what the relationship between numbering and performance can be. The latter may come from
the behavior of the algorithm and/or from memory accesses (cache misses). To illustrate this
point, we consider a few examples, but there are many more.

In Delaunay-based point-insertion meshing algorithms (Volume 1, Chapters 4 and 5,
and Volume 2, Chapter 4), it was seen that the order in which a set (a stack) of points is inserted
has an impact on the speed of the algorithm. To optimize speed, one must navigate between two
antagonistic criteria to define in what order the points should be processed. First, inserting
the points in a random order ensures that the size (in number of elements) of the cavities is on
average smaller (and more constant); therefore, the number of elements manipulated (destroyed
and built) is minimized and the necessary memory resource is also reduced on average. Second,
inserting the points by geometric proximity minimizes the localization phase, since the point
being addressed is a neighbor of the point that has just been inserted. This second criterion is
clearly in conflict with the former. Another subtlety to be taken into account is the bias that can
be introduced by the fact that the data have a particular structure, for example, due to the way in
which they were built. The purpose of renumbering will be to break this bias, to make the
algorithm insensitive to particular cases or to a specific numbering, while responding at the same
time to the two above criteria.

We have seen that renumbering affects the complexity of the algorithm, independently of
memory access problems. In general, at equal complexity, renumbering will now be seen as
a tool to minimize memory access times by minimizing cache misses as much as possible. It is
known that the time dedicated to managing these misses can go so far as to completely hide the
actual algorithmic complexity of a given algorithm4.

Any algorithm, such as an optimizer, that loops over the elements and, for each element,
loops over its vertices will obviously take advantage of renumbering the vertices, a fortiori if it
is accompanied by a renumbering of the elements (see below). In fact, any processing of edges
or vertex balls can benefit from renumbering the vertices.

Another aspect concerns the possibility of memory gains by reducing the space needed to
store an element or a group of elements. The vertices having been renumbered according to a
proximity criterion, the vertex indices, for instance those of a tetrahedron, are close. For a given
element, the average of the four indices of its vertices is calculated and the element is rewritten
based on the (vertex) index closest to this average; the other three indices are then stored as
their difference from this base. If this difference, an integer, is small, it can be stored with a very
small number of bytes.
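The byte-count argument can be sketched as follows (our illustration; the function names and the signed-integer thresholds are assumptions, not the authors' code):

```python
def delta_encode_tet(tet):
    """Rewrite a tetrahedron's four vertex indices as a base index
    plus three signed offsets, as sketched in the text: the base is
    the vertex index closest to the average of the four."""
    avg = sum(tet) / 4.0
    base_pos = min(range(4), key=lambda k: abs(tet[k] - avg))
    base = tet[base_pos]
    return base, [tet[k] - base for k in range(4) if k != base_pos]

def offset_bytes(offsets):
    """Bytes per offset when stored as a signed integer."""
    worst = max(abs(o) for o in offsets)
    for nbytes in (1, 2, 4, 8):
        if worst < 1 << (8 * nbytes - 1):
            return nbytes

base, offs = delta_encode_tet((1000, 1002, 997, 1001))
print(base, offs, offset_bytes(offs))  # 1000 [2, -3, 1] 1
```

After a proximity renumbering, the three offsets of most tetrahedra fit in one or two bytes instead of the four or eight needed for full indices.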

We now consider the context of parallel computations and of loop processing or, more
generally, of sequences of computations including dependencies. In the case of shared memory,
the performance of a parallel solver is heavily impacted by the numbering of the mesh entities that
support the calculations. Potential conflicts related to simultaneous writes to the same memory
location can be managed by different techniques: (i) a conventional locking technique, blocking
access to a conflicting zone so that a single processor holds it (mutex); (ii) a momentary
duplication of the memory areas likely to be written concurrently, each processor freely
writing to its own copy, the copies being merged a posteriori; and (iii) partitioning the
mesh into a large number (greater than the number of processors) of totally independent zones,
so that processors can process blocks of data that present no risk of simultaneous writes
to memory. This is typically the case when blocks of elements have no common vertex
and can therefore be processed simultaneously without memory conflict.
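Method (iii) can be illustrated by a small sketch (ours; the helper names and the block size are assumptions): cut the renumbered element list into consecutive blocks and check that two blocks share no vertex before handing them to two threads.

```python
def make_blocks(elements, block_size):
    """Cut the (renumbered) element list into consecutive blocks."""
    return [elements[k:k + block_size]
            for k in range(0, len(elements), block_size)]

def blocks_independent(b1, b2):
    """True if the two blocks share no vertex and can therefore be
    processed simultaneously without write conflicts."""
    v1 = {v for elem in b1 for v in elem}
    return all(v not in v1 for elem in b2 for v in elem)

tris = [(1, 2, 3), (2, 3, 4), (5, 6, 7), (6, 7, 8)]
blocks = make_blocks(tris, 2)
print(blocks_independent(blocks[0], blocks[1]))  # True
```

After a locality renumbering, consecutive blocks are geometrically compact, so most pairs of blocks pass this test and can be scheduled concurrently.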

With the second method (ii), one is totally independent of the numbering (in terms of
performance) but the memory needed is greater and the duplication time adds up. The two other
methods require only little memory and additional time if the data are renumbered in order to
minimize potential conflicts during writes. In the absence of a compact numbering, type (i)
methods will need a lot of locks (mutexes), which is quite expensive (shared memory) and ruins
the performance of a cluster (distributed memory). For the third method (iii), it is difficult to
find as many blocks of elements having no vertices in common as there are processes.
After renumbering, the blocks of consecutive elements will be geometrically close. Thereafter,
two blocks taken at random are unlikely to have common vertices and can be simultaneously
processed without fear of write conflicts on a same vertex. This has very useful consequences
for methods (i) and (iii). In the first case, since blocks taken at random have few vertices in
common, there will be fewer locks and they will be less expensive. In the second case, it will be
possible to find just as many blocks with no dependency as there are processes executing in parallel.

4. The algorithm is quadratic on paper, which is formally forbidden, but since access times are predominant,
it does not seem to be noticeable.

Note that shared or distributed memory parallel systems present exactly the same problem,
but that for the latter, the cost of locks is even higher and, as a result, the effect of a renumbering
is all the more tangible.

3.1.3. Some methods for node renumbering

We propose, and this is just one of the possibilities, to classify renumbering methods into
several categories:
– topological methods that, in fact, look at the graph formed, among other things, by the
edges5 of the mesh if the focus is on the nodes or look at the graph relative to the elements (a
dual form of an edge graph) if the elements are the focus;
– geometric methods that are based on coordinates;
– index methods that are based on indices.

We are going to try to explain what a topological method is, with the famous Cuthill-McKee
method as an example [Cuthill, McKee-1969], [Cuthill-1972]; one may also refer to various
optimizations such as those present in the Gibbs method [Gibbs et al. 1976] or in
[George, Liu-1979], methods that are all based on a frontal approach. Next, we will explain
what a geometric method is by choosing an approach based on a space-filling curve. For fun, an
idea is also given of what an index-based method can be.
• Cuthill-McKee topological renumbering, variants and optimizations
The following lines must be read bearing in mind that these methods were developed a long
time ago, following the ideas prevalent at the time regarding the system-solving methods in use
and the available memory resources, and that they were, in addition, devised for first-order
triangular elements or, at best, quadrilaterals of order (1 × 1).

Having said this, let us look at the case of a symmetrical matrix: the aim is to avoid large
half-bandwidths, the values δi = i − indj (i) + 1 defined above, where i designates a row
and indj (i) the first column of this row in which there is a coefficient (even a zero one). The purpose
of renumbering will be to minimize the average of the δi, or the maximum of the δi, which
is the (half-)bandwidth of the matrix. With the matrix is associated its graph. The graph edges
comprise at least the mesh edges. As a matter of fact, there is a coefficient of index ij in the
matrix (even if it is zero) only if there is an edge (of the mesh) between the nodes of indices i and j
or if the nodes with these indices belong to the same element (see below). For a simplicial mesh
of order 1, the degree of a graph node is just the number of edges incident to this node if it is a
vertex. This being set and in this precise case6, the principle of the method is as follows:

5. This is strictly the case of simplicial meshes of degree 1.


6. During this distant era, the method was developed for the first-degree meshes, in the broadest sense and
without more subtleties.

Frontal node renumbering algorithm [3.1]


i) Choose a node; choosing one of minimal degree is quite natural (but not compulsory).
Such a node is generally on the boundary of the domain, or is even a singularity. Note that
several nodes may satisfy the sought property. The retained node is assigned number one.
ii) Its adjacent nodes are sorted according to their degree. They are renumbered in this order
and form the first level of the progeny.
iii) The process is repeated for every node of this level and then for every level that is formed,
considering as adjacent only the nodes not already renumbered.
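A minimal sketch of algorithm [3.1] in Python (ours, assuming a connected degree-1 simplicial mesh whose graph is given by its edge list; the function name and the 1-based indexing are conventions of this illustration):

```python
from collections import defaultdict

def cuthill_mckee(edges, nnodes):
    """Frontal node renumbering (algorithm [3.1]); assumes a
    connected graph. edges: pairs of 1-based node indices.
    Returns new[old - 1] = new index of node 'old'."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    degree = {v: len(adj[v]) for v in range(1, nnodes + 1)}
    # i) start from a node of minimal degree, numbered 1
    start = min(range(1, nnodes + 1), key=lambda v: degree[v])
    new = {start: 1}
    front = [start]
    # ii)-iii) renumber level after level, neighbors sorted by degree
    while front:
        next_front = []
        for v in front:
            for w in sorted(adj[v] - new.keys(), key=lambda u: degree[u]):
                new[w] = len(new) + 1
                next_front.append(w)
        front = next_front
    return [new[v] for v in range(1, nnodes + 1)]

# A 6-node triangle strip with scattered labels: the maximal index
# gap along an edge drops from 5 to 2 after renumbering.
edges = [(3, 6), (3, 1), (6, 1), (6, 5), (1, 5), (1, 2), (5, 2), (5, 4), (2, 4)]
new = cuthill_mckee(edges, 6)
print(max(abs(new[a - 1] - new[b - 1]) for a, b in edges))  # 2
```

Running the sweep again from the far end of the pseudo-diameter, as described below, would simply mean calling the function a second time with the last-numbered node as start.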

The advancing-front character of this method should be noted: the different levels can be seen
as fronts moving forward in the domain. This leads to viewing the renumbering algorithm
as a simple frontal method and can suggest an even simpler method than the one described
above. On the other hand, the method can be optimized by using the pseudo-diameter or
topological diameter of the graph. The latter indicates the topological distance between the two most
distant elements: one that includes the first node of the renumbering and the other that includes
the last. The idea then is to perform a sweep, to determine the pseudo-diameter and its end, and
to repeat a sweep (in the opposite direction) starting from this end. In general, this strategy
improves the profile. Iterating once again is possible but does not necessarily bring any
significant gain.

Is the result of the initial method optimal? This is not guaranteed. This is the reason why
many variants, some of them discussed above, have been developed to improve the coarse result.
The cost may then be more significant for moderate improvement, but the gain in the resolution
of the system is for its part significant.

In any case, this method (and its variants), although somewhat outdated, is completely
automatic and effective, especially in two dimensions (a little less in three), both for
meshes with a specific structure (for which a more specific solution could be chosen) and for
meshes with completely arbitrary structures. However, again, the method was devised for triangular
or quadrilateral elements of degree 1 and for direct solvers (see below for non-simplicial or
arbitrary-degree meshes).

• Geometric renumbering based on a filling curve

The goal will be to traverse a grid (or a tree, [Löhner-2008]) enclosing the mesh along a
particular path defined by a curve and to sequentially number the grid (tree) cells. A purely
frontal approach could more simply be adopted, starting from a corner cell of the grid; the
numbering obtained effectively allows renumbering (of the vertices) but it is easy to see that it is
not suited to the purpose in mind. This is the reason why one resorts to a space-filling curve.

We thus revisit in more detail these space-filling curves, already mentioned in Volume 1,
Chapter 5 (Delaunay) and in Volume 2, Chapter 9 (parallelism). A renumbering based on such a
curve is geometric in type because the coordinates of the mesh vertices are effectively used.
The basic idea is that if an entity (here a mesh node) of the two- or three-dimensional space is
close to another entity (another node), then their images on the curve are close to one another.
Conversely, and this is the result we are interested in, if two one-dimensional entities are close,
then their antecedents (here, nodes) in their space (the mesh) are close. Regardless of the chosen
curve (Z-curve, Peano curve, Hilbert curve, etc.), the principle is the same: traverse the whole
space (a grid containing the mesh; this grid could be replaced with a tree, observing, however,
that the tree structure implicitly possesses an order close to the one that would result from a
space-filling curve) by following the curve and use that curve to assign an index (so-called Hilbert
indices if this type of curve is retained) to the grid (or tree) cells.
Specifically, the cells of the grid (if this choice is retained rather than a tree) are sequentially
numbered by moving along the chosen curve (see Figures 3.1 and 3.2 for a Hilbert curve, and
Figure 3.3 for a Z-curve (top) and a Peano curve (bottom)).

Figure 3.1. Grid path following the Hilbert curve (on the left). Cell numbering along this path
(on the right). In this case, the number of cells per direction of the underlying grid is a power
of 2

Figure 3.2. From left to right, localization of the point (here five points shown through their
index) in the grid cells of Figure 3.1, associated indices, sort and renumbering

In the following, the curve retained is a Hilbert curve and the steps of the method are as
follows:

Node renumbering algorithm using a space-filling curve [3.2]


i) Construction of a grid containing the mesh (Chapter 1). In two dimensions, the grid cells
are referred to by two indices i and j.
ii) The space-filling curve used allows an index to be associated with any couple (i, j), that
is, ind = f (i, j).
iii) Localization (still Chapter 1) of the mesh vertices in the grid. If (i, j) is the couple
corresponding to the cell where a point is located, the index ind = f (i, j) of the cell is assigned
to this point.
iv) Sort the points by index ind.
v) Renumber the points following the result of this sort.
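The steps above can be sketched as follows (our illustration, not the book's code): for brevity it uses a Z-curve index, computed by interleaving the bits of i and j, rather than a precalculated Hilbert table; the function names and the virtual grid resolution `bits` are assumptions.

```python
def zcurve_index(i, j, bits=10):
    """Z-curve (Morton) index of cell (i, j): the bits of i and j
    are interleaved, playing the role of ind = f(i, j)."""
    ind = 0
    for b in range(bits):
        ind |= ((i >> b) & 1) << (2 * b)
        ind |= ((j >> b) & 1) << (2 * b + 1)
    return ind

def renumber_points(points, bits=4):
    """Steps i) to v): locate each point in a virtual 2^bits x 2^bits
    grid, index its cell along the curve, sort, renumber (1-based)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    xmin, ymin = min(xs), min(ys)
    n = (1 << bits) - 1
    sx = n / ((max(xs) - xmin) or 1.0)
    sy = n / ((max(ys) - ymin) or 1.0)
    keys = sorted((zcurve_index(int((x - xmin) * sx), int((y - ymin) * sy), bits), old)
                  for old, (x, y) in enumerate(points))
    new = [0] * len(points)
    for rank, (_, old) in enumerate(keys):
        new[old] = rank + 1
    return new

print(renumber_points([(0.0, 0.0), (1.0, 1.0), (0.05, 0.0)]))  # [1, 3, 2]
```

Replacing zcurve_index by a table lookup of Hilbert indices gives the variant retained in the text; the rest of the pipeline (locate, sort, renumber) is unchanged.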

We choose the number of cells in i and j of the grid, and the space-filling curve corresponding
to this grid size is applied to renumber the cells. The filling curve is considered to be a datum,
a precalculated table, which can be found online. Explaining how such curves are built is far
too technical and falls outside the scope of this book, but two three-dimensional examples are
given in Figure 3.4.

It should be noted that several points may have the same index and that the result of the sort
does not differentiate them. As a result, the solution is not unique, but the proximity criterion
remains, of course, satisfied. To partly mitigate this pitfall, a sufficient depth must be chosen;
implicitly, the grid must be taken fine enough. The grid can also be replaced
with a tree decomposed recursively based on a criterion on the size of its cells. Nevertheless,
a grid can be kept by adopting an iterative approach: if, for a given grid size, certain cells
contain too many points, the process is reapplied, a space-filling curve is built in these cells after
decomposing them and, as such, the points are more finely separated. It is also known that, since
the grid is not built in practice, a grid as fine as possible can be simulated in order to have only
one or very few vertices inside its virtual cells. The important point is to have the function
that associates an index, ind = f (i, j) or ind = f (i, j, k), with a cell defined a priori
by as many indices as desired (the dimension of the space).

We will mention below that this same technique also allows the renumbering of the elements
of a mesh.

• Index renumbering

Here, instead of considering connections or coordinates, only the indices are taken into
account. In fact, we are within the context of a crossed renumbering (see below). It is assumed that
the elements have been renumbered (by a topological or geometric method) and the vertices are
renumbered by considering the elements from the first to the last. The criterion is thus simply an
index, the index of the elements.

• Renumbering of the nodes for a mesh of arbitrary degree

The above point relates to a first-degree mesh (therefore simplicial meshes), in which the
nodes are only the element vertices. The links between nodes then correspond to the edges of
the mesh and the associated graph is formed by the mesh edges. For meshes still of the first
degree but non-simplicial, the links between nodes are not limited to the edges: all the nodes of
an element are linked together. For example, for a quadrilateral, the two diagonals participate in
the graph by linking the nodes that are their ends, which are not connected by an element edge.

Figure 3.3. On top left, the path through the grid following the Z-curve. Numbering of the
cells according to this path (on the right). In this case, the number of cells per direction of the
underlying grid is also a power of 2. At the bottom left, the path through the grid following the
Peano curve. Numbering of the cells according to this path (on the right). In this case, the
number of cells per direction of the underlying grid is a power of 3

For meshes of arbitrary degree, the same property holds: all nodes (vertices, edge nodes, face
nodes and internal nodes) are linked and all these links participate in the graph. The purpose
of renumbering remains the same: narrowing the gap between node indices to minimize the
bandwidth of the matrices built from the mesh.

Figure 3.4. A three-dimensional Hilbert curve for two grid depths

Figure 3.5. Graph edges relating to the first node for a triangle, a quadrilateral and a triangle
of degree 2

An (again so-called) topological renumbering method will take into account the nodes and
links of the graph, while an (again so-called) geometric method will rely on the position of the
nodes without taking the graph into account.

In the first case, at the first degree (quadrilateral), the method above can be applied; although
it does not explicitly take the diagonals into account, it leads to minimizing the objective. For
other degrees, we propose to apply the same method to the first-degree mesh obtained by
decomposition (subdivision) of the current mesh, with the same observation: although the links are
broken (not explicitly taken into consideration), the result achieved improves the criterion. Here,
too, there is very little literature on the subject. Let us mention, however, the study
[Lai-1998], which proposes renumbering the mesh vertices first, then the elements based
on this numbering and, finally, all the nodes, including the vertices, following the numbering of
the elements obtained during the previous step.

In the second case, using a space-filling curve, the above method can be applied almost as
is. All the mesh nodes simply have to be considered (and not only the vertices). The geometric
proximity of two nodes will lead to assigning them nearby indices.

3.2. Renumbering of the elements

As above for the vertices (nodes), we shall give some motivations for renumbering the
elements of a mesh, and then some methods will be quickly described. It is often beneficial to
renumber both the vertices and the elements (as already mentioned above).

3.2.1. Motivation examples

A priori, renumbering the elements has no impact on the complexity of an algorithm, so
renumbering does not seem to show any usefulness. On the other hand, and again, the concern
about memory accesses is very much present.

Many algorithms process the balls of mesh vertices (nodes); these balls are made up of the
set of elements sharing a given vertex. Similarly, the shell of an edge is formed by the set of
elements that share it. It is clear, therefore, that if the elements (of a ball, of a shell) have
neighboring indices, they are neighbors in memory and the number of cache misses during their
reading is reduced.

In general, every algorithm that processes an element and then its neighbors (fronts, stacks,
queues, balls, shells, etc.) will have a high chance that these neighboring elements are already in
the cache memory, speeding up access.

We saw in Chapter 9, Volume 2, the benefit that could be obtained from cleverly renumbering
the elements of a mesh in order to achieve a partition of the latter. This problem will be reviewed
below.

3.2.2. Some methods for renumbering elements

The three types of methods seen above will be met again.

• Neighborhood topological renumbering

The simplest method is to adopt the frontal strategy underlying this approach. A germ
element (a seed) is chosen; then the elements of its crown (complete or incomplete if we are
on the mesh boundary) are identified and renumbered, striving to define a sequential path moving
from one element of this crown to the next. The process continues by exploring the neighboring
elements of the current crown not already renumbered.

It should be noted that the elements of a given crown are those, not already renumbered, that
share a vertex of the previous crown (and not just a face or an edge).
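A minimal sketch of this crown-by-crown traversal (ours; it returns the new element order rather than renumbering in place, and assumes a connected mesh described by tuples of vertex indices):

```python
from collections import defaultdict

def renumber_elements_frontal(elements, seed=0):
    """Crown-by-crown (frontal) element renumbering: two elements
    are crown neighbors as soon as they share a vertex. Assumes a
    connected mesh; returns old element indices in their new order."""
    by_vertex = defaultdict(list)
    for e, elem in enumerate(elements):
        for v in elem:
            by_vertex[v].append(e)
    order, seen = [seed], {seed}
    crown = [seed]
    while crown:
        next_crown = []
        for e in crown:
            for v in elements[e]:
                for n in by_vertex[v]:
                    if n not in seen:
                        seen.add(n)
                        next_crown.append(n)
        order.extend(next_crown)
        crown = next_crown
    return order

# A strip of four triangles: seeding from the last element reverses
# the numbering, crown after crown.
tris = [(1, 3, 2), (3, 2, 4), (3, 5, 4), (5, 4, 6)]
print(renumber_elements_frontal(tris, seed=3))  # [3, 2, 1, 0]
```

Note that adjacency is taken through shared vertices, as the text requires, and not only through shared faces or edges.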

For the first-degree simplexes issuing from a front or tree generation method, the initial num-
bering already follows a front that advances in the domain or follows the recursive decomposition
of the tree. For this reason, neighborhood renumbering is not of great interest.

• Geometric renumbering based on a space-filling curve

We simply reuse what has been described for renumbering the mesh vertices, the method being
adapted to process elements. In fact, the solution is very simple. The principle is exactly
the same: with each element is associated its barycenter7 and it is the latter that governs
the algorithm:

Elements renumbering algorithm with a space-filling curve [3.3]


i) A grid including the mesh is constructed (Chapter 1). In two dimensions, the grid cells are
referred to by two indices i and j.
ii) The filling curve used allows associating with any couple (i, j) an index, namely ind =
f (i, j).
iii) Each element is associated with its barycenter and it is localized (still Chapter 1) in the
grid. If (i, j) is the couple corresponding to the cell where this point is found, the index ind of
the cell is assigned to the element, ind = f (i, j).
iv) Next, sort the elements according to indices ind.
v) Renumber the elements following the result of this sorting.
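The steps above can be sketched in Python. For concreteness, this hypothetical sketch (not the book's code) uses a Z-order (Morton) curve as the index function f(i, j); a Hilbert curve could be substituted. It handles the two-dimensional case with a uniform grid:

```python
def morton_index(i, j, bits=16):
    # Interleave the bits of i and j: one possible index ind = f(i, j).
    ind = 0
    for b in range(bits):
        ind |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return ind

def sfc_renumber(elements, coords, ncells=64):
    """Return the element indices sorted along a Z-order space-filling curve.

    Each element is located in an ncells x ncells grid enclosing the mesh
    through its barycenter, and the elements are sorted by cell index.
    """
    xs = [p[0] for p in coords]; ys = [p[1] for p in coords]
    xmin, xmax, ymin, ymax = min(xs), max(xs), min(ys), max(ys)
    dx = (xmax - xmin) or 1.0; dy = (ymax - ymin) or 1.0

    def key(e):
        verts = elements[e]
        gx = sum(coords[v][0] for v in verts) / len(verts)  # barycenter
        gy = sum(coords[v][1] for v in verts) / len(verts)
        i = min(int((gx - xmin) / dx * ncells), ncells - 1)
        j = min(int((gy - ymin) / dy * ncells), ncells - 1)
        return morton_index(i, j)

    return sorted(range(len(elements)), key=key)
```

Two elements far apart in the grid receive distant Morton indices, so geometrically close elements end up close in the sorted order.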

Nonetheless, it should be noted that this method is not necessarily very effective for anisotropic
meshes, a fortiori, those that are strongly anisotropic. As such, topological neighborhoods and
geometric neighborhoods can be totally different because of the size of the elements, which
varies greatly from one direction to another. In other words, to move from one given element to
another located at a certain distance, one or a few elements can be traversed in one direction and
clearly more in another. On the other hand, the method applies to all types of elements and even
in the event that a mesh comprises elements of a different geometric nature.

The same comments as in the case of renumbering nodes apply regarding the possibility of
having several elements with the same index, and the solutions are the same: a finer grid or an
adaptive grid.

The same remark as above applies for simplexes of degree 1 built by a frontal method or a
tree-based method. The initial numbering already follows the fronts or the terminal leaves of the
tree.

7. Only one of the element vertices could be considered.


94 Meshing, Geometric Modeling and Numerical Simulation 3

• Index renumbering
Instead of considering the neighborhoods or coordinates, here only the indices are taken into
account. In fact, and again, we are in the context of cross-renumbering (see below). It is assumed
that the vertices have been renumbered (by a topological or geometric method) and the elements
are renumbered considering the vertices from the first to the last. The criterion is thus just an
index, the index of the vertices.

• Cross-renumbering
Let us emphasize, in general, and as seen above, the interest of renumbering the vertices
(nodes) as well as the elements but also the interest to renumber the nodes after having previously
renumbered the elements and vice versa.

Once the elements have been renumbered, the nodes are renumbered starting from those of
the first element, then moving on to those of the second element not yet renumbered, and so
on until all nodes have been processed. It is easy to see that the opposite approach also makes
sense: the nodes having been renumbered, we renumber the elements of the ball of the first
node, then those of the ball of the second node, and so on, as long as the stopping condition is
not met.

Finally, applying such a process several times (elements and then nodes and then nodes fol-
lowed by elements) is a solution that can be proposed.
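The two cross passes just described (the second being the same scan of vertices from first to last as the index renumbering above) can be sketched in Python; the function names and data layout (elements as tuples of vertex indices) are our own, not the book's:

```python
def renumber_nodes_from_elements(elements, nb_nodes):
    """Give the nodes new numbers in the order they first appear when
    scanning the (already renumbered) elements."""
    new_num, nxt = {}, 0
    for verts in elements:
        for v in verts:
            if v not in new_num:
                new_num[v] = nxt
                nxt += 1
    # Nodes never referenced by an element receive the trailing numbers.
    for v in range(nb_nodes):
        if v not in new_num:
            new_num[v] = nxt
            nxt += 1
    return new_num  # old node index -> new node index

def renumber_elements_from_nodes(elements, nb_nodes):
    """Converse pass: scan the nodes from first to last and number the
    elements of their balls as they are met."""
    ball = [[] for _ in range(nb_nodes)]
    for e, verts in enumerate(elements):
        for v in verts:
            ball[v].append(e)
    order, seen = [], set()
    for v in range(nb_nodes):
        for e in ball[v]:
            if e not in seen:
                seen.add(e)
                order.append(e)
    return order  # order[k] = old index of the element numbered k
```

Alternating the two functions realizes the repeated elements-then-nodes process suggested above.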

In general, any kind of renumbering can be more or less effective and more or less expensive,
and the gain may only be moderate. On the other hand, when it comes to actually solving systems,
even a modest gain at the renumbering stage can translate into a quite significant gain at the
solution stage, so that the cost of renumbering is largely recovered.

3.2.3. Renumbering and mesh partition

Already discussed in Chapter 9, Volume 2, the construction of a partition of a given mesh,
typically for parallel processing purposes and regardless of the nature of memory, is again
approached here through the prism of the renumbering methods described above. In the following,
no assumption is made (at the outset) about how the mesh was built, nor about whether its
numbering possesses any particularities that could be taken advantage of. First, we look at how to use a
space-filling curve and then a frontal approach to achieve our goals. Finally, it is however indi-
cated that a mesh construction using a frontal method or a tree method defines a specific partition
method that, in itself, takes advantage of the way the mesh was built. On the other hand, in
an iterative mesh-computation process (as is the case when a calculation is achieved with mesh
adaptation), regardless of the method used to build the first mesh, the meshes of the various
stages are generally obtained through local processes (insertion of new vertices, removal of ver-
tices, edge flips or other topological operations) and, thus, resorting to a space-filling curve seems
appropriate again. At the end of this section, the problem of mesh partitioning is mentioned via
the graph partitioning problem seen in an abstract way.

A renumbering consists of rearranging the data (elements and/or vertices) in such a way that
if they are geometrically adjacent, then they are adjacent in memory. Conversely, if the data are
adjacent in memory, they are geometrically close. This observation will allow us to define a mesh
partitioning tool that is very (or too) simple, flexible and, moreover, of minimal cost.

• Use of a space-filling curve

To decompose a mesh into blocks of the same size (for the purpose of balancing the workload
when processing each block), the simplest method is:

Mesh partition algorithm with a space-filling curve [3.4]


i) To use a grid containing the mesh and to renumber its cells by means of a space-filling curve.
ii) To renumber mesh elements based on this curve.
iii) To define the blocks by fixing their size, n, by taking n elements of consecutive indices.
The first block includes the elements of indices 1 to n, the second those of indices n + 1 to 2 n,
and so on.

This naive method has two significant advantages. First, it is particularly simple and therefore
has a low cost and access to blocks is trivially defined by two indices (beginning and end of the
block). Second, it applies regardless of the spatial dimension of the mesh. In effect, the space-
filling curves project two- and three-dimensional data over a one-dimensional entity, thus making
possible a sequential traversal of this multidimensional data.

In addition, all types of elements and meshes with several types of elements can be processed,
which is not the case for other approaches that are instead related to the fact that all the elements
have the same geometry.

The choice of the block sizes is dictated either by the concern of evenly distributing the
elements, in which case a constant size is taken (as in the scheme above), or by the concern of
evenly distributing the workload. For a mesh comprising elements of different geometries
(tetrahedra, pyramids, hexahedra), it can be estimated that processing an element has a different
cost depending on its geometry. Weights can then be assigned to the elements so that the blocks
have an optimal size with respect to balancing this workload.
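Algorithm [3.4] together with this weighting idea can be sketched as follows; this is an illustrative Python fragment (our own), which assumes the elements have already been renumbered along the curve. With unit weights it reduces to blocks of n consecutive elements:

```python
def split_into_blocks(weights, nb_blocks):
    """Cut SFC-renumbered elements into blocks of (nearly) equal total
    weight.  weights[e] is the cost of element e in the renumbered order.
    Returns a list of (first, last) inclusive index ranges."""
    total = sum(weights)
    target = total / nb_blocks
    blocks, first, acc, done = [], 0, 0.0, 0.0
    for e, w in enumerate(weights):
        acc += w
        # Close the block once its share of the remaining work is reached.
        if acc >= target and len(blocks) < nb_blocks - 1:
            blocks.append((first, e))
            first, done = e + 1, done + acc
            acc = 0.0
            target = (total - done) / (nb_blocks - len(blocks))
    blocks.append((first, len(weights) - 1))
    return blocks
```

Access to a block is then trivially given by its two indices (beginning and end), as noted above.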

However, this method is not optimal with respect to several criteria, the connectivity of the
blocks, the size of the interfaces8 between blocks and the regularity of these interfaces. These
questions have been discussed in Chapter 9, Volume 2 and the answer consists of essentially
migrating elements from one block to another to best meet the criteria. It will therefore be
retained that using a filling curve is a simple and low-cost way to obtain a partition, even if it
means that it has to be subsequently improved based on specific criteria.

As indicated previously, in the case of dynamic meshes (where the topology and geometry
change), therefore typically for mesh-computation loops, resorting to a partitioning method based
on a space-filling curve is very interesting because of its high speed. The more elaborate
partitioning methods have indeed a significant cost, and too frequent repartitioning may take longer
than the simulation itself.

8. Which, in the case of a parallel process, has a direct role in the number of exchanges between processors
when dealing with distributed memory.
• Using a frontal renumbering
If, instead of an approach with a space-filling curve, the elements are renumbered by
a frontal technique such as Gibbs' or Cuthill-McKee's, the partitioning will follow
the underlying advancing front and produce concentric lines (or surfaces), like onion layers. It
will produce blocks of uneven sizes (the first blocks are necessarily smaller) and also of potentially
large sizes. To obtain blocks of more even sizes (therefore smaller on average), it is possible
to renumber and repartition each of these layers, which will produce “slices” if we pursue this
culinary analogy. Implicitly, the meshes addressed here are essentially made up of simplices.
• Explicit use of the mesh generation method
To finish, a word about the possible utilization of the generation method9 (for a first mesh).
In an advancing-front method, for simplicial meshes, the elements are created following the way the
front advances in the domain. This is reflected by the fact that neighboring elements have
neighboring numbers, and it is again possible to create blocks by taking a series of elements from
one given number to another (for example, from 1 to n, then from n + 1 to 2n as above, to get blocks
of n elements). However, it is known that even in this case, renumbering (frontal or otherwise)
improves the situation, but the issue of the connectivity of the blocks remains.

In a tree-based method, the same property is also found. The elements (simplexes or quadri-
laterals or hexahedra, Volume 2, Chapters 4 and 5) are built by iterating in a given order the
terminal leaves of the tree and therefore the neighboring elements have neighboring numbers
(and the connectivity problem still remains).
• Partition and graph
There are partitioners that work directly on a graph (seen in an abstract way). A graph is
defined by its nodes and its edges. There is an edge between two nodes if these two nodes
are connected. For mesh partitioning, partitioners rely on the graph associated with this mesh.
This graph concerns the links (edge or face neighborhoods depending on the dimension of the
space) between elements (the graph of the elements will be discussed in contrast with the node-
related graph seen above). Still in the particular case of a mesh, it should be noted that graph
nodes are the mesh elements and that the graph edges are none other than the neighborhoods
between elements. Thereby, there is a graph edge between two nodes if the two mesh elements
(which are these nodes) are adjacent through an edge (a face) of the mesh. In essence, this
graph is only topological but it can be given a geometric representation. When considering a
Delaunay triangulation, the edges of the graph can be seen as the dual edges (Voronoï edges) of
the triangulation. In two dimensions, the edges of the graph will be seen as the dual edges of the
mesh edges. In three dimensions, the graph edges will be seen as the dual edges of the mesh faces.

9. It is known that this issue does not make sense for Delaunay methods since the numbering of the elements
is unpredictable.

For arbitrary meshes (namely other than Delaunay triangulations or comprising non-simplicial
elements), a naively similar geometric representation of the graph edges will be found. The
graph edge relating to two elements sharing a common edge (face) can be geometrically seen as
the segment joining the barycenters of the two elements in question. This possible geometrical
interpretation of the graph edges allows having information about their lengths, which can be
used to give weights to these edges.
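For simplicial meshes, this element (dual) graph, with its barycenter-to-barycenter edge lengths used as weights, can be built by matching shared facets. The following is a minimal Python sketch under our own conventions (not the book's code):

```python
from itertools import combinations
from math import dist

def element_graph(elements, coords):
    """Build the element (dual) graph of a simplicial mesh: one graph node
    per element, one graph edge between two elements sharing a facet
    (an edge in 2D, a face in 3D), weighted by the distance between the
    barycenters of the two elements."""
    dim = len(elements[0]) - 1          # 2 for triangles, 3 for tetrahedra

    def barycenter(e):
        verts = elements[e]
        return [sum(coords[v][k] for v in verts) / len(verts)
                for k in range(len(coords[0]))]

    facet_of = {}                        # facet -> first element seen
    edges = {}                           # (e1, e2) -> weight
    for e, verts in enumerate(elements):
        for facet in combinations(sorted(verts), dim):
            other = facet_of.pop(facet, None)  # a facet is shared by <= 2 elements
            if other is None:
                facet_of[facet] = e
            else:                        # shared facet: a graph edge
                edges[(other, e)] = dist(barycenter(other), barycenter(e))
    return edges
```

The facets left in `facet_of` at the end are the boundary facets, a useful by-product.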

It should be noted that the graph of a high-degree mesh does not differ from that of its first-
degree counterpart (because it is the elements and their topological neighborhoods that define the
graph, independently of their nodes).

The construction of a graph partition is a priori an NP-complete problem. As such, a plethora
of heuristics can be found in the literature trying to solve the problem. Regardless of the method
used, the problem is still to find an acceptable compromise between the quality of the partition
and construction cost. From the perspective of parallel computation, a partition is said to be
of quality if it optimally verifies the criteria already mentioned – balancing and interface size.
The first criterion is obvious because having the same number of elements in each partition block
implies that the workload of each processor is identical and, as a consequence, the situation where
a processor waits for another processor is avoided. The second criterion consists of minimizing
the number of cut edges of the graph. For a two-dimensional mesh, this amounts to minimizing
the number of common edges to two partition blocks. In three dimensions, it is the number of
common faces between two partition blocks that will matter. The smaller the number of edges
(faces), the lower the number of communications between processors.

In general, and apart from the geometric methods seen above (including the use of space-filling
curves), there are two large families of algorithms. The first family makes use of spectral methods.
It consists of reformulating the graph partitioning problem as the optimization of a discrete
quadratic function. This does nothing to simplify the problem, but provides a new technique
to solve it: one merely has to calculate the second eigenvector of the discrete Laplacian of the
graph. The discrete Laplacian matrix LG of the graph G is given by:


(L_G)_{ij} =
  \begin{cases}
    1        & \text{if } i \neq j, \text{ with } i \text{ and } j \text{ neighbors},\\
    -\deg(i) & \text{if } i = j,\\
    0        & \text{otherwise},
  \end{cases}                                                            [3.5]

with i and j two nodes of the graph and deg(i) the degree of the node i, in other words the
number of incident graph edges at i.

It is shown [Hendrickson, Leland-1995], [Pothen et al. 1990] that by calculating the second-
largest eigenvalue of this matrix and its associated eigenvector, called the Fiedler vector, a
graph bisection is obtained. This method produces high-quality partitions. However, it incurs a
significant computational cost, even when using the Lanczos algorithm [Parlett et al. 1982] to
calculate the eigenvalues.
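As an illustration, here is a hypothetical helper (our own, assuming NumPy) that applies this spectral bisection with a dense eigensolver instead of the Lanczos algorithm; it is therefore only workable for small graphs:

```python
import numpy as np

def spectral_bisection(nb_nodes, graph_edges):
    """Bisect a graph using the Fiedler vector of the Laplacian [3.5]
    (1 off-diagonal for neighbors, -deg(i) on the diagonal).

    Returns two sorted lists of graph nodes of (nearly) equal size."""
    L = np.zeros((nb_nodes, nb_nodes))
    for i, j in graph_edges:
        L[i, j] = L[j, i] = 1.0
        L[i, i] -= 1.0
        L[j, j] -= 1.0
    # This matrix is negative semidefinite: its largest eigenvalue is 0
    # (constant eigenvector); the Fiedler vector is attached to the
    # second-largest one.
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    fiedler = vecs[:, -2]
    order = np.argsort(fiedler)
    half = nb_nodes // 2                 # median split balances the two parts
    return sorted(order[:half].tolist()), sorted(order[half:].tolist())
```

On a “barbell” graph (two triangles joined by one bridge edge), the sign pattern of the Fiedler vector separates the two triangles, as expected.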

Finally, the second family relies on so-called multilevel methods. These methods aim to
simplify the partitioning problem by considering a coarse graph. The difficulty then lies in
generating this coarse graph and in deriving the partitioning of the initial graph from that of
the coarse graph. As such, there are three steps to be distinguished:
– unrefinement of the graph by agglomerating its nodes in order to obtain a coarse graph;
– bisection of this coarse graph (for example, with the spectral method described above);
– refinement of this coarse graph to finally obtain a bisection of the initial graph.

Once again, numerous heuristics have been developed to choose which nodes to agglomerate
together to obtain the coarse graph. An exhaustive list will not be given here, but it should be
known that the quality of the partition depends on the chosen heuristic. With these methods, the
bisection of the coarse graph is, one may say, the simplest step because we are then working with
a very small graph. Finally, during the refinement stage to recover the partitioning of the original
graph, the quality of the partitions may deteriorate. The Kernighan-Lin/Fiduccia-Mattheyses
algorithm (KL/FM) can then be used. Without going into too much detail, the principle of this
heuristic is quite simple: nodes are migrated from one partition to another. If this improves the
quality of the partitions, the migration is carried out. Otherwise, nothing is done and other nodes
are tried, and so on. This is an iterative process that can prove to be costly.
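The migration principle just described can be sketched as a single greedy pass; this hypothetical helper (our own, much simpler than a real KL/FM implementation with gain buckets) moves a node whenever that reduces the number of cut edges without unbalancing the two blocks:

```python
def refine_partition(part, graph_edges, max_imbalance=1):
    """One greedy KL/FM-style pass over a bisection.

    `part` maps each graph node to its block (0 or 1); a node is moved to
    the other block when this removes more cut edges than it creates and
    keeps the block sizes within `max_imbalance` nodes of each other."""
    part = dict(part)                    # do not mutate the caller's data
    adj = {}
    for i, j in graph_edges:
        adj.setdefault(i, []).append(j)
        adj.setdefault(j, []).append(i)
    sizes = [sum(1 for b in part.values() if b == k) for k in (0, 1)]
    for v in list(part):
        b = part[v]
        # gain = cut edges removed minus cut edges created by moving v.
        gain = sum(1 if part[n] != b else -1 for n in adj.get(v, []))
        if gain > 0 and sizes[b] - 1 >= sizes[1 - b] + 1 - max_imbalance:
            part[v] = 1 - b
            sizes[b] -= 1
            sizes[1 - b] += 1
    return part
```

Repeating such passes until no move is accepted is what makes the full process iterative and potentially costly.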

Unlike geometric methods that can produce partitions into several blocks in a single step,
both previous methods are just bisections. To obtain several (more than two) blocks, the method
will have to be applied to the blocks already built.

There are many program libraries to solve the construction problem of graph partitioning.
However, it seems that multilevel methods are preferred for their good compromise between par-
tition quality and computational cost. We shall quote METIS and its parallel version ParMETIS
[Karypis, Kumar-1998a], [Karypis, Kumar-1998b], [Karypis, Kumar-1998c] as well as Scotch
and PT-Scotch [Chevalier, Pellegrini-2008]. The latter methods are based on a multilevel
approach. As noted above, the graph being considered is successively unrefined in order to obtain
a coarse graph that is easy to partition (to slice). The resulting graph is then refined to build
the partitioning of the original graph. In practice, this heuristic allows very large graphs to be
processed with good results, as soon as a computer with enough memory is utilized. The graph
partitioning, used here to decompose meshes into blocks, is also used in other disciplines (linear
programming, very large scale integration [VLSI], etc.).

One possibility, when dealing with meshes, is to assign weights to the edges of the graph
(depending on the local characteristics of the mesh). These weights are then taken into
account during partitioning to reinforce certain criteria.

3.3. Some examples

Let us reconsider the various points discussed in this chapter and give some illustrations
related to the results obtained.

To see the effect of renumbering mesh nodes on the profile of the matrix built on this mesh,
an example will be used (necessarily small, to have a chance of seeing something) and a few
statistics will then be extracted concerning meshes of more realistic sizes, in two and in three
dimensions.

Without elaborating further on the method used, Figures 3.6 and 3.7 show a triangle mesh of
degree 1 possessing 31 nodes. The matrix (supposedly symmetrical) built on the initial mesh,
first figure, comprises 267 coefficients (the sum of the δi), a maximal profile of 30 (maxi δi,
a value reached due to the existence of the edge [2 − 31]) and an average profile of 8.6
((1/noe) Σi δi). After renumbering the nodes, second figure, these values become, respectively,
218, 9 and 7. The fact that these last two values are quite close indicates that the numbering is
optimal, or close to it.
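These quantities can be computed directly from the node numbering. The sketch below (our own Python, not the book's code) uses the convention δi = i − jmin(i) + 1, with jmin(i) the smallest node index connected to i and the diagonal included, which is the convention inferred from the figures quoted above (267, 30 and 8.6 for 31 nodes):

```python
def profile(nb_nodes, edges):
    """Skyline profile of the (symmetric) matrix attached to a mesh.

    Nodes are numbered from 1; `edges` lists the mesh edges (i, j).
    Returns (sum of the deltas, max delta, average delta)."""
    jmin = {i: i for i in range(1, nb_nodes + 1)}   # diagonal always stored
    for a, b in edges:
        lo, hi = min(a, b), max(a, b)
        jmin[hi] = min(jmin[hi], lo)
    deltas = [i - jmin[i] + 1 for i in range(1, nb_nodes + 1)]
    return sum(deltas), max(deltas), sum(deltas) / nb_nodes
```

For instance, a single edge [2, 31] in a 31-node mesh already forces a maximal profile of 30, as in the text.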


Figure 3.6. The mesh before renumbering its nodes and the profile of the lower part of the
associated (supposedly symmetrical) matrix

In Table 3.1, a few examples are given in two and three dimensions by indicating how the
profile10 evolves. The effectiveness of a renumbering will be verified, but it will also be seen
that in three dimensions, with a roughly equivalent number of nodes (noe), efficiency is signif-
icantly lower. Table 3.2 considers two three-dimensional examples involving larger meshes (in
the order of 1 and 4 million tetrahedra) and two methods (classic frontal and Gibbs’ methods)
are compared.

Table 3.1 shows the difference in efficiency between cases according to the space dimension (a
ratio of around 60 in two dimensions and around 10 in three dimensions for matrices of comparable
size). The explanation is relatively simple: the gain is related to the topological diameter of the
mesh, which corresponds to the ratio between the area and the perimeter for the two-dimensional
object, and between the volume and the surface area for the three-dimensional object.

10. Let us point out that the profile of a full symmetrical matrix corresponding to 40,000 nodes (the first of
the examples given) is in the order of 800 million when storing only the lower or upper triangular part.


Figure 3.7. The renumbering of the nodes (on the left) and the profile inferred therefrom (on
the right)

noe            Initial profile  Final profile  Ratio  Max. initial width  Max. final width
42,378 (2D)        465,893,408      7,663,878     61              39,410               308
43,434 (3D)        483,132,896     39,873,528     12              42,516             1,390
59,513 (2D)        827,498,368     12,647,528     65              53,580               303
61,020 (3D)        907,140,480     81,040,528     11              60,868             2,161
106,687 (2D)     2,862,701,568     32,967,754     87             102,901               443
92,051 (3D)      2,194,965,760    254,601,584      9              89,530             4,527
136,561 (2D)     4,451,360,768     65,745,960     68             128,572               770
116,579 (3D)     3,586,510,080    241,324,640     15             116,104             3,920
Table 3.1. Comparison of the effectiveness of renumbering with matrices relating to two- and
three-dimensional meshes

In Table 3.2, a clear difference in efficiency between the two cases can be observed, even
though both are three-dimensional examples. On the other hand, the two methods used give
comparable results. The first example is a typical case of fluid mechanics (an aircraft in a large
surrounding domain) and the renumbering is very ineffective, while the second example is a thin
object from solid mechanics and the renumbering gives a relatively reasonable gain. Being thin
means that the behavior tends to approximate that of the two-dimensional case. These two
examples corroborate the fact that the renumbering methods presented above were originally devised for
two dimensions and for solid mechanics and, regarding the resolution of systems, for direct
methods.
noe                 Initial profile  Final profile  Ratio  Max. initial width  Max. final width
202,875 (Frontal)     5,705,970,000  1,561,540,000      4             169,748            16,574
202,875 (Gibbs)       5,705,970,000  1,222,500,000      5             169,748            11,822
742,679 (Frontal)   150,332,000,000  4,104,710,000     37             741,182             8,053
742,679 (Gibbs)     150,332,000,000  4,099,560,000     37             741,182             8,053
Table 3.2. Comparison of the effectiveness of renumbering using a frontal method and a Gibbs
method on matrices relating to three-dimensional meshes

Figure 3.8. From left to right, block partitioning (a color per block) as achieved by a classic
advancing front, an advancing front with an “inverted” pass and a Hilbert space-filling curve

To conclude with the illustrations, some examples of mesh partitioning are shown with a
simple two-dimensional geometry (therefore immediately readable) in order to see the behavior
of many of the methods presented above.

Figure 3.8 shows, from left to right, block-based partitioning (one color per block) as obtained
by a classic advancing front, an advancing front with an “inverted” pass and a Hilbert space-
filling curve. It is easy to imagine that the initial mesh is uniform (in element size). Subsequently,
since the blocks were built to contain a number of elements, each of them covers about the same
surface area. On the left, it can be seen that the start of the process is located at a point in the
low left corner of the domain and that successive fronts are developed from this first front. In the
middle, a second renumbering pass was made starting from the end of the first pass. On the right,
finally, the underlying Hilbert curve can be guessed, strongly reminiscent of that of Figure 3.1. The
presence of defects in the regularity of the interfaces between blocks should also be noted, which
results in small-sized areas (almost) surrounded by elements of another block. It is clear that this
aspect deserves to be taken into account so as to minimize these defects. The simplest solution is to
migrate elements from one block to another based on the advisability of this migration in view
of the desired criteria (balancing and regularity).

Figure 3.9. From left to right and as in Figure 3.8, block-based partitioning as obtained by a
classic advancing front, an advancing front with an “inverted” pass and a Hilbert space-filling
curve. Here, weights have been assigned to elements, changing the creation of blocks

Figure 3.9, to be compared with the previous one, shows block partitioning as obtained by a
classic advancing front, an advancing front with an “inverted” pass and a Hilbert space-filling
curve. The difference with the previous results is that a weight has been assigned to the elements
of the initial mesh. It can be assumed that this weight is a function of the size (area, volume) of
the elements, so that, for a balancing criterion, a region where the elements are larger will, for
an equivalent number of elements, cover more surface area. Again, the presence
of defects in the regularity of the interfaces between blocks can be observed for the space-filling
curve-based method.


∗ ∗

In this chapter, we have striven to show the link between numbering and memory. On the
memory side, two aspects have been highlighted, the required memory size (with the example of
the storage of matrices associated with a given mesh within the context of finite elements) and,
above all, the concern to minimize access times and as such the number of cache misses occurring
during the execution of a process. The importance of taking into account these problems with
cache misses has been highlighted.

Then, the typical gains obtained by means of different renumbering methods in terms of
memory space (for storing a matrix) are illustrated through a few statistics.

Finally, a surprisingly simple solution has been proposed for partitioning a mesh, essentially
based on the use of space-filling curves. The simplicity of this approach contrasts with the usual
methods, much more erudite but much more expensive, which rely on a frontal approach or on
graphs seen in an abstract way but applied here to the case of meshes without really taking
advantage of this particular context.
Chapter 4

High-Degree Mesh Visualization

Mesh visualization, with the different desirable points to be shown, presents some difficulties,
especially if we want to be able to quickly visualize large meshes, even when composed only of
classical elements (read: of degree 1). The increasing potential of high-order methods, for both
mesh elements and the solutions carried by such meshes, requires the design and development
of specific algorithms. On the other hand, recent changes in standards and graphic libraries
(and those to come, which should be anticipated) require staying very close to these standards
and tools.

In this chapter, we shall go beyond the mere visualization of a mesh; in fact, we will show
how to develop investigation tools that make a fine, interactive local analysis possible (and quick
to perform). The functionalities that seem interesting in light of our studies will be defined, and
then we shall show how they can be implemented based on the available graphic primitives (for
example, by considering the OpenGL environment).

The point here (see also Chapter 5) is not to compete with big (commercial) software
programs that offer many possibilities for visualizing meshes1. Nor is it about casting a rather
aesthetic look at the results obtained, but about representing meshes as they are, while
simultaneously offering the possibility of observing them (for analysis) in a conventional manner
(using enlargements, rotations, cross-sectional views, etc.), but also at the level of certain details
(vertex ball, edge shell, crown of elements). This chapter will consider planar, surface and solid
meshes. It will be seen that for surface meshes it is instructive to take into account, in the
rendering, the normals (as known or approximated); this will be covered in the next chapter,
where the (vector) function visualized is the field of normals. The latter allows the introduction
of criteria of light or shading which, in the blink of an eye, show the possible defects of the mesh
(for example, elements constructed in counter-curvature).

1. With the caveat, at the moment of writing these lines, that processing high-degree meshes, especially
curves, is not entirely clear and that the speed of displaying is not always achieved.


The constraints that we set for ourselves relate to the ability to deal with large-sized meshes on
a (modern but standard) computer, with a speed that allows for image fluidity during movements
(enlargements, rotations, etc.) or, typically, when slicing a three-dimensional mesh by a plane.
That said, GUI2 aspects are not directly our concern.

4.1. Geometric operators and topological operators

To visualize a mesh, geometric operators will have to be used, allowing it to be globally or
locally manipulated. Topological operators will also be used to extract local patterns from the
mesh, such as vertex balls, edge shells, ball neighborhoods and shell neighborhoods.

4.1.1. Geometric operators

Here, the conventional tools for global mesh manipulation can be found (Chapter 2 of this
volume) in order to be able to move closer or away from it (zooming in or out), to rotate it
(rotation) to see from another side (front, back, right-side, left-side, from the top, from below,
etc.) and, in three dimensions, to slice it by a plane so that internal elements can be inspected
and not only those of the boundary (surface) of the mesh. To this panoply unsurprisingly will
be added a shrink operator, a contraction of each element around its center of gravity, which
facilitates conformity verification and also provides an overview of the interior of the domain (at
least up to a point).
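The shrink operator just mentioned is straightforward; here is a minimal Python sketch (our own, not the book's code) that contracts each element around its barycenter by a factor s:

```python
def shrink(elements, coords, s=0.8):
    """Shrink each element around its barycenter by a factor s in ]0, 1]:
    every vertex p of an element of barycenter g is drawn at g + s*(p - g).
    Returns, for each element, the list of its shrunk vertex positions."""
    shrunk = []
    for verts in elements:
        pts = [coords[v] for v in verts]
        g = [sum(p[k] for p in pts) / len(pts) for k in range(len(pts[0]))]
        shrunk.append([tuple(g[k] + s * (p[k] - g[k]) for k in range(len(g)))
                       for p in pts])
    return shrunk
```

With s = 1 the elements are left untouched; the smaller s is, the larger the gaps between elements, making conformity defects visible at a glance.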

In the end, there is a small number of really useful tools and more often with no major
technical complexity. They are applied to the complete mesh (to all its elements) or, see below,
to a smaller set of elements, such as only those cut by a plane (in case of a cut) or, more simply,
to the ball of a vertex only. These particular sets (ball, etc.) will be built using conventional
topological operators.

The mesh is associated with the bounding box in which it is contained, defined by the extrema
of the coordinates of the vertices (nodes) of the mesh elements. The barycenter of this box will
naturally be the center of the rotations that will be carried out, and the rotation axes will be the
three canonical axes translated to this center. If the observer wants to see the left (visible) part
of the mesh, the rotation will be to the right. In other words, it is not the observer who moves,
but the mesh that is rotated.

To make a cut, just making a call to the desired primitive of the graphics software will be
enough and the trace of the elements will be seen on the cutting plane. To draw a cut in
“hedgehog” mode3, which the graphic library is not really capable of achieving, one will have to
write this operator oneself. It should be noted that if the cutting plane is moved (translation,
rotation), the elements will not be plotted4 and it will be necessary to request the plot once again
when needed.

2. Graphic User Interface.

3. In this mode, the elements intersected by the cutting plane are drawn completely, hence this
hedgehog-like aspect.
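Selecting the elements to draw in hedgehog mode reduces to a sign test against the cutting plane. A minimal Python sketch (our own conventions, not the book's code):

```python
def cut_elements(elements, coords, plane):
    """Select the elements intersected by the plane a*x + b*y + c*z + d = 0
    (hedgehog mode then draws these elements completely).

    An element is intersected as soon as its vertices do not all lie
    strictly on the same side of the plane."""
    a, b, c, d = plane
    cut = []
    for e, verts in enumerate(elements):
        signs = [a * x + b * y + c * z + d
                 for (x, y, z) in (coords[v] for v in verts)]
        if min(signs) <= 0.0 <= max(signs):
            cut.append(e)
    return cut
```

Since this list must be rebuilt each time the plane moves, this is precisely the plot that has to be requested again, as noted above.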

4.1.2. Topological operators

The simplest of these operators is the one that finds the elements of the ball of a vertex; from it follows the one that gives the elements of the shell of an edge. These operators have been extensively described in various sections of the book (Volume 1, Chapter 4 and Chapter 1 of this volume). They essentially rely on neighborhood relations between elements (the construction of these relations has also been extensively described). However, it is also reasonable to avoid these relations and use a naive algorithm (a complete traversal of the mesh at every request), because at rendering time the extra cost will not be noticeable.
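As a sketch of this naive approach (the function name and data layout are ours, not the book's), the ball of a vertex can be obtained by a complete traversal of the element list:

```python
# Hedged sketch (hypothetical helper): find the "ball" of a vertex, that is,
# all elements sharing that vertex, by a naive full traversal of the mesh,
# without any neighborhood relations.
def vertex_ball(elements, v):
    """Return the indices of the elements containing vertex v."""
    return [i for i, elem in enumerate(elements) if v in elem]

# Triangles given as vertex index triplets.
tris = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 2, 4)]
print(vertex_ball(tris, 2))  # every triangle here contains vertex 2
```

For interactive use, this linear scan is repeated at each request; as noted above, the scan is hidden by the rendering time itself.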

Given that a set of elements is selected (for example, a ball), an operator is able to find all the
neighboring elements of those of the ball that therefore have a vertex, an edge or a face of the
boundary of the ball (we see again the notion of a crown). Step by step, the neighboring elements
can be found a rank further. The possibility of exhibiting these sets and observing them allows a
fine analysis of a given area, in particular, to assess the quality of the elements (shape, size, etc.).

In this mode, in case of rotation, the center of rotation is the vertex of the initial ball, in order to better control the movements.

4.2. Representation of curved meshes

Planar meshes are represented in wireframe mode, that is, their edges are plotted. This same mode is more or less suitable for drawing surface meshes but is not very legible for volume meshes, for which it is preferable to introduce the concept of visibility and draw only the visible faces (therefore their edges). However, this representation was designed to draw elements whose edges are straight segments, with, even then, a few small problems in the presence of non-simplicial elements (including the emblematic case of quadrilaterals on a curved surface). Therefore, the question arises of how to faithfully draw such elements and, more generally, curved elements (of high degree, therefore), knowing that, in practice, only “straight” elements can be properly represented, which amounts to plotting a segment or filling in (coloring) a polygon (a triangle, without explicitly saying so). Going down a notch, the plot simply becomes assigning a color to a pixel. This observation will lead us to look at the visualization problem at this elementary level, especially for curved elements and, even more so in the next chapter, to represent solution fields. The elements will have to be appropriately subdivided, either transparently to the user or according to the user’s prescriptions (therefore in interactive mode). Several types of subdivisions are discussed; some are rather conventional (and now available in some visualization software programs) and others are more innovative (particularly those we are in the process of developing5).

4. For obvious reasons of speed and therefore of image fluidity.


5. In the program ViZiR, without mentioning names.
108 Meshing, Geometric Modeling and Numerical Simulation 3

• Uniform subdivision into straight triangular elements

The simplest method consists of subdividing curved, planar or surface elements into straight
elements. For solid elements, the faces are the ones that are going to be drawn and, therefore, we
are again in the presence of surface triangles or quadrilaterals. The most common subdivision
is uniform with the degree of the element as a level, for example, a second-degree triangle is
simply cut into four triangles of degree one whose vertices are those of the original triangle and
its edge nodes. Defining a uniform subdivision with a level of subdivision other than the degree
of the element (so generally higher) is another solution in which the position of the vertices
thus created is evaluated (which are therefore not necessarily initial nodes). For Lagrange finite
elements, we saw how to use the Bézier representation to easily calculate the position of these
points (Volume 1, Chapter 2 and Chapter 4 of this volume). If we consider the definition of a
Bézier element (here a triangle) of degree d, the current point of this triangle is written as:

$$\sigma(u, v, w) = \sum_{i+j+k=d} B^d_{ijk}(u, v, w)\, P_{ijk},$$

with $(u, v, w)$ the barycentric coordinates in the reference space6 (the parametric space), $B^d_{ijk}(u, v, w)$ the Bernstein polynomials and $P_{ijk}$ the control points of the current element. If the subdivision level is n, with $n \ge d$, the reference triangle (the parametric space) is sampled with the step $\frac{1}{n}$ and we use the image (the physical position) of the points of this sample. This allows
the subdivision triangles to be constructed and they are the ones that are going to be drawn. The
advantage of this approach resides in its simplicity, however the result (the rendering) depends
on the subdivision level that may be insufficient or, conversely, too large (therefore expensive)
with respect to the geometry (curvature) to be considered. In this approach, the vertices of the
triangles of the subdivision are properly located (on the edges for boundary vertices and, for
surfaces, for internal vertices) but the edges remain straight (Figure 4.1).

Figure 4.1. On the left, a triangle of degree 2 with curved edges. In the middle, the plot
obtained with a subdivision into four, n = 2. On the right side, rendering with a new
subdivision into nine, n = 3
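As an illustration of this sampling (a hedged sketch: the function names and the degree-2 example are ours, not the book's), the Bernstein evaluation of a Bézier triangle and its uniform sampling at level n can be written as:

```python
from math import factorial

def bernstein(d, i, j, k, u, v, w):
    # Triangular Bernstein polynomial B^d_ijk(u, v, w), with i + j + k = d.
    return factorial(d) // (factorial(i) * factorial(j) * factorial(k)) \
        * u**i * v**j * w**k

def bezier_point(d, ctrl, u, v, w):
    # ctrl maps (i, j, k) to a control point P_ijk (here in two dimensions).
    x = y = 0.0
    for (i, j, k), P in ctrl.items():
        b = bernstein(d, i, j, k, u, v, w)
        x += b * P[0]; y += b * P[1]
    return (x, y)

def uniform_sample(d, ctrl, n):
    # Sample the reference triangle with step 1/n and map each sample to the
    # physical space; the images are the vertices of the plotted subtriangles.
    return [bezier_point(d, ctrl, i / n, j / n, (n - i - j) / n)
            for i in range(n + 1) for j in range(n + 1 - i)]

# A degree-2 triangle: three vertices and three edge control points.
ctrl = {(2, 0, 0): (0.0, 0.0), (0, 2, 0): (1.0, 0.0), (0, 0, 2): (0.0, 1.0),
        (1, 1, 0): (0.5, 0.1), (1, 0, 1): (0.1, 0.5), (0, 1, 1): (0.6, 0.6)}
```

At a corner of the parametric triangle, the evaluation returns the corresponding corner control point; a level-n sampling produces (n + 1)(n + 2)/2 vertices.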

We notice that this same subdivision can be carried out without explicitly using the σ function
but by applying the De Casteljau subdivision up to the desired level. Moreover, one may not

6. In general and for curved elements, there is no barycentric coordinate system in the physical space.

calculate the position of the nodes but draw the triangles whose vertices are the control points of
the last subdivision level7.

Everything that has just been seen applies to quadrilaterals. A plane quadrilateral is first divided into two triangles, and then the process previously described is applied. Another way to deal with a quadrilateral is to subdivide it using its strict definition (the σ function) with its two parameters u and v, that is:

$$\sigma(u, v) = \sum_{i=0}^{d} \sum_{j=0}^{d} B^d_i(u)\, B^d_j(v)\, P_{ij},$$

or, again, by means of the De Casteljau subdivision algorithm; the subelements of the final level are then the ones that are cut into two triangles. It should be noted that in the case of a surface element, the curvature is not controlled and the triangles displayed can exhibit a counter-curvature, suggesting that the mesh has this defect, which may be reality or just an artifact of the visualization. On the other hand, for a surface element, and even at degree 1 × 1, a subdivision into quadrilateral subelements must be made8 in order to have internal vertices so that the geometry of the element is properly represented.
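A minimal sketch of the tensor-product evaluation just written (function names are ours), here for control points in three dimensions:

```python
from math import comb

def b1d(d, i, t):
    # Univariate Bernstein polynomial B_i^d(t).
    return comb(d, i) * t**i * (1 - t)**(d - i)

def quad_point(d, ctrl, u, v):
    # ctrl maps (i, j) to a control point P_ij; tensor-product evaluation
    # of sigma(u, v) = sum_i sum_j B_i^d(u) B_j^d(v) P_ij.
    x = y = z = 0.0
    for i in range(d + 1):
        for j in range(d + 1):
            b = b1d(d, i, u) * b1d(d, j, v)
            P = ctrl[(i, j)]
            x += b * P[0]; y += b * P[1]; z += b * P[2]
    return (x, y, z)

# A warped bilinear (degree 1 x 1) quadrilateral on a surface.
ctrl = {(0, 0): (0.0, 0.0, 0.0), (1, 0): (1.0, 0.0, 0.0),
        (0, 1): (0.0, 1.0, 0.0), (1, 1): (1.0, 1.0, 1.0)}
```

At u = v = 1/2, the point is the average of the four corners, which for the warped example above does not lie on either of the two triangles obtained by cutting along a diagonal, illustrating the pitfall discussed in the text.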

In practice, the subdivision is applied to all of the mesh elements, and there is only one
parameter, its level n. This value is the user’s choice or is imposed by the software. In the latter
case, a value such as n = d is suggested or, better yet, n = 2d, with no cost consideration.

For solids, the finite element is defined in Bézier form, using a formula such as:

$$\theta(u, v, w, t) = \sum_{i+j+k+l=d} B^d_{ijkl}(u, v, w, t)\, P_{ijkl}$$

or

$$\theta(u, v, w) = \sum_{i=0}^{d} \sum_{j=0}^{d} \sum_{k=0}^{d} B^d_i(u)\, B^d_j(v)\, B^d_k(w)\, P_{ijk},$$

depending on whether a tetrahedron or a hexahedron is considered. To represent the faces, we use such formulas, as previously seen. For example, in the first case and for the face t = 0, we shall note:

$$\sigma(u, v, w) = \theta(u, v, w, 0) = \sum_{i+j+k=d} B^d_{ijk0}(u, v, w, 0)\, P_{ijk0} = \sum_{i+j+k=d} B^d_{ijk}(u, v, w)\, P_{ijk0},$$

and in the second case, for the face w = 0, it will be found that:

$$\sigma(u, v) = \theta(u, v, 0) = \sum_{i=0}^{d} \sum_{j=0}^{d} B^d_i(u)\, B^d_j(v)\, P_{ij0}.$$

7. For a sufficient level, the subdivision mesh with these points as vertices is very close to that with the
nodes as vertices.
8. And certainly not into two triangles from the start.

• Subdivision or adaptive tessellation


In this approach, the subdivision will be controlled with the concern of correctly representing the geometry of the elements while minimizing the number of subelements, possibly by considering a non-conformal subdivision. Two different approaches will be distinguished, a recursive subdivision and an arbitrary tessellation. Control consists of inserting more or fewer points on the edges or inside the element. We are able to be flexible when choosing these values independently of each other for each edge, so as not to be restricted to a uniform (sub)division, and even to be non-conformal.

The choice of subdivision or tessellation parameters is again left to the user’s initiative or
determined by criteria, one criterion to define how many points will be used to cut the edges and
another to control the submesh of the interior of the element (if this is a two-dimensional element
or a surface element) or the interior of the faces of the element (if we are in three dimensions).

Using these user’s choices or criteria, the following two methods can be described.

Figure 4.2. On the left, a triangle of degree 2 with a (single) curve. On the right side, the
rendering of the curved edge with a recursive subdivision

 Locally recursive adaptive subdivision


A uniform subdivision is built, but not necessarily at a low level (n = 2 is a possible choice). A subdivision is then re-applied to some of the subelements originating from the previous step according to a given criterion (Figure 4.2). In this way, conformity is lost but the total number of subelements is reduced. The underlying idea is to minimize the costs, noticing that a straight edge of a curved element9 does not need to be cut to be properly represented. However, despite its cost advantage, and therefore its speed, this approach, well suited to processing straight elements of any degree in two dimensions, suffers from some weaknesses in the case of curved plane elements and surface elements (and therefore also in three dimensions for volume meshes). In addition, as will be seen in Chapter 5, for displaying a field of solutions (and, as such, even in two dimensions), visualization with this kind of subdivision will be imperfect.

9. In general, geometrically curved elements are located near curved boundaries. In other words, high-
degree elements far enough away from these boundaries have straight edges.

It is easy to understand why, in terms of the display, this method will produce defects. Let us take the example of a curved edge. Since the subdivision is non-conformal, there are pathologies in which an edge of an element of the subdivision faces two edges, one in each of the two neighbors of the element comprising the single edge. Subsequently, a gap (a slot) will appear, and this will be all the more visible for surface elements (or faces of solid elements).

 Adaptive tessellation

The objective is to achieve a conformal subdivision (tessellation) on each element, governed by several parameters, thus avoiding the construction, as above, of a uniform subdivision based on a single parameter, potentially leading to a process that is too time-consuming and too memory-consuming.

The idea is, first, to cut the edges of the elements (in two dimensions or on a surface) or the faces of the elements (in three dimensions) and then, second, to introduce one or more points into the element (the face) and build a conformal tessellation based on the set of points introduced on the edges and in the interior. This tessellation is carried out in the simplest (fastest) possible manner (see below). To ensure conformity between mesh elements, the cutting points of the edges are only a function of these edges and are therefore independent of the elements themselves.

i) Cutting points for processing an edge. To find the number n of cutting points on an edge, a heuristic is defined by introducing a distance criterion. For example, this criterion is calculated by looking at the control points of the element edges under consideration. The distance to the control point $P_i$ is defined by:

$$\delta^{edge}(P_i) = \frac{\|P_i - P_i^{straight}\|}{l},$$

with $P_i$ the $i$-th control point of the edge being considered, $l$ the length of this edge (seen as a straight segment) and $P_i^{straight}$ the $i$-th control point (thus, in fact, the equivalent of a node) of the straight edge associated with the curved edge (thus joining its two vertices). The proposed heuristic consists of finding the decomposition integer n. The integer $[\,5\,t\,\delta^a\,]$ is first calculated, with t a user datum, the value 5 reflecting a control of the gap between the edge and its representation10, $\delta^a$ the maximum over the edge under consideration of the $\delta^{edge}(P_i)$, that is $\delta^a = \max_i \delta^{edge}(P_i)$, and $[\,.\,]$ indicating that the integer part is taken. The value of n is then determined; this is the integer such that:

$$n \le [\,5\,t\,\delta^a\,] \le n + 1, \qquad [4.1]$$

that is to say that the edge will be subdivided into n + 1 segments. Applied to each edge, this heuristic gives three (for triangles) or four (for quadrilaterals) decomposition parameters that may be different from one edge to the other. For a straight edge, $\delta^a = 0$, thus n = 0, and such an edge will not be cut. In general, the number of subsegments is $nbre = n + 1$.

10. A gap that is too large will be reflected by a plot that is too coarse. This empirical value, 5, implies, for t = 1, a gap of 1/5 of the length of the straight edge.
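The heuristic of equation [4.1] can be sketched as follows (a hedged illustration: the function name is ours, and the control points of the straight edge are taken as the regular subdivision of the segment joining the two vertices):

```python
# Sketch of the edge-cutting heuristic: delta^edge measures how far each
# control point is from its "straight" counterpart, delta^a is their maximum,
# and n = [5 t delta^a] gives nbre = n + 1 subsegments.
from math import dist, floor

def edge_subsegments(ctrl_pts, t=1.0):
    """ctrl_pts: control points of the curved edge; first/last are vertices."""
    A, B = ctrl_pts[0], ctrl_pts[-1]
    l = dist(A, B)                      # length of the straight edge
    d = len(ctrl_pts) - 1               # degree of the edge
    delta_a = 0.0
    for i, P in enumerate(ctrl_pts):
        # i-th control point of the associated straight edge
        S = tuple(a + (b - a) * i / d for a, b in zip(A, B))
        delta_a = max(delta_a, dist(P, S) / l)
    n = floor(5 * t * delta_a)
    return n + 1                        # number of subsegments, nbre
```

A straight degree-2 edge gives delta^a = 0 and is not cut (one segment), while a strongly curved one is cut into several subsegments.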

For cutting, the position of the points is found by calculating the image of the cutting points of the reference edge, these points being evenly distributed over this edge.

ii) Cutting points for processing a face (points internal to an element or a face). The same principle as before will be followed, now using the control points of the element (of the face) other than those of the edges. We denote by $l_{max}$ the length of the longest edge of the element (the face) and, as seen previously, the distance to the control point $P_i$ is defined by:

$$\delta^{face}(P_i) = \frac{\|P_i - P_i^{straight}\|}{l_{max}},$$

with $P_i$ the $i$-th internal control point of the element (the face) being considered and $P_i^{straight}$ the $i$-th control point of the straight element (the face of degree 1, or 1 × 1 for a quadrilateral). By $\delta^f$ is denoted the maximum of the $\delta^{face}(P_i)$, $\delta^f = \max_i \delta^{face}(P_i)$.

In addition, a coefficient, known as planarity and denoted by plan, is also introduced. For a surface triangle or a triangular face, we set plan = 0. For a surface quadrilateral or a quadrilateral face, we set:

$$plan = \frac{|\det(P_{d0} - P_{00},\, P_{dd} - P_{00},\, P_{0d} - P_{00})|}{\|P_{d0} - P_{00}\|\; \|P_{dd} - P_{00}\|\; \|P_{0d} - P_{00}\|},$$

with d the degree of the element (the face) and $P_{..}$ the vertices of this element (face). This coefficient is the ratio between a volume (in absolute value) and lengths. It somehow measures the planarity of the element. From these different values, we deduce the number of points of a first cutting band. It is indeed possible to interpret the tessellation method as the processing of successive bands (or crowns), gradually starting from the boundary of the element toward its interior until it is completely covered (Figures 4.3–4.5 and following figures).
– Triangle or triangular face. We are looking for n such that:

$$n \le [\,5\,t\,\max(\max_j \delta^a_j,\, \delta^f,\, plan)\,] \le n + 1, \qquad [4.2]$$

with j looping over all three edges. As such, a single value $nbre = n + 1$ is obtained.
– Quadrilateral or quadrilateral face. In this case, two11 values will be defined, one per topological direction, that is:

$$n_1 \le [\,5\,t\,\max(\max_{j=1,3} \delta^a_j,\, \delta^f,\, plan)\,] \le n_1 + 1 \quad \text{and thus} \quad nbre_1 = n_1 + 1,$$

$$n_2 \le [\,5\,t\,\max(\max_{j=2,4} \delta^a_j,\, \delta^f,\, plan)\,] \le n_2 + 1 \quad \text{and thus} \quad nbre_2 = n_2 + 1. \qquad [4.3]$$

The first parameter relates to sides 1 and 3, and the second parameter relates to sides 2 and 4, topologically, the opposite sides.
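These first-band counts can be sketched as follows (function names are ours; the deltas are plain inputs, assumed to be precomputed as described for the edges and the face):

```python
# Sketch of the first-band cutting parameters of equations [4.2]-[4.3],
# together with the planarity coefficient plan for a quadrilateral.
from math import floor

def planarity(P00, Pd0, Pdd, P0d):
    # plan = |det(Pd0-P00, Pdd-P00, P0d-P00)| / (product of the norms)
    u = [a - b for a, b in zip(Pd0, P00)]
    v = [a - b for a, b in zip(Pdd, P00)]
    w = [a - b for a, b in zip(P0d, P00)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    norm = lambda x: sum(c * c for c in x) ** 0.5
    return abs(det) / (norm(u) * norm(v) * norm(w))

def tri_band(delta_edges, delta_f, plan, t=1.0):
    # delta_edges: the three per-edge maxima delta_j^a
    n = floor(5 * t * max(max(delta_edges), delta_f, plan))
    return n + 1                       # nbre

def quad_bands(delta_edges, delta_f, plan, t=1.0):
    # delta_edges: (d1, d2, d3, d4); sides 1/3 and 2/4 are paired
    n1 = floor(5 * t * max(delta_edges[0], delta_edges[2], delta_f, plan))
    n2 = floor(5 * t * max(delta_edges[1], delta_edges[3], delta_f, plan))
    return n1 + 1, n2 + 1              # nbre1, nbre2
```

For a planar quadrilateral, plan = 0 and, with straight edges, no cutting occurs; for a warped bilinear quadrilateral, plan alone forces a cut, which is exactly its purpose.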

iii) Band cutting. With these cutting parameters, a surprisingly simple subdivision process
will be defined. It is first observed that a band is naturally defined between the edges of the

11. This is a choice, one could remain on a value per topological side, therefore four values in total.

Figure 4.3. Triangles. Principle and control diagram of adaptive tessellation using a first
band and its cutting parameters (on the left). Band mesh (gray area, in the middle) and
inwards propagation (on the right)

Figure 4.4. Quadrilaterals. Principle and control diagram of adaptive tessellation using a
first band and its cutting parameters (on the left). Band mesh (gray area, in the middle) and
inwards propagation (on the right)

Figure 4.5. Influence of parameter t on the tessellation and, therefore, on the rendering. The
default value is that of the third drawing starting from the left

elements and the 3 (4) topological sides. By linking the cutting points that correspond to one another, a triangle mesh of this first band is trivially constructed.

iv) Gradual cutting of the whole element. The area not already covered will be processed
in the same way by defining, one after the other, as many bands as necessary. The number of
cutting points of the (internal) sides of the band is deduced from that of the external sides (which
is known here). In fact, this number is reduced from one band to the other. In this way, we
progress toward the interior of the element until it is completely filled. For a triangular element,
the process completes when “in the center” only one triangle remains with no cutting points on
its edges or a single cutting point that will be linked “in a star fashion” with the edges of the

triangle that contains it. For a quadrilateral element, the process ends in the same way, only a
single “square”, a single segment or a single point “in the center” remains.
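One possible way (our sketch, not the book's code) to triangulate one side of a band is to advance greedily along the outer and inner chains of cutting points, linking them with triangles:

```python
def mesh_band_side(outer, inner):
    """outer, inner: point indices along the two sides of one band side.
    Returns triangles (index triplets); works for len(inner) >= 1 so the
    final star-shaped linking to a single center point is also covered."""
    tris, i, j = [], 0, 0
    while i < len(outer) - 1 or j < len(inner) - 1:
        # advance along the chain that is proportionally behind
        if j == len(inner) - 1 or \
           (i < len(outer) - 1 and
            i * (len(inner) - 1) <= j * (len(outer) - 1)):
            tris.append((outer[i], outer[i + 1], inner[j])); i += 1
        else:
            tris.append((outer[i], inner[j + 1], inner[j])); j += 1
    return tris
```

The number of triangles produced is (number of outer subsegments) + (number of inner subsegments); with a single inner point, the chain degenerates into the star-fashion linking described above.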

It is noticeable that for a particular choice of the (first) cutting values, a (topologically) uniform subdivision will be found, which is precisely what this method makes it possible to avoid.

The particular case of a quadrilateral element of degree 1 × 1, d = 1, belonging to a surface, deserves some comments and justifies the introduction of the parameter plan in the formula. Indeed, since the edges are straight segments, we have $\delta^a = 0$; therefore, according to formula [4.1], the edges are not cut. Similarly, one has $\delta^f = 0$ and there are no points to cut the face (regardless of the value chosen for t). Subsequently, to represent the element, it can only be cut into two triangles by taking either one of the diagonals. Both solutions are, a priori, different, or even very different, leading to, for example, thinking that there is a fold. The introduction of the parameter plan will contribute to finding a satisfactory representation by choosing an adequate value for t. The edges are not cut but we have $n_1 = n_2$, where $n_1$ (formula [4.3]) is such that $n_1 \le [\,5\,t\,\max(\max_{j=1,3} \delta^a_j,\, \delta^f,\, plan)\,] = [\,5\,t\,plan\,] \le n_1 + 1$. Therefore, $nbre_1 = nbre_2 = n_1 + 1$ for the first band, and then the cutting process is developed inside the whole element as above.

Figure 4.6. Example of a quadrilateral of degree 1 × 1. On the left, for a plane element, both cuttings into two triangles yield the right rendering, and only a very clever observer would imagine a fold. On the right, these same cuts, for surfaces, exhibit a fold, whereas adaptive tessellation allows a good rendering (following figure) and avoids the apparent formation of a fold

On the right-hand side, Figure 4.6 shows a fold regardless of the diagonal chosen to cut
the quadrilateral into two triangles, whereas Figure 4.7 gives a satisfactory rendering (without
fold), thus showing the advantages of the coefficient plan. Later, we will see that cutting the
quadrilateral into two second-degree triangles with control points judiciously defined is also an
(exact) solution and the problem is carried over onto the visualization of these triangles.

Figure 4.7. Example of a quadrilateral of degree 1 × 1. Adaptive tessellation enables a good rendering. This tessellation is shown for several values of the parameter t

• Multiparameterized tessellation

This solution is the one proposed in the OpenGL standard for both triangles and quadrilaterals [Sellers et al. 2014]. It relies on a tessellation of each element governed by several subdivision parameters related, particularly but not only, to the edges of the elements. Accordingly, this is a utilization (an illustration) of the notion of adaptive tessellation as described above and of the entire underlying subdivision process.

Figure 4.8. Tessellation control parameters in the notation used by OpenGL for a
quadrilateral, a triangle and an edge (at the bottom)

In Figure 4.8, the OpenGL notation is used to designate the subdivision parameters for an edge
or for the first band or crown for a triangle and a quadrilateral.
– Segment or edge:
- TessLevelOuter[0]: the subdivision factor for an edge (wireframe plot).
– Triangle or triangular face:
- TessLevelOuter[0 : 2]: the three subdivision factors, one per (external) edge;
- TessLevelInner[0]: the subdivision factor for the (internal) edges, the same for all three.

– Quadrilateral or quadrilateral face:
- TessLevelOuter[0 : 3]: the four subdivision factors, one per (external) edge;
- TessLevelInner[0 : 1]: the two subdivision factors for the (internal) edges, the same for two topologically opposite edges.

See below the application of these principles to the development of a visualization tool offering the ability to correctly process curved entities of any degree.

• Uniform subdivision into curved triangular elements


The problem here is different and will be described in detail in Chapter 5 because this type of
subdivision will allow bounds to be estimated on the solutions that will then be plotted. In short,
the initial element of arbitrary degree is cut, for example into four by taking the “midpoints” of
the edges, such that the subelements remain of the same degree (Figure 4.9).

Figure 4.9. Subdivision into four of a triangle of arbitrary degree by introduction of a vertex
per edge, the midpoint appears as a natural choice but this is not mandatory

For a quadrilateral element (a face), the De Casteljau algorithm (Volume 1, Chapter 9) gives the solution directly. Subsequently, for example, to cut an element of any degree into 4, the point of parameter u = v = 1/2 is evaluated and the control points of the subelements are naturally obtained. However, for a triangle, this algorithm allows a cut into 2 (u = v = 1/2 and w = 0) or a cut into 3 (u = v = w = 1/3) but not, at least directly, a cut into 4 via the “midpoints” of the edges (Figure 4.9), which is explained in Chapter 5.
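A minimal sketch of the De Casteljau subdivision of a Bézier edge at t = 1/2 (names are ours): the intermediate points of the algorithm directly yield the control points of the two sub-edges, without evaluating nodes explicitly:

```python
def de_casteljau_split(ctrl, t=0.5):
    # Repeated linear interpolation between consecutive control points; the
    # first point of each stage feeds the left half, the last one the right.
    left, right, pts = [ctrl[0]], [ctrl[-1]], list(ctrl)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(P, Q))
               for P, Q in zip(pts, pts[1:])]
        left.append(pts[0]); right.append(pts[-1])
    return left, right[::-1]  # control points of the two sub-edges
```

For the degree-2 edge with control points (0,0), (1,1), (2,0), the shared endpoint of the two halves is (1, 0.5), the point of the curve at t = 1/2.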

4.3. Quick introduction to OpenGL and to the design of a graphics software program

A few indications will be given here on how to use the possibilities offered by OpenGL to
implement a mesh visualization process. It would not be practical to describe every possibility
available, only the main ones, which are going to be explicitly used, will be described. Strictly
speaking, it is clear that users do not need to know how their graphics software works. Nonethe-
less, this chapter on visualization will also serve the purpose of introducing some of the under-
lying ideas and concepts.

The philosophy is based on the use of different shaders. This notion covers a set of source files written in the GLSL language12. These programs13 are compiled on the fly by the graphics card of the computer when the software is executed on that machine. If the software calls such a shader, the role of the latter is to provide default values or values calculated from information that it makes available to the programmer. For example, coordinates (shader inputs) make it possible to calculate one or more values (shader outputs).

In the end, a pipeline can be built, composed of shader branches, each having its precise role; raw data will be processed through it to achieve the plot with, at the end, the data required for this plot.

In this way, it will be possible to customize the visualization software, in which branches perform, automatically, a certain number of tasks while shaders, for their part, can be programmed. For example, we will understand how elements of any degree are plotted by replacing this problem (which the primitives do not anticipate) with a problem capable of being addressed (in fact, here, this will amount to plotting simple straight triangles).

To see the contribution of OpenGL in writing graphic software programs and, in particular, the use of GPUs, a synthetic description will be given of what a standard graphic program can be, in order to see how and where this same software can be improved by resorting to shaders. Efforts will be made to maximize the number of operations carried out on the GPU and to minimize the use of the CPU and, therefore, the number of transfers between these two types of computational units, as well as the necessary memory resources.

Algorithm [4.4] therefore proposes a possible organization to develop a graphics program in


a standard way (here for viewing meshes and in the next chapter for this same visualization with,
in addition, the plotting of solution fields).

Mesh visualization algorithm [4.4]


i) Mesh reading, vertex coordinates (nodes) and element connectivity (ordered list of vertices
(nodes)), and attributes.
ii) Construction of neighborhood relations (per edge or per face depending on the dimension).
iii) Mesh extraction of the surface of the object.
iv) [Loop over elements, plotting (initial plot in automatic mode), end Loop].
v) User request, ACTION.
vi) Depending on the request, END or plotting the result of ACTION and then return to v).

We are now going to give some information14 on the various points mentioned in the scheme.
Beforehand and conventionally, let us assume that a request is specified either on the keyboard
or via the mouse.

12. GLSL, OpenGL Shading Language, is a programming language specific to OpenGL.


13. In other disciplines, they would be referred to as user subprograms.
14. It is clear that the following is not intended for specialists but rather for neophytes.

Then, and in the order of the scheme, step (i) is performed in accordance with the defined file
format (Volume 1, Chapter 4 and Chapter 1 of this volume). In the case of high-degree elements,
it is convenient, at this stage, to establish the table of the coordinates of the control points (Volume
1, Chapter 3). In step (ii), the construction of neighborhood relations between elements was
already seen (Volume 1, Chapter 4). In step (iii), mesh extraction from the outer surface of
the object will allow for quickly plotting the scene. This extraction is based on neighborhood
relations, or, if not, on the fact that the faces that compose this mesh are unique (they belong to
only one single element). In step (iv), the plot of the scene (here the mesh elements according to
an automatic mode thereby without user intervention) deserves some more precise explanations.
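Step (iii) can be sketched as follows for a tetrahedral mesh (a hedged illustration, with our names): a face is on the outer surface exactly when it belongs to one single element:

```python
# Extract the outer surface of a tetrahedral mesh by keeping the faces that
# occur exactly once; a face shared by two tetrahedra is internal.
from collections import Counter

def outer_surface(tets):
    faces = Counter()
    for a, b, c, d in tets:
        for f in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
            faces[tuple(sorted(f))] += 1   # sort to identify shared faces
    return [f for f, count in faces.items() if count == 1]

# Two tetrahedra sharing face (1, 2, 3): that face is internal, the six
# others are on the boundary.
tets = [(0, 1, 2, 3), (1, 2, 3, 4)]
print(len(outer_surface(tets)))  # 6
```

In a real mesh, the same result is obtained more cheaply from the neighborhood relations, as noted in the text; the counting version is the fallback when those relations are not available.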

• Displaying, basic principle. The aim is to find, for a given point in the physical space (in two or three dimensions), the pixel of the screen that corresponds to it. By assigning a color to every pixel, the representation (image) of the scene is built. Therefore, one could think of defining a triplet [i, j, c], with i and j the position (the screen being a two-dimensional space, the image is obtained by a projection of the scene onto this space) and c a color. In fact, a quadruplet [i, j, c, d] will be defined by adding a parameter called depth. This parameter will help to address the visibility problem (hidden faces).

Figure 4.10. From the physical space to the pixels. On the left, the initial frame of reference
and a cube including the scene enable the triplet (i, j, p = d) to be defined; in the middle (the
position of the observer is chosen). On the right, the depth (p = d) is taken into account which
makes it possible to have the only visible pixel (i, j) if the visibility criterion is activated

To travel from a point of the physical space, (x, y, z), to a point on the screen (i, j, c, d), a
succession of transformations is performed (defined through their respective matrix, Chapter 2)
that allow different frames of reference to be constructed.

The frame of reference of the physical space is the conventional reference frame. A new system of coordinates is defined by building a cube (a box) aligned with the canonical axes and surrounding the scene (the mesh). The center of this cube becomes the origin of this new frame of reference, whose axes are the initial axes carried forward onto this origin. The cube is normalized so that its edges have unit length. The observer's position is then defined. The simplest is to choose a point on a specific axis at distance 1 from the origin (the center of the cube). This gives six possibilities: the observer is above, underneath, in front, behind, to the right or to

the left of the cube. The chosen axis determines a new frame of reference. The distance to the
orthogonal plane to this axis is chosen upon which a projection is carried out, and this plane will
be the screen. In addition, the axis determines which is the coordinate that will play the role of
p, depth and how the other two coordinates will be translated in terms of (i, j). The plot then
consists of coloring the pixel (i, j) with color c by following the depth p = d if the visibility
criteria is to be taken into consideration.
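As an illustration of the quadruplet [i, j, c, d] (a sketch under our own conventions: observer on the x-axis, pixel grid of size width × height), a depth comparison keeps only the visible color per pixel:

```python
# Minimal depth-buffer idea: project points of the normalized cube onto a
# pixel grid along the x-axis and keep, per pixel, the color of the point
# closest to the observer placed at x = 1.
def rasterize(points, width, height):
    """points: (x, y, z, color) with y, z in [-0.5, 0.5]; x plays the depth."""
    buf = {}                              # (i, j) -> (depth, color)
    for x, y, z, c in points:
        i = min(int((y + 0.5) * width), width - 1)
        j = min(int((z + 0.5) * height), height - 1)
        d = -x                            # larger x means closer to observer
        if (i, j) not in buf or d < buf[(i, j)][0]:
            buf[(i, j)] = (d, c)
    return {p: c for p, (d, c) in buf.items()}

# Two points falling on the same pixel: the one at x = 0.4 hides the other.
img = rasterize([(0.2, 0.0, 0.0, "red"), (0.4, 0.0, 0.0, "blue")], 10, 10)
print(img[(5, 5)])  # blue
```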

• Default plot, step (iv). The point is to find a set15 of parameters leading, without any user
intervention therefore in automatic mode, to a quick plot of the mesh.

The extrema of the mesh are calculated and a surrounding bin (a cube) is built. This box
is resized to be centered at the origin O = (0., 0., 0.) and so that its edges have a unit size.
A rotation of 60◦ around the z-axis is applied. The observer is then fixed at position Obs =
(1., 0., 0.). These data are used to achieve a view, and then the user takes control.
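This default setup can be sketched as follows (function name is ours): the bounding box is centered at the origin, scaled to a unit cube and rotated by 60° around the z-axis, the observer then being placed at (1., 0., 0.):

```python
# Sketch of the default view: center and normalize the surrounding box,
# then rotate the scene by 60 degrees about the z-axis.
from math import cos, sin, radians

def default_view(vertices):
    lo = [min(v[k] for v in vertices) for k in range(3)]
    hi = [max(v[k] for v in vertices) for k in range(3)]
    ctr = [(a + b) / 2 for a, b in zip(lo, hi)]
    scale = max(b - a for a, b in zip(lo, hi)) or 1.0
    c, s = cos(radians(60)), sin(radians(60))
    out = []
    for v in vertices:
        x, y, z = [(v[k] - ctr[k]) / scale for k in range(3)]
        out.append((c * x - s * y, s * x + c * y, z))  # rotation about z
    return out
```

After this call, the scene fits a unit cube centered at the origin and it is the mesh, not the observer, that has been rotated, consistent with the convention stated at the beginning of the chapter.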

So let us go back to Algorithm [4.4] to see how the user can now interact. We are at stage (v) and the user intervenes to clarify what he/she wants to do (end the visualization (leave the application) or rotate, get closer (enlarging), move away, make a cut, switch to shrink mode, switch to minimesh mode, plot curved elements and fine-tune the representation mode of these elements, print out numbers, etc.). Hereafter, a brief description is given of some of the possible ACTIONs.

• Geometric transformation. Any transformation is completely defined by a matrix (Chapter 2


of this volume). This matrix is applied to the coordinates of each point (vertex, node, generic
point) and this is the new set of coordinates that will be visualized. In practice, it is convenient
to center the scene to facilitate movement and avoid any unintentional movement.

• Perform a cut. Consider a cutting plane (a straight line in two dimensions); only some of the elements of the mesh will then be visualized. Three modes are generally proposed. In the first, the trace of the elements in the cutting plane (on the line) is drawn; in the second, the cut elements are actually drawn and then (if we are in three dimensions) those located behind this plane, which will be referred to as “hedgehog” mode; and in the third, only the cut elements are drawn, namely a slice. In three dimensions, one naturally has here a notion of visibility.

The elements affected by the cut are detected using the equation (of the line or) of the cutting plane, denoted as f(x, y) = 0 or f(x, y, z) = 0. For elements whose edges are straight segments, this function is evaluated at their vertices and the sign is analyzed. A change of sign at the two ends of an edge indicates that the element is cut (Figure 4.11, on the left). For curved elements, the same strategy is proposed, taking into account the sign of the function at their (edge) nodes; a change of sign between two consecutive nodes on an edge indicates that the element is cut (Figure 4.11, on the right, and Figure 4.12 for a case not detectable by looking only at the vertices). Nevertheless, some configurations, in the same figure, on the right side, remain undetected via
15. We indicate what it can be, but this is merely a possibility.


120 Meshing, Geometric Modeling and Numerical Simulation 3

Figure 4.11. Detection of an element cut by a line, two-dimensional example

the nodes. The distances from the points16 to the cutting line are then considered, and points can
also be introduced on the edges (for example, at the quarter points). It should be noted, however,
that making a mistake about an element or two is not really critical since the cutting plane can
always be moved.
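The sign test just described can be sketched as follows (a minimal Python illustration, with names of our own choosing); an edge is sampled at its two vertices and, for a curved edge, at its intermediate nodes as well:

```python
def cuts_edge(f, samples):
    """True if the cutting line/plane f(...) = 0 separates two consecutive
    sample points of an edge. `samples` lists, in order along the edge,
    the vertices for a straight edge, or vertices and nodes for a curved one."""
    signs = [f(*p) for p in samples]
    return any(a * b < 0.0 for a, b in zip(signs, signs[1:]))

def element_is_cut(f, edges):
    """An element is flagged as cut as soon as one of its edges is cut."""
    return any(cuts_edge(f, edge) for edge in edges)
```

As the text notes, this sampling can miss a few configurations (Figure 4.12); missing an element or two is acceptable since the cutting plane can be moved.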

In general, the cutting process needs to be fast. In particular, a rapid rejection of unconcerned
elements speeds up the process. To this end, each element is associated with a bounding box
that takes its control points into account; in the case of an obvious rejection, the necessary
computation is immediate (essentially for free).
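One plausible reading of this rapid rejection (a sketch under our own assumptions; the text does not fix the exact container) associates each element with the axis-aligned bounding box of its control points. Since the line equation f is affine, a constant strict sign at the four corners guarantees a constant sign over the whole box, hence an immediate rejection:

```python
def bounding_box(points):
    """Axis-aligned bounding box of a set of 2D control points."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def box_rejects(f, box):
    """Quick rejection for an affine line equation f(x, y) = ax + by + c:
    if f keeps a strict sign at the four corners, it keeps that sign over
    the whole box, so nothing inside the box can be cut."""
    (x0, y0), (x1, y1) = box
    signs = [f(x, y) for x in (x0, x1) for y in (y0, y1)]
    return all(s > 0.0 for s in signs) or all(s < 0.0 for s in signs)
```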

Figure 4.12. Detection of an element cut by a line, two-dimensional example in which the
three vertices have the same sign and do not detect the cut, whereas the nodes see it (on the
left). Another situation, on the right, where the control points (Bézier) are examined and where
the decision remaining uncertain requires finer analysis (the red line cuts the edge twice and
the black line does not cut the edge)

 For meshes of any degree but with straight edges, the sign of the function at the vertices
detects whether there is a cut or not. The computation of the trace of the cut in the element,
unnecessary in hedgehog mode, is obtained from the intersection points, which are easy to
calculate: only the intersection between a line (a plane) and a straight segment has to be
computed. Knowledge and use of

16. The curved edge is wedged between the segments connecting the control points and those connecting
the nodes if there is no inflection point(s) or, more generally, is contained in the convex hull of the control
points.
High-Degree Mesh Visualization 121

neighborhood relations between elements make it possible to minimize the overall cost because
some of the information about an element (signs and intersections) has already been calculated
when examining a nearby element. In addition, if an edge (face) of an element is cut, one knows
in which neighbor(s) the cutting line (plane) is likely to be active. The cut is thus followed,
element by element, at lower cost because the number of elements to be analyzed (usually for
no purpose) is minimized. Having said that, in practice it is simpler and barely more expensive
to analyze every element blindly (without considering the neighborhoods, which have become
unnecessary), even for meshes of significant but reasonable size.

 The case of curved meshes is singularly more complicated and little or not at all documented.
First, the Lagrange elements will be transformed by formulating them in their Bézier notation.
In practice, the control points of this formalism (and not the underlying analytical expression)
will be used. The edges of the control polygon (polyhedron) are then considered, known as
control segments. These are straight segments that join the control points. It is then observed
that any element in which none of the control segments is cut (by the line or by the cutting
plane) is not affected by the cut, the converse being false (see Figure 4.12, on the right). It then
remains to analyze all the other elements, which are likely (but not all of them, as shown in this
figure, on the right side) to participate in the cut. To continue, we restrict ourselves to the plane
(the two-dimensional case) and two situations will be found:
– the (cutting) line cuts only one of the control segments of a curved edge of one element and
two such edges are concerned (Figure 4.13);
– the (cutting) line cuts two of the control segments of a curved edge17 and only one such
edge is concerned, in the same figure.
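These two situations can be told apart by counting the sign changes of f along the control polygon of each curved edge (a Python sketch; it assumes the control points of the edge are available in order):

```python
def crossed_control_segments(f, ctrl):
    """Count how many control segments of one curved edge are crossed by
    the line f(x, y) = 0, `ctrl` being the ordered Bezier control points.
    0 -> the edge cannot be cut (variation-diminishing property);
    1 -> exactly one intersection with the edge;
    2 -> zero or two intersections: a finer analysis is required."""
    signs = [f(*p) for p in ctrl]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0.0)
```

A count of 2 corresponds to the uncertain case of Figure 4.12, on the right: the edge may be cut twice or not at all.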

In the end, with the control points and control segments, there are several possible configurations
(Figure 4.13) but, to simplify, only cases with a single edge are going to be discussed (the
reasoning should be repeated if other edges are involved). A method must be found to identify
the situation and discard the element if it is not affected by the cut despite the existing
intersections. Next, the intersection between the line and the curved edge has to be found, so as
to obtain the trace of the cut in the element. The idea is quite simple: the intersection point (or,
at the very least, a very close point) is sought by dichotomy. The basic ingredient is the
De Casteljau evaluation algorithm coupled with a dichotomy method, as explained below.

The algorithm (in its version with at most one intersection per edge) is synthesized in Algo-
rithm [4.5] and illustrated by Figure 4.14 by means of a third-degree example. The edge control
points are calculated and we look at the sign (vis-à-vis the equation of the cutting line) of the
ends, nodes and control points of the edge. If the edge is not discarded at this stage, its midpoint
is calculated (independently of its degree). The De Casteljau algorithm gives this midpoint and

17. It is implicitly assumed that there is no inflection point on a curved edge of a finite element, regardless
of its degree (in our opinion, a vertex may very well be a curve inflection point, but an edge node cannot).
Within a context of finite elements, this assumption seems natural and non-constraining. Otherwise, the
number of cut control segments may be higher than 2 and the algorithm, although based on the same idea,
must be adapted.

Figure 4.13. Intersection of a line with control segments of curved edges of degree 2 on top,
degree 3 at the bottom. A few possible configurations (number of control segments and
intersected edges for a given element, number of effective intersections for a given edge)

builds the associated control segments. If d is the degree, d control segments will be found be-
tween the start of the edge and its midpoint as well as d other segments between this midpoint
and the end of the edge. To find the desired intersection, it is necessary to perform at most 2 d
computations (at most d to the left, at most d to the right). If no intersection is found, the edge is
discarded. Otherwise, the intersection belongs to one of the two half-curves (on either side of the
midpoint) and this portion of the curve is retained so as to proceed with the analysis again in the
same way (computation of successive midpoints).

Intersection between a curved edge and a line [4.5]


i) Computation of the edge control points.
ii) Sign evaluations (ends, nodes, control points), edge rejection or examination.
iii) Evaluation of the midpoint of the edge (De Casteljau for u = 1/2) and update of control
segments.
iv) Computation of P , the intersection between the line and the control segments.
v) No intersection, rejection.
vi) Intersection, retain the “good” half of the curve.
vii) Evaluation of the midpoint, M, of this half (De Casteljau) and update of control segments.
viii) Stop test: if dist(P, M ) ≤ ε, P is the solution, END.
ix) Otherwise, go to (iv).

The algorithm terminates either via a rejection (the edge is not cut) or via the stopping test, in
which case the point P is the solution18 sought. In the event that there are several intersections
on the same edge, the same process is repeated as many times as necessary.

It should be noted that this very simple algorithm also applies to finding the intersection point(s)
between a plane and a curved edge: only the intersection operator has to be replaced, and the
geometries handled remain straight (line-segment or plane-segment now).
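Algorithm [4.5] can be sketched as follows in Python (a simplified reading, assuming at most one intersection per edge; it relies on the variation-diminishing property: a Bézier curve whose control polygon shows no sign change cannot cross the line; all names are ours):

```python
def de_casteljau_split(ctrl):
    """Split a Bezier curve at u = 1/2 (De Casteljau); return the control
    points of the left and right halves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [tuple(0.5 * (a + b) for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]

def intersect_edge_line(ctrl, f, eps=1e-9, max_iter=60):
    """Intersection of a curved edge (Bezier control points `ctrl`) with
    the line f(x, y) = 0, by dichotomy on the control polygon."""
    def crossed(poly):
        s = [f(*p) for p in poly]
        return any(a * b <= 0.0 for a, b in zip(s, s[1:]))
    if not crossed(ctrl):
        return None                      # rejection: the edge is not cut
    for _ in range(max_iter):
        left, right = de_casteljau_split(ctrl)
        if crossed(left):
            ctrl = left
        elif crossed(right):
            ctrl = right
        else:
            return None                  # the polygon was cut, not the curve
        p, q = ctrl[0], ctrl[-1]
        if max(abs(a - b) for a, b in zip(p, q)) <= eps:
            return tuple(0.5 * (a + b) for a, b in zip(p, q))
    return tuple(ctrl[0])
```

For a plane in three dimensions, only the function f changes (f(x, y, z) = 0); the subdivision machinery is identical.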

Figure 4.14. Intersection of a line, Γ, with a curved edge, example of a curved edge of
degree 3. First step involves calculation of the edge midpoint and second step involves
calculation of the midpoint of the selected half. This calculation is identical to the first, and
only the relevant control segments are updated

To obtain the trace of a two-dimensional cut, the calculated intersection points just have
to be linked for each element. In three dimensions, the trace is also obtained by linking the
intersection points, element by element. This problem is similar to that of the construction of an
implicit surface (Volume 1, Chapter 7), complicated by the fact that the intersected entities are
(may be) curves. We will revisit this problem in the next chapter, Chapter 5, and this will be an
opportunity to address multiple questions.

In three dimensions, which is the only relevant case, the hedgehog mode consists of plotting
the visible faces of the cut elements (the intersection points are useless) as well as the remainder
of the visible faces of the other elements.

In slicing mode, only the cut elements are plotted (again, the intersection points are useless).

• Shrinking. The elements are contracted around their center of gravity. The common edges
(faces) are duplicated, and any lack of conformity is immediately detected.
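A minimal sketch of this contraction (Python, names ours): each element keeps its shape but is scaled around its centroid, so shared edges are drawn twice and any non-conformity becomes visible.

```python
def shrink(elements, coords, factor=0.8):
    """Contract every element around its centroid. Shared edges (faces)
    are thereby duplicated, so any lack of conformity shows up."""
    shrunk = []
    for elt in elements:
        pts = [coords[i] for i in elt]
        n, dim = float(len(pts)), len(pts[0])
        g = tuple(sum(p[k] for p in pts) / n for k in range(dim))
        shrunk.append([tuple(g[k] + factor * (p[k] - g[k]) for k in range(dim))
                       for p in pts])
    return shrunk
```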

• The minimesh mode. This mode is very useful for locally observing a mesh in the neigh-
borhood of a vertex or face. For example, for a vertex, this vertex (or germ) is selected and the

18. Any disgruntled people will still be able to write P = (P + M)/2.

view is centered on this point. Next, the vertex ball is displayed; then, if needed, this ball is
enlarged by adding its neighbors by vertex, edge and face (this is the notion of a crown). By
enlarging the set of elements of this local mesh (known as a minimesh) neighborhood by
neighborhood, the whole mesh is eventually recovered. In the opposite direction, this set can be
reduced and the original ball recovered. For the faces, the process is the same: a face (or germ)
is selected and the view is centered on that face. Next, the face is plotted and then, if needed,
the displayed area is enlarged by adding its neighbors per vertex and edge.
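The growth of the minimesh by crowns can be sketched with a vertex-sharing criterion (a Python sketch of the idea only; the neighborhoods by vertex, edge and face of the text are finer than this single criterion):

```python
def vertex_ball(elements, germ):
    """Indices of the elements sharing the germ vertex (its 'ball')."""
    return {i for i, elt in enumerate(elements) if germ in elt}

def grow_crown(elements, selection):
    """Enlarge a selection by one crown: add every element sharing at
    least one vertex with the current selection."""
    verts = {v for i in selection for v in elements[i]}
    return {i for i, elt in enumerate(elements) if verts & set(elt)}
```

Applying grow_crown repeatedly recovers the whole mesh; keeping the successive selections allows shrinking back to the original ball.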

• Curved elements. Automatic tessellation and modification (refinement or coarsening)
of this tessellation. Formula [4.1] is applied element-wise to find the value of the subdivision
parameter of the edges. Depending on the geometry of the element (of the face), formula [4.2]
or [4.3] is applied, t being set or given, to find the values of the subdivision parameter(s) of the
inner sides of the first band. From these values, the successive bands and their subdivision are
deduced. The subdivision is topologically defined on the reference edge or element and
transposed into the physical space (following the De Casteljau algorithm; we see here the
interest of having built a table storing the coordinates of the control points). For each element,
one thus has its tessellation and it is the latter that will be pixelated (displayed on the screen).

By changing the value of t (adding or removing 1) in the formulas, the rendering accuracy is
increased or decreased and the De Casteljau algorithm recalculates the position of the vertices of
the corresponding tessellation.

It should be noted that the vertices of the tessellations drawn are not, except in special cases,
the nodes of the original element.

• Printing numbers. Here, it is simply a question of finding out where on the screen the
number should be displayed. For a vertex number, we look at its position and a slight offset is
given. For an element number, one might decide to print its number at its barycenter.

• Pointing to a face to select it or to center the view. The question is to find the coordinates
in the physical space of a designated pixel on the screen. The faces visible to the user are
associated with virtual faces (not visible). Each virtual face is associated with a color (RGBA
system). There are 2^32 possible colors19 and each color is associated with a number, in other
words, each color encodes a number. We look at the color of the virtual face associated with the
face pointed by the user, and its number is deduced. We thus have access to the vertices of the
face and therefore to the physical coordinates.
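The color encoding can be sketched as follows (Python; one byte of the 32-bit face number per RGBA channel, an assumption consistent with the 2^32 colors mentioned above):

```python
def face_to_rgba(face_id):
    """Encode a face number (0 .. 2**32 - 1) as an RGBA color,
    one byte per channel."""
    if not 0 <= face_id < 2 ** 32:
        raise ValueError("face number out of range")
    return ((face_id >> 24) & 0xFF, (face_id >> 16) & 0xFF,
            (face_id >> 8) & 0xFF, face_id & 0xFF)

def rgba_to_face(r, g, b, a):
    """Decode the face number from the color of the picked pixel."""
    return (r << 24) | (g << 16) | (b << 8) | a
```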

To see all of the possibilities offered by a graphics program20, the reader is referred to its
documentation (paper, online or interactive).

The scheme above is now reconsidered and rethought based on OpenGL, that is, based on
the use of Shaders and on the computing power of GPUs; this is Algorithm [4.6] and the
following figures.

19. That is the capability of considering more than 4 billion faces.


20. Which is not the objective here.

Therefore, from the previous scheme, we reuse a displaying step, namely:

iv) [Loop over the elements, displaying, end Loop].


in order to see what is really hiding behind this simple word, displaying.

The only elements we (can) know how to draw are straight segments (two vertices), triangles
(three vertices) and planar quadrilaterals (four vertices) whose edges are straight line segments.
The same holds for the faces: we know how to draw, without error, triangular faces and planar
quadrilateral faces whose edges are straight segments. Therefore, for solid elements, one knows
how to draw the faces of a tetrahedron of degree 1, or of any degree provided its edges are
straight. For a hexahedron, it is more problematic: in general, even at degree 1 × 1 × 1, the faces
are not planar.

To draw a single element (of this nature), the physical coordinates (x, y, z) just have to be
converted into screen coordinates and the associated pixelated polygon filled in with a color, or
the 3 (4) edges plotted in wireframe, which again amounts to finding the pixels traversing an
edge whose physical coordinates are known at the two ends.

Obviously, this is not the problem that arises: it is desirable to draw (for example) all the
elements of a mesh or the visible faces of a whole mesh. The notion of visibility immediately
appears, but we saw previously how to deal with it. In addition, for other elements (faces),
curved for instance, a decomposition into straight subelements will have to be used and, here
too, we saw previously how to do it. The plot must therefore incorporate a whole set of
processes which, in the end, reduces to what we know how to do: print out a straight triangle
taking the notion of visibility into account. In fact, the graphics card deals with this visibility
problem and we do not have to worry about it explicitly; all that is left is to look at how curved
elements are to be drawn. Here again, we find the notion of pipeline mentioned earlier, which
allows the data (for example, the elements of a curved mesh) to be processed in order to extract
the information that is directly usable by the display primitive (the straight triangles of a
tessellation built for this purpose).

Displaying is achieved by the graphics card (the GPU). Conventionally, all the data are pre-
pared on the CPU and sent to the GPU for display; this means that all computations are performed
on the CPU and that there are numerous CPU-GPU transfers. Since GPUs are also computational
units, this capability will be taken advantage of by requesting them to perform computations
(as many as possible) in addition to the display. As such, (expensive) CPU-GPU transfers will be
limited and speed increased by the same amount.

To illustrate the use and role of Shaders in the pipeline, we are going, from the simplest case
to the most complicated, to present how to process vertices (Figure 4.15), edges or triangles, or
straight quadrilaterals (planes) of degree 1 (Figure 4.16), edges, triangles or quadrilaterals of any
degree (Figure 4.17). Figure 4.18 summarizes all of these cases, the pipeline being adapted to
the case at hand by following the appropriate branches. As noted, the pipeline includes a succession
of operators, some of which are programmable so as to customize the software. In particular, we
will be able to deal with curved elements, which the (fixed) operators do not know how to do, but
which programmable operators (the so-called Shaders), inserted into the pipeline, are capable of
performing.

The expected Shaders are of several types: vertex, fragment, geometry, tessellation control
and tessellation evaluation (hence the suffixes .vs, .fs, .gs, .tcs and .tes). They are applied to
precise geometric features (point, segment, triangle, etc.), entities that can be identified using the
prefix xxx. The principle is that the output information of a Shader must be understood by the rest
of the pipeline: a branch performs certain operations before addressing the following Shader
of the pipeline, which processes the current information and, if need be, sends it (through the
following branch) to the next Shader, and so on up to the final Shader (the fragment Shader),
which actually performs the display by sending all the useful information (packets of pixels and
colors) to the display (the frame buffer). For further understanding, we shall describe several
cases of increasing complexity.
• Displaying a point (vertex or node)
Figure 4.15 shows the part of the pipeline related to this case. The fragment Shader receives
as input what comes out of the pipeline (that is, the previous branches and Shaders) and determines
the desired color of the pixels involved; its outputs are sent for effective displaying (to the
frame buffer). The vertex Shader has as input the coordinates (x, y, z) of the point and,
possibly, attributes (number, reference number, normal, etc.).

Figure 4.15. The pipeline for the vertices: the vertex Shader, the branch
and then the fragment Shader

The view is entirely determined by the transformation matrix (the so-called view matrix) that
is known to all stages. For a cut, the signed distance to the cutting plane is a data element transmitted
to the vertex Shader. The latter applies the view matrix and, if applicable, takes the cutting
plane into account; the transform of the point is thus calculated. The pipeline branch im-
mediately after converts the transformed physical coordinates into a square of pixels, determined
by a size (which is fixed or that users can set themselves in the xxx.vs), which is the pixel
representation associated with the point. Each pixel of this set is declared active except, in the
case of a cut, if the point is not seen, in which case it is declared inactive. Finally, all pixels (active
or not) are processed by the fragment Shader, which ignores inactive pixels.

The fragment Shader can be programmed to define the symbol (for example, a circle) that
will be used to represent the point. We start from the square above and only certain pixels are
retained. The color is then set and the result is sent for displaying.

To process a set of points, a loop is executed in parallel.


• Displaying a segment (edge)
Figure 4.16 shows the part of the pipeline related to this case. Compared to the previous
case, the pipeline contains an additional Shader, the geometry Shader, which corresponds to the

geometrical entity (or primitive) processed, here a segment, and below a triangle of degree 1
or a quadrilateral (plane) of degree 1 × 1. It should be noted that the quantities attached to
the vertices of the primitive will be linearly interpolated along (inside) the primitive; this is the
reason why processing curved primitives will require a tessellation beforehand (see below and
see, in Chapter 5, the plot of a curved edge such as that of a line of isovalues).

The vertex Shader here is very simple: it has as inputs the coordinates of the points (the mesh
vertices), to which it applies the view matrix. The geometry Shader here processes as geometric
primitive an edge, defined by the data of a first point, its attributes and its signed distance
with respect to a (possible) cutting plane, and by the same information for a second
point.

Figure 4.16. The pipeline for the straight edges or triangles and quadrilaterals with straight
edges

The edge is either defined automatically, by linear interpolation, or defined in a customized way,
for example by an elongated quadrilateral (thereby two triangles), which amounts to giving a
small thickness to the edge.

Taking into account the linear interpolation of the signed distance and the shift from
coordinates to pixels makes it possible to know whether a pixel is active or not. In general, every
associated data element is linearly interpolated on the edge (in a triangle hereafter).

All pixels are sent to the fragment Shader and are assigned a color and only the active pixels
are sent for display.

• Displaying a straight triangle or a planar quadrilateral with straight edges

In Figure 4.16, the part of the pipeline related to this case is shown and the geometry Shader
is related to the geometric entity being processed. Similarly, for a triangle as primitive, the
vertex Shader will consider three points (vertices), for a quadrilateral, four points and the signed
distances with respect to a (possible) cutting plane. A quadrilateral will be seen as a strip of two
triangles. A triangle is defined by linear interpolation and the potential attributes are processed
in the same way. For example, if a normal is assigned to points, we will be able to define it at
any point of the triangle and use this information in the fragment Shader to obtain a rendering
with shading (Phong method or other method). A new attribute is also added that will enable
going back to the parametric coordinates (in the reference space) of each pixel. This attribute,
for triangles, is simply the barycentric coordinate of its vertices, that is (1, 0, 0), (0, 1, 0) and
(0, 0, 1), which, via linear interpolation, will give that of any point and then any pixel, namely
the triplet (u, v, w). For a quadrilateral (although cut into two triangles), it will be possible to
access the pair (u, v).

Again, taking into account the linear interpolation of the signed distance and the shift from
coordinates to pixels makes it possible to know whether a pixel is active or not. As mentioned
above, all associated data are linearly interpolated in the triangle.

All pixels are sent to the fragment Shader and assigned a color, and the active pixels
are sent for display. It should be noted that the elements can be plotted without their edges
being explicitly plotted themselves. This is because the pixels corresponding to the edges can
be found from the barycentric coordinates added previously: if a coordinate is zero and if a
color (let us say black) is assigned to this pixel, then the edge, without really being defined,
will appear during display.
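The barycentric trick can be sketched on the CPU side as follows (Python; in practice this test runs per pixel in the fragment Shader):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    u = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / d
    v = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / d
    return u, v, 1.0 - u - v

def on_edge(p, a, b, c, tol=1e-2):
    """A pixel belongs to an edge when one barycentric coordinate
    (nearly) vanishes; coloring such pixels black makes the edges
    appear without ever being drawn explicitly."""
    return min(barycentric(p, a, b, c)) < tol
```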

• Displaying a curved triangle or a non-planar or curved quadrilateral

Figure 4.17 shows the part of the pipeline related to this case; compared to the previous
case, two new Shaders are added, whose goal is to feed an almost “standard”
geometry Shader, that is, close to that of the previous case dealing with a straight triangle.

Figure 4.17. The pipeline for the edges or triangles and curved quadrilaterals

In this case, the coordinates of the vertices will be used but also those of the control points of
the elements.

The vertex Shader has as inputs the coordinates of the control points. Therefore, we have
those of the vertices (and we could have those of the nodes, if necessary).

The tessellation control Shader will define the parameters that will govern the element tes-
sellation. This tessellation is performed on the reference element.

The tessellation evaluation Shader transports the tessellation of the reference element into
the physical space. A few attributes are then added, such as normals and coordinates reference.
Specifically, the geometric definition is used to evaluate the normal (to be able to make shades
further on) at the tessellation vertices. In addition, for each of them its coordinate is calculated
in the reference space. This will allow the calculation of the barycentric coordinates of all the
points of the triangles of the tessellation expressed in relation to the subtriangles containing them.
Finally, this Shader applies the view matrix to the coordinates and to normals.

The geometry Shader receives, in packets of 3, the points issuing from the previous Shader as
well as their attributes (normals, barycentric coordinates, etc.). It defines the signed distance
(in the case of a cut) at each point, determines the (straight) triangles to be displayed and finally
translates these results into pixels.

The fragment Shader receives all the pixels issued from the previous stage, assigns them a
color and sends the active pixels for displaying.

It should be noted that if the mesh comprises several types of elements (say, triangles and
quadrilaterals of a given degree), several Shaders will have to be defined, and the role of the xxx
prefix is to identify them.

For example, there will be a file tria2.tcs and a file quad2.tcs that will correspond to both
types of elements (therefore to the two Bézier definitions, the σ() functions, each specific to a
geometry).

• Generic pipeline

The generic pipeline (Figure 4.18) potentially comprises all the branches, but these are traversed
or not depending on the case being addressed.

Figure 4.18. The generic pipeline.
Depending on the case, we run through the relevant branches

From these interaction and customization possibilities offered by the Shaders, a synthetic
diagram can be built of what a visualization algorithm for meshes of any degree may look like,
using the capabilities of these Shaders and the computing power of GPUs. Part of the
algorithm runs on the CPU, which transmits the useful information to the GPU; the latter, in turn,
performs computations (via the Shaders), carries out the display, returns control to the CPU,
and so on.

Mesh visualization algorithm [4.6]


i) Mesh reading, vertex coordinates (nodes) and element connectivity (ordered list of vertices
(nodes)), and attributes.
ii) Construction of neighborhood relations (per edge or per face depending on the dimension).
iii) Mesh extraction from the surface of the object.
iv) Compilation of the Shader pipeline – Creation of arrays that will be the input parameters of
the Shaders – Transfer to GPU.
v) [Rendering loop, end Loop].
vi) User request, ACTION.
vii) According to the request, END or return to (iv) with the data relating to the new mesh or
the new state of the mesh to be processed according to ACTION.

The first step, which differs from the traditional case, consists of compiling the graphical
pipeline composed of the different Shaders, as in Figure 4.18. To execute the display defined
by this pipeline, input arguments have to be defined. So, compared to the standard
scheme (Algorithm [4.4]), the surface mesh to be displayed is transformed on the CPU into
various simple arrays, namely the coordinates and the indices (of the vertices of the elements).
These arrays are transferred to the GPU and constitute the input data of the previously compiled
graphics pipeline.

It is therefore no longer necessary to keep the arrays residing on the CPU, which further reduces
the memory footprint on this CPU.

To provide a practical example and illustrate the use of these Shaders, we give here a concrete
example of displaying a P1 triangle and a P2 triangle. In the P1 case, we take the standard
graphics pipeline (Figure 4.16), which is composed only of the vertex, geometry and fragment
Shaders. The vertex Shader given below, for a first-degree triangle, receives the coordinates of
the triangle vertices to be displayed and transforms them following the view matrix.

=============== Vertex Shader : P1 triangle =============

#version 400

layout (location = 0 ) in vec3 VertexPosition;

uniform mat4 ModelViewMatrix;


uniform mat4 MVP;

out vec3 VPosition;

void main()
{
VPosition = vec3(ModelViewMatrix * vec4(VertexPosition,1.0));

gl_Position = MVP * vec4(VertexPosition,1.0);


}

Here, we show the geometry Shader for a first-degree triangle that receives for each straight
triangle the vertices that have been transformed by the Shader above. This Shader computes the
normal of the triangle and transmits the positions and normal to the next Shader.

============ Geometry Shader : P1 triangle: flat shading ==========

#version 400

layout( triangles ) in;


layout( triangle_strip, max_vertices = 3 ) out;

in vec3 VPosition[];
out vec3 GNormal;
out vec3 GPosition;

void main()
{
vec4 vertex_0 = gl_in[0].gl_Position;
vec4 vertex_1 = gl_in[1].gl_Position;
vec4 vertex_2 = gl_in[2].gl_Position;

vec3 A = VPosition[1]-VPosition[0];
vec3 B = VPosition[2]-VPosition[0];
vec3 gFacetNormal = normalize(cross(A, B));

gl_PrimitiveID = gl_PrimitiveIDIn;
GPosition = VPosition[0];
GNormal = gFacetNormal;
gl_Position = vertex_0;
EmitVertex();

gl_PrimitiveID = gl_PrimitiveIDIn;
GPosition = VPosition[1];
GNormal = gFacetNormal;

gl_Position = vertex_1;
EmitVertex();

gl_PrimitiveID = gl_PrimitiveIDIn;
GPosition = VPosition[2];
GNormal = gFacetNormal;

gl_Position = vertex_2;
EmitVertex();

EndPrimitive();
}

The fragment Shader is now shown for a triangle of degree 1, which computes the shading and
color of each pixel.

=============== Fragment Shader : P1 triangle =============

#version 400

layout( location = 0 ) out vec4 FragColor;

struct LightInfo {
vec4 Position; // Light position in eye coords.
vec3 Intensity; // A,D,S intensity
};

uniform LightInfo Light;

struct MaterialInfo {
vec3 Ka; // Ambient reflectivity
vec3 Kd; // Diffuse reflectivity
vec3 Ks; // Specular reflectivity
float Shininess; // Specular shininess factor
};

uniform MaterialInfo Material;

in vec3 GPosition;
in vec3 GNormal;

/*--------------------------------*/
/* standard ADS */
/*--------------------------------*/
vec3 phongModel( vec3 pos, vec3 norm )
{
vec3 s = normalize(vec3(Light.Position) - pos);
vec3 v = normalize(-pos.xyz);
vec3 r = reflect( -s, norm );
vec3 ambient = Light.Intensity * Material.Ka;
float sDotN = max( dot(s,norm), 0.0 );

vec3 diffuse = Light.Intensity * Material.Kd * sDotN;


vec3 spec = vec3(0.0);
if( sDotN > 0.0 )
spec = Light.Intensity * Material.Ks *
pow( max( dot(r,v), 0.0 ), Material.Shininess );

return ambient + diffuse + spec;


}

void main()
{
FragColor = vec4( phongModel(GPosition, GNormal ), 1.0 );
}

For more details, the motivated reader will be able to consult reference [Sellers et al. 2014].
Here, we illustrate the few important points to show the rendering mechanics using Shaders. For
each type of Shader, there is a declaration of input variables, provided by the user in the case of
the first Shader or computed by the previous Shader, if the sequence vertex Shader → geometry
Shader → fragment Shader is followed. The vertex array is usually transferred from the CPU to the
GPU and corresponds to the coordinates of the vertices of the triangle list to be displayed. The
vertex Shader creates the variables VPosition and gl_Position, which are therefore the input variables
of the geometry Shader. The physical “coordinates” VPosition are used to calculate the normal
of each triangle in order to have a flat-face rendering (flat shading) without interpolation of the normals.
It is important to see that, in the geometry Shader, the system gives us the primitives (triangles)
and assembles the previously defined variables into three vertices. On output, triangles are also
created by associating the same unique normal with each triangle vertex. The fragment Shader only uses
this unique normal (interpolated at the pixel level) to compute the shading based on the physical
characteristics of the material. In all Shaders, the uniform variables correspond to scene data or
user data. For each type of rendering, these variables can be modified or adjusted.
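As an aside, the ADS computation performed by phongModel above is easy to mirror on the CPU. The following Python sketch uses our own function and variable names, with scalar reflectivities instead of vec3 components for brevity; it reproduces the same ambient + diffuse + specular combination:

```python
# A minimal CPU-side sketch of the ADS (Phong) model used in the fragment
# Shader above; scalar reflectivities replace the vec3 ones for brevity.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = dot(a, a) ** 0.5
    return tuple(x / n for x in a)

def reflect(i, n):
    # GLSL-style reflection of incident vector i about unit normal n
    d = dot(n, i)
    return tuple(ix - 2.0 * d * nx for ix, nx in zip(i, n))

def phong_model(pos, nrm, light_pos, intensity, ka, kd, ks, shininess):
    s = normalize(tuple(l - p for l, p in zip(light_pos, pos)))
    v = normalize(tuple(-p for p in pos))
    r = reflect(tuple(-x for x in s), nrm)
    s_dot_n = max(dot(s, nrm), 0.0)
    ambient = intensity * ka
    diffuse = intensity * kd * s_dot_n
    spec = intensity * ks * max(dot(r, v), 0.0) ** shininess if s_dot_n > 0.0 else 0.0
    return ambient + diffuse + spec
```

With the normal facing the light, the three terms add up; when the light is behind the surface, only the ambient term remains.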

In the P2 case, the graphics pipeline has two additional Shaders that are used to control
the tessellation level (see trip2.tcs) using the estimator previously described. The pointwise
evaluation of normals allows a smooth rendering to be obtained. Normals are interpolated in the
fragment Shader at the pixel level, unlike in the P1 case.

In the case of a P2 triangle, the tessellation control Shader has the shape hereafter. This
Shader calculates the parameters of the tessellation level based on the gap between the middle
control point and the midpoint of the chord associated with the (curved) edge.

==== Tessellation Control Shader : P2 triangle trip2.tcs ========

#version 400

// number of CPs in patch
layout (vertices = 6) out;

uniform int tess_level ; // controlled by keyboard buttons

float distanceToEdge(vec3 P20, vec3 P02, vec3 P11)
{
if (length(P20 - P02) < 1.e-8 )
return (length((0.5*(P20 + P02) - P11))
/(length(P20 - P11) + length(P11 - P02)));
else
return (length((0.5*(P20 + P02) - P11))/(length(P20 - P02)));
}

void main ()
{
gl_out[gl_InvocationID].gl_Position =
gl_in[gl_InvocationID].gl_Position;

//--- control points
vec3 P200 = gl_in[0].gl_Position.xyz;
vec3 P020 = gl_in[1].gl_Position.xyz;
vec3 P002 = gl_in[2].gl_Position.xyz;
vec3 P110 = gl_in[3].gl_Position.xyz;
vec3 P011 = gl_in[4].gl_Position.xyz;
vec3 P101 = gl_in[5].gl_Position.xyz;

int t0 = min(1 + int(5*tess_level*distanceToEdge(P020, P002, P011)),32);
int t1 = min(1 + int(5*tess_level*distanceToEdge(P002, P200, P101)),32);
int t2 = min(1 + int(5*tess_level*distanceToEdge(P200, P020, P110)),32);

// Calculate the tessellation levels
gl_TessLevelOuter[0] = t0; // times to subdivide first side
gl_TessLevelOuter[1] = t1; // times to subdivide second side
gl_TessLevelOuter[2] = t2; // times to subdivide third side
gl_TessLevelInner[0] = min(max(t0,max(t1,t2)),32);
// number of nested primitives to generate
}
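To make the estimator concrete, here is a Python transcription of distanceToEdge and of the outer-level computation; the factor 5 and the clamp at 32 are taken from the Shader above, while the function names are ours:

```python
# Python mirror of the trip2.tcs estimator: the level of an edge grows with
# the gap between the middle control point P11 and the chord midpoint.

def _length(a):
    return sum(x * x for x in a) ** 0.5

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def distance_to_edge(p20, p02, p11):
    mid = tuple(0.5 * (x + y) for x, y in zip(p20, p02))
    gap = _length(_sub(mid, p11))
    chord = _length(_sub(p20, p02))
    if chord < 1e-8:  # degenerate chord: normalize by the control polygon
        return gap / (_length(_sub(p20, p11)) + _length(_sub(p11, p02)))
    return gap / chord

def outer_level(p20, p02, p11, tess_level):
    return min(1 + int(5 * tess_level * distance_to_edge(p20, p02, p11)), 32)
```

A straight edge (P11 at the chord midpoint) yields the minimum level 1 whatever tess_level; a strongly curved edge saturates at 32.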

Still for a P2 triangle, the tessellation evaluation Shader has the form hereafter. We recognize the
six control points, PIJK alias $P_{ijk}$, and the six Bernstein polynomials, BIJK alias $B^2_{ijk}(u, v, w)$.
This Shader computes the normal at the pair (u, v) from the two derivatives in u and v.

======== Tessellation Evaluation Shader : P2 triangle trip2.tes ========

#version 400

// triangles
layout (triangles, equal_spacing, ccw) in;

out vec3 VPosition;
out vec3 VNormal;

uniform mat3 NormalMatrix;
uniform mat4 ModelViewMatrix;
uniform mat4 MVP;
uniform mat4 ViewportMatrix;
uniform int tess_level;
uniform float shrink;

void main ()
{
//--- barycentric (with shrink)
float us3 = 1.0/3.0;
float u = us3 + shrink*(gl_TessCoord.x - us3);
float v = us3 + shrink*(gl_TessCoord.y - us3);
float w = 1.0 - u - v;

//--- control points
vec3 P200 = gl_in[0].gl_Position.xyz;
vec3 P020 = gl_in[1].gl_Position.xyz;
vec3 P002 = gl_in[2].gl_Position.xyz;
vec3 P110 = gl_in[3].gl_Position.xyz;
vec3 P011 = gl_in[4].gl_Position.xyz;
vec3 P101 = gl_in[5].gl_Position.xyz;

//--- Bernstein
float B200 = u*u;
float B020 = v*v;
float B002 = w*w; //(1-u-v)*(1-u-v)
float B110 = 2*u*v;
float B011 = 2*v*w; // 2*v*(1-u-v)
float B101 = 2*u*w; // 2*u*(1-u-v)

vec3 pos = B200*P200 + B020*P020 + B002*P002 + B110*P110 + B011*P011 +
B101*P101;

//--- compute normal
vec3 du = 2*u*P200 + 2*(u+v-1)*P002 + 2*v*P110 - 2*v*P011 +
(2-4*u-2*v)*P101;
vec3 dv = 2*v*P020 + 2*(u+v-1)*P002 + 2*u*P110 + (2-2*u-4*v)*P011 -
2*u*P101;

VNormal = NormalMatrix*normalize(cross(du,dv));
VPosition = vec3(ModelViewMatrix * vec4(pos,1.0));
gl_Position = MVP * vec4(pos,1.0);
}
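The evaluation performed by trip2.tes can be checked on the CPU. The sketch below (Python, our naming; control points ordered as in the Shader) computes the position and the unnormalized normal of a P2 Bézier triangle; for a straight triangle with mid-edge control points, it must reproduce the linear interpolation and a constant vertical normal:

```python
# CPU-side check of the P2 evaluation done in trip2.tes: position from the
# six Bernstein polynomials, normal from the two partial derivatives.

def p2_point(cp, u, v):
    # cp = (P200, P020, P002, P110, P011, P101), same order as the Shader
    w = 1.0 - u - v
    B = (u * u, v * v, w * w, 2 * u * v, 2 * v * w, 2 * u * w)
    return tuple(sum(b * p[i] for b, p in zip(B, cp)) for i in range(3))

def p2_normal(cp, u, v):
    P200, P020, P002, P110, P011, P101 = cp
    du = tuple(2*u*P200[i] + 2*(u+v-1)*P002[i] + 2*v*P110[i]
               - 2*v*P011[i] + (2-4*u-2*v)*P101[i] for i in range(3))
    dv = tuple(2*v*P020[i] + 2*(u+v-1)*P002[i] + 2*u*P110[i]
               + (2-2*u-4*v)*P011[i] - 2*u*P101[i] for i in range(3))
    return (du[1]*dv[2] - du[2]*dv[1],
            du[2]*dv[0] - du[0]*dv[2],
            du[0]*dv[1] - du[1]*dv[0])

# straight P2 triangle (0,0,0)-(1,0,0)-(0,1,0), mid-edge control points
flat = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
        (0.5, 0.0, 0.0), (0.5, 0.5, 0.0), (0.0, 0.5, 0.0))
```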

4.4. Some examples

In this section, a few examples are shown in order to give an idea of what one is able to
address and to appreciate the nature of the obtained renderings. Beyond the classic meshes of
degree 1 (straight edges), we have examples where the meshes are of high degree. The next
chapter will show other examples (and fields of solutions).

The first figures, Figures 4.19–4.21, illustrate some of the possibilities offered by the software
to look at different aspects of a mesh (triangles of degree 1 for a surface, tetrahedra of degree 1 in
a volume). This is the mesh of a box around half an airplane. The mesh of the surface comprises
97,190 triangles (first two sets of figures), and that of the volume contains 4,740,567 tetrahedra
(last set of figures). The goal here is to show that with a relatively small number of (judicious)
functionalities, a mesh can be finely analyzed.

The automatic view is obtained without user intervention: the software is simply launched by
indicating the name of the mesh file. Based on this display, users take control. Here, they rotate
the object, select a face, destroy the boundary component thus identified and make the airplane
appear. When zooming in, they get closer to the aircraft, see the different surface components
(referred to by a number and thereby a color) and then draw the mesh of the aircraft surface.
Pointing to one face of the reactor and centering the view on this face, one gets closer to the
object to see its mesh in filling-face mode and then in shrink mode. Finally, from a face and in
minimesh mode, we look in detail at the elements of the successive crowns linked to this germ.

Figures 4.21 and 4.22 address the solid mesh of the box. The only means to visually assess
such a mesh is to make cuts. Therefore, a cutting plane is defined, toward the tail, and then it
is moved toward the wing. It is thus easy to observe how the mesh evolves from the boundary
toward the interior of the domain. The fluidity of displaying the elements, in hedgehog mode, is
assured because, while the cutting plane is being moved, these elements are not displayed.

Figures 4.23 and 4.24 concern a hybrid mesh composed of 311,326 tetrahedra, 52,944 pyramids
and 89,468 hexahedra. The mesh of this mechanical part is almost uniform in size and the
part is aligned with the canonical axes. The interior, that is, at a certain distance from the
boundaries, is covered by hexahedra; in the neighborhood of the boundaries, especially when curved,
tetrahedra are used, and the connection between these two types of elements is ensured by means
of pyramids.

We continue with examples leaving aside meshes with straight edges (for which there are
also illustrations in Volume 2 of the present work) to move to curved meshes. We are going to
be able to evaluate methods specific to this case, namely the methods described above. The first

Figure 4.19. On top, automatic view (on the left), rotation, selection of one face of the
boundary, removal of the associated component, the aircraft appears at the bottom of the box
(on the right). At the bottom, enlargement of the aircraft, connected components of the surface
(on the left) and mesh of the aircraft surface (on the right)

example (Figure 4.25) is an object whose surface, defined by an analytical function, is meshed
with triangles of degree 3. By means of a cut, the graphic rendering of the triangles resulting
from this cut can be seen on the left (in hedgehog mode); this rendering was obtained via the
tessellation shown on the right-hand side.

The following figures are relative to computations in fluid mechanics, the mesh domain is
the complement to an aircraft in a box. Figure 4.26 shows the configuration. The airplane (its
surface) is meshed with second-degree triangles. Given the size of the aircraft, one can
imagine that the mesh of the computational domain is already of a reasonable size, namely,
1,807,291 tetrahedra. The surface mesh includes about 100,000 triangles.

Figures 4.27 and 4.28 show the tessellations used to be able to properly represent the curved
regions. Different levels of refinement can be seen (this is the parameter t provided by the user).

Several surface meshes follow for a torus and a shuttle. From left to right, the elements are of
degrees 1, 2 and then 3. The drawings, with their shading (Chapter 5), enable a good appreciation
of the meshes. It can be seen (it is visually verified) in particular that the geometry is correctly
captured with fewer and fewer nodes and/or elements as the degree of the triangles increases. The
geometry of the torus is given by two NURBS of degree 5 with 6 control points and 12 knots.

Figure 4.20. On top, centered view on one reactor face and then enlargement (on the left),
faces shrinking (on the right). At the bottom, selection of a face and “minimesh” mode with
this face as germ

Figure 4.21. Solid mesh evaluated via a few cuts. The initial cut is located toward the aircraft
tail

The shuttle is defined by two third-degree NURBS with, respectively, 8 (13) control points and
12 (17) knots.

Compared to the standard display of a P1 mesh (flat shading), when rendering a surface
mesh of higher order, the normals at each point (pixel) are exactly evaluated on the GPU using the
appropriate Bézier form (see Chapter 5 for more details).

Figure 4.22. The cut progresses toward the wing

Figure 4.23. Some views of a hybrid mesh

Figure 4.24. Some views of a hybrid mesh (continuation)



And to finish, here is a game by means of a question (Figure 4.31): “what is this mesh?” The
answer can be found at the end of this chapter.

Figure 4.25. Mesh of degree 3 (cross-section). On the left, the rendering obtained with the
tessellation shown on the right

Figure 4.26. The airplane in its surrounding box


∗ ∗

In this chapter, dedicated to the visualization of meshes, methods have been presented whose
scope of application is much broader than this specific framework.

Figure 4.27. On the left, the surface mesh of the aircraft; on the right, the tessellation of a
certain associated level

Figure 4.28. Two other levels of finer tessellation for the same example. We observe the
refinement of the tessellations according to the curvature

Figure 4.29. A torus meshed at degrees 1, 2 and 3. The numbers of nodes and elements are
successively 2,787 and 5,486, 616 and 298, and then 252 and 48

On the visualization side, we confirm the contribution of the representation in Bézier form
of high-order elements as well as the use of De Casteljau algorithms to develop computational
methods for intersections and the construction of cutting planes, which are questions that fall far

beyond this visualization context. We have seen that using De Casteljau evaluation and subdivision
algorithms allows one to reformulate geometrical problems relating to curved entities as
problems involving only straight entities, thereby simplifying the difficulties by reducing the
problems to classic situations.

Figure 4.30. A shuttle meshed at degrees 1, 2 and 3. The numbers of nodes and elements are
successively 1,647 and 3,123, 1,690 and 743, and then 1,791 and 359

Figure 4.31. Wireframe mode plot of a mesh

We have shown the flexibility offered by the use of programmable Shaders inserted into the graphic
pipeline. We have seen that this allows for an accurate and fast rendering of curved elements of
any degree and even, quite simply, of elements other than simplexes of degree 1, such as warped
quadrilaterals of degree 1×1 (whose edges are nonetheless straight segments). The visualization

methods based on simple subdivisions of the elements are thus replaced by methods that allow a
singularly more faithful rendering.

In Chapter 5, we will look at how to render solution fields of a certain degree carried by
meshes of a certain degree (possibly different). Once again, we are going to verify that the Bézier
formalism and De Casteljau algorithms are key players when it comes to addressing the issues
that arise, by offering faithful solutions.

Answer to the question on page 140


This is a mesh reduced to a single element. This element is a four-dimensional cube
and its faces are represented. The latter are cubes, therefore topologically three-dimensional;
with the choice adopted to render the fourth dimension, the model of the
figure is obtained. The initial cube has 16 vertices (that is, 2^4), 32 edges (that is, 2 × 12 +
8, which is twice the number of edges of the cube of the lower dimension plus the number
of vertices of that same dimension) and eight faces (that is, 2 × 4). In the figure,
seven faces can be directly seen while the eighth “surrounds” the whole (note that
some vertices are duplicated).
Chapter 5

Visualization of a Solution Field Related to a High-Degree Mesh

We have a mesh of degree 1 (or 1 × 1 in the quadrilateral case) or, more generally, of any degree,
and the solution computed on this mesh, which is itself of degree 1 (or 1 × 1) or of any degree.
In this chapter, it will be shown how this (discrete) solution can be represented, in particular
when its degree is not 1, for example when it is of degree 1 × 1 or of any degree.
These cases are already covered by some software programs but, even just at degree 1, the
drawing method is not always documented.

The functions that we are considering are scalar (temperatures, pressures, etc.), vector
(displacements, normals, velocities, etc.) or tensor functions (stress tensors, reference frames, metric
fields, etc.). Depending on these types, the representation sought may take different
forms. It seems natural to visualize a scalar field via colors or via curves (or surfaces) of
isovalues, a vector field via vectors (small arrows) and, in the same way, a tensor field via (small)
frames of reference or, for a metric field for example, by drawing ellipses or ellipsoids. In the
surface case, we can use normals to take shading into account, so this case is also discussed in
this chapter.

These functions are known discretely, usually at the nodes of the mesh being considered, but
may also be known element by element (for example, one value per element, which may be supposed to
be associated with its barycenter, or several values associated with a few points of the element);
in this case, a given vertex has a value of the function for each of the elements that constitute
its ball.

Moreover, given the spatial dimension of the mesh (depending on whether it is a plane, a surface
or a solid) and the fact that the display is necessarily carried by a (topologically) two-dimensional
entity, in three dimensions it is generally necessary to resort to cuts, unless the
results are represented only on the surface of the object.

Meshing, Geometric Modeling and Numerical Simulation 3: Storage, Visualization and In Memory Strategies,
First Edition. Paul Louis George, Frédéric Alauzet, Adrien Loseille and Loïc Maréchal.
© ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

The point here is also (see Chapter 4) not about competing with big (commercial) software
programs that propose many possibilities for visualizing a field1; nor is the point to take a
merely aesthetic look at the results obtained, but rather to represent the functions as accurately
as possible, that is to say, as they have been calculated. This effectively excludes the use of
artifices2 (using averages, for example) that, to a certain extent, make it impossible to see what
(the behavior of) the function really is. Another aspect that concerns us is to offer the ability to
observe (for analysis) the function at the level of certain details (in the neighborhood of a vertex,
etc.).

Current methods are essentially based on two approaches. In the first, the elements of the initial
mesh are cut into linear elements (for triangles, or of degree 1 × 1 for quadrilaterals) and these
subelements are then drawn. Two problems emerge: the required memory resource
(related to the decomposition level) and interactivity (since the elements that can be seen are not
the elements of the mesh). Another approach is based on using ray casting techniques to find the
value (color) of each pixel. This technique leads to solving nonlinear problems that, as such, are
time-consuming. We are therefore going to propose a different approach to try to avoid the above
drawbacks, regarding the memory resource, the reliability of the representation and the execution time.
For the latter point, the displaying speed is especially important in the case of movements or cuts
(this concern was already mentioned in the previous chapter) to ensure image fluidity (and to
avoid having to wait3 for the image to be rebuilt). This will (may) promote the use of acceleration
techniques as well as the use of the GPU or of a certain amount of parallelism (current machines
all have several cores). Again, our aim is to be able to visualize large-sized meshes on standard
computers.

In this chapter, we begin with the description of some methods that make it possible to cut
an element into subelements, geometrically speaking, and then to build the solution field induced
by this cutting. We look first at a method based on a (possibly recursive) subdivision. Then, we
discuss methods based on a local element tessellation, this cutting being standard or adaptive.
Once this is done, we indicate what the possibilities of a few graphic primitives are (adopting
the framework of the OpenGL 4.0 standard) and show how to use these possibilities
with the previously built subdivisions (tessellations) as input data.

5.1. Element recursive subdivision

The objective is to subdivide an element into subelements while keeping the same degree. It
should be noted, as a curiosity, that a d-degree quadrilateral (actually d × d) can be subdivided
into two triangles of degree 2d and that, for the control points defined4 for this purpose, the
covered polynomial space is the same one. As seen from the geometric patch perspective, the
initial patch and the two triangular patches are identical.

1. With the caveat that, at the moment of writing these lines, the processing of high-degree functions, although proposed here and there, is not entirely clear.
2. Which are perfectly legitimate, or even essential in other scientific disciplines.
3. We could always go for a cup of coffee, but it should not take the whole day.
4. For a quadrilateral of degree 1 × 1, with control points $P_1, P_2, P_3, P_4$, one will have
$P_1, \frac{P_1+P_2}{2}, P_2, \frac{P_1+P_3}{2}, P_4, \frac{P_1+P_4}{2}$ and $P_2, \frac{P_3+P_2}{2}, P_3, \frac{P_4+P_3}{2}, P_4, \frac{P_1+P_3}{2}$ for the two degree 2 triangles
of one of the two (topological) possible cuts (here with the choice of diagonal $P_2 P_4$).
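Footnote 4's diagonal control point $\frac{P_1+P_3}{2}$ can be checked numerically: along the diagonal $P_2P_4$, the bilinear patch coincides with the quadratic Bézier curve having that middle control point. A quick sketch under our own conventions, with the corners $P_1, P_2, P_3, P_4$ placed at parameters $(0,0), (1,0), (1,1), (0,1)$:

```python
# Numerical check of footnote 4: the diagonal P2-P4 of a bilinear patch is
# the quadratic Bezier curve with middle control point (P1 + P3)/2.

def bilinear(P1, P2, P3, P4, u, v):
    # corners at parameters (0,0), (1,0), (1,1), (0,1) respectively
    return tuple((1-u)*(1-v)*P1[i] + u*(1-v)*P2[i]
                 + u*v*P3[i] + (1-u)*v*P4[i] for i in range(3))

def quad_bezier(A, C, B, t):
    # degree-2 curve with endpoints A, B and middle control point C
    return tuple((1-t)**2 * A[i] + 2*t*(1-t)*C[i] + t**2 * B[i] for i in range(3))

P1, P2, P3, P4 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.0, 1.0, 0.0)
C = tuple(0.5 * (a + b) for a, b in zip(P1, P3))   # = (P1 + P3)/2
```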

• Element subdivision

With the uniform or non-uniform subdivision of a first-degree element being trivial, we look
directly at that of an element of any degree, considering, first, the triangles. Thereby, consider a
triangle of degree d. Its expression in Bézier formalism (Volume 1, Chapter 2, among others) is:
$$\sigma(u, v, w) = \sum_{i+j+k=d} B^d_{ijk}(u, v, w)\, P_{ijk} \qquad [5.1]$$
with $B^d_{ijk}$ the Bernstein polynomials of degree d, $P_{ijk}$ the control points and $(u, v, w)$ the
barycentric coordinates ($0 \le u, v, w \le 1$ and $u + v + w = 1$).
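Formula [5.1] is straightforward to evaluate. The sketch below (Python, our own naming) stores the control points in a dictionary keyed by $(i, j, k)$; with the edge control points placed at the edge midpoints, a degree-2 triangle must reproduce the linear map $uA + vB + wC$ (linear precision of the Bernstein basis):

```python
from math import factorial

# Direct evaluation of [5.1]: sigma(u,v,w) = sum B^d_ijk(u,v,w) P_ijk,
# with ctrl a dict mapping (i, j, k), i+j+k = d, to a control point.

def bezier_triangle(ctrl, d, u, v, w):
    pt = [0.0, 0.0, 0.0]
    for (i, j, k), P in ctrl.items():
        b = factorial(d) / (factorial(i) * factorial(j) * factorial(k)) \
            * u**i * v**j * w**k
        for c in range(3):
            pt[c] += b * P[c]
    return tuple(pt)

A, B, C = (0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (0.0, 0.0, 4.0)
mid = lambda X, Y: tuple(0.5 * (x + y) for x, y in zip(X, Y))
straight = {(2, 0, 0): A, (0, 2, 0): B, (0, 0, 2): C,
            (1, 1, 0): mid(A, B), (0, 1, 1): mid(B, C), (1, 0, 1): mid(A, C)}
```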

Figure 5.1. Subdivision into four of a triangle of arbitrary degree by introduction of a vertex
per edge, the midpoint appears to be a natural choice but this is not mandatory

The idea is to subdivide a triangle of degree d into four based on one point per edge5. These three
points are defined through their barycentric coordinates (Figure 5.1) via three parameters (one
per edge) $0 < \alpha, \beta, \gamma < 1$. Based on this definition, we want to express what the control points
of the four d-degree subtriangles thus formed are. It will be shown how to find a parameterization
of these elements. First, we look at the subtriangle whose vertices are the images of the triplets
$\{(1, 0, 0); (\gamma, 1-\gamma, 0); (\beta, 0, 1-\beta)\}$. If one denotes by $\sigma^1(u, v, w)$ this element, it can be written
that:
$$\sigma^1(u, v, w) = \sigma(u + \beta w + \gamma v, (1 - \gamma)v, (1 - \beta)w),$$
with $0 \le u, v, w \le 1$ and $u + v + w = 1$.

5. A conventional De Casteljau algorithm can only subdivide a triangle into three by way of the appropriate
construction of an internal point (Volume 1, Chapter 9), with the degenerate case where the point is on an
edge only allowing a subdivision into two triangles.

To obtain the expression of $\sigma^1(u, v, w)$, the above formula is expanded, and we have
successively:
$$\sigma^1(u, v, w) = \sum_{i+j+k=d} B^d_{ijk}(u + \beta w + \gamma v, (1-\gamma)v, (1-\beta)w)\, P_{ijk}$$
$$= \sum_{i+j+k=d} \frac{d!}{i!j!k!} \{u + \beta w + \gamma v\}^i \{(1-\gamma)v\}^j \{(1-\beta)w\}^k\, P_{ijk}.$$
However, as one has:
$$(u + \beta w + \gamma v)^i = \sum_{i_1+j_1+k_1=i} \frac{i!}{i_1!j_1!k_1!}\, u^{i_1} \beta^{k_1} w^{k_1} \gamma^{j_1} v^{j_1},$$
one gets:
$$\sigma^1(u, v, w) = \sum_{i+j+k=d} \frac{d!}{i!j!k!} (1-\gamma)^j v^j (1-\beta)^k w^k
\left( \sum_{i_1+j_1+k_1=i} \frac{i!}{i_1!j_1!k_1!}\, u^{i_1} \beta^{k_1} w^{k_1} \gamma^{j_1} v^{j_1} \right) P_{ijk}.$$
The combinations relative to the two sums are then concatenated into one single sum, that is:
$$\sigma^1(u, v, w) = \sum_{i_1+j_1+k_1+j+k=d} \frac{d!}{i_1!j_1!k_1!j!k!}\, P_{(i_1+j_1+k_1)jk}\, (1-\gamma)^j v^j (1-\beta)^k w^k\, u^{i_1} \beta^{k_1} w^{k_1} \gamma^{j_1} v^{j_1}$$
$$= \sum_{i_1+j_1+k_1+j+k=d} \left\{ \frac{d!}{i_1!(j+j_1)!(k+k_1)!}\, u^{i_1} v^{j+j_1} w^{k+k_1} \right\}
\left\{ \frac{(j+j_1)!(k+k_1)!}{j!j_1!k!k_1!}\, P_{(i_1+j_1+k_1)jk}\, \beta^{k_1} \gamma^{j_1} (1-\gamma)^j (1-\beta)^k \right\}$$
$$= \sum_{i_1+J+K=d} B^d_{i_1JK}(u, v, w)
\left( \sum_{\substack{j+j_1=J \\ k+k_1=K}} \frac{J!K!}{j!j_1!k!k_1!}\, P_{(i_1+j_1+k_1)jk}\, \beta^{k_1} \gamma^{j_1} (1-\gamma)^j (1-\beta)^k \right)$$
$$= \sum_{i_1+J+K=d} B^d_{i_1JK}(u, v, w)\, P^1_{i_1JK},$$
with $J = j + j_1$ and $K = k + k_1$, which is an expression that is preferably written, by setting
$I = i_1$, as:
$$\sigma^1(u, v, w) = \sum_{I+J+K=d} B^d_{IJK}(u, v, w)\, P^1_{IJK}. \qquad [5.2]$$
The control points of this subtriangle therefore read:
$$P^1_{IJK} = \sum_{\substack{i_1=I \\ j+j_1=J \\ k+k_1=K}} \frac{J!K!}{j!j_1!k!k_1!}\, (1-\beta)^k \beta^{k_1} (1-\gamma)^j \gamma^{j_1}\, P_{(i_1+j_1+k_1)jk}.$$
It should be noted that $\dfrac{J!K!}{j!j_1!k!k_1!} (1-\beta)^k \beta^{k_1} (1-\gamma)^j \gamma^{j_1} = B^K_{k_1}(\beta)\, B^J_{j_1}(\gamma)$.
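The control-point formula for $P^1_{IJK}$ above can be validated numerically: building the $P^1_{IJK}$ from it and evaluating the resulting patch must give exactly $\sigma(u + \beta w + \gamma v, (1-\gamma)v, (1-\beta)w)$. A sketch at degree 2, with our own naming:

```python
from math import factorial

def bern(d, i, j, k, u, v, w):
    return factorial(d) / (factorial(i)*factorial(j)*factorial(k)) * u**i * v**j * w**k

def eval_tri(ctrl, d, u, v, w):
    return tuple(sum(bern(d, i, j, k, u, v, w) * P[c]
                     for (i, j, k), P in ctrl.items()) for c in range(3))

def sub1_ctrl(ctrl, d, beta, gamma):
    # control points P^1_IJK of the first subtriangle, from the formula above
    out = {}
    for I in range(d + 1):
        for J in range(d + 1 - I):
            K = d - I - J
            s = [0.0, 0.0, 0.0]
            for j in range(J + 1):
                j1 = J - j
                for k in range(K + 1):
                    k1 = K - k
                    coef = (factorial(J) * factorial(K)
                            / (factorial(j)*factorial(j1)*factorial(k)*factorial(k1))
                            * (1-beta)**k * beta**k1 * (1-gamma)**j * gamma**j1)
                    P = ctrl[(I + j1 + k1, j, k)]
                    for c in range(3):
                        s[c] += coef * P[c]
            out[(I, J, K)] = tuple(s)
    return out

ctrl = {(2,0,0): (0.0, 0.0, 0.0), (0,2,0): (2.0, 0.0, 0.0), (0,0,2): (0.0, 2.0, 0.0),
        (1,1,0): (1.0, 0.0, 0.5), (0,1,1): (1.0, 1.0, -0.5), (1,0,1): (0.0, 1.0, 0.3)}
```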

With the same notations, we look at the subtriangles associated with the triplets $\{(0, 1, 0);
(0, \alpha, 1-\alpha); (\gamma, 1-\gamma, 0)\}$ and $\{(0, 0, 1); (\beta, 0, 1-\beta); (0, \alpha, 1-\alpha)\}$, which gives:
$$\sigma^2(u, v, w) = \sigma(\gamma u, v + (1-\gamma)u + \alpha w, (1-\alpha)w)
= \sum_{I+j_1+K=d} B^d_{Ij_1K}(u, v, w)\, P^2_{Ij_1K} \ \text{ denoted as } \sum_{I+J+K=d} B^d_{IJK}(u, v, w)\, P^2_{IJK},$$
$$\sigma^3(u, v, w) = \sigma(\beta u, \alpha v, w + (1-\beta)u + (1-\alpha)v)
= \sum_{I+J+k_1=d} B^d_{IJk_1}(u, v, w)\, P^3_{IJk_1} \ \text{ denoted as } \sum_{I+J+K=d} B^d_{IJK}(u, v, w)\, P^3_{IJK},$$
whose control points are expressed by:
$$P^2_{IJK} = \sum_{\substack{i+i_1=I \\ j_1=J \\ k+k_1=K}} \frac{K!I!}{k!k_1!i!i_1!}\, (1-\alpha)^k \alpha^{k_1} \gamma^i (1-\gamma)^{i_1}\, P_{i(i_1+j_1+k_1)k},$$
$$P^3_{IJK} = \sum_{\substack{i+i_1=I \\ j+j_1=J \\ k_1=K}} \frac{I!J!}{i!i_1!j!j_1!}\, \alpha^j (1-\alpha)^{j_1} \beta^i (1-\beta)^{i_1}\, P_{ij(i_1+j_1+k_1)}.$$

It remains to look at the last subtriangle, which corresponds to the three triplets $\{(0, \alpha, 1-\alpha); (\beta, 0, 1-\beta); (\gamma, 1-\gamma, 0)\}$, and it is written as:
$$\sigma^4(u, v, w) = \sigma(\beta v + \gamma w, \alpha u + (1-\gamma)w, (1-\alpha)u + (1-\beta)v).$$

As the control points of the edges are already known (through the edges of the other three subtriangles),
in principle the only thing which needs to be established is what the internal control
points are. However, we resume the whole calculation by expanding the formula. This gives
successively:
$$\sigma^4(u, v, w) = \sum_{i+j+k=d} B^d_{ijk}(\beta v + \gamma w, \alpha u + (1-\gamma)w, (1-\alpha)u + (1-\beta)v)\, P_{ijk}$$
$$= \sum_{i+j+k=d} \frac{d!}{i!j!k!} (\beta v + \gamma w)^i (\alpha u + (1-\gamma)w)^j ((1-\alpha)u + (1-\beta)v)^k\, P_{ijk}$$
$$= \sum_{i+j+k=d} \frac{d!}{i!j!k!}
\left( \sum_{i_1+i_2=i} \frac{i!}{i_1!i_2!}\, \beta^{i_1} \gamma^{i_2} v^{i_1} w^{i_2} \right)
\left( \sum_{j_1+j_2=j} \frac{j!}{j_1!j_2!}\, \alpha^{j_1} (1-\gamma)^{j_2} u^{j_1} w^{j_2} \right)
\left( \sum_{k_1+k_2=k} \frac{k!}{k_1!k_2!}\, (1-\alpha)^{k_1} (1-\beta)^{k_2} u^{k_1} v^{k_2} \right) P_{ijk}$$
$$= \sum_{i_1+i_2+j_1+j_2+k_1+k_2=d} \frac{d!}{i_1!i_2!j_1!j_2!k_1!k_2!}\,
\beta^{i_1} \gamma^{i_2} \alpha^{j_1} (1-\gamma)^{j_2} (1-\alpha)^{k_1} (1-\beta)^{k_2}\,
u^{j_1+k_1} v^{i_1+k_2} w^{i_2+j_2}\, P_{(i_1+i_2)(j_1+j_2)(k_1+k_2)}$$
$$= \sum_{I+J+K=d} B^d_{IJK}(u, v, w)
\left\{ \sum_{\substack{j_1+k_1=I \\ i_1+k_2=J \\ i_2+j_2=K}} \frac{I!J!K!}{i_1!i_2!j_1!j_2!k_1!k_2!}\,
\beta^{i_1} \gamma^{i_2} \alpha^{j_1} (1-\gamma)^{j_2} (1-\alpha)^{k_1} (1-\beta)^{k_2}\, P_{(i_1+i_2)(j_1+j_2)(k_1+k_2)} \right\}.$$
In the end, one finds:
$$\sigma^4(u, v, w) = \sum_{I+J+K=d} B^d_{IJK}(u, v, w)\, P^4_{IJK},$$
and the control points are expressed as:
$$P^4_{IJK} = \sum_{\substack{j_1+k_1=I \\ i_1+k_2=J \\ i_2+j_2=K}} \frac{I!J!K!}{i_1!i_2!j_1!j_2!k_1!k_2!}\,
\alpha^{j_1} (1-\alpha)^{k_1} \beta^{i_1} (1-\beta)^{k_2} \gamma^{i_2} (1-\gamma)^{j_2}\, P_{(i_1+i_2)(j_1+j_2)(k_1+k_2)}.$$

It should be noted that the most natural choice for the parameters is to set $\alpha = \beta = \gamma = \frac12$. To get
an idea of how the subdivision control points depend on the initial control points, take $d = 3$, retain the
value $\frac12$ for the parameters and detail the first subelement $\sigma^1(u, v, w)$. One then has,
for $I + J + K = 3$:
$$P^1_{IJK} = \sum_{\substack{i_1=I \\ j+j_1=J \\ k+k_1=K}} \frac{J!K!}{j!j_1!k!k_1!} \left(\frac12\right)^{3-I} P_{(i_1+j_1+k_1)jk},$$
which gives6:
$$P^1_{300} = P_{300}, \text{ the first vertex of the starting element,}$$
$$P^1_{210} = \frac{P_{300} + P_{210}}{2}, \text{ subdivision of the initial edge by a classic De Casteljau,}$$
$$P^1_{120} = \frac{P_{300}}{4} + \frac{P_{210}}{2} + \frac{P_{120}}{4}, \text{ idem,}$$
$$P^1_{030} = \frac{P_{300}}{8} + \frac{3P_{210}}{8} + \frac{3P_{120}}{8} + \frac{P_{030}}{8}, \text{ identical to the calculation of the middle node via } \sigma,$$

6. And the reader is invited to make a drawing to see how the initial control points contribute.

so nothing is surprising here. More interesting are the control points of the edge joining $P^1_{030}$
and $P^1_{003}$ and the central point. For the latter, we find:
$$P^1_{111} = \frac14 (P_{300} + P_{210} + P_{201} + P_{111}),$$
and for the edge (non-boundary of the initial element), we find:
$$P^1_{021} = \frac{P_{300}}{8} + \frac{P_{210}}{4} + \frac{P_{120}}{8} + \frac{P_{201}}{8} + \frac{P_{111}}{4} + \frac{P_{021}}{8},$$
$$P^1_{012} = \frac{P_{300}}{8} + \frac{P_{201}}{4} + \frac{P_{102}}{8} + \frac{P_{210}}{8} + \frac{P_{111}}{4} + \frac{P_{012}}{8}.$$
The last interesting point is $P^4_{111}$, the others being similar to the previous ones. One finds
$$P^4_{111} = \frac18 (P_{210} + P_{120} + P_{021} + P_{012} + P_{102} + P_{201} + 2P_{111})$$
as a solution, which seems natural (equal weights for the $P_{ijk}$ of the edges and a predominant
weight for $P_{111}$).
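The explicit edge combinations just listed are exactly what a De Casteljau subdivision of the cubic edge at $t = \frac12$ produces: the first-half control polygon is $P^1_{300}, P^1_{210}, P^1_{120}, P^1_{030}$. A scalar check with our own toy values:

```python
# The edge control points of the first subtriangle (alpha=beta=gamma=1/2)
# coincide with the first half of a De Casteljau subdivision at t = 1/2.

def decasteljau_pass(p, t=0.5):
    # one averaging pass on a control polygon
    return [(1 - t) * a + t * b for a, b in zip(p, p[1:])]

P300, P210, P120, P030 = 1.0, 4.0, 2.0, -3.0
q = decasteljau_pass([P300, P210, P120, P030])
r = decasteljau_pass(q)
s = decasteljau_pass(r)
first_half = [P300, q[0], r[0], s[0]]

listed = [P300,
          (P300 + P210) / 2,
          P300 / 4 + P210 / 2 + P120 / 4,
          P300 / 8 + 3 * P210 / 8 + 3 * P120 / 8 + P030 / 8]
```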

• Element recursive subdivision

Starting from the subelements above and applying, to each of them, the same process yields
a cutting of the initial element into elements of the same degree, and so on.

• Other elements

The quadrilateral case is resolved by directly using the De Casteljau formula7 (Volume 1,
Chapter 9) to evaluate the point $(u, v) = (\frac12, \frac12)$. This evaluation gives every control point
useful for the definition of the four subelements. Hexahedra are treated in the same way. As
an illustration, we shall explain the case of a quadrilateral of degree d as we did with triangles.
As indicated, the evaluation of an internal point, here the point $(u, v) = (\frac12, \frac12)$, gives the result.
Therefore, the following algorithm is unrolled:

7. One could follow the same method for triangles by directly expanding the Bernstein polynomials. The
case should simply be looked at in one dimension. For example, one will have $\sigma^1(u) = \sigma(\frac{u}{2})$.

Algorithm for the evaluation of the point $u = v = \frac12$ of a quadrilateral [5.3]
Initialization: $P^{0,0}_{ij} = P_{ij}$ for $i = 0, d$ and $j = 0, d$.
Do for $k = 0, d-1$:
– do for $i = 0, d-(k+1)$ and $j = 0, d$:
- $P^{k+1,0}_{ij} = \frac12 \big(P^{k,0}_{ij} + P^{k,0}_{i+1,j}\big)$;
– end Do for $i$ and $j$.
End Do for $k$.
Do for $l = 0, d-1$:
– do for $i = 0, d$ and $j = 0, d-(l+1)$:
- $P^{k,l+1}_{ij} = \frac12 \big(P^{k,l}_{ij} + P^{k,l}_{i,j+1}\big)$;
– end Do for $i$ and $j$.
End Do for $l$
$\Longrightarrow P^{d,d}_{00}$ is the solution.

Then the four subelements are defined through their control points:
$$P^1_{IJ} = P^{I,J}_{00}, \quad P^2_{IJ} = P^{d-I,J}_{I0}, \quad P^3_{IJ} = P^{d-I,d-J}_{IJ} \quad \text{and} \quad P^4_{IJ} = P^{I,d-J}_{0J};$$
thus, the first subelement is written as:
$$\sigma^1(u, v) = \sum_{I=0,d} \; \sum_{J=0,d} B^d_I(u)\, B^d_J(v)\, P^1_{IJ}.$$
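Algorithm [5.3] can be transcribed directly, keeping every intermediate stage so that the four subelement control nets can be read off afterward. A sketch on scalar control values (our own naming); the final value $P^{d,d}_{00}$ must match the direct tensor-product Bernstein evaluation at $(\frac12, \frac12)$:

```python
from math import comb

def decasteljau_half_grid(P, d):
    # stage[(k, l)][i][j] = P_ij^{k,l}: k averaging passes along i and l
    # along j, all at parameter 1/2 (algorithm [5.3], scalar values)
    stage = {(0, 0): P}
    for k in range(d):                      # passes along i
        cur = stage[(k, 0)]
        stage[(k + 1, 0)] = [[0.5 * (cur[i][j] + cur[i + 1][j])
                              for j in range(d + 1)] for i in range(d - k)]
    for k in range(d + 1):                  # then passes along j, each stage k
        for l in range(d):
            cur = stage[(k, l)]
            stage[(k, l + 1)] = [[0.5 * (cur[i][j] + cur[i][j + 1])
                                  for j in range(d - l)]
                                 for i in range(d + 1 - k)]
    return stage

def eval_half(P, d):
    # direct evaluation at (u, v) = (1/2, 1/2) for comparison
    return sum(comb(d, i) * comb(d, j) * P[i][j]
               for i in range(d + 1) for j in range(d + 1)) / 4**d

P = [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]   # P[i][j], d = 2
stage = decasteljau_half_grid(P, 2)
```

The subelement control points are then read off as indicated; for instance, the first net is obtained as stage[(I, J)][0][0].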

For a tetrahedron, it should be noted that the subdivision (even with the single parameter set to $\frac12$ so as to
cut through the edge midpoints) does not result in eight homothetic subelements.

The cases of the other elements (pyramids and prisms) are not addressed here.

5.2. Recursive subdivision of a solution field

Consider a solution field (scalar or not) known at the nodes of a degree-d mesh; the field has
itself been approximated to this degree. More precisely, if $f$ designates the underlying function,
on each element (we will take triangles), we have:
$$f(x, y) = \sum_i p_i(\hat{x}, \hat{y})\, f_i,$$
where, with the notations in Chapter 1 of Volume 1, $p_i$ refers to the $i$th shape function of degree $d$ and $f_i$
designates the value of $f$ as calculated at node $i$ of the triangle; finally, $f(x, y)$ is the value
of $f$ at the point $(x, y)$ of the triangle under consideration, this point being the image of the point
$(\hat{x}, \hat{y})$ of the reference element (it looks like a parameter space in the world of geometry). This
notation with the Lagrange polynomials and the values at the nodes is replaced by a notation in the
Bézier formalism. First, the above relation is rewritten (with obvious notations) as:
$$f(x, y) = \sum_{ijk} p_{ijk}(\hat{x}, \hat{y})\, f_{ijk},$$
and then one writes:
$$g(u, v, w) = \sum_{i+j+k=d} B^d_{ijk}(u, v, w)\, g_{ijk}, \qquad [5.4]$$

thus defining a function $g$ that is none other than the above function $f$. We explain the notation
by stating that $u, v$ and $w$ are the barycentric coordinates, that $g_{ijk}$ is the control value (known,
and calculated as explained in the example given hereafter) of index $ijk$ and that $g(u, v, w)$ is
the value of $g$ at the point of barycentric coordinates $(u, v, w)$, therefore that of $f$ at the point $(x, y)$.
It should be noted that $g_{ijk}$ coincides with $f_{ijk}$ if the triplet $ijk$ corresponds to a vertex of the
triangle and is calculated using the classic formulas (Volume 1, Chapter 3) if this triplet defines
a non-vertex node. This mechanism for shifting from $f_{ijk}$ to $g_{ijk}$ is exactly the same as the one
for shifting from a node to a control point and, likewise, the reverse shift, from $g_{ijk}$ to $f_{ijk}$, is
identical to the shift from a control point to a node.

To clarify these notions, consider a triangle of degree $d = 3$, with nodes $A_{ijk}$ and control
points $P_{ijk}$, and denote by $f_{ijk}$ the value of $f$ at $A_{ijk}$. We know that the nodes are written
according to the control points as:
$$A_{300} = P_{300}, \quad A_{210} = \frac{8P_{300} + 12P_{210} + 6P_{120} + P_{030}}{27},$$
$$A_{120} = \frac{P_{300} + 6P_{210} + 12P_{120} + 8P_{030}}{27} \quad \text{and finally} \quad A_{030} = P_{030},$$
for the first edge (and similar expressions for the other edges), while the central node is equal to:
$$A_{111} = \frac{1}{27}(P_{300} + P_{030} + P_{003}) + \frac{1}{9}(P_{210} + ... + P_{201}) + \frac{2}{9} P_{111},$$
and, conversely, the control points are expressed as functions of the nodes as:
$$P_{300} = A_{300}, \quad P_{210} = \frac{-5A_{300} + 18A_{210} - 9A_{120} + 2A_{030}}{6},$$
$$P_{120} = \frac{2A_{300} - 9A_{210} + 18A_{120} - 5A_{030}}{6} \quad \text{and} \quad P_{030} = A_{030},$$
for the edges and:
$$P_{111} = \frac{1}{3}(A_{300} + A_{030} + A_{003}) - \frac{3}{4}(A_{210} + ... + A_{201}) + \frac{9}{2} A_{111},$$
for the central point.
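These two sets of edge formulas are inverse to one another, which is easy to check with scalar values; the conversions also have linear precision, so that for control points in arithmetic progression, nodes and control points coincide. A quick sketch:

```python
# Round-trip check of the cubic edge conversions above (scalar values).

def nodes_from_ctrl(P300, P210, P120, P030):
    A210 = (8 * P300 + 12 * P210 + 6 * P120 + P030) / 27
    A120 = (P300 + 6 * P210 + 12 * P120 + 8 * P030) / 27
    return P300, A210, A120, P030

def ctrl_from_nodes(A300, A210, A120, A030):
    P210 = (-5 * A300 + 18 * A210 - 9 * A120 + 2 * A030) / 6
    P120 = (2 * A300 - 9 * A210 + 18 * A120 - 5 * A030) / 6
    return A300, P210, P120, A030
```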

It is therefore considered, with the same example, that the values of the function have been
calculated at the nodes of a triangle, that is, $f_{ijk}$ for $i + j + k = d$ with, here, $d = 3$. Therefore,
we deduce the values of $g_{ijk}$, namely:
$$g_{300} = f_{300}, \quad g_{210} = \frac{-5f_{300} + 18f_{210} - 9f_{120} + 2f_{030}}{6},$$
$$g_{120} = \frac{2f_{300} - 9f_{210} + 18f_{120} - 5f_{030}}{6} \quad \text{and} \quad g_{030} = f_{030},$$
for the (control) values associated with the edges and:
$$g_{111} = \frac{1}{3}(f_{300} + f_{030} + f_{003}) - \frac{3}{4}(f_{210} + ... + f_{201}) + \frac{9}{2} f_{111},$$
for the central (control) value.

The subdivision algorithm can now be applied to the triangle (of degree d) and to the function
f. For example, if we only look at the first subtriangle, the control points are (as seen above)
given as:

P¹_IJK = Σ_{i1=I, j+j1=J, k+k1=K} [J! K! / (j! j1! k! k1!)] (1 − β)^k β^{k1} (1 − γ)^j γ^{j1} P_{(i1+j1+k1) j k},

with I + J + K = d. The function control values have the same form, namely:

g¹_IJK = Σ_{i1=I, j+j1=J, k+k1=K} [J! K! / (j! j1! k! k1!)] (1 − β)^k β^{k1} (1 − γ)^j γ^{j1} g_{(i1+j1+k1) j k},

(control) values from which the function f is reconstructed in the subelement, that is, at the
triplet (u, v, w):

f(x, y) = g¹(u, v, w) = Σ_{I+J+K=d} B^d_IJK(u, v, w) g¹_IJK.     [5.5]

For other types of elements, the same mechanism is applied. From the nodes, the control points
are deduced and with these the subdivision is defined. Then, from the nodal values of the func-
tion, the control values of the initial element are deduced, and then we find the control values of
the subelements, before going back to the corresponding nodal values.
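The same mechanism can be sketched in one dimension (our own simplified illustration, not the book's code): the control values of a function are subdivided by the De Casteljau algorithm exactly as control points are, and each sub-curve reproduces the function on its subinterval.

```python
# One-dimensional sketch (hypothetical, for illustration): subdividing the
# control values of a function exactly as control points are subdivided.

def de_casteljau_split(g, t):
    """Split the control values g of a degree-d Bezier at parameter t.
    Returns the control values of the two sub-curves."""
    left, right = [g[0]], [g[-1]]
    rows = list(g)
    while len(rows) > 1:
        rows = [(1 - t) * a + t * b for a, b in zip(rows, rows[1:])]
        left.append(rows[0])
        right.append(rows[-1])
    return left, right[::-1]

def bezier(g, t):
    """Evaluate a Bezier form of any degree by the De Casteljau algorithm."""
    rows = list(g)
    while len(rows) > 1:
        rows = [(1 - t) * a + t * b for a, b in zip(rows, rows[1:])]
    return rows[0]

g = [1.0, 4.0, -2.0, 3.0]          # control values of a cubic function
gl, gr = de_casteljau_split(g, 0.5)

# Each sub-curve, re-parameterized on [0, 1], reproduces f on its half.
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(bezier(gl, s) - bezier(g, s / 2)) < 1e-12
    assert abs(bezier(gr, s) - bezier(g, 0.5 + s / 2)) < 1e-12
```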

In conclusion, we have just provided a way to subdivide (once or recursively) an element
and the function (scalar or not) that it supports. It will be seen further on that this method is
used in some algorithms for visualizing (solution) fields. It will be used to find finer bounds
(extremal values) of the solution; these bounds⁸ will be used to set the color palette.

In the end, the solution will be represented through triangles, whether the domain is plane or
a surface and, in three dimensions, via cuts enabling the return to triangles (the only element
that, in the end, graphics software is able to manipulate). How to build such cuts will be one of
the points discussed below.

5.3. Classic or adaptive tessellation

The goal is the same as in the previous chapter: to decompose an element into triangles
(directly for a plane or surface domain, after a cut for a three-dimensional domain) and to
represent the solution function based on this decomposition. Let us first clarify the meaning
that we give to tessellation.

8. Bounds that are too large crush the really useful range of colors and do not allow the solution variations
to be accurately seen.
The tessellation of a domain is none other than its decomposition into simple geometric elements
(triangles in practice). It is conformal when the intersection of two elements (seen as closed
sets) is empty, reduced to one vertex or to an edge, and non-conformal if this intersection is
simply required not to overlap. The tessellations that we shall see have no other specificities
(do not think of Delaunay tessellations, which are of no particular interest here).

The subdivision methods (uniform, recursive, possibly non-conformal, etc.) seen in the previous
chapter are, with their flaws, only approximate, or even very approximate, ways of representing
solution fields (Figure 5.2). Furthermore, although the solution is correctly calculated (at the
right degree) at the vertices of the subdivision elements, it remains linearly approximated inside
them. This bias, already visible for triangular elements, is even more evident for quadrilateral
elements, even straight-sided ones, as shown in Figure 5.3.

Figure 5.2. On top, the reference solution (on the left) and uniform subdivisions with multiple
refinement levels. At the bottom, an adaptive and non-conformal recursive subdivision. Plot of
the field and/or of its isovalues

As mentioned in the previous chapter, the first idea is to achieve an adaptive tessellation
instead of a simple subdivision. A tessellation of each element is built by adapting its elements
(size, number, etc.) to the solution field and, more specifically, to the variations in that field. Here,
we find the classical philosophy of building adapted meshes (Volume 2, Chapter 8). The precision
of the result is controlled by a tolerance parameter (Figure 5.4). The tessellation is thus linked to
the field to be represented, but, in the end, the interpolation on each of its elements remains linear
and based on the values of the solution at the right degree, but only at the tessellation vertices.

Figure 5.3. On top, the quadrilateral given and its two naive cuts into triangles. On the left, in
the middle and at the bottom, a uniform partition and a first-degree interpolation on each
subtriangle in the partition. Middle row, at the center and on the right, linear interpolation
(degree 1) on the two triangles of the decomposition. Bottom row, at the center and to the
right, bilinear interpolation (degree 1 × 1) on the two triangles of the decomposition based on
linearly calculated positions on each triangle

The purpose of the following will be to allow for a representation process without resorting
to a linear interpolation of the solution. The representation has a double aspect, plotting the
geometry (as in Chapter 4) and plotting the solution field.

5.4. Toward the design of graphic software based on OpenGL

Here, we build on the quick presentation given in Chapter 4, focusing more specifically on the
representation of the fields, and it will be shown how this representation is not linear. First,
however, some ingredients and tools are introduced, which will be used in the program.

It should be noted that one must consider potentially curved meshes of a certain degree carrying
solution fields of a certain degree, and not necessarily the same one.

Figure 5.4. Standard technique based on adaptive tessellation: a coarse one (on the left) and a
more accurate one (on the right)

5.4.1. Palette definition

The aim is to define the color palette automatically (rather than asking users to do it themselves).
To this end, the extrema of the function have to be found in a fairly accurate manner. Indeed,
the impact of a more (at the bottom) or less (on top) accurate evaluation of these extrema can be
seen in Figure 5.5. This effect is manifested in the arrangement of the colors (see, for example,
the red zone, which is very diluted or more circumscribed) and in that of the isolines.

To find and refine these extrema, we shall rely on the subdivision algorithms seen at the
beginning of this chapter, in their purely geometric aspect, namely cutting a patch into four sub-
patches of the same degree (Figure 5.1), and in their functional aspect, cutting a function into
four subfunctions of the same degree; here, we shall focus on this second aspect. Just as the
patch σ(u, v, w) of relation [5.1] is replaced by four patches σ^i(u, v, w), with σ¹(u, v, w) given
by relation [5.2], the function of relation [5.4] is replaced by the functions g^i(u, v, w), with
g¹(u, v, w) given by relation [5.5]. It is this decomposition that will now be used to fine-tune
the bounds of the function to be represented.

In Lagrange form, such a function is defined by its values at the nodes by means of the shape
functions of the desired degree, d, namely:

f(x, y) = Σ_i pi(x̂, ŷ) fi,

Figure 5.5. Impact of the palette definition according to a more or less accurate evaluation of
the function extrema. This is a P³ function and the extrema are more accurate at the bottom,
allowing the colors to be better narrowed

with fi the value at node i, pi the shape function number i and (x, y) the current position, image
of the point (x̂, ŷ) of the reference element. In Bézier notation, if a triangle is considered, this
is written as:

f(x, y), now denoted f(u, v, w) = Σ_{i+j+k=d} B^d_ijk(u, v, w) gijk,

with the Bernstein polynomials B^d_ijk and the control values gijk defined as seen above. For
any element, it is known that:

min_{i+j+k=d} gijk ≤ f(u, v, w) ≤ max_{i+j+k=d} gijk,

first bounds⁹, via the control values, which will have to be refined (except at degree 1, where
we cannot do better).

To fine-tune the function bounds (so as to find an adapted palette), the elements of the mesh will
be looped over. For each of them, depending on the value of the extrema of the control values

9. This bounding property is the counterpart of the geometrical convex envelope property.

vis-a-vis the function extrema at the mesh nodes, we shall decide whether or not to refine the
bounds. As a matter of fact, there are three situations, illustrated in Figure 5.6 in one space
dimension with a third-degree function.

Figure 5.6. Improving the global bounds according to local extrema compared to these same
bounds. On the left, the upper bound fmax can be refined and becomes f⁺max because of the value
of f at M. On the right, nothing changes, the bounds are not affected. The case in which the lower
bound can be refined is not illustrated, but is “symmetrical” to the left-hand side configuration

The method starts by finding fmin and fmax, the extrema of the nodal values of the function
f, the fi of the Lagrange notation: fmin = min_i fi, fmax = max_i fi. The control values of
all of the elements are computed, that is, for each element, the gijk of the Bézier notation. By
looping over the elements, the three cases illustrated in the figure are found, that is, for a given
element:

– if fmin < min_ijk gijk and if fmax > max_ijk gijk, case 1, then the element is not involved,
fmin and fmax are unchanged, go to the next element;

– (A) if fmin > min_ijk gijk, case 2, cut the function and update fmin with the vertex values of
the subelements;

– or if fmax < max_ijk gijk, case 3, cut the function and update fmax with the vertex values of
the subelements;

– for l = 1, 4, the¹⁰ subelements, if fmin > min_ijk g^l_ijk or fmax < max_ijk g^l_ijk, return
to (A) by cutting the subelements again.

A maximal number of refinements can be imposed for a given element. The result of this
algorithm is that fmin and fmax could be updated (decreased for one or increased for the other)
and that the palette will be much more representative of the function range of variation.

10. Two in one dimension, four in two dimensions, etc.
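The loop above can be sketched, in one space dimension as in Figure 5.6, as the following recursion (a minimal illustration under our own naming; the midpoint subdivision and the depth safeguard are choices of ours, not the book's):

```python
# One-dimensional sketch of the bound-refinement loop (cases 1 to 3 above),
# assuming midpoint subdivision; names and tolerances are ours.

def split(g):
    """Midpoint De Casteljau split of the control values g."""
    left, right, rows = [g[0]], [g[-1]], list(g)
    while len(rows) > 1:
        rows = [0.5 * (a + b) for a, b in zip(rows, rows[1:])]
        left.append(rows[0])
        right.append(rows[-1])
    return left, right[::-1]

def refine_bounds(g, fmin, fmax, depth=8):
    """Tighten (fmin, fmax) using the bounding property of control values."""
    if fmin <= min(g) and fmax >= max(g):    # case 1: nothing to gain
        return fmin, fmax
    if depth == 0:                           # safeguard: accept the hull bounds
        return min(fmin, min(g)), max(fmax, max(g))
    for sub in split(g):                     # cases 2 and 3: cut and recurse
        fmin = min(fmin, sub[0], sub[-1])    # sub-curve end values are f values
        fmax = max(fmax, sub[0], sub[-1])
        fmin, fmax = refine_bounds(sub, fmin, fmax, depth - 1)
    return fmin, fmax

# Cubic with control values (0, 2, 2, 0): the nodal values give fmax = 4/3, the
# control values give the crude bound 2, the true maximum is f(1/2) = 3/2.
g = [0.0, 2.0, 2.0, 0.0]
fmin, fmax = refine_bounds(g, 0.0, 4.0 / 3.0)
assert abs(fmax - 1.5) < 1e-9 and fmin == 0.0
```

On this example, one midpoint split already hits the true maximum and the control values of the sub-curves confirm that no further refinement is useful.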



5.4.2. Cut definition

One looks at the intersection of a plane with, first, a tetrahedron and then with the other types of
elements. Since the case of elements with straight edges is classic, we only look at the case
of curved elements (the classic case is nevertheless covered, with strong simplifications, by the
algorithms proposed below).

• Intersection of a plane and a tetrahedron


The intersection of a plane with a tetrahedron is the portion of the plane delimited by the
trace of the plane on the faces of the tetrahedron cut by this plane. This trace is empty, reduced
to one point, or made of one or several curves that need to be determined or approximated.

The equation of the face is known (it is the Bézier patch of the desired degree) and the plane
equation is a datum; nevertheless, finding an analytical expression of the intersection between
the face and the plane is inextricable, and we shall therefore propose a method for building a
(reasonable) approximation of this intersection. The basic idea is to rely on Algorithm [4.5] in
Chapter 4. This algorithm computes, in an approximate manner, but as accurately as desired,
the¹¹ intersection between a plane and a curve of any degree. Since one looks for the intersection
with a face, the curves to be considered are the edges of the faces.

Figure 5.7. Intersection between the faces (of a tetrahedron) and a plane π. From left to right,
the intersection forms a “triangle”, a “quadrilateral” that slices several faces of the
tetrahedron or a different “polygon” that slices only one face (and no edges)

The analysis of the signs associated with the vertices, nodes and control points of a given tetra-
hedron indicates whether it is potentially cut by the plane. In the classic case, there can
be two kinds of intersections: the plane can intersect three edges (forming a “triangle”) or four
edges (forming a “quadrilateral”) (Figure 5.7, on the left and in the middle). However,
since curved elements are being considered, another possibility exists (Figure 5.7, on
the right): the plane does not intersect any edge of the element, but intersects one of its faces.
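The sign analysis can be sketched as follows (our own simplified version): the signed distances of the control points to the plane are themselves the control values of a Bézier function on the element, so, by the bounding property seen above, if they all have the same strict sign, the plane cannot cut the element.

```python
# Sketch (not the book's code) of the sign analysis used to pre-filter
# elements: same strict sign for all control values => no intersection.

def signed_distance(p, plane):
    """Signed distance of point p to the plane (n, d): n . p - d, n unit."""
    n, d = plane
    return sum(ni * pi for ni, pi in zip(n, p)) - d

def may_intersect(control_points, plane):
    """False only when the plane provably misses the (curved) element."""
    dists = [signed_distance(p, plane) for p in control_points]
    return min(dists) <= 0.0 <= max(dists)

plane = ((0.0, 0.0, 1.0), 0.5)        # the plane z = 1/2

# Control points of a tetrahedron entirely below the plane...
below = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 0.4)]
assert not may_intersect(below, plane)

# ...and one whose control points straddle it: an intersection is possible.
straddle = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 2.0)]
assert may_intersect(straddle, plane)
```

Note that a sign change is only a necessary condition: a curved element whose control points straddle the plane may still not be cut, which is why the finer analysis below is needed.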

 The first two cases are addressed in the same way. For a given face, the two intersection
points of the plane with the two edges involved are calculated, as in Algorithm [4.5], and these

11. In principle, there is zero or one such point. Finding several points assumes that the curve has inflection
points.

points are denoted A and B. There is a curve joining these two points, carried by the face. It is
then assumed¹², by looking for a Bézier curve of the same degree as the element, that there is a
curve whose ends are these two intersection points. To this end, it is necessary to find points
(d − 1 points, if d is the degree) between these two ends. To find this (or these) point(s), we
shall plot (a) curve(s) on the face, namely curve(s) issuing from the vertex (denoted C) common
to the two edges carrying the end intersections of the curve sought for.

To clarify the ideas, we consider the example of degree d = 2 and then that of d = 3. The
face is defined by the classic expression:

θ(u, v, w, t) = Σ_{i+j+k+l=d} B^d_ijkl(u, v, w, t) Pijkl,

with, for example, for the face t = 0, thus for l = 0, B^d_ijkl(u, v, w, 0) = B^d_ijk(u, v, w),
which gives:

θ(u, v, w, 0) = Σ_{i+j+k=d} B^d_ijk(u, v, w) Pijk0,

that we denote σ(u, v, w) = Σ_{i+j+k=d} B^d_ijk(u, v, w) Pijk by omitting the index l = 0.

For d = 2, it is tempting to look for the curve defined by the parameters (u, v = u, w) with
u + v + w = 1. The result is obtained without difficulty. The expression and the result are written
by grouping the terms, with the adequate change of variable so that u ranges from 0 to 1 (and not
from 0 to 1/2). This successively gives:

σ(u, v, w) = Σ_{i+j+k=d} B^d_ijk(u, v, w) Pijk = Σ_{i+j+k=d} C^d_ijk u^{i+j} w^k Pijk

for 0 ≤ u ≤ 1/2, thus Σ_{i+j+k=d} C^d_ijk (u/2)^{i+j} w^k Pijk for 0 ≤ u ≤ 1, which gives the
curve γ(u, w) = Σ_{i+k=d} B^d_ik(u, w) Qik with:

Q20 = (P200 + 2 P110 + P020) / 4,   Q11 = (P011 + P101) / 2   and   Q02 = P002.

It should be noted that (P200 + 2 P110 + P020)/4 is the middle node of the edge [P200 P020] and,
as such, the curve found joins this midpoint to the vertex P002, with the average of the control
midpoints of the other two edges, [P200 P002] and [P020 P002], as its control point.

For d = 3, if the same idea is followed, one will look for the image of the points of parameters
(u = 2v, v, w) and (u, v = 2u, w). The same calculation successively gives the following
curves. The first curve (for u = 2v) is written as γ(v, w) = Σ_{j+k=d} B^d_jk(v, w) Qjk, with:

Q30 = (8 P300 + 12 P210 + 6 P120 + P030) / 27,   Q21 = (P021 + 4 P201 + 4 P111) / 9,

Q12 = (P012 + 2 P102) / 3   and   Q03 = P003,

that is, the curve going from the one-third point (the first Lagrange node of the edge) to P003,
and the second curve (for v = 2u) is written as γ(u, w) = Σ_{i+k=d} B^d_ik(u, w) Qik, with:

Q30 = (P300 + 6 P210 + 12 P120 + 8 P030) / 27,   Q21 = (4 P021 + P201 + 4 P111) / 9,

Q12 = (2 P012 + P102) / 3   and   Q03 = P003,

that is, the curve going from the two-thirds point (the second Lagrange node of the edge) to P003.
It is easy to convince oneself that this method applies to any value of the degree d. The curves
found have the classic generic shape, γ(u, v) = Σ_{i+j=d} B^d_ij(u, v) Qij, and join the Lagrange
nodes of the edge carrying neither A nor B to the common vertex of the edges carrying A and B
(Figure 5.8).

12. Or rather a choice is made since the result will only be an approximation (consistent from one element
to its neighbor).

Figure 5.8. Construction of the curve (u, u, w), d = 2 and curves (u, 2u, w) and (2u, u, w)
for d = 3. In these examples, it is assumed that this is the face t = 0

However, it is easy to observe that these choices are not appropriate because, when applied to
a straight element, the “midpoint” of the curve is generally moved away, or even far away, from
the middle of the straight segment supporting this curve, with the consequence that the element
whose boundary edges are these curves might be false. The solution is to adjust the way the
sweeping curve(s) is (are) defined. At degree 2, the choice is to define the points of the curve
as the triplets (u, α u, w), with α a parameter to be determined, in general different from 1. For
d = 3, two curves, and therefore two parameters α, will have to be determined. If we go back
to the previous calculation, for d = 2, it successively follows that:

σ(u, v, w) = Σ_{i+j+k=d} B^d_ijk(u, v, w) Pijk = Σ_{i+j+k=d} C^d_ijk α^j u^{i+j} w^k Pijk

for 0 ≤ u ≤ 1/(α + 1), thus Σ_{i+j+k=d} C^d_ijk α^j (u/(α + 1))^{i+j} w^k Pijk for 0 ≤ u ≤ 1,
which gives the curve γ(u, w) = Σ_{i+k=d} B^d_ik(u, w) Qik with:

Q20 = (P200 + 2 α P110 + α² P020) / (α + 1)²,   Q11 = (α P011 + P101) / (α + 1)   and   Q02 = P002.

We note that Q20 is a point of the edge [P200 P020] and, thus, the curve found joins this point
to the vertex P002, with a weighted average of the control midpoints of the other two edges,
[P200 P002] and [P020 P002], as its control point.

It remains to calculate the parameter α. If A and B denote the intersection points of the plane
with the (two) face edges, Algorithm [4.5] that has found these points also gives their antecedents,
a = (ua, va, wa) and b = (ub, vb, wb), in the parameter space (the reference triangle). The
point c is defined as the midpoint of [ab]; its coordinates are denoted (uc, vc, wc), and one then
sets α = vc / uc. This calculation applies to any degree; for example, for d = 3, the points at
one-third and at two-thirds of [ab] will be used to find the desired parameters α.
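The computation above can be checked numerically at degree 2 (a sketch with our own notation): the curve γ built from Q20, Q11 and Q02 must coincide with σ(u, α u, w) under the reparameterization s = (α + 1) u.

```python
# Check (not from the book) of the alpha-parameterized sweeping curve, d = 2:
# gamma(s) must equal sigma(u, alpha*u, w) with s = (alpha + 1) * u.

def sigma(P, u, v, w):
    """Quadratic Bezier triangle; P maps index triplets to 2D control points."""
    B = {(2, 0, 0): u * u, (0, 2, 0): v * v, (0, 0, 2): w * w,
         (1, 1, 0): 2 * u * v, (0, 1, 1): 2 * v * w, (1, 0, 1): 2 * u * w}
    return tuple(sum(B[i] * P[i][c] for i in B) for c in (0, 1))

P = {(2, 0, 0): (0.0, 0.0), (0, 2, 0): (2.0, 0.1), (0, 0, 2): (1.0, 2.0),
     (1, 1, 0): (1.0, -0.3), (0, 1, 1): (1.7, 1.2), (1, 0, 1): (0.2, 1.1)}

alpha = 0.7
Q20 = tuple((P[(2, 0, 0)][c] + 2 * alpha * P[(1, 1, 0)][c]
             + alpha**2 * P[(0, 2, 0)][c]) / (alpha + 1) ** 2 for c in (0, 1))
Q11 = tuple((alpha * P[(0, 1, 1)][c] + P[(1, 0, 1)][c]) / (alpha + 1)
            for c in (0, 1))
Q02 = P[(0, 0, 2)]

def gamma(s):
    t = 1.0 - s
    return tuple(s * s * Q20[c] + 2 * s * t * Q11[c] + t * t * Q02[c]
                 for c in (0, 1))

for s in (0.0, 0.3, 0.8, 1.0):
    u = s / (alpha + 1)
    expect = sigma(P, u, alpha * u, 1.0 - s)
    assert all(abs(a - b) < 1e-12 for a, b in zip(gamma(s), expect))
```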

These curves being established, and using Algorithm [4.5] again, the intersection point(s)
of these curves with the plane are found. A Bézier curve of degree d is immediately deduced
thereof¹³. Applied to each face, this process gives the three (four) curves that border the portion
of the intersection plane lying within the tetrahedron. This portion can be seen as a plane
triangle (quadrilateral) whose edges are curves (the hatched areas in Figure 5.7, on the left and in
the middle). Rebuilding a Bézier element is then easy, by inventing the possibly missing internal
control points.

 The last case can be dealt with by again looking for an approximated solution. To do this,
the same approach can be followed: with Algorithm [4.5] using control segments of edges or by
reasoning directly on the control triangles of the face.
– In the first method (Figure 5.9), a curve (or curves) is (are) constructed linking each vertex
to one (or more) point(s) of the opposite edge. The number of curves used to sweep the face
is related to the degree and can be adjusted to better investigate the face examined. Algorithm
[4.5] is applied to every curve, which allows finding one or, rather, two intersection points. The
set of these points enables the (rough) definition of the outline of the face portion cut by the
plane. This polygon is then decomposed into coplanar triangles (using a naive method) and its
complement in the face is addressed as the face itself would be.
This method remains relatively simple, but may “miss” all or part of the solution. There
are two possible answers thereto: a priori, cut the face into four sub-faces of the same degree
(Figure 5.1) and apply the algorithm to each of these sub-faces, and/or increase the number of
sweeping curves. It should be noted that if no intersection is found, even with a significant
number of sweeping curves, this means that the face and the plane do not cut one another.

13. Recall that this curve is only an approximation of the “real” curve which, at degree 2, is not a parabola
in general. It should also be noted that the image of the segment [ab] of the parameter space, with a the
antecedent of A and b that of B, is indeed, for d = 2, a parabola, but is not, in general, in the cutting plane.

Figure 5.9. Construction of curves to sweep a face. On the left, one curve per vertex (for
example, if considering degree 2 and, according to α, the three curves cut each other at a
single point or not); on the right, two curves per vertex

Figure 5.10. On the left, the face [134] of the tetrahedron [1234] is a boundary face. On the
right, the face [134] is shared by the two tetrahedra [1234] and [1345]. The edges [15], [35]
and [45] are cut by the plane at the points denoted as 15, 35 and 45

This configuration is present in two different situations depending on whether the face is a
boundary face of the domain or the element has a neighbor through this face (Figure 5.10). On
the left side, the plane, denoted as π in the figure, cuts the face [134], delimiting a portion (drawn
in blue) thereof located above the plane and the result is the portion of the plane delineated by
this portion of the face. On the right side, the plane cuts the (same) face of the blue tetrahedron
and delimits the same portion, but also cuts the red neighboring tetrahedron. As the sign of the
vertex 5 is necessarily different from the sign of the other three vertices, the intersection defines
a “triangle” as in the first case (intersection with three edges). However, from this triangle the
(blue) contribution of the neighbor must be removed, the result is thus the portion (plane) in red
on the figure.

– In the second method (Figure 5.11), we consider whether the control triangles of the face
can be used to achieve our goals. In this case, the signs of the face vertices and nodes14 are
identical and the sign of at least one control point is different. This implies that the plane cuts the
control edges joining such a point with a vertex. It may also cut other control edges (for example,
joining two non-vertex control points) and there are many possible situations. Analyzing the
possible sign combinations appears to be tedious, therefore we are going to propose a method
capable of recovering the first classic case seen above.
First, the situation discussed potentially arises only for a face including internal control
points. For a triangle, it must be at least of degree three (see the figure).

Figure 5.11. The nine control triangles of a third-degree face. The “corner” triangles, in red,
bear the tangent plane of the corresponding vertex. In red, nodes (including vertices); in black,
non-vertex control points

The idea is to use the De Casteljau algorithm to evaluate the point P as the image of the triplet
(1/3, 1/3, 1/3) (review Figure 9.2 in Chapter 9 of Volume 1). It should be noted that this point is
actually the 10th node of the face. The evaluation algorithm builds, on the spot, the control points
of the three sub-faces incident at P. The trick¹⁵ then consists of interpreting these sub-faces
as the faces of a tetrahedron (Figure 5.12, on the right). The face [123] is not used; on the other
hand, the three faces incident at P are correctly defined and their degree is that of the original
element.

Through this construct, valid for any degree, we find ourselves back in the case shown
in Figure 5.7 (on the left). The method described for this situation is then applied, according to
the sign of the vertex P.

• Intersection of a plane and a non-tetrahedral element

We have just seen the case of a tetrahedron, for which the intersection problem concerns the
faces of the elements, hence the case of triangles. For the other elements (hexahedra, prisms,
pyramids), the faces encountered are triangles and/or quadrilaterals, and it is these latter faces
that will now be discussed. To recover the previous case, the quadrilateral faces of degree d × d
will be subdivided into two triangular faces of degree 2d. We consider again the case where
d = 1, already seen above, then a case where d = 2, before seeing how to solve the general case.

14. Otherwise, at least one edge of the face would be cut by the plane.
15. Somewhat daring, that is true.

Figure 5.12. Evaluation at the node (1/3, 1/3, 1/3), denoted by P. Construction of the
subdivision control points, on the left. Interpretation of the subdivided face as a tetrahedron,
on the right

 Cutting into two triangles a quadrilateral face of degree 1 × 1

To be sure that one finds the same polynomial space as that of a quadrilateral, a triangle of
degree 2 must be considered and then, to remove unnecessary monomials, an accurate choice of
the control points has to be made.

The organization of the control points is given in the following scheme with, on the left,
the quadrilateral face and, on the right, the first triangular face of degree 2 of the subdivision
(diagonal “Q00 Q11”). We have omitted the last index; in fact, we have a face and one should
write Qijk or Pijkl but, for example, for the face w = 0, one has k = 0 and, for the face t = 0,
one has l = 0, so the notation is lightened by dropping this index.

Q01   Q11             P002
                      P101   P011
Q00   Q10             P200   P110   P020

To have the same definition of the geometry, it is necessary to choose the control points, the Pijk,
depending on those of the quadrilateral, the Qij, as follows:

P200 = Q00,   P110 = (Q00 + Q10) / 2,   P020 = Q10,   P011 = (Q10 + Q11) / 2,   P002 = Q11,

P101 = (Q01 + Q10) / 2.
Visualization of a Solution Field Related to a High-Degree Mesh 167

The second triangle has the same reading and the choice of the other diagonal leads to similar
expressions. The definition of P110 (of P011 ) ensures that the associated edges are indeed the
initial straight segments (the other edge being what it is, possibly a parabola).
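This construction can be verified numerically (a sketch of ours, not the book's code): on the triangle of diagonal Q00 Q11, the degree-2 Bézier triangle built with the control points above reproduces the bilinear patch exactly.

```python
# Numerical sanity check (our own) of the degree 1x1 splitting: the quadratic
# Bezier triangle equals the bilinear patch on the half-domain v <= u.

def bilinear(Q, u, v):
    return tuple((1 - u) * (1 - v) * Q["00"][c] + u * (1 - v) * Q["10"][c]
                 + (1 - u) * v * Q["01"][c] + u * v * Q["11"][c]
                 for c in (0, 1))

def mid(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

Q = {"00": (0.0, 0.0), "10": (3.0, 0.2), "01": (0.3, 2.0), "11": (3.5, 2.5)}

# Control points of the first triangle, per the formulas above.
P200, P020, P002 = Q["00"], Q["10"], Q["11"]
P110, P011 = mid(Q["00"], Q["10"]), mid(Q["10"], Q["11"])
P101 = mid(Q["01"], Q["10"])

def triangle(l1, l2, l3):
    """Quadratic Bezier triangle in barycentric coordinates (l1, l2, l3)."""
    return tuple(l1 * l1 * P200[c] + l2 * l2 * P020[c] + l3 * l3 * P002[c]
                 + 2 * l1 * l2 * P110[c] + 2 * l2 * l3 * P011[c]
                 + 2 * l1 * l3 * P101[c] for c in (0, 1))

# A point (u, v) of the triangle v <= u corresponds to (1-u, u-v, v).
for (u, v) in [(0.5, 0.2), (0.9, 0.9), (0.7, 0.0), (1.0, 0.4)]:
    a, b = triangle(1 - u, u - v, v), bilinear(Q, u, v)
    assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```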

 Cutting into two triangles a quadrilateral face of degree 2 × 2

In order for the same polynomial space (as that of a quadrilateral) to be found, one must
consider a triangle of degree 4 and then to remove superfluous monomials, an accurate choice of
the control points has to be made.

The organization of the control points is given in the following scheme with, on the left,
the quadrilateral face and, on the right, the first triangular face of degree 4 of the subdivision
(diagonal Q00 Q22).

                      P004
Q02   Q12   Q22       P103   P013
Q01   Q11   Q21       P202   P112   P022
Q00   Q10   Q20       P301   P211   P121   P031
                      P400   P310   P220   P130   P040

To have the same definition of the geometry, it is necessary to choose the control points, the Pijk,
depending on those of the quadrilateral, the Qij, as follows:

P400 = Q00,   P310 = (Q00 + Q10) / 2,   P220 = (Q00 + 4 Q10 + Q20) / 6,   P130 = (Q10 + Q20) / 2,   P040 = Q20,

P031 = (Q20 + Q21) / 2,   P022 = (Q20 + 4 Q21 + Q22) / 6,   P013 = (Q21 + Q22) / 2,   P004 = Q22,

P103 = (Q12 + Q21) / 2,   P202 = (Q02 + 4 Q11 + Q20) / 6,   P301 = (Q01 + Q10) / 2,

and the internal control points of the face, P211, P121 and P112, remain to be determined. To
this end, one evaluates, via the quadrilateral patch Σ_i Σ_j B²_i(u) B²_j(v) Qij, the points of
parameters (1/2, 1/4), (3/4, 1/4) and (3/4, 1/2). These points are naturally denoted A211, A121
and A112. We then look for the parameter values in the coordinate system of the triangle. For
example, if (u, v, w) is the triplet associated with the point A211, one has (u, v, w) = (1/2, 1/4, 1/4)
and it is written that:

A211 = Σ_{i+j+k=4} B⁴_ijk(u, v, w) Pijk.

By expressing A121 and A112 in the same way, one obtains a system of three equations whose
three unknowns are P211 , P121 and P112 . These three control points are deduced therefrom and
the degree-4 triangle is perfectly determined.

The definition of P310 , P220 , P130 (of P031 , P022 , P013 ) ensures that the associated edges are
indeed the initial parabolas (the other edge being a curve of degree 4 in general).
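The whole construction can be sketched as follows (our own code, applied to a scalar "geometry" for brevity since the formulas act coordinate by coordinate; exact rational arithmetic is used so that the final check is an equality):

```python
# Sketch (our notation) of the degree 2x2 case: build the boundary control
# values from the Qij, solve the 3x3 system for P211, P121, P112, then check
# that the quartic triangle reproduces the biquadratic patch for v <= u.
from fractions import Fraction as F
from math import factorial

def b2(i, t):                      # quadratic Bernstein polynomial
    return [1, 2, 1][i] * t**i * (1 - t) ** (2 - i)

def b4(i, j, k, l1, l2, l3):       # quartic triangle Bernstein polynomial
    c = factorial(4) // (factorial(i) * factorial(j) * factorial(k))
    return c * l1**i * l2**j * l3**k

Q = {(0, 0): F(1), (1, 0): F(4), (2, 0): F(2), (0, 1): F(0), (1, 1): F(5),
     (2, 1): F(3), (0, 2): F(2), (1, 2): F(7), (2, 2): F(6)}

def patch(u, v):                   # the 2x2 quadrilateral patch
    return sum(b2(i, u) * b2(j, v) * Q[i, j] for i in range(3) for j in range(3))

P = {(4, 0, 0): Q[0, 0], (3, 1, 0): (Q[0, 0] + Q[1, 0]) / 2,
     (2, 2, 0): (Q[0, 0] + 4 * Q[1, 0] + Q[2, 0]) / 6,
     (1, 3, 0): (Q[1, 0] + Q[2, 0]) / 2, (0, 4, 0): Q[2, 0],
     (0, 3, 1): (Q[2, 0] + Q[2, 1]) / 2,
     (0, 2, 2): (Q[2, 0] + 4 * Q[2, 1] + Q[2, 2]) / 6,
     (0, 1, 3): (Q[2, 1] + Q[2, 2]) / 2, (0, 0, 4): Q[2, 2],
     (1, 0, 3): (Q[1, 2] + Q[2, 1]) / 2,
     (2, 0, 2): (Q[0, 2] + 4 * Q[1, 1] + Q[2, 0]) / 6,
     (3, 0, 1): (Q[0, 1] + Q[1, 0]) / 2}

unknowns = [(2, 1, 1), (1, 2, 1), (1, 1, 2)]
samples = [(F(1, 2), F(1, 4)), (F(3, 4), F(1, 4)), (F(3, 4), F(1, 2))]

rows, rhs = [], []
for (u, v) in samples:
    l1, l2, l3 = 1 - u, u - v, v             # barycentric coordinates
    rows.append([b4(*ijk, l1, l2, l3) for ijk in unknowns])
    rhs.append(patch(u, v) - sum(b4(*ijk, l1, l2, l3) * P[ijk] for ijk in P))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(rows)
for col, ijk in enumerate(unknowns):          # Cramer's rule on the 3x3 system
    m = [row[:] for row in rows]
    for r in range(3):
        m[r][col] = rhs[r]
    P[ijk] = det3(m) / d

for (u, v) in [(F(1, 3), F(1, 5)), (F(9, 10), F(1, 2))]:   # points with v <= u
    l1, l2, l3 = 1 - u, u - v, v
    assert sum(b4(*ijk, l1, l2, l3) * P[ijk] for ijk in P) == patch(u, v)
```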

 Cutting into two triangles a quadrilateral face of arbitrary degree

The mechanism illustrated by the two examples above is generic. If we consider a quadrilat-
eral face of degree d, the construction of two geometrically equivalent triangles is done with the
following steps.

i) Construction of the curve (of degree 2d) modeling one of the two diagonals of the
quadrilateral, that is:

γ(u) = σ(u, v) with v = u,

which defines the control points of this edge.

ii) Construction of the curves related to the other two edges, for example (depending on the
diagonal chosen):

γ(u) = (1 − u + u)^d σ(u, 0)   and   γ(v) = (1 − v + v)^d σ(1, v),

a simple degree elevation from d to 2d, which defines the control points of these edges.

iii) Rewriting of these three edges in the coordinate system of a triangle:

γ(u), u ∈ [0, 1]   becomes   γ(u, v), u ∈ [0, 1], u + v = 1.

iv) Construction of the (2d − 1)(2d − 2)/2 internal control points, if needed, that is, as
soon as d ≥ 2:
- construction (creation), via the quadrilateral patch, of the internal nodes;
- expression of these nodes via the triangular patch;
- solution of the resulting system;

which defines the internal control points.

In conclusion, using this subterfuge, the quadrilateral faces are replaced by triangular faces
and the previous algorithms are applied. It should be noted that the complexity (and thereby the
cost) increases very rapidly with the degree, which is again an argument for confining ourselves,
at the geometric level, to degrees 1, 2 or 3.

5.4.3. “Pixel-exact” or “almost pixel-exact” representation

It was seen that representing a (scalar) function consists of giving a color to the pixels
displayed on the screen. To determine these colors, a palette (see above) having been defined,
one uses, via the shader, the shape functions of the finite element (of the calculation) written in
their Bézier form. It is therefore necessary to associate with each pixel its coordinates in the
reference element.

The function is known at the mesh nodes. For a given element (here a triangle), it is written
as:

f(u, v, w) = Σ_{i+j+k=d} B^d_ijk(u, v, w) gijk.

It is therefore necessary first to shift from the nodal values, the fijk, to the control values, the
gijk; then, one only has to know the triplet (u, v, w) to find the color of the corresponding pixel.

In Chapter 4, we showed how to build, element by element, a tessellation. The triangles
of the latter make it possible to find the triplet (u, v, w) either exactly (and this will be referred
to as the “pixel-exact” representation) or in an approximated way (referred to as the “almost
pixel-exact” representation). As a matter of fact, any point (pixel) of a tessellation triangle is
referred to by its barycentric coordinates in this triangle. Since the barycentric coordinates (in
the reference element) of the vertices of the tessellation triangles have been memorized, we can
find the barycentric coordinates (now in the reference element) of the point (pixel) (Figure 5.13).
If the current element is straight, the result is exact (Figure 5.14). Otherwise, it has the accuracy
of the tessellation.

Figure 5.13. Calculation of barycentric coordinates of a point of the physical space. On the
left, the reference element; on the right, the physical element with one of the triangles of its
tessellation to illustrate the correspondence process

The same process applies to all types of elements. The tessellation makes it possible, using
barycentric coordinates, to travel back to the reference element and find the parametric coordinates
(barycentric or not, depending on the nature of this element) of any point in the physical space.
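The correspondence process can be sketched as follows (our own notation): each vertex of a tessellation triangle stores its barycentric coordinates in the reference element, and a pixel's barycentric coordinates in the physical sub-triangle give, by linear combination, its (approximate) reference coordinates.

```python
# Sketch (not the book's code) of the pixel -> reference-element transfer.

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in the triangle (a, b, c)."""
    def cross(o, x, y):
        return (x[0] - o[0]) * (y[1] - o[1]) - (x[1] - o[1]) * (y[0] - o[0])
    area = cross(a, b, c)
    return (cross(p, b, c) / area, cross(p, c, a) / area, cross(p, a, b) / area)

# Physical vertices of one tessellation triangle and their stored reference
# barycentric coordinates (u, v, w) in the element.
phys = [(0.0, 0.0), (2.0, 0.2), (0.8, 1.5)]
ref = [(1.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.25, 0.25, 0.5)]

pixel = (1.1, 0.6)
b = barycentric(pixel, *phys)
uvw = tuple(sum(b[k] * ref[k][c] for k in range(3)) for c in range(3))

# The result is again a barycentric triplet inside the reference element.
assert abs(sum(uvw) - 1.0) < 1e-12
assert all(-1e-12 <= x <= 1.0 + 1e-12 for x in uvw)
```

The triplet `uvw` is then fed to the Bézier form of the function to color the pixel; if the element is straight, this transfer is exact.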

Figure 5.14. The so-called “pixel-exact” technique, the triangle being straight

5.4.4. Normals and shading

Based on the knowledge of the field of normals to a surface, lighting models can be used to
deduce a shading map facilitating the perception of reliefs.

The normal field of each mesh face of the surface we are interested in will be determined (sur-
face as such or surface of a solid mesh). Normals are calculated without artifacts (smoothing,
mean, etc.) in order to see exactly what the mesh is. One element is taken, namely a face from a solid element or a surface element if we are in this case. In any case, this element is written in the form:

σ(u, v, w) = Σ_{i+j+k=d} B^d_{ijk}(u, v, w) P_{ijk}   or   σ(u, v) = Σ_{i=0,d} Σ_{j=0,d} B_i^d(u) B_j^d(v) P_{ij},
depending on its geometry (triangle or quadrilateral). Two vectors of the tangent plane are cal-
culated:
τ1 = ∂σ(u, v, w)/∂u   and   τ2 = ∂σ(u, v, w)/∂v,

and, at the point of parameter (u, v, w) in the example of a triangle, the normal vector is written as:

n(u, v, w) = (τ1 ∧ τ2) / ||τ1 ∧ τ2||.
Thereby, the normal is known everywhere, while another solution would be to consider normals
at the nodes only, and to infer by interpolation its values elsewhere.
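As an illustration, the following sketch (ours, not the book's code) evaluates the two tangent vectors and the unit normal of a tensor-product Bézier patch, using the classic Bernstein derivative identity d/dt B_i^d(t) = d (B_{i-1}^{d-1}(t) − B_i^{d-1}(t)); the triangular case follows the same pattern with barycentric arguments.

```python
from math import comb

def bernstein(d, i, t):
    """Bernstein polynomial B_i^d(t)."""
    return comb(d, i) * t**i * (1.0 - t)**(d - i)

def bernstein_deriv(d, i, t):
    """d/dt B_i^d(t) = d * (B_{i-1}^{d-1}(t) - B_i^{d-1}(t))."""
    b_prev = bernstein(d - 1, i - 1, t) if i >= 1 else 0.0
    b_curr = bernstein(d - 1, i, t) if i <= d - 1 else 0.0
    return d * (b_prev - b_curr)

def patch_normal(P, u, v):
    """Unit normal n = (tau1 ^ tau2) / ||tau1 ^ tau2|| of a tensor-product
    Bezier patch at (u, v); P is a (d+1) x (d+1) array of 3D control points."""
    d = len(P) - 1
    t1, t2 = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for i in range(d + 1):
        for j in range(d + 1):
            for k in range(3):
                t1[k] += bernstein_deriv(d, i, u) * bernstein(d, j, v) * P[i][j][k]
                t2[k] += bernstein(d, i, u) * bernstein_deriv(d, j, v) * P[i][j][k]
    n = [t1[1]*t2[2] - t1[2]*t2[1],
         t1[2]*t2[0] - t1[0]*t2[2],
         t1[0]*t2[1] - t1[1]*t2[0]]
    norm = sum(c*c for c in n) ** 0.5
    return [c / norm for c in n]
```

On a flat degree-1 patch, this reproduces the constant normal of the plane, as expected.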

If the normal is known at every point (of the face), it is possible to use the Phong model that
will calculate the light intensity emitted by reflection at every point. It is assumed that every
point is illuminated by a light source and this reflection intensity is calculated in the observer’s
direction. To have a realistic calculation, different parameters are defined, that reflect the proper-
ties of the environments involved and are taken into account in the model, such as parasitic light,
diffusion coefficient, and specularity coefficient.
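A minimal sketch of the Phong reflection formula (the coefficient names ka, kd, ks are ours; a real shader would apply it per color channel):

```python
def phong_intensity(n, l, v, ka, kd, ks, shininess, ia=1.0, il=1.0):
    """Phong reflection: ambient + diffuse + specular contributions.
    n: unit surface normal, l: unit direction to the light source,
    v: unit direction to the observer; ka, kd, ks: ambient, diffusion
    and specularity coefficients; ia, il: ambient and light intensities."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diff = max(dot(n, l), 0.0)
    # mirror direction of the light about the normal: r = 2 (n.l) n - l
    r = [2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l)]
    spec = max(dot(r, v), 0.0) ** shininess if diff > 0.0 else 0.0
    return ka * ia + (kd * diff + ks * spec) * il
```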

Rather widespread, the Phong model is not the only possible model (consider, for example, the Gouraud model). Although it has some flaws, its use in our case is satisfactory (it quickly reveals the situation). The discontinuities in the rendering are immediately seen; if such a discontinuity corresponds to a singularity of the surface (sharp edge (ridge) or singular point), it is natural; otherwise, it underlines a defect in the mesh (typically an element plotted in counter-curvature). Figure 5.17 shows what one expects to find. Reflected light indicates that the mesh properly respects the curvatures of the object.

5.4.5. Level lines and surfaces and application to “wireframe” plots

For entities in two dimensions (triangles or quadrilaterals in a plane or surface mesh, or elements derived from the cutting of a solid mesh by a plane), the construction and then the plotting of a level line (isovalue) of a function known at the nodes is tantamount to an implicit curve (construction) problem.

Methods such as marching-cubes ([Lorensen, Cline-1987] and Chapter 7 of Volume 1) are classically used to build, element by element, the portions of the curve (of the surface) searched for. Within a linear context, the elements are simplexes of degree 1, the curve (the surface) searched for is piecewise linear and its plot is merely the plot of "small" straight segments (triangles). Different elements, be they simple quadrilaterals (or cubes16), can be addressed provided that precautions be taken to remove possible ambiguities. Nevertheless, as soon as we consider elements of arbitrary order (and, why not, curved ones), and since a curve (surface) that is no longer piecewise linear is looked for, the extension of these classic techniques does not seem to be both easy and, at the same time, satisfying as to what can reasonably be built (a curve or surface of a certain degree that is not known, or of a certain regularity that can be linked to the degree of the finite computation elements17) and then plotted.

Before presenting a pixel method based on the possibilities of shaders, we shall nevertheless
try to see how to extend a marching-cubes approach to curved elements.

• Marching-cubes-based method
The underlying idea is very close to what was seen above, to make cuts in a solid curved
mesh, and the basic principle is that of conventional methods, namely to build a (portion of)
curve and then carry out the plot. The principle is based on the search, for each edge of the mesh,
for the point or points where the function reaches the value of the isovalue under study. From the
configurations obtained elementwise, portions of the isovalue curve will be built.

The tools used are calculations of intersections with a curve edge, and the subdivision of
elements via the edges as long as the solution (an approximation of the latter) is not found. To
understand the underlying philosophy, one merely has to reason on a single edge. Figure 5.15
illustrates the situation. The notation is in the system of barycentric indices, and Pij, Nij, fij and gij successively denote the control points, the nodes, the values of the function f as calculated at the nodes, and the control values of the function.

16. Conventionally, the spatial structure used is a grid of squares or cubes, and taking different elements, so
in this case a mesh, is a misuse of the original method.
17. As geometric elements, we can have elements of a certain degree and as computation elements, finite
elements of a different degree.

We will then use the fact that the values of the function f on the edge are bounded; this is the classic relation min_{ij} g_{ij} ≤ f(u, v) ≤ max_{ij} g_{ij} for (u, v) traversing the edge.
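This bound is a direct consequence of the Bernstein polynomials being nonnegative and summing to one, so that f is a convex combination of the control values; a quick numeric check on a degree-2 edge (our sketch):

```python
from math import comb

def f_on_edge(g, t):
    """Value at parameter t of the degree-d Bezier function whose control
    values along the edge are g[0], ..., g[d]."""
    d = len(g) - 1
    return sum(comb(d, i) * t**i * (1.0 - t)**(d - i) * g[i]
               for i in range(d + 1))

g = [0.3, 2.0, -1.0]                      # control values of a degree-2 edge
samples = [f_on_edge(g, k / 100.0) for k in range(101)]
# the function stays between the extrema of its control values
assert min(g) <= min(samples) and max(samples) <= max(g)
```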

Figure 5.15. On the left, from top to bottom, the geometrical definition of a curved edge of degree 2 with its control points, the parameter space and the solution seen as a curve. On the right side, still from top to bottom, the geometrical definition of a third-degree curve with its control points, parameter space and a solution first seen as a curve without inflection and then with such inflection

The dichotomy approach to perform cuts then follows. An examination of the control values of a given element indicates whether the isovalue under study is likely to cross this element. If this is the case, the examination of the control values of the element edges indicates whether the isovalue being studied is likely to cut an edge. If this is the case, the element is divided into four elements of the same degree. The value of the function f at the added middle nodes is compared with the isovalue sought for. This leads to a conclusion: either the intersection point has been found, or we iterate.

Once the intersection points are known, we find the classic method that involves creating
portions of the isovalue curve under study.
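A sketch of this reasoning on a single edge (our illustration, assuming the function restricted to the edge is given by its Bézier control values): the bound above prunes sub-intervals that cannot contain the isovalue, and the De Casteljau algorithm at t = 1/2 provides the control values of the two halves.

```python
def de_casteljau_split(g):
    """Split the Bezier control values g at t = 1/2; return (left, right)."""
    left, right, pts = [], [], list(g)
    while pts:
        left.append(pts[0])
        right.append(pts[-1])
        pts = [(a + b) / 2.0 for a, b in zip(pts, pts[1:])]
    right.reverse()
    return left, right

def edge_iso_points(g, iso, t0=0.0, t1=1.0, tol=1e-6):
    """Parameters t on a curved edge where the Bezier function with control
    values g may reach `iso`, found by recursive subdivision."""
    if not (min(g) <= iso <= max(g)):    # bound test: iso cannot be reached
        return []
    if t1 - t0 < tol:                    # interval small enough: accept
        return [(t0 + t1) / 2.0]
    left, right = de_casteljau_split(g)
    tm = (t0 + t1) / 2.0
    return (edge_iso_points(left, iso, t0, tm, tol)
            + edge_iso_points(right, iso, tm, t1, tol))
```

On a degree-1 edge with control values 0 and 2, the isovalue 1 is located at t = 1/2, as expected.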

Recalling the fact that this is only a sketch of a method and that the possible difficulties are
not discussed, we are not taking this description any further to move to a different approach
working directly on every pixel.

• Pixel direct method

The logic here is totally different: we do not seek to build a curve and then trace it, but look directly at the pixels to identify those that correspond to the level line. The method used follows ideas by [Taubin-1994], which reasons directly at the pixel level and, as such, avoids the bias of staying linear. In other words, the elements of higher degree, curved ones in particular, are taken into account in a much better way.

One of the difficulties that has to be taken into account is to obtain level lines whose thickness is independent both of the degree of the elements and of the viewing conditions (for example, in case of magnifications). To understand how to control the (apparent) thickness of an isoline, one will look at the behavior of the function considered in the neighborhood of a given level line.

We denote by f (.) the solution whose isolines are searched for, and it is assumed that f is
sufficiently regular, so at least C 2 . It is also assumed that one is able to calculate the gradient of f .
Looking at the isoline Iso(f ) of level f0 , by definition one has Iso(f ) = {P ∈ R2 , f (P ) = f0 }
where R2 is none other than the display screen. Consider then a regular point P, therefore such that ||∇f(P)|| ≠ 0. A Taylor expansion is carried out at the first order in the neighborhood of P, that is:

f(Q) = f(P) + < ∇f(P), PQ > + O(||PQ||²),

where PQ denotes the vector from P to Q. Since P is on the isoline of level f0, thus:

f(Q) = f0 + < ∇f(P), PQ > + O(||PQ||²),

ignoring the second-order term, one writes:

f(Q) = f0 + < ∇f(P), PQ > .

Since it is assumed that f is fairly regular, by continuity, it can be written that:

∇f(Q) = ∇f(P) + O(||PQ||),

and, ignoring the first-order term, one simply writes:

∇f(Q) = ∇f(P).

It is assumed that there is a parameter ε such that:

|f(Q) − f0| < ε ||∇f(Q)||, [5.6]

then PQ is decomposed in the basis formed by the unit tangent and the unit normal at P on the curve; it is then written that PQ = Q − P = α t + β n. Since the gradient is aligned with the normal, we have, at the first order:

| < ∇f(P), PQ > | = |β| ||∇f(P)||,

which leads to, since ||∇f(Q)|| = ||∇f(P)|| and ||∇f(P)|| ≠ 0, having |β| < ε. In other words, relation [5.6] makes it possible to control the normal component of the distance between

P and Q. We also show that we can choose α = 0, and thus, β can be seen as the distance
between two lines of neighboring isovalues or still as the thickness of the plot of a given isovalue
line.

In practice, for a given function f, its two closest isovalue lines f0 and f1 are calculated, and relation [5.6] is examined for these two isovalues. If the inequality is verified, the desired color is taken; in particular, this results in a plotted line whose thickness does not depend on the viewing context (magnification, etc.).
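Relation [5.6] translates directly into a per-pixel test; a minimal sketch (ours), where f_val and grad are the value and screen-space gradient of the function at the pixel:

```python
def on_isoline(f_val, grad, iso, eps):
    """Pixel-level test of relation [5.6]: the pixel belongs to the plot of
    the isoline `iso` if |f - iso| < eps * ||grad f||, which gives a plotted
    thickness independent of zoom and of the degree of the elements."""
    gnorm = (grad[0]**2 + grad[1]**2) ** 0.5
    return abs(f_val - iso) < eps * gnorm
```

Note that at a singular point (zero gradient) the inequality cannot hold, so nothing is plotted, which matches the remark below.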

For a constant function f, if we have an extremum or if we are at a singular point (||∇f(P)|| = 0), the above does not apply. However, relation [5.6] is then not verified and there is no plot (which is the expected result).

• Application to the “wireframe” plotting mode for an edge

Instead of plotting the edges of an element based on a subdivision (a tessellation) of the latter,
this problem is formulated as being the plotting of particular isovalue lines.

For a triangle, σ(u, v, w) with (u, v, w) the coordinates in the parameter space, therefore with
u + v + w = 1 and a variation between 0 and 1, the edges in the physical space are the images of
the edges of the reference element that correspond to the values u = 0, v = 0 and w = 0. For a
quadrilateral, the boundary edges are associated with the values u = 0, u = 1, v = 0 and v = 1.
For different (solid volume) elements, the same applies.

As seen above, the shader allows associating each screen pixel corresponding to a given
element, with a point of the parameter space, therefore a triplet (u, v, w) (for triangles or trian-
gular faces). Formally, the edge that connects the second and the third vertex of a triangle is the
image of the edge u = 0 of the reference triangle. Conversely, a pixel whose antecedent is such
that its u is zero or very close to 0 belongs to a boundary edge of the triangle and this proximity to
the zero value will allow a rendering of the edge due to the fact of imposing a particular color to
it. It will then look like a curve (rather than portions of straight segments) of a certain thickness.

Therefore, if we consider the function, denoted f(x, y) in physical (screen) variables and f(u, v, w) in parametric variables, defined by f(u, v, w) = u, the outline of the edge considered above is identical to the problem of processing the isovalue f(u, v, w) = 0. The method described above simply has to be applied for the three functions f(u, v, w) = u, v or w by searching for the zero isovalue.
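A sketch (ours) of this wireframe test at one pixel, applying the isoline thickness control to the three functions u, v and w simultaneously:

```python
def wireframe_pixel(u, v, w, grads, eps):
    """A pixel belongs to a (possibly curved) triangle edge if one of the
    barycentric functions u, v, w is within eps of its zero isovalue, using
    the same thickness control as for any isoline; `grads` gives the
    screen-space gradients of u, v and w at the pixel."""
    def norm(g):
        return (g[0]**2 + g[1]**2) ** 0.5
    return any(abs(c) < eps * norm(g) for c, g in zip((u, v, w), grads))
```

For the straight reference triangle (u = 1 − x − y, v = x, w = y), the gradients are the constants (−1, −1), (1, 0) and (0, 1).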

• Cutting

In order to recover the above plot of isovalues, it is necessary here to find the values of the
function in the cutting plane. We are in three dimensions with solid elements and the function
is known everywhere based on its values that have been calculated at the nodes of the solid
elements. The method used to define the decomposition allowed us to build elements (triangles
or quadrilaterals) of a degree equal to that of the solid mesh. These elements have been used to
build a mesh in the cutting plane. The finite elements utilized to perform the calculations are of
this same degree or a different (higher) degree. In order to be reduced to the standard outline of

an isovalue, it is thus necessary to transport the solid solution onto the elements that compose the
mesh of the cut.

To simplify the discussion, it is first assumed that the elements of the mesh and the finite
computation elements are the same (same degree). In an attempt to identify more precisely the questions that arise, we begin by revisiting the problem involving a tetrahedron of degree 1
cut by a plane. If a triangle is the trace of the cutting plane, the calculation of the solution in
the tetrahedron at the points of the three cut edges makes it possible to consider this triangle
with its three vertices and their solutions, as a usual finite element and “one remains” in the
initial polynomial space. If a quadrilateral is the trace of the cutting plane, the calculation of the
solution in the tetrahedron at the points of the four cut edges makes it possible to consider this
quadrilateral with its four vertices and their solutions, as a usual finite element, but a priori, we
are no longer within the initial polynomial space.
 In fact, we are going to try to verify what the situation is. Revisiting Figure 5.7, con-
sider a first-degree tetrahedron of vertices [S1 , S2 , S3 , S4 ] and, first case, we denote by A, B
and C the vertices of the triangle resulting from the cut. Denote by (uX , vX , wX , tX ) the
barycentric coordinates of one point X of the tetrahedron and (λ1 , λ2 , λ3 ) the barycentric
coordinates of a point P of the triangle [A, B, C]. We have P = λ1 A + λ2 B + λ3 C,
thus (uP, vP, wP, tP) = λ1(uA, vA, wA, tA) + λ2(uB, vB, wB, tB) + λ3(uC, vC, wC, tC), that is, uP = λ1uA + λ2uB + λ3uC and similar expressions for the other coordinates of
P. In terms of solution, using the Bézier notation for tetrahedra, one has f(u, v, w, t) = Σ_{i+j+k+l=1} B¹_{ijkl}(u, v, w, t) g_{ijkl} with the usual notations. Therefore, at point P, the solution f is written as fP = Σ_{i+j+k+l=1} B¹_{ijkl}(uP, vP, wP, tP) g_{ijkl} (we are at degree 1, g_{ijkl} = f_{ijkl}, the solution at the vertex of index ijkl); this gives, in full:

fP = uP g1000 + vP g0100 + wP g0010 + tP g0001,

by expressing uP, etc., it follows, step by step:

fP = (λ1 uA + λ2 uB + λ3 uC) g1000 + ...
   = λ1 (uA g1000 + vA g0100 + wA g0010 + tA g0001) + ...
   = λ1 fA + λ2 fB + λ3 fC,
which, unsurprisingly, is simply the interpolation on the triangle [ABC], seen as a finite element
of degree 1, fA designating the solution at point A. For the second case, the trace of the cutting
plane is a quadrilateral denoted [ABCD]. We denote by λ1 and λ2 the values18 of the parameter
pair of P in [ABCD] and the same mechanism is unwound. One has P = (1 − λ1 )(1 − λ2 )A +
λ1 (1 − λ2 )B + λ1 λ2 C + (1 − λ1 )λ2 D. Thereafter:

uP = (1 − λ1 )(1 − λ2 )uA + λ1 (1 − λ2 )uB + λ1 λ2 uC + (1 − λ1 )λ2 uD ,

18. Parameters obtained using an iterative method. The point is given for (x, y, z) to recover λ1 and λ2 .

and similar expressions for the other barycentric coordinates. The solution in P, as previously fP = uP g1000 + vP g0100 + wP g0010 + tP g0001, is expressed, step by step, as:

fP = {(1 − λ1)(1 − λ2)uA + λ1(1 − λ2)uB + λ1λ2 uC + (1 − λ1)λ2 uD} g1000 + ...
   = (1 − λ1)(1 − λ2) {uA g1000 + vA g0100 + wA g0010 + tA g0001} + ...
   = (1 − λ1)(1 − λ2)fA + λ1(1 − λ2)fB + λ1λ2 fC + (1 − λ1)λ2 fD,
which, unsurprisingly, is simply the interpolation on the quadrilateral [ABCD], seen as a finite
element of degree 1 × 1. In the first case, the cutting triangle is in the space P 1 as well as the
tetrahedron; in the second case, the cutting quadrilateral is in the space Q1 , but the result remains
in the polynomial space of the tetrahedron.
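The algebra above can be checked numerically; the following sketch (ours) verifies that, for a degree-1 solution in the tetrahedron, evaluating at a point P of the quadrilateral [ABCD] coincides with the Q1 interpolation of the corner values, for any four points given by their barycentric coordinates:

```python
import random

random.seed(1)

def rand_bary():
    """Random barycentric coordinates (u, v, w, t) of a point in the tet."""
    a = [random.random() for _ in range(4)]
    s = sum(a)
    return [x / s for x in a]

A, B, C, D = (rand_bary() for _ in range(4))
g = [2.0, -1.0, 0.5, 3.0]                         # nodal values g1000..g0001
f = lambda b: sum(bi * gi for bi, gi in zip(b, g))  # P1 solution in the tet

l1, l2 = 0.37, 0.81                               # parameters of P in [ABCD]
wts = [(1-l1)*(1-l2), l1*(1-l2), l1*l2, (1-l1)*l2]
P = [sum(wk * Xk[i] for wk, Xk in zip(wts, (A, B, C, D))) for i in range(4)]

# evaluating the tet solution at P equals the Q1 interpolation of fA..fD
lhs = f(P)
rhs = sum(wk * f(Xk) for wk, Xk in zip(wts, (A, B, C, D)))
assert abs(lhs - rhs) < 1e-12
```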

It is not obvious that this situation will be the one encountered for elements of arbitrary degree. This observation leads us to propose a method in which the element formed in the cut is utilized, not as an interpolator of solutions, but as a means of finding, for any point, its coordinates in the initial element. Thus, without ambiguity, the evaluation of the function is done in the initial polynomial space. Cuts whose trace defines a triangle or a quadrilateral are addressed with this in mind. On the other hand, a plane that does not cut any edges (case mentioned above), but cuts a boundary face (there is no neighbor), cannot be addressed in this way.

Pragmatically, it is likely that the use of a tessellation (for a solid) will be the most realistic
approximate solution for the problem.

• Level surface

Here, we are going to reuse the approach based on building an implicit surface (Volume 1,
Chapter 7, Figures 7.4 and 7.6) and follow what was seen above for processing the level lines
using this marching-cubes-type approach. The elements of the mesh will be recursively subdi-
vided directly by way of the De Casteljau algorithm for hexahedra and through a generalization
of the method seen above for triangles in the case of tetrahedra. The subdivision criterion is
based on the range of values of the function under consideration, f(u, v, w) or f(u, v, w, t), as bounded by the extrema of the control values, gijk or gijkl. In this way, one finds the points of intersection between the isosurface and the edges of the elements, and therefrom portions of the desired result are deduced.

We do not give an opinion on whether a direct pixel-based approach is also a possible solution.

5.4.6. Representation of non-scalar functions

It may be interesting to represent non-scalar fields. In a classic fashion, a velocity field will
be drawn by means of arrows, a tensor field using (small) reference frames, a field of metrics
(Figure 5.16) by means of ellipses or ellipsoids depending on the dimension.
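In two dimensions, the ellipse drawn for a metric M = [[a, b], [b, c]] is its unit ball {x : ᵗx M x = 1}; its axes follow from the eigendecomposition of M, with semi-axis lengths 1/√λ. A closed-form sketch (ours):

```python
import math

def metric_ellipse(a, b, c):
    """Semi-axes of the unit ball of the 2x2 SPD metric M = [[a, b], [b, c]]:
    directions are the eigenvectors of M, lengths are 1/sqrt(eigenvalue).
    Returns (len1, dir1, len2, dir2) with len1 <= len2."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc   # lam1 >= lam2 > 0
    d1 = (lam1 - c, b)                              # eigenvector for lam1
    n = math.hypot(d1[0], d1[1])
    if n < 1e-14:
        d1 = (1.0, 0.0)                             # isotropic: any direction
    else:
        d1 = (d1[0] / n, d1[1] / n)
    d2 = (-d1[1], d1[0])                            # orthogonal direction
    return 1.0 / math.sqrt(lam1), d1, 1.0 / math.sqrt(lam2), d2
```

For the metric diag(4, 1), the ellipse has semi-axes 1/2 along x and 1 along y, that is, the metric prescribes shorter lengths where its eigenvalue is larger.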

Figure 5.16. Metric representation at one point by an ellipsoid. On the left side, this ellipsoid. On the right, the vertex with this metric and some elements of its ball

5.4.7. Simplified scheme for a graphic software program

We now propose a simplified scheme of what could be the organization of mesh and solution
field visualization software.

Mesh and field visualization algorithm with shaders and GPU [5.7]
i) Reading of the mesh (nodes) and the solution (values at the nodes).
ii) Conversion to Bézier, control points (mesh) and control values (solution).
iii) Construction of neighborhood relations (per edge or per face depending on the dimension).
iv) Mesh extraction of the object surface.
v) Compilation of the shader pipeline – creation of arrays that will be the input parameters of
the shaders – creation of solution arrays in the form of texture – transfer to GPU.
vi) [Rendering loop, end Loop].
vii) User request, ACTION.
viii) According to the request, END or return to (v) with the data relating to the new mesh or new state of the mesh to be processed according to ACTION.

Compared to the rendering loop of Chapter 4, the novelty consists of creating and using textures in order to be able to compute almost exactly the color at the pixel level. To illustrate this method, we give the example of a P1 triangle with a P3 solution, namely the fragment shader that follows. The peculiarity of this shader consists of calculating the P3 solution on the current pixel. Compared to a basic display (P1), the function "solp3" is used with the barycentric coordinates of the triangle. The variables "gl_PrimitiveID" and "GBarycentric", respectively, correspond to the triangle number and the barycentric coordinates of the point represented; these variables are provided by the geometry shader.

=============== Fragment Shader : P3 triangle =============


#version 400

layout( location = 0 ) out vec4 FragColor;

struct LightInfo {
vec4 Position; // Light position in eye coords.
vec3 Intensity; // A,D,S intensity
};

uniform LightInfo Light;

struct MaterialInfo {
vec3 Ka; // Ambient reflectivity
vec3 Kd; // Diffuse reflectivity
vec3 Ks; // Specular reflectivity
float Shininess; // Specular shininess factor
};

/* texture/solution access */
uniform samplerBuffer tex;

uniform MaterialInfo Material;

uniform struct LineInfo {
  float Width;
  vec4 Color;
} Line;

in vec3 GPosition;
in vec3 GNormal;
in vec3 GBarycentric;

/* display option */
uniform float Palette[5];

/*-------------------------*/
/* color map function */
/*-------------------------*/

uniform samplerBuffer colmap;


uniform int colmapsiz;

// --- Return colormap rgb between 0 and 1.


vec3 colormap_rgb(float value, float minVal, float maxVal) {

vec3 rgb;
float step ;
int point1, point2;
float clamped = clamp(value, minVal, maxVal);

if (colmapsiz > 1) {
step = (maxVal - minVal) / float(colmapsiz - 1);
point1 = int(floor((clamped - minVal) / step));
point2 = min( point1 + 1 , colmapsiz - 1);
}
else {
step = 1.0;
point1 = 0;
point2 = 0;
}
float pos = fract((clamped - minVal) / step);

rgb.x = mix(texelFetch(colmap, 3*point1).x,
            texelFetch(colmap, 3*point2).x, pos);
rgb.y = mix(texelFetch(colmap, 3*point1 + 1).x,
            texelFetch(colmap, 3*point2 + 1).x, pos);
rgb.z = mix(texelFetch(colmap, 3*point1 + 2).x,
            texelFetch(colmap, 3*point2 + 2).x, pos);
return(rgb);
}

// Evaluate the P3 solution at barycentric coordinates (u, v) of the
// current triangle; the 10 control values are fetched from the texture.
float solp3(float u, float v)
{
  int idx = gl_PrimitiveID*10;

  float P300 = texelFetch(tex, idx     ).x;
  float P030 = texelFetch(tex, idx + 1 ).x;
  float P003 = texelFetch(tex, idx + 2 ).x;
  float P210 = texelFetch(tex, idx + 3 ).x;
  float P120 = texelFetch(tex, idx + 4 ).x;
  float P021 = texelFetch(tex, idx + 5 ).x;
  float P012 = texelFetch(tex, idx + 6 ).x;
  float P102 = texelFetch(tex, idx + 7 ).x;
  float P201 = texelFetch(tex, idx + 8 ).x;
  float P111 = texelFetch(tex, idx + 9 ).x;

  return (u*u*u*P300 + v*v*v*P030 + (1.-u-v)*(1.-u-v)*(1.-u-v)*P003
        + 3*u*u*v*P210 + 3*u*v*v*P120 + 3*v*v*(1.-u-v)*P021
        + 3*v*(1.-u-v)*(1.-u-v)*P012 + 3*u*(1.-u-v)*(1.-u-v)*P102
        + 3*u*u*(1.-u-v)*P201 + 6*u*v*(1.-u-v)*P111);
}

void main()
{
  int i = 0, idx = 0;

  // evaluate the P3 solution at the current pixel
  float sol = solp3(GBarycentric.x, GBarycentric.y);

  // locate sol in the palette and clamp to its range
  float kc = 0.0;
  if ( sol <= Palette[0] ) {
    sol = Palette[0];
    idx = 1;
  }
  else if ( sol >= Palette[4] ) {
    sol = Palette[4];
    idx = 4;
  }
  else {
    for(i=1; i<5; i++) {
      if ( sol >= Palette[i-1] )
        idx = i;
    }
  }

  kc = (sol - Palette[idx-1]) / (Palette[idx] - Palette[idx-1]);

  FragColor = vec4(colormap_rgb(idx - 1 + kc, 0.0, 4.0), 1.0);
}

One should recognize here the 10 Bernstein polynomials of degree 3 of a triangle of this degree.
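For checking purposes, this degree-3 evaluation can be mirrored on the CPU; the following sketch (ours) assumes the same ordering of the 10 control values as in the shader's texture:

```python
def solp3(u, v, g):
    """Degree-3 Bezier evaluation on a triangle, mirroring the fragment
    shader; g holds the 10 control values in the assumed texture order
    (P300, P030, P003, P210, P120, P021, P012, P102, P201, P111)."""
    w = 1.0 - u - v
    P300, P030, P003, P210, P120, P021, P012, P102, P201, P111 = g
    return (u**3*P300 + v**3*P030 + w**3*P003
            + 3*u*u*v*P210 + 3*u*v*v*P120 + 3*v*v*w*P021
            + 3*v*w*w*P012 + 3*u*w*w*P102 + 3*u*u*w*P201
            + 6*u*v*w*P111)
```

Since the 10 Bernstein polynomials sum to (u + v + w)³ = 1, constant control values must reproduce the constant, which gives a quick sanity check.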

Figure 5.17. A torus meshed at degrees 1, 2 and 3 with shading obtained by taking the
normals into account

Figure 5.18. View of the analytical solution

5.5. Some examples

Through a few examples, the results (the rendering) obtained with the aforementioned tech-
niques will be shown. Obviously, we are going to highlight cases involving meshes and fields
of high degree and/or of large sizes. First, the particular case of surfaces with the use of light to
facilitate the analysis will be quickly addressed by again showing Figure 4.29.

The torus is geometrically defined by two NURBS of degree 5 including six control points
and 12 knots. Three meshes are built from this definition, at degrees 1, 2 and 3, respectively.
The normals at each point (pixel) are exactly evaluated on the GPU using the adequate Bézier form of the elements of the different meshes. To achieve an optimal shading effect, the fragment shader given previously is adapted by considering the normals of the Bézier triangles (which are of degree 1, 2 or 3) as a solution field of a degree at least one order higher.

The following example is the representation of a strongly oscillating solution field over a Falcon geometry. The mesh is of degree 1 and, on this mesh, the solution (given analytically) is represented at degree 1 and then at degree 3. In the latter case, the representation is accurate down to the pixel.

Since the solution oscillates strongly, only the P 3 representation is able to capture oscillations
within a single element as shown more precisely in Figure 5.20.

Many other examples will be found in the gallery of the program ViZir listed hereafter.

Figure 5.19. On the left, P 1 representation, on the right, P 3 representation

Figure 5.20. Magnification of the P 1 and P 3 representations


∗ ∗

In this chapter, following the previous one dedicated to mesh visualization, we have described how to visualize a field of solutions associated with a mesh. The supporting mesh is of a certain degree, while the solution is itself of this degree or may be of a different degree.

The conclusions of the previous chapter remain, almost word for word, unchanged. The difficulties in representing a high-order mesh (or even just a mesh composed of first-degree quadrilaterals) are found here more acutely. Indeed, for this example involving quadrilaterals, plane mesh visualization is solved by subdividing the elements into two triangles, while for solution visualization, such a subdivision does not allow for a reliable rendering. The search for such a rendering leads to working directly at the pixel level, and the solutions carried by "straight" elements

are accurately represented at the very level of these pixels. For solutions carried by “curved” or
non-simplicial elements (even straight), the reliability of the rendering is not exact at the pixel
level, but is as accurate as possible; this has been referred to as an almost pixel-exact rendering
and it seems to us that this is the best solution that can be achieved.

As in the previous chapter, the Bézier formulation (both for elements and solutions) and the
use of De Casteljau algorithms have made it possible to develop original algorithms with a more
general scope (than mere visualization). In particular, the need to define cuts (in curved elements
in three dimensions) led us to formulate intersection problems between curved entities of high
degree, by revisiting the classic case concerning straight entities.

To conclude, it is easy, using a search engine, to find visualization software (commercial, but
not only) and we shall only mention the ViZir program that we are currently developing19.

19. ViZir: https://pyamg.saclay.inria.fr/vizir4.html.


Chapter 6

Meshes and Finite Element Calculations

It is not entirely evident that the finite element method, although taught in engineering schools and on some university courses, is fully mastered from the purely practical point of view1. In addition, in books or courses, it is often the triangle of degree 1 that is presented without going any further; in other words, stopping where the difficulties begin. These are a few of the raisons d'être of this chapter.

Therefore, on the example of a simple problem with a scalar unknown (the heat equation, with temperature as unknown) and one with a vector unknown (the elasticity equation, with displacements as unknowns), it will be shown how to calculate the elementary (thus, per element) and global (by assembly) matrices and right-hand sides involved in the finite element solution of the system of partial differential equations associated with the problem being considered.

Synthetically, for a linear problem, a calculation using the finite element method consists, in
practice, of:
– building a mesh of the computational domain, a cover composed of geometric elements;
– shifting from geometric elements to finite elements;
– calculating matrices and right-hand sides of every mesh element;
– building, by assembly, the global matrices and right-hand sides;
– solving the corresponding system (here linear) by taking into account the possible essential
boundary conditions;
– analyzing (drawing) the solution.

1. Moreover, the use of advanced software programs makes it possible to automatically perform finite ele-
ment calculations, thus hiding all of its practical aspects and, at times, without really understanding how it
works.
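For the simplest case (the degree-1 triangle and the operator −kΔu), the per-element step of this scheme reduces to a classic closed-form formula, the gradients of the barycentric functions being constant on the element; a minimal sketch (ours, anticipating the calculations detailed later in the chapter):

```python
def p1_stiffness(xy, k=1.0):
    """Element stiffness matrix of a degree-1 triangle for -k*Laplace(u):
    A_K[i][j] = k * area * <grad(lambda_i), grad(lambda_j)>, where the
    lambda_i are the barycentric functions; vertices assumed
    counterclockwise."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    area2 = (x2 - x1)*(y3 - y1) - (x3 - x1)*(y2 - y1)   # twice the area
    # constant gradients of the barycentric functions
    grads = [((y2 - y3)/area2, (x3 - x2)/area2),
             ((y3 - y1)/area2, (x1 - x3)/area2),
             ((y1 - y2)/area2, (x2 - x1)/area2)]
    area = area2 / 2.0
    return [[k * area * (gi[0]*gj[0] + gi[1]*gj[1]) for gj in grads]
            for gi in grads]
```

Each row of this matrix sums to zero, reflecting the fact that constant functions lie in the kernel of the Laplacian.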


This scheme is the simplest that can be imagined. As a matter of fact, the problem is linear and
the solution being calculated, the process is finished. In more difficult cases, nonlinear problems,
unsteady problems, etc., adaptive calculations (with error control), this scheme will have to be
adapted, as need be, but the same issues will arise, namely the mesh construction, the definition of
the finite elements, the construction of matrices and right-hand sides as well as the resolution of
systems. Only the Lagrange elements are going to be considered. The degrees of freedom are the values at the nodes of the function sought. In particular, there are no other types of degrees of freedom, for example, involving derivatives. It will be shown that the expressions sought can be generically
written (matrices and right-hand sides) and then adapted to a few chosen examples. This notation
is based on involving the reference element, which will facilitate the calculations. Nevertheless, it will be seen that the degree-1 triangle is unique in that a direct calculation, without resorting to the reference element, is possible.

This chapter cannot be a substitute for the abundant literature on finite elements, but will allow us to make the effective link between several aspects related to meshes within this specific calculation context. In particular, we are going to look at what the geometric interpretation of the calculated quantities is, which is not necessarily usual, and this will be an opportunity to establish a few formulas that are a priori new. It will be seen that the main difficulty is to set the right notations so that the computation process is as clear as possible.

Glossary:
Ω, the (continuous) computational domain;
Ωh , the discrete domain, but also the mesh of this same domain;
Γ, Γh , the boundary of the continuous domain, of the discrete domain;
Ωk , Ωkh , Γk , Γkh , notations for a sub-domain (of number k), its mesh, part of its boundary
and its mesh;
K, the current element of a mesh, Ai its node of (local) number i, {Ai } its nodes;
K̂, the reference element related to the geometry and the degree of the current K;
FK (.), the transformation making it possible to shift from K̂ to K;
[DF ], the Jacobian transformation matrix;
J (.), the Jacobian polynomial of the transformation FK , that is, the determinant of the
Jacobian matrix;
[P], [P̂], the row of basis polynomials on K and K̂;
[DP], [DP̂], the matrix of the derivatives of these polynomials on K and K̂;
x̂, ŷ, the coordinates2 in the reference space;
x, y, the coordinates of the usual space (physical space);
xi , yi , the coordinates of node Ai , with the notation convention, xij = xj − xi ;
λ1 , λ2 , λ3 , the barycentric coordinates (for two-dimensional simplexes).

2. In the literature on finite elements, one denotes x̂, ŷ, etc., instead denoted by u, v, etc., in the geometric
literature.

Moreover, concerning matrices and other quantities that will be involved:

[A], [AK ] or A, AK , a global stiffness matrix or one restricted to an element K;
[M], [MK ] or M, MK , a global mass matrix or one restricted to an element K;
{B}, {BK } or B, BK , a global right-hand side or one restricted to an element K;
t[A], t{B}, the transpose of a matrix and of a vector.

Reminder: < ., . > designates the dot product and . ∧ . the cross product.

6.1. From continuous formulation to discrete notation

To clarify, consider a particularly simple scalar problem, the heat equation, and an equally
simple vector problem, the equation of linear elasticity at small strains (the whole in two
dimensions).

• Heat equation

Temperature is denoted by u, the computational domain by Ω and Γ its boundary. The do-
main can be composed of several sub-domains (or material with different physical properties).
Similarly, the boundary may have several components (internal or external) presenting differ-
ent boundary conditions and different characteristics (transfer coefficients, blocked values, etc.).
Like any real problem, this one is modeled by a system of partial differential equations with
an operator, source terms (in the domain or on some particular part of the boundary) and a set of
physical data, namely:

    −k Δu = F in Ω,
    with boundary conditions,

which can be of different types, for example:

    u = uD on ΓD,
    ∂u/∂n = f on ΓN,
    −k ∂u/∂n + g u = f on ΓFR,

where ΓD refers to (a portion of) the boundary carrying a Dirichlet condition, ΓN is (a portion of)
the boundary carrying a Neumann condition with the source term f (fN) and ΓFR is (a portion
of) the boundary carrying a Fourier–Robin condition with the source term f (fFR). Each of these
boundary portions may exist or be reduced to the empty set.

This system is then written in a variational form by introducing test functions, denoted v,
belonging to an adequate space V , which means that the functions used (just like the solution
sought) have a good regularity and are integrable in the desired sense. This notation reveals a
bilinear form, a(., .), which corresponds to a (generalized) Laplacian, and a linear form, L(.), to
which, if ΓD ≠ ∅, additional boundary conditions are added (not explicitly contained in the
formulation), that is:

    (P10)    a(u, v) = L(v), ∀v ∈ V,
             u = uD on ΓD,                [6.1]

 
with a(u, v) = k ∫Ω < ∇u, ∇v > dΩ + g ∫ΓFR u v dΓ (if ΓFR ≠ ∅), with k a conductivity
coefficient and g a transfer coefficient, and L(v) = ∫Ω F v dΩ + ∫ΓN∪ΓFR f v dΓ, F designating
a source (in Ω), f a source on the ΓN and/or ΓFR part of the boundary (N for Neumann, FR
for Fourier–Robin) and uD a value imposed on the ΓD portion of the boundary (D for Dirichlet).
If k is not constant, we have a(u, v) = ∫Ω < ∇u, [k]∇v > dΩ + ... with, now, [k] a symmetric
coefficient matrix (generalized dot product). This formulation, as concise as possible,
can be enhanced with convective transport and/or reaction terms. A discrete formulation is then
introduced (this is the very principle of a finite element method) that is formally written as:

    (P1h)    ah(uh, vh) = Lh(vh), ∀vh ∈ Vh,
             uh = uD on ΓD,                [6.2]

with, now, Vh the discrete equivalent of V,

    ah(uh, vh) = k ∫Ωh < ∇uh, ∇vh > dΩ + g ∫ΓFRh uh vh dΓ,

and Lh(vh) = ∫Ωh F vh dΩ + ∫ΓNh∪ΓFRh f vh dΓ, and we will see in the following that the
integrals involve every element, therefore ∫Ωh ... = Σ_{K∈Ωh} ∫K ..., etc., which will allow
the construction of the matrix (matrices) and right-hand side(s) element per element. In the
following text, to simplify the notation, not all the coefficients will be indicated (those not
explicitly mentioned are deemed equal to 1).
• Equation of elasticity at small strains

We denote by u = (u1, u2) the displacement vector and its components in x and y. To
simplify the notation, we denote x and y by x1 and x2. We introduce the strain tensor εij(u):

    εij(u) = (1/2) (∂ui/∂xj + ∂uj/∂xi)   for i = 1, 2 and j = 1, 2,

then the stress tensor σij(u):

    σij(u) = λ Σk (∂uk/∂xk) δij + 2μ εij(u) = λ div(u) δij + 2μ εij(u)   for i = 1, 2 and j = 1, 2,

with λ and μ the Lamé coefficients and δij = 1 if i = j, 0 otherwise. Since we are dealing with
an isotropic problem, this is also written (with the convention of implicit summation on repeated
indices) as:

    σij(u) = Eijkl εkl(u),

with Eijkl = λ δij δkl + μ (δik δjl + δil δjk).

The quick presentation of the method given for the heat equation can, at least formally, be repeated
here with some adjustments at the level of the functional spaces and, obviously, of the operator
relative to the case under consideration. Thereby, the problem is formulated in a very similar
way, that is:

    div(σ) = F in Ω,
    u = uD on ΓD,
    [σ] n⃗ = f on ΓN,

and in variational form:

    (P20)    a(u, v) = L(v), ∀v ∈ V × V,
             u = uD on ΓD,                [6.3]

with v = (v1, v2) and the symmetric bilinear form a(u, v) = ∫Ω σij(u) εij(v) dΩ, or still
∫Ω Eijkl εij(u) εkl(v) dΩ, which is also simply written as ∫Ω Eijkl (∂ui/∂xj)(∂vk/∂xl) dΩ, and
then, as the right-hand side, the linear form L(v) = ∫Ω < F, v > dΩ + ∫Γ < f, v > dΓ, which
involves the domain (the desired sub-domains) and the desired part(s) of the boundary. The
problem is completed by a Dirichlet condition on a portion of the boundary. One then
introduces, as before, a discrete formulation (finite elements), which is formally written as:

    (P2h)    ah(uh, vh) = Lh(vh), ∀vh ∈ Vh × Vh,
             uh = uD on ΓD,                [6.4]

with Vh the discrete equivalent of V, ah(uh, vh) = ∫Ωh Eijkl (∂ui/∂xj)(∂vk/∂xl) dΩ (the index h
is not written for u and v when they already carry another index) and Lh(vh) = ∫Ωh < F, vh > dΩ
+ ∫Γh < f, vh > dΓ, and these integrals involve each element K, therefore ∫Ωh ... = Σ_{K∈Ωh} ∫K ...,
etc. In the following, any non-explicit coefficient is deemed to be equal to 1.

6.2. Calculation of an elementary matrix

Four examples of finite elements will be given: the Lagrange triangle of degree 1 with three
nodes, its straight counterpart of degree 2 with six nodes and its curved counterpart of degree 2. The
Lagrange quadrilateral of degree 1 × 1 will also be given. Considering the shape functions of these
different elements, one can make the previous relations explicit and obtain the expression of the
matrices and right-hand sides of the resulting system.

Let us just recall that a finite element is defined by a triplet denoted (K, Σ, [P ]),
where K refers to the geometric element (triangle, quadrilateral, etc.), Σ designates the degrees
of freedom (values at the nodes of the function searched for, of the derivatives of this function,
etc.) and [P ] refers to the basis functions associated with K, polynomials in the case of the
Lagrange elements. Ideally, this triplet definition must be completed with the data
of one or several quadrature formulas (when exact integration is not possible for the quantity
under consideration). It should also be remembered that it is convenient to define a so-called
reference element, denoted (K̂, Σ̂, [P̂ ]), and a function FK that allows writing K = FK (K̂)
and expressing a common finite element as the image of this reference element, thus
simplifying the calculations. Therefore, K is arbitrary, Σ and [P ] are defined for every K while
the entities of (K̂, Σ̂, [P̂ ]) are defined once and for all. It should be noted that the transformation
FK has a dual role. It is a geometric transformation that characterizes the entire geometry
of the elements as images of the reference element. It is also an interpolation function
that makes it possible to evaluate the function searched for at any point of an element, from the
knowledge of this function at the nodes of this element only.
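This dual role of FK can be illustrated with a minimal Python sketch for the degree-1 triangle (an illustrative fragment; the function names are ours, not the book's): the same basic polynomials both map K̂ to K and interpolate nodal values.

```python
# Illustrative sketch (function names are ours): F_K for the Lagrange
# triangle of degree 1, as geometry map and as interpolation operator.

def p_hat(xh, yh):
    """Basic polynomials [P^] = [1 - xh - yh, xh, yh] on the reference triangle."""
    return [1.0 - xh - yh, xh, yh]

def F_K(xh, yh, nodes):
    """Image in K (vertices `nodes`) of the point (xh, yh) of K^."""
    p = p_hat(xh, yh)
    return (sum(pi * x for pi, (x, _) in zip(p, nodes)),
            sum(pi * y for pi, (_, y) in zip(p, nodes)))

def interpolate(xh, yh, u_nodes):
    """Value of u_h at F_K(xh, yh), from the nodal values only."""
    return sum(pi * ui for pi, ui in zip(p_hat(xh, yh), u_nodes))
```

The three vertices of K̂ map to the nodes A1, A2, A3, and interpolate realizes uh = [P ] {ui} at the image point.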

6.2.1. The special case of the first-degree triangle

This element is completely defined by using the system of barycentric coordinates. As a
result, the calculations can be directly carried out on the current element (without having to go
back to a reference element). We know how to express the triplet (K, Σ, [P ]) very simply: K is
the element being considered, Σ includes the values of the desired function at three nodes (the
nodes are the vertices of K) and, finally, [P ] includes the three polynomials pi that are none
other than the barycentric coordinates, traditionally denoted λi, thus pi = λi. If M refers to any
point of K and 1, 2 and 3 to its vertices, one writes SK for the surface of K, also denoted S123,
with SK = S123 = (x12 y13 − x13 y12)/2. One has λ1 = SM23/S123, λ2 = S1M3/S123 and
λ3 = S12M/S123 (Figure 6.1). Similarly, if ui is the value of the unknown at the nodes of K, the
solution at point M is simply calculated as uh(M) = Σ_{i=1,3} λi ui. Denoting x and y the
coordinates of M, it follows that:

    λ1 = SM23/S123 = S23M/SK = (1/(2SK)) (x23 (y − y2) − y23 (x − x2)),
    λ2 = S1M3/S123 = S31M/SK = (1/(2SK)) (x31 (y − y3) − y31 (x − x3)),
    λ3 = S12M/S123 = S12M/SK = (1/(2SK)) (x12 (y − y1) − y12 (x − x1)),

as a result, the gradients are equal to:

    ∇λ1 = (1/(2SK)) { −y23 ; x23 },  ∇λ2 = (1/(2SK)) { −y31 ; x31 },  ∇λ3 = (1/(2SK)) { −y12 ; x12 }.

Therefore, taking the example of the heat equation, the "domain" part of the stiffness matrix of
the element K, the first term of relation [6.5] that will be made explicit below, is written as:

    ∫K t[∇λ1 ∇λ2 ∇λ3] [∇λ1 ∇λ2 ∇λ3] dK.

Since the gradients are constant, it follows that:

    ∫K t[∇λ1 ∇λ2 ∇λ3] [∇λ1 ∇λ2 ∇λ3] dK = t[∇λ1 ∇λ2 ∇λ3] [∇λ1 ∇λ2 ∇λ3] SK.

That is:

    (1/(4SK)) [ −y23  x23 ]
              [ −y31  x31 ] [ −y23  −y31  −y12 ]
              [ −y12  x12 ] [  x23   x31   x12 ],

which gives the following (3 × 3) symmetric matrix (of which only the lower part is written):

    (1/(4SK)) [ x23² + y23²                                           ]
              [ x31 x23 + y31 y23   x31² + y31²                       ]
              [ x12 x23 + y12 y23   x12 x31 + y12 y31   x12² + y12²   ].
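As a hedged illustration (the helper name and layout are ours, not the book's), the matrix above can be computed directly from the vertex coordinates; since the gradients of the λi sum to zero, each row of the result must sum to zero, which gives a cheap sanity check.

```python
# Illustrative helper (ours): direct computation of the "domain" part of
# the P1 stiffness matrix, A_K = (1/(4 S_K)) [edge dot products].

def p1_stiffness(A1, A2, A3):
    (x1, y1), (x2, y2), (x3, y3) = A1, A2, A3
    # surface S_K = (x12 y13 - x13 y12) / 2, with the convention xij = xj - xi
    S = ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    # 2 S_K * grad(lambda_i) = (-y_jk, x_jk), with jk the edge opposite vertex i
    g = [(-(y3 - y2), x3 - x2),
         (-(y1 - y3), x1 - x3),
         (-(y2 - y1), x2 - x1)]
    # gradients are constant: A_K[i][j] = <grad_i, grad_j> S_K = <g_i, g_j> / (4 S_K)
    return [[(gi[0] * gj[0] + gi[1] * gj[1]) / (4.0 * S) for gj in g]
            for gi in g]
```

On the reference triangle this reproduces the classical matrix [[1, −1/2, −1/2], [−1/2, 1/2, 0], [−1/2, 0, 1/2]], and every row sums to zero since Σi ∇λi = 0.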

Figure 6.1. For the calculation of the barycentric coordinates of point M in the triangle [123]

After this aside on this particular element3, where a direct calculation is possible, we return
to the generic case in which the reference element is used to facilitate the calculations.

6.2.2. A generic notation for all elements

A more generic notation is now given for calculating the elementary quantities involving the
reference element, K̂, corresponding to the geometry and degree of current elements, the K. This
notation will apply independently of the geometry and degree and, in particular, for elements
(quadrilaterals, curved elements, etc.) for which there is no existing barycentric description
system.

Consider once more the two previous model problems, the heat equation and the linear elas-
ticity under the assumption of infinitesimal strains.
• Heat equation
We first look at how to calculate an elementary stiffness matrix. It comes from the term4:
   
ah (uh , vh ) = ∇uh ∇vh dΩ + uh vh dΓ = ∇uh ∇vh dK
Ωh Γh K∈Ωh K

 
+ uh vh d∂K,
K∈Ωh K∩Γh

with Γh the discretization of the boundary carrying a Fourier–Robin condition.

3. And this will be true for all straight simplexes with a uniform node distribution, regardless of the degree of
the element. The expressions directly integrated on K will be products of powers of the barycentric coordinates.
One will then use the formula ∫K λ1^α λ2^β λ3^γ dK = (2 α! β! γ! / (2 + α + β + γ)!) SK.
4. Strictly speaking, one should denote ∇uh ∇vh as < ∇uh , ∇vh >.

On K, the unknown function uh is expressed from its values at the nodes of K by:

    uh = [P ] {ui},

with ui the value at the node of K of local number i and [P ], in our case, the basic polynomial line
of the finite element being considered, [P ] = [p1 p2 ...]. Similarly, one has:

    ∇uh = { ∂uh/∂x ; ∂uh/∂y } = [DP ] {ui},

with [DP ] the matrix of the first derivatives of [P ], that is:

    [DP ] = [ ∂p1/∂x  ∂p2/∂x  ... ]
            [ ∂p1/∂y  ∂p2/∂y  ... ].

Thereafter, on each element K of the mesh:

    ∫K ∇uh ∇vh dK + ∫K∩Γh uh vh d∂K = t{vi} (∫K t[DP ] [DP ] dK) {ui} + t{vi} (∫K∩Γh t[P ] [P ] d∂K) {ui},

and the stiffness matrix is defined on K by:

    AK = ∫K t[DP ] [DP ] dK + ∫K∩Γh t[P ] [P ] d∂K.    [6.5]

To make the calculations easier, one will use the reference element K̂, which is related to the
common element K by way of the transformation FK. One can indeed write, in a synthetic way:

    M = FK(M̂) = [P̂ ] {Ai},

which is a concise notation of M = Σi p̂i(M̂) Ai. Any point M of K is the image by FK of a point
M̂ of K̂, built with the basic polynomials expressed on K̂ applied to the nodes of K.
If x and y designate the coordinates of M, the image of the point M̂ of coordinates x̂ and ŷ, one
has x = Σi p̂i(x̂, ŷ) xi and y = Σi p̂i(x̂, ŷ) yi with xi and yi the coordinates of node Ai of K.

Since pi = p̂i ∘ FK⁻¹, one can express everything with the reference entities, that is, the
element K̂ itself and the polynomials p̂i. It follows that:

    AK = ∫K̂ t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] J dK̂ + χΓ̂ ∫Γ̂ t[P̂ ] [P̂ ] JΓ d∂K̂,    [6.6]

where Γ̂ is a notation to designate an edge of K̂ corresponding to an edge of K belonging to a
boundary carrying a source term (χΓ̂ = 1, 0 otherwise) and J is the Jacobian of FK. This is

thus the determinant of [DF ], the (2 × 2, since we are in two dimensions) matrix of the first
derivatives of FK applied to the nodes of K. Moreover, JΓ is the Jacobian of the restriction of
FK to the boundary edge in question, and in this integral one must consider the restriction of
[P̂ ] to the edge being addressed, which is the line of the restricted polynomials p̂i. From the above
expression, the value of the matrix coefficients is derived [Ciarlet, Lunéville-2009], namely:

    AKij = ∫K̂ < t[DF⁻¹] [DF⁻¹] ∇p̂i, ∇p̂j > J dK̂ + χΓ̂ ∫Γ̂ p̂i p̂j JΓ d∂K̂.    [6.7]

From:

    [DF ] = [DP̂ ] {Ai},

the expression of the Jacobian of the transformation is derived; it is equal to J = det([DF ]) =
det([DP̂ ] {Ai}). This Jacobian is an arbitrary polynomial, constant only if [DP̂ ] is constant
(triangle or tetrahedron of degree 1 or, more generally, straight simplexes of any degree with a
uniform distribution of nodes).

In the expression of AK , we have not included the physical coefficients that are necessarily
involved both for characterizing the material (the domain) and the properties of the part of the
boundary potentially contributing to this matrix.

As will be seen below, the global matrix of the system, A, will be obtained by inserting the
various contributions of the matrices at the desired global index, thus ensuring the transition of
the local numbering of nodes in each element to the global numbering of the nodes of the entire
mesh.
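This local-to-global insertion can be sketched as follows (an illustrative Python fragment with a hypothetical data layout: each element is the list of the global numbers of its nodes; dense storage is used only for readability, a real code would use a sparse format).

```python
# Illustrative assembly sketch (hypothetical data layout): `elements` lists,
# for each K, the global numbers of its nodes; `element_matrix(K)` returns
# the local matrix A_K. Dense storage is used only for readability.

def assemble(n_nodes, elements, element_matrix):
    A = [[0.0] * n_nodes for _ in range(n_nodes)]
    for K in elements:
        AK = element_matrix(K)
        for i_loc, i_glob in enumerate(K):
            for j_loc, j_glob in enumerate(K):
                # local (i_loc, j_loc) accumulates at global (i_glob, j_glob)
                A[i_glob][j_glob] += AK[i_loc][j_loc]
    return A
```

For instance, two 1D segments sharing node 1, each with local matrix [[1, −1], [−1, 1]], assemble into the familiar tridiagonal [[1, −1, 0], [−1, 2, −1], [0, −1, 1]].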

We now show how to calculate an elementary mass matrix. Following the same approach,
one simply finds:

    MK = ∫K t[P ] [P ] dK = ∫K̂ t[P̂ ] [P̂ ] J dK̂.    [6.8]

Such a matrix may originate from a time term, ∂u/∂t, in the problem being addressed. It is then
possible to formally express the coefficients of this matrix as:

    MKij = ∫K̂ p̂i p̂j J dK̂.    [6.9]

Finally, the calculation of an elementary right-hand side is given. The right-hand side is derived
from the term:

    Lh(vh) = ∫Ωh F vh dΩ + ∫Γh f vh dΓ = Σ_{K∈Ωh} ∫K F vh dK + Σ_{K∈Ωh} ∫K∩Γh f vh d∂K.

By choosing, per element, vh = [P ] {vi}, it follows:

    Σ_{K∈Ωh} ∫K F [P ] {vi} dK + Σ_{K∈Ωh} ∫K∩Γh f [P ] {vi} d∂K,

that is written as:

    Σ_{K∈Ωh} t{vi} ∫K F t[P ] dK + Σ_{K∈Ωh} t{vi} ∫K∩Γh f t[P ] d∂K,

and the right-hand side on K is written as:

    BK = ∫K t[P ] F dK + ∫K∩Γh t[P ] f d∂K;

to finalize, the reference element is used and it follows that:

    BK = ∫K̂ t[P̂ ] F J dK̂ + χΓ̂ ∫Γ̂ t[P̂ ] f JΓ d∂K̂,    [6.10]

and the coefficient of index i is equal to:

    BKi = ∫K̂ F p̂i J dK̂ + χΓ̂ ∫Γ̂ f p̂i JΓ d∂K̂.    [6.11]

By writing F = [P̂ ] {F } or [P̂ ] {Fi}, one can see that ∫K̂ t[P̂ ] F J dK̂ = MK {F } or
MK {Fi}, thus involving the mass matrix.

Similarly, the global right-hand side of the system, B, will be obtained by inserting the
various contributions of the elementary right-hand sides at the desired global indices.
• Elasticity equation at small strains

As mentioned previously, [P ] and [DP ] refer to the line of basic polynomials and the two
lines of their derivatives. Since we have here two components per unknown (two degrees of
freedom per node), we shall try to define appropriate notations.

On K, the unknown function uh is expressed from its values at the nodes of K by:

    uh = [[P ]] {ui},

that is, per component (omitting the index h):

    u1 = [P ] {u1i} and u2 = [P ] {u2i},

where u1i (u2i) is the displacement in x (in y) of the node of the element K of local number i.
The notation [[P ]] refers to the block matrix:

    [ [P ]  [0] ]
    [ [0]  [P ] ],

with [0] a line of 0 and {ui} organized as the column t{u11 u12 u13 ... u21 u22 u23 ...}, that
is, the displacements in x of nodes 1, 2, ... and then in y at these same nodes. For the derivatives,
we look at:

    ∇uh = [[DP ]] {ui},

that is, per component (omitting the index h):

    ∇u1 = [DP ] {u1i} and ∇u2 = [DP ] {u2i},

this means that the notation [[DP ]] designates the block matrix:

    [ [DP ]  [0]  ]
    [ [0]   [DP ] ],

with, now, [0] two lines of 0. Having introduced these notations, we can express the elementary
quantities.

An elementary stiffness matrix is derived from the term (here, ui denotes the component
i of uh, not to be confused with {ui}, which refers to the displacement vector (with its two
components) at node i of the element K):

    ah(uh, vh) = ∫Ωh Eijkl (∂ui/∂xj)(∂vk/∂xl) dΩ = Σ_{K∈Ωh} ∫K Eijkl (∂ui/∂xj)(∂vk/∂xl) dK.

Since Eijkl = λ δij δkl + μ (δik δjl + δil δjk), according to the index values, one has Eijkl = 0, λ, μ
or λ + 2μ and, for K, we find:

    ∫K Eijkl (∂ui/∂xj)(∂vk/∂xl) dK = ∫K t vh t[[DP ]] [E] [[DP ]] uh dK,

with [E] defined as:

    [E] = [ λ + 2μ   0   0     λ    ]
          [   0      μ   μ     0    ]
          [   0      μ   μ     0    ]
          [   λ      0   0   λ + 2μ ],

and the stiffness matrix on K is defined by:

    [AK ] = ∫K t[[DP ]] [E] [[DP ]] dK,

that can be written in blocks as:

    [AK ] = [ ∫K t[DP ] [E11] [DP ] dK   ∫K t[DP ] [E12] [DP ] dK ]
            [ ∫K t[DP ] [E21] [DP ] dK   ∫K t[DP ] [E22] [DP ] dK ],

with [EIJ ] the following sub-blocks of [E]:

    [E11] = [ λ + 2μ  0 ],   [E12] = [ 0  λ ],
            [   0     μ ]            [ μ  0 ]

    [E21] = [ 0  μ ]   and   [E22] = [ μ    0    ].
            [ λ  0 ]                 [ 0  λ + 2μ ]

One ends up with expressions similar to the one seen for the heat problem (with a generalized
anisotropic coefficient). It should be noted that these expressions can be simplified by taking
into account the fact that the [EIJ ] contain zeroes. It is then possible to obtain a simpler (but less
mechanical) notation that we do not retain. We shall see below that this peculiarity will allow the
calculations to be simplified.
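As an illustrative check (ours, not the book's), the 4 × 4 matrix [E] is exactly the tensor Eijkl flattened with the 0-based index pairing (i, j) → 2i + j:

```python
# Check (illustrative) that [E] is the tensor E_ijkl = lam d_ij d_kl
# + mu (d_ik d_jl + d_il d_jk) flattened with (i, j) -> 2*i + j (0-based).

lam, mu = 2.0, 3.0   # arbitrary Lame coefficients for the check

def delta(a, b):
    return 1.0 if a == b else 0.0

def E_tensor(i, j, k, l):
    return lam * delta(i, j) * delta(k, l) \
         + mu * (delta(i, k) * delta(j, l) + delta(i, l) * delta(j, k))

E_mat = [[lam + 2 * mu, 0.0, 0.0, lam],
         [0.0, mu, mu, 0.0],
         [0.0, mu, mu, 0.0],
         [lam, 0.0, 0.0, lam + 2 * mu]]

for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                assert E_mat[2 * i + j][2 * k + l] == E_tensor(i, j, k, l)
```

According to the index values, the entries are indeed 0, λ, μ or λ + 2μ, as stated above.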

An elementary mass matrix would derive from a time derivative and is written (up to the
coefficients) simply as:

    MK = [ ∫K t[P ] [P ] dK          0         ]
         [        0          ∫K t[P ] [P ] dK ].

An elementary right-hand side derives from the two terms of L(v), that is:

    Lh(vh) = ∫Ωh < F, vh > dΩ + ∫Γh < f, vh > dΓ
           = Σ_{K∈Ωh} ∫K < F, vh > dK + Σ_{K∈Ωh} ∫K∩Γh < f, vh > d∂K,

that we prefer to write as:

    Σ_{K∈Ωh} ∫K t vh {F } dK + Σ_{K∈Ωh} ∫K∩Γh t vh {f } d∂K;

since we have vh = [[P ]] {vi}, the elementary right-hand side is defined by:

    BK = ∫K t[[P ]] {F } dK + ∫K∩Γh t[[P ]] {f } d∂K

       = ∫K [ t[P ]    0  ] { F1 } dK + ∫K∩Γh [ t[P ]    0  ] { f1 } d∂K
            [   0   t[P ] ] { F2 }            [   0   t[P ] ] { f2 }

       = ∫K { t[P ] F1 } dK + ∫K∩Γh { t[P ] f1 } d∂K.
            { t[P ] F2 }             { t[P ] f2 }

If we have data such as F1 = [P ] {F1i } (ditto for F2 ) with appropriate notations ({F1i } refers
to the values at the nodes of the first component of the source term), we find as a term on K the
value [[MK ]] {F }, thus involving a mass matrix.

As for the elementary quantities seen for the heat equation, the calculations will be made
easier by involving the reference element K̂. We find exactly (per block) the same results, such
as:

    ∫K t[DP ] [E11] [DP ] dK = ∫K̂ t[DP̂ ] t[DF⁻¹] [E11] [DF⁻¹] [DP̂ ] J dK̂,

    ∫K t[P ] [P ] dK = ∫K̂ t[P̂ ] [P̂ ] J dK̂,

    ∫K t[P ] F1 dK = ∫K̂ t[P̂ ] F1 J dK̂,

    ∫K∩Γh t[P ] f1 d∂K = ∫Γ̂ t[P̂ ] f1 JΓ d∂K̂, etc.

In practice and according to the case, these integrations will be replaced by quadrature formulas.
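For instance (an illustrative sketch, the rule choice is ours), the three edge-midpoint rule on the reference triangle, which is exact for polynomials of degree up to 2, is enough for the mass-matrix integrands of degree-1 triangles:

```python
# Illustrative quadrature on the reference triangle K^ (area 1/2): the
# three edge-midpoint rule, exact for polynomials of degree <= 2.

MIDPOINT_RULE = [((0.5, 0.0), 1.0 / 6.0),
                 ((0.5, 0.5), 1.0 / 6.0),
                 ((0.0, 0.5), 1.0 / 6.0)]

def integrate_ref(f, rule=MIDPOINT_RULE):
    """Approximate the integral of f(xh, yh) over the reference triangle."""
    return sum(w * f(x, y) for (x, y), w in rule)
```

The weights sum to 1/2, the area of K̂, and for x̂² or x̂ŷ the rule reproduces the exact values 1/12 and 1/24 given by the factorial formula of footnote 3.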

For the construction of the matrices and of the global right-hand sides, and for the assembly
operation, we refer to what follows; it will then be necessary to take into account the fact that
there are two degrees of freedom per node.

6.2.3. The generic notation for the four chosen elements and the heat equation

These results are now made explicit, for the heat equation, with the four elements indicated above.

6.2.4. Lagrange triangle of degree 1 with three nodes

We forget for a moment that the calculations can be done directly on the current element K,
and we apply the generic method going through K̂ (Figure 6.2). To this end, the calculations
previously seen formally are unraveled step by step to evaluate the matrices and right-hand sides.
For this element, one has:

− the basic polynomials [P̂ ] = [1 − x̂ − ŷ, x̂, ŷ] and their derivatives:

    [DP̂ ] = [ −1  1  0 ]
            [ −1  0  1 ];

− the Jacobian matrix:

    [DF ] = [DP̂ ] {Ai} = [ −1  1  0 ] [ x1  y1 ]
                         [ −1  0  1 ] [ x2  y2 ] = [ x12  y12 ]
                                      [ x3  y3 ]   [ x13  y13 ],

recalling that xij = xj − xi; the determinant of [DF ], J = det([DF ]) = x12 y13 − x13 y12;
then the inverse of [DF ]:

    [DF ]⁻¹ = (1/J) [  y13  −y12 ]
                    [ −x13   x12 ].

It should be noted that J is none

other than the determinant of the matrix:

    [ x12  x13  0 ]
    [ y12  y13  0 ]
    [  0    0   1 ],

and is also expressed as the mixed product5 of certain vectors. To correctly formulate this, we
must consider three dimensions by adding a third null component to the vectors corresponding
to the edges of the elements and introduce the vector n⃗ = t{0 0 1}. Once done, one has:
J = < A1A2 ∧ A1A3, n⃗ >. However, in practice, only the third component of the vector
A1A2 ∧ A1A3 has to be calculated.

• For a stiffness matrix, we consider the expression:

    ∫K̂ t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] J dK̂,  then  ∫Γ̂ t[P̂ ] [P̂ ] JΓ d∂K̂ (boundary term).

For the first term, ∫K̂ t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] J dK̂, one successively calculates:

Figure 6.2. The reference triangle K̂, an arbitrary triangle K and the transformation FK

    [DF ]⁻¹ [DP̂ ] = (1/J) [  y13  −y12 ] [ −1  1  0 ]         [  y32   y13  −y12 ]
                          [ −x13   x12 ] [ −1  0  1 ] = (1/J) [ −x32  −x13   x12 ],

whose more elegant notation is obtained by following the edges themselves6, that is to say:

    (1/J) [ −y23  −y31  −y12 ]
          [  x23   x31   x12 ].

Then, we calculate:

    t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] = (1/J²) [ −y23  x23 ]
                                          [ −y31  x31 ] [ −y23  −y31  −y12 ]
                                          [ −y12  x12 ] [  x23   x31   x12 ].

5. And this notation will be preferred because it brings up cross products that will be easy to manipulate.
6. Here is a little tip: edge i is naturally the edge opposite to vertex i.

However, since this expression is constant, it follows that:

    ∫K̂ t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] J dK̂ = t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] J ∫K̂ dK̂.

Since ∫K̂ dK̂ = 1/2, it follows:

    (1/(2J)) [ −y23  x23 ]
             [ −y31  x31 ] [ −y23  −y31  −y12 ]
             [ −y12  x12 ] [  x23   x31   x12 ].

And, as such, the term in K̂ of the stiffness matrix is equal to:

    AK = (1/(2J)) [ x23² + y23²                                           ]
                  [ x23 x31 + y23 y31   x31² + y31²                       ]
                  [ x23 x12 + y23 y12   x12 x31 + y12 y31   x12² + y12²   ].

Since J = 2 SK, we correctly find the expression seen above (the indices do not necessarily
arrive in the same order). A more geometric view of the coefficients of this matrix makes it
explicit that these are indeed dot products between the edges, that is:

    AK = (1/(2J)) [ ||A2A3||²                                     ]
                  [ < A2A3, A3A1 >   ||A3A1||²                    ]
                  [ < A2A3, A1A2 >   < A1A2, A3A1 >   ||A1A2||²   ].
Let us note that relation [6.7] enables the calculation: one just has to calculate the matrix
t[DF⁻¹] [DF⁻¹] and then perform, for a coefficient with a given index, the desired dot product.

The other possible terms in this matrix, ∫Γ̂ t[P̂ ] [P̂ ] JΓ d∂K̂, correspond to the contributions
of the edges, if any. Let us take the case where the edge (of K) is the image of the edge ŷ = 0 of
K̂; then, step by step, the calculations are:

    [P̂ ] = [1 − x̂, x̂, 0],

    t[P̂ ] [P̂ ] = [ 1 − x̂ ]                  [ (1 − x̂)²              ]
                 [   x̂   ] [1 − x̂, x̂, 0] =  [ x̂(1 − x̂)   x̂²        ]
                 [   0   ]                  [    0        0     0  ],

    ∫Γ̂ t[P̂ ] [P̂ ] JΓ d∂K̂ = ∫0¹ [ (1 − x̂)²              ]
                                [ x̂(1 − x̂)   x̂²        ] JΓ dx̂.
                                [    0        0     0  ]

The Jacobian JΓ is the length of the edge of K in question, here JΓ = √(x12² + y12²). In order to
find the contribution, one still has to integrate:

    ∫0¹ [ (1 − x̂)²              ]
        [ x̂(1 − x̂)   x̂²        ] dx̂,
        [    0        0     0  ]

which gives (exact integration):

    [ 1/3           ]
    [ 1/6  1/3      ]
    [  0    0    0  ].

The contribution of this edge is thus written as:

    √(x12² + y12²) [ 1/3           ]
                   [ 1/6  1/3      ]
                   [  0    0    0  ].

For the other two edges, if they contribute, one will find:

    √(x23² + y23²) [ 0             ]        √(x13² + y13²) [ 1/3           ]
                   [ 0   1/3       ]  and                  [  0    0       ]
                   [ 0   1/6  1/3  ]                       [ 1/6   0  1/3  ].
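The 1/3 and 1/6 entries of these edge contributions are the integrals over [0, 1] of the products of the two non-zero restricted basis polynomials; a quick check (illustrative, ours) with Simpson's rule, which is exact for these quadratic integrands:

```python
# Numeric check (illustrative): the 1/3 and 1/6 entries of the edge
# contribution are integrals of products of the edge basis functions
# 1 - t and t; Simpson's rule is exact for quadratics.

def simpson01(g):
    """Simpson's rule on [0, 1], exact for polynomials of degree <= 3."""
    return (g(0.0) + 4.0 * g(0.5) + g(1.0)) / 6.0

entry_diag = simpson01(lambda t: (1.0 - t) ** 2)   # -> 1/3
entry_offd = simpson01(lambda t: t * (1.0 - t))    # -> 1/6
```

Multiplying by the edge length JΓ then gives exactly the matrices above.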
• For a mass matrix, we unravel the expression:

    ∫K̂ t[P̂ ] [P̂ ] J dK̂  with  t[P̂ ] [P̂ ] = [ (1 − x̂ − ŷ)²                    ]
                                             [ (1 − x̂ − ŷ)x̂    x̂²            ]
                                             [ (1 − x̂ − ŷ)ŷ    x̂ŷ      ŷ²    ];

in other words, the integrals to be computed are products of powers of the basic functions, there-
fore of the barycentric coordinates. We saw above a formula giving the results, namely:

    ∫K λ1^α λ2^β λ3^γ dK = (2 α! β! γ! / (2 + α + β + γ)!) SK,    [6.12]

which here becomes:

    ∫K̂ (1 − x̂ − ŷ)^α x̂^β ŷ^γ dK̂ = α! β! γ! / (2 + α + β + γ)!.

As a result, a mass matrix is written as:

    MK = J [ 1/12                ]        [ 1/6               ]
           [ 1/24  1/12          ] = SK  [ 1/12  1/6          ]
           [ 1/24  1/24   1/12   ]        [ 1/12  1/12   1/6  ].
We notice, and this is a rather good sign, that Σij MKij = SK.
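This mass matrix can be cross-checked (illustratively; the helper name is ours) against the closed-form integral [6.12] of products of barycentric coordinates:

```python
from math import factorial

# Illustrative cross-check (ours) of the P1 mass matrix entries against
# the closed-form integral [6.12] of barycentric monomials.

def bary_integral(alpha, beta, gamma, S_K):
    """Integral over K of lambda1^alpha lambda2^beta lambda3^gamma."""
    return 2.0 * factorial(alpha) * factorial(beta) * factorial(gamma) \
        / factorial(2 + alpha + beta + gamma) * S_K

S_K = 1.0
diag = bary_integral(2, 0, 0, S_K)   # p_i * p_i -> S_K / 6
offd = bary_integral(1, 1, 0, S_K)   # p_i * p_j -> S_K / 12
M_K = [[diag, offd, offd],
       [offd, diag, offd],
       [offd, offd, diag]]
```

The entries sum to S_K, as noticed above.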

• For a right-hand side, one begins with the generic expression:

    ∫K̂ t[P̂ ] F J dK̂ + ∫Γ̂ t[P̂ ] f JΓ d∂K̂.

The first term is written as:

    ∫K̂ { 1 − x̂ − ŷ ; x̂ ; ŷ } F J dK̂ = (J F/6) { 1 ; 1 ; 1 } = (SK F/3) { 1 ; 1 ; 1 },

because relation [6.12] indicates that the three integrals are equal to 1/6. Once again, one verifies
that this vector is also expressed as F MK { 1 ; 1 ; 1 } using the elementary mass matrix. For F
non-constant but linear (written as [P̂ ] {Fi}), one finds (with obvious notations):

    (SK/12) { 2F1 + F2 + F3 ; F1 + 2F2 + F3 ; F1 + F2 + 2F3 }.
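A small check (ours, with arbitrary test values) that the linear-source right-hand side is indeed MK {Fi}, as stated:

```python
# Illustrative check (ours): for a linear source F interpolated from nodal
# values, M_K {F_i} matches (S_K / 12) (2F1 + F2 + F3, F1 + 2F2 + F3, F1 + F2 + 2F3).

S_K = 2.5                     # arbitrary triangle surface for the check
M_K = [[S_K / 6, S_K / 12, S_K / 12],
       [S_K / 12, S_K / 6, S_K / 12],
       [S_K / 12, S_K / 12, S_K / 6]]
F = [1.0, 4.0, 7.0]           # arbitrary nodal values of the source

B_K = [sum(M_K[i][j] * F[j] for j in range(3)) for i in range(3)]
closed = [S_K / 12.0 * (2 * F[0] + F[1] + F[2]),
          S_K / 12.0 * (F[0] + 2 * F[1] + F[2]),
          S_K / 12.0 * (F[0] + F[1] + 2 * F[2])]
```

For a constant source the components collapse to SK F/3 each, recovering the first expression above.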

Finally, the second term of the right-hand side is evaluated. If it is the first edge:

    ∫Γ̂ t[P̂ ] f JΓ d∂K̂ = ∫0¹ { 1 − x̂ ; x̂ ; 0 } f JΓ dx̂ = √(x12² + y12²) f { 1/2 ; 1/2 ; 0 },

which can be written (if f is not a constant and is expressed as (1 − x̂)f1 + x̂f2) as:

    √(x12² + y12²) { f1/3 + f2/6 ; f1/6 + f2/3 ; 0 }.

For the other edges, one will, respectively, have:

    √(x23² + y23²) f { 0 ; 1/2 ; 1/2 }  and  √(x13² + y13²) f { 1/2 ; 0 ; 1/2 },

or still, for a linear f:

    √(x23² + y23²) { 0 ; f2/3 + f3/6 ; f2/6 + f3/3 }  and  √(x13² + y13²) { f1/3 + f3/6 ; 0 ; f1/6 + f3/3 }.

6.2.4.1. Lagrange quadrilateral of degree 1 × 1 with four nodes

Step by step, the same mechanism is unraveled and one will find out (verify) that this element
is singularly more complex than the triangle because there are no longer any constant quantities.
The quantities used in the calculations are therefore evaluated successively. For this element,
we have:

    [P̂ ] = [(1 − x̂)(1 − ŷ), x̂(1 − ŷ), x̂ŷ, (1 − x̂)ŷ],

    [DP̂ ] = [ −(1 − ŷ)   (1 − ŷ)    ŷ      −ŷ     ]
            [ −(1 − x̂)    −x̂       x̂    (1 − x̂)  ],

    [DF ] = [DP̂ ] {Ai} = [  (1 − ŷ)x12 − ŷx34      (1 − ŷ)y12 − ŷy34  ]
                         [ −(1 − x̂)x41 + x̂x23    −(1 − x̂)y41 + x̂y23  ],

    J = {(1 − ŷ)x12 − ŷx34} {−(1 − x̂)y41 + x̂y23} − {(1 − ŷ)y12 − ŷy34} {−(1 − x̂)x41 + x̂x23},

    [DF ]⁻¹ = (1/J) [ −(1 − x̂)y41 + x̂y23    −(1 − ŷ)y12 + ŷy34 ]
                    [  (1 − x̂)x41 − x̂x23     (1 − ŷ)x12 − ŷx34 ].

The Jacobian polynomial depends on x̂ and ŷ, and should thus be written J(x̂, ŷ); it is
expressed as:

    J(x̂, ŷ) = < {(1 − ŷ) A1A2 + ŷ A4A3} ∧ {(1 − x̂) A1A4 + x̂ A2A3}, n⃗ >,

with, still, n⃗ = t{0 0 1} and vectors written in three dimensions (with a 0 as third coordinate).
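This Jacobian polynomial can be sketched as follows (illustrative Python, names ours); for a parallelogram the two blended edge vectors are constant, so J is constant, while for a general quadrilateral it varies with (x̂, ŷ):

```python
# Illustrative sketch (ours) of the bilinear quadrilateral's Jacobian
# J(xh, yh) = <{(1-yh) A1A2 + yh A4A3} ^ {(1-xh) A1A4 + xh A2A3}, n>,
# keeping only the third component of the cross product.

def quad_jacobian(xh, yh, A1, A2, A3, A4):
    # blended "horizontal" edge vector (1-yh) A1A2 + yh A4A3
    e1 = ((1 - yh) * (A2[0] - A1[0]) + yh * (A3[0] - A4[0]),
          (1 - yh) * (A2[1] - A1[1]) + yh * (A3[1] - A4[1]))
    # blended "vertical" edge vector (1-xh) A1A4 + xh A2A3
    e2 = ((1 - xh) * (A4[0] - A1[0]) + xh * (A3[0] - A2[0]),
          (1 - xh) * (A4[1] - A1[1]) + xh * (A3[1] - A2[1]))
    return e1[0] * e2[1] - e1[1] * e2[0]
```

On the unit square J ≡ 1; moving A3 to (2, 1) makes J grow from 1 at the first vertex to 2 at the third.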

Figure 6.3. The reference quadrilateral K̂, an arbitrary quadrilateral K and the
transformation FK

• For an elementary stiffness matrix, [DF ]⁻¹ [DP̂ ] is evaluated, that is:

    (1/J) [ −(1 − x̂)y34 − (1 − ŷ)y23   −(1 − ŷ)y41 − x̂y34   −x̂y12 − ŷy41   −(1 − x̂)y12 − ŷy23 ]
          [  (1 − x̂)x34 + (1 − ŷ)x23    (1 − ŷ)x41 + x̂x34    x̂x12 + ŷx41    (1 − x̂)x12 + ŷx23 ].

We observe the mechanical nature of the coefficients. For instance, the coefficients of the first
column involve the terms of the first basic polynomial and the edges opposite to the first vertex.
To continue, the (4 × 4) matrix t[DP̂ ] t[DF⁻¹] [DF⁻¹] [DP̂ ] is expressed. The coefficient
of index (11) is, up to a 1/J² factor:

    {(1 − x̂)y34 + (1 − ŷ)y23}² + {(1 − x̂)x34 + (1 − ŷ)x23}² = ||(1 − x̂) A3A4 + (1 − ŷ) A2A3||².

This coefficient is therefore:

    AK11 = ∫K̂ (1/J) ||(1 − x̂) A3A4 + (1 − ŷ) A2A3||² dK̂.

For the coefficient of index (21), still up to the above factor, we have:

    {(1 − x̂)y34 + (1 − ŷ)y23} {(1 − ŷ)y41 + x̂y34} + {(1 − x̂)x34 + (1 − ŷ)x23} {(1 − ŷ)x41 + x̂x34},

that is, the dot product < (1 − x̂) A3A4 + (1 − ŷ) A2A3, (1 − ŷ) A4A1 + x̂ A3A4 >, thus:

    AK21 = ∫K̂ (1/J) < (1 − x̂) A3A4 + (1 − ŷ) A2A3, (1 − ŷ) A4A1 + x̂ A3A4 > dK̂.

The other coefficients have, evidently, the same form:

    AK22 = ∫K̂ (1/J) ||(1 − ŷ) A4A1 + x̂ A3A4||² dK̂,
    AK31 = ∫K̂ (1/J) < (1 − x̂) A3A4 + (1 − ŷ) A2A3, x̂ A1A2 + ŷ A4A1 > dK̂,
    AK32 = ∫K̂ (1/J) < x̂ A1A2 + ŷ A4A1, (1 − ŷ) A4A1 + x̂ A3A4 > dK̂,
    AK33 = ∫K̂ (1/J) ||x̂ A1A2 + ŷ A4A1||² dK̂,
    AK41 = ∫K̂ (1/J) < (1 − x̂) A3A4 + (1 − ŷ) A2A3, (1 − x̂) A1A2 + ŷ A2A3 > dK̂,
    AK42 = ∫K̂ (1/J) < (1 − x̂) A1A2 + ŷ A2A3, (1 − ŷ) A4A1 + x̂ A3A4 > dK̂,
    AK43 = ∫K̂ (1/J) < (1 − x̂) A1A2 + ŷ A2A3, x̂ A1A2 + ŷ A4A1 > dK̂,
    AK44 = ∫K̂ (1/J) ||(1 − x̂) A1A2 + ŷ A2A3||² dK̂.
Before seeing how to calculate these integrals, the expression of a possible contribution of the
boundary to the stiffness matrix is given. The general form, for an edge, is written
∫Γ̂ t[P̂ ] [P̂ ] JΓ d∂K̂. To clarify, let us take as an edge the image of the edge ŷ = 0. Then:

    [P̂ ] = [(1 − x̂), x̂, 0, 0],

    t[P̂ ] [P̂ ] = [ (1 − x̂)²                      ]
                 [ x̂(1 − x̂)   x̂²                ]
                 [    0        0    0            ]
                 [    0        0    0    0       ],


with JΓ = √(x12² + y12²). Obviously, one finds a contribution as for a triangle edge, that is, for
the first edge (if it contributes):

    √(x12² + y12²) [ 1/3                 ]
                   [ 1/6   1/3           ]
                   [  0     0    0       ]
                   [  0     0    0    0  ].

Let us return to the calculations of the coefficients of the stiffness matrix. We stated that a
quadrature formula had to be used. The simplest has the vertices as nodes, with the weight 1/4.
One writes:

    AK11 = ∫K̂ (1/J) ||(1 − x̂) A3A4 + (1 − ŷ) A2A3||² dK̂ ≈ Σi (1/(4 J(i))) ||(1 − x̂i) A3A4 + (1 − ŷi) A2A3||²,

with, for J(i):

    J(i) = < {(1 − ŷi) A1A2 + ŷi A4A3} ∧ {(1 − x̂i) A1A4 + x̂i A2A3}, n⃗ >.

Therefore, as expected:
−−−→ −−−→ →
J (1) =< A4 A1 ∧ A1 A2 , −
n >= 2S1 , J (2) = 2S2 , J (3) = 2S3 , J (4) = 2S4 ,
−−−→ −−−→ → −−−→ −−−→ → −−−→ −−−→ →
< A1 A2 ∧ A1 A4 , − n > < A2 A3 ∧ A2 A1 , −n > < A3 A4 ∧ A3 A2 , −n >
where S1 = , S2 = , S3 =
2 2 2
−−−→ −−−→ →
< A4 A1 ∧ A4 A3 , −
n >
and S4 = ,
2
the surfaces of the corner triangles associated with the four vertices of the quadrilateral. There-
after:
1 −−−→ 2 1 −−−→ 2 1 −−−→ 2
AK11 ≈ ||A2 A4 || + ||A2 A3 || + ||A3 A4 || ,
8S1 8S2 8S4
−−−→
where it can be seen that the diagonal A2 A4 is included and thus presents a simple geometric
interpretation. As for a triangle where this coefficient involved the edge in front of the first vertex,
here, it is the edges and diagonal "opposite" to this vertex that contribute. As a result, for the
other diagonal terms, one will have:
AK22 ≈ ||A4A1||²/(8 S1) + ||A3A1||²/(8 S2) + ||A3A4||²/(8 S3),

AK33 ≈ ||A1A2||²/(8 S2) + ||A4A2||²/(8 S3) + ||A4A1||²/(8 S4),

AK44 ≈ ||A1A2||²/(8 S1) + ||A2A3||²/(8 S3) + ||A1A3||²/(8 S4).
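The corner areas S_i and the vertex-quadrature value of AK11 are cheap to compute; the following sketch (Python, with helper names and the sample quadrilateral chosen by us, not taken from the text) evaluates the approximation above for the unit square, where all S_i = 1/2 and AK11 = 1.

```python
# Sketch (our notation): corner-triangle areas S_i of a quadrilateral
# A1..A4 and the vertex-quadrature approximation
# AK11 = |A2A4|^2/(8 S1) + |A2A3|^2/(8 S2) + |A3A4|^2/(8 S4).

def sub(p, q):          # vector from p to q
    return (q[0] - p[0], q[1] - p[1])

def cross(u, v):        # z-component of u ^ v
    return u[0] * v[1] - u[1] * v[0]

def norm2(u):
    return u[0] ** 2 + u[1] ** 2

def corner_areas(A1, A2, A3, A4):
    # S_i = half the cross product of the two edges leaving vertex i
    return (cross(sub(A1, A2), sub(A1, A4)) / 2,
            cross(sub(A2, A3), sub(A2, A1)) / 2,
            cross(sub(A3, A4), sub(A3, A2)) / 2,
            cross(sub(A4, A1), sub(A4, A3)) / 2)

def ak11(A1, A2, A3, A4):
    S1, S2, S3, S4 = corner_areas(A1, A2, A3, A4)
    return (norm2(sub(A2, A4)) / (8 * S1)
            + norm2(sub(A2, A3)) / (8 * S2)
            + norm2(sub(A3, A4)) / (8 * S4))

# Unit square: all S_i = 1/2 and AK11 = 2/4 + 1/4 + 1/4 = 1.
square = ((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0))
print(corner_areas(*square))   # (0.5, 0.5, 0.5, 0.5)
print(ak11(*square))           # 1.0
```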
Meshes and Finite Element Calculations 205

We proceed by expressing the non-diagonal terms:

AK21 ≈ Σ_i (1/(4 J(i))) < (1 − x̂_i)A3A4 + (1 − ŷ_i)A2A3 , (1 − ŷ_i)A4A1 + x̂_i A3A4 >,

that is:

AK21 ≈ < A2A4 , A4A1 >/(8 S1) + < A2A3 , A3A1 >/(8 S2),

an expression in which cotangents7 are hidden. The other non-diagonal coefficients have the same shape and consist of two terms; one successively finds:

AK31 ≈ < A2A3 , A1A2 >/(8 S2) + < A3A4 , A4A1 >/(8 S4),
AK32 ≈ < A1A2 , A3A1 >/(8 S2) + < A4A2 , A3A4 >/(8 S3),
AK41 ≈ < A2A4 , A1A2 >/(8 S1) + < A3A4 , A1A3 >/(8 S4),
AK42 ≈ < A1A2 , A4A1 >/(8 S1) + < A2A3 , A3A4 >/(8 S3),
AK43 ≈ < A2A3 , A4A2 >/(8 S3) + < A1A3 , A4A1 >/(8 S4).

Two types of coefficients can be seen, those of indices 31 and 42 being slightly different (referring to the diagonal, the weights Si are those of the other indices).


• The mass matrix is written as ∫_K̂ t[P̂][P̂] J dK̂, with:

t[P̂][P̂] = | (1 − x̂)²(1 − ŷ)²      ...                   ...              ...          |
           | x̂(1 − x̂)(1 − ŷ)²     x̂²(1 − ŷ)²           ...              ...          |
           | x̂(1 − x̂)ŷ(1 − ŷ)    x̂²ŷ(1 − ŷ)           x̂²ŷ²            ...          |
           | (1 − x̂)²ŷ(1 − ŷ)     x̂(1 − x̂)ŷ(1 − ŷ)    x̂(1 − x̂)ŷ²     (1 − x̂)²ŷ² |

The presence of the Jacobian polynomial in the numerator makes an exact integration possible, unlike the case of the stiffness matrix where a quadrature formula must be used. However, even here, a quadrature is possible, with the condensation of the matrix as a result, which, as such, is diagonal. The quadrature is defined by its nodes, the vertices, and its weights, 1/4 at every node.

As a reminder, since:

J = {(1 − ŷ)x12 − ŷx34 } {−(1 − x̂)y41 + x̂y23 } −


− {(1 − ŷ)y12 − ŷy34 } {−(1 − x̂)x41 + x̂x23 } ,

7. The dot product involves a cosine, and the surface contains a sine.

each term can be exactly integrated; for example, index 11:

∫_K̂ (1 − x̂)²(1 − ŷ)² J dK̂,

requires the calculation of:

∫_K̂ (1 − x̂)²(1 − ŷ)² {(1 − ŷ)x12 − ŷx34} {−(1 − x̂)y41 + x̂y23} dK̂

and of −∫_K̂ (1 − x̂)²(1 − ŷ)² {(1 − ŷ)y12 − ŷy34} {−(1 − x̂)x41 + x̂x23} dK̂, which, moreover, can easily be deduced from the first calculation. Therefore, one just looks at:

∫_K̂ (1 − x̂)²(1 − ŷ)² {(1 − ŷ)x12 − ŷx34} {−(1 − x̂)y41 + x̂y23} dK̂.

There are thus two polynomials to integrate:

∫_K̂ (1 − x̂)² {−(1 − x̂)y41 + x̂y23} (1 − ŷ)² {(1 − ŷ)x12 − ŷx34} dK̂
= ∫₀¹ (1 − x̂)² {−(1 − x̂)y41 + x̂y23} dx̂ ∫₀¹ (1 − ŷ)² {(1 − ŷ)x12 − ŷx34} dŷ
= ∫₀¹ {−(1 − x̂)³ y41 + x̂(1 − x̂)² y23} dx̂ ∫₀¹ {(1 − ŷ)³ x12 − ŷ(1 − ŷ)² x34} dŷ.
We reuse formula [6.12] that, here, becomes:

∫₀¹ (1 − x̂)^α x̂^β dx̂ = α! β! / (1 + α + β)!.
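Relation [6.12] is a classical Euler integral and is easy to check mechanically; the snippet below (ours, not from the text) expands (1 − x̂)^α with the binomial theorem, integrates term by term in exact rational arithmetic, and compares with the factorial formula.

```python
from fractions import Fraction
from math import comb, factorial

def beta_integral(alpha, beta):
    # ∫_0^1 (1-x)^alpha x^beta dx, by binomial expansion:
    # (1-x)^alpha = sum_k C(alpha,k) (-x)^k, then ∫ x^(beta+k) dx = 1/(beta+k+1)
    return sum(Fraction((-1) ** k * comb(alpha, k), beta + k + 1)
               for k in range(alpha + 1))

def formula_6_12(alpha, beta):
    # alpha! beta! / (1 + alpha + beta)!
    return Fraction(factorial(alpha) * factorial(beta),
                    factorial(1 + alpha + beta))

for a in range(6):
    for b in range(6):
        assert beta_integral(a, b) == formula_6_12(a, b)

# The two 1D integrals used just above:
print(formula_6_12(3, 0), formula_6_12(2, 1))   # 1/4 1/12
```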
Hence the result:

MK11 = (x12/4 − x34/12)(y23/12 − y41/4) − (y12/4 − y34/12)(x23/12 − x41/4),

which is none other than:

MK11 = (1/144) < {3 A1A2 − A3A4} ∧ {A2A3 − 3 A4A1} , n >.

By a simple permutation, we can find the other three diagonal coefficients, but it requires more work to guess what the other coefficients are. The relation will thus be deconstructed. In fact, the result is particularly simple. By using the surfaces Si of the corner triangles, we have:

MK11 = (1/72) {9 S1 + 3 S2 + S3 + 3 S4}.

The other three diagonal coefficients are therefore:

MK22 = (1/72) {3 S1 + 9 S2 + 3 S3 + S4},
MK33 = (1/72) {S1 + 3 S2 + 9 S3 + 3 S4},
MK44 = (1/72) {3 S1 + S2 + 3 S3 + 9 S4}.

Therefrom, all the other coefficients are deduced (the weights being necessarily 3 or 1). It is thus found that:

MK21 = (1/72) {3 S1 + 3 S2 + S3 + S4},
MK31 = (1/72) {S1 + S2 + S3 + S4},
MK32 = (1/72) {S1 + 3 S2 + 3 S3 + S4},
MK41 = (1/72) {3 S1 + S2 + S3 + 3 S4},
MK42 = (1/72) {S1 + S2 + S3 + S4},
MK43 = (1/72) {S1 + S2 + 3 S3 + 3 S4}.
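These closed-form coefficients can be cross-checked numerically: the integrand of MK11 has degree 3 in each variable, so a 2 × 2 Gauss rule is already exact. The sketch below (the sample quadrilateral and helper names are ours) compares the quadrature value with (9S1 + 3S2 + S3 + 3S4)/72.

```python
import math

# Sample convex quadrilateral (ours, for illustration only)
A1, A2, A3, A4 = (0.0, 0.0), (2.0, 0.0), (2.5, 2.0), (0.0, 1.0)

def corner_areas():
    def cr(p, q, r):   # cross(pq, pr) / 2
        return ((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])) / 2
    return (cr(A1, A2, A4), cr(A2, A3, A1), cr(A3, A4, A2), cr(A4, A1, A3))

def jac(x, y):
    # J = {(1-y)x12 - y x34}{-(1-x)y41 + x y23} - {(1-y)y12 - y y34}{-(1-x)x41 + x x23}
    x12, y12 = A2[0]-A1[0], A2[1]-A1[1]
    x23, y23 = A3[0]-A2[0], A3[1]-A2[1]
    x34, y34 = A4[0]-A3[0], A4[1]-A3[1]
    x41, y41 = A1[0]-A4[0], A1[1]-A4[1]
    return (((1-y)*x12 - y*x34) * (-(1-x)*y41 + x*y23)
            - ((1-y)*y12 - y*y34) * (-(1-x)*x41 + x*x23))

# 2-point Gauss on [0,1] (exact for cubics), tensorized
g = [(0.5 - math.sqrt(3)/6, 0.5), (0.5 + math.sqrt(3)/6, 0.5)]
mk11_gauss = sum(wx * wy * (1-x)**2 * (1-y)**2 * jac(x, y)
                 for x, wx in g for y, wy in g)

S1, S2, S3, S4 = corner_areas()
mk11_exact = (9*S1 + 3*S2 + S3 + 3*S4) / 72   # = 21/72 for this quadrilateral
assert abs(mk11_gauss - mk11_exact) < 1e-12
print(mk11_exact)
```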
In summary, the exact calculation is reasonable and only four vector products constitute its cost.

It should be noted, as for the triangle, that Σ_{i,j} MKij = 2 S1 (if it is assumed that the Si are equal). Having said that, one will nevertheless look at a quadrature formula whose effect is to condense the matrix. The result is then approached by:

∫_K̂ t[P̂][P̂] J dK̂ ≈ Σ_i ω_i t[P̂(i)][P̂(i)] J(i),

with the four vertices as integration points and, as weights ω_i, the single value 1/4. The polynomials and the Jacobian are evaluated at these points. The calculation is then immediate; for i = 1, only one term remains, that of index (11), which is equal to J(1)/4, and we have J(1) = 2 S1. Therefore, the mass matrix has as coefficients:

MK11 = S1/2,   MK22 = S2/2,   etc.

with Si the area of the corner triangle at vertex i. We note the usual property, Σ_{i,j} MKij = (S1 + S2 + S3 + S4)/2.

Finally, we look at the expression of the right-hand side with both of its potential terms, ∫_K̂ t[P̂] F J dK̂ and ∫_Γ̂ t[P̂] f JΓ d∂K̂. The first term is written as:

∫_K̂ { (1 − x̂)(1 − ŷ) ; x̂(1 − ŷ) ; x̂ŷ ; (1 − x̂)ŷ } F J dK̂,

with J the Jacobian polynomial. An exact integration is immediate, making any quadrature unnecessary. One therefore has to calculate expressions such as:
∫_K̂ (1 − x̂)(1 − ŷ) [ {(1 − ŷ)x12 − ŷx34} {−(1 − x̂)y41 + x̂y23} − {(1 − ŷ)y12 − ŷy34} {−(1 − x̂)x41 + x̂x23} ] dK̂,

that is:

∫₀¹ (1 − ŷ) {(1 − ŷ)x12 − ŷx34} dŷ ∫₀¹ (1 − x̂) {−(1 − x̂)y41 + x̂y23} dx̂
− ∫₀¹ (1 − ŷ) {(1 − ŷ)y12 − ŷy34} dŷ ∫₀¹ (1 − x̂) {−(1 − x̂)x41 + x̂x23} dx̂;

therefore (using the above formula), for the first line, one has (1/9)(x12 − x34/2)(−y41 + y23/2). In other words, the first coefficient8 is written as:
BK1 = (F/9) < (A1A2 − (1/2)A3A4) ∧ (−A4A1 + (1/2)A2A3) , n >
    = (F/9) < (A1A2 + (1/2)A4A3) ∧ (A1A4 + (1/2)A2A3) , n >,

with a strong weight on the edges incident to A1 and a weak weight on the other two. As a result:

BK2 = (F/9) < (A2A3 + (1/2)A1A4) ∧ (A2A1 + (1/2)A3A4) , n >,
BK3 = (F/9) < (A3A4 + (1/2)A2A1) ∧ (A3A2 + (1/2)A4A1) , n >,
BK4 = (F/9) < (A4A1 + (1/2)A3A2) ∧ (A4A3 + (1/2)A2A1) , n >.

If the datum F is no longer a constant, the formula must be adapted, or one must resort to an integration using a quadrature. For F defined as [P̂]{Fi}, we find a calculation analogous to that of a mass matrix, {BK} = [MK]{Fi}.

8. With the dilemma of describing an edge in its traveling direction (4 to 1) or in the increasing direction of
the associated reference coordinate (1 to 4).

To conclude, the second term of the right-hand side is evaluated. For the first edge:

∫_Γ̂ t[P̂] f JΓ d∂K̂ = ∫₀¹ { (1 − x̂) ; x̂ ; 0 ; 0 } f JΓ dx̂ = sqrt(x12² + y12²) f { 1/2 ; 1/2 ; 0 ; 0 },

which generalizes (if f is not a constant and is expressed as (1 − x̂)f1 + x̂f2) to:

sqrt(x12² + y12²) { f1/3 + f2/6 ; f1/6 + f2/3 ; 0 ; 0 }.

For the other edges, one will have similar expressions.

6.2.4.2. Straight-sided Lagrange triangle of degree 2 with six nodes


We indicate the expression of the basic polynomials and their derivatives:

[P̂] = [(1 − x̂ − ŷ)(1 − 2x̂ − 2ŷ), x̂(2x̂ − 1), ŷ(2ŷ − 1), 4x̂(1 − x̂ − ŷ), 4x̂ŷ, 4(1 − x̂ − ŷ)ŷ],

[DP̂] = | −3 + 4(x̂ + ŷ)   4x̂ − 1   0         4(1 − 2x̂ − ŷ)   4ŷ    −4ŷ             |
       | −3 + 4(x̂ + ŷ)   0         4ŷ − 1   −4x̂               4x̂   4(1 − x̂ − 2ŷ) |
From the geometrical perspective, the current element K is a priori defined as the image by the transformation FK of the reference element K̂. Therefore, with Ai the six nodes of K:

K = { M = FK(M̂) = Σ_{i=1}^{6} p̂i(M̂) Ai , M̂ ∈ K̂ }.

Since the current element is straight, if its nodes are defined according to a uniform pattern, the node of an edge being its midpoint, the geometrical transformation simplifies and we find that of the triangle of degree 1. Indeed, let Ai be the nodes of K with A4 = (A1 + A2)/2, etc.; we can write, here with respect to A1:

(1 − x̂ − ŷ)(1 − 2x̂ − 2ŷ) + 2x̂(1 − x̂ − ŷ) + 2ŷ(1 − x̂ − ŷ) = (1 − x̂ − ŷ),

and one finds, in the same way, x̂ with respect to A2 and ŷ with respect to A3; therefore we have:

K = { M = FK¹(M̂) = Σ_{i=1}^{3} p̂i¹(M̂) Ai , M̂ ∈ K̂ },

with FK¹(.) the degree 1 transformation and p̂i¹ the degree 1 shape functions. As a result, the Jacobian of the transformation is a constant per element, two times the surface area of the element K.
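The collapse of the six-node map onto the affine map when the edge nodes are midpoints is easy to check numerically; in this sketch (vertex coordinates ours, for illustration only), both maps agree at arbitrary reference points.

```python
# P2 basis on the reference triangle (from the [P-hat] row above)
def p2_basis(x, y):
    return [(1-x-y)*(1-2*x-2*y), x*(2*x-1), y*(2*y-1),
            4*x*(1-x-y), 4*x*y, 4*(1-x-y)*y]

# A straight-sided triangle with midside nodes (vertices ours)
A1, A2, A3 = (0.2, 0.1), (1.7, 0.4), (0.6, 1.5)
mid = lambda p, q: ((p[0]+q[0])/2, (p[1]+q[1])/2)
nodes = [A1, A2, A3, mid(A1, A2), mid(A2, A3), mid(A3, A1)]

def fk_p2(x, y):   # six-node (isoparametric) map
    p = p2_basis(x, y)
    return (sum(pi*N[0] for pi, N in zip(p, nodes)),
            sum(pi*N[1] for pi, N in zip(p, nodes)))

def fk_p1(x, y):   # affine map of the degree-1 triangle
    l = [1-x-y, x, y]
    return (sum(li*N[0] for li, N in zip(l, (A1, A2, A3))),
            sum(li*N[1] for li, N in zip(l, (A1, A2, A3))))

for (x, y) in [(0.3, 0.2), (0.1, 0.7), (0.25, 0.25)]:
    px, py = fk_p2(x, y)
    qx, qy = fk_p1(x, y)
    assert abs(px - qx) < 1e-12 and abs(py - qy) < 1e-12
print("P2 map with midside nodes == affine map")
```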

Figure 6.4. The reference triangle K̂, an arbitrary triangle K and the transformation FK . On
the left side, a straight-sided triangle, on the right, a curved triangle (a case that will be
detailed below)

• For the stiffness matrix, we need to first evaluate [DF ]. We have:
[DF] = [DP̂] {Ai},

a priori involving the six nodes. Since the edge nodes are the midpoints, we exactly recover the case of a triangle of degree 1, that is:

[DF] = | x12  y12 |
       | x13  y13 |

therefore, in the same way, J = x12 y13 − x13 y12 = 2 SK and:

[DF]⁻¹ = (1/J) |  y13  −y12 |
               | −x13   x12 |

We then evaluate [DF]⁻¹[DP̂] (without writing out the detailed expression of [DP̂]), namely:

(1/J) |  y13  −y12 | | ∂p1/∂x̂  ...  ∂p6/∂x̂ |
      | −x13   x12 | | ∂p1/∂ŷ  ...  ∂p6/∂ŷ |

= (1/J) |  y13 ∂p1/∂x̂ − y12 ∂p1/∂ŷ   ......    y13 ∂p6/∂x̂ − y12 ∂p6/∂ŷ |
        | −x13 ∂p1/∂x̂ + x12 ∂p1/∂ŷ   ......   −x13 ∂p6/∂x̂ + x12 ∂p6/∂ŷ |

We then estimate t[DP̂] t[DF]⁻¹ [DF]⁻¹ [DP̂], which is a (6 × 6) matrix of which only coefficient 11 will be expressed, then a non-diagonal term, that of index 21, and directly the coefficient of index ij, that is:

AK11 = (1/J) ∫_K̂ [ {y13 ∂p1/∂x̂ − y12 ∂p1/∂ŷ}² + {−x13 ∂p1/∂x̂ + x12 ∂p1/∂ŷ}² ] dK̂,
AK21 = (1/J) ∫_K̂ [ {y13 ∂p2/∂x̂ − y12 ∂p2/∂ŷ}{y13 ∂p1/∂x̂ − y12 ∂p1/∂ŷ}
     + {−x13 ∂p2/∂x̂ + x12 ∂p2/∂ŷ}{−x13 ∂p1/∂x̂ + x12 ∂p1/∂ŷ} ] dK̂,

and the term of index ij of the matrix is formally written as:

AKij = (1/J) ∫_K̂ [ {y13 ∂pi/∂x̂ − y12 ∂pi/∂ŷ}{y13 ∂pj/∂x̂ − y12 ∂pj/∂ŷ}
     + {−x13 ∂pi/∂x̂ + x12 ∂pi/∂ŷ}{−x13 ∂pj/∂x̂ + x12 ∂pj/∂ŷ} ] dK̂.   [6.13]
If we expand, we find:

AKij = (1/J) ∫_K̂ [ y13² ∂pi/∂x̂ ∂pj/∂x̂ − y12 y13 {∂pi/∂x̂ ∂pj/∂ŷ + ∂pi/∂ŷ ∂pj/∂x̂} + y12² ∂pi/∂ŷ ∂pj/∂ŷ
     + x13² ∂pi/∂x̂ ∂pj/∂x̂ − x12 x13 {∂pi/∂x̂ ∂pj/∂ŷ + ∂pi/∂ŷ ∂pj/∂x̂} + x12² ∂pi/∂ŷ ∂pj/∂ŷ ] dK̂,

that can be expressed as:

AKij = (1/J) ∫_K̂ [ ||A1A3||² ∂pi/∂x̂ ∂pj/∂x̂ − < A1A2 , A1A3 > {∂pi/∂x̂ ∂pj/∂ŷ + ∂pi/∂ŷ ∂pj/∂x̂}
     + ||A1A2||² ∂pi/∂ŷ ∂pj/∂ŷ ] dK̂.
The coefficients of this matrix are integrals involving the products of derivatives ∂pi/∂x̂ ∂pj/∂x̂, ∂pi/∂ŷ ∂pj/∂ŷ and ∂pi/∂x̂ ∂pj/∂ŷ. There are 108 terms of this nature but, since the first two families are symmetrical, and since ∂p3/∂x̂ = ∂p2/∂ŷ = 0, ∂p6/∂x̂ = −∂p5/∂x̂ and, finally, ∂p5/∂ŷ = −∂p4/∂ŷ, the number of integrals to be evaluated is only 10 + 10 + 16. By looking further into the nature of the integrals that have yet to be calculated, we see, not surprisingly, that they are linear combinations of only four elementary integrals denoted I1, Ix̂ = Iŷ, Ix̂² = Iŷ² and Ix̂ŷ, with:

I1 = ∫_K̂ dK̂ = 1/2,   Ix̂ = ∫₀¹ ∫₀^{1−x̂} x̂ dŷ dx̂ = 1/6,   Ix̂² = ∫₀¹ ∫₀^{1−x̂} x̂² dŷ dx̂ = 1/12,

and Ix̂ŷ = ∫₀¹ ∫₀^{1−x̂} x̂ŷ dŷ dx̂ = 1/24.

Therefrom, the (symmetrical) array of the products ∫_K̂ ∂pi/∂x̂ ∂pj/∂x̂ dK̂ is deduced (rows and columns ordered from ∂p1/∂x̂ to ∂p6/∂x̂):

|  1/2   ...   ...   ...    ...   ... |
|  1/6   1/2   ...   ...    ...   ... |
|  0     0     0     ...    ...   ... |
| −2/3  −2/3   0     4/3    ...   ... |
|  0     0     0     0      4/3   ... |
|  0     0     0     0     −4/3   4/3 |

observing that the sum per column (row) is zero and that the non-zero contributions come from the edge [A1A2], the segments [A1A4] and [A2A4], as well as from the segment [A6A5], which correspond to the differentiation direction.
After those remarks, we instantly have the array of the products ∫_K̂ ∂pi/∂ŷ ∂pj/∂ŷ dK̂:

|  1/2   ...   ...    ...    ...   ... |
|  0     0     ...    ...    ...   ... |
|  1/6   0     1/2    ...    ...   ... |
|  0     0     0      4/3    ...   ... |
|  0     0     0     −4/3    4/3   ... |
| −2/3   0    −2/3    0      0     4/3 |

with the same observations: the edge [A1A3], the segments [A1A6] and [A3A6], as well as the segment [A4A5], contribute. Finally, for the crossed derivatives, ∫_K̂ ∂pi/∂x̂ ∂pj/∂ŷ dK̂, one has the array:

(rows ∂pi/∂x̂, columns ∂pj/∂ŷ)

|  1/2   0    1/6    0      0    −2/3 |
|  1/6   0   −1/6   −2/3    2/3   0   |
|  0     0    0      0      0     0   |
| −2/3   0    0      2/3   −2/3   2/3 |
|  0     0    2/3   −2/3    2/3  −2/3 |
|  0     0   −2/3    2/3   −2/3   2/3 |

As expected, the sum of the coefficients per column and per row is zero. Moreover, when we expand 1/2 = 2/3 − 1/6, we can guess the edges or segments that contribute.

The value of the coefficients can then be given. We start with AK11:

AK11 = (1/J) { y13²/2 − y12 y13 + y12²/2 + x13²/2 − x12 x13 + x12²/2 }
     = (1/(2J)) { y13² − 2 y12 y13 + y12² + x13² − 2 x12 x13 + x12² }
     = (1/(2J)) { (y13 − y12)² + (x13 − x12)² },

which is none other than:

AK11 = ||A2A3||² / (4 SK),

from which it is inferred that AK22 = ||A1A3||²/(4 SK) and AK33 = ||A1A2||²/(4 SK). For the following diagonal coefficient, AK44, one has (we only write the terms in ykl):

y13² ∂pi/∂x̂ ∂pj/∂x̂ − y12 y13 {∂pi/∂x̂ ∂pj/∂ŷ + ∂pi/∂ŷ ∂pj/∂x̂} + y12² ∂pi/∂ŷ ∂pj/∂ŷ
   = (4/3) { y13² − y12 y13 + y12² } = (2/3) { y13² + y12² + (y13 − y12)² } = (2/3) { y13² + y12² + y23² },

which gives:

AK44 = ( ||A1A2||² + ||A1A3||² + ||A2A3||² ) / (3 SK).

As a result, AK55 = AK66 = AK44.
As a result, AK55 = AK66 = AK44 .

We continue with the non-diagonal coefficients.

For AK21, one only needs to recalculate {y13 ∂p2/∂x̂ − y12 ∂p2/∂ŷ}{y13 ∂p1/∂x̂ − y12 ∂p1/∂ŷ}, that is y13²/6 − y12 y13/6 = (1/6){y13² − y12 y13} = (1/6){y13 y23}. Therefore:

AK21 = (1/(12 SK)) < A1A3 , A2A3 >.

And, mechanically:

AK31 = (1/(12 SK)) < A1A2 , A3A2 > = −(1/(12 SK)) < A1A2 , A2A3 >,
and AK32 = (1/(12 SK)) < A1A2 , A1A3 >.

Then, for AK41, one looks at −(2/3){y13² − y12 y13} = −(2/3){y13 y23}, thus:

AK41 = −(1/(3 SK)) < A1A3 , A2A3 >;

for AK42, one looks at −(2/3) y12 y13, thus:

AK42 = −(1/(3 SK)) < A1A2 , A1A3 >;

for AK43, the integrals of all the derivative products are equal to zero, therefore AK43 = 0; for AK51, similarly, the integrals of all the derivative products are equal to zero, therefore AK51 = 0; for AK52, we find AK42 in absolute value, that is:

AK52 = (1/(3 SK)) < A1A2 , A1A3 >;

for AK53, we look at −(2/3) y12 y13, that is:

AK53 = −(1/(3 SK)) < A1A2 , A2A3 >;

for AK54, one looks at −(4/3){−y12 y13 + y12²} = −(4/3){y12 (y12 − y13)} = −(4/3){y12 y32}, and:

AK54 = (2/(3 SK)) < A1A2 , A2A3 >;

for AK61, one looks at −(2/3){−y12 y13 + y12²}, and:

AK61 = −(1/(3 SK)) < A1A2 , A3A2 > = (1/(3 SK)) < A1A2 , A2A3 >;

for AK62, the integrals of all the derivative products are equal to zero, therefore AK62 = 0; for AK63, we find AK61, that is AK63 = AK61; for AK64, we look at −(4/3) y12 y13, therefore:

AK64 = −(2/(3 SK)) < A1A2 , A1A3 >;

for AK65, we look at −(4/3){y13² − y12 y13}, that is:

AK65 = −(2/(3 SK)) < A1A3 , A2A3 >.
3 SK
We now look at the possible boundary contributions to this stiffness matrix. The second part
intervening in the general expression has to be calculated.

Let us look at the case where the edge [A1 A2 ] is involved in this boundary contribution. First,
we notice that the integration element is the same as for the triangle of degree 1. In fact, we have
d∂K = JΓ d∂K ˆ where JΓ depends on the edge being addressed. Therefore, we obtain:
∫₀¹ JΓ | (1 − x̂)²(1 − 2x̂)²          ...                     ...   ...              ...   ... |
       | −x̂(1 − x̂)(1 − 2x̂)²        x̂²(2x̂ − 1)²           ...   ...              ...   ... |
       | 0                            0                       0     ...              ...   ... |
       | 4x̂(1 − x̂)²(1 − 2x̂)        4x̂²(1 − x̂)(2x̂ − 1)   0     16x̂²(1 − x̂)²   ...   ... |
       | 0                            0                       0     0                0     ... |
       | 0                            0                       0     0                0     0   | dx̂,

which, along with JΓ = sqrt(x12² + y12²) and since an exact integration is easy, reduces to:

JΓ |  2/15   ...    ...   ...    ...  ... |
   | −1/30   2/15   ...   ...    ...  ... |
   |  0      0      0     ...    ...  ... |
   |  1/15   1/15   0     8/15   ...  ... |
   |  0      0      0     0      0    ... |
   |  0      0      0     0      0    0   |
Finally, the complete stiffness matrix is the sum of both types of contributions seen above.

The boundary contribution of the edges other than the one explained above is obtained in the same way; for the edge A2A3, one has JΓ = sqrt(x23² + y23²) and the potential contribution is:

JΓ | 0    ...    ...    ...   ...    ... |
   | 0    2/15   ...    ...   ...    ... |
   | 0   −1/30   2/15   ...   ...    ... |
   | 0    0      0      0     ...    ... |
   | 0    1/15   1/15   0     8/15   ... |
   | 0    0      0      0     0      0   |

while the edge A1A3 potentially contributes through the term:

JΓ |  2/15   ...   ...    ...   ...   ...  |
   |  0      0     ...    ...   ...   ...  |
   | −1/30   0     2/15   ...   ...   ...  |
   |  0      0     0      0     ...   ...  |
   |  0      0     0      0     0     ...  |
   |  1/15   0     1/15   0     0     8/15 |

with JΓ = sqrt(x13² + y13²), as it is easy to see.


• For the mass matrix, one has to calculate ∫_K̂ t[P̂][P̂] dK̂, the Jacobian, being constant, appearing as a factor in front of this integral. This gives integrals expressed as ∫_K̂ ... dK̂ and then calculated as ∫₀¹ ∫₀^{1−x̂} ... dŷ dx̂. We saw a direct formula for this kind of integral when basic polynomials of degree 1 are considered. It is therefore sufficient to express the basic polynomials of degree 2 from those of degree 1, and then apply the formula. Denoting by λi the degree 1 polynomials on K̂, one has:

λ1 = (1 − x̂ − ŷ),   λ2 = x̂,   λ3 = ŷ.

Therefore:

p1 = (1 − x̂ − ŷ)(1 − 2x̂ − 2ŷ) = λ1² − λ1λ2 − λ1λ3,
p2 = x̂(2x̂ − 1) = 2λ2² − λ2,
p3 = ŷ(2ŷ − 1) = 2λ3² − λ3,
p4 = 4x̂(1 − x̂ − ŷ) = 4λ1λ2,
p5 = 4x̂ŷ = 4λ2λ3,
p6 = 4(1 − x̂ − ŷ)ŷ = 4λ1λ3,

and the formula that will be used (seen above, relation [6.12]) is written here as:

∫_K̂ λ1^α λ2^β λ3^γ dK̂ = α! β! γ! / (2 + α + β + γ)!.
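As with [6.12], this barycentric formula can be checked in exact arithmetic; the snippet below (ours) reduces the triangle integral to 1D binomial expansions and also recovers the elementary integrals I1, Ix̂, Ix̂², Ix̂ŷ used for the stiffness tables.

```python
from fractions import Fraction
from math import comb, factorial

def tri_monomial(p, q):
    # ∫_0^1 ∫_0^{1-x} x^p y^q dy dx = (1/(q+1)) ∫_0^1 x^p (1-x)^(q+1) dx,
    # with (1-x)^(q+1) expanded by the binomial theorem
    return sum(Fraction((-1) ** k * comb(q + 1, k), (q + 1) * (p + k + 1))
               for k in range(q + 2))

def bary_integral(a, b, c):
    # ∫ λ1^a λ2^b λ3^c over the reference triangle, λ1 = 1-x-y expanded
    tot = Fraction(0)
    for k in range(a + 1):
        for m in range(k + 1):
            tot += (-1) ** k * comb(a, k) * comb(k, m) * tri_monomial(b + m, c + k - m)
    return tot

def closed_form(a, b, c):
    # a! b! c! / (2 + a + b + c)!
    return Fraction(factorial(a) * factorial(b) * factorial(c),
                    factorial(2 + a + b + c))

for a in range(4):
    for b in range(4):
        for c in range(4):
            assert bary_integral(a, b, c) == closed_form(a, b, c)

# The four elementary integrals of the stiffness tables:
print(closed_form(0, 0, 0), closed_form(0, 1, 0),
      closed_form(0, 2, 0), closed_form(0, 1, 1))   # 1/2 1/6 1/12 1/24
```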

We can now look at the value of the integrals sought by calculating only those strictly neces-
sary, in fact, successively:
MK11 = MK22 = MK33 ,
MK44 = MK55 = MK66 ,
MK21 = MK31 = MK32 ,
MK54 = MK64 = MK65 ,

MK43 = MK51 = MK62 ,


MK41 = MK42 = MK52 = MK53 = MK61 = MK63 .
Therefore only one integral per type will be calculated, the easiest one (calculation by hand). This gives:

MK22 = ∫_K̂ x̂²(2x̂ − 1)² J dK̂ = ∫_K̂ (4λ2⁴ − 4λ2³ + λ2²) J dK̂ = J/60 = SK/30,

MK55 = ∫_K̂ 16x̂²ŷ² J dK̂ = ∫_K̂ 16λ2²λ3² J dK̂ = 4J/45 = 8 SK/45,

MK32 = ∫_K̂ ŷ(2ŷ − 1)x̂(2x̂ − 1) J dK̂ = ∫_K̂ λ2λ3(2λ2 − 1)(2λ3 − 1) J dK̂ = −J/360 = −SK/180,

MK64 = ∫_K̂ 16ŷx̂(1 − x̂ − ŷ)² J dK̂ = ∫_K̂ 16λ1²λ2λ3 J dK̂ = 2J/45 = 4 SK/45,

MK43 = ∫_K̂ 4x̂(1 − x̂ − ŷ)ŷ(2ŷ − 1) J dK̂ = ∫_K̂ 4λ1λ2λ3(2λ3 − 1) J dK̂ = −J/90 = −SK/45,

MK41 = ∫_K̂ 4x̂(1 − x̂ − ŷ)(1 − x̂ − ŷ)(1 − 2x̂ − 2ŷ) J dK̂ = ∫_K̂ 4λ1²λ2(1 − 2λ2 − 2λ3) J dK̂ = 0.
It gives the following matrix:
⎡ ⎤
6 ... ... ... ... ...
⎢ −1 6 ... ... ... ... ⎥
⎢ ⎥
SK ⎢ −1 −1 6 ... ... ... ⎥
⎢ ⎥,
180 ⎢ 0 0 −4 32 ... ... ⎥
⎢ ⎥
⎣ −4 0 0 16 32 ... ⎦
0 −4 0 16 16 32

and we verify the usual property, Σ_{I,J} MKIJ = SK.


• The right-hand side and its two terms. We take an exact integration; for the term over K̂, we have, for the integrals of the six polynomials:

(1 − x̂ − ŷ)(1 − 2x̂ − 2ŷ) = λ1² − λ1λ2 − λ1λ3 → 0,
x̂(2x̂ − 1) = 2λ2² − λ2 → 0,
ŷ(2ŷ − 1) = 2λ3² − λ3 → 0,
4x̂(1 − x̂ − ŷ) = 4λ1λ2 → 1/6,
4x̂ŷ = 4λ2λ3 → 1/6,
4(1 − x̂ − ŷ)ŷ = 4λ1λ3 → 1/6,

and that part of the right-hand side is equal to:

(J F/6) { 0 ; 0 ; 0 ; 1 ; 1 ; 1 }   or   (SK F/3) { 0 ; 0 ; 0 ; 1 ; 1 ; 1 }   or still   (SK/3) { 0 ; 0 ; 0 ; F4 ; F5 ; F6 }.

If F is more finely defined, for instance {F} = [P]{Fi}, then the mass matrix is used and one has {BK} = [MK]{Fi}, that is:

(SK/180) {  6F1 − F2 − F3 − 4F5 ;
           −F1 + 6F2 − F3 − 4F6 ;
           −F1 − F2 + 6F3 − 4F4 ;
           −4F3 + 32F4 + 16F5 + 16F6 ;
           −4F1 + 16F4 + 32F5 + 16F6 ;
           −4F2 + 16F4 + 16F5 + 32F6 }.
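A quick consistency check (ours): applying [MK] to the constant vector Fi = 1 must reproduce the constant right-hand side (0, 0, 0, 1/3, 1/3, 1/3)·SK found above, and the sum of all coefficients must be SK.

```python
from fractions import Fraction

# Lower triangle of the P2 mass matrix as given above (the factor S_K
# is left out, so entries are L[i][j]/180 of S_K)
L = [[6], [-1, 6], [-1, -1, 6], [0, 0, -4, 32],
     [-4, 0, 0, 16, 32], [0, -4, 0, 16, 16, 32]]
M = [[Fraction(L[max(i, j)][min(i, j)], 180) for j in range(6)]
     for i in range(6)]

rows = [sum(r) for r in M]
# [M_K]{1,...,1} = (0, 0, 0, 1/3, 1/3, 1/3) * S_K
assert rows == [0, 0, 0, Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]
# total mass: sum of all coefficients = S_K
assert sum(rows) == 1
print(rows)
```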

The boundary term (for example, for the edge A1A2) is calculated by exact integration. If the datum f is constant, we find:

(JΓ/6) { f ; f ; 0 ; 4f ; 0 ; 0 },

with JΓ = sqrt(x12² + y12²) as before, and similar expressions for the other edges. If f is not a constant and is expressed as (1 − x̂)(1 − 2x̂)f1 + x̂(2x̂ − 1)f2 + 4x̂(1 − x̂)f4, this result becomes:

(sqrt(x12² + y12²)/30) { 4f1 − f2 + 2f4 ; −f1 + 4f2 + 2f4 ; 0 ; 2f1 + 2f2 + 16f4 ; 0 ; 0 }.

6.2.4.3. Lagrange isoparametric (curved) triangle of degree 2 with six nodes


Polynomials and their derivatives [P̂] and [DP̂] are identical to those of the straight-sided triangle with six nodes. This is where the similarity ends. As a matter of fact, one has:

K = { M = FK(M̂) = Σ_{i=1}^{6} p̂i(M̂) Ai , M̂ ∈ K̂ },

and, a priori, all six nodes contribute. The geometrical transformation no longer simplifies, and it follows that none of the quantities below will be constant. For the stiffness matrix, the presence of the (Jacobian) determinant in the denominator will impose the use of a quadrature formula. For the mass matrix and the right-hand side, the determinant remains in the numerator, resulting, possibly (this will be specified hereafter), in a reasonably exact integration.

• The stiffness matrix and its two terms.

[DF] = [DP̂] {Ai} = | −3 + 4x̂ + 4ŷ   4x̂ − 1   0         4 − 8x̂ − 4ŷ   4ŷ    −4ŷ            |
                   | −3 + 4x̂ + 4ŷ   0         4ŷ − 1   −4x̂             4x̂   4 − 4x̂ − 8ŷ  | {Ai}.

The coefficient of index (11) is expressed according to the segments [14], [42] and [65] by:

(3 − 4x̂ − 4ŷ) x14 + (4x̂ − 1) x42 + 4ŷ x65.

The coefficient of index (21) is expressed according to the segments [16], [63] and [45] by:

(3 − 4x̂ − 4ŷ) x16 + (4ŷ − 1) x63 + 4x̂ x45.

Therefore:

[DF] = | (3 − 4x̂ − 4ŷ)x14 + (4x̂ − 1)x42 + 4ŷx65   (3 − 4x̂ − 4ŷ)y14 + (4x̂ − 1)y42 + 4ŷy65 |
       | (3 − 4x̂ − 4ŷ)x16 + (4ŷ − 1)x63 + 4x̂x45   (3 − 4x̂ − 4ŷ)y16 + (4ŷ − 1)y63 + 4x̂y45 |

then, by denoting a = 3 − 4x̂ − 4ŷ, b = 4x̂ − 1 and c = 4ŷ − 1:

[DF] = | a x14 + b x42 + 4ŷ x65   a y14 + b y42 + 4ŷ y65 |
       | a x16 + c x63 + 4x̂ x45   a y16 + c y63 + 4x̂ y45 |

and the determinant is equal to:

J = < a² A14∧A16 + ca A14∧A63 + 4x̂a A14∧A45
    + ab A42∧A16 + cb A42∧A63 + 4x̂b A42∧A45
    + 4aŷ A65∧A16 + 4cŷ A65∧A63 + 16x̂ŷ A65∧A45 , n >

with Aij = AiAj. This determinant9 is elegantly written as:

J = J(x̂, ŷ) = < {a A14 + b A42 + 4ŷ A65} ∧ {a A16 + c A63 + 4x̂ A45} , n >.

9. If we integrate this polynomial on K̂, we find the surface area of the triangle K. The result can be considered astonishing; indeed, one finds only seven terms, that is:

< (1/2)A14∧A16 − (1/6)A14∧A63 − (1/6)A42∧A16 − (1/6)A42∧A63 + (2/3)A42∧A45 + (2/3)A65∧A63 + (2/3)A65∧A45 , n >.

Again the fact that (1/2)A14∧A16 = (2/3)A14∧A16 − (1/6)A14∧A16 is used, which allows for grouping these vector products, giving a particularly simple result, namely:

(2/3)A14∧A16 + (2/3)A42∧A45 + (2/3)A65∧A63 + (2/3)A65∧A45 − (1/6)A12∧A13.

Then we calculate [DF]⁻¹, that is:

(1/J) |  a y16 + c y63 + 4x̂ y45   −a y14 − b y42 − 4ŷ y65 |
      | −a x16 − c x63 − 4x̂ x45    a x14 + b x42 + 4ŷ x65 |

The stiffness matrix, for its first term, is equal to:

AK = ∫_K̂ (1/J) t[DP̂] t[DF]⁻¹ [DF]⁻¹ [DP̂] dK̂.

This expression is replaced by a quadrature whose nodes are the edge nodes, with the weight 1/6. These quantities therefore have to be estimated at those nodes, that is:

[DP](1/2, 0) = | −1  1   0   0  0  0 |
               | −1  0  −1  −2  2  2 |

[DP](1/2, 1/2) = | 1  1  0  −2  2  −2 |
                 | 1  0  1  −2  2  −2 |

Therefrom is inferred the formula:

SK = (4/3) Σ_i SKi − (1/3) SK¹,

with Ki the (straight-sided) triangles having the nodes as vertices (for example, K1 = [A1A4A6]) and SK¹ the area of the triangle having the three vertices of the initial curved triangle as vertices. The cost of the computation is minimal, in fact five vector products. This exact result allows the proposed quality formula (Volume 2, Chapter 7) to be refined by replacing SK ≈ Σ_i SKi with the above expression.
If we continue in this digression (on the calculation of the surface area), it is known that for this particular element, and only for it, one has J = Σ_i p̂i Ji, thus SK = ∫_K̂ J dK̂ = ∫_K̂ Σ_i p̂i Ji dK̂, which we know how to calculate exactly. It follows that:

SK = (1/6) {J4 + J5 + J6}.

As an exercise, we may verify that this sum yields the same value.
Is this exact area calculation, easy here for this specific element, possible for the other elements and for any other order? The answer is affirmative. To find the result, it is clearly more convenient to write the elements in Bézier form (Volume 1, Chapters 1 and 3). For the triangle of degree d, we find [Feuillet-2019]:

SK = (2/((q + 1)(q + 2))) Σ_{i+j+k=q} Nijk/2,

with Nijk the control coefficients of the Jacobian polynomial and q = 2(d − 1) the degree of this polynomial; at degree 2, the cost is nine vector products. To verify, at degree 2, that this expression is identical to the initial one, we show that it is simply written as (1/6){J4 + J5 + J6} by writing the Nijk according to the J in question.

[DP](0, 1/2) = | −1  −1  0  2  2  −2 |
               | −1   0  1  0  0   0 |

J4 = J(1/2, 0) = < A12 ∧ {A16 − A63 + 2A45} , n >,
J5 = J(1/2, 1/2) = < {−A14 + A42 + 2A65} ∧ {−A16 + A63 + 2A45} , n >,
J6 = J(0, 1/2) = < {A14 − A42 + 2A65} ∧ A13 , n >;
it should be noted that, for a straight element with nodes at the midpoints of the edges, one finds
J4 = J5 = J6 =< A12 ∧ A13 , → −n >. Before continuing, we shall explain this Jacobian at the
three vertices, as these values will be needed in the following. The mechanical application of the
general formula gives:

J1 = < 9 A14∧A16 − 3 A14∧A63 − 3 A42∧A16 + A42∧A63 , n >,

which is an expression that is not very legible, and which one can, through a few judicious groupings, write as:

J1 = < 12 A14∧A16 − 3 A12∧A13 + 4 A42∧A63 , n >,

namely an expression that is much more readable and that allows writing, without thinking, the other two Jacobians, that is:

J2 = < 12 A25∧A24 − 3 A12∧A13 + 4 A53∧A41 , n >,

J3 = < 12 A36∧A35 − 3 A12∧A13 + 4 A61∧A52 , n >.

Finally, another notation is possible, involving only a single vector product, namely J1 = < (3 A14 − A42) ∧ (3 A16 − A63) , n >, and similar expressions for the other two Jacobians.
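The footnote's area formula SK = (J4 + J5 + J6)/6 can be tested numerically: J is a quadratic polynomial, so an independent degree-2 interior-point rule must return the same integral. The sketch below (node coordinates ours, with a genuinely curved element) does exactly that.

```python
# Curved P2 triangle: Jacobian from [DF] = [DP-hat]{A_i}, and two
# degree-2-exact quadratures that must agree on the area.
def dF(x, y, N):
    dx = [-3+4*(x+y), 4*x-1, 0.0, 4-8*x-4*y, 4*y, -4*y]
    dy = [-3+4*(x+y), 0.0, 4*y-1, -4*x, 4*x, 4-4*x-8*y]
    a = [sum(dx[i]*N[i][k] for i in range(6)) for k in (0, 1)]
    b = [sum(dy[i]*N[i][k] for i in range(6)) for k in (0, 1)]
    return a, b

def jac(x, y, N):
    a, b = dF(x, y, N)
    return a[0]*b[1] - a[1]*b[0]

# Vertices plus genuinely curved (non-midpoint) edge nodes, ours
N = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
     (0.55, -0.1), (0.6, 0.55), (-0.08, 0.5)]

# S_K = (J4 + J5 + J6)/6 : edge-node rule, exact for the quadratic J
area_nodes = (jac(0.5, 0.0, N) + jac(0.5, 0.5, N) + jac(0.0, 0.5, N)) / 6
# Independent interior 3-point rule, also exact for degree 2
area_quad = sum(jac(x, y, N) for (x, y) in
                [(1/6, 1/6), (2/3, 1/6), (1/6, 2/3)]) / 6
assert abs(area_nodes - area_quad) < 1e-12
assert area_nodes > 0
print(area_nodes)
```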

After this aside, we continue the calculations for the stiffness matrix:

[DF]⁻¹(1/2, 0) = (1/J4) |  y16 − y63 + 2y45   −y12 |
                        | −x16 + x63 − 2x45    x12 |

[DF]⁻¹(1/2, 1/2) = (1/J5) | −y16 + y63 + 2y45    y14 − y42 − 2y65 |
                          |  x16 − x63 − 2x45   −x14 + x42 + 2x65 |

[DF]⁻¹(0, 1/2) = (1/J6) |  y13   −y14 + y42 − 2y65 |
                        | −x13    x14 − x42 + 2x65 |

We proceed with calculating the various [DF]⁻¹[DP̂]; for the first point of the quadrature:

(1/J4) |  y16 − y63 + 2y45   −y12 | | −1  1   0   0  0  0 |
       | −x16 + x63 − 2x45    x12 | | −1  0  −1  −2  2  2 |

= (1/J4) | −y16 + y63 − 2y45 + y12    y16 − y63 + 2y45    y12    2y12   −2y12   −2y12 |
         |  x16 − x63 + 2x45 − x12   −x16 + x63 − 2x45   −x12   −2x12    2x12    2x12 |

we introduce here the particular vector:

V4 = A14 − A42 + 2 A65,

and its counterpart:

V6 = A16 − A63 + 2 A45,

whose components will be denoted xv4, yv4 and xv6, yv6. With this new notation, the method for the first and for the last nodes of the quadrature will be similar. We look at the first node, namely:

[DF]⁻¹[DP̂] = (1/J4) | −yv6 + y12    yv6    y12    2y12   −2y12   −2y12 |
                    |  xv6 − x12   −xv6   −x12   −2x12    2x12    2x12 |

and we write the transpose:

(1/J4) | −yv6 + y12    xv6 − x12 |
       |  yv6         −xv6       |
       |  y12         −x12       |
       |  2y12        −2x12      |
       | −2y12         2x12      |
       | −2y12         2x12      |
from which the expression (for this integration node) of the coefficients (up to the factor 1/J4) is deduced:
−−→ −
→ −−→ − →
AK11 = ||A12 ||2 + || V 6 ||2 − 2 < A12 , V 6 >,

− −−→ →− −

AK21 = −|| V 6 ||2 + < A12 , V 6 >, AK22 = || V 6 ||2 ,
−−→ −−→ −→ −−→ →− −−→
AK31 = ||A12 ||2 − < A12 , V 6 >, AK32 =< A12 , V 6 >, AK33 = ||A12 ||2 ,
−−→ −−→ →− −−→ −→
AK41 = 2 ||A12 ||2 − 2 < A12 , V 6 >, AK42 = 2 < A12 , V 6 >,
−−→ −−→
AK43 = 2 ||A12 ||2 , AK44 = 4 ||A12 ||2 ,
−−→ −−→ − → −−→ −→ −−→
AK51 = −2 ||A12 ||2 + 2 < A12 , V 6 >, AK52 = −2 < A12 , V 6 >, AK53 = −2||A12 ||2 ,
−−→ −−→
AK54 = −4||A12 ||2 , AK55 = 4 ||A12 ||2 ,
−−→ −−→ →− −−→ − →
AK61 = −2 ||A12 ||2 + 2 < A12 , V 6 >, AK62 = −2 < A12 , V 6 >,
−−→ −−→ −−→ −−→
AK63 = −2||A12 ||2 , AK64 = −4||A12 ||2 , AK65 = 4||A12 ||2 , AK66 = 4 ||A12 ||2 .
The value of this geometric notation is that one mechanically has the contribution to the third
−−→ −−→ →
− −

integration point, one just has to replace A12 with A13 and V 6 with V 4 and place the contribu-
tions at the right indices. Whether there are signs that have to be changed as well is a question
that arises. To do so, one just has to observe the calculation of:
ï ò
1 −y13 + yv4 −y13 −yv4 2y13 2y13 −2y13
,
J6 x13 − xv4 x13 xv4 −2x13 −2x13 2x13

we deduce the solution thereof: using the text editor, we copy the above result, replace, then restore the signs and permute the indices as necessary. That is (up to the factor 1/J6):
−−→ −
→ −−→ − →
AK11 = ||A13 ||2 + || V 4 ||2 − 2 < A13 , V 4 >,
−−→ −−→ →− −−→
AK21 = −||A13 ||2 − < A13 , V 4 >, AK22 = ||A13 ||2 ,

→ −−→ − → −−→ − → −

AK31 = −|| V 4 ||2 + < A13 , V 4 >, AK32 =< A13 , V 4 >, AK33 = || V 4 ||2 ,
−−→ −−→ →
− −−→
AK41 = −2 ||A13 ||2 + 2 < A13 , V 4 >, AK42 = −2||A13 ||2 ,
−−→ → − −−→
AK43 = −2 < A13 , V 4 >, AK44 = 4 ||A13 ||2 ,
−−→ −−→ − → −−→ −−→ − →
AK51 = −2 ||A13 ||2 + 2 < A13 , V 4 >, AK52 = −2||A13 ||2 , AK53 = −2 < A13 , V 4 >,
−−→ −−→
AK54 = 4||A13 ||2 , AK55 = 4 ||A13 ||2 ,
−−→ −−→ →
− −−→
AK61 = 2 ||A13 ||2 − 2 < A13 , V 4 >, AK62 = 2||A13 ||2 ,
−−→ − → −−→ −−→ −−→
AK63 = 2 < A13 , V 4 >, AK64 = −4||A13 ||2 , AK65 = −4||A13 ||2 , AK66 = 4 ||A13 ||2 .

Nonetheless, for the second integration node, the whole process has to be repeated because the result cannot be intuited from the others (the vectors to be introduced are not V4 and V6, although quite close at first sight). We compute the product:
(1/J5) | −y16 + y63 + 2y45    y14 − y42 − 2y65 | | 1  1  0  −2  2  −2 |
       |  x16 − x63 − 2x45   −x14 + x42 + 2x65 | | 1  0  1  −2  2  −2 |

or, with appropriate notations:

(1/J5) | −yw6    yw4 | | 1  1  0  −2  2  −2 |
       |  xw6   −xw4 | | 1  0  1  −2  2  −2 |

by introducing both vectors:

W4 = A14 − A42 − 2 A65   and   W6 = A16 − A63 − 2 A45,

whose components are denoted xw4, etc. Therefore, up to a factor, we find:

| −yw6 + yw4   −yw6    yw4   −2(−yw6 + yw4)   2(−yw6 + yw4)   −2(−yw6 + yw4) |
|  xw6 − xw4    xw6   −xw4   −2(xw6 − xw4)    2(xw6 − xw4)    −2(xw6 − xw4)  |

which, multiplied by its transpose,

| −yw6 + yw4        xw6 − xw4      |
| −yw6              xw6            |
|  yw4             −xw4            |
| −2(−yw6 + yw4)   −2(xw6 − xw4)   |
|  2(−yw6 + yw4)    2(xw6 − xw4)   |
| −2(−yw6 + yw4)   −2(xw6 − xw4)   |

makes it possible to obtain the coefficients (only a few have to be effectively calculated; the others are inferred therefrom):
\[
AK_{11} = \|\vec W_4\|^2 + \|\vec W_6\|^2 - 2\langle \vec W_4, \vec W_6\rangle,
\]
which we denote by $a_{11}$ in what follows (since it is met many times):
\[
\begin{aligned}
AK_{21} &= \|\vec W_6\|^2 - \langle \vec W_4, \vec W_6\rangle, \quad AK_{22} = \|\vec W_6\|^2,\\
AK_{31} &= \|\vec W_4\|^2 - \langle \vec W_4, \vec W_6\rangle, \quad AK_{32} = -\langle \vec W_4, \vec W_6\rangle, \quad AK_{33} = \|\vec W_4\|^2,\\
AK_{42} &= -2\|\vec W_6\|^2 + 2\langle \vec W_4, \vec W_6\rangle, \quad AK_{52} = 2\|\vec W_6\|^2 - 2\langle \vec W_4, \vec W_6\rangle, \quad AK_{62} = -2\|\vec W_6\|^2 + 2\langle \vec W_4, \vec W_6\rangle,\\
AK_{43} &= -2\|\vec W_4\|^2 + 2\langle \vec W_4, \vec W_6\rangle, \quad AK_{53} = 2\|\vec W_4\|^2 - 2\langle \vec W_4, \vec W_6\rangle, \quad AK_{63} = -2\|\vec W_4\|^2 + 2\langle \vec W_4, \vec W_6\rangle,
\end{aligned}
\]
and the other coefficients:
\[
\begin{aligned}
AK_{41} &= -2\, a_{11}, \quad AK_{44} = 4\, a_{11},\\
AK_{51} &= 2\, a_{11}, \quad AK_{54} = -4\, a_{11}, \quad AK_{55} = 4\, a_{11},\\
AK_{61} &= -2\, a_{11}, \quad AK_{64} = 4\, a_{11}, \quad AK_{65} = -4\, a_{11}, \quad AK_{66} = 4\, a_{11}.
\end{aligned}
\]
This is the time to make a few remarks and comments. For each integration point, the matrix
satisfies the classical property that stipulates that the sum of the coefficients per column (per row)
is zero. The operational cost is very low, 3 dot products and combinations thereof. If we execute
the calculation (without making it explicit, therefore) by a program, the cost is necessarily higher.
Finally, we have verified that the sum of the three contributions, for a given coefficient, recovers what we found for straight-sided triangles when our curved triangle is assumed to be straight-sided.
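As a sanity check, the $6 \times 6$ contribution at an integration point can be rebuilt numerically from the $2 \times 6$ matrix above; this is a minimal sketch, and the sample coordinates for $\vec A_{13}$ and $\vec V_4$ are illustrative, not taken from the text:

```python
import numpy as np

# Illustrative components for the vectors A13 and V4 (arbitrary sample values).
x13, y13 = 0.7, 0.2
xv4, yv4 = 0.3, -0.4

# The 2 x 6 matrix of the first integration point (the 1/J6 factor is
# omitted, as in the text: coefficients "up to a factor").
M = np.array([
    [-y13 + yv4, -y13, -yv4,  2*y13,  2*y13, -2*y13],
    [ x13 - xv4,  x13,  xv4, -2*x13, -2*x13,  2*x13],
])

AK = M.T @ M  # 6 x 6 stiffness contribution, up to the factor 1/J6

# Classical property: the coefficients of each row (column) sum to zero.
assert np.allclose(AK.sum(axis=0), 0.0) and np.allclose(AK.sum(axis=1), 0.0)

# AK11 and AK44 expressed with the dot products of A13 and V4.
A13, V4 = np.array([x13, y13]), np.array([xv4, yv4])
assert np.isclose(AK[0, 0], A13 @ A13 + V4 @ V4 - 2 * (A13 @ V4))
assert np.isclose(AK[3, 3], 4 * (A13 @ A13))
```

The three dot products mentioned in the remarks above are exactly what the closed-form coefficients reuse; the matrix product recomputes them implicitly.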

For the second term of the stiffness matrix, one must calculate $\int_{\hat\Gamma} {}^t[\hat P]\,[\hat P]\, J_\Gamma \, d\partial\hat K$. To clarify, let us take as the edge the image of the edge $\hat y = 0$. Therefore, we find the expression seen for the right-angled triangle, with the notable difference that the Jacobian, no longer being constant, remains in the integral. As such, one has to calculate the integral of:
\[
\int_0^1 \begin{bmatrix}
(1-\hat x)^2(1-2\hat x)^2 & \ldots & \ldots & \ldots & \ldots & \ldots\\
-\hat x(1-\hat x)(1-2\hat x)^2 & \hat x^2(2\hat x-1)^2 & \ldots & \ldots & \ldots & \ldots\\
0 & 0 & 0 & \ldots & \ldots & \ldots\\
4\hat x(1-\hat x)^2(1-2\hat x) & 4\hat x^2(1-\hat x)(2\hat x-1) & 0 & 16\hat x^2(1-\hat x)^2 & \ldots & \ldots\\
0 & 0 & 0 & 0 & 0 & \ldots\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix} J_\Gamma \, d\hat x,
\]

with JΓ = JΓ (x̂) the Jacobian relating to this edge. To obtain this expression, we look at the
transformation γ(x̂) defined by γ(x̂) = FK (x̂, 0), that is:

γ(x̂) = (1 − x̂)(1 − 2x̂)A1 + x̂(2x̂ − 1)A2 + 4x̂(1 − x̂)A4 .



Then:
\[
\gamma'(\hat x) = (4\hat x-3)A_1 + (4\hat x-1)A_2 + (4-8\hat x)A_4 = (3-4\hat x)\vec A_{14} + (4\hat x-1)\vec A_{42},
\]
whose norm, which is $J_\Gamma$, is equal to:
\[
\|\gamma'(\hat x)\| = \sqrt{(3-4\hat x)^2\|\vec A_{14}\|^2 + (4\hat x-1)^2\|\vec A_{42}\|^2 + 2(3-4\hat x)(4\hat x-1)\langle \vec A_{14}, \vec A_{42}\rangle}.
\]

Considering their form, the integrals will be evaluated by a quadrature. The nodes are the two ends and the edge midpoint; the weights are $\frac16$, $\frac46$, $\frac16$. Given this choice of nodes, each sum reduces to a single term, so all we have to do is calculate $J_\Gamma$ at these nodes, and we obtain the contribution (excluding the physical coefficient) of the edge $A_1A_2$:
\[
\frac16\begin{bmatrix}
J_\Gamma(0) & \ldots & \ldots & \ldots & \ldots & \ldots\\
0 & J_\Gamma(1) & \ldots & \ldots & \ldots & \ldots\\
0 & 0 & 0 & \ldots & \ldots & \ldots\\
0 & 0 & 0 & 4J_\Gamma(\tfrac12) & \ldots & \ldots\\
0 & 0 & 0 & 0 & 0 & \ldots\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},
\]
with $J_\Gamma(0) = \sqrt{(3x_{14}-x_{42})^2 + (3y_{14}-y_{42})^2}$, $J_\Gamma(1) = \sqrt{(3x_{42}-x_{14})^2 + (3y_{42}-y_{14})^2}$ and $J_\Gamma(\tfrac12) = \sqrt{x_{12}^2 + y_{12}^2}$. The contribution of the other two edges, if they are involved, is obtained in the same way. One successively finds:


\[
\frac16\begin{bmatrix}
0 & \ldots & \ldots & \ldots & \ldots & \ldots\\
0 & J_\Gamma(0) & \ldots & \ldots & \ldots & \ldots\\
0 & 0 & J_\Gamma(1) & \ldots & \ldots & \ldots\\
0 & 0 & 0 & 0 & \ldots & \ldots\\
0 & 0 & 0 & 0 & 4J_\Gamma(\tfrac12) & \ldots\\
0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},
\]
where $J_\Gamma(0) = \sqrt{(3x_{25}-x_{53})^2 + (3y_{25}-y_{53})^2}$, $J_\Gamma(1) = \sqrt{(-x_{25}+3x_{53})^2 + (-y_{25}+3y_{53})^2}$ and $J_\Gamma(\tfrac12) = \sqrt{x_{23}^2 + y_{23}^2}$, and finally for the last edge:

\[
\frac16\begin{bmatrix}
J_\Gamma(0) & \ldots & \ldots & \ldots & \ldots & \ldots\\
0 & 0 & \ldots & \ldots & \ldots & \ldots\\
0 & 0 & J_\Gamma(1) & \ldots & \ldots & \ldots\\
0 & 0 & 0 & 0 & \ldots & \ldots\\
0 & 0 & 0 & 0 & 0 & \ldots\\
0 & 0 & 0 & 0 & 0 & 4J_\Gamma(\tfrac12)
\end{bmatrix},
\]
with, here: $J_\Gamma(0) = \sqrt{(3x_{16}-x_{63})^2 + (3y_{16}-y_{63})^2}$, $J_\Gamma(1) = \sqrt{(-x_{16}+3x_{63})^2 + (-y_{16}+3y_{63})^2}$ and $J_\Gamma(\tfrac12) = \sqrt{x_{13}^2 + y_{13}^2}$.
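These closed forms for the edge Jacobian can be checked numerically against $\|\gamma'(\hat x)\|$; a minimal sketch for the first edge, where the node coordinates are arbitrary sample values:

```python
import math

# Arbitrary sample nodes for a curved P2 edge: ends A1, A2 and midnode A4.
A1, A2, A4 = (0.0, 0.0), (1.0, 0.0), (0.55, 0.12)

def gamma_prime(t):
    """Derivative of gamma(t) = (1-t)(1-2t)A1 + t(2t-1)A2 + 4t(1-t)A4."""
    return tuple((4*t - 3)*a + (4*t - 1)*b + (4 - 8*t)*c
                 for a, b, c in zip(A1, A2, A4))

def norm(v):
    return math.hypot(*v)

# Edge vectors A14 = A4 - A1, A42 = A2 - A4, A12 = A2 - A1.
x14, y14 = A4[0] - A1[0], A4[1] - A1[1]
x42, y42 = A2[0] - A4[0], A2[1] - A4[1]
x12, y12 = A2[0] - A1[0], A2[1] - A1[1]

# Closed forms of J_Gamma at the three quadrature nodes of the edge.
assert math.isclose(norm(gamma_prime(0.0)), math.hypot(3*x14 - x42, 3*y14 - y42))
assert math.isclose(norm(gamma_prime(1.0)), math.hypot(3*x42 - x14, 3*y42 - y14))
assert math.isclose(norm(gamma_prime(0.5)), math.hypot(x12, y12))
```

At the midpoint, $\gamma'(\tfrac12)$ reduces to $\vec A_{12}$, which is why the midnode value only involves the straight chord.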


• For the mass matrix, we need to calculate $\int_{\hat K} {}^t[\hat P]\,[\hat P]\, J \, d\hat K$. The Jacobian is now a polynomial (and no longer a simple constant), thus participating in the integral. Let us recall the expression of this polynomial:
\[
J = J(\hat x,\hat y) = \langle \{a\vec A_{14} + b\vec A_{42} + 4\hat y\,\vec A_{65}\} \wedge \{a\vec A_{16} + c\vec A_{63} + 4\hat x\,\vec A_{45}\}, \vec n\rangle,
\]
with $a = 3-4\hat x-4\hat y$, $b = 4\hat x-1$ and $c = 4\hat y-1$. One can therefore try to use relation [6.12], but it is necessary, beforehand, to make explicit all the terms of each integral (products of the basis polynomials multiplied by the terms of the Jacobian) or, more simply, to rely on a quadrature, choosing it (integration points and weights) with respect to the degree of the functions to be integrated (see below).

 Exact integration when expanding $J$. Consider the example of a matrix coefficient, that of index 22 (not too complicated). It is necessary to calculate $\int_{\hat K} \hat x^2(2\hat x-1)^2\, J \, d\hat K$, that is:
\[
\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle (a\vec A_{14} + b\vec A_{42} + 4\hat y\,\vec A_{65}) \wedge (a\vec A_{16} + c\vec A_{63} + 4\hat x\,\vec A_{45}), \vec n\rangle \, d\hat K,
\]
which leads to the following nine integrals:
\[
\begin{aligned}
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle a\vec A_{14} \wedge a\vec A_{16}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} a^2\,\hat x^2(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle a\vec A_{14} \wedge c\vec A_{63}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} ac\,\hat x^2(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle a\vec A_{14} \wedge 4\hat x\,\vec A_{45}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} 4a\,\hat x^3(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle b\vec A_{42} \wedge a\vec A_{16}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} ab\,\hat x^2(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle b\vec A_{42} \wedge c\vec A_{63}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} bc\,\hat x^2(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle b\vec A_{42} \wedge 4\hat x\,\vec A_{45}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} 4b\,\hat x^3(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle 4\hat y\,\vec A_{65} \wedge a\vec A_{16}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} 4a\,\hat y\,\hat x^2(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle 4\hat y\,\vec A_{65} \wedge c\vec A_{63}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} 4c\,\hat y\,\hat x^2(2\hat x-1)^2 \, d\hat K,\\
&\int_{\hat K} \hat x^2(2\hat x-1)^2 \langle 4\hat y\,\vec A_{65} \wedge 4\hat x\,\vec A_{45}, \vec n\rangle \, d\hat K, &&\text{thus } \int_{\hat K} 16\,\hat x^3\hat y(2\hat x-1)^2 \, d\hat K.
\end{aligned}
\]
Meshes and Finite Element Calculations 227

It is then necessary to decompose each polynomial into a sum of powers of the polynomials of degree 1 (therefore only $\hat x$ and $\hat y$ here) and then use relation [6.12]. It will be understood that this calculation, by hand, is tedious, and we shall not proceed completely in this way (we shall only look at this coefficient and rely on Maple). The raw result is as follows:
\[
MK_{22} = \Big\langle \tfrac{13}{1260}\, \vec A_{14}\wedge \vec A_{16} + \tfrac{1}{252}\, \vec A_{14}\wedge \vec A_{63} - \tfrac{1}{35}\, \vec A_{14}\wedge \vec A_{45} - \tfrac{3}{140}\, \vec A_{42}\wedge \vec A_{16} - \tfrac{3}{140}\, \vec A_{42}\wedge \vec A_{63} + \tfrac{11}{105}\, \vec A_{42}\wedge \vec A_{45} - \tfrac{1}{315}\, \vec A_{65}\wedge \vec A_{16} + \tfrac{1}{315}\, \vec A_{65}\wedge \vec A_{63} + \tfrac{2}{105}\, \vec A_{65}\wedge \vec A_{45}, \vec n \Big\rangle;
\]
apart from the fact that the sum of the coefficients is equal to $\frac{1}{15}$, one cannot go much further (there are a total of 21 coefficients and each one presents this aspect, with nine integrations to be calculated), and one must resort to numerical integration.
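These scalar integrals can be checked with exact rational arithmetic, using the classical monomial formula $\int_{\hat K} \hat x^a\hat y^b \, d\hat K = a!\,b!/(a+b+2)!$ on the reference triangle. A small sketch in plain Python (the helper names are ours), with polynomials stored as dictionaries mapping exponent pairs to coefficients:

```python
from fractions import Fraction
from math import factorial

def pmul(p, q):
    """Product of two polynomials {(i, j): coef} in (x, y)."""
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            r[key] = r.get(key, 0) + c1 * c2
    return r

def integrate(p):
    """Exact integral over the reference triangle x, y >= 0, x + y <= 1,
    using int x^i y^j = i! j! / (i + j + 2)!."""
    return sum(Fraction(c * factorial(i) * factorial(j),
                        factorial(i + j + 2)) for (i, j), c in p.items())

a = {(0, 0): 3, (1, 0): -4, (0, 1): -4}   # a = 3 - 4x - 4y
w = {(2, 0): 1, (3, 0): -4, (4, 0): 4}    # x^2 (2x - 1)^2 = 4x^4 - 4x^3 + x^2

# Coefficient of A14 ^ A16 in MK22: integral of a^2 x^2 (2x - 1)^2.
assert integrate(pmul(pmul(a, a), w)) == Fraction(13, 1260)
# Coefficient of A65 ^ A45: integral of 16 x^3 y (2x - 1)^2.
assert integrate(pmul({(1, 1): 16}, w)) == Fraction(2, 105)
```

Exact fractions make this a convenient way to regenerate (or audit) any of the 21 coefficients without computer algebra software.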

 Expanding $J$ on the Jacobians at the nodes
The curved triangle of degree 2 has a special particularity (already mentioned): its Jacobian polynomial is of the same degree and is exactly written as:
\[
J = \sum_i \hat p_i\, J(i).
\]
Therefore, one can write the integrals (here the one of index 22) to be calculated as:
\[
\int_{\hat K} \hat x^2(2\hat x-1)^2\, J \, d\hat K = \int_{\hat K} \hat x^2(2\hat x-1)^2 \sum_i \hat p_i\, J(i) \, d\hat K = \sum_i J(i) \int_{\hat K} \hat x^2(2\hat x-1)^2\, \hat p_i \, d\hat K,
\]
which allows an exact integration with (only) six calculations. Again, this work will be deemed tedious (and we shall not do it), and we resort to numerical integration.

 Numerical integration
As we have seen, the only reasonable method left is numerical integration. Therefore, we write:
\[
\int_{\hat K} {}^t[\hat P]\,[\hat P]\, J \, d\hat K \approx \sum_l \omega_l\, {}^t[\hat P(l)]\,[\hat P(l)]\, J(l);
\]
depending on the choice, the quadrature is approximate or exact (and the sign $\approx$ will be replaced by the sign $=$). If the integration nodes are the nodes (of $\hat K$, with the weight $\frac16$), the matrix is diagonal. So one will rather consider a formula such as:
\[
\int_{\hat K} {}^t[\hat P]\,[\hat P] \sum_k \hat p_k\, J(k) \, d\hat K = \sum_k \int_{\hat K} {}^t[\hat P]\,[\hat P]\,\hat p_k\, J(k) \, d\hat K \approx \sum_k J(k) \sum_l \omega_l\, {}^t[\hat P(l)]\,[\hat P(l)]\,\hat p_k(l),
\]
which directly unveils the Jacobians at the nodes and leads to a full matrix.

Generally speaking, the quadrature to be used, and therefore its nodes and weights, must be chosen in accordance with the degree of the polynomials to be integrated, in order to ensure some consistency. Assuming that the datum (here set to 1, so not appearing) is constant, the polynomial to be integrated is the product of two polynomials of degree 2. One finds an exact numerical integration formula with seven nodes for degree 4, whose nodes are the nodes of $\hat K$ and the point $(\frac13, \frac13)$, with the weights $\frac{1}{40}$, $\frac{1}{40}$, $\frac{1}{40}$, $\frac{1}{15}$, $\frac{1}{15}$, $\frac{1}{15}$ and $\frac{9}{40}$. The advantage of having some of the quadrature nodes identical to the nodes of $\hat K$ is that relatively simple expressions are obtained (because $\hat p_i(j)$ is equal to 0 or 1), and the only values to be calculated are $J(7)$ (based on the $J(k)$) as well as the $\hat p_k(7)$. For the latter, one finds $-\frac19$, $-\frac19$, $-\frac19$, $\frac49$, $\frac49$, $\frac49$, whereas $J(7) = -\frac{J_1+J_2+J_3}{9} + 4\,\frac{J_4+J_5+J_6}{9}$. We can then have the generic form of the coefficient of index $ij$ of the matrix:
\[
MK_{ij} = \sum_k J(k) \sum_l \omega_l\, \hat p_i(l)\, \hat p_j(l)\, \hat p_k(l),
\]

which is written as:
\[
MK_{ij} = \sum_k J(k)\left(\sum_{l=1}^{6} \omega_l\, \hat p_i(l)\, \hat p_j(l)\, \hat p_k(l) + \omega_7\, \hat p_i(7)\, \hat p_j(7)\, \hat p_k(7)\right) = \sum_k J(k)\sum_{l=1}^{6} \omega_l\, \hat p_i(l)\, \hat p_j(l)\, \hat p_k(l) + \omega_7\, \hat p_i(7)\, \hat p_j(7)\, J(7),
\]
which is expressed as:
\[
MK_{ii} = \omega_i J_i + \omega_7\, \hat p_i^2(7)\, J(7) \ \text{ on the diagonal, and } \ MK_{ij} = \omega_7\, \hat p_i(7)\, \hat p_j(7)\, J(7) \ \text{ otherwise.}
\]

Made explicit, the coefficients of the matrix are as follows:
\[
\begin{aligned}
MK_{ii} &= \tfrac{1}{3240}\{81 J_i + 9 J_7\} && \text{for } i = 1, 3,\\
MK_{ii} &= \tfrac{1}{90}\{6 J_i + 4 J_7\} && \text{for } i = 4, 6,\\
MK_{ij} &= \tfrac{1}{360}\, J(7) && \text{for } (i = 2, 3), (j = 1, i-1),\\
MK_{ij} &= -\tfrac{1}{90}\, J(7) && \text{for } (i = 4, 6), (j = 1, 3),\\
MK_{ij} &= \tfrac{4}{90}\, J(7) && \text{for } (i = 5, 6), (j = 4, i-1).
\end{aligned}
\]

If $J$ is constant (straight-sided triangle with evenly distributed nodes), we find the following matrix:
\[
\frac{J}{360}\begin{bmatrix}
10 & \ldots & \ldots & \ldots & \ldots & \ldots\\
1 & 10 & \ldots & \ldots & \ldots & \ldots\\
1 & 1 & 10 & \ldots & \ldots & \ldots\\
-4 & -4 & -4 & 40 & \ldots & \ldots\\
-4 & -4 & -4 & 16 & 40 & \ldots\\
-4 & -4 & -4 & 16 & 16 & 40
\end{bmatrix},
\]
which calls for some remarks. The sum of the coefficients of each of the first three columns (rows) is zero. The sum of all the coefficients is equal to 180, and thus $\frac{J}{360}\sum_{ij} MK_{ij} = \frac{J}{2}$, which is none other than $S_K$, the area of the triangle. However, this matrix is not the one we found in the case of a straight-sided triangle of degree 2, which is probably legitimate.
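A short numerical check of these formulas: building the matrix from the nodal Jacobians with the 7-point quadrature described above, and verifying the constant-$J$ special case (the function name and sample value are ours):

```python
import numpy as np

def p2_mass_from_nodal_jacobians(Jn):
    """6x6 mass matrix of the curved P2 triangle, from the Jacobians
    J(1)..J(6) at the nodes, via the 7-point quadrature of the text."""
    J7 = -(Jn[0] + Jn[1] + Jn[2]) / 9 + 4 * (Jn[3] + Jn[4] + Jn[5]) / 9
    p7 = np.array([-1, -1, -1, 4, 4, 4]) / 9            # p_k at (1/3, 1/3)
    w = np.array([1/40, 1/40, 1/40, 1/15, 1/15, 1/15])  # nodal weights; w7 = 9/40
    M = (9/40) * np.outer(p7, p7) * J7                  # centroid contribution
    M += np.diag(w * Jn)                                # nodal contributions
    return M

# Constant Jacobian: recover the J/360 matrix given in the text.
J = 2.0
M = p2_mass_from_nodal_jacobians(np.full(6, J))
ref = (J / 360) * np.array([
    [10,  1,  1, -4, -4, -4],
    [ 1, 10,  1, -4, -4, -4],
    [ 1,  1, 10, -4, -4, -4],
    [-4, -4, -4, 40, 16, 16],
    [-4, -4, -4, 16, 40, 16],
    [-4, -4, -4, 16, 16, 40],
])
assert np.allclose(M, ref)
assert np.isclose(M.sum(), J / 2)   # total mass = S_K, the element area
```

With non-constant nodal Jacobians, the same function returns the full (non-diagonal) matrix announced above.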
 
• The right-hand side and its two terms, $\int_{\hat K} {}^t[\hat P]\,F\, J \, d\hat K$ and $\int_{\hat\Gamma} {}^t[\hat P]\,f\, J_\Gamma \, d\partial\hat K$. For the first term, if the mass matrix is known and assuming that $F$ is defined by $[\hat P]\{F_i\}$, then the right-hand side is calculated as:
\[
BK = MK\,\{F_i\}.
\]
A quadrature [Glowinski-1973] gives:
\[
\frac{1}{360}\left\{
\begin{aligned}
&6F_1 J(1) - F_2 J(2) - F_3 J(3) - 4F_5 J(5)\\
&-F_1 J(1) + 6F_2 J(2) - F_3 J(3) - 4F_6 J(6)\\
&-F_1 J(1) - F_2 J(2) + 6F_3 J(3) - 4F_4 J(4)\\
&-4F_3 J(3) + 32F_4 J(4) + 16F_5 J(5) + 16F_6 J(6)\\
&-4F_1 J(1) + 16F_4 J(4) + 32F_5 J(5) + 16F_6 J(6)\\
&-4F_2 J(2) + 16F_4 J(4) + 16F_5 J(5) + 32F_6 J(6)
\end{aligned}
\right\},
\]
which reminds us of the expression seen for the right-angled triangle of degree 2, but is not what the formula $MK\{F_i\}$ with the matrix written above would yield!
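For a quick consistency check, this vector can be assembled as a function of the nodal data; for constant $F$ and constant $J$, its entries must sum to $F J/2$, that is, $F\, S_K$, the total load on the element. The function name and sample values are ours:

```python
import numpy as np

def p2_rhs(F, Jn):
    """Elementary right-hand side of the curved P2 triangle from nodal values
    F[i] of the datum and nodal Jacobians Jn[i] (quadrature of the text)."""
    # FJ[i] stands for the product F_i J(i) appearing in the formula.
    F1, F2, F3, F4, F5, F6 = [F[i] * Jn[i] for i in range(6)]
    return np.array([
        6*F1 -   F2 -   F3         - 4*F5,
         -F1 + 6*F2 -   F3                - 4*F6,
         -F1 -   F2 + 6*F3 - 4*F4,
              - 4*F3 + 32*F4 + 16*F5 + 16*F6,
        -4*F1        + 16*F4 + 32*F5 + 16*F6,
             - 4*F2  + 16*F4 + 16*F5 + 32*F6,
    ]) / 360

F, J = 3.0, 2.0
BK = p2_rhs(np.full(6, F), np.full(6, J))
assert np.isclose(BK.sum(), F * J / 2)   # total load = F times the area S_K
```

The vertex rows sum to zero in $F_1 J(1)$, $F_2 J(2)$, $F_3 J(3)$ individually; all the mass is carried by the midnode terms, as for the straight-sided P2 right-hand side.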

Finally, let us look at how to achieve an exact integration. It is assumed that the datum is constant, and one writes $\int_{\hat K} {}^t[\hat P]\,F\, J \, d\hat K$, either by expanding $J$ or by expressing it according to the $J(i)$. This then results in a situation where relation [6.12] can be used. Let us describe in more detail the coefficient of index 2 (fairly simple), that is:
\[
BK_2 = \int_{\hat K} \hat x(2\hat x-1)\, F\, J \, d\hat K,
\]
which leads to nine calculations if we expand $J$, noting that the result does not, a priori, explicitly reveal the $J(i)$, which increases our curiosity. We find (up to a factor $F$):
\[
BK_2 = \Big\langle -\tfrac{1}{45}\, \vec A_{14}\wedge \vec A_{16} + \tfrac{1}{45}\, \vec A_{14}\wedge \vec A_{63} - \tfrac{1}{15}\, \vec A_{14}\wedge \vec A_{45} - \tfrac{1}{30}\, \vec A_{42}\wedge \vec A_{16} - \tfrac{1}{30}\, \vec A_{42}\wedge \vec A_{63} + \tfrac{1}{5}\, \vec A_{42}\wedge \vec A_{45} - \tfrac{1}{90}\, \vec A_{65}\wedge \vec A_{16} - \tfrac{1}{18}\, \vec A_{65}\wedge \vec A_{63} + 0\, \vec A_{65}\wedge \vec A_{45}, \vec n \Big\rangle,
\]

and we should convince ourselves that it is the Jacobians at the nodes that hide behind that expression. In order to do this, one writes:
\[
360\, BK_2 = \langle -8\vec A_{14}\wedge \vec A_{16} + 8\vec A_{14}\wedge \vec A_{63} - 24\vec A_{14}\wedge \vec A_{45} - 12\vec A_{42}\wedge \vec A_{16} - 12\vec A_{42}\wedge \vec A_{63} + 72\vec A_{42}\wedge \vec A_{45} - 4\vec A_{65}\wedge \vec A_{16} - 20\vec A_{65}\wedge \vec A_{63}, \vec n\rangle,
\]
which – and this is magical10 – is identified with the result above (for $F_1 = F_2 = \ldots = F$), that is $-J(1) + 6J(2) - J(3) - 4J(6)$.

If, finally, we express $J$ by $\sum_i J(i)\,\hat p_i$, there are only six calculations to carry out and the $J(i)$ naturally appear. Again, we wanted to see what it was all about; we have just verified that, for this coefficient, one exactly finds the value obtained by the above numerical integration.


For a boundary term, for example, the first edge [A1 A2 ], we replace Γ̂ t [P̂ ]f JΓ d∂ K̂ by the
quadrature:
f ¶t ©
[P̂ ](1)JΓ (1) + 4 t [P̂ ](4)JΓ (4) + t [P̂ ](2)JΓ (2) ,
6
which gives: ⎧ ⎫
⎪ JΓ (1) ⎪

⎪ ⎪


⎪ ⎪
⎪ JΓ (2) ⎪
⎪ ⎪


⎪ ⎪


⎪ ⎪

f ⎨ 0 ⎬
,
6⎪⎪ 4 JΓ (4) ⎪⎪

⎪ ⎪


⎪ ⎪


⎪ 0 ⎪


⎪ ⎪


⎩ ⎪

0
with the JΓ (.) already seen during the calculation of a boundary term of the stiffness matrix;
these expressions can be refined if it is assumed that f is defined as [P̂ ] {fi } for [P̂ ] the restriction
at the edge (here ŷ = 0) of [P̂ ].

6.2.4.4. In practice
As a precaution, we have completely described some elements that remain accessible to calculations by hand. It is clear that, in general, and, a fortiori, for higher-degree elements and/or in three dimensions, it is unreasonable to engage in such calculations. Consequently, the program will be left to carry out these calculations, by providing it with the necessary data: the formal expressions of the useful polynomials, their derivatives, the quadratures, etc.

Before giving a synthetic view of the process, a view that will be followed by a few practical
indications, we present some generalities. Obviously, the mesh elements will be looped over
and for each element, it will be necessary to assign the relevant (physical) properties. This

10. The calculation is somewhat tedious because the terms have to be cleverly combined in an attempt to
show that the difference between the two expressions is zero.

requires knowing which (sub)domain the element is in and whether (at least) one of its edges
carries a boundary condition contributing to any of the elementary arrays (stiffness and/or second
member).

Since the integrations involve the reference element, a number of the calculations required
for quadratures can be anticipated and achieved once and for all.

Moreover, in our case, the matrices are symmetrical and only their lower (or upper) triangular part will be calculated and stored (for later assembly if one opts for that choice, see below, when the assembly is not done on the fly). Therefore, the coefficient of index $ij$ of an $n \times n$ matrix has as its index the integer $ind$, initialized to the value 0 and defined as follows:
– For i = 1, n:
- For j = 1, i:
- ind = ind + 1, coef(ind) = ..., alias coef(i, j);
% knowing i and j, one knows which are the polynomials relative to index ind;
- End for j;
– End for i.
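This lower-triangular numbering also has a closed form, which avoids replaying the loop; a minimal sketch (the function name is ours):

```python
def sym_index(i, j):
    """Single index of coefficient (i, j) in lower-triangular storage
    numbered row by row, starting at 1 (as in the text)."""
    if j > i:
        i, j = j, i          # symmetry: only the lower part is stored
    return i * (i - 1) // 2 + j

# First rows of a symmetric matrix: (1,1)->1, (2,1)->2, (2,2)->3, (3,1)->4, ...
assert [sym_index(i, j) for i in range(1, 4) for j in range(1, i + 1)] \
       == [1, 2, 3, 4, 5, 6]
assert sym_index(6, 6) == 21   # a 6x6 symmetric matrix stores 21 coefficients
```

The count of 21 stored coefficients for a 6 × 6 symmetric matrix is the one met above for the degree 2 triangle.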

For the elements previously described, the results established above just need to be copied. For the other elements, it is necessary to stay generic. To simplify the discussion, it is assumed that there is only one type of element in the mesh; to stay general, passing through the reference element is necessary and the integrations are carried out via quadratures. We are thus going to start again from relations [6.7], [6.9] and [6.11], that is:
\[
AK_{ij} = \int_{\hat K} \langle {}^t[DF^{-1}]\,[DF^{-1}]\nabla \hat p_i, \nabla \hat p_j\rangle\, J \, d\hat K + \chi_{\hat\Gamma} \int_{\hat\Gamma} \hat p_i\, \hat p_j\, J_\Gamma \, d\partial\hat K,
\]
\[
MK_{ij} = \int_{\hat K} \hat p_i\, \hat p_j\, J \, d\hat K,
\]
\[
BK_i = \int_{\hat K} F\, \hat p_i\, J \, d\hat K + \chi_{\hat\Gamma} \int_{\hat\Gamma} f\, \hat p_i\, J_\Gamma \, d\partial\hat K.
\]
The appropriate quadrature formulas are given: number of nodes $n_q$; weights $\omega_k$, $k = 1, n_q$; nodes $(\hat x_k, \hat y_k)$, $k = 1, n_q$; and, finally, integration nodes on an edge, $\hat x_k$, $k = 1, n_q$. There are (at most) five integrations to be made; one can thus have five quadrature formulas (but we shall keep the same notation for all of them). In the scheme, $noe$ is the number of nodes of the elements, thereby the number of polynomials, and $noe_{edge}$ is the number of nodes per edge.

Elementary matrix and right-hand side calculations [6.14]

– Data:
- the finite element: $[\hat P]$ and $[\widehat{DP}]$, that is: $\hat p_i$, $\frac{\partial \hat p_i}{\partial x}$, $\frac{\partial \hat p_i}{\partial y}$, $i = 1, noe$;
- the quadrature formula(s): $n_q$ and, for $k = 1, n_q$: $\omega_k$, $(\hat x_k, \hat y_k)$;
- the quadrature formula(s) on a boundary: $n_q$ and, for $k = 1, n_q$: $\omega_k$, $\hat x_k$.
– End data.
– Precalculations:
- mass: $\hat p_i(\hat x_k, \hat y_k)$ for $k = 1, n_q$ and the required products $\hat p_i(\hat x_k, \hat y_k)\,\hat p_j(\hat x_k, \hat y_k)$;
- right-hand side: $\hat p_i(\hat x_k, \hat y_k)$ for $k = 1, n_q$, $i = 1, noe$;
- stiffness: $\frac{\partial \hat p_i}{\partial x}(\hat x_k, \hat y_k)$ and $\frac{\partial \hat p_i}{\partial y}(\hat x_k, \hat y_k)$ for $k = 1, n_q$ and $i, j = 1, noe$;
- boundary term(s) (stiffness and right-hand side): $\hat p_i(\hat x_k)$ for $k = 1, n_q$, with the restricted polynomial, $i = 1, noe_{edge}$.
– End precalculations.
– Loop over the mesh elements $K$:
- initialization: $MK_{ij} = AK_{ij} = 0$, $BK_i = 0$ for the appropriate indices;
- $[DF] = [DF](\hat x_k, \hat y_k)$, then $J(\hat x_k, \hat y_k)$ its determinant; likewise $J_\Gamma(\hat x_k)$ if on the boundary;
- stiffness: $[DF]^{-1}$, $C = J\, {}^t[DF]^{-1}[DF]^{-1}$, loop over $k$:
$AK_{ij} = AK_{ij} + \omega_k \langle C\, \nabla\hat p_i(\hat x_k, \hat y_k), \nabla\hat p_j(\hat x_k, \hat y_k)\rangle$;
- boundary stiffness: loop over $k$: $AK_{ij} = AK_{ij} + \omega_k\, J_\Gamma(\hat x_k)\,\hat p_i(\hat x_k)\,\hat p_j(\hat x_k)$, according to $\chi_\Gamma$;
- mass: loop over $k$: $MK_{ij} = MK_{ij} + \omega_k\, J(\hat x_k, \hat y_k)\,\hat p_i(\hat x_k, \hat y_k)\,\hat p_j(\hat x_k, \hat y_k)$;
- right-hand side: loop over $k$: $BK_i = BK_i + \omega_k\, F\, \hat p_i(\hat x_k, \hat y_k)\, J(\hat x_k, \hat y_k)$;
- boundary right-hand side: loop over $k$: $BK_i = BK_i + \omega_k\, f\, \hat p_i(\hat x_k)\, J_\Gamma(\hat x_k)$;
- on-the-fly assembly or writing to memory (or file) for subsequent assembly.
– End loop over the elements.

Since the matrices are symmetrical, only a part is computed, as indicated before the scheme, and the loops over $i$ and $j$ must take this property into account. The loops are actually written over $i$ and $j$, which allows knowing which polynomials are involved in the coefficient being calculated, and the result is stored with the index $ind$.
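As an illustration of this scheme, here is a minimal sketch for the Lagrange triangle of degree 1, where a single quadrature point suffices since the reference gradients are constant; the function name and sample coordinates are ours:

```python
import numpy as np

def p1_stiffness(A1, A2, A3):
    """Elementary stiffness of a P1 triangle via the generic scheme:
    C = J * [DF^-1][DF^-1]^t, one quadrature point of weight 1/2."""
    DF = np.column_stack([np.subtract(A2, A1), np.subtract(A3, A1)])
    J = np.linalg.det(DF)
    DFinv = np.linalg.inv(DF)
    C = J * DFinv @ DFinv.T
    grads = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])  # grad p_i on K_hat
    AK = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            AK[i, j] = 0.5 * grads[i] @ C @ grads[j]          # omega = 1/2
    return AK

# Check against the geometric formula AK_11 = ||A2A3||^2 / (2J).
A1, A2, A3 = (0.0, 0.0), (2.0, 0.0), (0.5, 1.5)
AK = p1_stiffness(A1, A2, A3)
J = np.linalg.det(np.column_stack([np.subtract(A2, A1), np.subtract(A3, A1)]))
e23 = np.subtract(A3, A2)
assert np.isclose(AK[0, 0], (e23 @ e23) / (2 * J))
assert np.allclose(AK.sum(axis=0), 0.0)   # row/column sums are zero
```

For higher-degree or curved elements, the only changes are the lists of precalculated basis values and a richer quadrature, exactly as in scheme [6.14].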

The physical stiffness coefficients have been assumed to be isotropic (a simple factor $k$ appears); otherwise, the notation of the first stiffness term (and of any quantity sensitive to this aspect) must be changed. We in fact had the term:
\[
\int_{\hat K} {}^t[\widehat{DP}]\, {}^t[DF^{-1}]\,[DF^{-1}]\,[\widehat{DP}]\, J \, d\hat K \quad\text{or rather}\quad \int_{\hat K} {}^t[\widehat{DP}]\, {}^t[DF^{-1}]\,[K]\,[DF^{-1}]\,[\widehat{DP}]\, J \, d\hat K,
\]
where $[K] = \begin{bmatrix} k & 0\\ 0 & k\end{bmatrix}$, with $k$ a simple scalar that appears as a factor and gives the above expression for the coefficient of index $ij$. In the general case, we have $[K] = \begin{bmatrix} k_{11} & k_{12}\\ k_{12} & k_{22}\end{bmatrix}$, and one will have, at index $ij$, the coefficient $\int_{\hat K} \langle {}^t[DF^{-1}]\,[K]\,[DF^{-1}]\nabla\hat p_i, \nabla\hat p_j\rangle\, J \, d\hat K$. In the event that $k_{12} = 0$, a simple change of variables (see below, the elasticity case) makes it possible to recover the standard case.

If these data have a spatial dependency (therefore they are a function of the position (x, y)
or x), they are going to be part of the quadrature and not just a factor of it.

6.2.5. The generic notation for the four chosen elements, elasticity equation

All of these results are made explicit by involving the four elements indicated above for the
linear elasticity equation in small deformations. We are not going to repeat the same level of
detail as for the heat equation, for which we saw the principle to be followed. An additional
reason to be brief is that some quantities are very similar to the previous case (polynomials,
derivatives, Jacobian polynomials) or very close thereto (mass matrix).

6.2.6. Lagrange triangle of degree 1 with three nodes

We saw that the stiffness matrix was composed of four blocks, each of which has a shape very similar to a stiffness matrix corresponding to the heat problem. We recall it. The starting calculation is the product:
\[
\frac{1}{2J}\begin{bmatrix} -y_{23} & x_{23}\\ -y_{31} & x_{31}\\ -y_{12} & x_{12}\end{bmatrix}\begin{bmatrix} -y_{23} & -y_{31} & -y_{12}\\ x_{23} & x_{31} & x_{12}\end{bmatrix}.
\]
And, as such, the term in $\hat K$ of the stiffness matrix is equal to:
\[
AK = \frac{1}{2J}\begin{bmatrix} x_{23}^2 + y_{23}^2 & \ldots & \ldots\\ x_{23}x_{31} + y_{23}y_{31} & x_{31}^2 + y_{31}^2 & \ldots\\ x_{23}x_{12} + y_{23}y_{12} & x_{12}x_{31} + y_{12}y_{31} & x_{12}^2 + y_{12}^2\end{bmatrix},
\]
and the geometric view of these coefficients shows the dot products between the edges, that is:
\[
AK = \frac{1}{2J}\begin{bmatrix} \|\vec{A_2A_3}\|^2 & \ldots & \ldots\\ \langle \vec{A_2A_3}, \vec{A_3A_1}\rangle & \|\vec{A_3A_1}\|^2 & \ldots\\ \langle \vec{A_2A_3}, \vec{A_1A_2}\rangle & \langle \vec{A_1A_2}, \vec{A_3A_1}\rangle & \|\vec{A_1A_2}\|^2\end{bmatrix}.
\]
Here, we have:
\[
[AK] = \begin{bmatrix} \int_K {}^t[DP]\,[E_{11}]\,[DP]\, dK & \int_K {}^t[DP]\,[E_{12}]\,[DP]\, dK\\[4pt] \int_K {}^t[DP]\,[E_{21}]\,[DP]\, dK & \int_K {}^t[DP]\,[E_{22}]\,[DP]\, dK \end{bmatrix}.
\]

Therefore, each block requires a calculation very close to the one above. For example, for block 11, one just has to replace the calculation:
\[
\begin{bmatrix} -y_{23} & x_{23}\\ -y_{31} & x_{31}\\ -y_{12} & x_{12}\end{bmatrix}\begin{bmatrix} -y_{23} & -y_{31} & -y_{12}\\ x_{23} & x_{31} & x_{12}\end{bmatrix},
\]
with the calculation:
\[
\begin{bmatrix} -y_{23} & x_{23}\\ -y_{31} & x_{31}\\ -y_{12} & x_{12}\end{bmatrix}[E_{11}]\begin{bmatrix} -y_{23} & -y_{31} & -y_{12}\\ x_{23} & x_{31} & x_{12}\end{bmatrix} \quad\text{with}\quad [E_{11}] = \begin{bmatrix} \lambda + 2\mu & 0\\ 0 & \mu\end{bmatrix}.
\]
By setting $X_{ij} = \sqrt{\mu}\, x_{ij}$ and $Y_{ij} = \sqrt{\lambda + 2\mu}\, y_{ij}$, it follows that:
\[
\begin{bmatrix} -y_{23} & x_{23}\\ -y_{31} & x_{31}\\ -y_{12} & x_{12}\end{bmatrix}[E_{11}]\begin{bmatrix} -y_{23} & -y_{31} & -y_{12}\\ x_{23} & x_{31} & x_{12}\end{bmatrix} = \begin{bmatrix} -Y_{23} & X_{23}\\ -Y_{31} & X_{31}\\ -Y_{12} & X_{12}\end{bmatrix}\begin{bmatrix} -Y_{23} & -Y_{31} & -Y_{12}\\ X_{23} & X_{31} & X_{12}\end{bmatrix},
\]
so exactly the matrix of the heat equation expressed in the space deformed by the above transformation (of matrix $\begin{bmatrix} \sqrt{\mu} & 0\\ 0 & \sqrt{\lambda + 2\mu}\end{bmatrix}$). In the same way, we find the other blocks of the stiffness matrix by applying the following transformations:
\[
\begin{aligned}
\text{Block 12}: \ & X_{ij} = \sqrt{\lambda}\, x_{ij}, \quad Y_{ij} = \sqrt{\mu}\, y_{ij},\\
\text{Block 21}: \ & X_{ij} = \sqrt{\mu}\, x_{ij}, \quad Y_{ij} = \sqrt{\lambda}\, y_{ij},\\
\text{Block 22}: \ & X_{ij} = \sqrt{\lambda + 2\mu}\, x_{ij}, \quad Y_{ij} = \sqrt{\mu}\, y_{ij}.
\end{aligned}
\]
In the general case (the present case is homogeneous and isotropic), the matrices $[E_{IJ}]$ are full and an explicit calculation must be made to establish the result.
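The block-11 identity is easy to check numerically; in the sketch below, the edge components and Lamé coefficients are arbitrary sample values:

```python
import numpy as np

lam, mu = 1.2, 0.8                     # sample Lame coefficients
rng = np.random.default_rng(0)
y23, y31, y12 = rng.normal(size=3)     # sample edge components
x23, x31, x12 = rng.normal(size=3)

B = np.array([[-y23, x23], [-y31, x31], [-y12, x12]])
E11 = np.diag([lam + 2*mu, mu])

# Deformed edges: Y = sqrt(lam + 2 mu) * y, X = sqrt(mu) * x.
Bt = np.array([[-np.sqrt(lam + 2*mu)*y, np.sqrt(mu)*x]
               for x, y in [(x23, y23), (x31, y31), (x12, y12)]])

# Block 11 of the elasticity stiffness equals the heat-equation matrix
# written with the deformed edge components.
assert np.allclose(B @ E11 @ B.T, Bt @ Bt.T)
```

Entry by entry, both sides reduce to $(\lambda + 2\mu)\, y_i y_j + \mu\, x_i x_j = Y_i Y_j + X_i X_j$, which is the whole point of the change of variables.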

It is easier for the mass matrix, and we have:
\[
[MK] = S_K \begin{bmatrix} \frac16 & \ldots & \ldots\\[2pt] \frac{1}{12} & \frac16 & \ldots\\[2pt] \frac{1}{12} & \frac{1}{12} & \frac16 \end{bmatrix},
\]
for the heat equation (with $S_K$ the area of $K$) and, here, in elasticity, we have:
\[
MK = \begin{bmatrix} [MK] & [0]\\ [0] & [MK] \end{bmatrix}.
\]
Finally, the right-hand side, with its two contributions, is written from its expression for the heat problem, first for the data in $x$ ($F_1$ and $f_1$) and then for those in $y$ ($F_2$ and $f_2$).

6.2.6.1. The other three elements


• Lagrange quadrilateral of degrees 1 × 1 with four nodes

One will follow the exact same approach, starting with the matrices and right-hand sides of
the heat problem to define the blocks existing here.
• Straight-sided Lagrange triangle, of degree 2 with six nodes, as above.
• Isoparametric Lagrange triangle of degree 2 with six nodes, as above.

6.2.6.2. In practice
Broadly speaking, what has been seen for the heat equation is applied, but blockwise. Given this block structure, it is necessary to focus on the way in which the (symmetrical) matrices and the right-hand side are stored. It is simpler to consider conventional global indices $(i, j)$ for the matrix (thus varying, a priori, from 1 to $ndl \times noe$, with $ndl$ the number of degrees of freedom per node and $noe$ the number of nodes per element) and then take advantage of the symmetry, in order to keep only the desired coefficients with a single-index numbering. To see these gymnastics, we are going to take the example of the degree 1 triangle.
For the mass, in terms of a double index, a symmetrical block is written as $\begin{bmatrix} 11 & \ldots & \ldots\\ 21 & 22 & \ldots\\ 31 & 32 & 33\end{bmatrix}$, whereas with a single index one has $\begin{bmatrix} 1 & \ldots & \ldots\\ 2 & 3 & \ldots\\ 4 & 5 & 6\end{bmatrix}$. In order for the (symmetrical) matrix structure to be identical to that of a stiffness matrix, one will also consider a complete block of zeros (block (21)). Subsequently, the matrix will be written as:
\[
(1)\ \begin{bmatrix} \begin{bmatrix} 11 & \ldots & \ldots\\ 21 & 22 & \ldots\\ 31 & 32 & 33\end{bmatrix} & [\ldots]\\[6pt] \begin{bmatrix} 11 & 12 & 13\\ 21 & 22 & 23\\ 31 & 32 & 33\end{bmatrix} & \begin{bmatrix} 11 & \ldots & \ldots\\ 21 & 22 & \ldots\\ 31 & 32 & 33\end{bmatrix} \end{bmatrix} \quad\text{or}\quad (2)\ \begin{bmatrix} \begin{bmatrix} 1 & \ldots & \ldots\\ 2 & 3 & \ldots\\ 4 & 5 & 6\end{bmatrix} & [\ldots]\\[6pt] \begin{bmatrix} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\end{bmatrix} & \begin{bmatrix} 1 & \ldots & \ldots\\ 2 & 3 & \ldots\\ 4 & 5 & 6\end{bmatrix} \end{bmatrix},
\]

and, depending on the block, the local indices just have to be shifted to obtain a global numbering:
\[
(3)\ \begin{bmatrix} \begin{bmatrix} 1 & \ldots & \ldots\\ 2 & 3 & \ldots\\ 4 & 5 & 6\end{bmatrix} & [\ldots]\\[6pt] \begin{bmatrix} 7 & 8 & 9\\ 11 & 12 & 13\\ 16 & 17 & 18\end{bmatrix} & \begin{bmatrix} 10 & \ldots & \ldots\\ 14 & 15 & \ldots\\ 19 & 20 & 21\end{bmatrix} \end{bmatrix}, \quad\text{that is, in the end,}\quad (4)\ \begin{bmatrix} 1 & \ldots\\ 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 & 10\\ 11 & 12 & 13 & 14 & 15\\ 16 & 17 & 18 & 19 & 20 & 21 \end{bmatrix},
\]

whose indexing with a pair of indices is standard, that is:
\[
(5)\ \begin{bmatrix} 11 & \ldots\\ 21 & 22\\ 31 & 32 & 33\\ 41 & 42 & 43 & 44\\ 51 & 52 & 53 & 54 & 55\\ 61 & 62 & 63 & 64 & 65 & 66 \end{bmatrix}.
\]

The technical difficulty is therefore to juggle with the indices of the blocks themselves, the indices (single index or pair of indices) within a given block, and the global indices, with the need to shift from one system to the other in both directions.

To find the coefficient of index $(i, j)$, thus $ind$, in the full matrix, we need to find out which block it comes from and what its index is (single or double) in this block.

The first relation, $(i, j) \Longrightarrow ind$, is immediate and has already been seen. The value $ind$ is the result of the loop:
– ind = 0;
– DO i = 1, ndl × noe:
- DO j = 1, i:
ind = ind + 1;
- end DO j;
– end DO i.

On paper, this allows for finding the image $ind$ of a given pair $(i_0, j_0)$; as a shift from (5) to (4), it is obtained as:
– ind = 0;
– DO i = 1, ndl × noe:
- DO j = 1, i:
ind = ind + 1;
if i = i0 and j = j0, EXIT;
- end DO j;
– end DO i.

In the opposite direction, one also finds the image $(i, j)$ of a given value $ind_0$; shifting from (4) to (5), it is obtained as:
– ind = 0;
– DO i = 1, ndl × noe:
- DO j = 1, i:
ind = ind + 1;
if ind = ind0, EXIT;
- end DO j;
– end DO i.
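Instead of looping, the reverse mapping also admits a closed form; a minimal sketch (our own helper, not from the text):

```python
import math

def pair_from_index(ind):
    """Inverse of the lower-triangular numbering ind = i(i-1)/2 + j, j <= i,
    with everything starting at 1: recover (i, j) without looping."""
    i = (1 + math.isqrt(8 * ind - 7)) // 2
    j = ind - i * (i - 1) // 2
    return i, j

# Consistent with the loop of the text for a 6x6 symmetric matrix.
pairs = [(i, j) for i in range(1, 7) for j in range(1, i + 1)]
assert [pair_from_index(ind) for ind in range(1, 22)] == pairs
```

`math.isqrt` keeps the inversion exact in integer arithmetic, which matters for large matrices where floating-point square roots can round the wrong way.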

To perform the calculations, the indices $i_b$ and $j_b$ within the block under consideration have to be known. Therefore, from the global indices $i$ and $j$, it is necessary to find out which block is concerned; once done, on the one hand, find the indices looked for and, on the other hand, the physical data to be taken into account.

Therefore, we reuse the initial loop and complete it:
– ind = 0, I = 1;
– DO i = 1, ndl × noe:
- if i = I × noe + 1, then I = I + 1 and J = 1;
- DO j = 1, i:
if j = J × noe + 1, then J = J + 1;
ind = ind + 1;
% This coefficient of global index (i, j), or ind, originates from the block of index (I, J).
% The indices (i_b, j_b), or ind_b, remain to be found in this block.
Set i_b = i, DO WHILE: if i_b > noe, i_b = i_b − noe;
set j_b = j, DO WHILE: if j_b > noe, j_b = j_b − noe;
% (i, j), thus ind, corresponds to (i_b, j_b) of block (I, J).
% Here, one has all useful information for the calculation of the coefficient;
- end DO j;
– end DO i.
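The same decoding can be written directly with integer divisions, instead of the repeated subtractions of the loop; a small sketch (the helper name is ours):

```python
def block_of(i, j, noe):
    """Block indices (I, J) and local indices (ib, jb) of the global
    coefficient (i, j), for noe nodes per element (indices start at 1)."""
    I, ib = (i - 1) // noe + 1, (i - 1) % noe + 1
    J, jb = (j - 1) // noe + 1, (j - 1) % noe + 1
    return (I, J), (ib, jb)

# Degree 1 triangle in elasticity: noe = 3, global indices run from 1 to 6.
assert block_of(2, 1, 3) == ((1, 1), (2, 1))   # inside block (11)
assert block_of(5, 2, 3) == ((2, 1), (2, 2))   # inside block (21)
assert block_of(6, 6, 3) == ((2, 2), (3, 3))   # inside block (22)
```

Knowing $(I, J)$ also tells which physical matrix $[E_{IJ}]$ the coefficient should be computed with.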

Once this is done, one can find the coefficients of the mass matrix after initializing block (21) to zero.

The stiffness matrix follows the same mechanics (and its block (21) is what it is, that is, non-zero).

Some remarks or alternatives. In our example, block (21) is considered to be full even when it is symmetrical; one could take advantage of this to reduce the memory needed, at the cost of taking this particularity into account during the assembly operation (which will be described forthwith). Similarly, the vision by block is necessary because we reasoned by degree of freedom.

An elementary matrix (or right-hand side) could anticipate the fact that the global quantities (originating from the assembly) will be stored by node, and already adopt this storage per node. Again, this particularity will have to be taken into account during the assembly operation, to stay consistent.

6.3. Matrix or right-hand side assembly

The construction of the system matrix is done by assembling the matrices relative to each element. This assembly can be carried out on the fly (we calculate the matrix on the element and its coefficients are immediately transferred into the global matrix at the intended places) or sequentially (all the elementary matrices are calculated and then transferred into the global matrix). Similarly, the right-hand side of the system is obtained by assembling the elementary right-hand sides (on the fly or sequentially). The assembly is directly related to the way in which the elementary quantities have been organized.

In the case where there is only one degree of freedom per node, the assembly operation is easy; otherwise, the organization of the result must be defined (by node, with all its degrees, or per degree, with all its nodes).

When iterating through the mesh elements, each element is known by the list of its nodes, a list expressed in the global numbering. For example, the element K has for nodes those of indices (triangle case) 23, 37, 754 (Figure 6.5). From the local point of view, this same triangle has as nodes the list 1, 2, 3. We can simply say that the local node (of K) of number 1 has as its global number the index 23, etc. As a result, we know the correspondence array $i_K = I$ for $i = 1, 2, 3$.

The assembly, in the case of only one degree of freedom per node, consists of carrying forward a coefficient of an elementary matrix (that of an element K), of local index the pair $(ij)$ (the value $ind$), to the global index pair $(I = i_K, J = j_K)$; this coefficient expresses the interaction between the nodes of local indices $i$ and $j$. For an elementary right-hand side, the coefficient of local index $i$ is carried forward to the index $I = i_K$, and this coefficient is the contribution of the local node $i$. In practice, we initialize the matrices and the right-hand side at 0; then, looping over the elements, we add the corresponding contribution:
\[
M_{IJ} = M_{IJ} + MK_{ij} \quad\text{or}\quad A_{IJ} = A_{IJ} + AK_{ij} \quad\text{or}\quad B_I = B_I + BK_i.
\]
If the matrix in question has been viewed as a single-index array, we have to decode it to find the pair $(ij)$, which amounts to repeating the way this unique index was built.

When there are several degrees of freedom per node, the assembly operation must take this particularity into account. We still have the correspondence array $i_K = I$ for $i = 1, 2, 3$, for every K (here, again, a triangle). At the element level, the chosen organization is by degree of freedom, with all the nodes of the element: all the nodes for the first degree of freedom, then all the nodes for the second degree, etc. Globally, one will choose as organization to give, per node, all its degrees of freedom: all the degrees of the first node, then all the degrees of the second node, etc., up to all the degrees of the last node. The difficulty is to see what a coefficient of index $(ij)$ of a matrix (decrypted, if applicable, from a sequential indexing), or $i$ of an elementary right-hand side, represents at the global level, and therefore to find the corresponding pair $(IJ)$ or index $I$. It should be noted that the presence of essential (Dirichlet) boundary conditions is not taken into account and that we consider all the nodes11 of the mesh.

If the nodes only carry one degree of freedom, we have, as above, $A_{IJ} = A_{IJ} + AK_{ij}$ with $I = i_K$ and $J = j_K$, and $B_I = B_I + BK_i$ with $I = i_K$.

11. One could formally isolate the nodes with such a condition, further reducing the size of the system to
be solved. In practice, this technique is not realistic.

Figure 6.5. For the assembly corresponding to the triangle of vertices [23, 37, 754],
thus for the following values of iK : 1K = 23, 2K = 37 and 3K = 754

When there are several degrees of freedom per node, one has to shift from a local index (thus relative to a node but also to a degree) to a global index, in accordance with the way in which the global vectors (and therefore the global matrices) are ordered. We denote:
– ind, the index of a value of a right-hand side of an element K;
– dl, the number of the degree of freedom associated with this value; let ndl be the number (assumed constant) of degrees of freedom per node and noe the number of nodes of K;
– I, the value of the global index that we want to determine.

Starting with the index12 ind, the calculation is as follows:

dl = ⌊(ind − 1)/noe⌋ + 1 (integer division),
i = ind − (dl − 1) noe,
then I = ndl (i_K − 1) + dl.
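This correspondence translates directly into a few lines of code (an illustrative sketch in Python; the function name is ours):

```python
def global_index(ind, noe, ndl, iK):
    """Global index I of the local value `ind` (1-based) of an elementary
    right-hand side, for an element with `noe` nodes, `ndl` degrees of
    freedom per node and local-to-global node correspondence `iK`."""
    dl = (ind - 1) // noe + 1        # degree of freedom carried by this value
    i = ind - (dl - 1) * noe         # local node number in the element
    return ndl * (iK[i - 1] - 1) + dl

# example of the section's triangle: iK = [23, 37, 754], noe = 3, ndl = 2
# global_index(1, 3, 2, [23, 37, 754]) gives 45
```

With the triangle of Figure 6.5 this reproduces the table of correspondences given below.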
In the above example, in elasticity, ndl = 2, noe = 3, and one simply has the following correspondence:

ind =⇒ (i dl) =⇒ (i_K dl) =⇒ I,

that is:

1 =⇒ (1 1) =⇒ (23 1) =⇒ 45
2 =⇒ (2 1) =⇒ (37 1) =⇒ 73
3 =⇒ (3 1) =⇒ (754 1) =⇒ 1507
4 =⇒ (1 2) =⇒ (23 2) =⇒ 46
5 =⇒ (2 2) =⇒ (37 2) =⇒ 74
6 =⇒ (3 2) =⇒ (754 2) =⇒ 1508
For a matrix, the same mechanism is applied; the values of dl (1 or 2 in our example) depend on
the block being considered, and the correspondence sought consists of seeing what becomes of a
single index of the matrix, decoded as a pair of indices (ind, jnd), and then in seeing how this
pair enables the construction of the matrix we are looking for. The calculation is as follows:

12. Our indices start at 1.

(ind, jnd) =⇒ (i dl, j dl) =⇒ (iK dl, jK dl) =⇒ (I, J),

which is illustrated (the symmetry of the matrix has not been taken into account) with the same
example, that is:
⎡ 11 12 13 14 15 16 ⎤
⎢ 21 22 23 24 25 26 ⎥
⎢ 31 32 33 34 35 36 ⎥
⎢ 41 42 43 44 45 46 ⎥ =⇒
⎢ 51 52 53 54 55 56 ⎥
⎣ 61 62 63 64 65 66 ⎦

⎡ 11−11 11−21 11−31 11−12 11−22 11−32 ⎤
⎢ 21−11 21−21 21−31 21−12 21−22 21−32 ⎥
⎢ 31−11 31−21 31−31 31−12 31−22 31−32 ⎥
⎢ 12−11 12−21 12−31 12−12 12−22 12−32 ⎥
⎢ 22−11 22−21 22−31 22−12 22−22 22−32 ⎥
⎣ 32−11 32−21 32−31 32−12 32−22 32−32 ⎦

with, on the left side, the natural indices of the matrix on K (from 1 to 6 for each one) and,
on the right, the node index (from 1 to 3) and its dl (from 1 to 2 depending on the block of
the elementary matrix). The relations already seen are then applied and it follows, for a few
examples, that:

K11 → K45,45 , K42 → K46,73 , K25 → K73,74 , K65 → K1508,74 .
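The full assembly loop can then be sketched as follows (illustrative Python; a dense global matrix is used only to keep the example short, whereas real solvers use the compact storage schemes discussed in this book):

```python
import numpy as np

def assemble_element(A, B, AK, BK, iK, ndl):
    """Accumulate the elementary matrix AK and right-hand side BK
    (ordered degree by degree, as in the text) into the global dense
    matrix A and vector B, using 1-based indices internally."""
    noe = len(iK)
    def glob(ind):                          # local index -> global index
        dl = (ind - 1) // noe + 1
        i = ind - (dl - 1) * noe
        return ndl * (iK[i - 1] - 1) + dl
    for ind in range(1, noe * ndl + 1):
        I = glob(ind)
        B[I - 1] += BK[ind - 1]
        for jnd in range(1, noe * ndl + 1):
            A[I - 1, glob(jnd) - 1] += AK[ind - 1, jnd - 1]
```

Applied to the triangle [23, 37, 754] with ndl = 2, this sends the local entry K11 to the global position (45, 45), K42 to (46, 73), and so on, as listed above.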

To conclude this chapter, we indicate how, when resolving the system, to take into account an
essential boundary condition. Two methods will be found. If I designates the number of the node
bearing the condition and dl its degree of freedom concerned, we have seen that the index (in the
matrix or in the right-hand side) to be processed is ndl(I − 1) + dl. The idea is to modify the
matrix and the right-hand side of the system to ensure compliance with the boundary condition.
To give an example, one sets ndl = 2 and therefore dl = 1 or dl = 2.

 Method 1: To block the first degree (dl = 1), we set A2I−1,k = 0, for all k (that is
the whole row 2I − 1) and Ak,2I−1 = 0, for all k (that is, the whole column 2I − 1), then
A2I−1,2I−1 = 1 (the diagonal coefficient) and finally B2I−1 = value1 . To block the second
degree, we set A2I,k = 0 and Ak,2I = 0, for all k, then A2I,2I = 1 and finally B2I = value2 .
To avoid conditioning problems, the diagonal value 1 can be replaced by a value α and the blocking
value by α times itself.

 Method 2: To block the first degree (dl = 1), one sets A2I−1,2I−1 = c∞ (the diagonal
coefficient) and B2I−1 = c∞ value1 . To block the second degree, one sets A2I,2I = c∞ and
finally B2I = c∞ value2 . The idea is that, for c∞ large enough, one has Σ_k A_ik u_k ≈ A_ii u_i =
c∞ u_i if u designates the unknown. Although elegant, this method is less robust than the previous
one.
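Both methods can be sketched as follows (illustrative Python with dense arrays; note that in method 1, for a non-zero prescribed value, the column contribution must first be transferred to the right-hand side, a step the text leaves implicit and which is marked as our addition below):

```python
import numpy as np

def block_method_1(A, B, k, value):
    """Method 1 (row/column zeroing): k is the 0-based position of the
    degree of freedom, i.e. k = ndl*(I-1) + dl - 1."""
    B -= A[:, k] * value      # transfer the column to the right-hand side
                              # (our addition: needed when value != 0)
    A[k, :] = 0.0
    A[:, k] = 0.0
    A[k, k] = 1.0             # diagonal coefficient set to 1
    B[k] = value

def block_method_2(A, B, k, value, c_inf=1.0e20):
    """Method 2 (penalization by a large coefficient c_inf)."""
    A[k, k] = c_inf
    B[k] = c_inf * value
```

On a small symmetric positive definite system, both methods yield a solution whose blocked unknown equals the prescribed value, and the remaining unknowns agree up to the penalization error of method 2.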

The issue of renumbering the nodes to optimize the "head" of the matrix is not the subject of this
chapter (see Chapter 3).


∗ ∗

By means of two classical examples (one with a scalar unknown (a single degree of freedom
per node), the other with a vector unknown (two degrees of freedom per node)), we have
shown how to compute the elementary matrices and right-hand sides (for every mesh element).

We have given a generic notation for the matrices and right-hand sides (therefore for all Lagrange-
type elements of any degree) and, based on a few simple examples, we have explained the
coefficients that appear. This notation is very clearly inspired by the philosophy developed in the
Modulef code and then in the Mefisto code, whose references can easily be found. An important
point (rarely mentioned) is the geometric interpretation of these coefficients. One indeed finds dot
products (involving edges and particular segments with the nodes as endpoints) and mixed products
(whose vector product component has helped us in many calculations), therefore lengths,
lengths combined with an angle, as well as surface areas (volumes in three dimensions). We have seen
the very great similarity between the matrices and right-hand sides of the two problems taken as
examples, the vector case resulting in a natural block decomposition, each of these blocks being
seen – down to a few details – as an entity of the scalar case.

We have shown how to assemble the local contributions to build the matrices and right-hand
sides of the system to be solved, and we have discussed methods that allow essential boundary
conditions to be taken into account by modifying the matrices and right-hand sides.

These calculations have indicated what information a mesh (its vertices, its nodes, its edges,
its faces and finally its elements) must contain in order to be able to assign the physical properties
to the elements, their edges and faces, and to assign the different coefficients or data
characterizing boundaries, boundary conditions and source terms.

Some special attention should be given to a few formulas and an explicit expression for the
area of a curved triangle of degree 2 (a formula that applies, after extension, to any degree and to
any type of element, and also in three dimensions for the calculation of the volume of a curved
element (Chapter 9)).
Chapter 7

Meshes and Finite Volume Calculation

Among the classic methods for solving problems formulated in partial differential equations
and along with finite element methods (previous chapter), we find finite volume methods, which,
as we shall see, allow hyperbolic-type equations to be addressed.

In this chapter, we want to show how the methodologies seen in this book make it possible
to efficiently build the data structures of solvers based on the finite volume method in two di-
mensions when considering unstructured simplicial meshes. Finite volume methods have been
proposed as an alternative to the finite element method to solve a particular type of equation:
hyperbolic problems that have an unsymmetrical structure.

In order to understand the formulation of a finite volume solver, we briefly present the foun-
dations of the finite volume method in one dimension with a fundamental hyperbolic equation:
the advection equation. Then we briefly describe the extension to systems of hyperbolic equations
with the case of the Euler equations in two dimensions. The algorithms for constructing the
data structures necessary for the implementation of this type of method will be specified. In the
last section, we show some numerical results obtained with this approach. This introduction is
obviously non-exhaustive and we refer the reader to the works of [Hirsch-1988], [Hirsch-1990]
or [Toro-2009] for a full review of this type of methods.

7.1. Presentation of the finite volume method with a first-order problem

Meshing, Geometric Modeling and Numerical Simulation 3: Storage, Visualization and In Memory Strategies, First Edition. Paul Louis George, Frédéric Alauzet, Adrien Loseille and Loïc Maréchal. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

To simplify things, consider a hyperbolic equation in one (spatial) dimension. Consider the
example of a tube of length L. A regular discretization of [0, L] is chosen in intervals of length
Δx, together with a discretization of the time interval [0, T] in time steps of length Δt. Let x_j
denote the point jΔx and t_n the time nΔt. Let u_j^n denote the value of the solution approximated
at point x_j and time t_n. Let c be a positive real number. Consider the problem:

∂u/∂t(x, t) + c ∂u/∂x(x, t) = 0,   ∀x ∈ [0, L] and ∀t ∈ [0, T],
u(x, 0) = u_0(x), the initial condition,        [7.1]
u(0, t) = a, because c > 0.
The determination of the solution to this first-order problem in time requires setting an initial
condition for u:
u(x, 0) = u0 (x) .
An initial value problem, or Cauchy problem, is thus obtained. The boundary condition is imposed
at the left boundary (the point x = 0) where the flow is incoming (c > 0). The exact solution to
this equation is given by:
u(x, t) = u0 (x − ct) .
The discretization of this equation, which seems simple at first glance, requires some precautions,
because the first derivative in space has a non-symmetrical structure. Physically, one models a
phenomenon where the traveling direction is imposed (compare this to upstream and downstream
in a river). The discretization concerns time and space.

7.1.1. Time discretization

Regarding time, the time derivative is approximated by the formula (called the Euler method):

∂u(x, t)/∂t ≈ (u^{n+1}(x) − u^n(x))/Δt,   or   ∂u(x_j, t)/∂t ≈ (u_j^{n+1} − u_j^n)/Δt at x_j,

and we can define two types of time discretizations, one explicit and the other implicit. Let Φ(.)
denote the spatial discretization operator over the whole mesh and Φ_j(u^n) the spatial
discretization operator at x_j at time t_n, which will be explained below.

• Explicit method. In a time explicit method, the spatial discretization is expressed according
to the solution u at time tn . The explicit Euler method is written at xj as:

(u_j^{n+1} − u_j^n)/Δt + Φ_j(u^n) = 0.

We observe that for this type of method, it is easy to move the solution forward in time (update it):

u_j^{n+1} = u_j^n − Δt Φ_j(u^n).

This scheme is of order one in time, but it is possible to increase the order by using Runge–Kutta
[Shu, Osher-1988], [Spiteri, Ruuth-2002] multistep schemes.

• Implicit method. In a time implicit method, the spatial discretization is expressed as a function
of the solution u at time t_{n+1}. For the implicit Euler method, which is also of order one in time,
this same equation is written as:

(u_j^{n+1} − u_j^n)/Δt + Φ_j(u^{n+1}) = 0,

which is to say that:


u_j^{n+1} = u_j^n − Δt Φ_j(u^{n+1}).
It is immediately apparent that the time progression of the solution is much more complex
because Φ_j(u^{n+1}) is not known. We are then forced to linearize the space operator using a Taylor
expansion:

Φ_j(u^{n+1}) ≈ Φ_j(u^n) + (u_j^{n+1} − u_j^n) ∂Φ_j/∂u(u^n) = Φ_j(u^n) + δu_j^n ∂Φ_j/∂u(u^n),

with the notation δu_j^n = u_j^{n+1} − u_j^n. We notice that the term ∂Φ/∂u is a sparse matrix whose
size is the number of vertices times the number of vertices. The non-zero entries of this matrix
are given by the vertices that contribute to the calculation of the operator Φ. For example, if we
have Φ_j(u^n) = c (u_{j+1}^n − u_j^n)/Δx, we then get ∂Φ_j/∂u_j^n(u^n) = −c/Δx, which contributes
to the matrix at (j, j), and ∂Φ_j/∂u_{j+1}^n(u^n) = c/Δx, which contributes to the matrix at
(j, j + 1). The following linear matrix system is then obtained:

(I/Δt + ∂Φ/∂u(u^n)) δu^n = −Φ(u^n),

whose resolution gives us δu^n, I denoting the identity matrix. It can be seen that in the implicit
case, the progress in time of the solution requires the resolution of a linear system at every time
step, which is much more complex. Once solved, the time progression of the solution is given
as:

u_j^{n+1} = u_j^n + δu_j^n.
For implicit schemes, one can also increase the time accuracy order by using, for example,
the Crank–Nicolson method [Crank, Nicholson-1996], BDF (Backward Difference Formula)
schemes [Curtiss, Hirschfelder-1952] or implicit Runge–Kutta (IRK) schemes.

7.1.2. Spatial discretization

To begin, in one dimension, we note that the formulation is equivalent to a finite difference
scheme. In two or three dimensions, finite-difference type schemes can only be extended in the
presence of Cartesian grids and, therefore, do not allow complex (arbitrary) geometries to be
addressed. On the other hand, finite volume formulations can be built on unstructured meshes
(see below) and can be used for complex geometries.

Finite volume formulations are based on the use of the divergence formula on the conservative
form of the equation under consideration. We consider the case where spatial discretization is
based on a finite volume method centered on the vertices of the mesh. The basic principle is
that the domain is divided into cells (one-dimensional intervals) and the solution is assumed to
be constant over these cells. Therefore, the numerical unknowns unj represent approximations at
time tn of the average of u on the cell of index j. The discretized domain Ωh can then either be

written as the union of the elements Kj of the mesh, called primal mesh, or as the union of the
cells Cj , that we call dual mesh:

Ω h = ∪ j Kj = ∪ j C j .

To move forward with the presentation, we choose a uniform mesh size, namely:

x_j = j Δx,
x_{j+1/2} = (x_j + x_{j+1})/2 = (j + 1/2) Δx,        [7.2]
K_j = [x_j, x_{j+1}],
C_j = [x_{j−1/2}, x_{j+1/2}],

with x_j the vertex j of the mesh, x_{j+1/2} the midpoint between vertices x_j and x_{j+1}, x_{j−1/2} the
midpoint between vertices x_{j−1} and x_j, K_j the element (a segment) of the primal mesh and C_j
the cell centered at x_j. We can rewrite the transport equation:

∂u/∂t(x, t) + c ∂u/∂x(x, t) = 0,
in variational form. Let V_h be the set of continuous P1 functions on the mesh; we are looking
for u_h ∈ V_h such that for all ψ_h ∈ V_h one has:

∫_{Ω_h} (∂u_h/∂t) ψ_h dx + c ∫_{Ω_h} (∂u_h/∂x) ψ_h dx = 0.

We can reformulate this variational formulation on the set of the cells of the dual mesh by taking
as test function ψh the characteristic function of Cj (namely the function ψh (x) which is equal
to 1 if x ∈ Cj and 0 otherwise). This brings us to:

Σ_j ∫_{C_j} (∂u_h/∂t) dx + c Σ_j ∫_{C_j} (∂u_h/∂x) dx = 0,

and according to the Green–Ostrogradski theorem (divergence theorem), we get:

Σ_j ∫_{C_j} (∂u_h/∂t) dx + Σ_j ∫_{∂C_j} (c u_h) n dσ = 0,

where n is the outward-oriented unit normal to the edge ∂Cj of the cell Cj .

One can now look into the discretization at the cell level, namely the terms:

∫_{C_j} (∂u_h/∂t) dx   and   ∫_{∂C_j} c u_h n dσ.

The discretization of the first term has been explained in section 7.1.1. In the following, we study
the discretization of the second term.

Before continuing, let us recall that a numerical scheme is said to be conservative if there
exists a numerical flux function φ such that the scheme is written in the form (taking the Euler
method in time):

(u_j^{n+1} − u_j^n)/Δt + Φ_j(u^n) = (u_j^{n+1} − u_j^n)/Δt + (φ_{j+1/2} − φ_{j−1/2})/Δx = 0,        [7.3]

where the numerical flux function φ_{j+1/2} (respectively φ_{j−1/2}) between the cells C_j and C_{j+1}
(respectively C_{j−1} and C_j) is given by φ_{j+1/2} = φ(u_j^n, u_{j+1}^n) (respectively
φ_{j−1/2} = φ(u_{j−1}^n, u_j^n)).

• As a first possible discretization of the space term, one considers the following scheme,
explicit in time (Euler method) and centered in space1:

(u_j^{n+1} − u_j^n)/Δt + c (u_{j+1}^n − u_{j−1}^n)/(2Δx) = 0,        [7.4]

that is, for the spatial discretization operator and the numerical flux function, we have chosen:

Φ_j(u^n) = c (u_{j+1}^n − u_{j−1}^n)/(2Δx),   φ_{j+1/2} = φ(u_j^n, u_{j+1}^n) = c (u_{j+1}^n + u_j^n)/2
and φ_{j−1/2} = φ(u_{j−1}^n, u_j^n) = c (u_j^n + u_{j−1}^n)/2.
This scheme is of order one in time and order two in space. Let us study the stability of this
scheme following a Fourier analysis. To this end, we set:

u_j^n = ũ_j^n e^{ikjΔx},

and, replacing in equation [7.4] and using the Euler formulas2, we obtain:

ũ_j^{n+1} − ũ_j^n + i c (Δt/Δx) sin(kΔx) ũ_j^n = 0,

which can be rewritten in the form:

ũ_j^{n+1} = (1 − iα) ũ_j^n,   with   α = c (Δt/Δx) sin(kΔx).

The amplification coefficients G_k = (1 − iα) are therefore all of modulus |G_k| = √(1 + α²) > 1
(as soon as sin(kΔx) ≠ 0). The instability of this scheme can be deduced independently of Δt. The
scheme is said to be unconditionally unstable, which means that the solution will diverge over the
iterations. We are therefore going to propose a stable scheme that takes into account the
non-symmetrical structure of the advection equation.

1. In a space-central scheme, the spatial derivative is symmetrical with respect to the vertex xj in which we
evaluate this derivative.
2. cos(x) = (e^{ix} + e^{−ix})/2 and sin(x) = (e^{ix} − e^{−ix})/(2i).
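The unconditional instability of the centered scheme can be checked numerically (a small verification script; the values of c, Δt and Δx are chosen arbitrarily):

```python
import numpy as np

# Amplification factor of the explicit centered scheme: G_k = 1 - i*alpha
# with alpha = c dt/dx sin(k dx). Its modulus sqrt(1 + alpha^2) exceeds 1
# for every mode with sin(k dx) != 0, whatever the time step dt.
c, dx = 1.0, 1.0
for dt in (0.001, 0.1, 1.0):
    for k in np.linspace(0.2, 2.8, 6):       # modes with sin(k dx) > 0
        alpha = c * dt / dx * np.sin(k * dx)
        G = 1.0 - 1j * alpha
        assert np.isclose(abs(G), np.sqrt(1.0 + alpha**2))
        assert abs(G) > 1.0                  # amplified mode: instability
```

Even a tiny time step does not help: every such mode is amplified at each iteration, which is exactly what "unconditionally unstable" means.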

• As a second possible discretization of the spatial term, we propose to use an upwind scheme.
The idea of upwinding is natural for the transport equation where, as we said, there is an upstream
and a downstream. So the flow of information comes from upstream. We can physically explain
the instability of centered schemes by the fact that they search for information downstream. First,
it is assumed that c > 0. Taking as a numerical flux function:

φ_{j+1/2} = φ(u_j^n, u_{j+1}^n) = c u_j^n   and   φ_{j−1/2} = φ(u_{j−1}^n, u_j^n) = c u_{j−1}^n

in equation [7.3], the left upwind scheme is written as:

(u_j^{n+1} − u_j^n)/Δt + c (u_j^n − u_{j−1}^n)/Δx = 0.        [7.5]
It is obviously conservative. This scheme is order one in time and order one in space. Let us
study the stability of this scheme using Fourier analysis. To do this, we set again:

unj = ũnj eikjΔx ,

and, replacing that in equation [7.5], we obtain:

ũ_j^{n+1} − ũ_j^n + c (Δt/Δx) (1 − e^{−ikΔx}) ũ_j^n = 0,

which can be rewritten in the form:

ũ_j^{n+1} = (1 − c (Δt/Δx)(1 − e^{−ikΔx})) ũ_j^n = (1 − c Δt/Δx + c (Δt/Δx) e^{−ikΔx}) ũ_j^n = G_k ũ_j^n.

If one sets ν = c Δt/Δx (with c > 0), then one has |G_k| ≤ |1 − ν| + |ν|, ∀k. It is deduced
that |G_k| ≤ 1, ∀k, if 0 ≤ ν ≤ 1. This scheme is therefore stable and monotonic under the
Courant–Friedrichs–Lewy (CFL) condition:

ν = c Δt/Δx ≤ 1.
We have the following properties of upwind schemes:
– the left upwind scheme is stable and monotonic if c > 0 and ν ≤ 1;
– the right upwind scheme, where the numerical flux φ_{j+1/2} is given by:

φ_{j+1/2} = φ(u_j, u_{j+1}) = c u_{j+1},

is stable and monotonic if c < 0 and ν ≤ 1. It should be noted that φ_{j+1/2} depends on u_j and on
u_{j+1};

– the upwind scheme, where the numerical flux φ_{j+1/2} is given by:

φ_{j+1/2} = φ(u_j, u_{j+1}) = c⁺ u_j + c⁻ u_{j+1},   with c⁺ = max(c, 0) and c⁻ = min(c, 0),

is stable and monotonic if ν ≤ 1.
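The left upwind scheme is simple enough to be sketched in a few lines (periodic boundary conditions are an assumption of this example; with ν = 1 the scheme transports the discrete profile exactly, one cell per time step):

```python
import numpy as np

def upwind_advect(u0, c, dx, dt, nsteps):
    """Explicit left upwind scheme for u_t + c u_x = 0 with c > 0 and
    periodic boundaries, stable and monotonic under nu = c dt/dx <= 1."""
    nu = c * dt / dx
    assert 0.0 <= nu <= 1.0, "CFL condition violated"
    u = u0.copy()
    for _ in range(nsteps):
        # u_j^{n+1} = u_j^n - nu (u_j^n - u_{j-1}^n)
        u = u - nu * (u - np.roll(u, 1))
    return u
```

With ν = 1 one has u_j^{n+1} = u_{j−1}^n, so after n steps the initial profile is shifted by n cells, reproducing the exact solution u_0(x − ct) at the grid points; with ν < 1 the solution stays bounded by the initial extrema (monotonicity) at the price of numerical dissipation.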



• The accuracy of the schemes can be improved by geometrical constructions using Taylor
expansions based on the neighboring nodes. We are considering the MUSCL-type expansion
[Van Leer-1974] (Monotone Upwind Schemes for Conservation Laws) of the upwind scheme
seen above. The treatment for the spatial part of this conservative scheme uses the same function
of numerical flux as the first-order upwind scheme, but applied to different arguments, obtained
by affine extrapolation in every cell. The flux is written as:

φ_{i+1/2} = c⁺ u_{i+1/2}⁻ + c⁻ u_{i+1/2}⁺ ≡ φ(u_{i+1/2}⁻, u_{i+1/2}⁺),

where φ is the numerical flux function of the first-order upwind scheme, and the interpolated
states on either side of the interface of two consecutive cells (at x_{i+1/2}), u_{i+1/2}⁻ and u_{i+1/2}⁺, are
given by:

u_{i+1/2}⁻ = u_i + (1/2) [ (1 − β) Δu_{i+1/2} + β Δu_{i−1/2} ],
u_{i+1/2}⁺ = u_{i+1} − (1/2) [ (1 − β) Δu_{i+1/2} + β Δu_{i+3/2} ],

where β is an upwind parameter and the symbol Δu_{i+1/2} represents the slope given by:

Δu_{i+1/2} = u_{i+1} − u_i,        [7.6]

and Δu_{i−1/2} and Δu_{i+3/2} are similarly defined. One obtains centered slopes in the cells when
β = 0 and totally off-center slopes when β = 1.

This scheme is always at least of order two in space. It is of order three in space when
β = 1/3. As we discussed in section 7.1.1, the order in time can be increased using multistep
Runge–Kutta schemes.
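The MUSCL reconstruction of the interface states can be sketched as follows (periodic indexing is an assumption of the example; note that for linear data the reconstruction recovers the exact interface value for any β):

```python
def muscl_states(u, i, beta):
    """Interpolated states u_{i+1/2}^- and u_{i+1/2}^+ on either side of
    the interface x_{i+1/2}, following the beta-scheme of the text."""
    n = len(u)
    slope = lambda j: u[(j + 1) % n] - u[j % n]      # Delta u_{j+1/2}
    um = u[i % n] + 0.5 * ((1 - beta) * slope(i) + beta * slope(i - 1))
    up = u[(i + 1) % n] - 0.5 * ((1 - beta) * slope(i) + beta * slope(i + 1))
    return um, up
```

These two states are then passed, unchanged, to the first-order upwind flux function, which is the whole point of the MUSCL construction: the flux formula stays the same, only its arguments gain accuracy.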

7.2. Finite volume methods for two-dimensional Euler equations

Consider Eulerian flows; in other words, the gas is assumed to be perfect and non-viscous, and
there is no thermal diffusion. Let ρ denote the density, U = (u, v) the velocity vector, T the
temperature, E = T + ‖U‖²/2 the total energy, p = (γ − 1)ρT the pressure with γ the specific
heat ratio, and ∇ the gradient operator. The Euler equations are written in conservative form in
the following way:
∂ρ/∂t + ∇ · (ρU) = 0,
∂(ρU)/∂t + ∇ · (ρU ⊗ U) + ∇p = 0,
∂(ρE)/∂t + ∇ · ((ρE + p)U) = 0,
where one successively has the conservation equations of mass, momentum and energy, expressed
in tensor form.

Before presenting the way in which Euler’s equations are discretized in two spatial dimen-
sions, we are going to rewrite them in symbolic form. These equations in conservative form are
written as:
∂W/∂t + ∇ · F(W) = 0   where   F(W) = (F_1(W), F_2(W)),        [7.7]

⇐⇒   ∂W/∂t + ∂F_1(W)/∂x + ∂F_2(W)/∂y = 0,        [7.8]
with (x, y) belonging to a bounded open set Ω(t) of R² with boundary Γ(t), and t ∈ [0, T] the
time variable defined in R⁺. In the above expression, W = ᵗ(ρ, ρu, ρv, ρE) is the vector of
conservative variables and the Euler fluxes F_i represent the convective operator:
F_1(W) = (ρu, ρu² + p, ρuv, u(ρE + p))ᵀ,
F_2(W) = (ρv, ρuv, ρv² + p, v(ρE + p))ᵀ.

7.2.1. Spatial discretization

Let Ω_h be a mesh of the computational domain Ω in simplexes (triangles) K, which forms
a cover of the computational domain: Ω_h = ∪_i K_i. There are two finite volume approaches
that each have their own advantages and disadvantages. There are cell-centered methods, that
is to say that cells are formed from the elements of the mesh, and vertex-centered methods,
where the cells are built around each grid vertex and form a dual partition of the computational
domain. The choice of either approach will drastically change the construction of the finite
volume method. Here, we shall just present the vertex-centered approach, which has proven to be
far more appropriate in the context of anisotropic tetrahedral mesh adaptation (Chapter 8,
Volume 2), thus in three dimensions.

The vertex-centered finite volume formulation consists of associating with each vertex Pi of
the mesh a control cell Ci , also known as a finite volume cell. Finite volume cells form a dual
partition of the computational domain: Ω_h = ∪_i C_i. Let ∂C_i denote the boundary of the finite
volume cell Ci , ηi the outgoing normal on ∂Ci and n the outgoing normal on the boundary Γ of
domain Ω.

Let V_h be the set of continuous P1 functions on the mesh; the variational formulation of
[7.7] in V_h is written as:

Find W_h ∈ (V_h)⁴, such that for all ψ_h ∈ V_h one has:
∫_{Ω_h} (∂W_h/∂t) ψ_h dΩ_h + ∫_{Ω_h} ∇ · F(W_h) ψ_h dΩ_h = 0.

As in one dimension, this variational formulation can be reformulated on the set of cells of the
dual mesh by taking as test function ψ_h the characteristic function of C_i (that is, the function
ψ_h(x, y), which is equal to 1 if (x, y) ∈ C_i and 0 otherwise):

Σ_i ∫_{C_i} (∂W_h/∂t) dxdy + Σ_i ∫_{C_i} ∇ · F(W_h) dxdy = 0,

which according to the Green–Ostrogradski theorem (divergence theorem) becomes:

Σ_i ∫_{C_i} (∂W_h/∂t) dxdy + Σ_i ∫_{∂C_i} F(W_i^n) · η_i dγ = 0.

For each cell Ci , by considering an explicit time Euler scheme, the previous formulation becomes
on each control volume:

|C_i| (W_i^{n+1} − W_i^n)/Δt + ∫_{∂C_i} F(W_i^n) · η_i dγ = 0,        [7.9]

where |Ci | is the area of the finite volume cell Ci and Win is the mean value of the solution W
on the cell Ci at time tn .

7.2.1.1. Finite volume cell definition


• Median cells. There are several ways to define cells; we consider here cells of the median-
type then some indications will be given on other cell types. These cells are built around each
vertex, Pi , of the simplicial mesh as follows:

1) one divides each triangle K = [P_i P_j P_k] by building quadrilaterals. For the vertex P_i,
this quadrilateral has as vertices:
i) the vertex Pi ;
ii) the two midpoints of the edges of K incident at Pi , that is Mij and Mki ;
iii) and the center of gravity of the triangle K, the point G;
2) the cell Ci is formed by the union of all the quadrilaterals linked to Pi .

Figure 7.1 illustrates the construction of a median cell: on the left, an element of the mesh; in
the middle, a cell around an inner vertex; on the right, a cell around a vertex on the domain
boundary. In the latter case, the cell is closed by the half boundary edges associated with the
vertex Pi . Figure 7.2 (on the left) shows the dual mesh in red, the median cells, associated with
the primal triangular mesh drawn in black. When this type of cell is used, an equivalence can be
shown between the finite element method and the finite volume method [Dervieux et al. 1992],
[Selmin, Formaggia-1996] [Barth, Larson-2002], which makes them very attractive.

In the formulation [7.9], the area of the cell |Ci | must be calculated. In the case of median
cells, this calculation is easy, because each quadrilateral dividing each triangle has an area that
is one-third that of the triangle. So, to calculate |Ci |, one just has to loop over the triangles and
assemble the areas of the cells using:

|C_i| = Σ_{K_j ∋ P_i} |K_j| / 3,

where |K_j| is the area of the triangle K_j.
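The corresponding loop is straightforward (an illustrative sketch; the triangle area is computed with the cross product of two edge vectors):

```python
import numpy as np

def median_cell_areas(points, triangles):
    """Areas |C_i| of the median cells: every triangle gives one third
    of its area to each of its three vertices."""
    areas = np.zeros(len(points))
    for (i, j, k) in triangles:
        (xi, yi), (xj, yj), (xk, yk) = points[i], points[j], points[k]
        # triangle area from the cross product of edges PiPj and PiPk
        tri = 0.5 * abs((xj - xi) * (yk - yi) - (xk - xi) * (yj - yi))
        for v in (i, j, k):
            areas[v] += tri / 3.0
    return areas
```

By construction the cell areas sum to the area of the whole mesh, since each triangle is distributed exactly once.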

The median cells are easily generalized to dimension three by cutting each tetrahedron into
four hexahedra defined using the centers of the edges, the centers of gravity of the faces and

the center of gravity of the tetrahedron. For hybrid meshes composed of tetrahedra, pyramids,
prisms and hexahedra, the same procedure is applied. Each type of element is subdivided into
hexahedra by connecting the edge midpoints, the centers of gravity of the faces and the center of
gravity of the element.

Other cell types can be considered in finite volume methods.

• Voronoi cells. Some of the methods consider Voronoi cells to build the dual mesh, but it
requires the mesh to be self-centered, that is that the vertices of the mesh must belong to the cells
associated with them. The advantage of this approach is that the faces of the cells are orthogonal
to the mesh edges, but the self-centering constraint is much too strong and inapplicable when
dealing with complex (arbitrary) geometry.

• Barth cells. Barth proposed an alternative to Voronoi cells, taking as point G either the
center of the circumscribed circle if it is in the element or the midpoint of the largest edge of the
triangle in the other case [Barth-1992]. G is, in fact, the center of the smallest circle containing
the triangle. Orthogonality is always present if the centers of the circumscribed circles are inside
the elements, but is lost in the other case. The interest of this approach is that, if the triangular
mesh is structured (that is, formed of quadrilaterals subdivided into two triangles), then dual
cells are obtained that are quadrilaterals, as can be seen in Figure 7.2 (on the right). We know
how to easily generalize Barth cells in three dimensions because one just has to take the center
of the smallest sphere containing the tetrahedron but unfortunately the dual hexahedron is lost
when generating a tetrahedral-structured mesh from prisms and/or hexahedra. The only way to
find them is to make a very specific cut of the hexahedra into tetrahedra, which was given by
Gourvitch et al. [Gourvitch et al. 2004].

Figure 7.1. Median finite volume cells. On the left, the construction of the three quadrilaterals
associated with the median cells of vertices Pi , Pj and Pk inside a triangle. In the middle, the
median cell around the vertex Pi formed by the union of all the quadrilaterals linked to Pi . On
the right, the median cell around the vertex Pi on the boundary of the domain. The cell is
closed by the half edges associated with Pi

7.2.1.2. Calculation of upwind conservative fluxes


Let Vois(Pi ) denote the set of vertices neighboring Pi and, for Pj ∈ Vois(Pi ), Cj denotes its
associated cell. The common boundary ∂Cij = ∂Ci ∩ ∂Cj between two control volumes Ci and
Cj is decomposed into two segments or bi-segments that join Mij , the midpoint of the segment
[P_i P_j], to the centers of gravity of the triangles having the segment [P_i P_j] as an edge
(Figure 7.3, on the left). The integration of the flux term on the cell boundary is then done by
decomposing the cell boundary into the bi-segments ∂C_ij:
∫_{∂C_i} F(W_i^n) · η_i dγ = Σ_{P_j ∈ Vois(P_i)} ∫_{∂C_ij} F_ij · η_i dγ + ∫_{∂C_i ∩ Γ} F(W_i^n) · n dγ,        [7.10]

where Fij represents the constant value of F (W ) on the interface ∂Cij and for the boundary
term, one has F (Win ) = F (WΓ ) on ∂Ci ∩ Γ. To calculate the flux, a numerical flux function is
used and denoted φij :

φ_ij = φ_ij(W_i, W_j, η_ij) = ∫_{∂C_ij} F_ij · η_i dγ,        [7.11]

where η_ij = ∫_{∂C_ij} η_i dγ = η_ij1 + η_ij2 = (M_ij G_{K1})⊥ + (M_ij G_{K2})⊥ (see Figure 7.3). The
∂Cij
numerical flux function approximates the hyperbolic terms on the bi-segment ∂Cij : we note that
the calculation of the convective fluxes φij at the interface ∂Cij is done in a one-dimensional
way in the direction of the normal ηij at the boundary of a control volume; this is tantamount to
locally solving a one-dimensional Riemann problem.
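The resulting data structure is the classic edge-based loop of vertex-centered finite volume solvers (a schematic sketch; the edge list and the precomputed interface normals η_ij are assumed to have been built from the mesh by the algorithms discussed in this book, and a scalar unknown is used to keep the example short):

```python
import numpy as np

def accumulate_fluxes(nvert, edges, eta, W, numflux):
    """Edge-based assembly of the flux balance of every cell: the flux
    phi_ij, computed once per edge (i, j), is added to cell i and
    subtracted from cell j, so the scheme is conservative by
    construction (internal fluxes cancel two by two)."""
    R = np.zeros(nvert)
    for (i, j), eta_ij in zip(edges, eta):
        phi = numflux(W[i], W[j], eta_ij)
        R[i] += phi
        R[j] -= phi
    return R
```

Whatever the numerical flux function, the sum of the residuals over all cells vanishes for internal edges, which is the discrete counterpart of conservation.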

Figure 7.2. On the left, illustration of the dual median cell mesh (in red) of a triangular mesh
(in black) where there is a structured area and an unstructured area. On the right, illustration
of the dual Barth cell mesh (in red) and the same triangular mesh (in black)

Several types of numerical flux functions are possible. If one considers a central numerical
flux function for F_ij on the segment P_i P_j, we then obtain a central scheme:

φ_ij = (F(W_i) + F(W_j))/2 · η_ij,

which corresponds to the central explicit Euler scheme (presented previously, relation [7.4]). We
have seen that this scheme is not stable, especially near shocks and discontinuities. A monotonic
scheme is then constructed by introducing an upwind scheme, that is an upwind numerical flux
function is used. This upwind can be seen as the addition of a numerical dissipation term to the
centered flux. Therefore, the upwind numerical flux function is written as follows:

φ_ij(W_i, W_j, η_ij) = (F(W_i) + F(W_j))/2 · η_ij + d(W_i, W_j, η_ij),        [7.12]

where the function d(W_i, W_j, η_ij) contains the upwind terms and depends on the scheme being
used. This expression of φ consists of a centered term and a term d that contains the internal
numerical viscosity of the scheme. The upwind term should verify the consistency relation
d(W_i, W_i, η_ij) = 0, so that φ_ij(W_i, W_i, η_ij) = F(W_i) · η_ij.
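For the scalar advection flux F(w) = c w, the dissipation term reduces to d = −|c · η|/2 (w_j − w_i), and the general expression [7.12] collapses to the pure upwind flux (a scalar sketch; here c_eta stands for the scalar product c · η_ij):

```python
def upwind_flux_scalar(wi, wj, c_eta):
    """Upwind numerical flux for F(w) = c w, written in the form of
    relation [7.12]: a centered part plus a dissipation term d."""
    centered = 0.5 * (c_eta * wi + c_eta * wj)
    d = -0.5 * abs(c_eta) * (wj - wi)       # internal numerical viscosity
    return centered + d
```

One checks that φ = c_eta · w_i when c_eta > 0, φ = c_eta · w_j when c_eta < 0, and that d(w_i, w_i) = 0 (consistency), exactly as required above.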

7.2.1.2.1. Second-order spatial scheme


The numerical flux function whose general expression is given by relation [7.12] assumes
that the states W_i and W_j are constant on the cells C_i and C_j. Such an approximation leads to a
spatially first-order accurate numerical scheme. In order to increase the accuracy of the scheme
(thus reducing numerical dissipation), we use the MUSCL method originally introduced by Van
Leer [Van Leer-1972], [Van Leer-1974] and adapted to the finite element case (triangular and
tetrahedral) by Dervieux et al. [Stoufflet et al. 1987], [Dervieux et al. 1992]. This method con-
sists, while preserving the same expression of the numerical flux, in modifying the arguments of
this function by raising the interpolation order of the state variable W , which has already been
seen previously in one dimension. Thereby, the following approximation is achieved:

φij = φij (Wij , Wji , ηij ) ,

where Wij and Wji are the values extrapolated to the left and right of the interface ∂Cij (see
Figure 7.3). We consider a piecewise linear interpolation that leads to an accurate second-order
spatial scheme. We choose the “physical variables” as interpolation variables, that is, the
variables of the vector (ρ, u, v, p), which we will still denote by W, are extrapolated. The reason
for this choice is simple: if we extrapolate the conservative variables, then we cannot guarantee the
positivity of the physical variables such as the pressure. To extrapolate the variables, a Taylor
expansion is used to obtain the values on either side of the interface; the interpolation formulas
giving the states are written as:

W_ij = W_i + (1/2) (∇W)_ij · P_iP_j,
W_ji = W_j + (1/2) (∇W)_ji · P_jP_i,        [7.13]
where (∇W)ij and (∇W)ji are the gradients associated with points Pi and Pj, respectively, used to
perform the extrapolation. As with the one-dimensional case, we can define central and upwind
gradients.

The central gradient associated with the edge Pi Pj is implicitly defined by the relation:
(∇W)^C_ij · PiPj = Wj − Wi .
Meshes and Finite Volume Calculation 255

The calculation of the upwind gradients, such as the first one, appears rather as an extension
of what happens in the one-dimensional case. With each Pi Pj segment, two triangles denoted
Kij and Kji are associated, which are the triangles, respectively, belonging to the balls of Pi
and Pj intersected by the line (Pi Pj ) (Figure 7.3). The triangles Kij and Kji are referred to
as the upstream and downstream triangles, respectively. They are used to define totally upwind
gradients at the vertices Pi and Pj :

(∇W)^D_ij = (∇W)|Kij   and   (∇W)^D_ji = (∇W)|Kji .

One can then construct the gradients parameterized by β ∈ [0, 1] as follows:


(∇W)ij · PiPj = (∇W)^β_ij · PiPj = (1 − β) (∇W)^C_ij · PiPj + β (∇W)^D_ij · PiPj .

The scheme is centered for β = 0 and totally upwind for β = 1. In our case, we shall use
β = 1/3, which gives a third-order upwind scheme. In the linear scalar case with a structured
regular mesh, one can show that this scheme is spatially third-order accurate. For nonlinear
cases and unstructured meshes, this scheme is of second order but with low numerical dissipation.
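As an illustration, the extrapolation [7.13] combined with the β-parameterized gradient can be sketched in a few lines. This is a Python sketch for one scalar component of W; the function name and the way the nodal gradients are passed are our own choices, not the book's implementation:

```python
import numpy as np

def muscl_extrapolate(w_i, w_j, grad_i, grad_j, p_i, p_j, beta=1.0/3.0):
    """Extrapolate the states on both sides of the interface between P_i and P_j
    (relation [7.13] with the beta-parameterized gradients).

    grad_i, grad_j: nodal gradients of w used on each side (e.g. the fully
    upwind gradients computed on the upstream/downstream triangles).
    beta = 0 gives the centered scheme, beta = 1 the fully upwind one and
    beta = 1/3 the low-dissipation third-order upwind scheme."""
    e = p_j - p_i                                   # edge vector PiPj
    centered = w_j - w_i                            # (grad^C)_ij . PiPj
    slope_i = (1.0 - beta) * centered + beta * np.dot(grad_i, e)
    slope_j = (1.0 - beta) * (-centered) + beta * np.dot(grad_j, -e)
    w_ij = w_i + 0.5 * slope_i                      # extrapolated left state
    w_ji = w_j + 0.5 * slope_j                      # extrapolated right state
    return w_ij, w_ji
```

For a linear field, any β reproduces the exact value at the edge midpoint, which is a convenient sanity check of an implementation.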

7.2.1.2.2. Limiting functions


The previous centered, upwind or third-order upwind gradients are in fact rarely used directly.
Indeed, the resulting scheme is not monotone and can introduce extrema that did not exist,
especially in the case of transonic and supersonic flows. This can lead to negative densities or
pressures, which are non-physical. This problem is addressed by the use of "limitation"
procedures. We replace the gradient appearing in relation [7.13] by a limited gradient denoted
(∇W)^lim_ij, which is a function, denoted F, of the three previous gradients:

(∇W)^lim_ij = F ((∇W)^C_ij , (∇W)^D_ij , (∇W)^HO_ij ) .
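The text leaves the limiting function F unspecified. Purely as an illustration, here is a minmod-type combination of the three edge slopes (the gradients dotted with PiPj); this is one classical family of limiters among many, and not necessarily the one used by the authors:

```python
def minmod(a, b):
    """Return 0 if the two slopes disagree in sign, else the one of smaller
    magnitude: the classical building block of slope limiters."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slope(centered, upwind, high_order):
    """One possible limited combination of the three edge slopes
    (grad^C . PiPj, grad^D . PiPj, grad^HO . PiPj): a hypothetical
    illustration, not the book's limiter."""
    m = minmod(centered, upwind)
    if m == 0.0:
        return 0.0                  # local extremum: kill the slope
    return minmod(m, high_order)
```

Whatever the exact formula, the key property is that the limited slope vanishes at local extrema, which prevents the creation of new over/undershoots.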

7.2.1.2.3. Calculation of boundary fluxes


In the flux equation [7.10], there is a boundary flux term that depends on the boundary
conditions under consideration. We shall present here the two most common boundary conditions
for the Euler equations: the slip condition for solid walls and the free-flow condition at
infinity to simulate an external flow. In practice, there are many other boundary conditions,
such as conditions of periodicity, Dirichlet, imposed pressure, etc., especially in internal
flow applications.

Figure 7.3. On the left, illustration of the bi-segment associated with the edge [Pi Pj] forming
the interface ∂Cij between cells Ci and Cj and the normals associated with every segment.
The normal ηij of the interface ∂Cij (of the bi-segment) is the sum of the segment normals. In
the middle, the values Wij and Wji extrapolated from both sides of the interface, used for the
MUSCL reconstruction. On the right, the upstream and downstream elements of the edge Pi Pj
used in the MUSCL reconstruction

For the slip boundary condition for solid walls, we weakly impose that:

U · n = 0 ,   [7.14]

where U is the velocity and n is the normal to the wall. To this end, one calculates the
boundary flux φslip(W, W̄) between the boundary state W and its mirror state W̄, which is
given by:

W = (ρ, ρU, ρE)^t   and   W̄ = (ρ, ρU − 2 ρ (U · n) n, ρE)^t .

If condition [7.14] is verified, then we have W̄ = W, which implies φslip(W, W) = F(W) (the
flux operator is consistent). Moreover, since W verifies relation [7.14], the Euler fluxes F(W)
simplify into:

φslip = F(W) = (0, p n, 0)^t .

The latter form of the flux for the slip condition is commonly used even though it is exact only
if the state verifies the slip condition. The other way to calculate the flux for the slip
condition is to calculate the flux between the state and its mirror state using an approximate
Riemann solver.
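A minimal sketch of the mirror state and of the simplified wall flux in two dimensions, assuming a perfect gas law to recover the pressure (the function names are ours):

```python
import numpy as np

def mirror_state(w, n):
    """Mirror state for the slip condition in 2D: the momentum is reflected
    about the wall. w = (rho, rho*u, rho*v, rho*E), n = unit outward normal."""
    rho, mx, my, rho_e = w
    m = np.array([mx, my])
    m_bar = m - 2.0 * np.dot(m, n) * n              # rho*U - 2 rho (U.n) n
    return np.array([rho, m_bar[0], m_bar[1], rho_e])

def slip_flux(w, n, gamma=1.4):
    """Simplified wall flux (0, p*n, 0)^t, exact when w satisfies U.n = 0
    (perfect gas assumed to recover the pressure p)."""
    rho, mx, my, rho_e = w
    p = (gamma - 1.0) * (rho_e - 0.5 * (mx * mx + my * my) / rho)
    return np.array([0.0, p * n[0], p * n[1], 0.0])
```

A state tangent to the wall is its own mirror, which is exactly the consistency property φslip(W, W) = F(W) mentioned above.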

To simulate an external flow, we use a free-flow condition at infinity where it is assumed that
the flow is uniform at infinity and given by a known state W∞:

W∞ = (ρ∞, (ρU)∞, (ρE)∞)^t .

To calculate the flux on the external (infinite) boundaries of the computational domain, one
then calculates the flux between the state at infinity W∞ and the current state W, either by
using an approximate Riemann solver or by using the Steger–Warming flux [Steger, Warming-1981],
which is completely upwind in the solution W:

φ∞ = A⁺(W, n) W + A⁻(W, n) W∞ ,

where A = ∂F/∂W is the Jacobian of the Euler fluxes, A⁺ = (A + |A|)/2 and A⁻ = (A − |A|)/2,
where |A| is the matrix obtained from A by taking the absolute values of its eigenvalues.

7.2.2. Time discretization

As seen in section 7.1.1, we can either use explicit time integration or implicit time integra-
tion. Exactly the same formulas are used as those previously presented. It was seen that there

was a CFL condition that must be met for explicit schemes to be stable. For the two-dimensional
Euler equations, this CFL condition for every vertex Pi of the mesh is given by:

ν = (ci + |Ui|) Δt(Pi) / h(Pi) ≤ 1 ,

where ci is the speed of sound and h(Pi) is the smallest height of the set of triangles
containing the vertex Pi (that is, the ball of elements of Pi). This relation allows the time
step Δt(Pi) to be calculated for each vertex Pi at each iteration. Usually, the calculation of
the time step is parameterized with a coefficient α:

Δt(Pi) = α h(Pi) / (ci + |Ui|)   with   α ≤ 1,   [7.15]

where the coefficient α represents the CFL number considered for the simulation. For stationary
simulations, one uses this local time step3, which considerably accelerates the convergence of
the simulation (by one order of magnitude). We are allowed to do so because, in the stationary
case, it is not necessary to be consistent in time.

In the case of unsteady simulations, one should be consistent in time, so all the vertices must
move forward at the same speed in time. In that case, a global time step is used:
Δt = min_Pi Δt(Pi).
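Both strategies can be sketched together. This is a Python illustration of formula [7.15]; the default value of α is an arbitrary choice for the example:

```python
import numpy as np

def time_steps(h, c, u_norm, alpha=0.8):
    """Per-vertex time step from [7.15]: dt(Pi) = alpha * h(Pi) / (ci + |Ui|).
    h: smallest heights, c: sound speeds, u_norm: velocity norms (one value
    per vertex). Returns the local steps (steady case) and the global step
    (unsteady case)."""
    dt_local = alpha * h / (c + u_norm)
    dt_global = dt_local.min()
    return dt_local, dt_global
```

In a steady computation each vertex advances with its own dt_local; in an unsteady one, every vertex uses dt_global.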

7.3. From theory to practice

We propose some data structures adapted to the purpose and describe schematically the main
steps of an explicit method and then of an implicit method.

7.3.1. Data structures

In a finite volume solver, there are three kinds of data structures:

– geometric data;
– solution data;
– in the implicit case, the matrix data.

In the following, we denote by nbv, nbe, nbf and nbt the numbers of vertices, edges, boundary
faces and elements.

For geometric data structures, the classical data read from the mesh file are used: the vertex
coordinates (Coor[nbv][dim]), the list of elements (Tri[nbt][3] or Tet[nbt][4]) and the list of
boundary faces (Efr[nbf][2] or Tfr[nbf][3]). Then, we saw (equation [7.10]) that the fluxes

3. Every vertex moves forward with its own time step.



are calculated edge by edge. Therefore, in two and three dimensions, we need the array of mesh
edges (Edg[nbe][2]).

Concerning the finite volume cells, we notice (section 7.2.1.2) that it is not necessary to
store the polygonal structure of the cells, which would be expensive in terms of memory. In
effect, the cell boundaries only appear through the vector ηij of each edge PiPj. The vector
ηij, as well as its norm (which is the area of the face ∂Cij through which the flux is
calculated), thus merely has to be stored for each edge. In two dimensions, one could consider
re-calculating ηij every time, but that would require storing for every edge the two triangles
that share it. In three dimensions, however, this is not at all advantageous because one would
have to store the shell of elements of every edge. Therefore, we store ηij, which is calculated
during pre-processing (EdgVno[nbe][dim + 1]). Finally, formula [7.9] shows that we also need the
area of the cells |Ci|. Since this area is calculated by assembly, it is also advantageous to
calculate it during pre-processing and to store it. This avoids storing the element balls to
calculate it on the fly. So we have a cell area associated with each vertex (CelAir[nbv]). In
summary, the finite volume cells are implicitly stored via the cell areas and the vectors ηij,
which avoids resorting to arrays of variable size, because the number of faces of each cell is
not known in advance. For the second-order spatial scheme using the MUSCL method, we need to
know for each edge the upstream and downstream elements. Since we do not want to store the point
balls and re-calculate them at every iteration, these two elements are calculated during
pre-processing and stored for each edge (EdgElt[nbe][2]). To calculate the solver time step at
each iteration (equation [7.15]), we need the heights h(Pi) associated with the vertices. Once
again, this value is calculated during pre-processing and is stored for every vertex
(VerHgt[nbv]).
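A sketch of part of this pre-processing in Python: the edge table is built by hashing (a plain dictionary here) while the cell areas are accumulated, assuming median dual cells, for which the cell of a vertex covers one third of every incident triangle in 2D. Names and the 0-based indexing are ours:

```python
import numpy as np

def geometric_preprocessing(coor, tri):
    """Build the unique edge array and the median-cell areas (CelAir).
    coor: (nbv, 2) vertex coordinates; tri: (nbt, 3) vertex indices, 0-based."""
    edges = {}                                  # (min, max) endpoints -> edge index
    cell_area = np.zeros(len(coor))
    for t in tri:
        a, b, c = (coor[v] for v in t)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        for v in t:
            cell_area[v] += area / 3.0          # median cell: 1/3 per triangle
        for i, j in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            edges.setdefault((min(i, j), max(i, j)), len(edges))
    edg = np.array(sorted(edges, key=edges.get))
    return edg, cell_area
```

On a valid mesh the cell areas must partition the domain area exactly, which gives a cheap global sanity check.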

For solution data structures, we need to store the solution and flux vectors at each vertex
(Sol[nbv][dim + 2] and Flu[nbv][dim + 2]). When solving stationary problems, one also stores the
local time step for each vertex (VerDt[nbv]). For some time integration schemes, one also needs
to store the solution at the beginning of the iteration (SolIni[nbv][dim + 2]).

In the case of implicit integration, a matrix system must be stored. We need the arrays
associated with the right-hand side of the system and the unknown vector (SysRhs[nbv][dim + 2]
and SysSol[nbv][dim + 2]). We also need to store the matrix, whose size is the number of
vertices times the number of vertices, where each entry is a block of size
(dim + 2) × (dim + 2). This matrix is sparse because the non-zero entries are the diagonal and
the pairs (i, j) associated with the edges of the mesh. Now, two choices are possible. Either
standard sparse matrix storage is considered, such as the CSR (Compressed Sparse Row) format
(Chapter 3), or alternatively, one can take advantage of the edge data structure to store the
matrix. In this case, the matrix is rewritten in the form:

A = L + D + U ,

where L is the strict lower part, D is the diagonal part and U is the strict upper part of the
matrix. D is thus an array whose size is the number of vertices times the block size, that is
(SysDia[nbv][(dim + 2) × (dim + 2)]). For some resolution methods, one may also need to store
D⁻¹. L and U are two arrays with a size equal to the number of edges times the size of the
blocks (SysLow[nbe][(dim + 2) × (dim + 2)] and SysUpp[nbe][(dim + 2) × (dim + 2)]).
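With this L + D + U storage, a matrix-vector product reduces to a loop over the edges plus a diagonal contribution. A sketch with the block shapes described above (0-based indices; the function name is ours):

```python
import numpy as np

def edge_matvec(sys_dia, sys_low, sys_upp, edg, x):
    """y = A x with A = L + D + U stored edge-wise:
    sys_dia[i] is the (dim+2)x(dim+2) diagonal block of row i,
    sys_upp[e] the block (i, j) and sys_low[e] the block (j, i)
    of edge e = (i, j)."""
    y = np.einsum('ikl,il->ik', sys_dia, x)     # diagonal contribution D x
    for e, (i, j) in enumerate(edg):
        y[i] += sys_upp[e] @ x[j]               # strict upper part U x
        y[j] += sys_low[e] @ x[i]               # strict lower part L x
    return y
```

This edge-wise product is the building block of the relaxation methods (Jacobi, Gauss-Seidel variants) commonly used to solve the implicit system.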

7.3.2. Resolution algorithms

The main steps of the solver for a time-explicit resolution are presented in Algorithm [7.16].

The first pre-processing step concerns the reading of the mesh and the initial solution, and
the construction of the geometric data structures. First, the edges of the mesh are constructed
by hashing (Chapter 4, Volume 1 and Chapter 1, Volume 3). Since this construction is done by
looping over the mesh elements, the areas of the cells, the heights associated with the vertices
and the vectors ηij can be calculated at the same time. The last geometric structure to be built
is that of the upstream and downstream elements of each edge. To find these two elements, the
balls of elements of every vertex are temporarily needed (Chapter 4, Volume 1). These elements
are found by calculating the intersection between the edge under consideration and the opposite
face of the element.

The second resolution step contains a main loop which, in the steady case, is iterated until
convergence or until the maximal number of iterations is reached and, in the unsteady case, is
iterated until the final simulation time. At each resolution iteration, four loops are run as
follows:

i) a loop over the vertices to calculate the time step;
ii) a loop over the edges to calculate the volume fluxes;
iii) a loop over the boundary faces to calculate the boundary fluxes;
iv) a loop over the vertices to make the solution move forward in time.

The third and last step, postprocessing, consists of calculating the functions of interest to
engineers, with which they will be able to quantify the design that they are studying. For
example, these might include drag and lift in aeronautics, pressure drops and flows in
turbomachinery or the impact of the blast of an explosion, to name just a few examples. In this
step, the various solution fields are also written.

Main steps for a time-explicit scheme [7.16]


– Preprocessing stage to build all the data structures.
– Mesh reading and initial solution.
– Geometric preprocessing.
– Resolution. Loop over the number of iterations:
- calculation of the time step;
- flux calculation:
i) calculation of volume fluxes;
ii) calculation of boundary fluxes;
- progression of the solution in time.
– End loop.
– Postprocessing step.
– Writing of the solution.
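The resolution loop of the scheme above can be sketched as follows. This is a Python skeleton, with boundary fluxes omitted and the flux and time-step callbacks left problem-dependent; the names are ours:

```python
import numpy as np

def explicit_solve(sol, edg, eta, cell_area, compute_dt, num_flux, n_iter):
    """Skeleton of the resolution loop of a time-explicit finite volume solver.
    sol: (nbv, nvar) states; edg: edges (i, j); eta: interface normal of each
    edge, oriented from i to j; cell_area: |Ci|."""
    for _ in range(n_iter):
        dt = compute_dt(sol)                         # i) per-vertex time steps
        flu = np.zeros_like(sol)
        for e, (i, j) in enumerate(edg):             # ii) volume fluxes, edge by edge
            phi = num_flux(sol[i], sol[j], eta[e])
            flu[i] -= phi                            # what leaves Ci ...
            flu[j] += phi                            # ... enters Cj
        # iii) boundary fluxes would be accumulated here, face by face
        sol = sol + (dt / cell_area)[:, None] * flu  # iv) advance in time
    return sol
```

A basic sanity check of such a loop is that a constant state is preserved exactly, since the fluxes around each closed cell cancel.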

In the case of an implicit time resolution, everything that is matrix-related must be added.
Algorithm [7.17] presents the case of an implicit time resolution.

Main steps for a time-implicit scheme [7.17]


– Preprocessing stage to build all the data structures.
– Mesh reading and initial solution.
– Geometric preprocessing.
– Matrix preprocessing.
– Resolution. Loop over the number of iterations:
- calculation of the time step;
- flux calculation:
i) calculation of volume fluxes;
ii) calculation of boundary fluxes;
- matrix assembly:
i) assembly of volume blocks;
ii) assembly of boundary blocks;
iii) assembly of the mass matrix;
- Resolution of the linear system;
- Forward progression of the solution in time.
– End loop.
– Postprocessing step.
– Writing of the solution.

The matrix structure is defined and allocated in the preprocessing step. Then in the resolution
phase, it can be seen that there are two additional steps: matrix assembly and resolution of the
linear system. At every resolution iteration, three loops are performed for the matrix assembly:

i) a loop over the edges to assemble the volume blocks;
ii) a loop over the boundary faces to assemble the boundary blocks;
iii) a loop over the vertices to assemble the mass matrix.

The linear system is solved using conventional methods.

7.4. Numerical examples

To conclude this chapter with a few illustrations, we consider two examples, one in two
dimensions and the other in three dimensions.

The first two-dimensional example is an unsteady regular flow representing the evolution of
Kelvin–Helmholtz (KH) instabilities with a clear interface between two fluids. The accurate
simulation of the development of instabilities is fundamental to the study of fluid mixtures. This
is the case in astrophysics applications, especially in the study of supernovae.

Figure 7.4. Adapted mesh (on top) and final solution (at the bottom)

This simulation is done in a rectangular computational domain [−1.5, 1.5] × [0, 1] with left
and right periodic boundary conditions. The central fluid has a density ρ1 = 2 and a horizontal
velocity u1 = 0.5, and it is surrounded by a second fluid of lower density ρ2 = 1 with an opposite
horizontal velocity u2 = −0.5, which creates shearing between the two fluids. The fluids are
initialized with pressure equilibrium with p = 2.5 and γ = 1.4. A threshold function is used
to gradually move from one fluid to another. Finally, a sinusoidal velocity perturbation in the
direction y is introduced to initiate instability.

This simulation was carried out with mesh adaptation where the L2 -norm space-time inter-
polation error of the density field was controlled (Chapter 8, Volume 2). In Figure 7.4, the final
solution (at the bottom) can be seen as well as the associated adapted mesh (on top) after eight
adaptations.

The second three-dimensional example is the flow around an aircraft in a high-lift configura-
tion. The high-fidelity execution of such simulations is nowadays fundamental in aeronautics, in
particular for determining the maximum lift of the aircraft, which will make it possible to define
the maximum take-off load.

Figure 7.5. Adapted tetrahedral mesh (cross-section)

Figure 7.6. Solution with wingtip vortices



The geometry under consideration is the high-lift version of the NASA Common Research
Model (NASA CRM), which was studied at the Third AIAA High Lift Prediction Workshop.
A flow is simulated at Mach 0.2 with an angle of attack of 16◦ and a Reynolds number of
3.26 million. Here, the flow is turbulent and modeled by the Reynolds Averaged Navier–Stokes
(RANS) equations, that is the Navier–Stokes equations were coupled with a turbulence model,
in this case the Spalart–Allmaras model.

This simulation was carried out with mesh adaptation where the L4 -norm interpolation error
on the local Mach number was controlled (Chapter 8, Volume 2). In Figure 7.5, the mesh adapta-
tion can be seen behind the aircraft that allows calculating with high accuracy the aircraft wake,
and in particular wingtip vortices (Figure 7.6).


∗ ∗

Today, in industry, the majority of numerical schemes used to study turbulent flows are finite
volume schemes. Therefore, it seemed appropriate and important to us to talk about this type of
numerical method in this book. These methods are widely used because they have proven, over the
years, to be very robust and versatile in dealing with a wide range of problems. In particular,
it is easy to consider new boundary conditions, which is not always the case with other
numerical methods. It can be seen that high-order methods have still not made a breakthrough in
the industrial world, because access to high-order meshes has not been democratized, and the
robustness as well as the cost of these methods are still an issue. However, they are found in
a few niche applications where their full potential can be exploited. In summary, there is
still a bright future for finite volume schemes.

In this chapter, a finite volume method has been briefly described, which has proven to
be very efficient with unstructured tetrahedral adapted meshes (see [Alauzet, Frazza-2019] and
[Loseille et al. 2019]). One should recall that vertex-centered finite volume methods lend them-
selves well to tetrahedral adapted meshes, whereas cell-centered methods today prove to be much
less robust with such meshes. This method makes it possible to deal with complex (arbitrary)
geometries, as demonstrated in the example of the aircraft. The study of this numerical method
has highlighted how the methods manipulating mesh data can be used in the preprocessing stage
of a numerical solver in order to make it efficient, and that mastering algorithms such as hashing
or the construction of a ball of elements was fundamental.

To remain succinct, we have not described the extension of this method to Navier–Stokes
equations with the treatment of viscous terms. The curious reader can find more details in
[Menier et al. 2014]. However, switching to Navier–Stokes does not change data structures and
the preprocessing presented in this chapter.

The extension of finite volume methods to high-order meshing is not at all natural and does
not appear to be appropriate nowadays; finite element methods or discontinuous spectral methods
that are well posed in this context are preferred.
Chapter 8

Examples Through Practice

Throughout the three volumes of this book, we have covered many points dealing with both
the methods (algorithms) and the tools necessary for the implementation of these methods (data
structures and basic algorithms). Many schemes, sometimes seen as pseudo-code, have allowed
us to describe the methods synthetically. Nevertheless, understanding a method, a scheme or a
description is one thing, whereas actually implementing (programming) a method, a scheme or
a description is another. In this chapter, we propose that the readers get their hands dirty, since it
is the only way to encounter and understand the underlying difficulties.

As such, this chapter groups together some numerical applications spanning a variety of topics
covered in the volumes of this book and proposes exercises related to these applications. The
examples and exercises are of varying degrees of difficulty and should allow motivated readers
to address some of the problems related to mesh generation and adaptation, particularly in terms
of algorithmic complexity.

We made the decision to consider a matrix programming language close to the mathematical
notation of this book, such as MatLab or Octave. These interpreted languages are very easy to
pick up because they reduce data structures to vectors and matrices only. They also allow for
on-the-fly debugging or printing without recompilation. In addition, all the concepts can easily
be implemented in (more) advanced languages such as Fortran, C/C++, Python and so on.

We start with a very simple exercise, reading a mesh. Since we now know how to read
a mesh, we suggest performing some operations on the mesh being read, namely to calculate
the quality of its elements, build a hash table and then use this to obtain certain information
about this mesh. This will be an opportunity to reflect on the complexity required to obtain
this information. Next, we consider an algorithm (Delaunay-based) for inserting points that will
be illustrated by means of an application concerning image compression. This algorithm is the
background to discuss how to implement the building blocks that make it up (Delaunay kernel,
point localization, compression method, etc.). We then show how the connected components of

a mesh (having several of them) can be found. To finalize, some exercises related to metrics are
proposed, which is a concept whose significance has been stressed over and over again.

8.1. Reading, writing and manipulating a mesh

In computing terms, a mesh, in its simplest form, is stored by means of two matrices. The
first, the coordinate matrix (of real numbers), contains the x, y coordinates (or x, y, z in
three dimensions) of each mesh vertex, and the second, the connectivity matrix (of integers),
serves to define the entities (triangles, tetrahedra, edges, etc.) using the indices of their
vertices. All the provided mesh files are in the .mesh format, which is Inria's standard for the
description of meshes (seen as) unstructured. For example, for the trivial mesh of the unit
square composed of two triangles (Figure 8.1), the two matrices of the matrix representation are:

coor = [ 0, 1, 1 , 0 ; 0, 0, 1, 1]; % a 2 x 4 matrix

and:

tri = [ 1 2 % a 3 x 2 matrix
2 3
4 4 ];

Figure 8.1. The square meshed into two triangles
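For readers using another matrix language, the same two-array representation translates directly. As a sketch, here it is in Python/NumPy (keeping the 1-based vertex indices of the .mesh convention), together with a small area check; the helper function is ours:

```python
import numpy as np

# One column per vertex (2 x 4) and one column per triangle (3 x 2),
# with 1-based vertex indices as in the .mesh convention.
coor = np.array([[0.0, 1.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0, 1.0]])
tri = np.array([[1, 2],
                [2, 3],
                [4, 4]])

def tri_area(k):
    """Area of triangle k (0-based column index), via the cross product."""
    p = coor[:, tri[:, k] - 1]                  # 2 x 3 vertex coordinates
    return 0.5 * abs((p[0, 1] - p[0, 0]) * (p[1, 2] - p[1, 0])
                     - (p[1, 1] - p[1, 0]) * (p[0, 2] - p[0, 0]))
```

Both triangles of the unit square have area 1/2, a quick check that the connectivity was entered correctly.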

The .mesh format1 includes a set of tools, for example a reading function, readmesh. This
function allows the arrays representing a mesh to be created. It has the following form:

1. Presented in Chapter 1 of this volume and described extensively at
https://github.com/LoicMarechal/libMeshb.

function [dim,coor,tri,tet,edg,crn] = readmesh(name)


% open the mesh file name
% outputs the vertices (coor) array, the triangles (tri) array,
% the tetrahedra (tet) array, the edges (edg) array,
% the corners (crn) array, ...
% example: [dim,coor,tri,tet,edg,crn] = readmesh('toto.mesh')

coor = [];
tri = [];
tet = [];
crn = [];
edg = [];

fid = fopen(name,’r’);
if ( fid == -1 )
error([’Mesh file ’ name ’ does not exist’]);
else
disp([ ’% ’ name ’ OPENED ’]);
end

while ( ~feof(fid) )

str = fgetl( fid );


str = str(find(str~=’ ’));
switch ( lower(str) )

case ’dimension’
dim = fscanf( fid, ’%d’, 1 );
if ( dim ~= 2 && dim~=3 )
error(' Invalid input mesh ');
end

case ’vertices’
NbrVer = fscanf( fid, ’%d’, 1 );
if ( dim == 2 )
coor = fscanf( fid, ’%f %f %d’, [3, NbrVer] );
coor = coor(1:2,:);
else
coor = fscanf( fid, ’%f %f %f %d’, [4, NbrVer] );
coor = coor(1:3,:);
end

case ’triangles’
NbrTri = fscanf( fid, ’%d’, 1 );
tri = fscanf( fid, ’%d %d %d %d’, [4, NbrTri] );

tri = tri(1:3,:);

case ’tetrahedra’
NbrTet = fscanf( fid, ’%d’, 1 );
tet = fscanf( fid, ’%d %d %d %d %d’, [5, NbrTet] );
tet = tet(1:4,:);

case ’edges’
NbrEdg = fscanf( fid, ’%d’, 1 );
edg = fscanf( fid, ’%d %d %d’, [3, NbrEdg] );

case ’corners’
NbrCor = fscanf( fid, ’%d’, 1 );
crn = fscanf( fid, ’%d ’, [1, NbrCor] );

end
end

fclose(fid);
end

This function can be used to read (fill in the arrays) meshes in two or three dimensions. For
example:

[dim,coor,tri] = readmesh(’carre_4h.mesh’)

retrieves the value of dim and fills in the arrays coor and tri for the mesh of the file named
carre_4h.mesh, while:

[dim,coor,tri,tet] = readmesh(’ssbj.mesh’)

gives dim, and fills in the arrays coor, tri and tet for the mesh of the file named ssbj.mesh.

The completed arrays, therefore the output arrays, are those whose name is given. They
are filled in by “scanning” the file for the keywords (case in the above code). The arrays not
mentioned, such as edg, the edge array, or crn, the corner array, are ignored, as is the array
tet, during the reading of a two-dimensional mesh.

In addition to the reading function, other functions can be found, such as a writing function.

To practice and experiment with the programs that the reader is willing to develop and val-
idate, a set of meshes and functions is downloadable from the address given at the end of the
chapter.

As an exercise, we can propose writing a drawing program in which the elements are drawn via
their edges (wireframe mode) by looping over the elements. This naive algorithm plots each
internal edge several times, once per element containing it. Writing a more efficient algorithm
requires quickly building an array of edges (covered in the next section). In this way, the
edges will be plotted only once.

Another exercise is, for a selected quality function, to calculate the quality of the elements
and to plot the corresponding histogram.

8.2. Programming a hashing algorithm

The use of only two tables, that of the coordinates coor and the elements tri or tet, is
generally not sufficient to develop efficient meshing algorithms. It is therefore very often useful
to build additional arrays such as the structure of neighborhood relations (per edge in two dimen-
sions or for surfaces or per face in three dimensions) or even, the array of edges (in a surface, or
volume). For the neighbor array in two dimensions, it is necessary to find quickly, or
equivalently to know, for each mesh edge the triangle or triangles sharing it. To understand the
value of an efficient algorithm, it is instructive to build this array using a naive method
where the array is iterated to verify whether an edge is already present or needs to be added.
As explained in the previous volumes, such an algorithm has a quadratic complexity, and we let
the reader verify that it is no longer practicable for meshes as soon as they include more than
1,000 triangles, or even more than 100 if one is not very patient by nature. To calculate the
neighbors simply, but with a quadratic approach, complete the following algorithm:

[dim,coor,tri] = readmesh(’carre_4h.mesh’);

tic;
voi = zeros(size(tri));
for itri=1:size(tri,2)

% edge 1
ip1 = tri(2,itri);
ip2 = tri(3,itri);
for itri2=1:size(tri,2)
if ( itri2 ~= itri )
% check if edge ip1,ip2 already exists
error(’ To be written’);
end
end

% edge 2
ip1 = tri(3,itri);
ip2 = tri(1,itri);
for itri2=1:size(tri,2)

if ( itri2 ~= itri )
% check if edge ip1,ip2 already exists
error(’ To be written’);
end
end

% edge 3
ip1 = tri(1,itri);
ip2 = tri(2,itri);
for itri2=1:size(tri,2)
if ( itri2 ~= itri )
% check if edge ip1,ip2 already exists
error(’ To be written’);
end
end

end
toc;

We should note the edge numbering convention, which consists of taking as edge number i the
edge opposite the vertex with local number i.

The functions tic and toc enable the elapsed CPU time to be displayed. We can then try
to apply the algorithm on small cases, notice (deplore) its slowness and verify that the efficient
algorithm that we have also implemented (see below) is valid and of practically linear complexity
and, thus, really usable for large mesh sizes.

Hash tables (Chapters 4 and 9 of Volume 1) are classic data structures in algorithms that
provide fast access to objects with a key that characterizes them. The key is usually not unique,
as several objects can have the same one. It is therefore necessary to be able to store several
objects that are different but have the same key. Such objects are said to be in collision. When
a given object is searched for, we calculate its key and then only iterate through the objects
that have that key. This traversal is quadratic, but only over the colliding objects. It is
therefore the quality of the hash key that guarantees the efficiency of the process, by
minimizing these traversals.

A hash table therefore consists of a head (a table of heads) and a table of values that stores
values describing the colliding objects in addition to a pointer to a possible new object with the
same key. The head therefore gives the index in the table of values and links of the first object
with a given key. This first object (its link) makes it possible to move, if there is one, to the next
object of the same key and, one by one, to every object with the same key. The parameters of
a hash table are therefore the maximum size of the table of heads (which depends on the key
function chosen) and the maximal number of objects to be stored (which is a known number or,
in most cases, rounded up).

The function to initialize the hashing process is given. The last value in the list is considered
to be the link to the next object and is equal to 0 for the last object in the list:

function [head,list,nlist] = iniHashTable(maxHead,maxVal)

% initialization of a hashing table with at most maxVal values
head = zeros(maxHead,1);
list = zeros(5,maxVal);
nlist = 0;

end

The values of maxVal and maxHead depend on the problem under consideration. For example, when
the aim is to build the neighbors via the triangle edges, the table of values can be defined as
the list:

list(1,) = first endpoint of the edge


list(2,) = second endpoint of the edge
list(3,) = first triangle with that edge
list(4,) = second triangle (if any) with that edge
list(5,) = link (in list) to the next edge with the same key
head(iPos) = link to the first edge with key iPos
nlist = last edge stored in list

There are other options for defining the table list, for example, with only:

list(1,) = MAX of the first and the second endpoint of the edge
list(2,) = first triangle with that edge
list(3,) = local number of the edge in the above triangle
list(4,) = link (in list) to the next edge with the same key
head(iPos) = link to the first edge with key iPos
nlist = last edge stored in list

The choice of the information to be stored depends on the key. The objective is to be able to
differentiate between objects sharing the same key and to store useful information depending on
what one expects to do (simply building a table of edges, a table of neighbors, etc.). For
example, keeping just the maximum of the two endpoint indices has meaning only if the key and
this value alone characterize the edge being processed, which excludes a key defined via a
modulus.

That said, and once the necessary information has been defined, the basic function of hash
tables still needs to be implemented, for searching/adding an object in the table. For simplicity,
we consider the context of the construction of the table of neighbors for a two-dimensional
mesh. The objects here are edges. They are therefore characterized by two (indices of) points.
At first, complete the following two functions in this particular case, then extend these functions
to faces (three-dimensional problem).
272 Meshing, Geometric Modeling and Numerical Simulation 3

function [head,list,nlist] = addHashTable(ip1, ip2, itri, head, list, nlist)

% add edge ip1,ip2 of triangle itri in the hashing table
% either add a new entry in list at index nlist + 1
% or update list(4,:) (the neighbor, first choice for the
% definition of list)

code = ip1 + ip2;
iPos = mod(code,size(head,1)) + 1;

% other keys can be tested

error(' To be written');

% check if (ip1,ip2) is already in list head(iPos)
% if not, add the edge in head(iPos), otherwise continue

error(' To be written');

end

Adapt the hash table to quickly calculate the number of edges of a mesh. Compare the average
and maximum numbers of collisions for different keys. What is the number of edges in the mesh?
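The edge-counting exercise can be sketched outside the book's Matlab setting; below is a small Python version of the head/list/link layout (the function name, the simplified three-field list entries and the modular key are illustrative choices of this sketch, not the book's code). Collisions are resolved by walking the linked list of the bucket and comparing both endpoints:

```python
# A head/list/link hash table used to count the edges of a triangle mesh.
# The key is the sum of the two endpoint indices modulo the number of heads,
# so different edges may share a bucket; the linked list disambiguates them.

def count_edges(triangles, max_head=16):
    head = [0] * max_head          # head[k] = index in 'lst' of first edge with key k
    lst = []                       # each entry: [ip1, ip2, link-to-next-with-same-key]
    collisions = 0

    def add_edge(ip1, ip2):
        nonlocal collisions
        a, b = min(ip1, ip2), max(ip1, ip2)
        key = (a + b) % max_head
        cur = head[key]
        while cur != 0:            # scan the bucket for an existing copy
            ent = lst[cur - 1]     # links are 1-based, 0 means "none"
            if ent[0] == a and ent[1] == b:
                return             # edge already stored
            collisions += 1
            cur = ent[2]
        lst.append([a, b, head[key]])
        head[key] = len(lst)       # new list head for this key

    for t in triangles:
        add_edge(t[0], t[1])
        add_edge(t[1], t[2])
        add_edge(t[2], t[0])
    return len(lst), collisions

# two triangles sharing edge (2,3): 5 distinct edges
tris = [(1, 2, 3), (2, 4, 3)]
print(count_edges(tris)[0])        # -> 5
```

Varying max_head (and the key function) and printing the collision count is exactly the comparison asked for in the exercise.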

Change the hash table structure to search for the number of boundary triangles of a mesh. A
mesh is given where the surface elements (triangles) are not explicit and are therefore not input
on reading. For the mesh ssbj_nogeo.mesh, give the number of boundary triangles; in other
words, do yourself what the reading function could have done.

In section 8.3, the hash table structure will be used in a Delaunay triangulation method as
part of an image compression problem.

8.3. One point insertion operator per cavity, application to image compression

In this section, we focus on how to compress a grayscale image using the interpolation and
uniqueness properties of the Delaunay triangulation2 in two dimensions.

• A refresher on Delaunay triangulation (Chapter 5 of Volume 1 and Chapter 4 of Volume 2).


Given a set of N points of the plane E = (x_i, y_i)_{i∈[1,N]}, the Delaunay triangulation is a confor-
mal mesh of triangles whose vertices, here, are the points of E, which verifies the property known
as the empty sphere property or Delaunay criterion: given a triangle K, there are no vertices
of the triangulation (other than the vertices of K) contained in the circumball of K. The main

2. This is a rare non-academic application of a two-dimensional triangulation problem.



result (Delaunay’s lemma) is that if the criterion is verified locally, that is, for each pair of adja-
cent triangles, then it is globally verified. This triangulation makes it possible to find the closest
neighbor of a cloud of points (see function dsearchn in Matlab) or to construct the convex hull
of a point cloud.

If we add to each point (x_i, y_i) of the discrete set E a solution u_i (a gray level), the De-
launay triangulation enables a continuous representation u of the image to be obtained, such that
u(x_i, y_i) = u_i at every point of E; u is simply the linear interpolation, on each triangle, of the
discrete values u_i. An image I can therefore be represented as a function u, known on a struc-
tured grid [1, m] × [1, n], with values in R. An example of this representation
is given in Figure 8.2. This representation will be used in the rest of this exercise to develop an
image compression algorithm.
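The interpolation just described can be sketched as follows; this is a Python illustration (the helper names are ours, not the book's), evaluating u inside one triangle from its three vertex gray levels via barycentric coordinates:

```python
# Linear interpolation of vertex values inside a triangle, as used to
# evaluate the gray level u(x, y) of the unstructured image representation.

def signed_area(p, q, r):
    # signed area of triangle (p, q, r), positive for CCW orientation
    return 0.5 * ((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))

def interpolate(p, tri, values):
    """tri = (P0, P1, P2), values = (u0, u1, u2); returns u(p)."""
    p0, p1, p2 = tri
    area = signed_area(p0, p1, p2)
    # barycentric coordinate w.r.t. vertex i = area of the sub-triangle
    # obtained by replacing vertex i with p, divided by the total area
    l0 = signed_area(p, p1, p2) / area
    l1 = signed_area(p0, p, p2) / area
    l2 = signed_area(p0, p1, p) / area
    return l0*values[0] + l1*values[1] + l2*values[2]

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(interpolate((0.25, 0.25), tri, (0.0, 100.0, 200.0)))  # -> 75.0
```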

Figure 8.2. Representation of the image of the Mona Lisa (on the left) by an unstructured
mesh (on the right) where each mesh vertex has the gray level of the image as a solution

• Application to image compression/decompression. The interest of Delaunay triangulation


for compressing an image is due to the uniqueness of the triangulation. If the mesh representing
the Mona Lisa given by Figure 8.3 is a Delaunay mesh, only the list of the significant pixels has
to be stored along with their gray level. In particular, the connectivity between vertices (list of
triangles) is not necessary since it can be regenerated by creating the Delaunay triangulation of
the list of significant pixels. If the initial image comprises m × n pixels, its storage is of the order
of m × n; if the Delaunay mesh contains Nv vertices, the memory needed to store the image
compressed in this format is of the order of 2Nv (list of the numbers of the significant pixels
and associated gray level). In the case of the example in Figure 8.3, the compression factor is
therefore given as:
α = 127,000 / 27,000 ≈ 4.7.

Figure 8.3. Representation of the image of the Mona Lisa on a Delaunay mesh composed of
27,000 vertices instead of the 127,000 vertices of the original mesh in Figure 8.2

The exercise is broken down into several operations: the Delaunay kernel (point insertion), the
localization of a point and the image processing itself (how to compress, uncompress an image).

• The Delaunay kernel. This part concerns writing a two-dimensional Delaunay kernel. The
so-called Bowyer–Watson method will be implemented. This is an incremental method that
enables inserting a point P in the current triangulation T^n. The steps are as follows:
i) localize the point P in T^n;
ii) calculate the cavity C_P associated with P and remove the triangles of C_P from T^n;
iii) create the triangles of the ball B_P by starring around the point P. The triangulation
becomes:

T^{n+1} = T^n − C_P + B_P.
Figure 8.4 illustrates steps 2 and 3 of the algorithm. It is important to propose a data structure
adapted to this algorithm in order to obtain an efficient kernel. In particular, this kernel can
be independently tested on the insertion of a list of randomly generated points in the plane. A
complexity curve giving the CPU time according to the number of points will allow the correct
implementation of the algorithm to be verified. It should be noted that step 1 can be independently
processed first, and it also intervenes in image decompression.
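As a sketch, the three steps can be condensed into a deliberately naive Python kernel (the function names are ours; the cavity is found by testing every triangle's circumcircle, with no localization, hence the quadratic cost discussed later in this section):

```python
# Minimal Bowyer-Watson kernel: cavity by global circumcircle test, cavity
# boundary as the directed edges whose reverse is absent, ball by starring.

def circumcircle(a, b, c):
    # circumcenter and squared circumradius of triangle (a, b, c)
    d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
    ux = ((a[0]**2+a[1]**2)*(b[1]-c[1]) + (b[0]**2+b[1]**2)*(c[1]-a[1])
          + (c[0]**2+c[1]**2)*(a[1]-b[1])) / d
    uy = ((a[0]**2+a[1]**2)*(c[0]-b[0]) + (b[0]**2+b[1]**2)*(a[0]-c[0])
          + (c[0]**2+c[1]**2)*(b[0]-a[0])) / d
    return (ux, uy), (a[0]-ux)**2 + (a[1]-uy)**2

def insert_point(pts, tri, ip):
    p = pts[ip]
    cavity = []
    for it, t in enumerate(tri):                       # step (ii), naively
        (ux, uy), r2 = circumcircle(pts[t[0]], pts[t[1]], pts[t[2]])
        if (p[0]-ux)**2 + (p[1]-uy)**2 < r2 - 1e-12:   # strictly inside
            cavity.append(it)
    edges = set()
    for it in cavity:
        a, b, c = tri[it]
        edges |= {(a, b), (b, c), (c, a)}
    boundary = [(a, b) for (a, b) in edges if (b, a) not in edges]
    tri = [t for it, t in enumerate(tri) if it not in cavity]
    tri += [(a, b, ip) for (a, b) in boundary]         # step (iii): star
    return tri

# enclosing box made of 2 triangles, then 3 interior points
pts = [(0., 0.), (1., 0.), (1., 1.), (0., 1.),
       (0.27, 0.34), (0.61, 0.58), (0.33, 0.71)]
tri = [(0, 1, 3), (1, 2, 3)]
for ip in range(4, len(pts)):
    tri = insert_point(pts, tri, ip)
print(len(tri))   # -> 8 (Euler: T = 2*7 - 2 - 4 hull points)
```

Each insertion removes a cavity of c triangles and stars c + 2 boundary edges, a net gain of two triangles, which is the expected complexity behavior to check with the CPU-time curve.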

• The localization of a point. This operation is equivalent to localizing a point P in an


unstructured mesh (algorithm identical to step 1 of the Delaunay kernel). A local path technique
will be used where, starting from an initial triangle, one moves in the mesh by traversing the edges
of the triangles bringing us closer to P . A typical path is represented in Figure 8.5. The choice(s)
of the edge to be traversed (therefore of the neighboring triangle to be visited) is given by the
sign of the barycentric coordinates of the point P with respect to the current triangle K. We
can recall that the barycentric coordinate of a point P for the triangle [P1 P2 P3 ] and vis-a-vis the
oriented edge P1 P2 is the signed area of the triangle [P P1 P2 ]. A negative barycentric coordinate
is a possible choice for the edge to be traversed to continue the progression toward the solution

triangle. Figure 8.6 illustrates the different choices. When several edges can be traversed, a
random choice is made to avoid an infinite loop where the same triangles are infinitely visited
(Figure 8.7). When the three barycentric coordinates are positive, the point is localized.
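The walk just described can be sketched in Python as follows (the neighbor-table convention, one neighbor opposite each vertex with None when the edge is on the boundary, is an assumption of this sketch):

```python
# Walking localization: from a start triangle, repeatedly jump through an
# edge whose barycentric coordinate is negative (picked at random when
# several qualify) until all three coordinates are non-negative.
import random

def area2(p, q, r):
    # twice the signed area of triangle (p, q, r)
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def locate(p, pts, tri, neigh, start=0):
    it = start
    while True:
        a, b, c = (pts[v] for v in tri[it])
        # barycentric coords of p, up to the (positive) total area
        bary = [area2(p, b, c), area2(a, p, c), area2(a, b, p)]
        neg = [k for k in range(3) if bary[k] < 0]
        if not neg:
            return it                      # point localized
        it = neigh[it][random.choice(neg)] # cross the edge opposite vertex k

pts = [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]
tri = [(0, 1, 3), (1, 2, 3)]
neigh = [(1, None, None), (None, 0, None)]  # neighbor opposite each vertex
print(locate((0.9, 0.5), pts, tri, neigh, start=0))  # -> 1
```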

Figure 8.4. Insertion of a point P in the triangulation T^n: calculation of the cavity C_P,
drawing (middle) of T^n − C_P, reconnection (on the right) T^{n+1} = T^n − C_P + B_P

Figure 8.5. From triangle K0, we travel through the mesh by traversing
the edges that bring us closer to P. Note that several paths are possible. A random choice
can be made in this situation

Figure 8.6. Possible choice of an edge based on the sign of the barycentric
coordinates for different configurations

• Image processing, decompression. During the image decompression phase, the gray levels
of the n × m pixels (of the uncompressed image) are interpolated from the gray levels of the
unstructured image mesh. The pixel (of the structured mesh) must therefore be located in the

Figure 8.7. Configuration where the point P is never reached if the same
choice of edges to be traversed is made

unstructured mesh, that is what we have just seen. The color of the pixel, once localized, is
interpolated using the gray levels of the three vertices of the triangle K that was found.

• Image processing, compression. This part focuses on the choice of the most significant
pixels and is left open. It is advisable to start with a naive method (take 1 pixel out of 5, for
example) before developing finer methods3: edge detection, contrast, interpolation error control
(see below) between two images, etc. In order to validate the quality of the compressed image,
one will be able to calculate the peak signal-to-noise ratio given by:
PSNR = 10 log10 ( d² / qc ),

where qc represents the mean quadratic error, that is:

qc = (1 / (m n)) Σ_{i=1}^{m} Σ_{j=1}^{n} |I(i, j) − Ic(i, j)|²,

and d is the maximal image amplitude, 255 for a grayscale image. The characteristic values of
PSNR are of the order of 30–50 decibels.
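A direct Python transcription of these two formulas (the function name and the list-of-lists image layout are choices of this sketch):

```python
# PSNR between an original image I and its compressed version Ic,
# with d = 255 by default for an 8-bit grayscale image.
import math

def psnr(I, Ic, d=255.0):
    m, n = len(I), len(I[0])
    # mean quadratic error qc over the m x n pixels
    qc = sum((I[i][j] - Ic[i][j]) ** 2
             for i in range(m) for j in range(n)) / (m * n)
    return 10.0 * math.log10(d * d / qc)

I  = [[10, 20], [30, 40]]
Ic = [[12, 18], [33, 39]]          # qc = (4 + 4 + 9 + 1)/4 = 4.5
print(round(psnr(I, Ic), 2))
```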

The exercise now consists of implementing the Delaunay insertion process applied to image
processing. We start by translating a given image (a grid of pixels) into an unstructured mesh
with a scalar solution that represents the gray levels. It is the following program:

function [coor,tri,edg,crn,sol] = im2mesh(file)

% load image
im = imread(file);
info = imfinfo(file);

3. The goal here is not to discuss compression methods, but to play with Delaunay.

Width = info.Width;
Height = info.Height;

% convert image to a grey-level image from rgb


% grey = 0.3*r+0.59*g+0.11*b

im = 0.3*im(:,:,1) + 0.59*im(:,:,2) + 0.11*im(:,:,3);


%imshow(im);

nrow = Height;
ncol = Width;

tri = [];
edg = [];
crn = [];

coor = zeros(2,nrow*ncol);
sol = zeros(1,nrow*ncol);
tri1 = zeros(3,(nrow-1)*(ncol-1));
tri2 = zeros(3,(nrow-1)*(ncol-1));
edg = zeros(3,2*(ncol-1)+2*(nrow-1));
crn = [ 1, ncol, 1 + ncol*(nrow-1), ncol*(nrow-1) + ncol ];

idx = 1;
for i=1:nrow

ind = idx:(idx+ncol-1);
coor(1,ind ) = 1:ncol;
coor(2,ind ) = nrow-i+1;
sol(ind) = im(i,1:ncol);
idx = idx + ncol ;

end

Before concentrating on the case of images, we want to implement a more generic Delaunay
insertion algorithm. It follows the following pattern:

% Aim: Delaunay point insertion


% readmesh is used to read the given points
[dim,coor] = readmesh(’carre_4h.mesh’);
% 1. Creating an enclosing box made up of 2 triangles
xmin = min(coor(1,:));
xmax = max(coor(1,:));
ymin = min(coor(2,:));
ymax = max(coor(2,:));

dx = abs(xmax-xmin);
dy = abs(ymax-ymin);

xmin = xmin - 0.5*dx;


xmax = xmax + 0.5*dx;
ymin = ymin - 0.5*dy;
ymax = ymax + 0.5*dy;

coor = [ [xmin,xmax,xmax,xmin;ymin,ymin,ymax,ymax] ,coor];


tri = [ [1,2,4]’,[2,3,4]’];
voi = [ [2,0,0]’,[0,1,0]’];

% 2. Loop over the points : inserting one point


for i=5:size(coor,2)
disp(['Inserting point ' num2str(i) ]);

% Delaunay insertion of point i
% 2.1 Localisation
% 2.2 Constructing the cavity
% 2.3 Creating the ball
% 2.4 Updating tri and voi
[coor,tri,voi] = delaunay(coor,tri,voi,i);

plotMesh(coor,tri);
pause

end

The primary function of this process is delaunay. It includes the four steps indicated above
(localization, cavity construction, ball construction and updates). This function is of the
following form:

function [coor,tri,voi] = delaunay(coor,tri,voi,ip)

%%% insert point Xp (index ip) via the Delaunay algorithm

% coordinates of point ip
Xp = coor(:,ip);
cavity = buildCavity(Xp,coor,tri,voi);
[coor,tri,voi] = buildBall(coor,tri,voi,ip,cavity);

% remove the old triangles (those of the cavity) from table tri

idx = 1:size(tri,2);
idx = setdiff(idx,cavity);
tri = tri(:,idx);
voi = voi(:,idx);

This function uses the following functions, first bdyCavity:

function bdy = bdyCavity(coor,tri,voi,cavity)


%%% This function computes the boundary of the cavity
%%% by means of the table bdy
%%% the edges of the boundary are
%%% bdy(1,i) = first vertex of the boundary edge number i
%%% bdy(2,i) = second vertex

%%% The algorithm finds the edges seen only one time
%%% in the set of elements in the cavity

maxHead = 16;
maxVal = 3*length(cavity);
[head,list,nlist] = iniHashTable(maxHead,maxVal);

%% we add all the edges of the cavity
for itri=1:length(cavity)
ip1 = tri(2,cavity(itri));
ip2 = tri(3,cavity(itri));
[head,list,nlist] = addHashTable(ip1, ip2, itri, head, list, nlist);

ip1 = tri(3,cavity(itri));
ip2 = tri(1,cavity(itri));
[head,list,nlist] = addHashTable(ip1, ip2, itri, head, list, nlist);

ip1 = tri(1,cavity(itri));
ip2 = tri(2,cavity(itri));
[head,list,nlist] = addHashTable(ip1, ip2, itri, head, list, nlist);
end

%% table bdy is made up of the edges seen only one time in the cavity
bdy = [];
for iar=1:nlist
if ( list(4,iar) == 0 )
bdy = [bdy [list(1,iar);list(2,iar)]];
end
end

Next, the function buildBall:

function [coor,tri,voi] = buildBall(coor,tri,voi,ip,cavity)

%% remove the elements in the cavity and define the ball of Xp

bdy = bdyCavity(coor,tri,voi,cavity);

%% new triangles are created and added at the end of table tri
newTri = zeros(3,size(bdy,2));
for i=1:size(bdy,2)
newTri(1,i) = bdy(1,i);
newTri(2,i) = bdy(2,i);
newTri(3,i) = ip;
%% check the positivity of the new triangles,
%% if not verified: error
end

% add the new triangles


tri = [ tri , newTri];

The function inCavity:

function ok = inCavity(X0,X1,X2,Xp)
%
% Check if triangle with vertices X0,X1,X2
% belongs to the cavity of point Xp

%% compute the circumcircle (radius and centre)
%% e.g. solve a linear system
rayon = 1; % to be written
centre = 1/3*(X0+X1+X2); % to be written

error(' To be written');

ok = 0;
if ( norm(Xp-centre,2) <= rayon )
ok = 1;
end

Then, the function buildCavity:

function cavity = buildCavity(Xp,coor,tri,voi)

%% naive function to define the cavity of point Xp
%% by testing all the triangles of the mesh

cavity = [];
for itri=1:size(tri,2)
% coordinates of the 3 vertices of triangle itri
X0 = coor(:,tri(1,itri));
X1 = coor(:,tri(2,itri));
X2 = coor(:,tri(3,itri));
if ( inCavity(X0,X1,X2,Xp) == 1 )
cavity = [cavity itri];
end
end

Question: is there a quadratic part in this algorithm? Unfortunately, yes, that is the function
buildCavity, which is naive and therefore easy to code, but an example of what should not be
done4. It is not even necessary to localize the point to insert. The ease of this coding comes at
the cost of this quadratic behavior. Breaking the latter requires the use of neighboring relations,
therefore the construction of this table using a hashing technique (again not to be quadratic in this
construction) in addition to updating this table at each insertion. It is then possible to construct
the cavity per neighborhood starting from the element that contains the point to be inserted and,
subsequently, the localization function has to be written. It is thus necessary to modify this cavity
function, add the functions that were not required and compare CPU times with the raw naive
approach written above.

The last exercises consist of going back to the images and using this insertion algorithm
for image compressing or decompressing by considering as vertices for the unstructured mesh
only certain initial points (compression) or by filling in the grid (decompression) from the mesh
vertices.

8.4. Retrieving a connected component

Here, we propose to find the connected components of a mesh in three dimensions. A con-
nected component is a set of elements such that, for any two elements of that set, there is a path
through the edges (in two dimensions) or the faces (in three dimensions) joining these two
elements that does not pass through a boundary face (or edge). For this exercise, we consider a
cubic domain containing a network of internal boundaries (triangles seeing two tetrahedra), as
shown in Figure 8.8.

We are trying to color all connected subdomains. The first step is to adapt the calculation of
the neighbors by integrating the list of boundary triangles in the search. It is also necessary to

4. For more than one reason, and not just because of its algorithmic complexity. We will let the readers
verify this by themselves.

Figure 8.8. Solid mesh of a cube with internal surfaces (on the left).
These surfaces form the boundary of the connected components. The challenge is to color
all elements by connected component (on the right)

have a priority rule so that boundary faces are searched for first. Indeed, for the faces of the
tetrahedra that form the interface between subdomains, there is generally (unless the tetrahedron
is at the outer boundary of the domain) another tetrahedron seeing this face, as well as the
boundary triangle. To make sure that these boundaries will not be crossed, both tetrahedra must
have a triangle number (or zero) in the neighbors table. The algorithm is broken down as follows:
i) initialization: all tetrahedra are colored with color zero;
ii) search for a germ, that is, a null-colored element, and choose a non-zero subdomain
number;
iii) color by neighborhood all the elements without crossing any boundaries;
iv) go to (ii).

For the coloring step, a stack will be used to store all the elements already colored whose
neighbors will have to be visited. When the stack is empty, the whole connected component has
been detected. Below, we give a very simple stack structure with the two fundamental functions,
pop and push:

function [ielem, stack] = popStack(stack)


% POPSTACK : pop the last pushed element on the stack,
% return 0 if the stack is empty

ielem = 0;
if ( length(stack) ~= 0 )
ielem = stack(end);
stack = stack(1:end-1);
end

and for pushing:


function [stack] = pushStack(ielem,stack)


% PUSHSTACK : push element ielem on the stack

stack = [ stack , ielem ];

Then complete the general structure of the algorithm:

[dim,coor,tri,tet,edg,crn] = readmesh('cube_interface.mesh');

% initialize array of neigh. for the tetrahedra
voi = zeros(4,size(tet,2));

% Compute the neighbors

stack = [];

color = zeros(1,size(tet,2));

% Step 1. : ndomn is the id of the sub-domains
ndomn = 0;
for itet=1:size(tet,2)
if ( color(itet) == 0 )
ndomn = ndomn + 1;
color(itet) = ndomn;
stack = pushStack(itet,stack);
% main loop on the size of the stack
% pop the elements
% scan the neighbors and push/color the tetrahedra
% seen by a non boundary face : use array voi
end
end

How many connected components does the mesh cube_interface.mesh contain?
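The coloring loop can be sketched in Python as follows; here the adjacency table neigh is assumed to already encode the blocked faces (a face covered by an internal boundary simply yields no neighbor), and the function name is ours:

```python
# Coloring connected components with an explicit stack, following steps
# (i)-(iv): seed search, then pop/push until the component is exhausted.

def color_components(nelem, neigh):
    color = [0] * nelem                 # step (i): all elements colored zero
    ndomn = 0
    for seed in range(nelem):
        if color[seed] != 0:
            continue                    # step (ii): find a null-colored germ
        ndomn += 1                      # new sub-domain id
        color[seed] = ndomn
        stack = [seed]
        while stack:                    # step (iii): color by neighborhood
            e = stack.pop()
            for v in neigh[e]:
                if color[v] == 0:
                    color[v] = ndomn
                    stack.append(v)
    return ndomn, color

# 5 elements in a row, the face between elements 2 and 3 blocked by an
# internal boundary: two components
neigh = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(color_components(5, neigh))      # -> (2, [1, 1, 1, 2, 2])
```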

8.5. Exercises on metrics

In Volumes 1 and 2, a number of error estimates were described that make it possible to
control the precision of a numerical solution by refining (adapting) the mesh, particularly by
formalizing this problem as the control of an interpolation error. Furthermore, if we take the
example above, the initial grid of an image with its gray levels I(i, j) can be interpreted as a
continuous solution u by interpolating the solution outside (from) the pixels. It is then desirable
to find the anisotropic mesh that minimizes the following error:

‖u − Π_h u‖_{L²(Ω)},

for a given number of points. It is known (Chapter 8, Volume 2) that the optimal mesh minimizing
the interpolation error is the unit mesh for the metric defined at each point by:

M_opt_L2 = N ( ∫_Ω det(|H_R(u_h)|)^{1/3} dx )^{−1} det(|H_R(u_h)|)^{−1/6} |H_R(u_h)|,   [8.1]

where u_h is the given numerical solution, H_R(u_h) is the Hessian reconstructed from u_h, and N
is the complexity of the mesh, that is, the size of the mesh (for example, in number of vertices);
|H_R(u_h)| = R^t |Λ| R denotes the Hessian taken with the absolute values of its eigenvalues. The
algorithm for calculating the metrics is therefore decomposed into several steps:
i) the reconstruction from the numerical solution ui of a Hessian by least squares (Chapter 2,
Volume 2);
ii) the diagonalization of the reconstructed Hessian matrices H_i at the vertices and the trun-
cation of the eigenvalues (λ_i^k)_{k=1,2}:

|H_i| = R_i diag( max(|λ_i^1|, ε_num), max(|λ_i^2|, ε_num) ) R_i^t,

a classic value of the constant ε_num is 10^{−10};

iii) the local normalization to the L²-norm that gives the metric field:

M_i = det(|H_i|)^{−1/6} |H_i|;

iv) the calculation of the complexity of the field M_i:

C = Σ_{K∈Ω_h} ( (|K|/3) Σ_{P_j∈K} sqrt(det(M_j)) ),

with Ω_h the mesh and M_j the metric at the vertex of index j. Finally, we perform the normal-
ization of the metric field M_i to get the desired complexity N:

M_i ← (N/C) M_i.
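Steps (ii) and (iii) for a single vertex can be sketched in Python; the closed-form 2 × 2 eigendecomposition and the function name are choices of this sketch, not the book's code:

```python
# Diagonalize a 2x2 symmetric Hessian [[a, b], [b, c]], take the absolute
# values of the eigenvalues truncated by eps_num, and apply the local L2
# normalization det(|H|)^(-1/6) |H|.  Returns (m11, m12, m22).
import math

def metric_from_hessian(a, b, c, eps=1e-10):
    # eigenvalues of [[a, b], [b, c]]
    mean, delta = 0.5 * (a + c), math.hypot(0.5 * (a - c), b)
    l1, l2 = mean + delta, mean - delta
    # orthonormal eigenvector of l1 (the other is its 90-degree rotation)
    if abs(b) > 1e-30:
        v = (b, l1 - a)
    else:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    n = math.hypot(*v)
    cx, sx = v[0] / n, v[1] / n
    l1, l2 = max(abs(l1), eps), max(abs(l2), eps)   # truncation, step (ii)
    # |H| = R diag(l1, l2) R^t
    h = (l1 * cx * cx + l2 * sx * sx,
         (l1 - l2) * cx * sx,
         l1 * sx * sx + l2 * cx * cx)
    scale = (l1 * l2) ** (-1.0 / 6.0)               # det(|H|)^(-1/6), step (iii)
    return tuple(scale * x for x in h)

# diagonal Hessian diag(4, -1): |H| = diag(4, 1), det = 4
m = metric_from_hessian(4.0, 0.0, -1.0)
print(m)   # ~ (4 * 4**(-1/6), 0, 4**(-1/6))
```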
In the least-squares approach, a second-order polynomial is looked for that approximates as
closely as possible the variation of the solution around the current point. From an algorithmic
point of view, it is therefore necessary to have for each point the list of its neighboring vertices.
The following three functions getEdg, setBall and getBall can be used to obtain the ball of
each point. The function getEdg, which gives the set of edges in a two-dimensional mesh, has
the following shape:

function edg = getEdg(tri)


% GETEDG Compute edges of mesh
% from triangles array tri

if ( size(tri,1) == 3 )

edg = [tri([1,2],:),tri([1,3],:),tri([2,3],:)]’;
edg = unique(sort(edg,2),’rows’)’;

else
error(’Argument tri has the wrong shape !’);
end

The function setBall, which calculates the ball of all the vertices from a list of edges, has the
following form:

function [link, ball] = setBall(edg)


% SETBALL Build the link tab and neighboring
% vertices array ball

if ( size(edg,1) ~= 2 )
error(’Argument edg has the wrong shape !’);
end

NbrVer = max(edg(:));
NbrEdg = size(edg,2);

NbrVoi = zeros(1,NbrVer+1);
link = zeros(1,NbrVer+1);
ball = zeros(1,2*NbrEdg);

% first pass : counting


for iEdg=1:NbrEdg
NbrVoi(edg(1,iEdg)) = NbrVoi(edg(1,iEdg)) + 1;
NbrVoi(edg(2,iEdg)) = NbrVoi(edg(2,iEdg)) + 1;
end

% second pass: build the link tab


link(1) = 1;
for iVer=2:NbrVer
link(iVer) = link(iVer-1) + NbrVoi(iVer-1);
end
link(NbrVer+1) = link(NbrVer) + NbrVoi(NbrVer);

NbrVoi(:) = 0;

% third pass: filling the ball


for iEdg=1:NbrEdg
i1 = edg(1,iEdg);
i2 = edg(2,iEdg);

ball(link(i1) + NbrVoi(i1)) = i2;


ball(link(i2) + NbrVoi(i2)) = i1;

NbrVoi(i1) = NbrVoi(i1) + 1;
NbrVoi(i2) = NbrVoi(i2) + 1;

end

The function getBall returns the set of vertices neighboring a vertex and has the following
shape:

function ver = getBall(iver,link,ball)

% GETBALL: Gives neighboring vertices of iver
% see setBall.m and getEdg.m

if ( ~isscalar(iver) || ~isvector(link) || ~isvector(ball) )
error('Wrong argument types: iver is scalar, link and ball are vectors');
end

NbrVer = length(link) - 1;
if ( (iver >= 1) && (iver <= NbrVer) )
ver = ball(link(iver):(link(iver+1)-1));
else
warning('Invalid vertex index');
ver = [];
end
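The same link/ball construction can be sketched in Python with 0-based indexing (the names mirror the Matlab functions, but this is an illustrative transcription, not the book's code):

```python
# Three-pass construction of the vertex balls, as in setBall: count the
# degree of each vertex, build the prefix-sum 'link' table, then fill
# 'ball' so that the neighbors of vertex v occupy ball[link[v]:link[v+1]].

def set_ball(edges, nver):
    deg = [0] * nver
    for a, b in edges:                 # first pass: counting
        deg[a] += 1
        deg[b] += 1
    link = [0] * (nver + 1)
    for v in range(nver):              # second pass: prefix sums
        link[v + 1] = link[v] + deg[v]
    ball = [0] * (2 * len(edges))
    fill = [0] * nver
    for a, b in edges:                 # third pass: filling
        ball[link[a] + fill[a]] = b
        fill[a] += 1
        ball[link[b] + fill[b]] = a
        fill[b] += 1
    return link, ball

def get_ball(v, link, ball):
    return ball[link[v]:link[v + 1]]

edges = [(0, 1), (1, 2), (2, 0), (1, 3)]
link, ball = set_ball(edges, 4)
print(sorted(get_ball(1, link, ball)))   # -> [0, 2, 3]
```

This compressed layout avoids any dynamic resizing: both passes over the edges are linear, which is what makes the subsequent least-squares recovery non-quadratic.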

Using the above functions, complete the following function to reconstruct a numerical Hessian.

function [gra,hes] = getHess(coor,tri,sol,link,ball)


% GETHESS hessian/gradient recovery by least square method
% see also: GETBALL

hes = zeros(3,size(coor,2));
gra = zeros(2,size(coor,2));

ListPb = [];
for iVer=1:size(coor,2)

vois = getBall(iVer,link,ball);
nvois = length(vois);
if ( nvois <= 5 )
% for critical points (with less than 5 neigh.)
% we add neighbors of neighbors !
ListPb = [ListPb iVer];
for ivoi=1:nvois
vois = [vois getBall(vois(ivoi),link,ball) ];
end
vois = unique(vois);
nvois = length(vois);
end

one = ones(nvois,1);

% we write the Taylor expansion around point (x,y):
% f(xv,yv) = f(x,y) + a(xv-x) + b(yv-y)
%          + 1/2*( c(xv-x)^2 + 2d(xv-x)(yv-y) + e(yv-y)^2 )
% Unknowns are [a,b,c,d,e]
% Hessian matrix is then H = | c d |
%                            | d e |

x = coor(1,iVer)*one;
y = coor(2,iVer)*one;
f = sol(iVer)*one;

xv = coor(1,vois)’;
yv = coor(2,vois)’;
fv = sol(vois)’;

dx = xv - x;
dy = yv - y;

lmat = [ dx, dy, 1/2*dx.*dx , dy.*dx , 1/2*dy.*dy ];

A = lmat’ * lmat;
B = lmat’ * (fv-f);

X = A\B;
hes(:,iVer) = X(3:5);
gra(:,iVer) = X(1:2);

end

disp([’ Number of critical points ’ int2str(length(ListPb)) ]);

end
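The least-squares recovery can be sketched in Python on a single vertex; the tiny Gaussian elimination solver, the function names and the test polynomial are choices of this sketch:

```python
# Least-squares recovery of gradient and Hessian at one vertex, as in
# getHess: fit f(v) - f(P) ~ a dx + b dy + (c dx^2)/2 + d dx dy + (e dy^2)/2
# over the ball of P, then solve the 5x5 normal equations A X = B.

def solve(A, B):
    # plain Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [B[i]] for i, row in enumerate(A)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            fac = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fac * M[k][c]
    X = [0.0] * n
    for k in range(n - 1, -1, -1):
        X[k] = (M[k][n] - sum(M[k][c] * X[c] for c in range(k + 1, n))) / M[k][k]
    return X

def recover(center, fc, neighbors, fv):
    L, R = [], []
    for (x, y), f in zip(neighbors, fv):
        dx, dy = x - center[0], y - center[1]
        L.append([dx, dy, 0.5 * dx * dx, dx * dy, 0.5 * dy * dy])
        R.append(f - fc)
    A = [[sum(L[r][i] * L[r][j] for r in range(len(L))) for j in range(5)]
         for i in range(5)]
    B = [sum(L[r][i] * R[r] for r in range(len(L))) for i in range(5)]
    return solve(A, B)   # [a, b, c, d, e]: grad (a, b), Hessian [[c, d], [d, e]]

# exact quadratic f = 4x + x^2 + x*y + 3y^2 : expect [a,b,c,d,e] ~ [4, 0, 2, 1, 6]
f = lambda x, y: 4*x + x*x + x*y + 3*y*y
pts = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, 1)]
print([round(v, 6) for v in recover((0, 0), f(0, 0), pts, [f(*p) for p in pts])])
```

Since the sampled function is exactly quadratic and the six neighbors give a full-rank system, the fit reproduces the gradient and Hessian coefficients exactly; fewer than five independent neighbors is precisely the "critical point" pathology mentioned above.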

In the previous function, the critical points are those whose number of neighbors (crowns) is
not sufficient to deduce an approximate gradient and Hessian. Adapt the above function to treat
this pathology. One can then calculate the normalization of this Hessian and the global
normalization term N in the following adaptation script:

[coor,tri,edg,crn,sol] = im2mesh(file);

nite = 10;
N = 1000;

ite = 0;
while ( ite < nite )

% hessian
edgM = getEdg(tri);
[link, ball] = setBall(edgM);
[Grec,Hu] = getHess(coor,tri,sol,link,ball);

Met = zeros(3,size(coor,2));
Det = zeros(1,size(coor,2));

% compute M_L2 on each vertex


for i=1:size(coor,2)

H = [ Hu(1,i) , Hu(2,i) ; Hu(2,i) , Hu(3,i)];


[vc,vp] = eig(H);
% make correction if one of the eigenvectors is null/too small
vp = abs(vp);
vp([1,4]) = max(vp([1,4]),[1e-8,1e-8] ) ;

p = 2;
Det(i) = det(vp)^(p/(2*p+2));
vp = det(vp)^(-1/(2*p+2)) * vc * vp * transpose(vc);

Met(:,i) = vp([1,2,4]);

end

% Compute normalization factor N


C = 0;
Air = 0;
for i=1:size(tri,2)
i1 = tri(1,i);
i2 = tri(2,i);
i3 = tri(3,i);
crs = 1/2*cross( [coor(:,i2) - coor(:,i1); 0], [coor(:,i3) - coor(:,i1); 0] );
airTri = abs(crs(3));
Air = Air + airTri;
C = C + 1/3*airTri*( Det(i1) + Det(i2) + Det(i3) );
end

% normalize the metric field


Met = N/C * Met;

% adapt the mesh and project back the solution


[coor,sol,tri,edg,crn] = adapMesh(coor,sol,tri,edg,crn,Met,1.3);
writemesh(’current.mesh’,coor,tri,edg,crn);

ite = ite+1;
end

The function that performs the mesh adaptation is provided with the header:

function [acoor,atri,aedg, acrn] = adapMesh(coor,tri,edg,crn,met,hgrad)


% ADAPMESH 2d mesh adaptation
% outputs new adapted mesh acoor/atri arrays
% that are adapted from initial coor/tri mesh
% with the metric met under a gradation hgrad.

This function calls for an external code available for download for Linux, Windows and Mac
platforms.


∗ ∗

In this chapter, we have proposed that the reader write a few programs using many of the
elementary techniques and structures described throughout the book. These constitute the basic
ingredients needed to develop effective algorithms.

In order to validate these programs, a few meshes are provided, and they are represented in
Figure 8.9.

To visualize all these meshes, the software ViZiR, which has been described in part in previ-
ous chapters5, can be used.

In the example on image compression, the file joconde.ppm is provided. The reader will be
able to try any image of reasonable size. Finally, an example of a mesh is given in triangulation
form, which can be used as an illustration of the expected result6.

We hope that the reader has tried (or will try as soon as possible) the various exercises pro-
posed in this chapter. We can reuse a sentence from the introduction which stated that "un-
derstanding a method, a scheme or a description is one thing, whereas actually implementing
(programming) a method, a scheme or a description is another" but, in light of the exercises in
this chapter, this sentence can be completed.

There is nothing better than writing a program, seeing how it behaves on a few examples and,
in particular, if its behavior is deemed unsatisfactory in any respect, working out how to fix it.
Some reflection should go into how to implement it, but one should also consider whether a more
subtle basic data structure and/or technique is not part of the solution.

How should we react when an algorithm that is right on paper produces wrong (or inaccurate)
results, or even fails with an error? Since the program is right, it is useless to look for a (program-
ming) error; one must instead think about reliability issues in the calculations, typically those
relating to floating-point numbers7. Good luck, and here, certainly, lies the reason behind many
of the discussions developed throughout the book, in particular those relating to robustness and
the way to achieve a good level of robustness.

5. It can be downloaded at the link: vizir.inria.fr.


6. All the files to be completed are available at: pyamg.saclay.inria.fr/download/bookV3examples.tgz.
7. The computations that manipulate integers are known to be right.

Figure 8.9. The proposed meshes for the exercises presented in this chapter. For mesh ma-
nipulation and the calculation of two-dimensional neighborhood tables, the meshes of increas-
ing sizes carre4h.mesh, carre2h.mesh, carreh.mesh and carre05h.mesh will be used,
namely cases (i) to (iv). These meshes will allow measuring the importance of getting rid of
any quadratic part in a meshing algorithm. Concerning neighbors in three dimensions and to
retrieve the boundary of a domain, the mesh ssbj.mesh will be used, which are cases (v) and
(vi). For the exercise on subdomain reconstruction, the mesh cube_interface.mesh will be
used, which are cases (vii) and (viii)
Chapter 9

Some Algorithms and Formulas

As in the first two volumes, a few formulas and algorithms are given here related to the topics
presented in this volume. Some of these formulas are already available, scattered throughout the
different volumes, but it seemed interesting to gather them here to avoid having to look for them
elsewhere. This is the case for the Bernstein polynomials and the Bézier forms given here at
the beginning of the first section. This section ends with the expression of surfaces (volumes) of
curved elements, given in Chapter 6 for some elements only.

We then revisit localization problems. Addressed in Volume 1 for triangulations (therefore of
degree 1), these are examined here for curved meshes. This will be an opportunity to see how
to find the coordinates, in the parameter space, of a current point of an element, which is again
a trivial problem for simplexes (degree 1) and already more delicate, even for a quadrilateral
element (of degree 1 × 1), before even considering other degrees.

Space-filling curves, seen in Volume 1 for insertion algorithms, reviewed in Volume 2 under
the topic of parallelism and also found in this volume regarding renumbering (and partitioning)
methods, have not been detailed in terms of their actual construction; this is therefore an oppor-
tunity to come back to that point.

9.1. Bernstein polynomials and Bézier forms

9.1.1. Bernstein polynomials

Their expressions depend on the chosen system of parameters, namely barycentric coordinates1
or natural ones. The coefficients of the binomial are given for the degree d:

C_i^d = d! / ( i! (d − i)! ),

1. Simplexes (straight-sided) only.

Meshing, Geometric Modeling and Numerical Simulation 3: Storage,


Visualization and In Memory Strategies, First Edition. Paul Louis George,
Frédéric Alauzet, Adrien Loseille and Loïc Maréchal.
© ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

d d! d d! d d!
or Cij = or Cijk = or even Cijkl = ,
i! j! i! j! k! i! j! k! l!
which, depending on the coordinate system and the dimension of the space, will be involved in
the expression of the Bernstein polynomials, d again denotes the degree:

B_i^d(u) = C_i^d\, u^i (1-u)^{d-i}, \quad 0 \le u \le 1, \quad 0 \le i \le d,

or \quad B_{ij}^d(u,v) = C_{ij}^d\, u^i v^j, \quad 0 \le u, v \le 1, \quad u + v = 1, \quad i + j = d,

or \quad B_{ijk}^d(u,v,w) = C_{ijk}^d\, u^i v^j w^k, \quad 0 \le u, v, w \le 1, \quad u + v + w = 1, \quad i + j + k = d,

or \quad B_{ijkl}^d(u,v,w,t) = C_{ijkl}^d\, u^i v^j w^k t^l, \quad 0 \le u, v, w, t \le 1, \quad u + v + w + t = 1, \quad i + j + k + l = d.

9.1.2. Bézier forms

Bézier forms are constructed from the Bernstein polynomials and from coefficients of dif-
ferent kinds (point, vector, value, etc.). On this nature depends the very nature of the form and
what it can represent. Bézier forms are the image of a reference element, which is the parameter
space.

• Bézier curves. From d + 1 control points, we define a Bézier curve of degree d by:


\gamma(u) = \sum_{i=0}^{d} B_i^d(u)\, P_i, \quad 0 \le u \le 1,

or \quad \gamma(u,v) = \sum_{i+j=d} B_{ij}^d(u,v)\, P_{ij}, \quad 0 \le u, v \le 1, \quad u + v = 1,

with Pi or Pij the control points according to the chosen system of parameters (natural or
barycentric). The curve resides in the space where the control points reside.
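In practice, rather than summing the Bernstein polynomials directly, a point of the curve is often evaluated with the de Casteljau algorithm (whose subdivision variant is used later in this chapter). A minimal sketch in C, for a planar curve; the bound on the degree is an assumption of this sketch:

```c
/* Point of a planar Bezier curve of degree d at parameter u, by the
   de Casteljau algorithm: d passes of linear interpolation over a
   copy Q of the d+1 control points P. Assumes d <= 15. */
void DeCasteljau(int d, const double P[][2], double u, double point[2])
{
    double Q[16][2];

    for (int i = 0; i <= d; i++) {
        Q[i][0] = P[i][0];
        Q[i][1] = P[i][1];
    }

    /* Each pass replaces Q_i by (1-u) Q_i + u Q_{i+1} */
    for (int r = 1; r <= d; r++)
        for (int i = 0; i <= d - r; i++) {
            Q[i][0] = (1. - u) * Q[i][0] + u * Q[i + 1][0];
            Q[i][1] = (1. - u) * Q[i][1] + u * Q[i + 1][1];
        }

    point[0] = Q[0][0];
    point[1] = Q[0][1];
}
```

Each of the d passes performs one linear interpolation per remaining point, so the evaluation costs O(d²) operations and is numerically very stable.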

• Bézier patches (planes, surfaces or solids). We shall successively see the various patches,
triangles and tetrahedra, quadrilaterals and hexahedra, prisms and pyramids:

Triangle \quad \sigma(u,v,w) = \sum_{i+j+k=d} B_{ijk}^d(u,v,w)\, P_{ijk}, \quad 0 \le u, v, w \le 1, \quad u + v + w = 1,

Tetrahedron \quad \theta(u,v,w,t) = \sum_{i+j+k+l=d} B_{ijkl}^d(u,v,w,t)\, P_{ijkl}, \quad 0 \le u, v, w, t \le 1, \quad u + v + w + t = 1,

Quadrilateral \quad \sigma(u,v) = \sum_{i=0}^{d} \sum_{j=0}^{d} B_i^d(u) B_j^d(v)\, P_{ij}, \quad 0 \le u, v \le 1,

Hexahedron \quad \theta(u,v,w) = \sum_{i=0}^{d} \sum_{j=0}^{d} \sum_{k=0}^{d} B_i^d(u) B_j^d(v) B_k^d(w)\, P_{ijk}, \quad 0 \le u, v, w \le 1,
Some Algorithms and Formulas 295

 
Prism \quad \theta(u,v,w,t) = \sum_{i+j+k=d} \sum_{l=0}^{d} B_{ijk}^d(u,v,w) B_l^d(t)\, P_{ijkl}, \quad 0 \le u, v, w, t \le 1, \quad u + v + w = 1,

Pyramid \quad \theta(u,v,w) = \sum_{k=0}^{d} \sum_{i=0}^{d-k} \sum_{j=0}^{d-k} B_k^d(w)\, B_i^{d-k}\!\left(\frac{u}{1-w}\right) B_j^{d-k}\!\left(\frac{v}{1-w}\right) P_{ijk}, \quad 0 \le u, v \le 1 - w,

with 0 \le w \le 1.
For pyramids, we have proposed (Volume 1) a definition via a degenerated reduced hexahe-
dron, as a possible definition for d = 2 and d = 3. The above definition, valid for any d
[Bergot et al. 2010], [Johnen, Geuzaine-2015], [Chan, Warburton-2016], [Feuillet-2019], seems
more relevant.

Classic Lagrange finite elements correspond to solid patches. For the other patches, in the plane case (the control points are in R2), we again find the classic Lagrange finite elements; if the control points are in R3, the patches are simply geometric patches that can be used to define surfaces.

• Bézier forms of the Jacobian polynomial of a transformation. This is the transformation defining the finite elements, the K, as images of a reference element, K̂ (Volumes 1 and 3). We have:

\mathcal{J}(u,v,w) = \sum_{i+j+k=2(d-1)} B_{ijk}^{2(d-1)}(u,v,w)\, N_{ijk}, \quad 0 \le u, v, w \le 1, \quad u + v + w = 1,

for a plane triangle; the N_{ijk}, which are the control coefficients (Volume 1), actually measure surface areas of triangles whose vertices are control points. Likewise, one has:

\mathcal{J}(u,v,w,t) = \sum_{i+j+k+l=3(d-1)} B_{ijkl}^{3(d-1)}(u,v,w,t)\, N_{ijkl}, \quad 0 \le u, v, w, t \le 1, \quad u + v + w + t = 1,

for a tetrahedron. The control coefficients (Volume 1) actually measure volumes of tetrahedra
supported on control points.

For the other elements, similar expressions are used and the control coefficients measure triangle surface areas (in the plane) or tetrahedra volumes for solid elements. Consider, respectively, for a planar quadrilateral, a hexahedron and a prism:

\mathcal{J}(u,v) = \sum_{i=0}^{2d-1} \sum_{j=0}^{2d-1} B_i^{2d-1}(u) B_j^{2d-1}(v)\, N_{ij}, \quad 0 \le u, v \le 1,

\mathcal{J}(u,v,w) = \sum_{i=0}^{3d-1} \sum_{j=0}^{3d-1} \sum_{k=0}^{3d-1} B_i^{3d-1}(u) B_j^{3d-1}(v) B_k^{3d-1}(w)\, N_{ijk}, \quad 0 \le u, v, w \le 1,

\mathcal{J}(u,v,w,t) = \sum_{i+j+k=3d-2} \sum_{l=0}^{3d-1} B_{ijk}^{3d-2}(u,v,w)\, B_l^{3d-1}(t)\, N_{ijkl}, \quad 0 \le u, v, w, t \le 1, \quad u + v + w = 1;

the pyramid case is more delicate: we shall see pyramids as degenerated hexahedra and reuse the formula of these elements by adapting it.

The control coefficients will allow the calculation of the surface area of the plane curved
elements and the volume of the curved elements as follows.

9.1.3. Formulas (lengths, surface areas and volumes) for curved elements

In Chapter 6, the expression of the surface area of a curved plane triangle of degree 2 was
given; such expressions exist for any degree and for any element, and also make it possible, in
three dimensions, to express volumes. After giving the length of a curve of degree d, the areas
of planar or surface elements and the volumes of the solid elements are given.

• Curve lengths. Consider a Bézier curve of degree d with control points P_i, that is, \gamma(u) = \sum_{i=0}^{d} B_i^d(u)\, P_i, 0 \le u \le 1, and let us calculate its length L. This length is the curvilinear abscissa, s(\cdot), at u = 1. By definition, we have:

L = s(1) = \int_0^1 \|\gamma'(u)\|\, du.


Since \gamma'(u) = d \sum_{i=0}^{d-1} B_i^{d-1}(u)\, \overrightarrow{P_i P_{i+1}}, and if we denote N_i = d\, \overrightarrow{P_i P_{i+1}}, one has:

L = \int_0^1 \sqrt{\langle \gamma'(u), \gamma'(u) \rangle}\, du = \int_0^1 \sqrt{\Big\langle \sum_{i=0}^{d-1} B_i^{d-1}(u) N_i,\; \sum_{i=0}^{d-1} B_i^{d-1}(u) N_i \Big\rangle}\, du = \int_0^1 \sqrt{\sum_{i=0}^{d-1} \sum_{j=0}^{d-1} B_i^{d-1}(u) B_j^{d-1}(u)\, \langle N_i, N_j \rangle}\, du,

and the result (the case d = 1 being trivial) will be approximated using a numerical integration or (Volume 2) by the representation of the curve by a polygonal line.

• The surface area of a curved plane triangle. Starting from the Bézier notation of a triangle of degree d, let \sigma(u,v,w) = \sum_{i+j+k=d} B_{ijk}^d(u,v,w)\, P_{ijk}, whose Jacobian polynomial is equal to \mathcal{J}(u,v,w) = \sum_{i+j+k=2(d-1)} B_{ijk}^{2(d-1)}(u,v,w)\, N_{ijk}. The area of the triangle K is the integral of this polynomial for (u,v,w) traveling K̂, the reference element, that is:

|K| = \int_{(u,v,w)\in\hat{K}} \mathcal{J}(u,v,w)\, dK.

Since \int_{(u,v,w)\in\hat{K}} B_{ijk}^d(u,v,w)\, dK = \frac{1}{(d+1)(d+2)} for any triplet ijk, it follows that:

|K| = \sum_{i+j+k=2(d-1)} N_{ijk} \int_{(u,v,w)\in\hat{K}} B_{ijk}^{2(d-1)}(u,v,w)\, dK,

that is:

|K| = \frac{2}{(2(d-1)+1)(2(d-1)+2)} \sum_{i+j+k=2(d-1)} \frac{N_{ijk}}{2}.

The Nijk are given in Volume 1.
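Once the control coefficients N_ijk are available, the area formula reduces to a plain accumulation; a sketch in C (the N_ijk are assumed supplied in a flat array, in any order, since only their sum intervenes):

```c
/* Area of a curved plane triangle of degree d from the control
   coefficients N_ijk of its Jacobian polynomial: the formula
   |K| = 2/((2(d-1)+1)(2(d-1)+2)) * sum N_ijk/2 reduces to the sum of
   the nc = (2(d-1)+1)(2(d-1)+2)/2 coefficients divided by
   (2(d-1)+1)(2(d-1)+2). */
double CurvedTriArea(int d, const double *N, int nc)
{
    double sum = 0.;

    for (int i = 0; i < nc; i++)
        sum += N[i];
    return sum / ((2 * (d - 1) + 1) * (2 * (d - 1) + 2));
}
```

For d = 1 there is a single coefficient, N_000 = 2|K| (the constant Jacobian of the affine map), and the formula indeed returns |K|.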

• The surface area of a curved planar quadrilateral. An analogous calculation gives:

|K| = \frac{1}{(2d)^2} \sum_{i=0}^{2d-1} \sum_{j=0}^{2d-1} N_{ij}.

The Nij are given in Volume 1.

• The volume of a curved tetrahedron:

|K| = \frac{6}{(3(d-1)+1)(3(d-1)+2)(3(d-1)+3)} \sum_{i+j+k+l=3(d-1)} \frac{N_{ijkl}}{6}.

The Nijkl are given in Volume 1.

• The volume of a curved hexahedron:

|K| = \frac{1}{(3d)^3} \sum_{i=0}^{3d-1} \sum_{j=0}^{3d-1} \sum_{k=0}^{3d-1} N_{ijk}.

The Nijk are given in Volume 1.

• The volume of a curved prism:

|K| = \frac{2}{3d\,((3d-2)+1)((3d-2)+2)} \sum_{i+j+k=3d-2} \sum_{l=0}^{3d-1} \frac{N_{ijkl}}{2}.

The Nijkl are given in Volume 1.



• The volume of a curved pyramid. The formula of the hexahedron is going to be reused by changing the coefficients; one thereby has:

|K| = \frac{1}{(3d)^3} \sum_{i=0}^{3d-1} \sum_{j=0}^{3d-1} \sum_{k=0}^{3d-1} N_{ijk}^*.

To define the N_{ijk}^*, we refer to [Feuillet-2019]. The idea is to build a hexahedron based on known control points and the creation of new points.

• The surface area of a curved triangular surface of degree d at least equal to 2. Such a triangle is defined by the expression

\sigma(u,v,w) = \sum_{i+j+k=d} B_{ijk}^d(u,v,w)\, P_{ijk},

with the control points, the P_{ijk}s, in R3. The normal at (u,v,w) is expressed (there are three ways to write it), denoting |i| = i + j + k, as:

\vec{n}(u,v,w) = \left\{\frac{\partial\sigma(u,v,w)}{\partial u} - \frac{\partial\sigma(u,v,w)}{\partial w}\right\} \wedge \left\{\frac{\partial\sigma(u,v,w)}{\partial v} - \frac{\partial\sigma(u,v,w)}{\partial w}\right\}

= d^2 \left\{\sum_{|i|=d-1} B_{ijk}^{d-1}(u,v,w)\, \overrightarrow{P_{ij(k+1)} P_{(i+1)jk}}\right\} \wedge \left\{\sum_{|i|=d-1} B_{ijk}^{d-1}(u,v,w)\, \overrightarrow{P_{ij(k+1)} P_{i(j+1)k}}\right\}

= d^2 \sum_{|i_1|=d-1} \sum_{|i_2|=d-1} B_{i_1 j_1 k_1}^{d-1}(u,v,w)\, B_{i_2 j_2 k_2}^{d-1}(u,v,w) \left\{\overrightarrow{P_{i_1 j_1 (k_1+1)} P_{(i_1+1) j_1 k_1}} \wedge \overrightarrow{P_{i_2 j_2 (k_2+1)} P_{i_2 (j_2+1) k_2}}\right\}

= \sum_{|i|=2(d-1)} B_{ijk}^{2(d-1)}(u,v,w)\, \vec{n}_{ijk},

with:

\vec{n}_{ijk} = \frac{d^2}{C_{ijk}^{2(d-1)}} \sum_{\substack{i_1+i_2=i\\ j_1+j_2=j\\ k_1+k_2=k}} C_{i_1 j_1 k_1}^{d-1}\, C_{i_2 j_2 k_2}^{d-1} \left\{\overrightarrow{P_{i_1 j_1 (k_1+1)} P_{(i_1+1) j_1 k_1}} \wedge \overrightarrow{P_{i_2 j_2 (k_2+1)} P_{i_2 (j_2+1) k_2}}\right\}.

We calculate:

|K| = \int_{(u,v,w)\in\hat{K}} \|\vec{n}(u,v,w)\|\, dK = \int_{(u,v,w)\in\hat{K}} \sqrt{\langle \vec{n}(u,v,w), \vec{n}(u,v,w)\rangle}\, dK = \int_{(u,v,w)\in\hat{K}} \sqrt{\sum_{i+j+k=4(d-1)} B_{ijk}^{4(d-1)}(u,v,w)\, A_{ijk}}\; dK,

with:

A_{ijk} = \frac{1}{C_{ijk}^{4(d-1)}} \sum_{\substack{i_1+i_2=i\\ j_1+j_2=j\\ k_1+k_2=k}} C_{i_1 j_1 k_1}^{2(d-1)}\, C_{i_2 j_2 k_2}^{2(d-1)}\, \langle \vec{n}_{i_1 j_1 k_1}, \vec{n}_{i_2 j_2 k_2}\rangle.

To approximate |K|, one obviously has to make a numerical integration. This tedious solution
can be replaced by a solution, equally approximated, based on a (De Casteljau) subdivision with
a sufficient level so that two neighboring elements are almost coplanar.

We note, for d = 1, the trivial result:

|K| = \int_{(u,v,w)\in\hat{K}} \sqrt{A_{000}}\, dK = \sqrt{A_{000}} \int_{(u,v,w)\in\hat{K}} dK = \frac{\sqrt{A_{000}}}{2} = \frac{\sqrt{\langle \vec{n}_{000}, \vec{n}_{000}\rangle}}{2},

that is, half the norm of \vec{n}_{000}, which is the cross-product of two consecutive edges.

• The surface area of a curved quadrilateral defined on a surface (a priori warped, even if d = 1). Such an element is defined by the expression:

\sigma(u,v) = \sum_{i=0}^{d} \sum_{j=0}^{d} B_i^d(u) B_j^d(v)\, P_{ij},

with the control points, the P_{ij}s, in R3. Therefrom, it is inferred that:

\frac{\partial\sigma(u,v)}{\partial u} = d \sum_{i=0}^{d-1} \sum_{j=0}^{d} B_i^{d-1}(u) B_j^d(v)\, \overrightarrow{P_{ij} P_{i+1,j}} \quad and \quad \frac{\partial\sigma(u,v)}{\partial v} = d \sum_{i=0}^{d} \sum_{j=0}^{d-1} B_i^d(u) B_j^{d-1}(v)\, \overrightarrow{P_{ij} P_{i,j+1}}.

And the normal is written as:

\vec{n}(u,v) = \frac{\partial\sigma(u,v)}{\partial u} \wedge \frac{\partial\sigma(u,v)}{\partial v} = d^2 \sum_{i_1=0}^{d-1} \sum_{j_1=0}^{d} \sum_{i_2=0}^{d} \sum_{j_2=0}^{d-1} B_{i_1}^{d-1}(u) B_{j_1}^d(v) B_{i_2}^d(u) B_{j_2}^{d-1}(v) \left(\overrightarrow{P_{i_1 j_1} P_{i_1+1, j_1}} \wedge \overrightarrow{P_{i_2 j_2} P_{i_2, j_2+1}}\right),

that is, for i_1, i_2, j_1 and j_2 in the above ranges and for i = 0, \dots, 2d-1 and j = 0, \dots, 2d-1:

\vec{n}(u,v) = \sum_{i=0}^{2d-1} \sum_{j=0}^{2d-1} \sum_{i_1+i_2=i} \sum_{j_1+j_2=j} \alpha_{i_1 i_2 j_1 j_2}\, B_i^{2d-1}(u) B_j^{2d-1}(v) \left(\overrightarrow{P_{i_1 j_1} P_{i_1+1, j_1}} \wedge \overrightarrow{P_{i_2 j_2} P_{i_2, j_2+1}}\right),

with:

\alpha_{i_1 i_2 j_1 j_2} = d^2\, \frac{C_{i_1}^{d-1} C_{i_2}^d C_{j_1}^d C_{j_2}^{d-1}}{C_i^{2d-1} C_j^{2d-1}}.
Ci2d−1 Cj2d−1

Let \vec{n}_{ij} denote the vector \sum_{i_1+i_2=i} \sum_{j_1+j_2=j} \alpha_{i_1 i_2 j_1 j_2} \left(\overrightarrow{P_{i_1 j_1} P_{i_1+1, j_1}} \wedge \overrightarrow{P_{i_2 j_2} P_{i_2, j_2+1}}\right), and thus:

\vec{n}(u,v) = \sum_{i=0}^{2d-1} \sum_{j=0}^{2d-1} B_i^{2d-1}(u) B_j^{2d-1}(v)\, \vec{n}_{ij}.

The (non-oriented) surface area of the quadrilateral is equal to:

|K| = \int_{u=0}^{1} \int_{v=0}^{1} \|\vec{n}(u,v)\|\, du\, dv = \int_{u=0}^{1} \int_{v=0}^{1} \sqrt{\langle \vec{n}(u,v), \vec{n}(u,v)\rangle}\, du\, dv.
u=0 v=0 u=0 v=0

The expression of \vec{n}(u,v) leads to the formula:

|K| = \int_{u=0}^{1} \int_{v=0}^{1} \sqrt{\sum_{i=0}^{2(2d-1)} \sum_{j=0}^{2(2d-1)} B_i^{2(2d-1)}(u) B_j^{2(2d-1)}(v)\, A_{ij}}\; du\, dv,

with, for i = 0, \dots, 2(2d-1) and j = 0, \dots, 2(2d-1):

A_{ij} = \sum_{i_1+i_2=i} \sum_{j_1+j_2=j} \beta_{i_1 i_2 j_1 j_2} \langle \vec{n}_{i_1 j_1}, \vec{n}_{i_2 j_2}\rangle \quad with \quad \beta_{i_1 i_2 j_1 j_2} = \frac{C_{i_1}^{2d-1} C_{i_2}^{2d-1} C_{j_1}^{2d-1} C_{j_2}^{2d-1}}{C_i^{2(2d-1)} C_j^{2(2d-1)}}.

To approximate |K|, as for a triangle, a numerical integration must obviously be used. This
tedious solution, even for d = 1, can be replaced by a solution, equally approximate, based
on a subdivision. For a sufficient level of this (De Casteljau) subdivision, on the one hand,
the subelements are practically plane and their surface area is thus the sum of those of the two
triangles built on each of them and, on the other hand, two neighboring elements are also almost
coplanar.

As an exercise, we look at the case where d = 1 to see if a more obvious result appears. It successively follows that:

\sigma(u,v) = \sum_{i=0}^{1} \sum_{j=0}^{1} B_i^1(u) B_j^1(v)\, P_{ij},

\frac{\partial\sigma(u,v)}{\partial u} = \sum_{j=0}^{1} B_j^1(v)\, \overrightarrow{P_{0j} P_{1j}} \quad and \quad \frac{\partial\sigma(u,v)}{\partial v} = \sum_{i=0}^{1} B_i^1(u)\, \overrightarrow{P_{i0} P_{i1}},

\vec{n}(u,v) = \sum_{j=0}^{1} \sum_{i=0}^{1} B_j^1(v) B_i^1(u) \left(\overrightarrow{P_{0j} P_{1j}} \wedge \overrightarrow{P_{i0} P_{i1}}\right),

\vec{n}(u,v) = \sum_{j=0}^{1} \sum_{i=0}^{1} B_j^1(v) B_i^1(u)\, \vec{n}_{ij}, \quad with \quad \vec{n}_{ij} = \overrightarrow{P_{0j} P_{1j}} \wedge \overrightarrow{P_{i0} P_{i1}},

|K| = \int_{u=0}^{1} \int_{v=0}^{1} \|\vec{n}(u,v)\|\, du\, dv = \int_{u=0}^{1} \int_{v=0}^{1} \sqrt{\langle \vec{n}(u,v), \vec{n}(u,v)\rangle}\, du\, dv = \int_{u=0}^{1} \int_{v=0}^{1} \sqrt{\sum_{i=0}^{2} \sum_{j=0}^{2} B_i^2(u) B_j^2(v)\, A_{ij}}\; du\, dv,

A_{ij} = \sum_{i_1+i_2=i} \sum_{j_1+j_2=j} \beta_{i_1 i_2 j_1 j_2} \langle \vec{n}_{i_1 j_1}, \vec{n}_{i_2 j_2}\rangle \quad with \quad \beta_{i_1 i_2 j_1 j_2} = \frac{1}{C_i^2 C_j^2},

or also \quad A_{ij} = \beta_{ij} \sum_{i_1+i_2=i} \sum_{j_1+j_2=j} \langle \vec{n}_{i_1 j_1}, \vec{n}_{i_2 j_2}\rangle \quad with \quad \beta_{ij} = \frac{1}{C_i^2 C_j^2}.

To see how the \vec{n}_{ij} (therefore the vectors constructed on the control points) intervene, we make the A_{ij} explicit:

A_{00} = \langle \vec{n}_{00}, \vec{n}_{00}\rangle, \quad A_{10} = \langle \vec{n}_{00}, \vec{n}_{10}\rangle, \quad A_{20} = \langle \vec{n}_{10}, \vec{n}_{10}\rangle,

A_{01} = \langle \vec{n}_{00}, \vec{n}_{01}\rangle, \quad A_{11} = \frac{1}{2} \left\{\langle \vec{n}_{00}, \vec{n}_{11}\rangle + \langle \vec{n}_{01}, \vec{n}_{10}\rangle\right\}, \quad A_{21} = \langle \vec{n}_{10}, \vec{n}_{11}\rangle,

A_{02} = \langle \vec{n}_{01}, \vec{n}_{01}\rangle, \quad A_{12} = \langle \vec{n}_{01}, \vec{n}_{11}\rangle, \quad A_{22} = \langle \vec{n}_{11}, \vec{n}_{11}\rangle,

so the \vec{n}_{ij} involved are:

\vec{n}_{00} = \overrightarrow{P_{00} P_{10}} \wedge \overrightarrow{P_{00} P_{01}}, \quad \vec{n}_{10} = \overrightarrow{P_{00} P_{10}} \wedge \overrightarrow{P_{10} P_{11}}, \quad \vec{n}_{01} = \overrightarrow{P_{01} P_{11}} \wedge \overrightarrow{P_{00} P_{01}}, \quad \vec{n}_{11} = \overrightarrow{P_{01} P_{11}} \wedge \overrightarrow{P_{10} P_{11}},

or, as expected, the normals at the four vertices, which combine with a visible mechanism in the
Aij to form these coefficients.

9.2. Localization problems in a curved mesh

The concern is to localize a point in a mesh, in other words, to find the element containing
it (if it exists). To do so, one must find out if there is a relevant value for the parameters (for
example, for triangles, the triplet (u, v, w) whose image is the given point (x, y)) associated with
the current position of the point.

9.2.1. Current point parameter values

Except for (straight-sided) simplices, for which the answer is immediate (Volume 1), this question is more delicate for high degrees and curved elements in particular, but also simply outside the simplicial case or when considering surface elements.

• Simplicial refresher (degree 1). The object is to find the parameter triplet (u,v,w) of a point M of coordinates (x,y), where M = \sigma(u,v,w) = \sum_{i+j+k=1} B_{ijk}^1(u,v,w)\, P_{ijk} with the usual notations. If the P_{ijk} are the vertices of a triangle, the values of the three parameters will enable knowing whether the point M is inside this triangle or not, and this information will be used to find a mesh element containing this point.

Figure 9.1. Calculation of the barycentric coordinates of the point M in the triangle [123]

In Figure 9.1, a triangle of vertices [P_{100} P_{010} P_{001}], simply denoted 1, 2 and 3, is shown, together with a point M of coordinates (x,y). We define the three triangles [M23], [1M3] and [12M]. Let S denote the surface area of [123] and S_i the surface area2 of the subtriangles. If \lambda_i designates the barycentric coordinate of index i of M, we have \lambda_i = S_i / S, and one simply has:

u = \lambda_1, \quad v = \lambda_2, \quad w = \lambda_3.

In other words, a simple surface area calculation on the (current) element directly gives the triplet
sought for, as a specificity of the simplex. The same method applies to triangles of any degree
with straight edges (the curved case is described below).
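This surface-area calculation can be sketched in C; the areas are signed (footnote 2), hence so are the λ_i, and the factor 2 of the "twice the area" helper cancels in the ratios:

```c
/* Twice the signed area of the triangle (A, B, C): positive when the
   vertices appear counterclockwise. */
static double Area2(const double A[2], const double B[2], const double C[2])
{
    return (B[0]-A[0]) * (C[1]-A[1]) - (B[1]-A[1]) * (C[0]-A[0]);
}

/* Barycentric coordinates (u, v, w) = (lambda_1, lambda_2, lambda_3) of M
   in the triangle (P1, P2, P3): ratios of the signed areas of the
   subtriangles [M23], [1M3] and [12M] to that of [123]. */
void Barycentric(const double P1[2], const double P2[2], const double P3[2],
                 const double M[2], double lambda[3])
{
    double S = Area2(P1, P2, P3);      /* twice the signed area of [123] */

    lambda[0] = Area2(M, P2, P3) / S;  /* u */
    lambda[1] = Area2(P1, M, P3) / S;  /* v */
    lambda[2] = Area2(P1, P2, M) / S;  /* w */
}
```

M lies inside the triangle if and only if the three λ_i are non-negative; by construction they always sum to 1.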

• Quadrilaterals of degree 1 × 1. Here, we are looking for the pair of parameters (u,v) of a point M of coordinates (x,y), with M = \sigma(u,v) = \sum_{i=0}^{1} \sum_{j=0}^{1} B_i^1(u) B_j^1(v)\, P_{ij}. The solution is significantly more complicated than for a triangle (because of bilinearity) and is based on solving an optimization problem. The cost function f is defined as follows:

f(u,v) = \frac{1}{2}\, \|M - \sigma(u,v)\|^2,

which measures the distance between M and the image of the couple (u,v), and whose minimum is searched for. The gradient is written as:

\nabla f(u,v) = \begin{pmatrix} \nabla_1 \\ \nabla_2 \end{pmatrix} = \begin{pmatrix} \langle M - \sigma(u,v), -\frac{\partial\sigma(u,v)}{\partial u}\rangle \\ \langle M - \sigma(u,v), -\frac{\partial\sigma(u,v)}{\partial v}\rangle \end{pmatrix}.

2. Since S is signed, so are the Si according to the definition of subtriangles.



The Hessian matrix is written as:

H(u,v) = \begin{pmatrix} \nabla_{11} & \nabla_{12} \\ \nabla_{21} & \nabla_{22} \end{pmatrix},

with:

\nabla_{11} = -\left\{\left\langle -\frac{\partial\sigma(u,v)}{\partial u}, \frac{\partial\sigma(u,v)}{\partial u}\right\rangle + \left\langle M - \sigma(u,v), \frac{\partial^2\sigma(u,v)}{\partial u\partial u}\right\rangle\right\},

\nabla_{12} = \nabla_{21} = -\left\{\left\langle -\frac{\partial\sigma(u,v)}{\partial u}, \frac{\partial\sigma(u,v)}{\partial v}\right\rangle + \left\langle M - \sigma(u,v), \frac{\partial^2\sigma(u,v)}{\partial u\partial v}\right\rangle\right\},

\nabla_{22} = -\left\{\left\langle -\frac{\partial\sigma(u,v)}{\partial v}, \frac{\partial\sigma(u,v)}{\partial v}\right\rangle + \left\langle M - \sigma(u,v), \frac{\partial^2\sigma(u,v)}{\partial v\partial v}\right\rangle\right\}.

The iterative algorithm is defined as follows:

\begin{pmatrix} u_{n+1} \\ v_{n+1} \end{pmatrix} = \begin{pmatrix} u_n \\ v_n \end{pmatrix} - H^{-1}(u_n, v_n)\, \nabla f(u_n, v_n),

with \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} the initial value. The solution is obtained at a given precision \varepsilon, that is, f \le \varepsilon. The initial value is simply set to \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} or estimated by decomposing the quadrilateral into two triangles, and using the barycentric coordinates of M with respect to these triangles.
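A sketch in C of this computation for the planar bilinear case; instead of forming the Hessian explicitly, we use the equivalent Newton iteration on the residual M − σ(u,v) = 0, whose 2 × 2 Jacobian system is solved in closed form (structure and function names are ours):

```c
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Image of (u, v) under the bilinear map; P is ordered P00, P10, P01, P11 */
static Vec2 Sigma(const Vec2 P[4], double u, double v)
{
    Vec2 s;

    s.x = (1-u)*(1-v)*P[0].x + u*(1-v)*P[1].x + (1-u)*v*P[2].x + u*v*P[3].x;
    s.y = (1-u)*(1-v)*P[0].y + u*(1-v)*P[1].y + (1-u)*v*P[2].y + u*v*P[3].y;
    return s;
}

/* Newton iterations on the residual M - sigma(u,v) = 0, starting from
   (1/2, 1/2); returns 1 on convergence at precision eps, 0 otherwise. */
int QuadParams(const Vec2 P[4], Vec2 M, double eps, double *u, double *v)
{
    *u = 0.5; *v = 0.5;

    for (int it = 0; it < 20; it++) {
        Vec2 s = Sigma(P, *u, *v);
        double rx = M.x - s.x, ry = M.y - s.y;

        if (rx*rx + ry*ry < eps*eps)
            return 1;

        /* Partial derivatives of sigma at the current (u, v) */
        double sux = (1-*v)*(P[1].x-P[0].x) + (*v)*(P[3].x-P[2].x);
        double suy = (1-*v)*(P[1].y-P[0].y) + (*v)*(P[3].y-P[2].y);
        double svx = (1-*u)*(P[2].x-P[0].x) + (*u)*(P[3].x-P[1].x);
        double svy = (1-*u)*(P[2].y-P[0].y) + (*u)*(P[3].y-P[1].y);

        /* Solve the 2x2 system (d sigma) (du, dv)^T = (rx, ry)^T */
        double det = sux*svy - svx*suy;

        if (fabs(det) < 1.e-30)
            return 0;
        *u += (rx*svy - ry*svx) / det;
        *v += (ry*sux - rx*suy) / det;
    }
    return 0;
}
```

On a parallelogram the map is affine and a single iteration suffices; on a general quadrilateral a few iterations are needed, which is why the loop is bounded.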

• The general case, all the elements, all the degrees. We have seen the special case of straight-sided triangles where the result is immediate. We have just seen the case of the quadrilateral of degree 1 × 1; if one examines the other elements in the same way, we shall have to distinguish the elements depending on whether or not they have a parametric representation with a barycentric component (triangles or triangular faces), because in this case the mechanical extension of the above method (for the quadrilateral) must be refined. However, there is a notation valid for all elements, involving the traditional definition: the reference element K̂ and the mapping F_K (Volume 1). The reference coordinate system is thus (\hat{x}, \hat{y}) or (\hat{x}, \hat{y}, \hat{z}) according to the dimension of the space (except in the case of surfaces) and, in this system, independently of the element, the method can be written in a single manner, given here in two dimensions (with a trivial extension to three dimensions).

Therefore, we search for the pair of parameters (\hat{x}, \hat{y}) of a point M of coordinates (x,y), that is, M = F_K(\hat{x}, \hat{y}). We define the cost function f as follows:

f(\hat{x}, \hat{y}) = \frac{1}{2}\, \|M - F_K(\hat{x}, \hat{y})\|^2,

which measures the distance between M and the image of the couple (\hat{x}, \hat{y}), and whose minimum is searched for. The gradient is written as:

\nabla f(\hat{x}, \hat{y}) = \begin{pmatrix} \nabla_1 \\ \nabla_2 \end{pmatrix} = \begin{pmatrix} \langle M - F_K(\hat{x}, \hat{y}), -\frac{\partial F_K(\hat{x}, \hat{y})}{\partial \hat{x}}\rangle \\ \langle M - F_K(\hat{x}, \hat{y}), -\frac{\partial F_K(\hat{x}, \hat{y})}{\partial \hat{y}}\rangle \end{pmatrix}.

The Hessian matrix is written as:

H(\hat{x}, \hat{y}) = \begin{pmatrix} \nabla_{11} & \nabla_{12} \\ \nabla_{21} & \nabla_{22} \end{pmatrix},

with:

\nabla_{11} = -\left\{\left\langle -\frac{\partial F_K(\hat{x},\hat{y})}{\partial \hat{x}}, \frac{\partial F_K(\hat{x},\hat{y})}{\partial \hat{x}}\right\rangle + \left\langle M - F_K(\hat{x},\hat{y}), \frac{\partial^2 F_K(\hat{x},\hat{y})}{\partial \hat{x}\partial \hat{x}}\right\rangle\right\},

\nabla_{12} = \nabla_{21} = -\left\{\left\langle -\frac{\partial F_K(\hat{x},\hat{y})}{\partial \hat{x}}, \frac{\partial F_K(\hat{x},\hat{y})}{\partial \hat{y}}\right\rangle + \left\langle M - F_K(\hat{x},\hat{y}), \frac{\partial^2 F_K(\hat{x},\hat{y})}{\partial \hat{x}\partial \hat{y}}\right\rangle\right\},

\nabla_{22} = -\left\{\left\langle -\frac{\partial F_K(\hat{x},\hat{y})}{\partial \hat{y}}, \frac{\partial F_K(\hat{x},\hat{y})}{\partial \hat{y}}\right\rangle + \left\langle M - F_K(\hat{x},\hat{y}), \frac{\partial^2 F_K(\hat{x},\hat{y})}{\partial \hat{y}\partial \hat{y}}\right\rangle\right\}.

The iterative algorithm is:

\begin{pmatrix} \hat{x}_{n+1} \\ \hat{y}_{n+1} \end{pmatrix} = \begin{pmatrix} \hat{x}_n \\ \hat{y}_n \end{pmatrix} - H^{-1}(\hat{x}_n, \hat{y}_n)\, \nabla f(\hat{x}_n, \hat{y}_n),

with \begin{pmatrix} \hat{x}_0 \\ \hat{y}_0 \end{pmatrix} the initial value.

This notation, via FK , is unique but the derivatives to be calculated are those of FK , which
is rather complicated. The Bézier notation is therefore interesting because these derivative cal-
culations are immediate. This is shown below.

• The curved triangle of degree d. We are looking for the triplet of parameters (u,v,w) of a point M of coordinates (x,y); we have M = \sigma(u,v,w) = \sum_{i+j+k=d} B_{ijk}^d(u,v,w)\, P_{ijk}. The cost function f, whose minimum is searched for, is written as:

f(u,v) = \frac{1}{2}\, \|M - \sigma(u,v,w)\|^2,

in the two parameters u and v only, because w = 1 - u - v. The gradient is therefore written as:

\nabla f(u,v) = \begin{pmatrix} \nabla_1 \\ \nabla_2 \end{pmatrix} = \begin{pmatrix} \langle M - \sigma(u,v,w), -\frac{\partial\sigma(u,v,w)}{\partial u} + \frac{\partial\sigma(u,v,w)}{\partial w}\rangle \\ \langle M - \sigma(u,v,w), -\frac{\partial\sigma(u,v,w)}{\partial v} + \frac{\partial\sigma(u,v,w)}{\partial w}\rangle \end{pmatrix}.

For the Hessian matrix, one has (\sigma standing for \sigma(u,v,w)):

\nabla_{11} = \left\langle -\frac{\partial\sigma}{\partial u} + \frac{\partial\sigma}{\partial w}, -\frac{\partial\sigma}{\partial u} + \frac{\partial\sigma}{\partial w}\right\rangle + \left\langle M - \sigma, -\frac{\partial^2\sigma}{\partial u\partial u} + 2\frac{\partial^2\sigma}{\partial u\partial w} - \frac{\partial^2\sigma}{\partial w\partial w}\right\rangle,

\nabla_{12} = \nabla_{21} = \left\langle -\frac{\partial\sigma}{\partial v} + \frac{\partial\sigma}{\partial w}, -\frac{\partial\sigma}{\partial u} + \frac{\partial\sigma}{\partial w}\right\rangle + \left\langle M - \sigma, -\frac{\partial^2\sigma}{\partial u\partial v} + \frac{\partial^2\sigma}{\partial u\partial w} + \frac{\partial^2\sigma}{\partial v\partial w} - \frac{\partial^2\sigma}{\partial w\partial w}\right\rangle,

\nabla_{22} = \left\langle -\frac{\partial\sigma}{\partial v} + \frac{\partial\sigma}{\partial w}, -\frac{\partial\sigma}{\partial v} + \frac{\partial\sigma}{\partial w}\right\rangle + \left\langle M - \sigma, -\frac{\partial^2\sigma}{\partial v\partial v} + 2\frac{\partial^2\sigma}{\partial v\partial w} - \frac{\partial^2\sigma}{\partial w\partial w}\right\rangle.

The algorithm is the same; once the solution has converged, one calculates w using the relation u + v + w = 1. The simplest initialization is the pair \begin{pmatrix} 1/3 \\ 1/3 \end{pmatrix}. Using a subdivision via the triangles whose three vertices are three of the six nodes makes it possible, eventually, to find a better starting point.

• The curved quadrilateral of degree d × d. The method seen for d = 1 in the variables u and v applies as is; only the definition of \sigma(u,v) is to be adjusted with the degree. The initialization can be the same, simply \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix}, or estimated by taking the subtriangles supported on the nodes of degree d, seen as triangles of degree 1.

• Other elements (solid elements). A hexahedron is treated as a quadrilateral, the other ele-
ments having a triangle-type dimension combine the two ways of addressing the variables.

• Surface elements. There is no difference with the triangle or quadrilateral case except that
the distance to be minimized is in R3 .

9.2.2. Localization

The above can be used to find out in which mesh element a given point is located (we are not including surface cases here). The algorithm relating to simplices of degree 1 (Volume 1) is based on the calculation of the barycentric coordinates3 of the point being examined vis-à-vis an element. If the three coordinates are positive, and therefore smaller than 1, the point is inside the element; otherwise, it is assumed to be inside the neighbor corresponding to the negative coordinate4. The same strategy is reused but, since there are no barycentric coordinates, the values of the parameters of the point being examined relatively to a given element are analyzed and, if necessary, one shifts to the neighboring element designated by these values.

For example, for a quadrilateral, a value of the parameter u greater than 1 leads to analyzing
the neighboring element by the current edge associated with u = 1.

3. These coordinates obtained directly by way of the calculation of surface areas are, here, exactly the
triplets (u, v, w) in the reference element of the antecedent of the point being examined.
4. Including all borderline cases to be examined.
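A sketch of this walking algorithm for a mesh of triangles of degree 1; the mesh layout (vertex table, element table, neighbor-by-edge table) is a hypothetical convention of this sketch, and the borderline cases of footnote 4 are handled only crudely:

```c
/* Twice the signed area of the triangle (A, B, C) */
static double Area2(const double A[2], const double B[2], const double C[2])
{
    return (B[0]-A[0]) * (C[1]-A[1]) - (B[1]-A[1]) * (C[0]-A[0]);
}

/* Walk-through localization in a degree-1 triangular mesh.
   Assumed layout: crd[p] = vertex coordinates, tri[t] = vertex indices,
   ngb[t][i] = neighbor opposite local vertex i (-1 on the boundary).
   Returns the triangle containing M, or -1. */
int Localize(int nbt, const int tri[][3], const int ngb[][3],
             const double crd[][2], const double M[2], int start)
{
    int t = start;

    for (int iter = 0; iter < nbt; iter++) {   /* crude anti-cycling bound */
        const double *a = crd[tri[t][0]];
        const double *b = crd[tri[t][1]];
        const double *c = crd[tri[t][2]];
        double S = Area2(a, b, c);

        /* Barycentric coordinates of M with respect to triangle t */
        double l[3] = { Area2(M, b, c) / S,
                        Area2(a, M, c) / S,
                        Area2(a, b, M) / S };

        /* The most negative coordinate designates the face to cross */
        int imin = 0;
        if (l[1] < l[imin]) imin = 1;
        if (l[2] < l[imin]) imin = 2;

        if (l[imin] >= 0.)
            return t;                 /* all non-negative: M is inside t */
        if (ngb[t][imin] < 0)
            return -1;                /* exited through the boundary */
        t = ngb[t][imin];
    }
    return -1;
}
```

The same skeleton carries over to the curved case: the barycentric test is simply replaced by the parameter values obtained with the iterative methods of section 9.2.1.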

In other words, the classic algorithm is applicable, with all its characteristics (for instance,
an accelerating structure) by relying on the parameter values. In addition, one can speed up the
process by merely considering the (straight) elements built on the vertices of the curved elements
only. The solution found is necessarily the sought-after element or a neighboring element en-
abling the triggering of the adapted search in the presence of high degree and especially in the
curved case.

9.3. Space-filling curves

Space-filling curves, used or seen repeatedly throughout the three volumes of the book, have
not been detailed in terms of their actual construction; this is therefore an opportunity to review
this point. The goal is to be able to go through a set (of points or elements) by moving along a
curve that sweeps the space (the scene) in a particular way.

In practice, in our applications, the space is covered by a uniform grid whose cells are refer-
enced by two indices (in two dimensions) or three indices (in three dimensions). The elements
that we are interested in (points, for example) are identified by the indices of the cell that contains
them. The curve will serve to shift from one cell to another according to its own strategy and thus
will allow the elements of the set being considered to be addressed by following this strategy,
and therefore the underlying order.

In general, space-filling curves are defined in an iterative way and, for some, present a fractal
aspect. There are several types of such curves, and we will only look at two of them in the
following.

9.3.1. A Z-curve

This curve is the simplest to construct because its construction is based on a single pattern that will be repeated. This pattern is none other than the uniform decomposition into four
(two-dimensional) of a cell of a given level defining the four cells of the following level. This
decomposition corresponds to a trivial binary numbering. From this numbering will be deduced
the index of the cell in the curve. We start with a single cell that includes the scene. This cell is
cut into four (Figure 9.2). Each cell is numbered as shown in the figure, namely 00, 01, 10 and
11; we go from the bottom left corner to the top right corner5. The curve associated with the first
level corresponds to the path of these boxes that form the first level, level = 1, and allows the
index of these four cells to be found. This index is calculated by the relation:

Ind = \sum_{i=0}^{2\,level-1} 2^i\, b_i, \qquad [9.1]

5. Another choice allows one to move from the bottom right corner to the top left corner and we then find a
regular Z.

where bi is the value of the ith bit6 of the cell number (binary). The index for this level is shown
in the figure on the right, and generates the Z-curve drawn in that same figure. This curve is
obtained by joining the centers of the cells traveled by following their respective indices.

Figure 9.2. The generic pattern for the construction of a Z-curve. On the left, the four cells of
level 1 and their binary numbering. On the right, the index, relation [9.1], of the cells with respect
to the underlying curve and the curve effectively inferred thereof (in red)

Figure 9.3. Recursive application of the generic pattern for the construction of the Z-curve of
level 2. On the left, the boxes of level 2 and their binary numbering. On the right, the index of
the cells with respect to the underlying curve and the curve inferred therefrom

By applying the generic construction pattern to each level 1 cell, the curve of the upper level, here level 2, is obtained (Figure 9.3), and the index of the cells on that level is found. If α designates the index of the parent cell (that is, a sequence of 2 level bits at 0 or at 1), the index α00 is created and then the index of the four cells of the pattern is added; the four indices α00, α01, α10 and α11 are obtained by following the pattern path, and the Z-index is the translation of this binary number (formula [9.1]).

One observes the "sequential" path between cells 0 and 7, the (spatial) jump between cells
7 and 8 and then, again, a "sequential" path between cells 8 and 15. Let us recall that it is this

6. With the convention of writing from left to right or right to left, starting with the most significant bit or
the least significant one.

curve behavior, combining a sequential aspect and jumps, that is used to build certain algorithms
or for optimizing the behavior (complexity) of other algorithms.

By iterating the construction, the curve of the desired level is obtained (up to 32). In practice,
the construction is based on a tabulation and an algorithm (short, but not really readable by a
non-specialist). The three-dimensional extension makes it possible to deal with scenes of this
dimension. In two dimensions, the algorithm is written as:

in C:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

uint64_t Zcode(uint32_t I, uint32_t J)
{
   uint64_t code = 0;

   // Loop over the 32 bits of the I and J coordinate words
   for(int b=0; b<32; b++)
   {
      // If the bit number b from the I coordinate is set,
      // set the bit number 2b of the resulting zcode
      if(I & (UINT32_C(1) << b))
         code |= UINT64_C(1) << (2*b);

      // If the bit number b from the J coordinate is set,
      // set the bit number 2b+1 of the resulting zcode
      if(J & (UINT32_C(1) << b))
         code |= UINT64_C(1) << (2*b + 1);
   }
   // The final 64-bit zcode contains the interleaved bits
   // from the two 32-bit source coordinates
   return(code);
}

int main()
{
   printf("code (1,3) = %" PRIu64 "\n", Zcode(1, 3));
   printf("code (6,5) = %" PRIu64 "\n", Zcode(6, 5));
   return(0);
}

in Motorola assembly language:

#d1 = input I coordinate (16 bits)
#d2 = input J coordinate (16 bits)
#d3 = output Zcode (32 bits)

	move.w	#1, d1
	move.w	#3, d2
	bsr	Zcode

	move.w	#6, d1
	move.w	#5, d2
	bsr	Zcode

#lsl sets the X flag from the bit shifted out (rol would not),
#and roxl then shifts that bit into the 32-bit code being built;
#J is shifted first so that I ends up on the even bits
Zcode:
	moveq	#15, d0
.nextbit:
	lsl.w	#1, d2
	roxl.l	#1, d3
	lsl.w	#1, d1
	roxl.l	#1, d3
	dbf	d0, .nextbit
	rts

No comment!

9.3.2. A Hilbert curve

Constructing a Hilbert curve is more technical: the generation pattern itself is trivial, but it is used with symmetries and rotations. In fact, we have, by symmetry and rotation, the four patterns shown in Figure 9.4.

Geometrically, the curve travels through the four cells, starting in one of them and terminating in another (Figure 9.4, from left to right):
– pattern (i): start at the bottom left, end at the bottom right; it goes from left to right;
– pattern (ii): start at the top right, end at the top left; it goes from right to left;
– pattern (iii): start at the top right, end at the bottom right; it goes from top to bottom;
– pattern (iv): start at the bottom left, end at the top left; it goes from bottom to top.

These four patterns will allow the curves of every level to be recursively constructed.

For the first level, one considers a cell encompassing the scene and it is cut into four subcells
using pattern (i), as shown in Figure 9.4 (another pattern could have been chosen); the curve thus
starts at the bottom left and ends at the bottom right.

For the second level, the cells of the first level are cut in the traveling order of the curve of this first level. The curve of this second level is developed by applying to each cell quadruplet the pattern that allows one to follow the curve of level 1. In Figure 9.5, the level 1 curve can be seen on the left, and then, in the middle, the level 2 curve. To obtain this curve, we have successively used patterns (iv), (i), then (i) again and finally (iii). The construction of the level 3 curve utilizes these patterns as well as pattern (ii) (see Figure 9.5 on the right).

Figure 9.4. The generic pattern for the construction of a Hilbert curve with its four variations

Figure 9.5. From left to right, curves of levels 1, 2 and 3. In some places, we have denoted the
patterns used to shift from one level to the next

To find the indices of the cells of a given level, the same method as for a Z-curve is reused.
If α designates the index of the parent cell (namely a sequence of 2 level bits at 0 or at 1), the
index α00 is created and we add7 the index of the four cells of the pattern used; the four indices
are then obtained by following the path of this pattern8, namely α00, α01, α10 and α11 or α10,
α11, α00 and α01 or one of the other two possibilities and the Hilbert index is the translation of
this binary number via formula [9.1].

7. These two operations correspond to a bit shift and a logical operation.


8. The numbering of the cells of the pattern, according to its definition, is not that of the single pattern of
the Z-curves as seen above.

By iterating the construction, the curve of the desired level is obtained. In practice, as with the
Z-curve, the construction is based on a tabulation and an algorithm (short but even less readable
[than the one of a Z-curve] by a non-expert). The three-dimensional extension makes it possible
to deal with scenes of this dimension.
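For the record, a classic version of the two-dimensional Hilbert index calculation: at each level, the quadrant rank is accumulated and the coordinates are mirrored and swapped so that the same generic pattern applies to the next level. This is the standard textbook algorithm, not necessarily the tabulated implementation evoked above:

```c
#include <stdint.h>

/* Hilbert index of the cell (x, y) in an n x n grid, n = 2^order
   (1 <= order <= 31). At each level s, the quadrant (rx, ry) selects
   one of the four patterns; the coordinates are then mirrored and
   swapped so that the next level is traveled with the generic pattern. */
uint64_t HilbertCode(int order, uint32_t x, uint32_t y)
{
    uint32_t n = 1u << order;
    uint64_t d = 0;

    for (uint32_t s = n / 2; s > 0; s /= 2) {
        uint32_t rx = (x & s) ? 1 : 0;
        uint32_t ry = (y & s) ? 1 : 0;

        /* Quadrant (rx, ry) contributes s*s cells times its rank
           0, 1, 2 or 3 along the pattern at this level */
        d += (uint64_t)s * s * ((3 * rx) ^ ry);

        /* Mirror for the descending patterns, then swap x and y */
        if (ry == 0) {
            if (rx == 1) {
                x = n - 1 - x;
                y = n - 1 - y;
            }
            uint32_t t = x; x = y; y = t;
        }
    }
    return d;
}
```

For order 1, the four cells are visited bottom left, top left, top right, bottom right, which is exactly pattern (i) above.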
Conclusion and Perspectives

Does this three-volume book, nearly 1,000 pages long, contain everything about everything? The answer is obviously no, and that is so for several reasons.

Concerning triangulations (Volume 1), we think that we have been roughly exhaustive. For
conventional meshes1, we have covered most methods. We have purposefully omitted certain
methods that, appearing here and there at one time, have proved to be inefficient, not very generic
and, as a result, ephemeral. Nevertheless, we have presented some methods that do not really
convince us in order to allow readers to form their own opinion. Genericity, in our view, has been
brought through the notion of metrics widely debated in different parts of Volumes 1 and 2.

For high-degree meshes (Volume 2) and, in particular, curved meshes, we believe that we have shown that it is wise, for the geometrical aspect, to restrict oneself to degree 2 or 3 (not too high). This broad topic allowed us to address many avenues, but we are aware that there is still a lot to do. Hopefully, we have seen the full benefits of the Bézier formulation of Lagrange elements of any degree. The Bézier-Lagrange duality (and Lagrange-Bézier) has appeared on numerous occasions, far beyond anything that concerns geometric modeling in Volume 1, as an elegant and pragmatic means to solve many issues of rather different, and sometimes surprisingly so, natures (of mesh construction, optimizations and even visualization).

For mesh optimization (Volume 2), we have seen different methods, both in the classical
case (without a metric field) and in the presence of a metric field. Different quality
functions, for individual elements as well as for entire meshes, have been presented based on the
concept of the intrinsic metric (of an element). Here again, high-degree meshes have been considered,
resulting in the proposition of a possible definition of the concept of quality in this specific
case.

The effective utilization of meshes is addressed in many different ways: in finite element or
finite volume methods (Volume 3), and in an advanced manner in methods for mesh adaptation

1. Read: whose element edges are straight segments and where only the extremities of these segments are
nodes.

Meshing, Geometric Modeling and Numerical Simulation 3: Storage,


Visualization and In Memory Strategies, First Edition. Paul Louis George,
Frédéric Alauzet, Adrien Loseille and Loïc Maréchal.
© ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

and adaptive calculations (Volume 2). The continuous mesh theory has been put forward as a
powerful basis for justifying the processes of these calculations, with error estimation and mesh
adaptation. This effective utilization, illustrated through concrete examples involving meshing,
remeshing and adaptation tools, naturally led us to address issues related to parallelism, with a set
of problems to be solved according to the paradigms chosen regarding the nature of memory and its
management.

By examining how to visualize a mesh and a solution field (Volume 3), we encountered a few
nice questions whose interest extends beyond the scope of visualization. Taking the trouble to
confront practical aspects, here for visualization but more generally outside this specific context,
allowed us to return several times, throughout every volume, to very pragmatic issues:
namely databases, basic structures, advanced structures, memory management and acceleration
methods (with emphasis on the role of space-filling curves), to mention just a few specific
points. It should be noted that it is the cross-knowledge of several methods or techniques that is
the condition for success.

Readers will not have failed to notice that, for various purposes and sometimes in very different
contexts, the methods advocated (because they are effective) remain few in number. This is
the reason why we have dedicated several sections to them. It could be said, from this point
of view, that the cocktail of hashing/Bézier/De Casteljau/Bernstein/Taylor/Hilbert, and sometimes a
good old rule of three, provides a solid foundation for all, or almost all, developments related to
meshing.
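To illustrate one ingredient of this cocktail, the De Casteljau algorithm evaluates a Bézier curve of any degree by repeated linear interpolation of its control points. The sketch below is generic (the function name and list-of-points representation are our own choices), not code taken from this book:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bézier curve at parameter t from its control points."""
    pts = [list(p) for p in ctrl]          # work on a mutable copy
    n = len(pts)
    for r in range(1, n):                  # n-1 rounds of linear interpolation
        for i in range(n - r):
            pts[i] = [(1.0 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

# Degree-2 example: the midpoint of the arch lies at (1, 1).
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # [1.0, 1.0]
```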

Most of the discussions throughout the three volumes are based not only on proven results
(drawn from our own experience, among other sources) but also on advanced ideas that should merely be
starting points. It should be noted that the book also presents, sometimes in the form of a simple
footnote, a few unpublished results. In doing so, we hope to give incentive to new advances in
the field, based on well-established material, but also by digging deeper into the more innovative
ideas presented here and there.

In conclusion, be reassured, although we have greatly enjoyed writing the three volumes of
this book, we do not envisage a Volume 4.
Bibliography

[Alauzet, Frazza-2019] F. Alauzet and L. Frazza, 3D RANS anisotropic mesh adaptation on the high-lift version of NASA's Common Research Model (HL-CRM), AIAAFLUID25, AIAAP 2019-2947, Dallas, USA, 2019.
[Barth-1992] T. Barth, Aspects of unstructured grids and finite-volume solvers for the Euler and Navier-Stokes equations, Technical Report 787, AGARD, 1992.
[Barth, Larson-2002] T.J. Barth and M.G. Larson, A posteriori error estimates for higher order Godunov finite volume methods on unstructured meshes, NASA Technical Report, 02-001, 2002.
[Bergot et al. 2010] M. Bergot, G. Cohen and M. Duruflé, Higher-order finite elements for hybrid meshes using new nodal pyramidal elements, Journal of Scientific Computing, 42(3), 345-381, 2010.
[Boissonnat, Yvinec-1997] J.D. Boissonnat and M. Yvinec, Algorithmic Geometry, Cambridge University Press, 1997.
[Carey-1997] G.F. Carey, Computational Grids: Generation, Adaptation and Solution Strategies, Taylor and Francis, 1997.
[Chan, Warburton-2016] J. Chan and T. Warburton, A short note on a Bernstein-Bezier basis for the pyramid, SIAM Journal on Scientific Computing, 38, A2162-A2172, 2016.
[Cheng et al. 2012] S.-W. Cheng, T.K. Dey and J.R. Shewchuk, Delaunay Mesh Generation, CRC Press, 2012.
[Chevalier, Pellegrini-2008] C. Chevalier and F. Pellegrini, PT-Scotch: A tool for efficient parallel graph ordering, Parallel Computing, 34(6-8), 318-331, 2008.
[Ciarlet, Lunéville-2009] P. Ciarlet and E. Lunéville, La méthode des éléments finis. De la théorie à la pratique, 1 and 2, Les Presses de l'ENSTA, 2009.
[Crank, Nicolson-1996] J. Crank and P. Nicolson, A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type, Adv. Comput. Math., 6(1), 207-226, 1996.
[Curtiss, Hirschfelder-1952] C.F. Curtiss and J.O. Hirschfelder, Integration of stiff equations, Proceedings of the National Academy of Sciences, 38(3), 235-243, 1952.
[Cuthill, McKee-1969] E. Cuthill and J. McKee, Reducing the bandwidth of sparse symmetric matrices, Proc. 24th Nat. Conf. Assoc. Comput. Mach., 157-172, 1969.
[Cuthill-1972] E. Cuthill, Several strategies for reducing the bandwidth of sparse symmetric matrices, Sparse Matrices and Their Applications, Plenum Press, 1972.

[Dervieux et al. 1992] A. Dervieux, L. Fezoui and F. Loriot, On high resolution extensions of Lagrange-Galerkin finite element schemes, INRIA Research Report, 1703, 1992.
[Dey-2007] T.K. Dey, Curve and Surface Reconstruction, Cambridge University Press, 2007.
[Dompierre et al. 1999] J. Dompierre, P. Labbé, M.G. Vallet and R. Camarero, How to subdivide pyramids, prisms and hexahedra into tetrahedra, Rapport Cerca, 99(78), 1999.
[Edelsbrunner-1987] H. Edelsbrunner, Algorithms in Combinatorial Geometry, 10, EATCS Monographs on Theoretical Computer Science, Springer, 1987.
[Edelsbrunner-2001] H. Edelsbrunner, Geometry and Topology for Mesh Generation, Cambridge University Press, 2001.
[Ellis et al. 2006] D. Ellis, S. Karman, A. Novobilski and R. Haimes, 3D visualization and manipulation of geometry and surface meshes, 44th AIAA Aerospace Sciences Meeting and Exhibit, American Institute of Aeronautics and Astronautics, Reno, USA, 2006.
[Feuillet-2019] R. Feuillet, Embedded and high-order meshes: two alternatives to linear body-fitted meshes, Thesis, Université Paris-Saclay, 2019.
[Frey, George-2008] P.J. Frey and P.L. George, Mesh Generation, Hermès, Paris, 2000 (1st edition), and in English, ISTE Ltd and Wiley, 2008 (2nd edition).
[George, Liu-1979] J.A. George and J.W. Liu, An implementation of a pseudoperipheral node finder, ACM Trans. Math. Software, 5(3), 284-295, 1979.
[George-1991] P.L. George, Automatic Mesh Generation. Applications to Finite Element Methods, Wiley, 1991.
[George, Borouchaki-1998] P.L. George and H. Borouchaki, Delaunay Triangulation and Meshing, Application to Finite Element, Hermès, Paris, 1998.
[Gibbs et al. 1976] N.E. Gibbs, W.G. Poole and P.K. Stockmeyer, An algorithm for reducing the bandwidth and profile of a sparse matrix, SIAM J. Num. Anal., 13(2), 236-250, 1976.
[Glowinski-1973] R. Glowinski, Approximations externes par éléments finis de Lagrange d'ordre un et deux du problème de Dirichlet pour l'opérateur biharmonique. Méthode itérative de résolution des problèmes approchés, Academic Press, 123-171, 1973.
[Gourvitch et al. 2004] N. Gourvitch, G. Rogé, I. Abalakin, A. Dervieux and T. Kozubskaya, Tetrahedral-based superconvergent scheme for aeroacoustics, INRIA Research Report, 5212, 2004.
[Haasdonk et al. 2003] B. Haasdonk, M. Ohlberger, M. Rumpf, A. Schmidt and K.G. Siebert, Multiresolution visualization of higher order adaptive finite element simulations, Computing, 70(3), 181-204, 2003.
[Hendrickson, Leland-1995] B. Hendrickson and R. Leland, An improved spectral partitioning algorithm for mapping parallel computations, SIAM Journal on Scientific Computing, 16(2), 452-469, 1995.
[Hirsch-1988] C. Hirsch, Numerical Computation of Internal and External Flows. Volume 1: Fundamentals of Numerical Discretization, Wiley, 1988.
[Hirsch-1990] C. Hirsch, Numerical Computation of Internal and External Flows. Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley, 1990.
[Johnen, Geuzaine-2015] A. Johnen and C. Geuzaine, Geometrical validity of curvilinear pyramidal finite elements, Journal of Computational Physics, 299, 124-129, 2015.
[Karypis, Kumar-1998a] G. Karypis and V. Kumar, A parallel algorithm for multilevel graph partitioning and sparse matrix ordering, J. Parallel Distrib. Comput., 48(1), 71-95, 1998.

[Karypis, Kumar-1998b] G. Karypis and V. Kumar, Multilevel k-way partitioning scheme for irregular graphs, J. Parallel Distrib. Comput., 48(1), 96-129, 1998.
[Karypis, Kumar-1998c] G. Karypis and V. Kumar, A fast and high quality multilevel scheme for partitioning irregular graphs, SIAM Journal on Scientific Computing, 20, 359-392, 1998.
[Lai-1998] Y.-C. Lai, A three-step renumbering procedure for high-order finite element analysis, International Journal for Numerical Methods in Engineering, 41, 127-135, 1998.
[Lo-2015] D.S.H. Lo, Finite Element Mesh Generation, CRC Press, 2015.
[Löhner-2008] R. Löhner, Applied CFD Techniques, Wiley, Chichester, 2008.
[Lorensen, Cline-1987] W.E. Lorensen and H.E. Cline, Marching cubes: A high resolution 3D surface construction algorithm, Comput. Graphics, 21(4), 163-169, 1987.
[Loseille, Feuillet-2018] A. Loseille and R. Feuillet, Vizir: High-order mesh and solution visualization using OpenGL 4.0 graphic pipeline, 2018 AIAA Aerospace Sciences Meeting, Kissimmee, USA, 2018.
[Loseille et al. 2019] A. Loseille, L. Frazza and F. Alauzet, Comparing anisotropic adaptive strategies on the Second AIAA Sonic Boom Workshop geometry, Journal of Aircraft, 56(3), 2019.
[Menier et al. 2014] V. Menier, A. Loseille and F. Alauzet, CFD validation and adaptivity for viscous flow simulations, AIAAFLUID44, AIAAP 2014-2925, Atlanta, USA, 2014.
[Nelson et al. 2011] B. Nelson, R. Haimes and R.M. Kirby, GPU-based interactive cut-surface extraction from high-order finite element fields, IEEE Transactions on Visualization and Computer Graphics, 17(12), 1803-1811, 2011.
[Parlett et al. 1982] B.N. Parlett, H. Simon and L.M. Stringer, On estimating the largest eigenvalue with the Lanczos algorithm, Mathematics of Computation, 38(157), 153-165, 1982.
[Peiro et al. 2015] J. Peiro, D. Moxey, B. Jordi, S.J. Sherwin, B.W. Nelson, R.M. Kirby and R. Haimes, High-order visualization with ElVis, Notes on Numerical Fluid Mechanics and Multidisciplinary Design, Springer International Publishing, 521-534, 2015.
[Pothen et al. 1990] A. Pothen, H. Simon and K.P. Liou, Partitioning sparse matrices with eigenvectors of graphs, SIAM J. Matrix Anal. Appl., 11(3), 430-452, 1990.
[Preparata, Shamos-1985] F.P. Preparata and M.I. Shamos, Computational Geometry, an Introduction, Springer, 1985.
[Remacle et al. 2007] J.F. Remacle, N. Chevaugeon, E. Marchandise and C. Geuzaine, Efficient visualization of high-order finite elements, International Journal for Numerical Methods in Engineering, 69(4), 750-771, 2007.
[Rogers, Adams-1989] D.F. Rogers and J.A. Adams, Mathematical Elements for Computer Graphics, McGraw-Hill, 1989.
[Sellers et al. 2014] G. Sellers, R.S. Wright and N. Haemel, OpenGL SuperBible, Addison-Wesley, 2014.
[Selmin, Formaggia-1996] V. Selmin and L. Formaggia, Unified construction of finite element and finite volume discretizations for compressible flows, IJNME, 39, 1-32, 1996.
[Shu, Osher-1988] C.W. Shu and S. Osher, Efficient implementation of essentially non-oscillatory shock-capturing schemes, J. Comput. Phys., 77, 439-471, 1988.
[Spiteri, Ruuth-2002] R.J. Spiteri and S.J. Ruuth, A new class of optimal high-order strong-stability-preserving time discretization methods, SIAM J. Numer. Anal., 40(2), 469-491, 2002.

[Steger, Warming-1981] J.L. Steger and R.F. Warming, Flux vector splitting of the inviscid gas dynamic equations with application to finite-difference methods, J. Comput. Phys., 40, 263-293, 1981.
[Stoufflet et al. 1987] B. Stoufflet, J. Periaux, L. Fezoui and A. Dervieux, Numerical simulation of 3-D hypersonic Euler flows around space vehicles using adapted finite element, AIAA 25th Aerospace Sciences Meeting, AIAA-1987-0560, Reno, USA, 1987.
[Taubin-1994] G. Taubin, Distance approximations for rasterizing implicit curves, ACM Trans. on Graphics, 13(1), 3-42, 1994.
[Thompson et al. 1985] J.F. Thompson, Z.U.A. Warsi and C.W. Mastin, Numerical Grid Generation. Foundations and Applications, North-Holland Publishing, 1985.
[Topping et al. 2004] B.H.V. Topping, J. Muylle, P. Ivanyi, R. Putanowicz and B. Cheng, Finite Element Mesh Generation, Saxe-Coburg Publications, 2004.
[Toro-2009] E.F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction, Springer, 2009.
[Van Leer-1972] B. Van Leer, Towards the ultimate conservative difference scheme I. The quest of monotonicity, Lecture Notes in Physics, 18, 163-168, 1972.
[Van Leer-1974] B. Van Leer, Toward the ultimate conservative difference scheme II. Monotonicity and conservation combined in a second order scheme, JCP, 14(4), 361-370, 1974.
[Vlachos et al. 2001] A. Vlachos, J. Peters, C. Boyd and J.L. Mitchell, Curved PN triangles, Proceedings of the 2001 Symposium on Interactive 3D Graphics, ACM, New York, USA, 159-166, 2001.
[Wolff-2011] D. Wolff, OpenGL 4.0 Shading Language Cookbook, Packt Publishing, 2011.
Index

area of a
  curved plane triangle, 220
  planar triangle of degree 2, 219
array, 2
assembly
  of a matrix, 237
  right-hand side, 237
bandwidth (of a matrix), 83
Bernstein polynomial, 293
Bézier form, 294
bucket, 6
centered cell, 250
coloring, 22
compact storage (of a matrix), 82
cracks, 65
cross-renumbering, 94
Cuthill-McKee, 86
cutting
  by a line, 120
  by a plane, 119, 160
  into simplices, 46
data structure
  basic, 2
  external, 34
  internal, 29
decomposition into simplices, 48
Delaunay triangulation (exercise), 274
dilatation, 40
  coefficient(s), 40
dynamic threshold, 27
elasticity equation
  discrete formulation, 189
  mass matrix, 196
  right-hand side, 196
  variational formulation, 189
essential boundary condition, 240
explicit time method, 244
filter, 9
finite elements, 185
finite volume cell
  Barth, 252
  median, 251
  Voronoi, 252
finite volumes, 243
frontal
  element renumbering, 92
  node renumbering, 86
geometric operators (visualization), 106
Gibbs, 86
Gouraud (model), 170
GPU, 117
grid, 6
hash
  grid, 13, 60
  table, 269
hashing, 27, 269
heat equation
  discrete formulation, 189
  mass matrix, 193
  right-hand side, 193
  stiffness matrix, 191
  variational formulation, 187
Hilbert curve, 88, 309
homogeneous coordinates, 40
immersion (mesh), 77
intersection, 120, 160
isovalue, 171
level lines, 171
linked component (exercise), 281
linked list, 3


marching-cubes (method), 171
merging (meshes), 65
mesh
  cleaning, 66
  reading, 266
  reconnection, 58
metric (exercise), 283
morse storage (of a matrix), 82
node renumbering
  (arbitrary degree mesh), 89
  by a space-filling curve, 88
octree, 14
palette, 157
partition
  (of a mesh), 94
  and frontal approach, 96
  and graph, 96
  and space-filling curve, 95
Peano curve, 88
Phong (model), 170
physical-geometric reconnection, 65
pipeline, 126, 129
quadrilateral
  cut into triangles, 166
  of degree 1×1 (heat), 201
quadtree, 14
queue, 5
randomness, 24
renumbering, 26, 82
resizing a memory resource, 28
right triangle of degree 2 (heat), 209
rotation, 40
  axis, 40
Shaders, 126
shading, 170
sorting, 24
space-filling curve, 87, 306
  element renumbering, 93
stack, 4
storage profile (of a matrix), 82
subdivision
  adaptive, 110
  into curved triangles, 116
  of a solution, 153
  uniform into same degree elements, 147
  uniform subdivision into 1st-degree elements, 108
surface area of a
  plane triangle of degree d, 296
  surface quadrilateral of degree d × d, 299
  surface triangle of degree d, 298
tessellation, 154
  adaptive, 110
  multi-parametrized, 115
time implicit method, 244
topological operators (visualization), 107
tree, 14
triangle
  (curved) of degree 2 (heat), 219
  (right-angled) of degree 1 (elasticity), 234
  of degree 1 (heat), 197
vertex-centered cell, 250
volume of a curved tetrahedron of degree d, 297
wireframe (mode), 174
Z-curve, 88, 306
Other titles in Numerical Methods in Engineering

2020
SIGRIST Jean-François
Numerical Simulation, An Art of Prediction 2: Examples

2019
DA Daicong
Topology Optimization Design of Heterogeneous Materials and Structures
GEORGE Paul Louis, BOROUCHAKI Houman, ALAUZET Frédéric,
LAUG Patrick, LOSEILLE Adrien, MARÉCHAL Loïc
Meshing, Geometric Modeling and Numerical Simulation 2: Metrics,
Meshes and Mesh Adaptation
(Geometric Modeling and Applications Set – Volume 2)
MARI Jean-Luc, HÉTROY-WHEELER Franck, SUBSOL Gérard
Geometric and Topological Mesh Feature Extraction for 3D Shape Analysis
(Geometric Modeling and Applications Set – Volume 3)
SIGRIST Jean-François
Numerical Simulation, An Art of Prediction 1: Theory

2017
BOROUCHAKI Houman, GEORGE Paul Louis
Meshing, Geometric Modeling and Numerical Simulation 1: Form
Functions, Triangulations and Geometric Modeling
(Geometric Modeling and Applications Set – Volume 1)

2016
KERN Michel
Numerical Methods for Inverse Problems
ZHANG Weihong, WAN Min
Milling Simulation: Metal Milling Mechanics, Dynamics and Clamping
Principles

2015
ANDRÉ Damien, CHARLES Jean-Luc, IORDANOFF Ivan
3D Discrete Element Workbench for Highly Dynamic Thermo-mechanical
Analysis
(Discrete Element Model and Simulation of Continuous Materials Behavior
Set – Volume 3)
JEBAHI Mohamed, ANDRÉ Damien, TERREROS Inigo, IORDANOFF Ivan
Discrete Element Method to Model 3D Continuous Materials
(Discrete Element Model and Simulation of Continuous Materials Behavior
Set – Volume 1)
JEBAHI Mohamed, DAU Frédéric, CHARLES Jean-Luc, IORDANOFF Ivan
Discrete-continuum Coupling Method to Simulate Highly Dynamic
Multi-scale Problems: Simulation of Laser-induced Damage in Silica Glass
(Discrete Element Model and Simulation of Continuous Materials Behavior
Set – Volume 2)
SOUZA DE CURSI Eduardo
Variational Methods for Engineers with Matlab®

2014
BECKERS Benoit, BECKERS Pierre
Reconciliation of Geometry and Perception in Radiation Physics
BERGHEAU Jean-Michel
Thermomechanical Industrial Processes: Modeling and Numerical
Simulation
BONNEAU Dominique, FATU Aurelian, SOUCHET Dominique
Hydrodynamic Bearings – Volume 1
Mixed Lubrication in Hydrodynamic Bearings – Volume 2
Thermo-hydrodynamic Lubrication in Hydrodynamic Bearings – Volume 3
Internal Combustion Engine Bearings Lubrication in Hydrodynamic
Bearings – Volume 4
DESCAMPS Benoît
Computational Design of Lightweight Structures: Form Finding and
Optimization

2013
YASTREBOV Vladislav A.
Numerical Methods in Contact Mechanics
2012
DHATT Gouri, LEFRANÇOIS Emmanuel, TOUZOT Gilbert
Finite Element Method
SAGUET Pierre
Numerical Analysis in Electromagnetics
SAANOUNI Khemais
Damage Mechanics in Metal Forming: Advanced Modeling and Numerical
Simulation

2011
CHINESTA Francisco, CESCOTTO Serge, CUETO Elias, LORONG Philippe
Natural Element Method for the Simulation of Structures and Processes
DAVIM Paulo J.
Finite Element Method in Manufacturing Processes
POMMIER Sylvie, GRAVOUIL Anthony, MOËS Nicolas, COMBESCURE Alain
Extended Finite Element Method for Crack Propagation
2010
SOUZA DE CURSI Eduardo, SAMPAIO Rubens
Modeling and Convexity

2008
BERGHEAU Jean-Michel, FORTUNIER Roland
Finite Element Simulation of Heat Transfer
EYMARD Robert
Finite Volumes for Complex Applications V: Problems and Perspectives
FREY Pascal, GEORGE Paul Louis
Mesh Generation: Application to finite elements – 2nd edition
GAY Daniel, GAMBELIN Jacques
Modeling and Dimensioning of Structures
MEUNIER Gérard
The Finite Element Method for Electromagnetic Modeling

2005
BENKHALDOUN Fayssal, OUAZAR Driss, RAGHAY Said
Finite Volumes for Complex Applications IV: Problems and Perspectives
