
Ref. Ares(2019)7887282 - 21/12/2019

D4.2: UNCERTAINTY ANALYSIS AND IMPLEMENTATION

Work Package WP4 (D4.2)


Lead Authors (Org) D. Kumar (LIST), M. Marchi (ESTECO), S. Belouettar (LIST), C.
Kavka (ESTECO)
Contributing Author(s) (Org) Yao Koutsawa (LIST), Gast. Rauchs (LIST)
Reviewers (Org) Ali Daouadji (INSA), V. Regnier (e-Xstream)
Due Date 30-09-2019
Date 15-12-2019
Version V1.2

Dissemination level

PU: Public X
PP: Restricted to other programme participants
RE: Restricted to a group specified by the consortium
CO: Confidential, only for members of the consortium

Versioning and contribution history

Version Date Author Notes

V1.0 30-08-2019 D. Kumar, S. Belouettar Initial draft


V1.1 10-09-2019 M. Marchi, C. Kavka Updated version
V1.2 17-12-2019 D. Kumar, S. Belouettar Final Version

Disclaimer:
This document’s contents are not intended to replace consultation of any applicable legal
sources or the necessary advice of a legal expert, where appropriate. All information in
this document is provided “as is” and no guarantee or warranty is given that the information
is fit for any particular purpose. The user, therefore, uses the information at its
sole risk and liability. For the avoidance of all doubts, the European Commission has no
liability in respect of this document, which is merely representing the authors’ view.

TABLE OF CONTENTS

Versioning and contribution history
Disclaimer
Task and Deliverable descriptions from the project proposal
Compliance with the work-programme NMBP-23-2016
Executive Summary
1 Introduction
2 Uncertainty Quantification
3 Methods for Uncertainty Quantification
3.1 Monte Carlo Method
3.2 Perturbation or Moment Methods
3.3 Sensitivity Analysis
3.4 Polynomial Chaos Method
4 Probability Distributions
4.1 Uniform Distribution
4.2 Normal Distribution
4.3 Beta Distribution
5 Sampling Schemes
6 Approach for Data-driven Analysis
7 Uncertainty Quantification in Composite Materials
7.1 Composite Leafspring under Uncertainties
7.1.1 Results Comparison
7.2 Buckling Analysis of AIRBUS Airframe under Uncertainties
8 Future Work
Acknowledgements
References

LIST of Figures and Tables

Figure 1: Computational framework: Deterministic
Figure 2: Computational framework: Non-deterministic
Figure 3: The first six 2D Legendre polynomials
Figure 4: The probability density function of a uniformly distributed random variable
Figure 5: The probability density function of a normally distributed random variable
Figure 6: The probability density function of a Beta distributed random variable
Figure 7: Sampling points in 2D domain: 1. Random sampling, 2. Latin hypercube, 3. Hammersley, 4. Sobol
Figure 8: The process of data-driven analysis
Figure 9: Test case: leafspring
Figure 10: Probability distribution (histogram) for the leafspring mass (left) and stiffness (right)
Figure 11: Sensitivity analysis for the leafspring case
Figure 12: Airframe model
Figure 13: Airframe components: Panel (top left), Frame (top right) and Stringer (bottom)
Figure 14: Probability density function of the buckle mode (histogram and fitted density)
Figure 15: Sensitivity of input parameters on the buckle mode

Table 1: 2D Legendre polynomials of PC order 0, 1, 2, 3
Table 2: Description of the input uncertainties for the leafspring case
Table 3: Statistical results for the leafspring with 1000 LHS MC samples
Table 4: Statistical results for the leafspring with varying numbers of random and LHS MC samples
Table 5: Orientation angles for the airframe components
Table 6: Description of the input uncertainties for the Airframe
Table 7: Statistical results for the Airframe buckling mode with varying numbers of samples

Task and Deliverable descriptions from the project proposal
WP definition: In WP4, an optimisation approach will be implemented in a goal-oriented
manner to satisfy the essential industrial and market requirements. The material selection process
will be performed via several mappings among the desired performances of the final component,
the manufacturing process inputs, the component configuration, cost, and industrial requirements.
A process selection methodology integrating materials modelling, optimisation and control will be
developed. The multidisciplinary evaluation/selection will consider the production, use and end-of-life
of the composite material, as well as explore their potential future market penetration among
other constraints, in order to develop relevant scenarios.
Task 4.1: Uncertainty analysis and Implementation. Based on applicability, WP4 is divided into
three subtasks. Task 4.1 is related to the development of various methods for efficient uncertainty
quantification, with applications in the analyses of several test cases provided by project partners.
Task 4.2 is devoted to the development of model selection methods based on graph theory. In Task
4.3, we propose to develop methods related to probabilistic and Bayesian network classifier (BNC)
multidisciplinary optimisation. This final report is dedicated to the progress made on Task 4.1.
Duration: M18-M32.
D.4.2: Final report on Uncertainty analysis and implementation: This final deliverable presents the
progress made on uncertainty quantification methods and their implementation in a multiscale
system as in COMPOSELECTOR.

Compliance with the work-programme NMBP-23-2016


The definition of the architecture of the plugin interfaces of the BDSS matches the scope and
expected impact as described in the call text of NMBP-23-2016¹. Indeed, the definition of the plugin
interface, essential for the definition of BDSS apps, is a key step towards the definition and
implementation of the complete BDSS and will ensure that the developed business decision tool is
realistic and useful to end-users. A careful analysis of security aspects also contributes to the
expected integration of humans in new more complex industrial environments, supporting decision
makers to confidentially access the decision-making platform as indicated in the work-programme.

Executive Summary
In this deliverable, we focus on the implementation of uncertainty quantification tools applied to
particulate and fibre-reinforced composites. We developed a methodology and tool for
microstructure analysis and microstructure generation that includes uncertainty evaluation.

¹ http://ec.europa.eu/research/participants/portal/desktop/en/opportunities/h2020/topics/nmbp-23-2016.html

1 Introduction
Task 4.1 of the COMPOSELECTOR project is dedicated to the implementation of uncertainty
quantification methods in the stochastic modelling and uncertainty analyses of the structural
applications related to composite materials.
Nowadays, mathematical modelling and computer simulations are used in almost every
field of science to understand real-world phenomena. In the last few decades, a major revolution
has taken place with the emergence of computer simulation. Due to the development of powerful
computer hardware and efficient numerical algorithms, it has become affordable to analyse realistic
problems with complex mathematics more accurately, and the fidelity of mathematical
models has improved significantly. Mathematical modelling is nowadays an indispensable tool
for scientists and engineers, and is fundamental in the design and manufacturing of engineering
products such as airplanes, trains, cars, turbines, nuclear power plants and bridges.
However, in spite of increasing computational power and accuracy of the mathematical
predictions, models are still not able to predict the solution of a real-world problem exactly:
the results from computational modelling usually differ from experiments. The possible
sources of these discrepancies are modelling errors, numerical errors and/or uncertainties
due to inadequate information on the parameters and geometric domain considered
in the computational modelling. By definition, models are idealisations of reality, i.e. they
always rely on simplified assumptions and neglect some aspects of the phenomena of interest.
Usually, the mathematical model of a real-world problem is a set of equations based
on some hypotheses. These governing equations are then solved by applying proper numerical
schemes. Owing to advances in numerical algorithms, it can be assumed that numerical
schemes are sufficiently accurate to predict the underlying phenomena, and numerical errors
can usually be avoided or reduced by mesh refinement, thanks to developments in the field of
computer hardware. The discretisation errors are usually small in comparison to uncertainties.
In computational modelling, there are always numerous input parameters, such as material
properties, geometry, boundary conditions and initial conditions, whose estimation is
difficult and whose values are often inaccurate or uncertain. Due to this lack of information
from several sources, i.e. uncertainties present in the mathematical modelling, the computational
output is also uncertain. The computational domain (including geometrical uncertainties due
to manufacturing variability and imprecise geometrical definitions) and model parameters are
commonly encountered as uncertain by engineers when designing new products. Geometrical
uncertainties usually arise from production tolerances, loading deformation or wear of the
product. The effect of these uncertain parameters should be quantified and included in the
final solution to assure and improve the quality of the results. In a model, some sources of
uncertainty may have a minor influence and others a major one. In chaotic systems, an extreme
case is known as the butterfly effect: according to this hypothesis, the flap of a butterfly's
wings would be enough to alter the course of the weather forever.

There are various sources of uncertainties (aleatory and epistemic) in addressing a business
decision support system (BDSS) platform. In this project, the effect of uncertain input
parameters on output parameters is determined. Moreover, the results of sensitivity analysis
are used to judge the reliability of the underlying model and its parameters and to predict upper
and lower bounds of the output. Based on the sensitivity indices, it is possible to determine
the most important uncertain input parameters, their correlations and also their impact on the
predicted output.
These days, advanced composite materials are used in almost every industry due to
a number of useful properties such as high specific strength and stiffness. In composite
structures, material properties are typically not homogeneous and can vary randomly within large
structures such as an aircraft wing and fuselage. In this work, the impact of uncertainties is
estimated on the design and analysis of composite structures considering failure mechanisms at
different length scales. This work aims to develop and apply multiscale methods for quantifying
the role of uncertainties in the design and analysis of composite structures. This research
may lead to next-generation lightweight and robust composite structures to be used
in numerous industries (e.g., aircraft, tyres, high-speed vehicles, sports equipment). First,
several uncertainty quantification methods proposed in the literature are briefly discussed and
a framework to accurately propagate the uncertainties from the micro to the macro scale is
provided. We then explore bottom-up approaches for the stochastic multiscale analysis
of the project's composite-material test cases.
The robust optimisation tools developed in the modeFRONTIER software by ESTECO are
used here to implement uncertainties in the modelling and design process. The incremental
nature of this work package makes this a relatively straightforward task. One of the key
enablers of the proposed approach in COMPOSELECTOR is the ability to account for different
aspects of uncertainties within models, experiments and the design process. Uncertainty is
a first-order concern in the process of simulation-supported, decision-based materials design.
In this framework, only aleatory uncertainties are considered for the uncertainty analyses. It
is assumed that the main uncertainties lie in the PMC microstructure and/or in the material
parameters. Input parameters are assumed to be stochastic, while the probability distribution
function, including the mean value and standard deviation, is known for some input parameters.
Based on sensitivity and complexity measures, different models can be tested and their
predictive capabilities quantified.

In this report, several aspects of stochastic methods and analyses are detailed. Some test
cases are also included to report the progress made in this framework.

2 Uncertainty Quantification
Over the last decades, uncertainty quantification has become an essential tool for assessing the
robustness of engineering applications. New methodologies are therefore required to quantify
the effect of these uncertainties efficiently. Efficient uncertainty quantification of a model due
to uncertain input parameters is the first step towards the robust design of a component; the
second is to consider these uncertainties in the optimization process to obtain a robust design
of the component.

Uncertainties are associated with the selection of the models and with the input parameters
(operational and geometrical) of the physical model considered for analysis. For example,
in computational fluid modeling, the physical properties of the fluid or the boundary conditions
may not be known exactly. Geometrical uncertainties mainly arise from manufacturing
tolerances and imprecise geometrical definitions in the model. Wear and possible small
deformations during operation can also be counted as geometrical uncertainties. However,
large geometrical changes, e.g. aeroelastic wing deformations, cannot be considered
as random behavior. Aeroelastic effects can be treated with unsteady fluid-structure interaction
models where, if needed, at each time step UQ methods can be used to model wing
manufacturing uncertainties.

The uncertainties associated with the system under consideration can have different origins and
can be classified as epistemic and aleatory [1]. Epistemic uncertainties are related to the
properties of the model considered, while aleatory uncertainties are related to the properties
of the system being analysed.
Epistemic uncertainties are due to a lack of information in any phase of the modeling
process. These uncertainties may arise from mathematical assumptions or simplifications
used in the derivation of the governing equations. Parameters with epistemic
uncertainty are not associated with any probability density function and are not well characterized
by stochastic approaches; typically they are specified as an interval. In uncertainty
quantification of epistemic uncertainty, the output result is also specified as an interval. Epistemic
uncertainties can be quantified using sampling based methods (such as random sampling and
Latin hypercube sampling [1, 2]), interval analysis, fuzzy set theory [3, 4], etc. The turbulence
modeling assumption is a typical example of a source of epistemic uncertainty.
Aleatory uncertainties are associated with the physical system or the environment under
consideration. Material properties, operating conditions, geometrical variability, initial
conditions and boundary conditions are some examples of aleatory uncertainties.
Parameters with aleatory uncertainty are always associated with a particular probability
distribution function. Using appropriate uncertainty quantification (UQ) methods, the
probability distribution function of the output quantity can be determined. In UQ methods,
the quantities of interest are the probability distribution function (PDF), the cumulative distribution
function (CDF), and the mean, variance and other higher order moments of the solution.

Computer simulations without the inclusion of uncertainties in the model are called deterministic
simulations, and the output solution is called the deterministic solution. However,
when uncertainties are considered in the model, the simulations are called non-deterministic
and the results non-deterministic or stochastic. In deterministic simulations, one solves a
system of governing equations using proper numerical schemes; for example, in aeronautics
and CFD the Navier-Stokes equations are solved. In Figure 1, a typical computational
framework for deterministic simulations is shown. The mathematical model consists of numerical
parameters and a suitable discretization scheme to solve the governing system of equations.
The model parameters are provided to the model to compute the output solution.

Figure 1. Computational framework: Deterministic

In non-deterministic simulations, uncertainties associated with model parameters and
geometry are propagated to the output of the simulation. Introducing the probabilistic
nature of the input uncertainties into the model, the deterministic process transforms into a
non-deterministic process governed by stochastic partial differential equations (SPDE).

A mathematical model can be written as:

$$K(u, a) = 0 \qquad (1)$$

where u is the output solution and a is a set of input parameters {a_1, a_2, ..., a_n}. If a is
uncertain and parameterised as a vector of independent random variables ξ = {ξ_1, ξ_2, ..., ξ_n}, the
output solution also becomes stochastic, i.e. u(ξ). The stochastic output solution u(ξ) can be
computed using intrusive or non-intrusive stochastic methods by specifying the probability
distribution of ξ.
A deterministic approach leads to a single value of the output solution. When uncertainties
in the input parameters are introduced through their probability density functions (PDF),
the results from non-deterministic simulations are interpreted in terms of their statistical
moments (i.e. mean, variance, skewness, kurtosis) and probability distribution functions.
In Figure 2, a typical computational framework for non-deterministic simulations is shown.
Using an appropriate uncertainty quantification method, a confidence level of the estimated
quantities (or the solution of a model) can be calculated and the reliability of the results can
be characterized in a stochastic or probabilistic sense.

Figure 2. Computational framework: Non-deterministic

A distinct set of methods is required to quantify each form of uncertainty. Epistemic
uncertainties are considered reducible: they can be reduced by research advancement.
Aleatory uncertainties, however, are unavoidable. The work in this report is restricted to
stochastic methods for quantifying aleatory uncertainties only.

3 Methods for Uncertainty Quantification


For the reproducibility of experiments, the results from laboratory experiments are typically
reported with error bars due to the uncertainties associated with the measurements. The main
objective of stochastic methods is to build a framework to predict similar error bars in the
computational results due to the uncertainties in a mathematical model.
Over the past decade, UQ methods have received a lot of interest and attention. Depending
on the properties of the input uncertainties, UQ methods can be divided into probabilistic
(for aleatory) and non-probabilistic (for epistemic) methods. The uncertainty quantification
methods described in this report can be employed if the probability distribution functions of
the described uncertainties are known or defined.

For uncertainty quantification, a range of intrusive and non-intrusive methods have been
developed. The implementation of intrusive approaches requires considerable modification
of the deterministic codes. The non-intrusive methods are sampling based methods where the
original deterministic code does not need any modification and can be used as a black box.

The objective of all uncertainty quantification methods is to compute the statistical moments
and/or the PDF of the output quantity u(ξ) due to the uncertainties in the input parameters
ξ. The mean value ⟨u⟩ and the variance σ_u² of the output quantity u can be written as:

$$E[u] = \langle u \rangle = \int_{-\infty}^{\infty} u \, f_u(u) \, du$$

$$\sigma_u^2 = E[(u - \langle u \rangle)^2] = \int_{-\infty}^{\infty} (u - \langle u \rangle)^2 \, f_u(u) \, du = E[u^2] - (E[u])^2 \qquad (2)$$

where f_u(u) is the probability density function of the output u. The above integrals for the
statistical moments can be computed numerically by using an appropriate UQ method.
Usually one is mostly interested in the first two moments, as they give the mean value and
an idea of the spread of the solution. Nevertheless, higher order moments are sometimes also
needed, such as the skewness (3rd moment), which is a measure of the lack of symmetry of the
PDF, and the kurtosis (4th moment), which is a measure of the tails of the PDF, i.e. the outliers.
A variety of UQ methods such as Monte Carlo methods [15], polynomial chaos
methods [16–20], perturbation methods [21–24] and sensitivity methods [25–28] have been
proposed for uncertainty quantification. More recently, the polynomial chaos based methods
have been successfully applied by various researchers to a wide range of problems [29–31].

3.1 Monte Carlo Method

The most commonly used non-intrusive UQ method is the Monte Carlo method (Fishman [15]).
For performing basic uncertainty quantification, the Monte Carlo method is the simplest and
most straightforward to implement. It relies on a random sampling of the input random
variables, and its main advantage is that the existing code for solving the model equations
can be used as a black box.

In the Monte Carlo method, several samples of the input parameters are generated based
on their distributions. Each sample corresponds to a deterministic problem that is solved
individually using a deterministic solver. The probabilistic solution can then be obtained from
the output sample solutions. The basic Monte Carlo procedure is as follows:

1. Compute the realizations of the input uncertain parameters from their assumed probability
density function or interval.
2. Compute the output solution for each realization of the input parameters using the
computational solver.
3. Compute statistical moments such as the mean and the variance from the obtained output
samples.

Let a be the set of n uncertain input parameters and u the output quantity of interest of
a mathematical model. If ξ = {ξ_1, ξ_2, ..., ξ_n} is a vector of independent random variables and ξ^i
is a realization of ξ, the corresponding values of the input parameters and output solution are
a^i = a(ξ^i) and u^i = u(ξ^i). Each realization a^i corresponds to a unique solution of the output
u^i. By computing a collection of sample solutions {u^1, u^2, ...}, the statistical results of u, i.e.
the mean and the variance, can be computed simultaneously as:

$$E[u] = \lim_{N \to \infty} \sum_{i=1}^{N} u^i w_i \qquad (3)$$

or

$$\langle \hat{u} \rangle \approx \frac{1}{N} \sum_{i=1}^{N} u^i$$

where ⟨û⟩ is the mean of u and w_i is the weight of realization i. In Monte Carlo
methods the weight w_i is usually taken equal to 1/N, where N is the total number of
realizations. The variance of u can be written as:

$$\sigma_u^2 = \lim_{N \to \infty} \sum_{i=1}^{N} (u^i - \langle \hat{u} \rangle)^2 w_i \approx \frac{1}{N} \sum_{i=1}^{N} (u^i - \langle \hat{u} \rangle)^2 \qquad (4)$$

For computing the unbiased sample variance in a Monte Carlo simulation, the above
variance is multiplied by Bessel's correction factor, i.e. N/(N−1).
To compute the statistical results accurately, the domain of the input parameters should
be sampled with a very fine resolution, i.e. the number of samples N should be very high.
The main limitation of the Monte Carlo method is its computational cost: a large number of
solution samples is needed to compute statistical moments accurately.

The Monte Carlo method is very straightforward and can be implemented easily in a
non-intrusive way, but often requires a large number of deterministic simulations to achieve
accurate statistical results. It can easily be shown that the error associated with Monte Carlo
simulations is proportional to 1/√N. Consequently, the Monte Carlo method cannot be used for
very complex stochastic problems due to its slow convergence rate. For very complex industrial
problems it is used as a last resort, often only for the validation of other methods. Therefore,
due to the complexity of physical models, more efficient approaches need to be employed
for uncertainty quantification of engineering problems. Several techniques have been developed
to accelerate Monte Carlo simulations, such as Latin hypercube sampling (LHS) [5],
multi-level Monte Carlo (MLMC) [39] and quasi Monte Carlo [38] techniques.
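
As a minimal illustration of the three-step procedure above, the following Python sketch propagates three uniformly distributed inputs through a placeholder model and computes the first two moments; the `model` function is an assumed stand-in for the actual deterministic solver chain, and the bounds are those later used for the leafspring case (Table 2).

```python
import numpy as np

# Hypothetical deterministic model: a stand-in for the actual solver chain
# (e.g. CMBSFE + ABAQUS); this closed-form response is an assumption.
def model(vf, rho_c, rho_r):
    return vf * rho_c + (1.0 - vf) * rho_r

rng = np.random.default_rng(seed=0)
N = 10_000  # number of Monte Carlo realizations

# Step 1: sample the input parameters from their assumed distributions
vf = rng.uniform(0.475, 0.525, N)       # volume fraction, +/-5% of mean 0.5
rho_c = rng.uniform(1.7444, 1.8156, N)  # carbon-fibre density, +/-2%
rho_r = rng.uniform(1.2642, 1.3158, N)  # epoxy-resin density, +/-2%

# Step 2: one deterministic evaluation per realization
u = model(vf, rho_c, rho_r)

# Step 3: statistical moments of the output samples, Eqs. (3)-(4)
mean = u.mean()
var = u.var(ddof=1)  # Bessel's correction N/(N-1) for the unbiased variance
print(f"mean = {mean:.4f}, std = {np.sqrt(var):.4f}")
```

The 1/√N convergence of the error means that halving the standard error requires four times as many solver runs, which motivates the accelerated sampling schemes cited above.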
11
3.2 Perturbation or Moment Methods

The most widely used non-sampling methods for uncertainty quantification are perturbation
methods, also known as methods of moments. In perturbation methods, the random output
solution is expanded around the mean value of the input quantity via a Taylor series, truncated
at a certain order. Perturbation methods are limited to small perturbations only.
A multi-dimensional function u(ξ) can be expanded using the Taylor series as:

$$u(\xi) = u(\bar{\xi}) + \sum_{i=1}^{n} G_i \, \delta\xi_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} H_{ij} \, \delta\xi_i \, \delta\xi_j + O(|\delta\xi|^3) \qquad (5)$$

where ξ ≡ {ξ_1, ξ_2, ..., ξ_n} is an n-dimensional vector, δξ_i is a small perturbation, G_i is the
gradient and H_ij is the Hessian. Taking the average of the above equation, the mean can be
written as:

$$\langle u \rangle \approx u(\bar{\xi}) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} H_{ij} \, C_{ij} \qquad (6)$$

where C_ij = ⟨δξ_i δξ_j⟩ is the covariance. Note that the average of the gradient term in the
Taylor series vanishes since ⟨δξ_i⟩ = 0.

For computing the stochastic moments (i.e. mean, variance), the Hessian is needed. In
perturbation methods, the resulting expansion becomes very complicated beyond the second
order approximation or for a large number of uncertainties.
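
A minimal sketch of the second-order mean estimate of equation (6) is given below, assuming a toy scalar function and a central finite-difference Hessian; nothing here comes from the project codes.

```python
import numpy as np

# Assumed toy model u(xi); replace with the real response function.
def f(x):
    return x[0]**2 + 3.0 * x[0] * x[1] + np.sin(x[1])

def hessian_fd(f, x0, h=1e-4):
    """Central finite-difference Hessian H_ij of f at x0."""
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                       - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4.0 * h * h)
    return H

x_mean = np.array([1.0, 0.5])     # mean values of the inputs (assumed)
C = np.diag([0.02**2, 0.01**2])   # covariance of independent inputs (assumed)

# Eq. (6): <u> ~ u(mean) + 0.5 * sum_ij H_ij * C_ij
H = hessian_fd(f, x_mean)
mean_2nd_order = f(x_mean) + 0.5 * np.sum(H * C)
print(mean_2nd_order)
```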

3.3 Sensitivity Analysis

Sensitivity analysis has been used to estimate the relative contributions of the input parameters
(deterministic or non-deterministic) to the output solution [6–8]. Using sensitivity methods,
the change in the output solution due to an individual input parameter can be assessed.
The model sensitivity is estimated by gradients of the output solution with respect to the input
parameters.

Using sensitivity analysis, the variability of the model output due to each of the input
parameters can be computed. If a is the set of n input parameters {a_1, a_2, ..., a_n} and u is the
vector of m output variables {u_1, u_2, ..., u_m}, the sensitivity of the output solution with respect
to the input parameters, S_ij, can be written as:

$$S_{ij} = \frac{\partial u_i}{\partial a_j} \qquad (7)$$

Adjoint methods or finite difference methods can be used to calculate the sensitivity
derivatives. Sensitivity methods are often combined with other UQ methods for efficient
uncertainty analysis: in a first step, insignificant parameters are removed using sensitivity
analysis, and a UQ method is then applied to the reduced set of uncertain parameters.
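
The finite-difference variant of equation (7) can be sketched in a few lines; the toy `model` below, with m = 2 outputs and n = 2 inputs, is an assumption standing in for the real solver.

```python
import numpy as np

# Assumed toy model with m = 2 outputs and n = 2 inputs.
def model(a):
    return np.array([a[0] * a[1], a[0] + a[1]**2])

def sensitivities(model, a0, rel_step=1e-6):
    """Forward finite-difference sensitivities S_ij = du_i/da_j (Eq. 7)."""
    u0 = model(a0)
    S = np.zeros((len(u0), len(a0)))
    for j, aj in enumerate(a0):
        da = rel_step * max(abs(aj), 1.0)    # step scaled to the parameter
        a_pert = a0.copy()
        a_pert[j] += da
        S[:, j] = (model(a_pert) - u0) / da  # column j of the sensitivity matrix
    return S

a0 = np.array([0.5, 1.78])
print(sensitivities(model, a0))
```

An adjoint method would deliver the same derivatives at a cost independent of n, which matters when the number of input parameters is large.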
12
3.4 Polynomial Chaos Method

In this work, polynomial chaos methods are used for the development of efficient uncertainty
quantification and robust optimization methods. The polynomial chaos method (PCM) is a
more recent UQ approach which offers great potential for non-deterministic simulations.
In PCM, stochastic variables with different distributions can be handled with both intrusive
and non-intrusive approaches. The properties of the input random variables and the output
stochastic solution can be described in terms of the mean, variance, higher order moments
and probability density function.

The polynomial chaos theory was originally formulated by Wiener [32] in 1938. Based
on Wiener's chaos theory, the polynomial chaos methodology was developed by Ghanem
and Spanos (1991) for a Gaussian distribution in an intrusive framework [17]. It was later
extended to arbitrary distributions by Xiu and Karniadakis (2002) [16], who showed that
polynomial chaos theory can be applied to a large number of probability laws with corresponding
orthogonal polynomials from the Askey scheme [40]. Orthogonal polynomials are
classes of polynomials which are orthogonal to each other with respect to a weight function.
Hermite, Laguerre, Jacobi and Legendre polynomials are some of the most popular
orthogonal polynomials used in PC based stochastic applications.
In polynomial chaos methods, a random variable or function can be decomposed into
separable deterministic and non-deterministic orthogonal polynomials. For example, a random
variable u(x, ξ) can be decomposed as:

$$u(x, \xi) = \sum_{i=0}^{P} u_i(x) \, \psi_i(\xi) \qquad (8)$$

where u_i(x) are the deterministic expansion coefficients and ψ_i(ξ) are basis functions or
orthogonal polynomials; the expansion contains P + 1 terms.

Orthogonality means:

$$\int_{\xi} \psi_i(\xi) \, \psi_j(\xi) \, W_\xi(\xi) \, d\xi = \langle \psi_i \psi_j \rangle = \delta_{ij} \langle \psi_i^2 \rangle \qquad (9)$$

where W_ξ(ξ) is the probability distribution of the random variable ξ and δ_ij is the Kronecker
delta.

The mean of u(x) can be computed as:

$$E[u] = u_0 \qquad (10)$$

and the variance as:

$$E[(u - E[u])^2] = \sigma_u^2 = \sum_{i=1}^{P} u_i^2(x) \, \langle \psi_i^2 \rangle \qquad (11)$$

In many problems of practical interest the number of uncertain parameters is high, i.e. of
the order of 20 or more. For such high-dimensional stochastic problems the computational
effort increases exponentially. This is known as the curse of dimensionality and is the main
drawback of all PCM. Therefore, the development of efficient stochastic models is of great
interest for the uncertainty analysis of complex industrial applications. Over the years,
alternative methodologies such as sparse polynomial chaos [10, 11], sparse grid techniques [14],
compressive sampling [12] and reduced models [9, 34–36] have been investigated to overcome
the curse of dimensionality in PC schemes.

Table 1. 2D Legendre polynomials of PC order 0, 1, 2, 3

j | PC order | j-th polynomial
0 | p = 0 | 1
1 | p = 1 | ξ_1
2 | p = 1 | ξ_2
3 | p = 2 | (3ξ_1² − 1)/2
4 | p = 2 | ξ_1 ξ_2
5 | p = 2 | (3ξ_2² − 1)/2
6 | p = 3 | (5ξ_1³ − 3ξ_1)/2
7 | p = 3 | ξ_2 (3ξ_1² − 1)/2
8 | p = 3 | ξ_1 (3ξ_2² − 1)/2
9 | p = 3 | (5ξ_2³ − 3ξ_2)/2

3.4.1 Multi-dimensional or multivariable polynomials

Almost all computational models of real-world applications contain a large number of
random variables. To study the influence of each random variable of the model on the final
solution, we need to construct multi-dimensional polynomials starting from the one-dimensional
polynomials. The multi-dimensional polynomial chaos of order p in terms of 1D
polynomials can be written as explained below.

If {Ψ_0, Ψ_1, Ψ_2, Ψ_3} is the set of 1D orthogonal polynomials of PC order 3, then a 2D
stochastic quantity u(ξ_1, ξ_2) can be decomposed in terms of a PC expansion of order 3 as:

$$\begin{aligned} u(\xi_1, \xi_2) = {} & u_{00}\,\Psi_0 \\ & + u_{10}\,\Psi_1(\xi_1) + u_{01}\,\Psi_1(\xi_2) \\ & + u_{20}\,\Psi_2(\xi_1) + u_{11}\,\Psi_1(\xi_1)\Psi_1(\xi_2) + u_{02}\,\Psi_2(\xi_2) \\ & + u_{30}\,\Psi_3(\xi_1) + u_{21}\,\Psi_2(\xi_1)\Psi_1(\xi_2) + u_{12}\,\Psi_1(\xi_1)\Psi_2(\xi_2) + u_{03}\,\Psi_3(\xi_2) \end{aligned} \qquad (12)$$

or

$$u(\xi_1, \xi_2) = u_0\,\psi_0 + u_1\,\psi_1(\xi_1, \xi_2) + u_2\,\psi_2(\xi_1, \xi_2) + \ldots + u_9\,\psi_9(\xi_1, \xi_2) \qquad (13)$$

where {ψ_0, ψ_1, ..., ψ_9} is the set of two-dimensional polynomials of PC order 3.


For example, the set of 2D Legendre basis polynomials of PC order 3 is shown in Table 1, and
the shape of the first six 2D Legendre polynomials is shown in Figure 3.
If ξ = (ξ_1, ξ_2, ..., ξ_n) is a set of n-dimensional independent random variables, the probability
density function (PDF) of ξ can be written as:

$$W(\xi) = \prod_{i=1}^{n} W_i(\xi_i) \qquad (14)$$

where W_i(ξ_i) is the individual PDF of the random variable ξ_i.


Figure 3. The first six 2D Legendre polynomials

3.4.2 Regression method to estimate PC coefficients


The regression method is a statistical method used to estimate the relationship between
dependent and independent variables and to approximate the solution of an over-determined
system in linear algebra. It is also known as the least squares method. The regression based
non-intrusive polynomial chaos method was first introduced by Walters [37] to compute the
polynomial coefficients. In the sampling based regression method, the uncertain variables are
written in terms of their polynomial expansions. The stochastic quantity of interest u(x; ξ)
can be approximated in terms of a PC expansion as:

$$u(x; \xi) = \sum_{i=0}^{P} u_i(x) \, \psi_i(\xi) \qquad (15)$$

If one chooses a set of m observations (ξ^j = {ξ_1, ξ_2, ..., ξ_ns}^j; j = 1, 2, ..., m) in the
stochastic space and evaluates the deterministic output u(x; ξ^j) at these sampling points, the PC
coefficients can be determined by solving the system of equations:

$$\begin{bmatrix} \psi_0(\xi^1) & \psi_1(\xi^1) & \cdots & \psi_P(\xi^1) \\ \psi_0(\xi^2) & \psi_1(\xi^2) & \cdots & \psi_P(\xi^2) \\ \vdots & \vdots & & \vdots \\ \psi_0(\xi^m) & \psi_1(\xi^m) & \cdots & \psi_P(\xi^m) \end{bmatrix} \begin{bmatrix} u_0(x) \\ u_1(x) \\ \vdots \\ u_P(x) \end{bmatrix} = \begin{bmatrix} u(x; \xi^1) \\ u(x; \xi^2) \\ \vdots \\ u(x; \xi^m) \end{bmatrix} \qquad (16\text{--}17)$$
or

$$A u = b \qquad (18)$$

The least squares solution of the system of equations (18) is given by

$$u = (A^T A)^{-1} A^T b \qquad (19)$$

The coefficient matrix A is crucial in the least squares method. One solves the over-determined
system of equations by choosing more than P + 1 samples, since it is necessary that the problem
be well posed, i.e. that the matrix A^T A has full rank. Different sampling approaches, such as
random sampling, Latin hypercube sampling, sampling from quadrature points and Sobol
sampling, can be used in regression-based PC. Consistent with the literature [33], it was found
that over-sampling with 2(P + 1) simulations is sufficient to achieve accurate results; considering
more sampling points was not observed to improve the accuracy of the statistical moments.
The issue of the sampling scheme and the number of samples was investigated in detail by
Hosder et al. [13], who compared random, LHS and Hammersley sampling schemes; concerning
the number of points, twice the number of unknowns appeared to be enough in their studies.
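
A hedged sketch of this regression approach for a single uniform input on [−1, 1] with a Legendre basis is given below; the `model` function is an illustrative assumption, and the 2(P + 1) over-sampling rule follows [33]. The moments are then recovered from the coefficients via equations (10) and (11), using ⟨ψ_j²⟩ = 1/(2j + 1) for Legendre polynomials under the uniform PDF.

```python
import numpy as np
from numpy.polynomial import legendre

# Assumed black-box response of one uniform random input on [-1, 1].
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

P = 3                    # PC order -> P + 1 unknown coefficients
m = 2 * (P + 1)          # over-sampling with factor 2, as suggested in [33]
rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, m)   # random sampling points in stochastic space

# Coefficient matrix A_ij = psi_j(xi_i) built from Legendre polynomials
A = np.stack([legendre.legval(xi, np.eye(P + 1)[j]) for j in range(P + 1)],
             axis=1)
b = model(xi)
u, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution, Eq. (19)

# Moments from the coefficients: <psi_j^2> = 1/(2j+1) for the Legendre basis
norms = 1.0 / (2.0 * np.arange(P + 1) + 1.0)
mean = u[0]                              # Eq. (10)
variance = np.sum(u[1:]**2 * norms[1:])  # Eq. (11)
print(mean, variance)
```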

4 Probability Distributions
In stochastic modeling, uncertainties are treated as random variables that follow a certain
probability distribution which, for continuous (as opposed to discrete) variables, can be
described by a probability density function. The probability distributions introduced in the
following subsections are widely considered in CFD applications.

4.1 Uniform Distribution

The uniform distribution is a bounded distribution on [a, b] whose probability density is
constant within the bounded region. The probability density function (PDF) of a uniformly
distributed random variable ξ in [a, b] is given by:

$$W_\xi(\xi) = \frac{1}{b - a}$$

where a and b are the minimum and maximum values of the random variable ξ. In
Figure 4, the probability density functions of a uniformly distributed random variable for
given values of a and b are shown. It can be shown that the mean value of the random
variable ξ is:

$$\mu = \frac{a + b}{2} \qquad (20)$$

and the standard deviation is:

$$\sigma = \frac{b - a}{2\sqrt{3}} \qquad (21)$$
Figure 4. The probability density function of a uniformly distributed random variable

4.2 Normal Distribution

The normal or Gaussian distribution is defined over (−∞, ∞). If µ is the mean
and σ the standard deviation of a Gaussian distributed random variable ξ, the probability
density function of the random variable can be written as:

$$W_\xi(\xi) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(\frac{-(\xi - \mu)^2}{2\sigma^2}\right) \qquad (22)$$

For natural phenomena and measured physical data, the Gaussian distribution is often the
most suitable choice. If µ = 0 and σ = 1, the distribution is called the standard normal
distribution. In Figure 5, the probability density functions of a normally distributed random
variable for given values of µ and σ are shown.

4.3 Beta Distribution

The beta distribution has been applied to a wide range of probabilistic applications. The shape
of the PDF of a beta distributed random variable is controlled by two shape parameters.
For a Beta distributed random variable ξ ∈ [−1, 1], the PDF is defined as:

$$W_\xi(\xi; p, q) = \frac{(1 - \xi)^{q-1} (1 + \xi)^{p-1}}{2^{p+q-1} B(p, q)} \qquad (23)$$

where p and q are the shape parameters and B(p, q) is the Beta function, defined as:
Figure 5. The probability density function of a normally distributed random variable

Figure 6. The probability density function of a Beta distributed random variable
$$B(p, q) = \frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p + q)} \qquad (24)$$

where Γ(p) = ∫_0^∞ x^{p−1} e^{−x} dx. If p is a positive integer, Γ(p) = (p − 1)!.

For p = q = 1, the Beta distribution reduces to the uniform distribution, since the PDF
becomes W_ξ(ξ) = 1/2. For ξ ∈ [0, 1], the PDF is:

$$W_\xi(\xi; p, q) = \frac{\xi^{p-1} (1 - \xi)^{q-1}}{B(p, q)} \qquad (25)$$

The PDFs of a Beta distributed random variable in [0, 1] with different shape parameters
are shown in Figure 6. The PDF of a Beta distributed variable in [a, b] can be found by the
coordinate transformation below.
coordinate transformation as below.
If u ∈ [a, b] is an uncertain parameter such that u = (a + b)/2 + ((b − a)/2) ξ, where
ξ ∈ [−1, 1] is a Beta distributed random variable with the PDF of equation (23), then

$$W_u(u; p, q) = \frac{(u - a)^{p-1} (b - u)^{q-1}}{B(p, q)\,(b - a)^{p+q-1}} \qquad (26)$$

where u ∈ [a, b] and the shape parameters p, q > 0.
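
The three distributions above are available in standard statistics libraries; the following sketch, with illustrative parameter values, checks the moment formulas with scipy.stats (the Beta variable is shifted to [a, b] via the loc/scale mechanism, mirroring equation (26)).

```python
from scipy import stats

a, b = 0.475, 0.525                      # assumed bounds
uni = stats.uniform(loc=a, scale=b - a)  # U(a, b)
print(uni.mean(), uni.std())             # (a+b)/2 and (b-a)/(2*sqrt(3)), Eqs. (20)-(21)

gauss = stats.norm(loc=0.0, scale=1.0)   # standard normal, Eq. (22)
print(gauss.pdf(0.0))                    # 1/sqrt(2*pi)

p, q = 2.0, 5.0                              # assumed shape parameters
beta = stats.beta(p, q, loc=a, scale=b - a)  # Beta shifted to [a, b], Eq. (26)
print(beta.mean(), beta.std())
```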

Figure 7. Sampling points in 2D domain: 1. Random sampling, 2. Latin hypercube, 3. Hammersley, 4. Sobol.

5 Sampling Schemes

Almost all non-intrusive stochastic methods are sampling based. The original Monte Carlo
method is based on random sampling; later, several efficient sampling techniques were
introduced into the Monte Carlo method for faster convergence. As mentioned in the earlier
section, the regression based polynomial chaos method has also been proposed with several
sampling techniques, and several sampling techniques are considered for numerically computing
the integral in the projection based polynomial chaos method.

Random sampling is one of the most popular sampling techniques for probabilistic methods.
In random sampling approaches, a newly generated sample point is completely unbiased
with respect to the previous sampling points. Usually, sampling based UQ methods involve
random sampling from the random variable space according to the given PDFs, where millions
of samples may be required to estimate accurate results. Alternatively, several non-random
sampling approaches have been developed to accelerate sampling based UQ methods. Stratified
sampling is a non-random sampling approach where the probability space is divided into
subgroups and the sample points are then selected from the subgroups or strata.

Latin hypercube sampling, described by McKay in 1979 [5], is a particular case of stratified
sampling for generating samples close to random samples. The main idea of these improved
sampling schemes is to cover the parameter domain in a more uniform way than the classical
Monte Carlo methods. Quasi-random sequences such as Hammersley sampling, the Sobol
sequence and the Halton sequence are low-discrepancy, equally distributed sequences. These
methods use pre-calculated sequences to compute the changes to be made in the input
parameters. In quasi Monte Carlo, a sequence of points is generated in such a way that all
samples are distributed more uniformly in the parameter domain, which produces consistency
in the solution.
In Figure 7, 200 samples generated using different sampling schemes for two uniformly
distributed uncertain parameters are shown. The Monte Carlo samples show the formation
of clusters and void spaces, which causes the slow convergence. For the other schemes
(Latin hypercube, Sobol sequence and Hammersley sampling) the space is filled more regularly
and the occurrence of clusters is reduced.
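
A comparison in the spirit of Figure 7 can be reproduced with scipy.stats.qmc, as sketched below; Hammersley sampling is not available there, so the Halton sequence is used as a comparable low-discrepancy stand-in (an assumption), and the discrepancy is printed as a rough measure of uniformity (lower is more uniform).

```python
import numpy as np
from scipy.stats import qmc

n, d = 200, 2
rng = np.random.default_rng(0)

random_pts = rng.random((n, d))                     # plain random sampling
lhs_pts = qmc.LatinHypercube(d=d, seed=0).random(n)
sobol_pts = qmc.Sobol(d=d, seed=0).random(n)        # n not a power of 2 warns
halton_pts = qmc.Halton(d=d, seed=0).random(n)

for name, pts in [("random", random_pts), ("LHS", lhs_pts),
                  ("Sobol", sobol_pts), ("Halton", halton_pts)]:
    print(name, qmc.discrepancy(pts))
```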

6 Approach for Data-driven Analysis

Here we use data-driven approaches for statistical analyses. Suppose that we have m
parameters to analyse and n model responses of interest. The computing process is sketched
in Figure 8 and outlined as follows:

Step 1: Generate samples for the analysed parameters from their given probability distributions
with N_d realizations, e.g. X = {X_1, X_2, ..., X_m} where X_i = {X_i^(1), X_i^(2), ..., X_i^(N_d)}^T with
1 ≤ i ≤ m. There are several sampling methods, such as Latin hypercube sampling,
quasi-random low discrepancy sequences and the quasi-Monte Carlo (QMC) method. In this
work, the input data set is generated by the LHS method for its excellent space-filling property.

Step 2: Evaluate the multi-scale model at each group of design points using numerical
methods. The corresponding model responses of interest are gathered into vectors, say
Y_j = {Y_j^(1), Y_j^(2), ..., Y_j^(N_d)}^T with 1 ≤ j ≤ n.

Step 3: Transform the data (X, Y_j) (1 ≤ j ≤ n) into standardized form (x, y_j) (1 ≤ j ≤ n)
and construct the data-driven model for each model response Y_j from the data set (x, y_j).
The polynomial chaos approximation is used to approximate the input/output relations of the
multi-scale model. Thus, the optimal expansion for each model response can be written as
Y_j = M_j^opt(x_1, x_2, ..., x_m) (1 ≤ j ≤ n).

Step 4: Perform GSA on the data-driven model instead of the multi-scale model. The Sobol
decomposition is easily conducted on the SPCE model, and the Sobol indices for each parameter
are subsequently computed analytically. For every model response Y_j, we can calculate
both the first-order sensitivity indices S_i^(j) and the total sensitivity indices S_Ti^(j) to account for
interactions between parameters; a sketch of this step is given after Figure 8.


Figure 8. The process of data-driven analysis
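
The report does not name the software used for Step 4, so the sketch below shows how the first- and total-order Sobol indices could be obtained with the SALib package (an assumed choice), using the airframe input bounds of Table 6 and a toy response in place of the multi-scale model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["vf", "E", "nu"],  # inputs as in Table 6
    "bounds": [[0.475, 0.525], [1.7444e9, 1.8156e9], [0.196, 0.204]],
}

X = saltelli.sample(problem, 1024)       # Saltelli scheme: N*(2m+2) model runs
Y = 2.0 * X[:, 0] + 50.0 * X[:, 2]**2    # assumed toy response Y(vf, E, nu)

Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order indices S_i^(j)
print(Si["ST"])   # total-order indices S_Ti^(j)
```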

7 Uncertainty Quantification in Composite Materials


The methods discussed above for uncertainty analysis are applied to two test cases. The first
is the leafspring, a test case from the project partner Dow; the second is an airframe from
AIRBUS. For both test cases, the input uncertainties are introduced at the micro-scale level
and are propagated to the macro-scale structures to quantify uncertainties on the output
parameters related to structural properties.

7.1 Composite Leafspring under Uncertainties

In this analysis, the composite leafspring test case of Dow is considered for the uncertainty
analysis. For the modeling of the composite material in the leafspring, carbon fibre T300 and
epoxy resin 914C are used as primary materials (Figure 9) with a volume fraction of 50%, as
suggested by Dow. For the micro scale analysis and for computing the effective properties of
the composite material, the in-house code CMBSFE developed at the MRT department of LIST
is used. The FEM solver ABAQUS is then coupled with CMBSFE for the macro scale
structural analysis.

For the uncertainty analysis, three input uncertainties are currently considered: the volume
fraction and the densities of the carbon fibre and the epoxy resin. The volume fraction v_f
is considered uniformly distributed with a variation of ±5% of the mean value; the carbon
fibre density ρ_c and the epoxy resin density ρ_r are considered uniformly distributed with
variations of ±2% of the mean values. These uncertainty values are provided by Dow and
are based on expert opinion. The mean values and their uncertainties are tabulated in Table 2.
Figure 9. Test case: leafspring

Table 2. Description of the input uncertainties for the leafspring case

             | Volume frac. (v_f) | Density carbon fibre (ρ_c) | Density epoxy resin (ρ_r)
Mean         | 0.5         | 1.78        | 1.29
Distribution | Uniform     | Uniform     | Uniform
Scale        | ±5% of mean | ±2% of mean | ±2% of mean
Lower bound  | 0.475       | 1.7444      | 1.2642
Upper bound  | 0.525       | 1.8156      | 1.3158
Std. dev.    | 0.0144      | 0.0206      | 0.0149

The uncertainties are propagated from the micro scale to the macro scale. The effect of these
uncertainties on the final design of the leafspring (macro scale) is estimated using the UQ
methods discussed in the previous section.

Usually, in the Monte Carlo method with random sampling, a large number of samples (of
the order of 10^4) is required to predict statistical results accurately. However, for a real industrial
application, it is almost impossible to consider such a large number of samples, as one simulation
is itself computationally very expensive and heavy storage is needed to save the simulation
data. Alternatively, Latin hypercube sampling (LHS) is an efficient technique to produce
accurate solutions with a relatively smaller number of samples. Therefore, 1000 simulations
using LHS are currently considered for the uncertainty quantification. It is worth reporting that
approximately 200 GB of storage was required to save these simulations. The stochastic results
(mean and standard deviation of the output quantities) are shown in Table 3.

In Figure 10, the probability distributions (histograms) of the output quantities (the
leafspring mass and stiffness) are illustrated. As expected, for uniformly distributed uncertainties
in the input parameters (volume fraction, material densities), the output probabilities are not
always uniform. From these histograms it can be seen that the distribution of the stiffness is
close to uniform, whereas for the leafspring mass the distribution is closer to a Gaussian.
Distribution fitting tools can be used to fit these distributions to a beta, Gaussian or any
prescribed distribution; in modeFRONTIER a wide variety of distribution fitting and analysis
tools are available.
Table 3. Statistical results for the leafspring with 1000 LHS MC samples

                 | Mass    | Stiffness
Min.             | 2.360   | 12.864
Max.             | 2.486   | 14.244
Mean             | 2.423   | 13.546
Std. dev.        | 0.0229  | 0.398
Skewness         | 0.145   | 0.0243
Kurtosis         | 2.538   | 1.803
Variance         | 5.26E-4 | 0.159
Variation coeff. | 0.0095  | 0.0294

Figure 10. Probability distribution (histogram) for the leafspring mass (left) and stiffness (right)

In Figure 11, the sensitivity information is shown: the sensitivity (Sobol indices) of the
output quantities, leafspring mass and stiffness, with respect to the input variables. It can be
seen that the leafspring mass is sensitive to all three input parameters, with the uncertainty
in the fibre density being the most influential. For the stiffness, only the volume fraction is
a sensitive parameter, meaning that uncertainties in the fibre and resin densities do not
influence the output uncertainty in the stiffness.

These results will be used as a reference solution to compare with results from the polynomial
chaos method in the next section. We also study the effect of an increasing number of samples
on the accuracy of the stochastic solution.

7.1.1 Results Comparison: Effect of UQ Methods, Sampling Schemes and Number of Samples

In this section, results from random and LHS sampling with different numbers of simulations
are compared. Further, the effect of sampling on the polynomial chaos method is also discussed
for increasing polynomial order, sampling schemes and numbers of samples.

In Table 4, the stochastic results (mean and standard deviation) of the output quantities
(leafspring mass and stiffness) are tabulated. To compare the efficiency of the methods, the
sampling schemes and the effect of the number of samples, the tabulated results are obtained
using two different UQ methods (Monte Carlo and polynomial chaos), two different sampling
techniques (random and LHS sampling) and different numbers of samples. The results are
also compared with the reference solution discussed in the previous section: it is assumed
that with 1000 LHS samples the results are converged and can be considered as the reference
solution.
Figure 11. Sensitivity analysis for leafspring case

It can be noticed in Table 4 that, for the particular test case considered in this study, the
Monte Carlo method with random sampling is not able to provide an accurate standard deviation
with up to 200 samples, whereas with Latin hypercube sampling only 50 samples are sufficient
to obtain accurate solutions. With the polynomial chaos approach (polynomial order 2, labelled
PC2 in Table 4), 50 samples are likewise sufficient to reach the same accuracy. In conclusion,
the polynomial chaos approach is found superior in terms of accuracy. It can also be noticed
that the sampling scheme does not affect the accuracy of the polynomial chaos approach. This
independence from the sampling technique suggests that a smaller number of LHS samples can
also provide an accurate solution using polynomial chaos. The results also suggest that the
problem considered is not very complex in terms of stochasticity. In the future, a more complex
test problem with a larger number of input parameters will be considered in order to cover
all possible sources of uncertainties.

Table 4. Statistical results for the leafspring with varying numbers of samples in random and LHS Monte Carlo

                 | Mass Mean | Mass Std | Stiffness Mean | Stiffness Std
Rand, 50         | 2.42 | 2.23E-2 | 13.51 | 3.81E-1
Rand, 100        | 2.42 | 2.28E-2 | 13.55 | 4.08E-1
Rand, 200        | 2.42 | 2.27E-2 | 13.55 | 4.00E-1
LHS, 50          | 2.42 | 2.30E-2 | 13.55 | 3.99E-1
LHS, 100         | 2.42 | 2.29E-2 | 13.55 | 3.99E-1
PC2, Rand, 50    | 2.42 | 2.29E-2 | 13.55 | 3.98E-1
PC2, LHS, 50     | 2.42 | 2.29E-2 | 13.55 | 3.98E-1
Ref (LHS, 1000)  | 2.42 | 2.29E-2 | 13.55 | 3.98E-1

7.2 Buckling Analysis of AIRBUS Airframe under Uncertainties

Buckling is a kind of instability that leads to the failure of a structure. It occurs when a structure
is under compressive axial stress and manifests as a sudden sideways deflection. If the
axial load applied to a structure (such as a column) is increased, it ultimately becomes large
enough to make the structure unstable and causes a structural failure called buckling. For
engineering applications, it is crucial to have accurate information about the buckling modes.
Uncertainties in computer simulations make it difficult to predict buckling accurately, so in
order to avoid the failure of the structure under study it is very important to measure the effect
of input uncertainties on the buckling mode.

In this section, a relatively more complex test case, the AIRBUS airframe component shown
in Figure 12, is considered for the stochastic structural analysis. The model contains three
components: the panel, the stringer and the frame, as shown in Figure 13. The panel and stringer
are made of 12 layers of composite material and the frame contains 16 layers. The layer
thickness is constant for all three components and is set to 0.18 mm. The layer orientation
angles for each component are tabulated in Table 5. For the modeling of the composite material
in the airframe, carbon fibre T300 and epoxy resin 914C are used as primary materials with a
volume fraction of 50%, as in the previous test case.

Figure 12. Airframe model

The FEM solver ABAQUS is used with CMBSFE for the macro scale structural analysis,
with the buckling modes considered as the model output. The computational model is provided
by AIRBUS, one of the project partners in the COMPOSELECTOR project. For uncertainty
quantification and sensitivity analysis, modeFRONTIER is used.

The input uncertainties at the micro-scale level are considered in the material properties
of the epoxy resin. We noticed that changes in the material densities do not make any
difference to the buckling mode, so for this analysis the uncertainties are introduced in the
volume fraction (v_f), the Young's modulus (E) and the Poisson ratio (ν) of the epoxy resin.
The mean values of the input uncertainties and their uncertainty bounds are tabulated in
Table 6.

As only 3 input uncertainties are currently considered, a fourth order polynomial approximation
(PC order 4) is taken to obtain accurate stochastic results.
Figure 13. Airframe components: Panel (top left), Frame (top right) and Stringer (bottom)

Table 5. Orientation angles for the airframe components

Layer    | Panel | Stringer | Frame
Layer 1  | -45 | -45 | 45
Layer 2  | 90  | 90  | -45
Layer 3  | 0   | 0   | 0
Layer 4  | 90  | 90  | 0
Layer 5  | 0   | 0   | 90
Layer 6  | 45  | 45  | 0
Layer 7  | 45  | 45  | 0
Layer 8  | 0   | 0   | 45
Layer 9  | 90  | 90  | -45
Layer 10 | 0   | 0   | 0
Layer 11 | 90  | 90  | 0
Layer 12 | -45 | -45 | 90
Layer 13 | -   | -   | 0
Layer 14 | -   | -   | 0
Layer 15 | -   | -   | -45
Layer 16 | -   | -   | 45

Table 6. Description of the input uncertainties for the Airframe

             | Volume frac. (v_f) | Young's modulus (E) | Poisson ratio (ν)
Mean         | 0.5         | 1.78E+9     | 0.2
Distribution | Uniform     | Uniform     | Uniform
Scale        | ±5% of mean | ±2% of mean | ±2% of mean
Lower bound  | 0.475       | 1.7444E+9   | 0.196
Upper bound  | 0.525       | 1.8156E+9   | 0.204
Std. dev.    | 0.0144      | 0.0206E+9   | 0.0023

As described earlier, the relatively more robust Latin hypercube sampling scheme is used. The
statistical results, mean and standard deviation of the first buckling mode, are tabulated in
Table 7, together with a convergence study of the mean and standard deviation over an increasing
number of samples. It can be noticed that a total of 70 samples is sufficient to produce accurate
results. The results are also compared with an adaptive sparse polynomial chaos approach using
40 samples. In Figure 14, the probability distribution of the first buckling mode is shown; the
range of the buckling mode due to the input uncertainties can be observed in the figure. In
Figure 15, the sensitivity of the buckling mode with respect to the input parameters is shown.
It can be noticed that the Young's modulus of the epoxy resin does not exhibit any effect on
the buckling mode, and the volume fraction shows only a very minor effect. The Poisson ratio
is the most influential parameter: a very small change in the Poisson ratio can have a large
impact on the failure of the structure.

Table 7. Statistical results for the Airframe buckling mode with varying numbers of samples

                       | Mean        | Std
PCE (LHS, 40)          | 0.790765362 | 0.018305561
PCE (LHS, 50)          | 0.790766812 | 0.018343126
PCE (LHS, 60)          | 0.790780994 | 0.018360053
PCE (LHS, 70)          | 0.790822128 | 0.018351899
PCE (LHS, 80)          | 0.790790962 | 0.018347191
PCE Adaptive (LHS, 40) | 0.790786609 | 0.018346468

8 Future Work

The current work and the methods implemented for uncertainty analysis will be further extended
to cases where uncertainties at multiple scales (macro and micro) are taken into account
simultaneously. Uncertainties in the input material properties (micro scale) and in the
orientation angles of the composite layers (macro scale) will be considered for multi-scale
uncertainty propagation and quantification.
Figure 14. Probability density function of the buckle mode (histogram and fitted density)

Figure 15. Sensitivity of input parameters on the buckle mode

Acknowledgements
This study was performed in the framework of the on-going EU project COMPOSELECTOR.
The authors gratefully acknowledge this financial support.

References
[1] J. C. Helton, J. D. Johnson, W. L. Oberkampf, and C. J. Sallaberry, Representation of Analysis Results Involving Aleatory and Epistemic Uncertainty, Sandia National Laboratories, Tech. Rep. SAND 2008-4379, 2008.
[2] R. R. Yager and L. Liu, Classic Works of the Dempster-Shafer Theory of Belief Functions, Studies in Fuzziness and Soft Computing Series, vol. 219, Springer, Berlin, 2008.
[3] Iterative solvers for a spectral Galerkin approach to elliptic partial differential equations with fuzzy coefficients, SIAM J. Sci. Comput., 35(5):S420-S444, 2013.
[4] H.-J. Zimmermann, An application-oriented view of modelling uncertainty, Eur. J. Oper. Res., 122:190-199, 2000.
[5] M. D. McKay, R. J. Beckman, W. J. Conover, A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code, Technometrics, 21(2):239-245, 1979.
[6] A. G. Godfrey and E. M. Cliff, Sensitivity equations for turbulent flows, AIAA Paper 2001-1060, 39th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, January 2001.
[7] L. L. Green, P. A. Newman, and K. Haigler, Sensitivity derivatives for advanced CFD algorithm and viscous modeling parameters via automatic differentiation, Journal of Computational Physics, 125:313-324, 1996.
[8] A. C. Taylor, L. L. Green, P. A. Newman, and M. M. Putko, Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis, AIAA Paper 2001-2529, AIAA 15th Computational Fluid Dynamics Conference, Anaheim, CA, June 2001.
[9] S. Sachdeva, P. B. Nair and A. J. Keane, Hybridization of stochastic reduced basis methods with polynomial chaos expansions, Probabilistic Engineering Mechanics, 21(2):182-192, 2006.
[10] G. Blatman, B. Sudret, Sparse polynomial chaos expansions and adaptive stochastic finite elements using a regression approach, C. R. Mecanique, 336, 2008.
[11] G. Blatman, B. Sudret, Adaptive sparse polynomial chaos expansion based on least angle regression, Journal of Computational Physics, 230(6):2345-2367, 2011.
[12] J. Hampton, A. Doostan, Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies, Journal of Computational Physics, 280:363-386, 2015.
[13] S. Hosder, R. W. Walters and M. Balch, Efficient sampling for non-intrusive polynomial chaos applications with multiple input uncertain variables, 9th AIAA Non-Deterministic Approaches Conference, Honolulu, April 2007.
[14] T. Gerstner and M. Griebel, Numerical integration using sparse grids, Numerical Algorithms, 18:209, 1998.
[15] G. S. Fishman, Monte Carlo: Concepts, Algorithms, and Applications, Springer-Verlag, New York, 1996.
[16] D. Xiu and G. E. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM Journal on Scientific Computing, 24(2):619-644, 2002.
[17] R. Ghanem and P. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.
[18] I. Babuska, F. Nobile, and R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Journal on Numerical Analysis, 45(3):1005-1034, 2007.
[19] L. Mathelin, M. Y. Hussaini, and T. A. Zang, Stochastic approaches to uncertainty quantification in CFD simulations, Numer. Algorithms, 38:209-236, 2007.
[20] X. Wan and G. Karniadakis, An adaptive multi-element generalized polynomial chaos method for stochastic differential equations, Journal of Computational Physics, 209:617-642, 2005.
[21] A. H. Nayfeh, Perturbation Methods, John Wiley & Sons, New York, 1973.
[22] P.-L. Chow, Perturbation Methods in Stochastic Wave Propagation, SIAM Review, 17(1):57-81, 1975.
[23] A. C. Taylor, L. L. Green, P. A. Newman and M. M. Putko, Some advanced concepts in discrete aerodynamic sensitivity analysis, AIAA Paper 2001-2529, 2001.
[24] J. Gaspar and K. L. Judd, Solving Large Scale Rational Expectations Models, NBER Working Paper No. t0207, 1997.
[25] R. W. Atherton, R. B. Schainker, and E. R. Ducot, On the Statistical Sensitivity Analysis of Models for Chemical Kinetics, AIChE Journal, 21:441-448, 1975.
[26] D. G. Cacuci, C. E. Weber, E. M. Oblow, and J. H. Marable, Sensitivity Theory for General Systems of Nonlinear Equations, Nucl. Sci. Eng., 75:88-110, 1980.
[27] M. J. Crick, M. D. Hill and D. Charles, The Role of Sensitivity Analysis in Assessing Uncertainty, in: Proceedings of an NEA Workshop on Uncertainty Analysis for Performance Assessments of Radioactive Waste Disposal Systems, Paris, OECD, 1-258, 1987.
[28] D. A. Zimmerman, R. T. Hanson, and E. A. Davis, A comparison of parameter estimation and sensitivity analysis techniques and their impact on the uncertainty in ground water flow model predictions, Sandia National Laboratory, Albuquerque, NM, Report No. NUREG/CR-5522, 1991.
[29] O. Le Maitre, O. Knio, M. Reagan, H. Najm, R. Ghanem, A stochastic projection method for fluid flow. I: Basic formulation, Journal of Computational Physics, 173:481-511, 2001.
[30] C. Lacor, S. Smirnov, Non-Deterministic Compressible Navier-Stokes Simulations using Polynomial Chaos, Mini-symposium on Uncertainty Quantification Methods in CFD and FSI Problems, ECCOMAS Conf., Venice, 30/6-4/7/2008.
[31] C. Dinescu, S. Smirnov, Ch. Hirsch, and C. Lacor, Assessment of intrusive and non-intrusive non-deterministic CFD methodologies based on polynomial chaos expansions, Int. J. of Eng. Systems Modeling and Simulation, 2:87-98, 2010.
[32] N. Wiener, The homogeneous chaos, American Journal of Mathematics, 60(4):897-936, October 1938.
[33] S. Hosder, R. W. Walters, and R. Perez, A Non-Intrusive Polynomial Chaos Method for Uncertainty Propagation in CFD Simulations, 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, January 9-12, 2006.
[34] A. Doostan, R. Ghanem, and J. Red-Horse, Stochastic model reduction for chaos representations, Comp. Meth. in Appl. Mech. and Eng., 196:3951-3966, 2007.
[35] A. Nouy, A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations, Computer Methods in Applied Mechanics and Engineering, 196(45-48):4521-4537, 2007.
[36] L. Tamellini, O. Le Maitre, A. Nouy, Model reduction based on Proper Generalized Decomposition for the stochastic steady incompressible Navier-Stokes equations, SIAM J. Sci. Comput., 36(3):A1089-A1117, 2014.
[37] R. Walters, Stochastic Fluid Mechanics via Polynomial Chaos, 41st AIAA Aerospace Sciences Meeting and Exhibit, 2003.
[38] R. Caflisch, Monte Carlo and quasi-Monte Carlo methods, Acta Numerica, pp. 1-49, 1998.
[39] M. B. Giles, Multilevel Monte Carlo methods, Acta Numerica, 24:259-328, 2015.
[40] G. E. Andrews, R. Askey, Classical orthogonal polynomials, in: Polynômes Orthogonaux et Applications: Proceedings of the Laguerre Symposium, Springer, Berlin Heidelberg, pp. 36-62, 1985.
