Advances in Methods For Uncertainty and Sensitivity Analysis
Nicolas Devictor
CEA Nuclear Energy Division
nicolas.devictor@cea.fr
in co-operation with:
Nadia PEROT, Michel MARQUES and Bertrand IOOSS (CEA)
Julien JACQUES (INRIA Rhône-Alpes, PhD student),
Christian LAVERGNE (Montpellier 2 University & INRIA).
Introduction (1/2)
• In the framework of the study of the influence of uncertainties on the results of severe accident computer codes, and hence on the results of Level 2 PSA (responses, hierarchy of important inputs…).
• Why take uncertainty into account?
– There are many sources of uncertainty;
– To show explicitly and traceably their impact on a decision process, so that it can be made robust against uncertainties.
• A probabilistic framework is one of the tools for a coherent and rational treatment of uncertainties in a decision-making process.
• Some applications of the treatment of uncertainty by probabilistic methods
– For a better understanding of a phenomenon
• To evaluate the most influential input variables. To steer R&D.
– For the improvement of a model or a code
• Calibration, qualification…
– In a risk decision-making process
• Hierarchy of contributors, of interest for actions to reduce uncertainty or to define a mitigation means (for example a SAM measure)
• Confidence intervals, probability density functions, margins…
• In any analysis, we must keep in mind the modelling choices and the assumptions.
– Case: a variable has a large influence on the response variability, but we have low confidence in its value…
[Diagram: from theory to results. Input variables ("mathematics") feed a code built from equations, numerical schemes, convergence criteria, model parameters… and produce an output whose meaning and variability must be questioned. Uncertainties bear on: physical variables, model parameters, models… Inputs for the study: probabilistic models of the uncertainties on physical variables and parameters; mathematical model of the ageing or failure phenomenon; process (codes, experiments…); acceptance criterion.]
• 2nd question: what is the part of the variance of Y that comes from the variance of Xi (or of a set {Xi})?
S_i = V(E[Y | X_i]) / V(Y)
– Usual sensitivity indices
• Pearson's correlation coefficient, Spearman's correlation coefficient, coefficients from a linear regression, PRCC…
– In the non-linear or non-monotonic case: Sobol's method or FAST
• very costly with time-consuming codes (→ use of a response surface),
• problems with correlated uncertainties.
– All these indices are defined under the assumption that the input variables are statistically independent.
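As an illustration of the difference between Pearson's (linear) and Spearman's (rank, i.e. monotonic) coefficients, here is a minimal stdlib-Python sketch; the model y = exp(x) and the sample size are illustrative assumptions, not taken from the slides:

```python
import math
import random

def pearson(x, y):
    """Pearson's linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(v):
    """1-based ranks (ties are not handled, for simplicity)."""
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def spearman(x, y):
    """Spearman's coefficient = Pearson on the ranks."""
    return pearson(ranks(x), ranks(y))

random.seed(0)
x = [random.uniform(0.0, 3.0) for _ in range(500)]
y = [math.exp(xi) for xi in x]   # monotonic but strongly non-linear
rp = pearson(x, y)               # < 1: misses part of the link
rs = spearman(x, y)              # = 1: captures the monotonic link
print(rp, rs)
```

On a strictly monotonic relation, Spearman's coefficient reaches 1 while Pearson's stays below it, which is why rank-based indices are preferred for monotonic non-linear codes.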
S_j = V(E[Y | X_j]) / V(Y)
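This first-order index can be estimated by Monte Carlo with the classical pick-freeze scheme; a stdlib sketch, where the toy model Y = X1 + 2·X2 with standard normal inputs (an illustrative assumption) has analytical indices S1 = 1/5 and S2 = 4/5:

```python
import random

random.seed(1)

def model(x1, x2):
    # toy model (assumption): Y = X1 + 2*X2 with X1, X2 ~ N(0,1),
    # so V(Y) = 5, S1 = 1/5, S2 = 4/5 analytically
    return x1 + 2.0 * x2

N = 100_000
A = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
B = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
yA = [model(*a) for a in A]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def first_order(i):
    """S_i = V(E[Y|X_i])/V(Y): keep X_i from sample A, redraw the rest from B."""
    cross = 0.0
    for a, b, y in zip(A, B, yA):
        x = list(b)
        x[i] = a[i]            # "freeze" coordinate i
        cross += y * model(*x)
    return (cross / N - mean ** 2) / var

s1, s2 = first_order(0), first_order(1)
print(s1, s2)   # close to 0.2 and 0.8
```

Each index costs an extra N model runs, which is why the slides point to response surfaces for expensive codes.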
• If the inputs are statistically independent, the sum of these sensitivity indices = 1.
• If the inputs are statistically dependent:
– the terms of the model function decomposition (Sobol's method) are not orthogonal, so a new term appears in the variance decomposition → the sum of the sensitivity indices of all orders is no longer equal to 1.
– Indeed, the variabilities of two correlated variables are linked, so when we quantify the sensitivity to one of these two variables we also quantify part of the sensitivity to the other. The same information is thus counted several times in the sensitivity indices of the two variables, and the sum of all indices is greater than 1.
• We have studied the natural idea: defining multidimensional sensitivity indices for groups of correlated variables.
– We can also define higher-order indices and total sensitivity indices.
– If all input variables are independent, these sensitivity indices coincide with those of the independent case.
– The assessment is often time-consuming (extension of Sobol's method) → some computational improvements are in progress and are very promising.
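In the simple independent-input case, a group index can be sketched by freezing a whole group of coordinates at once (the closed Sobol index of the group). The toy model Y = X1·X2 + X3 below is an illustrative assumption, chosen so that the group {X1, X2} carries half the variance while X1 alone carries none; the correlated-input extension studied in the slides requires more than this sketch:

```python
import random

random.seed(2)

def model(x):
    # toy model with an interaction (assumption): Y = X1*X2 + X3,
    # X_i ~ N(0,1) i.i.d., so the closed index of {X1, X2} is 0.5
    # while the first-order index of X1 alone is 0
    return x[0] * x[1] + x[2]

N = 100_000
A = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(N)]
B = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(N)]
yA = [model(a) for a in A]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def closed_index(group):
    """Closed Sobol index of a group: freeze every coordinate of the group."""
    cross = 0.0
    for a, b, y in zip(A, B, yA):
        x = list(b)
        for i in group:
            x[i] = a[i]
        cross += y * model(x)
    return (cross / N - mean ** 2) / var

s_group = closed_index([0, 1])   # V(E[Y | X1, X2]) / V(Y) = 0.5 here
s_single = closed_index([0])     # X1 alone explains nothing: 0
print(s_group, s_single)
```

The gap between the group index and the sum of individual indices exposes the interaction term that one-at-a-time indices miss.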
• The best function in the family F is then the function f0 that minimizes a risk function:
R(f) = ∫ L(z, f(x, c)) dP(x, y)
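In practice f0 is found by minimizing the empirical counterpart of R(f) over a sample. A minimal sketch with F = affine functions and the squared loss, where the minimizer has the least-squares closed form (the data-generating line y = 2x + 1 and the noise level are illustrative assumptions):

```python
import random

random.seed(3)

# family F (assumption): affine functions f(x) = a*x + b; loss = squared error
def empirical_risk(a, b, sample):
    """Empirical counterpart of R(f) on a finite sample."""
    return sum((y - (a * x + b)) ** 2 for x, y in sample) / len(sample)

# synthetic data (assumption): y = 2x + 1 plus Gaussian noise
data = [(x, 2.0 * x + 1.0 + random.gauss(0.0, 0.1))
        for x in (random.uniform(0.0, 1.0) for _ in range(200))]

# the minimizer of the empirical risk over F is the least-squares solution
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a0 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b0 = (sy - a0 * sx) / n
print(a0, b0, empirical_risk(a0, b0, data))
```

By construction no other affine function has a lower empirical risk on this sample, which is exactly the minimization the slide describes, restricted to a very small family F.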
•Neural networks
– Regression models (assumption: continuous function).
– Other possibility: discriminant functions (logit, probit models).
– to improve the generalization error,
– to estimate the pdf of the output.
[Figure: comparison of results — mean and minimum curves; y-axis 0.0000 to 0.6000, x-axis 30 to 90.]
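A logit discriminant of the kind mentioned above can be fitted by gradient descent on the log-loss. This stdlib sketch assumes a synthetic 1-D dataset drawn from P(Y=1 | x) = σ(3x − 1); the coefficients, learning rate and iteration count are all illustrative:

```python
import math
import random

random.seed(4)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# synthetic binary sample (assumption): P(Y=1 | x) = sigmoid(3x - 1)
data = [(x, 1 if random.random() < sigmoid(3.0 * x - 1.0) else 0)
        for x in (random.uniform(-2.0, 2.0) for _ in range(2000))]

# fit the logit model sigmoid(w*x + c) by gradient descent on the log-loss
w, c = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    gw = gc = 0.0
    for x, y in data:
        err = sigmoid(w * x + c) - y   # gradient of the log-loss w.r.t. the score
        gw += err * x
        gc += err
    w -= lr * gw / len(data)
    c -= lr * gc / len(data)
print(w, c)   # should approach the true coefficients 3 and -1
```

A probit model would differ only in replacing the sigmoid by the standard normal CDF.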
[Figures: density traces and scatter plots of the variables po2 and Pression (pressure), each over the range 0–10.]
• The relation between the sensitivity indices of the true function f and of the response surface RS (writing f = RS + ε) is:
S_i = [ V(E[RS(X1, …, Xp) | X_i]) + V(E[ε(X1, …, Xp) | X_i]) + 2 cov( E[RS(X1, …, Xp) | X_i], E[ε(X1, …, Xp) | X_i] ) ] / V(f(X1, …, Xp))
• Problem: the computation of the covariance term is generally impossible → results on the "true" function cannot, in general, be deduced from results obtained on a RS.
• The only cases where results can be deduced are:
– the RS is a truncated model obtained from a decomposition in an orthogonal basis;
– the error ε is not very sensitive to the variables X1, …, Xp; then S_i ≈ S_{RS,i} · V(RS(X1, …, Xp)) / (V(ε(X1, …, Xp)) + V(RS(X1, …, Xp)))
• Probability distribution
– simulation + fit + statistical tests (asymptotic)
• Confidence interval P(m ≤ Y ≤ M) ≥ γ
– from the density function
– Wilks' formula: the smallest N such that 1 − γ^N − N(1 − γ)γ^(N−1) ≥ β
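The Wilks formula leads to a simple sample-size computation: find the smallest number N of code runs guaranteeing a coverage γ with confidence β. A stdlib sketch, with the usual illustrative choice γ = β = 0.95:

```python
def wilks_one_sided_n(gamma, beta):
    """Smallest N with 1 - gamma**N >= beta: the maximum of N runs bounds
    the gamma-quantile of the output with confidence beta."""
    n = 1
    while 1.0 - gamma**n < beta:
        n += 1
    return n

def wilks_two_sided_n(gamma, beta):
    """Smallest N with 1 - gamma**N - N*(1-gamma)*gamma**(N-1) >= beta:
    [min, max] of N runs covers a fraction gamma of the output distribution
    with confidence beta."""
    n = 2
    while 1.0 - gamma**n - n * (1.0 - gamma) * gamma**(n - 1) < beta:
        n += 1
    return n

print(wilks_one_sided_n(0.95, 0.95))   # 59 code runs
print(wilks_two_sided_n(0.95, 0.95))   # 93 code runs
```

These are the classical 95%/95% Wilks sample sizes (59 one-sided, 93 two-sided), independent of the code and of the output distribution.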
• FORM approximation
P_F ≈ Φ(−β_HL)
[Figure: standard space with the design point U*, the safe domain and the failure domain separated by the limit state G(U) = 0.]
• SORM approximation (Breitung)
P_F ≈ Φ(−β_HL) · Π_{i=1}^{n−1} (1 + β_HL κ_i)^(−1/2)
• Sensitivity factors
α_i = u_i* / β_HL
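For a linear limit state in the standard space these FORM quantities have closed forms. A stdlib sketch, where the limit state G(u) = 3 − u1 − u2 is an illustrative assumption (for a linear G, the FORM result is exact):

```python
import math

def phi(t):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# illustrative linear limit state in standard space: G(u) = 3 - u1 - u2,
# failure if G(u) <= 0
a = [1.0, 1.0]                                  # gradient of the linear part
b = 3.0
norm_a = math.sqrt(sum(ai * ai for ai in a))

beta_hl = b / norm_a                            # Hasofer-Lind reliability index
u_star = [beta_hl * ai / norm_a for ai in a]    # design point U*
alpha = [ui / beta_hl for ui in u_star]         # sensitivity factors u_i*/beta
pf_form = phi(-beta_hl)                         # FORM: P_F ~ Phi(-beta_HL)
print(beta_hl, u_star, alpha, pf_form)
```

For a curved limit state the design point must be found by optimization, and the Breitung correction above adjusts the estimate with the curvatures κ_i at U*.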
OCDE/NEA/CSNI WGRISK WS on Level 2 PSA and SAM March 2004
CEA/Cadarache - Plant Operation and Reliability Laboratory
FORM – simple case
P_f = Prob(Y ≥ 1) ≈ Φ(−β_HL) = P*
[Figure: in the standard space (U1, U2), the failure domain ("domaine de défaillance") lies beyond the limit state G(u) = 0; the design point P* is at distance β_HL from the origin.]
Results: Simulations vs FORM/SORM

Simulations
– Assumptions: none on the random variables (discrete, continuous, dependence…).
– Drawbacks: computation costs (which depend on the probability level).

FORM/SORM
– Assumptions: continuous random variables; continuous limit state function.
– Drawbacks: no error estimate on the result; the optimization must find the global minimum.
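The trade-off in this comparison can be seen on a toy case: crude Monte Carlo needs many runs at low probability levels, but its statistical error is known. A stdlib sketch on an illustrative linear limit state G(u) = 3 − u1 − u2 in standard normal space, whose exact failure probability is Φ(−3/√2) ≈ 0.017:

```python
import math
import random

random.seed(5)

def g(u1, u2):
    # illustrative linear limit state: failure when G(u) <= 0
    return 3.0 - u1 - u2

N = 200_000
fails = sum(1 for _ in range(N)
            if g(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) <= 0.0)
p_hat = fails / N
se = math.sqrt(p_hat * (1.0 - p_hat) / N)   # known binomial standard error
print(p_hat, se)   # p_hat near 0.017, with a quantified statistical error
```

The relative error grows roughly like 1/√(N·P_F), so probabilities of 1e-6 need millions of runs, whereas FORM/SORM would cost only the search for the design point, with no error estimate in return.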