Attachment 0-24
Ares(2019)7922089 - 27/12/2019
Dissemination level
PU: Public X
PP: Restricted to other programme participants
RE: Restricted to a group specified by the consortium
CO: Confidential, only for members of the consortium
Disclaimer:
This document’s contents are not intended to replace consultation of any applicable legal
sources or the necessary advice of a legal expert, where appropriate. All information in
this document is provided “as is” and no guarantee or warranty is given that the information
is fit for any particular purpose. The user, therefore, uses the information at its
sole risk and liability. For the avoidance of all doubts, the European Commission has no
liability in respect of this document, which merely represents the authors’ view.
TABLE OF CONTENTS
Versioning and contribution history
Disclaimer
About COMPOSELECTOR
Task and Deliverable descriptions from the project proposal
Compliance with the work-programme NMBP-23-2016
Executive Summary
Introduction
Terminology related to Modelling and Simulation
Modelling Strategy Selection
    Workflow Construction
    Workflow Selection
Technical KPIs for Model Selection
    Model Robustness
    Sensitivity
    Model Complexity
    Model Uncertainty
Model selection using graph theory
Business KPIs for model selection process
Modelling workflow selection with DMN
    The advantages of modelling business decisions with DMN
    A general operative example in material modelling
    WORKFLOW SELECTION IN END USER’S APPLICATIONS
Conclusions
Figure 1: Top-level architecture of the BDSS platform used in the COMPOSELECTOR project and its
components (Business, Material, and Simulation Layers)
An overview of the COMPOSELECTOR BDSS platform is presented in Figure 1. The Business Layer
(a BPMN-based tool) interacts with the Material Layer (database and workflow manager system). The
Simulation Layer (MuPIF) provides an infrastructure for defining and executing distributed
simulation workflows, consisting of several linked/coupled models designed for defined Key
Performance Indicators (KPIs). The individual models are connected to the platform by
implementing standardized APIs, allowing for model steering, data/metadata exchange, distributed
and remote computation, monitoring, etc. The metadata structure is defined by a schema where
the metadata can be attached to any component and validated against this schema. Since the
metadata are encapsulated in the individual components, they can be passed together with the data
in one consistent package. This novel development is also key for the management of the metadata
linked to modelling and simulation tasks.
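As a minimal illustration of this metadata mechanism, the sketch below validates component metadata against a simple schema before the component is passed on. The schema fields and the helper function are hypothetical stand-ins, not the actual MuPIF API.

```python
# Toy schema: each entry maps a required metadata key to its expected type.
# The field names below are illustrative assumptions only.
COMPONENT_SCHEMA = {
    "Name": str,
    "ID": str,
    "Execution_time_s": float,
}

def validate_metadata(metadata: dict, schema: dict) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    for key, expected_type in schema.items():
        if key not in metadata:
            errors.append(f"missing key: {key}")
        elif not isinstance(metadata[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}, "
                          f"got {type(metadata[key]).__name__}")
    return errors

# Metadata travels together with the data in one consistent package (here, a dict).
component = {"Name": "thermal model", "ID": "M1", "Execution_time_s": 12.5}
print(validate_metadata(component, COMPONENT_SCHEMA))  # [] -> valid
```

Because each component carries and validates its own metadata, the platform can check consistency at every handover point in a distributed workflow.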
1 https://doi.org/10.1016/j.compstruct.2018.06.121
Executive Summary
In this deliverable, we describe the developed approach and tool for model selection. The idea
proposed in COMPOSELECTOR is to use graph theory and the Decision Model and Notation (DMN)
standard to implement practical model selection in the BDSS. Indeed, using DMN, decisions can
be represented in a business process just as a sequence of logical steps, which guide the execution
based on KPIs and input values.
Introduction
Selecting the appropriate model is of key importance for a reliable decision process. A good model
should capture all necessary effects while containing as few input parameters as possible. As a
matter of fact, a too simplistic model may fail to capture much of the useful information available,
while an overly complex model may describe many irrelevant details. In addition, the large number
of existing material models makes it difficult to select the most relevant one: which electronic,
atomistic, or mesoscopic models are available, and which model is appropriate?
In particular, the following questions should be analysed and answered: i) matching modelling
strategies (the predictive capability of a model) with business decision needs; ii) translation of process
modelling value (cost/benefits), cost, and triggers (part complexity/size/materials/process); iii)
evaluation of optimization strategies for trade-offs in decisions and of the effect of model resolution
(uncertainty) on business decisions, i.e., selecting a model for decision making.
Figure 4: Business decisions should influence how the modelling is done, based for example on cost,
time, and fidelity (the BDSS identifies potentially needed tasks and decides whether additional
tasks are to be analysed).
Workflow Construction
Typically, a modelling workflow is composed of a number of predefined models. Figure 4 illustrates
the material modelling strategy. At the top level, the properties acting as key performance indicators

2 Model means both the PE and the MR, which is related to the specific application and materials choice. Furthermore,
the choice of the PE is related to the required granularity level, i.e., whether one needs to describe the system on
the electronic, atomistic, particle, or continuum level.
Workflow Selection
Each workflow should define its performance indicators (as workflow metadata). The task
workflows should have some general common inputs and outputs defined. The individual workflows will
depend on additional parameters and properties, which can be obtained by simulation or from a
database; this logic, however, must be implemented in the specific workflow. This way the BDSS is able
to freely combine different alternatives without being concerned with I/O dependencies. In parallel, a
business decision mechanism based on a balance between investment (complexity and number of
inputs) and return will be implemented to decide on the type of models (in a chain), namely
electronic, atomistic, mesoscopic, and/or continuum.
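The idea of interchangeable workflows with common inputs/outputs can be sketched as follows. The metadata fields, workflow names, and the interchangeability criterion are illustrative assumptions, not the project's actual data model.

```python
# Hypothetical sketch: each workflow declares its performance indicators and
# its common inputs/outputs as metadata, so the BDSS can substitute one
# workflow for another without inspecting internal I/O dependencies.
from dataclasses import dataclass, field

@dataclass
class WorkflowMeta:
    name: str
    scale: str                       # e.g. "atomistic", "continuum"
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    cpu_hours: float = 0.0           # a performance indicator

def interchangeable(a: WorkflowMeta, b: WorkflowMeta) -> bool:
    """Two workflows are interchangeable for the BDSS if they expose the
    same common inputs and outputs, regardless of internal parameters."""
    return a.inputs == b.inputs and a.outputs == b.outputs

wf1 = WorkflowMeta("MD stiffness", "atomistic", {"geometry"}, {"E_modulus"}, 120.0)
wf2 = WorkflowMeta("FE stiffness", "continuum", {"geometry"}, {"E_modulus"}, 2.0)
print(interchangeable(wf1, wf2))  # True: the BDSS may pick either alternative
```

The business decision mechanism could then compare interchangeable candidates on indicators such as `cpu_hours` to balance investment against return.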
Figure 5: Model selection strategy across levels 1 to n (M: model, P: property, DB: database).
Each edge in the graph represents a coupling between two partial models at two different scales.
A parameter based on the model complexity and the number of input parameters is obtained and
used for hierarchical model definition and selection.
Figure 6: Technical KPIs for model selection: Robustness, Sensitivity, Complexity, and Uncertainty
(fed, through the Translator, into DMN decision tables).
Model Robustness
Robustness is defined as the degree to which a model is physics-based. This represents the capability
of the model to predict the physical properties and represent the physical reality. The following
attributes could be used to qualitatively “estimate” the robustness of a model:
§ The degree to which the model is calibrated
§ The number of modelling assumptions that can be relaxed without overturning the
conclusions (outputs)
§ Documented verification and experimental validation of the model: the physics-fidelity
basis with which the model is extrapolated from its validation and calibration database
to the conditions of the application of interest.
The Translator and Modeller then have to refer to the available literature, databases, and their
own experience to give an estimate of the model robustness.
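One way such qualitative estimates could be combined is sketched below. The attribute names, the [0, 1] rating scale, and the plain averaging are assumptions made for illustration; the deliverable only lists the attributes and leaves the aggregation to expert judgement.

```python
# Hedged sketch: aggregate the qualitative robustness attributes listed
# above into a single score. The rating scale and averaging are assumptions.
def robustness_score(calibration: float, relaxable_assumptions: float,
                     validation: float) -> float:
    """Each attribute is rated in [0, 1] by the Translator/Modeller;
    the overall score is their plain average."""
    attrs = (calibration, relaxable_assumptions, validation)
    if not all(0.0 <= a <= 1.0 for a in attrs):
        raise ValueError("attribute ratings must lie in [0, 1]")
    return sum(attrs) / len(attrs)

print(robustness_score(0.8, 0.5, 0.9))  # ~0.733
```

A weighted average (e.g. emphasizing documented validation) would be an equally reasonable choice; the point is only that the expert ratings become a comparable number per model.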
Sensitivity
Based on sensitivity analysis (estimation of sensitivity indices), it is possible to determine the
most important uncertain input parameters and their correlations, and to quantify their impact on
the predicted output (Figure 7).
The following information for model selection could be obtained from sensitivity analysis:
§ Identification of trends in the model response
§ Qualitative or quantitative ranking of inputs
§ Discrimination of the importance among different inputs
§ Grouping of inputs that are of comparable importance
§ Identification of inputs that are not important
§ Identification of critical limits
§ Identification of inputs and ranges that produce high exposure or risk
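The ranking and discrimination steps above can be sketched with a simple one-at-a-time (OAT) local sensitivity index. The toy model and the relative step size are illustrative assumptions; a real analysis might instead use variance-based (e.g. Sobol) indices.

```python
# Illustrative one-at-a-time (OAT) sensitivity sketch: perturb each input
# in turn and measure the normalized change of the model output.
def model(x):
    # Toy response: output depends strongly on x[0], weakly on x[2].
    return 4.0 * x[0] + 1.0 * x[1] + 0.1 * x[2]

def oat_sensitivity(f, x0, rel_step=0.01):
    """Return a normalized local sensitivity index per input:
    |f(x_perturbed) - f(x0)| / (|f(x0)| * rel_step)."""
    f0 = f(x0)
    indices = []
    for i, xi in enumerate(x0):
        x = list(x0)
        x[i] = xi * (1.0 + rel_step)   # relative perturbation of input i
        indices.append(abs(f(x) - f0) / (abs(f0) * rel_step))
    return indices

s = oat_sensitivity(model, [1.0, 1.0, 1.0])
# Ranking inputs by their index identifies the unimportant ones (here x[2]).
print(s)
```

Such a ranking directly supports the grouping of inputs of comparable importance and the identification of inputs that can be frozen in a simpler model.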
Model Complexity
The complexity of a model may combine both model and software implementation complexities
(Figure 8 and Figure 9). The following attributes define the complexity of a model:
§ Structure and the level of detail in the model workflow.
§ The number of parameters and state variables
§ The sophistication of the mathematical relationships that describe each process
§ Number of processes in the model
Figure 8: Model complexity and data needs (model complexity measure vs. software complexity
measure). Less complex models are used for trend analysis, whereas more complex models would be
used for very specific analyses.
Figure 9: Definition (selection) of model granularity/complexity. M1: number of individual
models; M2: number of connections within the workflow; M3: number of attributes (parameters);
M4: number of manually changed attributes.
Figure 10: Computational complexity determines the time required to complete a simulation
run. Attributes for computational complexity could include i) the algorithmic complexity of the
model and ii) the total number of lines in all the program blocks.
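The attributes M1 to M4 from Figure 9 could be combined into a single complexity measure as sketched below. The weights and the weighted-sum form are assumptions for illustration; the deliverable only lists the attributes, not how to aggregate them.

```python
# Sketch: combine the granularity/complexity attributes of Figure 9 into one
# number. Weights are illustrative; manually changed attributes (M4) are
# weighted highest here, on the assumption that manual steps are costly.
def complexity(m1, m2, m3, m4, weights=(1.0, 1.0, 0.5, 2.0)):
    """m1: number of individual models, m2: connections in the workflow,
    m3: number of attributes (parameters), m4: manually changed attributes."""
    counts = (m1, m2, m3, m4)
    return sum(w * c for w, c in zip(weights, counts))

# A two-model coupled workflow with 10 parameters, 2 of them set manually:
print(complexity(m1=2, m2=1, m3=10, m4=2))  # 2 + 1 + 5 + 4 = 12.0
```

A lower score would favour a model for quick trend analysis; a higher score flags a model suited only to very specific, resource-intensive studies.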
Model Uncertainty
Uncertainties are associated with the selection of the models and with the input parameters (operational
and geometrical) of the physical model considered for the analysis. For example, in computational fluid
modelling, the physical properties of the fluid or the boundary conditions may not be known exactly.
Geometrical uncertainties mainly arise from manufacturing tolerances and imprecise geometrical
definitions in the model. The uncertainties associated with the system under consideration can have
different origins and can be classified as epistemic or aleatory.

Epistemic uncertainties are related to the properties of the model considered and are due to a lack
of information in some phase of the modelling process; they can be reduced as updated information
is acquired. These uncertainties may arise from mathematical assumptions or simplifications used in
deriving the governing equations. Parameters with epistemic uncertainty are not associated with any
probability density function and are not well characterized by stochastic approaches. Typically, they
are specified as an interval, and in the uncertainty quantification of epistemic uncertainty the output
result is also specified as an interval. Epistemic uncertainties can be quantified using sampling-based
methods (such as random sampling, Latin hypercube sampling, interval analysis, fuzzy set theory, etc.).

Aleatory uncertainties are related to the properties of the physical system or the environment under
consideration. Material properties, operating conditions, geometrical variability, initial conditions,
and boundary conditions are some examples of aleatory uncertainties. Uncertain parameters of
aleatory type are always associated with a particular probability distribution function. Using
appropriate uncertainty quantification (UQ) methods, the probability distribution function of the
output quantity can be determined (Figure 11).
Figure 11: Non-deterministic computational framework.
A distinct set of methods is required to quantify each form of uncertainty. Epistemic uncertainties
are considered reducible: they can be reduced by research advancement. Aleatory uncertainties,
however, are unavoidable.
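The propagation of aleatory uncertainty described above can be sketched with a plain Monte Carlo loop: inputs are drawn from probability distributions and the output distribution is summarized. The toy model, distributions, and parameter values are illustrative assumptions only.

```python
# Minimal Monte Carlo sketch of aleatory uncertainty propagation: input
# parameters are drawn from probability distributions, and the output
# distribution is summarized by its mean and standard deviation.
import random
import statistics

def model(youngs_modulus, load):
    # Toy response: deflection-like quantity proportional to load / stiffness.
    return load / youngs_modulus

random.seed(42)  # reproducible sampling
samples = []
for _ in range(10_000):
    E = random.gauss(200.0, 10.0)   # material property: aleatory uncertainty
    F = random.gauss(5.0, 0.5)      # operating condition: aleatory uncertainty
    samples.append(model(E, F))

print(statistics.mean(samples), statistics.stdev(samples))
```

For epistemic uncertainty the same loop would instead sweep interval bounds (or Latin hypercube points) and report an output interval rather than a distribution.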
Figure 12: Workflow with ModeFrontier: all inputs and outputs are connected via APIs; design
variables are defined as inputs; objectives and constraints are connected to outputs; the optimizer
is selected; the optimization process is run.
Figure 13: A sketch of a graph
• Based on the sensitivity measures, the model robustness and model complexity, we provide
an approach for the model selection.
• The idea is to consider various partial models as nodes in a graph and a combination of partial
models as a path in the graph.
• Each edge in the graph represents a coupling between two partial models at different scales
and/or processing options within the bottom-up approach indicated by the unidirectional
arrows.
• The chain of models will be allowed to transport these uncertainties through the scales, using
robust uncertainty quantification or probabilistic analysis by sampling methods.
• The final model quality is a combination of the qualities and sensitivities of the partial models.
It is important to notice that the data of the problem are only known to a certain degree. In the
best case, we have some knowledge of the uncertainties that we need to take into account (fine-scale
features of the structure, material parameters).
• The uncertainties are the key drivers of model adaptivity. Indeed, the accuracy that we require
in our chain of models does not need to exceed the accuracy of the data. In other words, we can
afford to choose less accurate models that are faster to evaluate, provided that the resulting
error is of the order of the global uncertainties.
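The graph idea above can be sketched concretely: partial models are nodes, couplings are edges weighted by a cost combining complexity and uncertainty, and a model chain is a path; a shortest-path search then picks the cheapest chain. The node names, edge weights, and the choice of Dijkstra's algorithm are illustrative assumptions, not the project's actual tool.

```python
# Sketch: partial models as graph nodes, couplings as weighted edges,
# a model chain as a path; Dijkstra's algorithm finds the cheapest chain.
import heapq

def cheapest_chain(graph, start, goal):
    """graph: {node: [(neighbour, edge_cost), ...]} with non-negative costs."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Bottom-up chain options with made-up coupling costs:
graph = {
    "electronic": [("atomistic", 5.0)],
    "atomistic": [("mesoscopic", 2.0), ("continuum", 6.0)],
    "mesoscopic": [("continuum", 1.0)],
}
print(cheapest_chain(graph, "electronic", "continuum"))
# (8.0, ['electronic', 'atomistic', 'mesoscopic', 'continuum'])
```

If edge weights also encoded the uncertainty contributed by each coupling, the same search would trade model accuracy against cost exactly as described in the bullets above.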
Figure 15: Business attributes and KPIs for model selection (cost, resources, expertise, utility).
The utility of a model is defined considering the problem at hand, i.e., the use case and the
attributes to be estimated; it is determined according to the information we have, reflecting the
specific objective. The decision might include theoretical and computational considerations. Notice
that each application case is translated into a workflow consisting of materials modelling and
materials data components using MODA (Modelling Data).
Figure 16: Decision Model and Notation (DMN) process
Figure 17: A process where decision logic is modeled with process flow elements
Figure 18 shows an example where the decision logic is separated from the process logic. The
process starts with a task that obtains the required information, as in the previous example. The
following node is a decision logic node, which determines a single output value from a given set
of input values. The whole decision logic is implemented inside this node, hiding the supporting
decision logic internally.
Figure 18: A process where decision logic is implemented in a DMN decision task
As mentioned before, one of the elements from the DMN standard used to represent decision logic
is the so-called decision table. A decision table is a two-dimensional tabular structure in which inputs
and outputs are represented in columns with each row corresponding to a single decision rule. Each
cell has a Boolean condition which evaluates to true or false. If all input conditions are satisfied, the
rule is activated and the output value is selected from the corresponding column.
In our simple example, there are four input variables which guide the decision process: availability
of a local license (LocalLicense), availability of a cluster license (ClusterLicense), availability of
enough CPU resources on the local computer (LocalResources), and availability of enough CPU
resources on the cluster (ClusterResources), all of them of type Boolean. The decision logic is
expected to produce two output values: an indication of whether the simulation can be executed
(canBeExecuted) and an indication of where it should run.
Figure 19 presents the DMN table that can be used to model the decision logic described before.
There are four columns, one for each of the input variables, and two columns for the output
variables. Each row represents a decision rule. The table hit policy is “First”, identified by the
letter “F” in the top-left corner. With this policy, rules are evaluated in the order listed in the
table and the first one that matches is selected, avoiding possible overlap between the rules. The
first rule establishes that if a local license is available and local CPU resources are available,
execution takes place locally. Only if the first rule does not match is the second rule considered:
it establishes that execution will take place on the cluster if a cluster license is available and
enough CPU resources are available there. In any other case, the simulation cannot be executed.

The solution with the decision logic implemented as a decision task (Figure 18) is preferred for a
number of reasons. In particular, decision logic implemented as a sequence of gateways (as in
Figure 17) complicates the process unnecessarily. Even worse, a small change in the decision logic
then corresponds to a structural change in the process. With the decision logic extracted from the
process and fully detailed in a separate model, the resulting BPMN model is clearer, simpler, and
more maintainable. DMN can break decision logic down into maintainable units and reusable modules,
with better support for change than other approaches. In fact, keeping decision logic separate from
process logic provides better consistency across the whole enterprise, and also makes the decision
logic auditable and traceable.
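The license/resource decision table described above can be sketched in code, evaluated with the "First" hit policy: rules are tried in order and the first whose input conditions all hold fires. The second output name (`runLocation`) is a hypothetical stand-in, since the deliverable leaves it unnamed.

```python
# Sketch of the DMN decision table from the example, with "First" hit policy.
# Rule layout: four Boolean input conditions, then two outputs.
# None in a condition means "don't care" (the dash in a DMN table).
RULES = [
    # (LocalLicense, ClusterLicense, LocalResources, ClusterResources,
    #  canBeExecuted, runLocation)
    (True,  None,  True,  None,  True,  "local"),
    (None,  True,  None,  True,  True,  "cluster"),
    (None,  None,  None,  None,  False, None),     # default: cannot execute
]

def decide(local_lic, cluster_lic, local_res, cluster_res):
    inputs = (local_lic, cluster_lic, local_res, cluster_res)
    for rule in RULES:
        conds, outputs = rule[:4], rule[4:]
        if all(c is None or c == v for c, v in zip(conds, inputs)):
            return outputs         # "First" hit policy: stop at first match
    return None

print(decide(True, True, False, True))    # (True, 'cluster')
print(decide(False, True, False, False))  # (False, None)
```

Because the rules live in a data structure rather than in gateway logic, changing the decision (e.g. adding a GPU-cluster rule) means editing one row, not restructuring the process, which is exactly the maintainability argument made above.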
Figure 21: Business process for model selection between atomistic and mesoscopic
Clearly, the process could have been more complex, involving different human actors with
different levels of responsibility (technical or business) and supporting an even more complex
decision path.
Figure 22: Workflow selection by using a DMN decision table
The selection of the workflow can be implemented with a DMN table like the one shown in Figure
23, a decision table with seven inputs and a single output.
Figure 24: The enhanced DMN table with default output.
Figure 25 presents a BPMN process similar to the one depicted in Figure 22, where a user
task has been added. An expert user is contacted in case no adequate workflow is available. The
expert then creates a new MuPIF workflow satisfying the requirements, which can be used to
run the requested simulation.
Figure 25: Workflow selection
Note that DMN tables can be used to provide a completely automatic answer if all requirements are
fulfilled, or to start a human interaction in order to provide a solution for an unexpected
situation.
Conclusions
In this deliverable, model selection methodologies and tools have been proposed. A tool based
on graph theory has been developed, and DMN is used for the selection process within the BDSS. DMN plays a
determinant role in analyzing, representing and documenting the decision logic that drives the
model selection aspects of the modeling approach. Moreover, DMN provides an executable
expression language, such that the decision model can be executed, without the need for