
Characterisation of Model Error for Charpy Impact Energy of Heat Treated Steels Using Probabilistic Reasoning and a Gaussian Mixture Model
Mahdi Mahfouf, Yong Y. Yang, and Qian Zhang

Institute for Microstructural and Mechanical Process Engineering: The University of Sheffield
(IMMPETUS)
Department of Automatic Control and Systems Engineering
The University of Sheffield
Mappin Street, Sheffield S1 3JD, UK
Email: y.y.yang@sheffield.ac.uk, m.mahfouf@sheffield.ac.uk

Abstract: Data-driven modelling has gained much momentum recently, with modelling algorithms evolving into more complex structures capable of dealing with highly non-linear, multi-dimensional systems. However, data-driven models are typically obtained under the principle of error minimisation, with the assumption of a normal error distribution. The latter assumption is often not valid in more complex modelling environments, leading to sub-optimal model predictions. In this paper, a new modelling strategy is proposed that exploits the rich information contained in the model error data using a Gaussian mixture model (GMM). The GMM error model provides a probabilistic characterisation of the error distribution, which can then be used in a complementary fashion with the original data model. This combination often produces improvements in prediction performance, as illustrated in the case study relating to the hybrid modelling of the Charpy impact energy of heat-treated steels.

Keywords: Data-driven modelling, hybrid modelling, probabilistic reasoning, Gaussian mixture model, EM
algorithm, Charpy impact energy.

1. INTRODUCTION

Data-driven modelling has gathered pace and gained much popularity due to the rapid growth in computing power and the wide availability of data/information. Data modelling approaches can be classified into two main categories, 'deterministic' and 'stochastic' modelling, depending on the way the system data are perceived. Modelling tools and algorithms have also evolved into more complex structures capable of dealing even with highly non-linear and multi-dimensional input/output mappings. However, many data models are derived under the principle of error minimisation, with the explicit or implicit assumption that the modelling errors follow a normal distribution. The real world to be modelled is often complex, suffering from various random disturbances as well as measurement error scatter. Hence, the assumption of a normal error distribution may not be valid, leading to sub-optimal model predictions.

In this paper, a new modelling strategy is proposed that exploits the rich information contained in the model error data using a Gaussian mixture model (GMM) to provide a probabilistic characterisation of the model error. GMMs have previously been widely applied in speech and image classification, as well as in other similar pattern recognition problems (Huang and Chau 2008, Kinnunen et al. 2009), while their usage for error characterisation has, to the best of our knowledge, not yet been exploited. The GMM error data model approach is best suited to complement the original data model in a hybrid deterministic/stochastic modelling fashion. This hybrid modelling scheme often leads to enhanced model prediction performance, in terms of both accuracy and variation. The proposed GMM error modelling framework is applied in this research to a large-scale Charpy impact energy data set obtained from a real industrial environment (the steel industry), and the preliminary results presented here are very promising.

In view of the rich information that may be contained in the error data obtained from a data-driven model, this paper considers probabilistic modelling of the resulting model errors (obtained via the deterministic model together with the training data used for modelling) under the GMM-based framework. As this type of modelling philosophy is relatively new, the concepts, issues and 'rationales' relating to prediction error data modelling using GMM, as well as how such a GMM error model can be integrated into the data-driven model, will be investigated. Section 2 introduces the data model of the Charpy impact energy data using a Bayesian neural network (BNN), with an analysis of the modelling results affected by the invalid assumption of a Gaussian error distribution. Section 3 outlines the motivation for the GMM error model, and describes the cases where such a

IFACMMM 2009. Viña del Mar, Chile, 14 -16 October 2009.


probabilistic modelling approach will be most beneficial. Key techniques and algorithms for the GMM error modelling paradigm, together with general guidance on how to implement such reasoning, will also be outlined in that section. In Section 4, a case study of Charpy impact energy modelling using the GMM error modelling paradigm is presented. The resulting GMM error model is then exploited to improve the BNN data model performance in terms of prediction compensation and confidence bands. Finally, concluding remarks and further work are outlined in Section 5.

2. BNN MODELLING FOR CHARPY IMPACT ENERGY DATA

The achievement of required levels of toughness in steel products is of paramount importance for many applications (such as ship-building and nuclear pressure vessel construction), and this has attracted many researchers to tackling the toughness of metals via Charpy impact test modelling. The modelling work presented here adopts the data-driven modelling approach, i.e. with the model structure and parameters derived from a set of process data related to the Charpy impact energy of heat-treated steels, previously provided by Corus-Tata Steel (UK). The Charpy data set consists of 1661 records, with measurements of the steel chemical compositions, heat treatment specifications, and Charpy impact energy test conditions, as summarised in Table 1.

Table 1. Constituents of the Charpy Impact Data

Chemical Compositions: C, Si, Mn, S, Cr, Mo, Ni, Al, V
Test Parameters and Conditions: Test Depth; Specimen Size; Test Site; Test Temp
Heat Treatment: Hardening Temp; Cooling Medium; Tempering Temp

Several previous studies within the IMMPETUS Research Group were conducted using different modelling frameworks on the Charpy data, including an ensemble neural network model (Tenner 1999), a neuro-fuzzy model (Chen and Linkens 2001), and granular computing based on neuro-fuzzy modelling (Panoutsos and Mahfouf 2005). Charpy impact energy is known to be difficult to model, due to the existence of a large scatter in the measurements and the highly non-uniform and sparse data distribution. While a gradual improvement of model performance has been achieved through more advanced modelling techniques, better modelling of this property is still required. Here, a BNN model is introduced as the 'data model', with the modelling framework adapted from Yang and Linkens (2001). The basics of BNN modelling are summarised as follows:

$$p(w \mid D) = \frac{p(D \mid w)\, p(w)}{p(D)} \qquad (1)$$

where D = {t_1, t_2, ..., t_N} are the output measurements (the targets), U = {u_1, u_2, ..., u_N} are the corresponding inputs, w denotes the weights of the BNN, p(w) is the prior probability density function (PDF), p(D|w) is the likelihood function, p(w|D) is the posterior PDF, and p(D) is a normalising factor.

In order to calculate p(w|D), the prior p(w) and the noise model p(t|w, u) need to be selected first. It is common practice to use Gaussian distributions, as given in (2), such that the solution of (1) is analytically tractable:

$$p(w) = \frac{1}{Z_w(\alpha)} \exp\!\left(-\frac{\alpha}{2}\|w\|^2\right); \quad p(t \mid w, u) = \frac{1}{Z_D(\beta)} \exp\!\left(-\frac{\beta}{2}\|y(u, w) - t\|^2\right) \qquad (2)$$

where α and β are hyper-parameters controlling the variances of the prior weights and of the noise (error), respectively, and Z_w(α) and Z_D(β) are scaling factors. Under the above Gaussian distribution assumptions, the posterior PDF of w can be simplified as follows:

$$p(w \mid D) = \frac{p(D \mid w)\, p(w)}{p(D)} = \frac{1}{Z_S} \exp(-S(w)); \quad S(w) = \frac{\beta}{2}\sum_{n=1}^{N}\{y(u_n; w) - t_n\}^2 + \frac{\alpha}{2}\sum_{i=1}^{W} w_i^2 \qquad (3)$$

where Z_S is a scaling factor and W is the number of weight coefficients in w. The PDF of the target can be modelled by the following integration:

$$p(t \mid u, D) = \int p(t \mid u, w)\, p(w \mid D)\, dw \qquad (4)$$

Although both the noise and the posterior PDF of w have been simplified as normal distributions, (4) cannot be solved analytically. Bishop (1995) used Taylor series approximations, 2nd order for S(w) and 1st order for the target, around the most probable weights w_MP; the posterior PDF of the target then reduces to a Gaussian distribution as follows:

$$p(t \mid u, D) = \frac{1}{(2\pi\sigma_t^2)^{1/2}} \exp\!\left(-\frac{(t - y_{MP})^2}{2\sigma_t^2}\right); \quad \sigma_t^2 = \frac{1}{\beta} + g^T A^{-1} g; \quad A = \nabla\nabla S_{MP} = \beta\,\nabla\nabla E_D^{MP} + \alpha I \qquad (5)$$

where A is the Hessian matrix of the total error function and g is the gradient of the output, both evaluated at w = w_MP. w_MP can be obtained via the maximum likelihood (ML) solution of (3), given as:

$$w_{MP} = \arg\min_{w} \left\{ \frac{\beta}{2}\sum_{n=1}^{N}\{y(u_n; w) - t_n\}^2 + \frac{\alpha}{2}\sum_{i=1}^{W} w_i^2 \right\} \qquad (6)$$

This minimisation is similar to standard multilayer feed-forward neural network training, with the exception of a regularisation term. Hence, a back-propagation algorithm can be employed to obtain the solution of (6). In this paper, the double-loop back-propagation training algorithm (Yang et al. 2003) is adapted to train the BNN.
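As a sketch of the regularised minimisation in (6), the toy numpy example below descends S(w) = (β/2)·SSE + (α/2)·||w||² for a one-hidden-layer network. It uses naive finite-difference gradient descent on synthetic data, not the double-loop back-propagation algorithm of Yang et al. (2003); the network size, data and all parameter values are illustrative assumptions only.

```python
import numpy as np

# Tiny 1-input, 3-hidden-unit, 1-output network (illustrative sizes)
N_IN, N_HID = 1, 3
N_W = N_IN * N_HID + N_HID + N_HID + 1  # total number of weights W

def unpack(w):
    """Split the flat weight vector into layer parameters."""
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN * N_HID:N_IN * N_HID + N_HID]
    W2 = w[N_IN * N_HID + N_HID:N_IN * N_HID + 2 * N_HID]
    b2 = w[-1]
    return W1, b1, W2, b2

def predict(w, u):
    """Feed-forward pass y(u; w) with a tanh hidden layer."""
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(u @ W1 + b1) @ W2 + b2

def S(w, u, t, alpha, beta):
    """Regularised objective of Eq. (6)."""
    err = predict(w, u) - t
    return 0.5 * beta * err @ err + 0.5 * alpha * w @ w

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, size=(50, 1))                   # toy inputs
t = np.sin(2.0 * u[:, 0]) + 0.1 * rng.standard_normal(50)  # toy noisy targets

w = 0.1 * rng.standard_normal(N_W)
alpha, beta, lr, eps = 0.01, 10.0, 1e-3, 1e-6
s0 = S(w, u, t, alpha, beta)

for _ in range(300):  # plain gradient descent; gradient by central differences
    grad = np.array([(S(w + eps * e, u, t, alpha, beta)
                      - S(w - eps * e, u, t, alpha, beta)) / (2.0 * eps)
                     for e in np.eye(N_W)])
    w -= lr * grad

s1 = S(w, u, t, alpha, beta)
```

The α-term penalises large weights exactly as the prior in (2) does, so this descent finds an approximation to w_MP for the toy problem.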

The standard deviation σt in (5) is estimated by an evidence estimation algorithm (Nabney 2002).

Fig. 1. Charpy impact energy BNN predictions: (a) most probable prediction for the testing data; (b) 95% error bands from the BNN model.

The most probable prediction y_MP, together with the error standard deviation σt, would be more than sufficient for the purpose of output prediction. From (5), the target follows a normal distribution with mean y_MP and standard deviation σt. Hence, the resulting BNN model can provide both the target prediction and its standard deviation. However, one needs to be aware of the various assumptions and simplifications made in deriving (5).

The Charpy data were divided into three sets (training, validation and testing) with a data ratio of 60:25:15, respectively. The key BNN model parameters were set as follows: Nhid = 3, OutCycle = 100, epoch = 3, InnerCycle = 3 (for evidence), with the Scaled Conjugate Gradient training algorithm. Fig. 1 shows the BNN prediction compared with the Charpy impact energy measurements, with a prediction performance of RMSE (T, V, Ts) = [17.31, 20.77, 19.49] and an average prediction standard deviation of 120 J. From Fig. 1(b), it is apparent that the 95% confidence bands are too conservative (and obviously useless), because the assumption of a normal error distribution during the BNN modelling is not valid here. It is well known that Charpy impact energy measurements exhibit different scatter depending on the impact testing temperature, due to the transition between brittle and ductile cleavage mechanisms (Moskovic and Flewitt 1997). The very sparse and clustered sample distribution of the Charpy data adds to the severity of the non-Gaussian behaviour.

3. ERROR DATA MODELLING USING GMM

If there are scatters in the data and/or different random disturbance sources, then the deterministic model performance is likely to be unsatisfactory, and as a result the prediction errors will show non-Gaussian behaviour. These facts formed the basis for seeking to improve model predictions through the exploitation of the error data via a GMM. Once the GMM error data model is established, probabilistic reasoning can be applied to the joint error PDF to realise inference on the conditional PDF, which in turn can be used to evaluate the conditional mean and standard deviation. These conditional statistics can then be combined with the original data model to provide much tighter confidence bands on the output prediction, as well as the necessary compensation for prediction bias.

The basic concept of a GMM is to use multiple Gaussian components, rather than a single one, to approximate a complex non-Gaussian PDF of the error data. The PDF of a GMM is given as follows:

$$P(X) = \sum_{k=1}^{K} \pi_k N(\mu_k, \Sigma_k); \quad N(\mu_k, \Sigma_k) = \frac{1}{(2\pi)^{m/2} |\Sigma_k|^{1/2}} \, e^{-\frac{1}{2}(X - \mu_k)^T \Sigma_k^{-1} (X - \mu_k)} \qquad (7)$$

where X is the m-dimensional error data, K is the total number of Gaussian components in the GMM, µ_k and Σ_k are the mean and the covariance matrix of the kth component, respectively, and π_k is the mixing coefficient representing the probability of x being generated from component k. It has been demonstrated that a GMM is a universal approximator for any continuous PDF, provided that a sufficiently large number of Gaussian components is used.

From (7), it is apparent that if the mixing coefficients π_k and the parameters (µ_k, Σ_k) of all the components are known, the joint PDF of the GMM is fully known. Let us introduce the GMM parameter set α as:

$$\alpha_i = \{\pi_i, \mu_i, \Sigma_i\},\ i = 1, 2, \ldots, K; \quad \alpha = \{\alpha_1, \alpha_2, \ldots, \alpha_K\} \qquad (8)$$

The GMM modelling of the error data then reduces to first choosing an appropriate value for K and then finding the optimal values for α. Suppose that the error data X for the GMM are given as follows:

$$X = [x(1), x(2), \ldots, x(N)]^T; \quad x(n) \in R^m,\ n = 1, 2, \ldots, N \qquad (9)$$

where N is the total number of error data points.
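The mixture density (7) can be evaluated directly once α is known; the following numpy/scipy sketch does so with toy, made-up parameters (two 2-D components), purely for illustration:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_pdf(x, pis, mus, sigmas):
    """Mixture density of Eq. (7): sum_k pi_k * N(x | mu_k, Sigma_k)."""
    return sum(pi * multivariate_normal.pdf(x, mean=mu, cov=sig)
               for pi, mu, sig in zip(pis, mus, sigmas))

# Toy 2-component, 2-D mixture (parameters are illustrative only)
pis = [0.6, 0.4]
mus = [np.zeros(2), np.array([3.0, 0.0])]
sigmas = [np.eye(2), 0.5 * np.eye(2)]

p0 = gmm_pdf(np.zeros(2), pis, mus, sigmas)  # density at the origin
```

At the origin the first component dominates, since the second component's mean lies three standard units away.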
In order to facilitate the derivation of such an algorithm, a latent variable z_n (corresponding to x(n)) with a 1-of-K representation is introduced as follows:

$$z_n = [z_{n1}, z_{n2}, \ldots, z_{nK}]^T, \quad z_{nk} \in \{0, 1\}, \quad \sum_{k=1}^{K} z_{nk} = 1 \qquad (10)$$

Apparently, the K-dimensional vector z_n has only one element with the value 1, while all the other elements are zero. The interpretation of this in the GMM is that the data point x(n) is linked to the kth component if z_{nk} = 1. If the latent variables z_n (n = 1, 2, ..., N) were known for the associated error data X, then the GMM parameter set α could be obtained straightforwardly by calculating the location and spread parameters using only the data belonging to each GMM component, while the mixing coefficients could be calculated from the proportion of the data belonging to the kth component:

$$\mu_k = \frac{\sum_{n=1}^{N} z_{nk}\, x_n}{\sum_{n=1}^{N} z_{nk}}; \quad \sigma_k^2 = \frac{\sum_{n=1}^{N} z_{nk}\, (x_n - \mu_k)^2}{\sum_{n=1}^{N} z_{nk}}; \quad \pi_k = \frac{1}{N}\sum_{n=1}^{N} z_{nk}, \quad k = 1, 2, \ldots, K \qquad (11)$$

A difficulty arises in that, in a real GMM modelling scenario, z_n (n = 1, 2, ..., N) are not known, and as a result (11) cannot be readily used. Also, more than one component may be responsible for producing the data point x(n), due to the probabilistic nature of the GMM. From (7), the likelihood and log-likelihood functions are given as follows:

$$P(X \mid \pi, \mu, \Sigma) = \prod_{n=1}^{N} \sum_{k=1}^{K} \pi_k N(x_n \mid \mu_k, \Sigma_k); \quad L(\pi, \mu, \Sigma \mid X) = \sum_{n=1}^{N} \log\Big\{ \sum_{k=1}^{K} \pi_k N(x_n \mid \mu_k, \Sigma_k) \Big\} \qquad (12)$$

where L(π, µ, Σ | X) is the log-likelihood function. The ML solution for α can be obtained by setting the derivative of L(π, µ, Σ | X) with respect to each parameter (π_k, µ_k, Σ_k) to zero. Defining the responsibilities

$$\gamma(z_{nk}) = p(z_{nk} = 1 \mid x(n)) = \frac{\pi_k N(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j N(x_n \mid \mu_j, \Sigma_j)}; \quad N_k = \sum_{n=1}^{N} \gamma(z_{nk}) \qquad (13)$$

the ML solution for µ_k, after some manipulation, is given as follows:

$$\frac{\partial L(\pi, \mu, \Sigma \mid X)}{\partial \mu_k} = 0 \;\Rightarrow\; \mu_k = \frac{\sum_{n=1}^{N} \gamma(z_{nk})\, x_n}{\sum_{n=1}^{N} \gamma(z_{nk})} = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk})\, x_n, \quad k = 1, 2, \ldots, K \qquad (14)$$

where N_k can be interpreted as the effective number of data points assigned to the kth component, and γ(z_{nk}) is the probability that the error data point x(n) comes from component k. Similarly, the ML solutions for π_k and Σ_k take the following form:

$$\pi_k = \frac{N_k}{N}; \quad \Sigma_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma(z_{nk})\, (x_n - \mu_k)(x_n - \mu_k)^T \qquad (15)$$

Equations (13)-(15) are not closed-form analytical solutions, since γ(z_{nk}) depends on the GMM parameter set α; hence they cannot be used directly to find the ML solution for α. However, such a solution can be found via an iterative search algorithm that computes γ(z_{nk}) and α separately, starting from some initial values α(0). When computing γ(z_{nk}), α is assumed fixed (at its previously calculated values); the newly computed values of γ(z_{nk}) are then used to update α. This process is repeated until either the algorithm converges or the maximum iteration number t_Max has been reached. The algorithm based on the above principle is known as the Expectation-Maximisation (EM) algorithm (Bishop 2006), and it has been widely used to solve various PDF modelling problems involving latent variables. The flowchart of the EM algorithm used in this paper is shown in Fig. 2, where the initial GMM parameters α(0) are fixed using k-means clustering in order to speed up convergence.

Fig. 2. Flowchart of the EM algorithm for GMM: initialise α(0) = {α_i = [π_i, µ_i, Σ_i], i = 1, 2, ..., K} by k-means clustering (t = 0); E-step: compute γ(z_{nk}) using (13), given α(t); M-step: estimate α(t+1) by maximising the log-likelihood L(π, µ, Σ | X) using (14)-(15); repeat until t > t_Max or the EM algorithm has converged.

4. APPLICATION OF GMM ERROR MODEL TO CHARPY IMPACT ENERGY MODELLING

The Charpy impact energy data provide a good case for GMM error data modelling, given the large scatter in the impact energy, the various Charpy impact measurement errors, and the highly sparse sampling distribution. Analyses of the Charpy impact error data resulting from the BNN model confirm that the error residuals do not follow a uniform distribution when broken down into different input regions.
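The E- and M-steps of Eqs. (13)-(15), with a crude k-means initialisation in the spirit of Fig. 2, can be sketched in a few lines of numpy/scipy. This is an illustrative implementation run on toy data, not the exact configuration (K = 8, t_Max = 30) used in the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, t_max=100, tol=1e-6, seed=0):
    """Fit a K-component GMM to data X (N x m) by EM, per Eqs. (13)-(15)."""
    rng = np.random.default_rng(seed)
    N, m = X.shape
    # Crude k-means initialisation of the means (cf. Fig. 2)
    mu = X[rng.choice(N, size=K, replace=False)].copy()
    for _ in range(10):
        lab = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        mu = np.array([X[lab == k].mean(axis=0) if np.any(lab == k) else mu[k]
                       for k in range(K)])
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(m) for _ in range(K)])
    pi = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(t_max):
        # E-step (Eq. 13): responsibilities gamma(z_nk)
        dens = np.column_stack([pi[k] * multivariate_normal.pdf(X, mu[k], sigma[k])
                                for k in range(K)])
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step (Eqs. 14-15): update means, covariances, mixing coefficients
        Nk = gamma.sum(axis=0)
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            d = X - mu[k]
            sigma[k] = (gamma[:, k, None] * d).T @ d / Nk[k] + 1e-6 * np.eye(m)
        pi = Nk / N
        ll = np.log(dens.sum(axis=1)).sum()  # log-likelihood (Eq. 12)
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi, mu, sigma

# Toy usage: two well-separated 2-D clusters (illustrative data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, size=(200, 2)),
               rng.normal(5.0, 0.5, size=(200, 2))])
pi, mu, sigma = em_gmm(X, K=2)
```

The small diagonal jitter added to each covariance guards against singular components, a standard precaution with sparse or clustered data such as the Charpy error set.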

The first step in GMM error data modelling is to form the error data X. The output error e_y can be obtained from the original modelling data and the data model:

$$e_y(n) = t_n - f_m(u_n, w_{MP}), \quad n = 1, 2, \ldots, N \qquad (16)$$

where f_m is the function of the BNN data model; the other symbols are defined as in Section 2.

Apparently, this output error must be a constituent of the error data X from which a GMM is to be developed. The remaining constituents of the error data X are chosen from the original inputs U of the data model, assuming that no additional information is available. It is not advisable to use all the inputs in U, as most of the relationships between the inputs and outputs have already been elicited during the data modelling stage. The dimension of the error data should also be kept to a minimum, since high-dimensional error data would require a very large number of samples to fix α adequately. Hence, the heuristic rule proposed here is to select a minimal subset from U that exerts the most relevant influence on e_y.

Following the above heuristic rule, 7 additional inputs, i.e. testing depth, specimen size, carbon composition, manganese composition, sulphur composition, hardening temperature, and testing temperature, were selected from the 16 inputs to form the error data X. The resulting error data X is an N × 8 matrix, given by:

$$X = \begin{bmatrix} x(1) \\ x(2) \\ \vdots \\ x(N) \end{bmatrix} = \begin{bmatrix} e_y(1) & u_{e_2}^1 & \ldots & u_{e_8}^1 \\ e_y(2) & u_{e_2}^2 & \ldots & u_{e_8}^2 \\ \vdots & \vdots & & \vdots \\ e_y(N) & u_{e_2}^N & \ldots & u_{e_8}^N \end{bmatrix} \qquad (17)$$

where e_i, i = 2, 3, ..., 8, are the indices of the positions in U of the ith error data constituent. Only data in the training and validation sets are used to form X, with N = 1442.

After some trial-and-error experiments, K = 8 was judged appropriate for the GMM error data modelling. The EM algorithm of Fig. 2 was then run to obtain the optimal α, with t_Max = 30. The GMM parameters obtained are given in (18)-(20), where each row of µ and Σ corresponds to one Gaussian component (the rows of Σ holding the diagonal elements of the component covariance matrices):

$$\pi = [0.29,\ 0.05,\ 0.10,\ 0.15,\ 0.06,\ 0.11,\ 0.15,\ 0.09]^T \qquad (18)$$

$$\mu = \begin{bmatrix} -0.577 & 0.525 & 0.438 & -0.485 & 0.692 & -0.208 & 0.980 & -0.003 \\ 0.317 & 0.695 & 0.341 & 0.542 & 0.115 & 1.031 & 1.093 & 0.081 \\ 1.172 & -0.735 & 0.443 & 0.445 & -1.183 & 1.085 & -1.043 & 0.055 \\ -0.604 & -0.542 & -0.589 & -0.588 & 0.259 & -1.051 & 0.237 & 0.088 \\ 0.005 & -0.674 & -2.231 & 2.235 & -0.280 & -0.405 & -0.579 & 0.008 \\ 1.208 & -0.144 & 0.770 & 0.620 & -0.204 & -0.959 & 0.003 & -0.066 \\ 0.108 & 0.075 & 0.392 & 0.483 & 0.197 & 1.049 & -0.988 & -0.105 \\ -0.398 & 0.153 & -1.309 & -1.408 & -1.279 & 0.331 & -0.981 & 0.010 \end{bmatrix} \qquad (19)$$

$$\Sigma = \begin{bmatrix} 0.000 & 0.814 & 0.030 & 0.134 & 0.407 & 0.196 & 0.001 & 0.473 \\ 0.000 & 0.506 & 0.019 & 0.033 & 0.161 & 0.129 & 0.000 & 0.512 \\ 1.515 & 0.091 & 0.006 & 0.004 & 0.170 & 0.019 & 0.034 & 1.577 \\ 0.013 & 1.062 & 1.735 & 0.404 & 1.620 & 1.612 & 0.813 & 1.479 \\ 0.094 & 0.033 & 0.389 & 2.492 & 0.568 & 0.117 & 0.407 & 1.347 \\ 2.767 & 0.803 & 0.309 & 0.353 & 0.615 & 0.349 & 1.085 & 0.461 \\ 0.150 & 0.918 & 0.015 & 0.012 & 0.052 & 0.027 & 0.002 & 1.230 \\ 0.252 & 1.132 & 0.008 & 0.004 & 0.298 & 0.103 & 0.235 & 1.433 \end{bmatrix} \qquad (20)$$

The GMM parameter set α = {π, µ, Σ} completely specifies the joint PDF of the GMM via (7); an example of the marginal PDF (obtained from the GMM) for the output error is shown in Fig. 3.

Fig. 3. GMM marginal PDF for e_y

The GMM error model (7) contains rich information about the model output error; hence it can be exploited to enhance the BNN data model. One way of using the GMM relies on finding the following conditional PDF under given inputs:

$$P(e_y \mid x_{e_2}, \ldots, x_{e_8}) = \frac{P(e_y, x_{e_2}, \ldots, x_{e_8})}{P(X_e)} = P(e_y, x_{e_2}, \ldots, x_{e_8}) \Big/ \int_{\Omega_e} P(e, X_e)\, de \qquad (21)$$

where X_e = [x_{e_2}, ..., x_{e_8}]^T. The analytical derivation of the conditional PDF (21) is challenging because of the integrations involved; numerical solutions can, however, be found. From (21), the associated conditional mean µ_{e_y|X_e} and standard deviation σ_{e_y|X_e} can also be computed.

The conditional mean and standard deviation have been exploited to compensate the BNN data model for biased predictions, as well as to provide prediction confidence bands. One of the hybrid model results on the test data is shown in Fig. 4.



Fig. 4. Compensated predictions and 95% confidence bands

Fig. 4 shows that the confidence bands fit nicely with the actual measurements in the test data. Compared to the results obtained via the BNN model (see Fig. 1(b)), this is a significant improvement. The effect of prediction compensation is less dramatic, which confirms that the BNN model is good, as it provides predictions with a small bias.

5. CONCLUSIONS

A novel way of using a Gaussian mixture model to capture the complicated probabilistic behaviour of model error data has been developed in this paper. This method is particularly beneficial in cases where the modelling data are not well-distributed, have a large measurement scatter and include multiple sources of random disturbance. Standard 'deterministic' data-driven modelling often fails to deliver satisfactory results in such cases, and the GMM error data model can provide valuable information about the stochastic behaviour of the errors. The GMM error data model can then be exploited to provide confidence bands, thus giving vital information about the reliability of the prediction, which is often critical in real applications of model predictions. The GMM error model can also provide prediction compensation to the original data model in order to reduce the error bias.

A detailed development of the algorithms and probabilistic manipulation for GMM error data modelling has been presented in this paper, together with the strategies and key concepts of GMM implementation. The GMM error model paradigm was applied to the Charpy impact energy data modelling problem, which is known to possess most of the unwanted characteristics, such as large measurement scatter, sparse sample distribution, and high-dimensional multiple random noises (Panoutsos and Mahfouf 2005). The GMM error model developed here is generic in nature, and can be coupled with other types of data-driven models.

Preliminary results of GMM error data modelling are promising, though a few technical challenges remain. The formation of the error data matrix deserves more elaborate techniques, since the selection of the inputs to be included in the error data is currently based mainly on 'expert knowledge'. Also, good criteria and algorithms for determining the total number of Gaussian components to be used in the GMM are highly desirable. Indeed, the trial-and-error method used in this research study may be practical, but it may prove inefficient, especially in the case of a high-dimensional input space. The authors will endeavour to tackle these issues in future research.

ACKNOWLEDGEMENTS

Financial support from the UK-EPSRC under Grant EP/F023464/1 is acknowledged. The authors wish to thank CORUS-TATA Engineering Steel (UK) for providing the modelling data used in this research.

REFERENCES

Bishop, C. M. (1995), Neural Networks for Pattern Recognition, Clarendon Press, Oxford.
Bishop, C. M. (2006), Pattern Recognition and Machine Learning, Springer.
Chen, M. Y., and D. A. Linkens (2001), A systematic neural-fuzzy modelling framework with application to material properties of alloy steels, IEEE Transactions on Systems, Man and Cybernetics – Part B: Cybernetics, 31(5), 781-790.
Huang, Z. K., and K. W. Chau (2008), A new image thresholding method based on Gaussian mixture model, Applied Mathematics and Computation, 205, 899-907.
Kinnunen, T., J. Saastamoinen, V. Hautamäki, M. Vinni, and P. Fränti (2009), Comparative evaluation of maximum a posteriori vector quantization and Gaussian mixture models in speaker verification, Pattern Recognition Letters, 30, 341-347.
Moskovic, R., and P. E. J. Flewitt (1997), An overview of the principles of modelling Charpy impact energy data using statistical analysis, Metallurgical & Materials Transactions A, 28A, 2609-2623.
Nabney, I. T. (2002), NETLAB – Algorithms for Pattern Recognition, Springer, London.
Panoutsos, G., and M. Mahfouf (2005), Granular computing and evolutionary fuzzy modelling for mechanical properties of alloy steels, Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, 4-8 July.
Tenner, J. (1999), Optimisation of the Heat Treatment of Steel using Neural Networks, PhD Thesis, The University of Sheffield.
Yang, Y. Y., and D. A. Linkens (2001), Error bounds calculation for steel tensile strength prediction – comparison of Bayesian and ensemble model approaches, IFAC Symposium on Automation in Mineral, Mining and Metal Processing, Tokyo, Japan, 4-6 September.
Yang, Y. Y., D. A. Linkens, M. Mahfouf, and A. J. Rose (2003), Grain growth modelling for continuous reheating process – a neural network-based approach, ISIJ International, 43(7), 1048-1056.

