Understanding Response Surfaces: Central Composite Designs and Box-Behnken Designs


Understanding Response Surfaces

In the engineering design process, it is important to understand which input variables contribute to the output variables of interest, and how. Reaching such a conclusion by trial and error is a lengthy, costly, and time-consuming search. Designed experiments replace that search with a powerful and cost-effective (in terms of computational time) statistical method.
A very simple designed experiment is the screening design. In this design, the permutations of the lower and upper limits (two levels) of each input variable (factor) are considered in order to study their effect on the output variable of interest. While this design is simple and popular in industrial experimentation, it captures only a linear effect, if any, between the input variables and the output variables. Furthermore, the effect of an interaction between any two input variables on the output variables cannot be characterized.
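To make the screening idea concrete, here is a minimal sketch (not from the original text; the factor names and ranges are hypothetical) that enumerates a two-level design as the permutation of each factor's lower and upper limits:

```python
# Sketch: enumerate a two-level (screening) full factorial design for three
# hypothetical factors; each run is one combination of lower/upper limits.
from itertools import product

# Hypothetical factor ranges (lower limit, upper limit); names are illustrative.
factors = {
    "temperature": (150.0, 200.0),
    "pressure": (1.0, 5.0),
    "flow_rate": (10.0, 20.0),
}

# Cartesian product of the two levels of every factor -> 2**3 = 8 runs.
runs = list(product(*factors.values()))

for i, run in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={value}" for name, value in zip(factors, run))
    print(f"run {i}: {settings}")
```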
To compensate for the insufficiency of the screening design, it is enhanced to include the center point of each input variable in the experiments. The center point of each input variable allows a quadratic effect (a minimum or maximum inside the explored space) between the input variables and the output variables to be identified, if one exists. This enhancement is commonly known as a response surface design and provides a quadratic response model of the responses. The quadratic response model can be calibrated using a full factorial design (all combinations of each level of every input variable) with three or more levels. However, full factorial designs generally require more samples than necessary to accurately estimate the model parameters. In light of this deficiency, statistical procedures have been developed to devise much more efficient experiment designs that use three or five levels of each factor, but not all combinations of levels; these are known as fractional factorial designs. Among these fractional factorial designs, the two most popular response surface designs are Central Composite Designs (CCDs) and Box-Behnken Designs (BBDs).
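As an illustrative sketch only (actual tools may place or scale points differently), the coded-unit points of a face-centered central composite design for two factors are the 2^2 factorial corners, the axial points on each axis, and a center point:

```python
# Sketch: coded-unit points of a face-centered central composite design (CCD)
# for k = 2 factors: 2**k factorial corners, 2*k axial points, one center point.
from itertools import product

k = 2
alpha = 1.0  # face-centered choice; a rotatable CCD would use alpha = (2**k) ** 0.25

corners = list(product((-1.0, 1.0), repeat=k))              # factorial portion
axial = [tuple(a if j == i else 0.0 for j in range(k))      # axial (star) points
         for i in range(k) for a in (-alpha, alpha)]
center = [(0.0,) * k]

for point in corners + axial + center:
    print(point)
```

A Box-Behnken design, by contrast, places its points at the midpoints of the edges of the design space plus center points, so the extreme corner combinations are never run.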
Design of Experiments types are:
Central Composite Designs
Box-Behnken Designs
Response Surfaces are created using:
Standard Response Surface - Full 2nd-Order Polynomial algorithms (see the sketch after this list)
Kriging Algorithms
Non-Parametric Regression Algorithms
Sparse Grid Algorithms
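For the Standard Response Surface option, a minimal sketch of the underlying idea is shown below: a full second-order polynomial calibrated by ordinary least squares. The sample data are invented for illustration, and this is not claimed to be the tool's actual algorithm.

```python
# Sketch: fit a full second-order polynomial response surface
#   y ~ b0 + b1*x1 + b2*x2 + b3*x1**2 + b4*x2**2 + b5*x1*x2
# to sampled data by ordinary least squares.
import numpy as np

# Hypothetical DOE samples (x1, x2) in coded units and observed responses y.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([5.2, 7.1, 6.0, 9.3, 5.5, 8.0, 6.1, 7.4, 6.8])

x1, x2 = X[:, 0], X[:, 1]
# Design matrix with constant, linear, pure quadratic, and interaction columns.
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients b0..b5:", coeffs)
```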

Kriging Algorithms

Kriging postulates a combination of a polynomial model plus departures of the form:

y(x) = f(x) + Z(x)    (7)

where y(x) is the unknown function of interest, f(x) is a polynomial function of x, and Z(x) is the realization of a normally distributed Gaussian random process with mean zero, variance σ², and non-zero covariance. The f(x) term in Equation 7 is similar to the polynomial model in a response surface and provides a global model of the design space.
While f(x) globally approximates the design space, Z(x) creates localized deviations so that the kriging model interpolates the N sample data points. The covariance matrix of Z(x) is given by:

\mathrm{Cov}\left[ Z(x^i), Z(x^j) \right] = \sigma^2 \mathbf{R}\left( \left[ r(x^i, x^j) \right] \right)    (8)

In Equation 8, R is the correlation matrix and r(x^i, x^j) is the spatial correlation of the function between any two of the N sample points x^i and x^j. R is an N × N symmetric, positive definite matrix with ones along the diagonal. The correlation function r(x^i, x^j) is the Gaussian correlation function:

r(x^i, x^j) = \exp\left( -\sum_{k=1}^{M} \theta_k \left| x_k^i - x_k^j \right|^2 \right)    (9)
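A short sketch, assuming the Gaussian correlation form of Equation 9, of how R could be assembled for given θ_k values (the sample points and θ values below are illustrative):

```python
# Sketch: assemble the N x N Gaussian correlation matrix R for N sample points
# in M dimensions, r(x^i, x^j) = exp(-sum_k theta_k * |x_k^i - x_k^j|**2).
import numpy as np

def gaussian_correlation_matrix(X, theta):
    """X: (N, M) array of sample points; theta: (M,) correlation parameters."""
    diff = X[:, None, :] - X[None, :, :]          # pairwise differences, (N, N, M)
    return np.exp(-np.sum(theta * diff**2, axis=-1))

# Illustrative sample points and theta values.
X = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 1.0]])
theta = np.array([2.0, 2.0])
R = gaussian_correlation_matrix(X, theta)
print(R)  # symmetric, positive definite, ones along the diagonal
```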

The θ_k in Equation 9 are the unknown parameters used to fit the model, M is the number of design variables, and x_k^i and x_k^j are the kth components of sample points x^i and x^j. In some cases, using a single correlation parameter gives sufficiently good results; the user can specify the use of a single correlation parameter, or one correlation parameter for each design variable (Tools > Options > Design Exploration > Response Surface > Kriging Options > Kernel Variation Type: Variable or Constant).
Z(x) can be written as:
(10)
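The sketch below uses the textbook ordinary-kriging predictor y_hat(x) = beta_hat + r^T R^{-1} (y - f beta_hat) built from the quantities above. It is a standard formulation shown for illustration only, with θ held fixed instead of being fitted by maximum likelihood, and is not claimed to be the exact algorithm implemented in any particular tool.

```python
# Sketch: textbook ordinary kriging with a constant trend f(x) = beta and the
# Gaussian correlation of Equation 9; theta is fixed here for simplicity.
import numpy as np

def corr_matrix(A, B, theta):
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(theta * diff**2, axis=-1))

def kriging_predict(X, y, theta, x_new):
    N = len(y)
    R = corr_matrix(X, X, theta) + 1e-10 * np.eye(N)    # tiny nugget for stability
    f = np.ones(N)
    beta = (f @ np.linalg.solve(R, y)) / (f @ np.linalg.solve(R, f))  # GLS trend
    r = corr_matrix(np.atleast_2d(x_new), X, theta)[0]  # correlations to new point
    return beta + r @ np.linalg.solve(R, y - beta * f)

# Illustrative 1-D data; the predictor interpolates the training points.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 0.8, 1.0])
print(kriging_predict(X, y, theta=np.array([5.0]), x_new=np.array([0.5])))  # ~0.8
```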

Non-Parametric Regression Algorithms

Let the input sample (as generated from a DOE method) be X = {x^1, x^2, x^3, …, x^M}, where each x^i is an N-dimensional vector and represents an input variable. The objective is to determine an equation of the form:
Y = W^T X + b    (11)

where W is a weighting vector and b is a bias term. In the generic non-parametric case, Equation 11 is rewritten as:

Y = \sum_{i=1}^{M} \left( A_i - A_i^* \right) \left\langle \Phi(x^i), \Phi(x) \right\rangle + b    (12)

where Φ is the kernel map and the quantities A_i and A_i^* are Lagrange multipliers whose derivation will be shown in later sections.


In order to determine the Lagrange multipliers, we start with the assumption that the weight vector W must be minimized such that all (or most) of the sample points lie within an error zone around the fitted surface. For a simple demonstration of this concept, see the figure below.

Figure: Fitting a regression line for a group of sample points with a tolerance of ε, which is characterized by the slack variables ξ and ξ*
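As a hedged, self-contained illustration of the ε-tube concept (it uses scikit-learn's SVR with an RBF kernel on synthetic data, not any specific tool's internal solver):

```python
# Sketch: epsilon-insensitive support vector regression; the surface is fitted so
# that most samples lie within a tolerance band (epsilon) around it, and slack
# variables absorb the points that fall outside the band.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 2))              # synthetic 2-factor samples
y = X[:, 0]**2 + 0.5 * X[:, 1] + 0.05 * rng.normal(size=40)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)       # epsilon sets the error zone
model.fit(X, y)

inside = np.abs(model.predict(X) - y) <= 0.05 + 1e-9
print(f"{inside.sum()} of {len(y)} samples lie within the epsilon tube")
```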

Goal Driven Optimization Theory

In this section, theoretical aspects of goal driven optimization are briefly discussed.

Shifted Hammersley Sampling Method

Pareto Dominance in Multi-Objective Optimization

MOGA (Multi-Objective Genetic Algorithm)

Decision Support Process

Nonlinear Programming by Quadratic Lagrangian (NLPQL)
