Types of analysis

Many different models are used in multivariate analysis (MVA), each with its own type of analysis:

1) Multivariate analysis of variance (MANOVA) extends the analysis of variance to cover cases
where there is more than one dependent variable to be analyzed simultaneously (sketch below);
see also multivariate analysis of covariance (MANCOVA).
2) Multivariate regression attempts to determine a formula that can describe how elements in
a vector of variables respond simultaneously to changes in others (sketch below). For linear
relations, regression analyses here are based on forms of the general linear model. Some suggest
that multivariate regression is distinct from multivariable regression; however, this distinction
is debated and not applied consistently across scientific fields.[2]
3) Principal components analysis (PCA) creates a new set of orthogonal variables that contain
the same information as the original set (sketch below). It rotates the axes of variation to give
a new set of orthogonal axes, ordered so that they summarize decreasing proportions of the
variation.
4) Factor analysis is similar to PCA but allows the user to extract a specified number of
synthetic variables, fewer than the original set, leaving the remaining unexplained variation
as error (sketch below). The extracted variables are known as latent variables or factors; each
may be assumed to account for covariation among a group of observed variables.
5) Canonical correlation analysis finds linear relationships between two sets of variables; it is
the generalised (i.e. canonical) version of bivariate[3] correlation (sketch below).
6) Redundancy analysis (RDA) is similar to canonical correlation analysis but allows the user to
derive a specified number of synthetic variables from one set of (independent) variables that
explain as much variance as possible in another (dependent) set. It is a multivariate
analogue of regression.[4]
7) Correspondence analysis (CA), or reciprocal averaging, finds (like PCA) a set of synthetic
variables that summarise the original set. The underlying model assumes chi-squared
dissimilarities among records (cases).
8) Canonical (or "constrained") correspondence analysis (CCA) summarises the joint variation
in two sets of variables (like redundancy analysis); it combines correspondence analysis with
multivariate regression analysis. The underlying model assumes chi-squared dissimilarities
among records (cases).
9) Multidimensional scaling comprises various algorithms to determine a set of synthetic
variables that best represent the pairwise distances between records (sketch below). The
original method is principal coordinates analysis (PCoA; based on PCA).
10) Discriminant analysis, or canonical variate analysis, attempts to establish whether a set of
variables can be used to distinguish between two or more groups of cases.
11) Linear discriminant analysis (LDA) computes a linear predictor from two sets of normally
distributed data to allow for classification of new observations (sketch below).
12) Clustering systems assign objects to groups (called clusters) so that objects (cases) from
the same cluster are more similar to each other than to objects from different clusters
(sketch below).
13) Recursive partitioning creates a decision tree that attempts to correctly classify members of
the population based on a dichotomous dependent variable (sketch below).
14) Artificial neural networks extend regression and clustering methods to non-linear
multivariate models.
15) Statistical graphics such as tours, parallel coordinate plots, and scatterplot matrices can be
used to explore multivariate data.
16) Simultaneous equations models involve more than one regression equation, with different
dependent variables, estimated together.
17) Vector autoregression involves simultaneous regressions of various time series variables on
their own and each other's lagged values (sketch below).
18) Principal response curves analysis (PRC) is a method based on RDA that allows the user to
focus on treatment effects over time by correcting for changes in control treatments over
time.[5]
19) Iconography of correlations replaces a correlation matrix with a diagram in which the
"remarkable" correlations are represented by a solid line (positive correlation) or a dotted
line (negative correlation).
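
As an illustration of MANOVA, here is a minimal sketch using Python with statsmodels; the data frame, its column names (y1, y2, group), and the simulated values are assumptions made for illustration.

```python
# A minimal MANOVA sketch: test whether group means differ on
# (y1, y2) jointly. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "y1": rng.normal(size=90),
    "y2": rng.normal(size=90),
    "group": np.repeat(["a", "b", "c"], 30),
})

fit = MANOVA.from_formula("y1 + y2 ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```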
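For multivariate regression, a minimal least-squares sketch under the general linear model, assuming simulated data: the coefficient matrix B in Y = XB + E is estimated with NumPy, one column of B per response variable.

```python
# Multivariate (multi-response) linear regression via least squares.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))            # 3 predictor variables
B_true = rng.normal(size=(3, 2))         # 2 response variables
Y = X @ B_true + 0.1 * rng.normal(size=(100, 2))

B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B_hat)  # estimated 3x2 coefficient matrix, one column per response
```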
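A PCA sketch, assuming scikit-learn and simulated data; the new variables (component scores) are orthogonal and ordered by decreasing explained variance.

```python
# PCA: rotate the axes of variation into orthogonal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))

pca = PCA(n_components=5)
scores = pca.fit_transform(X)            # rotated, uncorrelated variables
print(pca.explained_variance_ratio_)     # decreasing proportions of variance
```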
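A factor-analysis sketch, assuming scikit-learn and simulated data: two latent factors are extracted from six observed variables, with the residual variation treated as error.

```python
# Factor analysis: extract 2 latent factors from 6 observed variables.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
F = rng.normal(size=(300, 2))                    # latent factors
load = rng.normal(size=(2, 6))                   # loadings
X = F @ load + 0.3 * rng.normal(size=(300, 6))   # observed variables

fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_)       # estimated loadings (factors x variables)
print(fa.noise_variance_)   # per-variable residual (error) variance
```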
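A canonical-correlation sketch, assuming scikit-learn and simulated data in which the two variable sets share one common signal; the first pair of canonical variates should then be strongly correlated.

```python
# Canonical correlation: paired linear combinations of two variable
# sets that are maximally correlated with each other.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
shared = rng.normal(size=(150, 1))               # shared signal
X = np.hstack([shared, rng.normal(size=(150, 2))])
Y = np.hstack([shared + 0.1 * rng.normal(size=(150, 1)),
               rng.normal(size=(150, 2))])

cca = CCA(n_components=1).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
print(np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])     # first canonical correlation
```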
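A multidimensional-scaling sketch, assuming scikit-learn and SciPy: metric MDS recovers a two-dimensional configuration from a precomputed matrix of pairwise distances between records.

```python
# Metric MDS on a precomputed Euclidean distance matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(5)
points = rng.normal(size=(40, 4))
D = squareform(pdist(points))                    # 40x40 distance matrix

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)                    # 2-D coordinates per record
print(coords[:3])
```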
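An LDA sketch, assuming scikit-learn and two simulated, normally distributed groups; the fitted linear predictor classifies new observations.

```python
# LDA: fit a linear discriminant on two labelled groups.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, size=(50, 2)),
               rng.normal(2, 1, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[0.0, 0.0], [2.0, 2.0]]))     # predicted group labels
```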
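A clustering sketch using k-means, one of many clustering systems, assuming scikit-learn and three simulated clusters; cases in the same cluster end up closer to each other than to cases in other clusters.

```python
# K-means: assign cases to 3 clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0, 3, 6)])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])   # cluster index for the first few cases
```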
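A recursive-partitioning sketch, assuming scikit-learn's decision tree and a simulated dichotomous dependent variable.

```python
# Recursive partitioning: grow a small decision tree for a
# two-class outcome and print the fitted splits.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # dichotomous outcome

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree))                         # the fitted splits
```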
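A vector-autoregression sketch, assuming statsmodels and a simulated two-variable VAR(1) process: each series is regressed on its own and the other series' lagged values.

```python
# VAR: simultaneous regressions on lagged values of all series.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
e = rng.normal(size=(200, 2))
data = np.zeros((200, 2))
for t in range(1, 200):                          # simulate a VAR(1) process
    data[t] = 0.5 * data[t - 1] + e[t]

results = VAR(data).fit(maxlags=1)
print(results.params)                            # lag coefficients per equation
```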
