
In statistics and applications of statistics, normalization can have a range of meanings.[1] In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally
common scale, often prior to averaging. In more complicated cases, normalization may refer to more
sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted
values into alignment. In the case of normalization of scores in educational assessment, there may be
an intention to align distributions to a normal distribution. A different approach to normalization of
probability distributions is quantile normalization, where the quantiles of the different measures are
brought into alignment.
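
As a rough illustration (an assumed sketch, not taken from the article), the code below shows two of these ideas with NumPy: a simple linear rescaling of ratings onto a common [0, 1] scale, and a basic quantile normalization in which values holding the same rank in each dataset are mapped to a shared reference quantile. The function names and data are invented for illustration.

```python
# Hypothetical sketch: two ways to bring ratings on different scales
# onto a notionally common scale.
import numpy as np

def min_max_scale(x):
    """Rescale values linearly onto the common interval [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def quantile_normalize(columns):
    """Give every column the same empirical distribution: each value is
    replaced by the mean of the values holding the same rank across columns."""
    data = np.column_stack(columns).astype(float)
    ranks = data.argsort(axis=0).argsort(axis=0)      # rank of each entry within its column
    reference = np.sort(data, axis=0).mean(axis=1)    # mean value at each rank across columns
    return reference[ranks]                           # map ranks back to the reference quantiles

# The same five items rated by two judges on a 1-5 scale and a 0-100 scale.
judge_a = [1, 3, 4, 5, 2]
judge_b = [20, 55, 70, 98, 35]
print(min_max_scale(judge_a))                  # both now lie in [0, 1]
print(min_max_scale(judge_b))
print(quantile_normalize([judge_a, judge_b]))  # columns now share one distribution
```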
In another usage in statistics, normalization refers to the creation of shifted and scaled versions of
statistics, where the intention is that these normalized values allow the comparison of corresponding
normalized values for different datasets in a way that eliminates the effects of certain gross influences, as
in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values
relative to some size variable. In terms of levels of measurement, such ratios only make sense
for ratio measurements (where ratios of measurements are meaningful), not interval measurements
(where only distances are meaningful, but not ratios).
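
A minimal sketch of these two usages, with assumed names and data rather than anything from the article: a standardized anomaly series (a shifted and scaled version of a statistic, so that datasets with different baselines and spreads become comparable), and a rescaling relative to a size variable, which is only meaningful for ratio-level measurements.

```python
# Hypothetical sketch: standardized anomalies and a ratio to a size variable.
import numpy as np

def standardized_anomalies(series):
    """Subtract the series mean and divide by its standard deviation,
    removing gross location and scale effects from the comparison."""
    series = np.asarray(series, dtype=float)
    return (series - series.mean()) / series.std(ddof=1)

# Monthly temperatures from two stations with different baselines and spreads.
station_a = [10.1, 11.3, 9.8, 12.0, 10.6]
station_b = [21.5, 25.0, 19.9, 26.3, 22.4]
print(standardized_anomalies(station_a))
print(standardized_anomalies(station_b))   # now on a common, unitless scale

# Rescaling by a size variable (ratio measurements only), e.g. GDP per capita.
gdp        = np.array([3.2e9, 1.1e9, 5.6e9])
population = np.array([4.0e5, 1.5e5, 7.2e5])
print(gdp / population)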
In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions
whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal
quantities that can be computed from observations, without knowing parameters.
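
For instance (an assumed illustration, not from the article), for a sample from a normal distribution the studentized mean (xbar - mu) / (s / sqrt(n)) is pivotal: its sampling distribution is Student's t with n - 1 degrees of freedom regardless of the true mean and variance. The short simulation below checks this numerically; the function name and parameter values are invented.

```python
# Hypothetical sketch: the studentized mean is pivotal for a normal sample,
# so its distribution does not depend on the parameters mu and sigma.
import numpy as np

rng = np.random.default_rng(0)

def studentized_mean(mu, sigma, n, reps=100_000):
    samples = rng.normal(mu, sigma, size=(reps, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    return (xbar - mu) / (s / np.sqrt(n))

# Very different parameter settings give numerically the same distribution.
for mu, sigma in [(0.0, 1.0), (50.0, 12.0)]:
    t = studentized_mean(mu, sigma, n=10)
    print(np.percentile(t, [5, 50, 95]))   # quantiles agree across settings
```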
