
Quantitative Techniques

Definitions
Mehmood Sultan
5/4/2018

Matrices
1. Definition of a Matrix
A rectangular array of numbers is called a matrix.

We shall mostly be concerned with matrices having real numbers as entries.


The horizontal arrays of a matrix are called its rows and the vertical arrays are called its
columns.
A matrix having m rows and n columns is said to have the order m × n.
A matrix A of order m × n can be represented in the following form (here a 3 × 3 example):
A = 2 2 3
    4 7 8
    9 8 2

A vector is a special type of matrix that has only one row (called a row vector) or
one column (called a column vector).
2. Equality of two Matrices
Two matrices A = [aij] and B = [bij] having the same order
m × n are equal if aij = bij for each i = 1, 2, . . . , m and j = 1, 2, . . . , n.
In other words, two matrices are said to be equal if they have the same order and their
corresponding entries are equal.

Special Matrices
3. Zero Matrix:
A matrix in which each entry is zero is called a zero matrix, denoted by 0. For
example
A = 0 0
    0 0
4. Square Matrix:
A matrix having the same number of rows as columns is called a square matrix. In a
square matrix A = [aij] of order n, the entries a11, a22, . . . , ann are called the diagonal
entries and form the principal diagonal of A.
5. Diagonal Matrix:
A square matrix in which every entry off the principal diagonal is zero is called a
diagonal matrix (an example is given in item 7 below).
6. Symmetric matrix:
A square matrix in which aij = aji for all i and j is called a symmetric matrix.
Or
A matrix for which At = A is called a symmetric matrix.
7. Diagonal matrix:
A square matrix in which all entries off the principal diagonal are zero is called a
diagonal matrix; the diagonal entries themselves may be any numbers.
For example
A = 3 0 0
    0 2 0
    0 0 5
8. Identity matrix
A square matrix in which all diagonal elements are 1 while all others are zero is called an
identity matrix. The identity matrix is almost always denoted as I.
For example
I = 1 0 0
    0 1 0
    0 0 1
9. Submatrix of a Matrix:
A matrix obtained by deleting some of the rows and/or columns of a matrix is said to be
a submatrix of the given matrix.
For example, if
A = 1 4 5
    0 1 2
then [1], [2], [1 5], and
1 5
0 2
are some of the submatrices of A.

Statistics
10. Measure of Central Tendency
Measure of central tendency means a single value that represents the whole data. It lies
near the center of the data: roughly half of the values are bigger than it and half of the
values are lower than it. It is also called the average.
Following are its types:-
a. Arithmetic mean
b. Mode
c. Median
d. Weighted average mean
e. Geometric mean
f. Harmonic mean
11. Arithmetic Mean
The sum of the observations divided by the number of observations is known as the
arithmetic mean. The A.M. is commonly referred to as the average, or simply the "mean".
12. Mode
The most frequent (most repeated) value in the data is called the mode; that is, the
number that appears most often in a set of numbers.
13. Median
The median is the most central value of the data. For calculation of the
median, the data must be arranged in order. Half of the values are above the median and half
of the values are below it.
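The three averages above can be sketched with Python's standard-library statistics module; the data values here are made up for illustration.

```python
# A minimal sketch of the three common averages, on made-up data,
# using only Python's standard library.
import statistics

data = [2, 4, 4, 5, 7, 9]

mean = statistics.mean(data)      # sum / count = 31 / 6
median = statistics.median(data)  # middle of the sorted data = (4 + 5) / 2
mode = statistics.mode(data)      # most frequent value, here 4

print(mean, median, mode)
```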
14. Measure of Dispersion
Measure of dispersion means the extent to which the individual observations (values)
are spread or scattered around their mean.
The following are the measures of dispersion:-
a. Range
b. Interquartile range
c. Semi-interquartile range (quartile deviation)
d. Mean deviation
e. Variance
f. Coefficient of variation
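Several of these dispersion measures can be computed directly with the standard library; the data values below are made up for illustration.

```python
# A minimal sketch of common dispersion measures on made-up data,
# using only Python's standard library.
import statistics

data = [4, 8, 6, 5, 3, 10]

data_range = max(data) - min(data)     # range = 10 - 3 = 7
variance = statistics.pvariance(data)  # population variance
std_dev = statistics.pstdev(data)      # population standard deviation
cv = std_dev / statistics.mean(data)   # coefficient of variation

print(data_range, variance, std_dev, cv)
```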
15. What is optimization?
Optimization means finding the alternative with the most cost-effective or highest achievable
performance under the given constraints, by maximizing desired factors and minimizing undesired
ones. In comparison, maximization means trying to attain the highest or maximum result or outcome
without regard to cost or expense. The practice of optimization is restricted by the lack of full
information and the lack of time to evaluate what information is available. In computer simulation
(modeling) of business problems, optimization is usually achieved by using the linear programming
techniques of operations research.

16. What are the applications of chi square distribution?


Chi square distribution has a large number of applications in statistics, some of which are
enumerated below:
1. To test if the hypothetical value of the population variance is σ² = σ₀²
2. To test the goodness of fit
3. To test the independence of attributes
4. To test the homogeneity of independent estimates of the population variance.
5. To combine various probabilities obtained from independent experiments to give a single
test of significance.
6. To test the homogeneity of independent estimates of the population correlation coefficient.

17. What is Discriminant Analysis?


Discriminant Analysis is a statistical tool with an objective to assess the adequacy of a
classification, given the group memberships; or to assign objects to one group among a number of
groups. For any kind of Discriminant Analysis, some group assignments should be known
beforehand.
Discriminant Analysis is quite close to being a graphical version of MANOVA and is often used to
complement the findings of Cluster Analysis and Principal Components Analysis.
When Discriminant Analysis is used to separate two groups, it is called Discriminant Function
Analysis (DFA); when there are more than two groups, the Canonical Variates Analysis
(CVA) method is used.
In the 1930s, three different people (R.A. Fisher in the UK, Hotelling in the US and Mahalanobis
in India) were trying to solve the same problem via three different approaches. Later their methods,
Fisher's linear discriminant function, Hotelling's T² test and the Mahalanobis D² distance, were
combined to devise what is today called Discriminant Analysis.

18. What is Logistic Regression?

Logistic regression aims to measure the relationship between a categorical dependent variable
and one or more independent variables (usually continuous) by modeling the dependent variable's
probability scores. A categorical variable is a variable that can take values falling in limited categories
instead of being continuous.
Logistic regression uses regression to predict the outcome of a categorical dependent variable on the
basis of predictor variables. The probable outcomes of a single trial are modeled as a function of the
explanatory variable using a logistic function. Logistic modeling is done on categorical data, which
may be of various types including binary and nominal. For example, a variable might be binary and
have two possible categories of 'yes' and 'no'; or it may be nominal, say hair color, which may be
black, brown, red, gold or grey.
Another objective of logistic regression is to check if the probability of getting a particular value of
the dependent variable is related to the independent variable. Multiple logistic regression is used
when there is more than one independent variable under observation.
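The logistic function mentioned above maps any linear score to a probability between 0 and 1. A minimal sketch follows; the coefficients b0 and b1 are made-up illustrative values, not fitted ones.

```python
# A minimal sketch of the logistic (sigmoid) function that logistic
# regression uses to turn a linear score into a probability.
import math

def logistic(z):
    """Map any real number z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

b0, b1 = -4.0, 0.5             # hypothetical intercept and slope
x = 10                         # hypothetical predictor value
p_yes = logistic(b0 + b1 * x)  # modeled probability of the 'yes' category

print(round(p_yes, 3))
```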

19. What is a Chi Square Test?


There are two types of chi-square tests. Both use the chi-square statistic and distribution for
different purposes:
 A chi-square goodness of fit test determines whether sample data match a hypothesized
population distribution.
 A chi-square test for independence compares two variables in a contingency table to see if
they are related. In a more general sense, it tests whether distributions of categorical
variables differ from one another.
 A very small chi-square test statistic means that your observed data fit your expected data
extremely well; in the test for independence, this suggests there is no relationship between
the variables.
 A very large chi-square test statistic means that the observed data do not fit the expected
data well; in the test for independence, this suggests the variables are related.
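The chi-square statistic itself is just the sum of (observed − expected)² / expected over all categories. A minimal sketch, on made-up counts:

```python
# A minimal sketch of the chi-square statistic, sum of (O - E)^2 / E,
# on made-up observed and expected counts.
observed = [18, 22, 20, 40]
expected = [25, 25, 25, 25]

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# (49 + 9 + 25 + 225) / 25 = 308 / 25 = 12.32
print(chi_square)
```

A large value like this, compared against the chi-square distribution with the appropriate degrees of freedom, indicates a poor fit between observed and expected counts.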

20.What is the 'Normal Distribution'?


The normal distribution, also known as the Gaussian distribution, is a probability distribution
that is symmetric about the mean, showing that data near the mean are more frequent than data far
from the mean.
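The density formula of the normal distribution makes both properties visible: it is symmetric about the mean and peaks there. A minimal sketch:

```python
# A minimal sketch of the normal (Gaussian) density, showing that
# values near the mean are more likely than values far from it.
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and std dev sigma."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# The density is symmetric about the mean and peaks there.
print(normal_pdf(0))                  # peak, about 0.3989
print(normal_pdf(2), normal_pdf(-2))  # equal, and much smaller
```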
21.What is 'Skewness'?
Skewness is a term in statistics used to describe asymmetry, i.e. departure from the symmetry
of the normal distribution, in a set of statistical data. Skewness can come in the form of negative
skewness or positive skewness, depending on whether the data points are skewed to the left
(negative) or to the right (positive) of the data average. A dataset that shows this characteristic
differs from a normal bell curve.

22.What is a 'Probability Distribution?'


A probability distribution is a statistical function that describes all the possible values and
likelihoods that a random variable can take within a given range. This range will be between the
minimum and maximum statistically possible values, but where the possible value is likely to be
plotted on the probability distribution depends on a number of factors. These factors include the
distribution's mean, standard deviation, skewness and kurtosis.

23.What is Simplex Method?


Definition: The Simplex Method or Simplex Algorithm is used for calculating the optimal
solution to a linear programming problem. In other words, the simplex algorithm is an iterative
procedure carried out systematically to determine the optimal solution from the set of feasible solutions.
Firstly, to apply the simplex method, appropriate variables are introduced into the linear programming
problem, and the primary or decision variables are equated to zero. The iterative process begins by
assigning values to these defined variables. The value of the decision variables is taken as zero since
the evaluation, as in the graphical approach, begins at the origin. Therefore, x1 and x2 are equal to
zero.
The decision maker will enter appropriate values of the variables in the problem, find the
variable value that contributes the maximum to the objective function, and remove those values
which give undesirable results. Thus, the value of the objective function gets improved through this
method. This procedure of substitution of variable values continues until no further improvement in
the value of the objective function is possible.
The following two conditions need to be met before applying the simplex method:
1. The right-hand side of each constraint inequality should be non-negative. If any
linear programming problem has a negative resource value, it should be converted into a positive
value by multiplying both sides of the constraint inequality by "-1".
2. The decision variables in the linear programming problem should be non-negative.

24.What does Simplex Method mean?


The simplex method, in mathematical optimization, is a well-known algorithm used for linear
programming. According to the journal Computing in Science & Engineering, this method is considered
one of the top 10 algorithms that originated during the twentieth century.
The simplex method presents an organized strategy for evaluating a feasible region's vertices.
This helps to figure out the optimal value of the objective function.
George Dantzig developed the simplex method in 1947.
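The core idea, that the optimum of a linear program lies at a vertex of the feasible region, can be illustrated by simply enumerating the vertices of a small made-up problem (the simplex method itself visits vertices far more cleverly than this):

```python
# A minimal sketch of the idea behind the simplex method: the optimum
# of a linear program lies at a vertex of the feasible region.
# Made-up problem: maximize 3x + 2y
#                  subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0
vertices = [(0, 0), (4, 0), (3, 1), (0, 2)]  # corners of the feasible region

def objective(v):
    x, y = v
    return 3 * x + 2 * y

best = max(vertices, key=objective)
print(best, objective(best))  # (4, 0) with objective value 12
```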

25.What is an estimator?
To estimate means to esteem (to give value to). An estimator is any quantity calculated from
the sample data which is used to give information about an unknown quantity in the population.
For example, the sample mean x̄ is an estimator of the population mean μ.
Now, results of estimation can be expressed as a single value―known as a point estimate, or a range
of values―referred to as a confidence interval.
26.What's a point estimate?
A point estimate of a population parameter is a single value used to estimate the population
parameter.
For example, the sample mean (x̄) is a point estimate of the population mean, μ.
Whenever we use point estimation, we calculate the margin of error associated with that point
estimation.
27.What is a confidence interval?
A confidence interval is a range of values, computed from sample data, that is likely to
contain the true value of an unknown population parameter with a stated level of
confidence (for example, 95%).
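A minimal sketch of a 95% confidence interval for a population mean follows, using the normal approximation (z ≈ 1.96) on made-up sample data; for small samples a t critical value would be more appropriate.

```python
# A minimal sketch of a 95% confidence interval for a mean,
# using the normal approximation on made-up sample data.
import statistics, math

sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]
n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean

z = 1.96  # approximate critical value for 95% confidence
lower, upper = mean - z * se, mean + z * se
print(round(lower, 3), round(upper, 3))
```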
28.What are the Qualities of a Good Estimator?
1. Unbiasedness: An estimator is said to be an unbiased estimator of a given parameter when the
expected value of that estimator can be shown to be equal to the parameter being estimated.
Unbiasedness is a good quality for an estimate since, in such a case, using a weighted average of
several estimates provides a better estimate than each one of those estimates.
For example, if your estimates of the population mean µ are, say, 10 and 11.2 from two
independent samples of sizes 20 and 30 respectively, then a better estimate of the population mean µ
based on both samples is [20(10) + 30(11.2)]/(20 + 30) = 10.72.
Therefore, unbiasedness allows us to upgrade our estimates.
2. Consistency: An estimator is said to be "consistent" if increasing the sample size produces an
estimate with smaller standard error, so that the estimate converges to the parameter as the
sample size grows.
3. Efficiency: An efficient estimator is one which has the smallest standard error among all unbiased
estimators.
Also, the "best" estimator is the one which is the closest to the population parameter being estimated.

29.What is standard error?


The standard deviation of an estimate is called the standard error of that estimate. The larger
the standard error, the more error there is in the estimate.
The standard deviation of an estimate is a commonly used index of the error entailed in estimating a
population parameter based on the information in a random sample of size n from the entire
population.
30.What is the 'Bayes' Theorem'?
Bayes' theorem, named after 18th-century British mathematician Thomas Bayes, is a
mathematical formula for determining conditional probability. The theorem provides a way to revise
existing predictions or theories given new or additional evidence. In finance, Bayes' theorem can be
used to rate the risk of lending money to potential borrowers.
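The revision of a prior probability in the light of new evidence can be sketched directly from the formula P(A|B) = P(B|A)·P(A)/P(B). The lending numbers below are made up for illustration.

```python
# A minimal sketch of Bayes' theorem on a made-up lending example:
# revising the probability of default after observing a missed payment.
p_default = 0.05               # prior: P(default)
p_missed_given_default = 0.80  # likelihood: P(missed payment | default)
p_missed_given_ok = 0.10       # P(missed payment | no default)

# Total probability of observing a missed payment.
p_missed = (p_missed_given_default * p_default
            + p_missed_given_ok * (1 - p_default))

# Posterior: P(default | missed payment), revised upward from the prior.
p_default_given_missed = p_missed_given_default * p_default / p_missed
print(round(p_default_given_missed, 3))
```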

31.What is type I and type II error?


A type I error (or error of the first kind) is the incorrect rejection of a true null hypothesis.
Usually a type I error leads one to conclude that a supposed effect or relationship exists when in fact it
doesn't. Examples of type I errors include a test that shows a patient to have a disease when in fact the
patient does not have the disease, a fire alarm going off indicating a fire when in fact there is no fire,
or an experiment indicating that a medical treatment should cure a disease when in fact it does not.
A type II error (or error of the second kind) is the failure to reject a false null hypothesis. Examples
of type II errors would be a blood test failing to detect the disease it was designed to detect, in a
patient who really has the disease; a fire breaking out while the fire alarm does not ring; or a clinical
trial of a medical treatment failing to show that the treatment works when it really does.

32.What is a system of equation?


A system of equations is a set of simultaneous equations, usually in several unknowns, for which the
common solutions are sought. Thus a solution to the system is a set of values for each of the unknowns,
which together form a solution to each equation in the system.
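For a small linear system, the common solution can be computed by elimination (Cramer's rule for the 2×2 case). The system below is made up for illustration.

```python
# A minimal sketch of solving a made-up system of two simultaneous
# linear equations by Cramer's rule:
#   2x + 3y = 8
#   1x - 1y = -1
a1, b1, c1 = 2, 3, 8
a2, b2, c2 = 1, -1, -1

det = a1 * b2 - a2 * b1  # determinant of the coefficient matrix
x = (c1 * b2 - c2 * b1) / det
y = (a1 * c2 - a2 * c1) / det

print(x, y)  # the common solution satisfies both equations
```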

33.Differential equations
Differential equations are equations that involve one or more functions and their derivatives. They are
solved by finding an expression for the function that does not involve derivatives. Differential
equations are used to model processes that involve the rates of change of the variable, and are used in
areas such as physics, chemistry, biology, and economics.
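When no closed-form expression for the function can be found, differential equations are often solved numerically instead. A minimal sketch of Euler's method, applied to dy/dx = y (whose exact solution is eˣ, chosen here purely for illustration):

```python
# A minimal sketch of solving a differential equation numerically with
# Euler's method; dy/dx = y is used because its exact solution, e^x,
# lets us check the result.
import math

def euler(f, x0, y0, x_end, steps):
    """Advance y' = f(x, y) from (x0, y0) to x_end in equal steps."""
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)  # follow the tangent line for one small step
        x += h
    return y

approx = euler(lambda x, y: y, 0.0, 1.0, 1.0, 1000)
print(approx, math.e)  # the approximation is close to e = 2.71828...
```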

Types of Equations
34.Linear Equations and their properties
A simple linear equation is of the form: y = mx + c
A linear equation looks like a straight line when graphed.
It has a constant slope value.
The degree of a linear equation is always 1.
Superposition principle is applicable to a system characterized by a linear equation.
The output of a linear system is directly proportional to its input.
35.Non-Linear Equations and their properties
A simple non-linear equation is of the form: ax² + by² = c
A non-linear equation looks like a curve when graphed.
It has a variable slope value.
The degree of a non-linear equation is at least 2 or other higher integer values. With the increase in
the degree of the equation, the curvature of the graph increases.
The superposition principle does not apply to systems characterized by non-linear equations.
The input and output of a non-linear system are not directly proportional.

36.Key Differences between Expression and Equation


The points given below summarize the important differences between expression and equation:
i. A mathematical phrase that groups together numbers, variables, and operators, to
show the value of something is called expression. An equation is described as a
mathematical statement with two expressions set equal to one another.
ii. An expression is a sentence fragment that stands for a single numerical value. On the
contrary, an equation is a sentence showing equality between two expressions.
iii. The expression is simplified, through evaluation where we substitute values in place
of variables. Conversely, an equation is solved.
iv. An equation is indicated by an equal sign (=). On the other hand, there is no relation
symbol in an expression.
v. An equation is two sided, where an equal sign separates the left and right sides.
Unlike, an expression is one-sided; there is no demarcation like left or right side.
vi. The answer of an expression is either an expression or a numerical value, as
opposed to an equation, which can only be true or false.

37.Definition of Expression
In mathematics, the expression is defined as a phrase that groups together numbers (constant),
letters (variables) or their combination joined by operators (+, -, *, /), to represent the value of
something. An expression can be arithmetic, algebraic, polynomial and analytical.
As it does not contain any "equal to" (=) sign, it does not show any relationship. Hence, it has
nothing like a left side or right side. An expression can be simplified by combining like terms, or it
can be evaluated by inserting values in place of the variables to arrive at a numerical value. Examples:
9x + 2, x – 9, 3p + 5, 4m + 10
38.Definition of Equation
In mathematics, the term equation means a statement of equality. It is a sentence in which two
expressions are placed equal to one another. In order to satisfy an equation, it is important to
determine the value of the variable concerned; this is known as the solution or root of the equation.
An equation can be conditional or an identity. If the equation is conditional, then the equality of the
two expressions is true for a definite value of the variable involved. However, if the equation is an
identity, then the equality is true for all the values held by the variable. There are four types of
equation, discussed below:
*.Simple or Linear Equation: An equation is said to be linear if the highest power of the
variable concerned is 1.
Example: 3x + 13 = 8x – 2
*.Simultaneous Linear Equations: When there are two or more linear equations
containing two or more variables.
Example: 3x + 2y = 5, 5x + 3y = 7
*.Quadratic Equation: When, in an equation, the highest power is 2, it is called a
quadratic equation.
Example: 2x² + 7x + 13 = 0
*.Cubic Equation: As the name suggests, a cubic equation is one whose degree is 3.
Example: 9x³ + 2x² + 4x – 3 = 13

A mathematical equation which has a mathematical expression on both sides, along with at least one
variable and an equal sign (=), is known as a simple equation, e.g. 5x + 3x = 8x.
Whereas a mathematical equation which has a mathematical expression on both sides, along with two
variables and an equal sign (=), is known as a linear equation, e.g. x = 3y + 6.

39.Level of significance
The likelihood that a statistical test will reject the null hypothesis despite the hypothesis
actually being true.

40.Definition of level of significance


The probability of rejecting the null hypothesis in a statistical test when it is true—called also
significance level.

41.What are the differences between the simplex method and the graphical
method in linear programming?
The graphical method is applicable only for solving an LPP having two variables in its
constraints; if more than two variables are used, it is not possible to use the graphical method. In
those cases, the simplex method helps to solve such problems.
In short, the graphical method is used when the constraints contain two variables only,
but the simplex method can be used to solve constraints having more than two variables.
42.What are the 2 major computational methods of linear programming?
The Simplex Method and Interior Point Methods.
43.What is graphical method?
The graphical method is a method used to solve algebraic problems by using graphs.
44.Differentiate between linearity and range in analytical method
validation?
Linearity is the correlation of a regression line. Range is typically derived from linearity.
Range implies that the method developed corresponds to the "range" between 80-120% (ideally) of
the expected sample range (concentration, weight, etc.). For content uniformity, the range is expected
to be 70-130% of the test concentration.
45.What are the differences between one-tailed and two-tailed tests?
When you conduct a test of statistical significance, whether it is from a correlation, an
ANOVA, a regression or some other kind of test, you are given a p-value somewhere in the output. If
your test statistic is symmetrically distributed, you can select one of three alternative hypotheses. Two
of these correspond to one-tailed tests and one corresponds to a two-tailed test. However, the p-value
presented is (almost always) for a two-tailed test.
46.What is a two-tailed test?
First let’s start with the meaning of a two-tailed test. If you are using a significance level of
0.05, a two-tailed test allots half of your alpha to testing the statistical significance in one direction
and half of your alpha to testing statistical significance in the other direction. This means that .025 is
in each tail of the distribution of your test statistic. When using a two-tailed test, regardless of the
direction of the relationship you hypothesize, you are testing for the possibility of the relationship in
both directions. For example, we may wish to compare the mean of a sample to a given value x using a
t-test. Our null hypothesis is that the mean is equal to x. A two-tailed test will test both if the mean is
significantly greater than and if the mean is significantly less than x. The mean is considered significantly
different from x if the test statistic is in the top 2.5% or bottom 2.5% of its probability distribution,
resulting in a p-value less than 0.05.
47.What is a one-tailed test?
First, let’s discuss the meaning of a one-tailed test. If you are using a significance level of .05,
a one-tailed test allots your entire alpha to testing the statistical significance in the one direction of
interest. This means that .05 is in one tail of the distribution of your test statistic. When using a one-
tailed test, you are testing for the possibility of the relationship in one direction and completely
disregarding the possibility of a relationship in the other direction. Let’s return to our example
comparing the mean of a sample to a given value x using a t-test. Our null hypothesis is that the mean
is equal to x. A one-tailed test will test either if the mean is significantly greater than x or if the mean is
significantly less than x, but not both. Then, depending on the chosen tail, the mean is significantly
greater than or less than x if the test statistic is in the top 5% or bottom 5% of its probability
distribution, resulting in a p-value less than 0.05. The one-tailed test provides
more power to detect an effect in one direction by not testing the effect in the other direction.
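The relationship between the two p-values can be sketched for a z test statistic using the standard normal CDF, which the standard library exposes through math.erf; the statistic value is made up.

```python
# A minimal sketch of one-tailed vs two-tailed p-values for a z test
# statistic, using the standard normal CDF built from math.erf.
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.8  # a made-up test statistic

p_one_tailed = 1 - norm_cdf(z)        # upper-tail test only
p_two_tailed = 2 * (1 - norm_cdf(z))  # both tails: twice the one-tailed p

print(round(p_one_tailed, 4), round(p_two_tailed, 4))
```

Here the one-tailed test is significant at alpha = .05 while the two-tailed test is not, which is exactly the extra one-direction power described above.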

48.Comparison Chart: Correlation vs Regression

Meaning:
Correlation is a statistical measure which determines the co-relationship or association
of two variables.
Regression describes how an independent variable is numerically related to the
dependent variable.

Usage:
Correlation is used to represent the linear relationship between two variables.
Regression is used to fit a best line and estimate one variable on the basis of the other
variables.

Indicates:
The correlation coefficient indicates the extent to which two variables move together.
Regression indicates the impact of a unit change in the known variable (x)
on the estimated variable (y).

Objective:
Correlation is used to find a numerical value expressing the relationship between
variables.
Regression is used to estimate values of a random variable on the basis of the values of
a fixed variable.
49.Definition of Correlation
The term correlation is a combination of two words, ‘co’ (together) and ‘relation’ (connection),
between two quantities. Correlation is when, at the time of study of two variables, it is observed that a
unit change in one variable is accompanied by an equivalent change in the other variable, direct or
indirect. Otherwise, the variables are said to be uncorrelated when the movement in one variable does
not amount to any movement in the other variable in a specific direction. It is a statistical technique
that represents the strength of the connection between pairs of variables.
Correlation can be positive or negative. When the two variables move in the same direction,
i.e. an increase in one variable results in a corresponding increase in the other variable and vice
versa, the variables are considered to be positively correlated. For instance: profit and
investment.
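The strength of such a connection is measured by the Pearson correlation coefficient. A minimal sketch on made-up profit/investment data, using only the standard library:

```python
# A minimal sketch of the Pearson correlation coefficient on made-up
# profit/investment data.
import math

investment = [1, 2, 3, 4, 5]
profit = [2, 4, 5, 4, 5]

n = len(investment)
mx = sum(investment) / n
my = sum(profit) / n

cov = sum((x - mx) * (y - my) for x, y in zip(investment, profit))
sx = math.sqrt(sum((x - mx) ** 2 for x in investment))
sy = math.sqrt(sum((y - my) ** 2 for y in profit))

r = cov / (sx * sy)  # +1: perfect positive, -1: perfect negative, 0: none
print(round(r, 3))
```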

50.What is central tendency?


NOTE: (The definition is described in Q # 10. There is only brief detail about this; you can

combine the question for better understanding)

Central tendency refers to and locates the center of the distribution of values. Mean, mode,
and median are the most commonly used indices in describing the central tendency of a data set. If a
data set is symmetric, then both the median and the mean of the data set coincide with each other.
Given a data set, the mean is calculated by taking the sum of all the data values and then
dividing it by the number of data. For example, the weights of 10 people (in kilograms) are measured
to be 70, 62, 65, 72, 80, 70, 63, 72, 77 and 79. Then the mean weight of the ten people (in kilograms)
can be calculated as follows. Sum of the weights is 70 + 62 + 65 + 72 + 80 + 70 + 63 + 72 + 77 + 79 =
710. Mean = (sum) / (number of data) = 710 / 10 = 71 (in kilograms). It is understood that outliers
(data points that deviate from the normal trend) tend to affect the mean. Thus, in the presence of
outliers mean alone will not give a correct picture about the center of the data set.
The median is the data point found at the exact middle of the data set. One way to compute the
median is to order the data points in ascending order and then locate the data point in the middle. For
example, once ordered, the previous data set looks like 62, 63, 65, 70, 70, 72, 72, 77, 79, 80.
Since there is an even number of values, the median is the average of the two middle values:
(70 + 72)/2 = 71. From this, it is seen that the median need not be in the data
set. The median is not affected by the presence of outliers. Hence, the median will serve as a better
measure of central tendency in the presence of outliers.
The mode is the most frequently occurring value in the set of data. In the previous example,
the values 70 and 72 both occur twice and thus both are modes. This shows that, in some
distributions, there is more than one modal value. If there is only one mode, the data set is said to be
unimodal; in this case, the data set is bimodal.
51.What is dispersion?
NOTE: (The definition is described in Q # 14. There is only brief detail about this; you can

combine the question for better understanding)

Dispersion is the amount of spread of data about the center of the distribution. Range and
standard deviation are the most commonly used measures of dispersion.
The range is simply the highest value minus the lowest value. In the previous example, the
highest value is 80 and the lowest value is 62, so the range is 80-62 = 18. But range does not provide a
sufficient picture about the dispersion.
To calculate the standard deviation, first the deviations of the data values from the mean are
calculated. The square root of the mean of the squared deviations is called the (population) standard
deviation. In the previous example, the squared deviations sum to 366, so the standard deviation is
√(366/10) ≈ 6.05 (in kilograms). Unless the data set is greatly skewed, it can be concluded that the
majority of the data lie in the interval 71 ± 6.05, and it is indeed so in this particular example.
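The weight example worked through above can be checked directly with the standard library: the mean should be 71, the median 71, the modes 70 and 72, and the population standard deviation about 6.05 kilograms.

```python
# Checking the worked weight example with Python's standard library.
import statistics

weights = [70, 62, 65, 72, 80, 70, 63, 72, 77, 79]

print(statistics.mean(weights))              # 71
print(statistics.median(weights))            # 71.0, average of 70 and 72
print(statistics.multimode(weights))         # [70, 72]: the data set is bimodal
print(round(statistics.pstdev(weights), 2))  # 6.05
```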
52.What is the difference between central tendency and dispersion?
• Central tendency refers to and locates the center of the distribution of values
• Dispersion is the amount of spread of data about the center of a data set.

53.8 Main Limitations of Statistics


The following 8 limitations of statistics are described here:
1.Qualitative Aspect Ignored:
Statistical methods do not study the nature of phenomena which cannot be
expressed in quantitative terms.
Such phenomena cannot be a part of the study of statistics. These include health, riches, intelligence,
etc. It needs conversion of qualitative data into quantitative data.
So experiments are being undertaken to measure the reactions of a man through data. Nowadays
statistics is used in all aspects of life as well as universal activities.
2.It does not deal with individual items:
It is clear from the definition given by Prof. Horace Sacrist, “By statistics we mean aggregates of
facts… and placed in relation to each other”, that statistics deals only with aggregates of facts or
items and does not recognize any individual item. Thus, individual items, such as the death of 6
persons in an accident or the 85% result of a class of a school in a particular year, will not amount to
statistics, as they are not placed in a group of similar items. It does not deal with individual items,
however important they may be.
3.It does not depict the entire story of a phenomenon:
When phenomena happen, they are due to many causes, but all these causes cannot be
expressed in terms of data. So we cannot reach the correct conclusions. Development of a group
depends upon many social factors like parents’ economic condition, education, culture, region,
administration by the government, etc. But all these factors cannot be placed in data. So we analyze
only the data we find quantitatively and not qualitatively. So the results or conclusions are not 100%
correct because many aspects are ignored.
4.It is liable to be misused:
As W.I. King points out, “One of the short-comings of statistics is that they do not bear on their face
the label of their quality.” So we can say that we must check the data and the procedures used to
approach conclusions. These data may have been collected by inexperienced persons, or they may
have been dishonest or biased. As it is a delicate science, it can be easily misused by an unscrupulous
person. So data must be used with caution; otherwise results may prove to be disastrous.
5.Laws are not exact:
The two fundamental laws of statistics,
(i) the Law of Inertia of Large Numbers and
(ii) the Law of Statistical Regularity,
are not as exact as the laws of the physical sciences. They are based on probability, so their results
will not always be as dependable as those of scientific laws. On the basis of probability or
interpolation, we can only estimate the production of paddy in 2008, but cannot claim that the
estimate will be exactly right. Here only approximations are made.
6.Results are true only on average:
As discussed above, the results here are interpolated, for which time series, regression or
probability can be used. These are not absolutely true. If the average of two sections of students in
statistics is the same, it does not mean that all 50 students in section A have got the same marks as
those in B. There may be much variation between the two. So we get only average results.
“Statistics largely deals with averages and these averages may be made up of individual items
radically different from each other.” —W.I. King
7.Too many methods to study problems:
In this subject we use many methods to find a single result. Variation can be found by quartile
deviation, mean deviation or standard deviation, and the results vary in each case.
“It must not be assumed that statistics is the only method to use in research, neither should this
method be considered the best attack for the problem.” —Croxton and Cowden
8.Statistical results are not always beyond doubt:
“Statistics deals only with measurable aspects of things and therefore, can seldom give the complete
solution to a problem. They provide a basis for judgment but not the whole judgment.” —Prof. L.R.
Connor
Although we use many laws and formulae in statistics, the results achieved are not final and
conclusive. As they are unable to give a complete solution to a problem, the results must be taken and
used with much wisdom.

54.What is Probability analysis?


Definition
A technique used by risk managers for forecasting future events, such as accidental and
business losses. This process involves a review of historical loss data to calculate a probability
distribution that can be used to predict future losses. The probability analyst views past losses as a
range of outcomes of what might be expected for the future and assumes that the environment will
remain fairly stable. This technique is particularly effective for companies that have a large amount of
data on past losses and that have experienced stable operations. This type of analysis is contrasted to
trend analysis.

55.Define system of equation?


A "system" of equations is a set or collection of equations that you deal with all together at
once. Linear equations (ones that graph as straight lines) are simpler than non-linear equations, and
the simplest linear system is one with two equations and two variables.
Think back to linear equations. For instance, consider the linear equation y = 3x – 5. A "solution" to
this equation was any (x, y) point that "worked" in the equation. So (2, 1) was a solution because,
plugging in 2 for x:
3x – 5 = 3(2) – 5 = 6 – 5 = 1 = y
On the other hand, (1, 2) was not a solution, because, plugging in 1 for x:
3x – 5 = 3(1) – 5 = 3 – 5 = –2
...and this did not equal y (which was 2 for this point). Of course, in practical terms, you did not find
solutions to an equation by picking random points, plugging them in, and checking to see if they
"work" in the equation. Instead, you picked x-values and then calculated the corresponding y-values.
And you used this same procedure to graph the equation. This points out an important fact: every
point on the graph was a solution to the equation, and any solution to the equation was a point on the
graph.

56.What is a 'T-Test'?
A t-test is an analysis of two population means through the use of statistical examination; a t-
test with two samples is commonly used with small sample sizes, testing the difference between the
samples when the variances of two normal distributions are not known.
A t-test looks at the t-statistic, the t-distribution and the degrees of freedom to determine the
probability of difference between populations; the test statistic in the test is known as the t-statistic.
To conduct a test with three or more groups, an analysis of variance (ANOVA) must be used.
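The two-sample t-statistic (in its pooled, equal-variance form) can be sketched directly from sample means and variances; the two samples below are made up for illustration.

```python
# A minimal sketch of the pooled two-sample t statistic on made-up samples.
import statistics, math

a = [5.1, 4.9, 5.6, 5.2, 5.0]
b = [4.5, 4.8, 4.4, 4.9, 4.6]

na, nb = len(a), len(b)
# Pooled variance: weighted average of the two sample variances.
pooled_var = (((na - 1) * statistics.variance(a)
               + (nb - 1) * statistics.variance(b))
              / (na + nb - 2))

# t statistic: difference in means over its standard error.
t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
    pooled_var * (1 / na + 1 / nb))

df = na + nb - 2  # degrees of freedom for the pooled test
print(round(t, 3), df)
```

The value of t is then compared against the t-distribution with df degrees of freedom to obtain a p-value.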

57.What is Regression?
A technique for determining the statistical relationship between two or more variables where a
change in a dependent variable is associated with, and depends on, a change in one or more
independent variables.
For example
*.When designing a scientific experiment, sometimes you have to use statistical analysis
involving regression curves to determine how the variables may have changed.
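For the simplest case, a straight-line fit y = a + b·x, the least-squares slope and intercept follow directly from the data; the (x, y) values below are made up for illustration.

```python
# A minimal sketch of simple least-squares regression, estimating the
# slope b and intercept a of y = a + b*x from made-up data.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Slope: covariance of x and y divided by the variance of x.
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx  # intercept: the fitted line passes through (mx, my)

print(round(a, 3), round(b, 3))  # b is the change in y per unit change in x
```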
Mehmoodsultan343@gmail.com
C.R (M.com1st)
