
MODULE- 4

SMALL SAMPLE TESTS

Introduction: The testing of hypotheses is very widely used in business, research, production engineering, and the clinical and pharmaceutical fields. It is a powerful tool in research and quality analysis.

Population: A group of homogeneous units or observations is called a population. Populations are of two types, namely
1. Finite population
2. Infinite population

Finite population: A population in which the number of observations is countable is called a finite population.
Example: The bikes produced by a TVS company form a finite population.

Infinite population: A population in which the number of observations is uncountable is called an infinite population.
Example: The number of stars in the sky is uncountable; such a population is an infinite population.

Parameter: A parameter is an important characteristic of the entire population, represented as a constant. Parameters are conventionally denoted by Greek letters.

Example: In a normal population, μ and σ are parameters.

Sample: A sample is a sub-part of the population that represents the entire population. In general it is written as x₁, x₂, x₃, …, xₙ.

Statistic: A statistic is a real-valued function of the sample observations, denoted by t = t(x₁, x₂, x₃, …, xₙ). It is used to estimate population parameter values.

Example: x̄ = (Σxᵢ)/n and S² = Σ(xᵢ − x̄)²/(n − 1) are statistics.
Degrees of freedom: The number of observations in a statistic that are free to vary is called the degrees of freedom, denoted by ν = n − 1.
Sample size: The number of observations in a sample is called the sample size, denoted by n.
Small sample: If the sample size is less than 30, i.e. n < 30, the sample is called a small sample.
Large sample: If the sample size is 30 or more, i.e. n ≥ 30, the sample is called a large sample.
Estimation: Estimating a population parameter through a statistic is called estimation; the value so obtained is the estimate.
Estimator: The statistic used to estimate a population parameter is called an estimator.

Point estimation: When a population parameter is estimated through a statistic by a single value, it is called point estimation.
Interval estimation: When a population parameter is estimated through a statistic by an interval between two constants, it is called interval estimation.
Sampling distribution: From the population we draw samples in all possible ways with a fixed sample size and compute the statistic for each sample; the distribution formed by those values is called the sampling distribution of the statistic.

Standard error: The standard deviation of the sampling distribution of a statistic is called the standard error, denoted by S.E.(t).

Statistic    Large sample S.E.(t)              Small sample S.E.(t)
x̄            σ/√n                              S/√n
x̄ − ȳ        √(σ₁²/n₁ + σ₂²/n₂)                √(S₁²/n₁ + S₂²/n₂)
s            σ/√(2n)                           S/√(2n)
s₁ − s₂      √(σ₁²/2n₁ + σ₂²/2n₂)              √(S₁²/2n₁ + S₂²/2n₂)
p            √(PQ/n)                           √(pq/n)
p₁ − p₂      √(P₁Q₁/n₁ + P₂Q₂/n₂)              √(p₁q₁/n₁ + p₂q₂/n₂)
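As a quick check on the table, the small-sample entries for x̄ and p can be computed directly. This is a minimal Python sketch with made-up numbers (the data are not from the notes):

```python
import math
import statistics

# Standard error of the sample mean: S / sqrt(n), with a hypothetical sample
sample = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2]
n = len(sample)
se_mean = statistics.stdev(sample) / math.sqrt(n)   # stdev uses the n-1 divisor

# Standard error of a sample proportion: sqrt(pq / n), hypothetical counts
x_successes, n_trials = 36, 100
p = x_successes / n_trials
q = 1 - p
se_prop = math.sqrt(p * q / n_trials)

print(round(se_mean, 4), round(se_prop, 4))
```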

Inference:
Hypothesis: A hypothesis is an assumption made about the data. Hypotheses are classified into two types, namely
1. Null hypothesis
2. Alternative hypothesis
Null hypothesis: The null hypothesis is the statement that there is no significant difference between the quantities compared; it is denoted by H₀.
Example: If a product is manufactured by two methods, H₀: there is no significant difference between the two methods.
Alternative hypothesis: The alternative hypothesis is the statement opposite to the null hypothesis, namely that there is a significant difference between the quantities compared; it is denoted by H₁.
Example: If a product is manufactured by two methods, H₁: there is a significant difference between the two methods.

Simple hypothesis: A statistical hypothesis that completely specifies an exact value of the parameter. The null hypothesis is always a simple hypothesis, stated as an equality specifying an exact value of the parameter:

H₀: θ = θ₀
vs
H₁: θ = θ₁ is called simple vs simple.

Composite hypothesis: A hypothesis stated in terms of several possible parameter values.
H₀: θ = θ₀
vs
H₁: θ ≠ θ₀ (or θ > θ₀, or θ < θ₀) is called simple vs composite.

Critical region: The sample space is divided into two mutually exclusive regions: the acceptance region, denoted by W̄, and the rejection region, denoted by W. If the sample falls in the acceptance region the lot is accepted, and if it falls in the rejection region the lot is rejected. The rejection region is called the critical region.

Acceptance region W̄        Rejection region W

Types of errors: In a decision-making system a product lot coming to the market is checked through sampling inspection; in this situation two types of errors may occur, namely
1. Type-I error or producer's risk
2. Type-II error or consumer's risk
Type-I error or producer's risk: The sample falls in the rejection region and the lot is rejected although the lot is good. Mathematically: X ∈ W when H₀ is true.

Type-II error or consumer's risk: The sample falls in the acceptance region and the lot is accepted although the lot is bad. Mathematically: X ∈ W̄ when H₀ is false.

              Accept H₀          Reject H₀
H₀ is true    Correct decision   Type-I error
H₀ is false   Type-II error      Correct decision

Level of significance: The probability of a Type-I error is called the level of significance: up to this level there is no significance, and beyond it there is. It is denoted by α, with P(X ∈ W when H₀ is true) = α.

One-tailed test: In testing of hypotheses, if the entire critical region lies on only one side, the test is called a one-tailed test. One-tailed tests are classified into two types, namely
1. Right-tailed test
2. Left-tailed test

Right-tailed test: In testing of hypotheses, if the entire critical region lies only on the right-hand side, the test is called a right-tailed test. In this case the null and alternative hypotheses are
H₀: θ = θ₀
vs
H₁: θ > θ₀

Left-tailed test: In testing of hypotheses, if the entire critical region lies only on the left-hand side, the test is called a left-tailed test. In this case the null and alternative hypotheses are
H₀: θ = θ₀
vs
H₁: θ < θ₀

Two-tailed test: In testing of hypotheses, if the critical region lies on both the left-hand and right-hand sides, the test is called a two-tailed test. In this case the null and alternative hypotheses are
H₀: θ = θ₀
vs
H₁: θ ≠ θ₀
Testing procedure: Any data can be tested through the following procedure.

1. For the given data develop the null and alternative hypotheses as follows:

H₀: there is no significant difference between them
vs
H₁: there is a significant difference between them

2. For the given data select an appropriate test statistic. In general, the test statistic is

Z = |t − E(t)| / S.E.(t)

where t is the statistic, E(t) is the expected value of t, and S.E.(t) is the standard error.

3. Compute the above statistic and compare it with the critical value observed from the concerned table at level α with n − 1 degrees of freedom.

4. If |Z| ≤ Z_{α/2} we accept H₀.
If |Z| > Z_{α/2} we reject H₀.
One-sample mean test:

Assumptions: 1. The sample is drawn from a normal population.
2. The sample size is small.

Let x₁, x₂, x₃, …, xₙ be a random sample of size n with sample mean x̄ and sample mean square s², drawn from a normal population with mean μ and standard deviation σ. To test whether there is any significant difference between the sample and population means, we apply the one-sample mean test. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: there is no significant difference between the sample and population means
vs
H₁: there is a significant difference between the sample and population means
or
H₀: μ = x̄
vs
H₁: μ ≠ x̄

2. For the given data select an appropriate test statistic. In the small-sample case the difference of means follows the t-distribution, so we apply the t-test statistic:

|t| = |x̄ − μ| / (s/√n)

where x̄ is the sample mean, μ is the population mean, s is the sample standard deviation, and n is the sample size.

3. Compute the above statistic and compare it with the critical value observed from the t-table at level α with n − 1 degrees of freedom.

4. If |t| ≤ t_{α/2} we accept H₀.
If |t| > t_{α/2} we reject H₀.
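The steps above can be sketched in plain Python. The data, the hypothesised mean μ₀ = 50, and the tabled critical value t₀.₀₂₅,₉ ≈ 2.262 are illustrative, not from the notes:

```python
import math
import statistics

# Hypothetical sample: weights of 10 items, testing H0: mu = 50 vs H1: mu != 50
sample = [48.2, 51.3, 49.8, 50.6, 47.9, 52.1, 49.4, 50.9, 48.7, 51.0]
mu0 = 50.0

n = len(sample)
x_bar = statistics.mean(sample)
s = statistics.stdev(sample)          # n-1 divisor, matching S^2 in the notes

# Test statistic: |t| = |x_bar - mu| / (s / sqrt(n))
t_stat = abs(x_bar - mu0) / (s / math.sqrt(n))

# Two-tailed t-table value at alpha = 0.05 with n - 1 = 9 d.f.
t_crit = 2.262
decision = "accept H0" if t_stat <= t_crit else "reject H0"
print(round(t_stat, 3), decision)
```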

Two-sample mean test:

Assumptions: 1. The samples are drawn from normal populations.
2. The sample sizes are small.

Let x₁, x₂, x₃, …, xₙ₁ be a random sample of size n₁ with sample mean x̄ and sample mean square s₁², drawn from a normal population with mean μ₁ and standard deviation σ₁. Let y₁, y₂, y₃, …, yₙ₂ be another random sample of size n₂ with sample mean ȳ and sample mean square s₂², drawn from a normal population with mean μ₂ and standard deviation σ₂. To test whether there is any significant difference between the two sample means, we apply the two-sample mean test. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:

H₀: there is no significant difference between the sample means
vs
H₁: there is a significant difference between the sample means
or
H₀: x̄ = ȳ
vs
H₁: x̄ ≠ ȳ

2. For the given data select an appropriate test statistic. In the small-sample case the difference of the two sample means follows the t-distribution, so we apply the t-test statistic:

|t| = |x̄ − ȳ| / √(S₁²/n₁ + S₂²/n₂)

3. Compute the above statistic and compare it with the critical value observed from the t-table at level α with n₁ + n₂ − 2 degrees of freedom.

4. If |t| ≤ t_{α/2} we accept H₀.
If |t| > t_{α/2} we reject H₀.
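A minimal sketch of the two-sample procedure, using the statistic exactly as written above. The two samples and the tabled value t₀.₀₂₅,₉ ≈ 2.262 are illustrative assumptions:

```python
import math
import statistics

# Hypothetical yields from two production methods
x = [12.1, 13.4, 11.8, 12.9, 13.2, 12.5]
y = [11.2, 11.9, 12.3, 11.5, 12.0]

n1, n2 = len(x), len(y)
x_bar, y_bar = statistics.mean(x), statistics.mean(y)
s1_sq = statistics.variance(x)        # n-1 divisor
s2_sq = statistics.variance(y)

# |t| = |x_bar - y_bar| / sqrt(S1^2/n1 + S2^2/n2)
t_stat = abs(x_bar - y_bar) / math.sqrt(s1_sq / n1 + s2_sq / n2)

# Two-tailed t-table value at alpha = 0.05 with n1 + n2 - 2 = 9 d.f.
t_crit = 2.262
decision = "accept H0" if t_stat <= t_crit else "reject H0"
print(round(t_stat, 3), decision)
```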

Paired t-test:

Assumptions: 1. The sample is drawn from a normal population.
2. The sample size is small.

Let (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ) be a paired random sample of size n drawn from a normal population with mean μ and standard deviation σ. To test whether there is any significant difference within the paired sample, we apply the paired-sample test. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: there is no significant difference within the paired sample
vs
H₁: there is a significant difference within the paired sample
or
H₀: xᵢ = yᵢ
vs
H₁: xᵢ ≠ yᵢ

2. For the given data select an appropriate test statistic. In the small-sample case the mean of the paired differences follows the t-distribution, so we apply the t-test statistic:

|t| = |d̄| / (S_d/√n)

where dᵢ = xᵢ − yᵢ are the paired differences, d̄ is their mean, S_d is their standard deviation, and n is the number of pairs.

3. Compute the above statistic and compare it with the critical value observed from the t-table at level α with n − 1 degrees of freedom.

4. If |t| ≤ t_{α/2} we accept H₀.
If |t| > t_{α/2} we reject H₀.
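The paired test differs from the two-sample test only in working with the differences dᵢ. A sketch with hypothetical before/after scores and the tabled value t₀.₀₂₅,₇ ≈ 2.365:

```python
import math
import statistics

# Hypothetical paired observations (e.g. scores before and after training)
before = [72, 68, 75, 70, 74, 69, 71, 73]
after  = [75, 70, 74, 74, 78, 70, 74, 76]

d = [a - b for a, b in zip(after, before)]   # paired differences d_i
n = len(d)
d_bar = statistics.mean(d)
s_d = statistics.stdev(d)                    # n-1 divisor

# |t| = |d_bar| / (S_d / sqrt(n))
t_stat = abs(d_bar) / (s_d / math.sqrt(n))

# Two-tailed t-table value at alpha = 0.05 with n - 1 = 7 d.f.
t_crit = 2.365
decision = "accept H0" if t_stat <= t_crit else "reject H0"
print(round(t_stat, 3), decision)
```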

Confidence intervals:

The 100(1 − α)% confidence interval for a single mean is given by

( x̄ − t_{α/2} s/√n , x̄ + t_{α/2} s/√n )

The 100(1 − α)% confidence interval for the difference of two means is given by

( (x̄ − ȳ) − t_{α/2} √(S₁²/n₁ + S₂²/n₂) , (x̄ − ȳ) + t_{α/2} √(S₁²/n₁ + S₂²/n₂) )

The 100(1 − α)% confidence interval for the mean of paired differences is given by

( d̄ − t_{α/2} S_d/√n , d̄ + t_{α/2} S_d/√n )
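The single-mean interval can be computed directly. The sample and the tabled value t₀.₀₂₅,₇ ≈ 2.365 are illustrative assumptions:

```python
import math
import statistics

# Hypothetical sample: 95% confidence interval x_bar +/- t_{alpha/2} s/sqrt(n)
sample = [9.8, 10.2, 10.5, 9.6, 10.1, 10.4, 9.9, 10.3]
n = len(sample)
x_bar = statistics.mean(sample)
s = statistics.stdev(sample)

t_half_alpha = 2.365       # t-table value for alpha = 0.05, n - 1 = 7 d.f.
margin = t_half_alpha * s / math.sqrt(n)
lower, upper = x_bar - margin, x_bar + margin
print(round(lower, 3), round(upper, 3))
```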
Two-sample variance test:

Assumptions: 1. The samples are drawn from normal populations.
2. The sample sizes are small.

Let x₁, x₂, x₃, …, xₙ₁ be a random sample of size n₁ with sample mean x̄ and sample mean square s₁², drawn from a normal population with mean μ₁ and standard deviation σ₁. Let y₁, y₂, y₃, …, yₙ₂ be another random sample of size n₂ with sample mean ȳ and sample mean square s₂², drawn from a normal population with mean μ₂ and standard deviation σ₂. To test whether there is any significant difference between the two sample variances, we apply the two-sample variance test. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: there is no significant difference between the sample and population variances
vs
H₁: there is a significant difference between the sample and population variances
or
H₀: σ₁² = σ₂²
vs
H₁: σ₁² ≠ σ₂²

2. For the given data select an appropriate test statistic. In the small-sample case the ratio of the two sample mean squares follows the F-distribution, so we apply the F-test statistic (taking the larger mean square in the numerator, here s₂²):

F = s₂² / s₁²

3. Compute the above statistic and compare it with the critical value observed from the F-table at level α with (n₂ − 1, n₁ − 1) degrees of freedom.

4. If F ≤ F_α we accept H₀.
If F > F_α we reject H₀.
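A sketch of the variance-ratio test, putting the larger mean square in the numerator as noted above. The two samples and the tabled value F₀.₀₅(5, 4) ≈ 6.26 are illustrative assumptions:

```python
import statistics

# Hypothetical samples for H0: the two population variances are equal
x = [20.1, 22.3, 19.8, 21.5, 23.0, 20.7]
y = [21.0, 21.4, 20.8, 21.2, 21.1]

s1_sq = statistics.variance(x)    # n-1 divisor
s2_sq = statistics.variance(y)

# F = larger mean square / smaller mean square, with matching d.f. order
if s1_sq >= s2_sq:
    F, df = s1_sq / s2_sq, (len(x) - 1, len(y) - 1)
else:
    F, df = s2_sq / s1_sq, (len(y) - 1, len(x) - 1)

F_crit = 6.26    # F-table value F_{0.05}(5, 4)
decision = "accept H0" if F <= F_crit else "reject H0"
print(round(F, 3), df, decision)
```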

One-sample variance test:

Assumptions: 1. The sample is drawn from a normal population.
2. The sample size is small.

Let x₁, x₂, x₃, …, xₙ be a random sample of size n with sample mean x̄ and sample mean square s², drawn from a normal population with mean μ and standard deviation σ. To test whether there is any significant difference between the sample and population variances, we apply the one-sample variance test. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: there is no significant difference between the sample and population variances
vs
H₁: there is a significant difference between the sample and population variances
or
H₀: σ² = S²
vs
H₁: σ² ≠ S²

2. For the given data select an appropriate test statistic. In the small-sample case the scaled ratio of sample to population variance follows the χ²-distribution, so we apply the χ²-test statistic:

χ² = (n − 1)S² / σ²

where S² = Σᵢ (xᵢ − x̄)²/(n − 1), x̄ is the sample mean, σ is the population standard deviation, and n is the sample size.

3. Compute the above statistic and compare it with the critical values observed from the χ²-table at level α with n − 1 degrees of freedom.

4. If χ² ≤ χ²_{α/2} we accept H₀.
If χ² > χ²_{α/2} we reject H₀.
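A sketch of the χ² variance test. The sample, the hypothesised σ² = 0.25, and the two-tailed χ²-table values for 7 d.f. (≈ 1.690 and 16.013 at α = 0.05) are illustrative assumptions:

```python
import statistics

# Hypothetical sample, testing H0: population variance sigma^2 = 0.25
sample = [4.8, 5.1, 5.4, 4.6, 5.2, 5.0, 4.7, 5.3]
sigma0_sq = 0.25

n = len(sample)
s_sq = statistics.variance(sample)   # n-1 divisor

# chi^2 = (n - 1) S^2 / sigma^2
chi_sq = (n - 1) * s_sq / sigma0_sq

# Two-tailed chi^2-table values at alpha = 0.05 with n - 1 = 7 d.f.
lower_crit, upper_crit = 1.690, 16.013
decision = "accept H0" if lower_crit <= chi_sq <= upper_crit else "reject H0"
print(round(chi_sq, 3), decision)
```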

Test for goodness of fit

Assumptions: 1. The sample is drawn from a normal population.
2. The sample size is small.

Let O₁, O₂, O₃, …, Oᵢ, …, Oₙ be the n observed frequencies and E₁, E₂, E₃, …, Eᵢ, …, Eₙ their expected frequencies. To test whether there is any significant difference between the observed and expected frequencies, we apply the test for goodness of fit. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: the fit of the given data is good
vs
H₁: the fit of the given data is bad

2. For the given data select an appropriate test statistic. In the small-sample case the differences between observed and expected frequencies follow the χ²-distribution, so we apply the χ²-test statistic:

χ² = Σᵢ (Oᵢ − Eᵢ)² / Eᵢ

3. Compute the above statistic and compare it with the critical value observed from the χ²-table at level α with n − 1 degrees of freedom.

4. If χ² ≤ χ²_α we accept H₀.
If χ² > χ²_α we reject H₀.
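The goodness-of-fit statistic is a one-line sum. The die-roll counts and the tabled value χ²₀.₀₅,₅ ≈ 11.07 are illustrative assumptions:

```python
# Hypothetical counts from 120 die throws; H0: the die is fair,
# so every expected frequency is 120 / 6 = 20.
observed = [18, 23, 16, 21, 25, 17]
expected = [20] * 6

# chi^2 = sum (O_i - E_i)^2 / E_i
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

chi_crit = 11.07    # chi^2-table value at alpha = 0.05 with n - 1 = 5 d.f.
decision = "fit is good (accept H0)" if chi_sq <= chi_crit else "fit is bad (reject H0)"
print(round(chi_sq, 3), decision)
```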

Test for independence of attributes

Assumptions: 1. The sample is drawn from a normal population.
2. The sample size is small.

Let O₁₁, O₁₂, O₁₃, …, Oᵢⱼ, …, O_mn be the observed frequencies and E₁₁, E₁₂, E₁₃, …, Eᵢⱼ, …, E_mn their expected frequencies. To test whether there is any significant difference between the observed and expected frequencies, we apply the test for independence of attributes. The procedure is as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: the two attributes are independent
vs
H₁: the two attributes are dependent

2. For the given data select an appropriate test statistic. In the small-sample case the differences between observed and expected frequencies follow the χ²-distribution, so we apply the χ²-test statistic:

χ² = Σᵢ Σⱼ (Oᵢⱼ − Eᵢⱼ)² / Eᵢⱼ

where Eᵢⱼ = E(AᵢBⱼ) = (Aᵢ)(Bⱼ)/N for i = 1, 2, …, m; j = 1, 2, …, n.

3. Compute the above statistic and compare it with the critical value observed from the χ²-table at level α with (m − 1)(n − 1) degrees of freedom.

4. If χ² ≤ χ²_α we accept H₀.
If χ² > χ²_α we reject H₀.
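The expected frequencies Eᵢⱼ = (row total)(column total)/N can be built from the margins. The 2×3 table and the tabled value χ²₀.₀₅,₂ ≈ 5.991 are illustrative assumptions:

```python
# Hypothetical 2x3 contingency table; H0: the two attributes are independent
observed = [[20, 30, 10],
            [25, 25, 20]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
N = sum(row_totals)

# chi^2 = sum over cells of (O_ij - E_ij)^2 / E_ij, with E_ij = (A_i)(B_j)/N
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / N
        chi_sq += (o - e) ** 2 / e

chi_crit = 5.991    # chi^2-table value, (m-1)(n-1) = 1 * 2 = 2 d.f.
decision = "independent (accept H0)" if chi_sq <= chi_crit else "dependent (reject H0)"
print(round(chi_sq, 3), decision)
```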

2×2 contingency table

When the number of rows and the number of columns are both equal to 2, the table is termed a 2×2 contingency table. It has the following form:

Attributes   B₁       B₂       Totals
A₁           a        b        a + b
A₂           c        d        c + d
Totals       a + c    b + d    N = a + b + c + d

Here a, b, c and d are the cell frequencies and N is the total number of observations.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: the two attributes are independent
vs
H₁: the two attributes are dependent

2. In the case of a 2×2 contingency table, the χ²-test statistic can be calculated directly as

χ² = N(ad − bc)² / ((a + b)(c + d)(a + c)(b + d))

3. Compute the above statistic and compare it with the critical value observed from the χ²-table at level α with (2 − 1)(2 − 1) = 1 degree of freedom.

4. If χ² ≤ χ²_α we accept H₀.
If χ² > χ²_α we reject H₀.
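The shortcut formula needs only the four cell frequencies. The values of a, b, c, d and the tabled value χ²₀.₀₅,₁ ≈ 3.841 are illustrative assumptions:

```python
# Hypothetical 2x2 cell frequencies a, b, c, d
a, b, c, d = 30, 20, 15, 35
N = a + b + c + d

# chi^2 = N (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)), 1 d.f.
chi_sq = N * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

chi_crit = 3.841    # chi^2-table value at alpha = 0.05 with 1 d.f.
decision = "independent (accept H0)" if chi_sq <= chi_crit else "dependent (reject H0)"
print(round(chi_sq, 3), decision)
```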

Yates's correction for continuity

If any one of the cell frequencies is less than 5, we use Yates's correction to make the distribution continuous. The correction is made by adding 0.5 to the least cell frequency and adjusting the other cell frequencies so that the row and column totals remain the same. Suppose the first cell frequency is to be corrected; then the contingency table becomes:

Attributes   B₁         B₂         Totals
A₁           a + 0.5    b − 0.5    a + b
A₂           c − 0.5    d + 0.5    c + d
Totals       a + c      b + d      N = a + b + c + d

1. For the given data develop the null and alternative hypotheses as follows:
H₀: the two attributes are independent
vs
H₁: the two attributes are dependent

2. In the case of a 2×2 contingency table, the corrected χ²-test statistic can be calculated directly as

χ² = N(|ad − bc| − N/2)² / ((a + b)(c + d)(a + c)(b + d))

3. Compute the above statistic and compare it with the critical value observed from the χ²-table at level α with 1 degree of freedom.

4. If χ² ≤ χ²_α we accept H₀.
If χ² > χ²_α we reject H₀.
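A sketch of the corrected statistic for a hypothetical 2×2 table containing a small cell frequency (the counts and the tabled value χ²₀.₀₅,₁ ≈ 3.841 are illustrative assumptions):

```python
# Hypothetical 2x2 frequencies with small cells, so Yates's correction applies
a, b, c, d = 10, 4, 3, 12
N = a + b + c + d

# chi^2 = N (|ad - bc| - N/2)^2 / ((a+b)(c+d)(a+c)(b+d)), 1 d.f.
chi_sq = N * (abs(a * d - b * c) - N / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d))

chi_crit = 3.841    # chi^2-table value at alpha = 0.05 with 1 d.f.
decision = "independent (accept H0)" if chi_sq <= chi_crit else "dependent (reject H0)"
print(round(chi_sq, 3), decision)
```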

ANOVA:

Introduction: ANOVA is a statistical method that stands for analysis of variance. ANOVA was developed by Ronald Fisher in 1918 and is an extension of the t- and z-tests. Before ANOVA, the t-test and z-test were commonly used, but the t-test cannot be applied to more than two groups. Fisher's analysis of variance is used to analyse the variance between and within groups whenever there are more than two groups.

Definition: Analysis of variance compares one group of variances with another group of variances.

Types of causes: In analysis of variance the total variance arises from two types of causes, namely
1. Chance causes
2. Assignable causes
Chance causes: Chance causes occur in a random manner and only rarely, and they contribute little variance. They arise from events such as epidemics, earthquakes and storms, and cannot be controlled by human hand.

Assignable causes: Assignable causes are the major source of variance. They can be assigned and contribute a large amount of variance, for example fertilizers and pesticides, and they can be controlled by human hand.

Experimental unit: In an experiment, the unit that receives a treatment is called an experimental unit.
Treatment: In an experiment, the physical substance or process applied to the experimental units is called a treatment.

One-way classification:

Assumptions:
1. The populations from which the samples are drawn should be normally distributed.
2. Independence of cases: the sample cases should be independent of each other.
3. Homogeneity: the variances of the groups should be approximately equal.

In analysis of variance, when the total assignable-cause variance is classified into homogeneous rows according to a single factor, the analysis is called one-way classification. It is carried out as follows.
Let the total of N experimental units be classified into m rows, the i-th row having nᵢ units, so that N = Σᵢ nᵢ (and N = m × n when every row has the same size n).
The one-way classification in tabular form is as follows:
Row    1     2     3     …   j     …   nᵢ      Totals   Means
1      y₁₁   y₁₂   y₁₃   …   y₁ⱼ   …   y₁ₙ₁    T₁.      ȳ₁.
2      y₂₁   y₂₂   y₂₃   …   y₂ⱼ   …   y₂ₙ₂    T₂.      ȳ₂.
3      y₃₁   y₃₂   y₃₃   …   y₃ⱼ   …   y₃ₙ₃    T₃.      ȳ₃.
⋮
i      yᵢ₁   yᵢ₂   yᵢ₃   …   yᵢⱼ   …   yᵢₙᵢ    Tᵢ.      ȳᵢ.
⋮
m      yₘ₁   yₘ₂   yₘ₃   …   yₘⱼ   …   yₘₙₘ    Tₘ.      ȳₘ.
                                             G.T      ȳ..

The mathematical model of one-way classification: The model is

yᵢⱼ = μ + αᵢ + εᵢⱼ

where yᵢⱼ is the yield of the j-th unit in the i-th row, μ is the overall average yield, αᵢ is the effect of the i-th row, and εᵢⱼ is the random-cause effect.

The one-way classification statistical analysis: The analysis proceeds as follows.
1. For the given data develop the null and alternative hypotheses as follows:
H₀: α₁ = α₂ = α₃ = … = αₘ = 0
vs
H₁: at least one αᵢ ≠ 0

2. For the given data select an appropriate test statistic. In analysis of variance the ratio of the mean sum of squares for rows to the error mean sum of squares follows the F-distribution, so we apply the F-test statistic as follows.

Sum of squares:

Σᵢ Σⱼ (yᵢⱼ − ȳ..)² = Σᵢ nᵢ(ȳᵢ. − ȳ..)² + Σᵢ Σⱼ (yᵢⱼ − ȳᵢ.)²

T.S.S = S.S.R + S.S.E

Degrees of freedom:

Degrees of freedom for T.S.S: N − 1
Degrees of freedom for S.S.R: m − 1
Degrees of freedom for S.S.E: N − m

Mean sum of squares:

Mean sum of squares for rows: M.S.R = S.S.R/(m − 1)
Mean sum of squares for error: M.S.E = S.S.E/(N − m)

3. Compute the above statistic and compare it with the critical value observed from the F-table at level α with (m − 1, N − m) degrees of freedom, as set out in the ANOVA table:

ANOVA table
Source of variance   d.f.    S.S     M.S.S                    F_cal                  F_α
Rows effect          m − 1   S.S.R   M.S.R = S.S.R/(m − 1)    F_cal = M.S.R/M.S.E    F(m − 1, N − m)
Error effect         N − m   S.S.E   M.S.E = S.S.E/(N − m)
Totals               N − 1   T.S.S

4. If F_cal ≤ F_α we accept H₀.
If F_cal > F_α we reject H₀.
For evaluating the data the following formulae are used:

1. Total number of observations: N
2. Grand total: G.T = Σᵢ Σⱼ yᵢⱼ
3. Correction factor: C.F = (G.T)²/N
4. Raw sum of squares: R.S.S = Σᵢ Σⱼ yᵢⱼ²
5. Total sum of squares: T.S.S = R.S.S − C.F
6. Sum of squares due to rows: S.S.R = (Σᵢ Tᵢ.²)/n − C.F
7. Sum of squares due to error: S.S.E = T.S.S − S.S.R
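The shortcut formulae can be run through end to end. The 3×4 layout and the tabled value F₀.₀₅(2, 9) ≈ 4.26 are illustrative assumptions:

```python
# Hypothetical one-way layout: m = 3 treatments, n = 4 observations each
rows = [[8, 10, 12, 10],
        [14, 13, 15, 14],
        [11, 9, 10, 10]]

m = len(rows)
n = len(rows[0])
N = m * n

GT = sum(sum(r) for r in rows)                  # grand total
CF = GT ** 2 / N                                # correction factor
RSS = sum(y ** 2 for r in rows for y in r)      # raw sum of squares
TSS = RSS - CF                                  # total sum of squares
SSR = sum(sum(r) ** 2 for r in rows) / n - CF   # sum of squares due to rows
SSE = TSS - SSR                                 # sum of squares due to error

MSR = SSR / (m - 1)
MSE = SSE / (N - m)
F = MSR / MSE

F_crit = 4.26    # F-table value F_{0.05}(2, 9)
decision = "accept H0" if F <= F_crit else "reject H0"
print(round(F, 3), decision)
```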

Two-way classification:

Assumptions:
1. The populations from which the samples are drawn should be normally distributed.
2. Independence of cases: the sample cases should be independent of each other.
3. Homogeneity: the variances of the groups should be approximately equal.

In analysis of variance, when the total assignable-cause variance is classified into homogeneous rows and columns according to two factors, the analysis is called two-way classification. It is carried out as follows.

Let the total of N experimental units be classified into m rows and n columns, i.e. N = m × n.
The two-way classification in tabular form is as follows:

Row      1     2     3     …   j     …   n     Totals   Means
1        y₁₁   y₁₂   y₁₃   …   y₁ⱼ   …   y₁ₙ   T₁.      ȳ₁.
2        y₂₁   y₂₂   y₂₃   …   y₂ⱼ   …   y₂ₙ   T₂.      ȳ₂.
3        y₃₁   y₃₂   y₃₃   …   y₃ⱼ   …   y₃ₙ   T₃.      ȳ₃.
⋮
i        yᵢ₁   yᵢ₂   yᵢ₃   …   yᵢⱼ   …   yᵢₙ   Tᵢ.      ȳᵢ.
⋮
m        yₘ₁   yₘ₂   yₘ₃   …   yₘⱼ   …   yₘₙ   Tₘ.      ȳₘ.
Totals   T.₁   T.₂   T.₃   …   T.ⱼ   …   T.ₙ   G.T      ȳ..
Means    ȳ.₁   ȳ.₂   ȳ.₃   …   ȳ.ⱼ   …   ȳ.ₙ

The mathematical model of two-way classification: The model is

yᵢⱼ = μ + αᵢ + βⱼ + εᵢⱼ

where yᵢⱼ is the yield in the i-th row and j-th column, μ is the overall average yield, αᵢ is the effect of the i-th row, βⱼ is the effect of the j-th column, and εᵢⱼ is the random-cause effect.

The two-way classification statistical analysis: The analysis proceeds as follows.

1. For the given data develop the null and alternative hypotheses as follows:
H₀: α₁ = α₂ = α₃ = … = αₘ = 0 and β₁ = β₂ = β₃ = … = βₙ = 0
vs
H₁: at least one αᵢ ≠ 0 or at least one βⱼ ≠ 0

2. For the given data select an appropriate test statistic. In analysis of variance the ratio of a mean sum of squares to the error mean sum of squares follows the F-distribution, so we apply the F-test statistic as follows.

Sum of squares:

Σᵢ Σⱼ (yᵢⱼ − ȳ..)² = n Σᵢ (ȳᵢ. − ȳ..)² + m Σⱼ (ȳ.ⱼ − ȳ..)² + Σᵢ Σⱼ (yᵢⱼ − ȳᵢ. − ȳ.ⱼ + ȳ..)²

T.S.S = S.S.R + S.S.C + S.S.E

Degrees of freedom:

Degrees of freedom for T.S.S: N − 1
Degrees of freedom for S.S.R: m − 1
Degrees of freedom for S.S.C: n − 1
Degrees of freedom for S.S.E: (m − 1)(n − 1)

Mean sum of squares:

Mean sum of squares for rows: M.S.R = S.S.R/(m − 1)
Mean sum of squares for columns: M.S.C = S.S.C/(n − 1)
Mean sum of squares for error: M.S.E = S.S.E/((m − 1)(n − 1))

3. Compute the above statistics and compare them with the critical values observed from the F-table at level α with (m − 1, (m − 1)(n − 1)) and (n − 1, (m − 1)(n − 1)) degrees of freedom, as set out in the ANOVA table:

ANOVA table
Source of variance   d.f.             S.S     M.S.S                            F_cal                      F_α
Rows effect          m − 1            S.S.R   M.S.R = S.S.R/(m − 1)            F_rows = M.S.R/M.S.E       F(m − 1, (m − 1)(n − 1))
Columns effect       n − 1            S.S.C   M.S.C = S.S.C/(n − 1)            F_columns = M.S.C/M.S.E    F(n − 1, (m − 1)(n − 1))
Error effect         (m − 1)(n − 1)   S.S.E   M.S.E = S.S.E/((m − 1)(n − 1))
Totals               N − 1            T.S.S

4. If F_cal ≤ F_α we accept H₀.
If F_cal > F_α we reject H₀.

For evaluating the data the following formulae are used:

1. Total number of observations: N
2. Grand total: G.T = Σᵢ Σⱼ yᵢⱼ
3. Correction factor: C.F = (G.T)²/N
4. Raw sum of squares: R.S.S = Σᵢ Σⱼ yᵢⱼ²
5. Total sum of squares: T.S.S = R.S.S − C.F
6. Sum of squares due to rows: S.S.R = (Σᵢ Tᵢ.²)/n − C.F
7. Sum of squares due to columns: S.S.C = (Σⱼ T.ⱼ²)/m − C.F
8. Sum of squares due to error: S.S.E = T.S.S − S.S.R − S.S.C
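The two-way shortcut formulae can also be run end to end. The 3×4 layout (one observation per cell) is an illustrative assumption:

```python
# Hypothetical two-way layout: m = 3 rows (treatments), n = 4 columns (blocks)
table = [[12, 14, 11, 13],
         [16, 17, 15, 18],
         [10, 12, 9, 11]]

m, n = len(table), len(table[0])
N = m * n

GT = sum(sum(r) for r in table)                        # grand total
CF = GT ** 2 / N                                       # correction factor
RSS = sum(y ** 2 for r in table for y in r)            # raw sum of squares
TSS = RSS - CF
SSR = sum(sum(r) ** 2 for r in table) / n - CF         # rows
SSC = sum(sum(c) ** 2 for c in zip(*table)) / m - CF   # columns
SSE = TSS - SSR - SSC

MSR = SSR / (m - 1)
MSC = SSC / (n - 1)
MSE = SSE / ((m - 1) * (n - 1))

F_rows = MSR / MSE        # compare with F-table value F(m-1, (m-1)(n-1))
F_cols = MSC / MSE        # compare with F-table value F(n-1, (m-1)(n-1))
print(round(F_rows, 3), round(F_cols, 3))
```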
