
ALLAMA IQBAL OPEN UNIVERSITY, ISLAMABAD

(Department of Secondary Teacher Education)

ASSIGNMENT NO. 2

Roll No:

PROGRAM: B.Ed (1.5 Years)

Course: Educational Statistics (8614)

Semester: Autumn 2020


Q1. Define hypothesis testing and the logic behind hypothesis testing.

Hypothesis testing:

Hypothesis testing is a statistical method that uses sample data to evaluate a
hypothesis about a population parameter. A hypothesis test is usually used in the context of a
research study. Depending on the type of research and the type of data, the details of the
hypothesis test will change from one situation to another. Hypothesis testing is a formalized
procedure that follows a standard series of operations. In this way, a researcher has a
standardized method for evaluating the results of his research study. Other researchers will
recognize and understand exactly how the data were evaluated and how conclusions were
drawn.

Logic behind hypothesis testing:

According to Gravetter, the logic of hypothesis testing is as follows:

1. First, the researcher states a hypothesis concerning the value of the population mean. For
example, we might hypothesize that the mean IQ for registered voters in Pakistan is μ = 100.
2. Before actually selecting a sample, the researcher uses the hypothesis to predict the
characteristics that the sample should have. For example, if he hypothesizes that the
population mean IQ is 100, then he would predict that the sample should have a
mean around 100. It should be similar to the population, but there is always the chance of a
certain amount of error.
3. Next, the researcher obtains a random sample from the population. For example, he might
select a random sample of n = 200 registered voters and compute the mean IQ for the sample.
4. Finally, he compares the obtained data with the prediction that was made from the
hypothesis. If the sample mean is consistent with the prediction, he concludes that the
hypothesis is reasonable. If there is a big difference between the data and the prediction,
he decides that the hypothesis is wrong.
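The four steps above can be sketched in code. This is a minimal illustration, assuming simulated IQ data (the sample values are generated, not real survey results); it computes a one-sample t statistic to compare the sample mean against the hypothesized population mean.

```python
import math
import random
from statistics import mean, stdev

random.seed(42)

# Step 1: state the hypothesis - the population mean IQ is 100.
mu_0 = 100

# Step 2: predict that the sample mean should fall near 100.
# Step 3: obtain a random sample of n = 200 voters (simulated here).
sample = [random.gauss(100, 15) for _ in range(200)]

# Step 4: compare the obtained data with the prediction via a t statistic.
m = mean(sample)
s = stdev(sample)
t = (m - mu_0) / (s / math.sqrt(len(sample)))
print(f"sample mean = {m:.1f}, t = {t:.2f}")
```

A t value near 0 is consistent with the hypothesis; a large |t| suggests the hypothesized mean is wrong.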
Q2. Explain types of ANOVA. Describe possible situations in which each type should be
used.

ANOVA:
An ANOVA test is a way to find out whether survey or experiment results are
significant. In other words, it helps you figure out whether you need to reject the null
hypothesis or accept the alternate hypothesis.

There are two main types of ANOVA:

1: One-Way ANOVA

A one-way ANOVA is used to compare two or more means from
independent (unrelated) groups using the F-distribution. The null hypothesis for the
test is that the means are equal. Therefore, a significant result means that at least
two of the means are unequal.

Situation 1: You have a group of individuals randomly split into smaller groups
completing different tasks. For example, you might be studying the effects of tea on
weight loss and form three groups: black tea, green tea, and no tea.

Situation 2: Similar to Situation 1, but in this case the individuals are split into groups based
on an attribute they possess. For example, you might be studying the leg strength of people
according to weight. You could split participants into weight categories (obese, overweight,
and normal) and measure their leg strength on a weight machine.
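A minimal one-way ANOVA for the tea example in Situation 1 can be run with SciPy. The weight-loss figures (in kg) below are made up purely for illustration.

```python
from scipy.stats import f_oneway

# Hypothetical weight loss (kg) for each tea group.
black_tea = [2.1, 1.8, 2.5, 2.0, 1.9]
green_tea = [2.4, 2.6, 2.2, 2.8, 2.5]
no_tea = [0.3, 0.5, 0.1, 0.4, 0.2]

# f_oneway tests the null hypothesis that all group means are equal.
f_stat, p_value = f_oneway(black_tea, green_tea, no_tea)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject the null hypothesis: at least two group means differ.")
```

With these illustrative numbers the no-tea group's mean is far below the others, so the test comes out significant.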

2: Two-Way ANOVA

A two-way ANOVA is an extension of the one-way ANOVA. With a one-way
ANOVA, you have one independent variable affecting a dependent variable. With a
two-way ANOVA, there are two independent variables. Use a two-way ANOVA when you have
one measurement variable (i.e. a quantitative variable) and two nominal variables. In
other words, if your experiment has a quantitative outcome and you have two
categorical explanatory variables, a two-way ANOVA is appropriate. For example, you
might want to find out if there is an interaction between income and gender for anxiety
level at job interviews. The anxiety level is the outcome, or the variable that can be
measured. Gender and income are the two categorical variables. These categorical variables
are also the independent variables, which are called factors in a two-way ANOVA. The
factors can be split into levels. In the above example, income could be split into
three levels: low, middle, and high income. Gender could be split into three levels: male,
female, and transgender. Treatment groups are all possible combinations of the factors.
In this example there would be 3 × 3 = 9 treatment groups.
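The treatment groups in the example above are simply every combination of the two factors' levels, which a quick sketch makes concrete:

```python
from itertools import product

# The factor levels named in the example above.
income_levels = ["low", "middle", "high"]
gender_levels = ["male", "female", "transgender"]

# Treatment groups are all possible combinations of the two factors.
treatment_groups = list(product(income_levels, gender_levels))
print(len(treatment_groups))  # 3 x 3 = 9
```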

Q3. What is the range of the correlation coefficient? Explain strong, moderate and weak
relationships.

Introduction:
The correlation coefficient represents the relatedness of two variables, and
how well the value of one can be used to predict the value of the other.
The correlation coefficient r ranges between −1 and +1. A positive r value shows that
as one variable increases so does the other, and an r of +1 indicates that knowing
the value of one variable permits perfect prediction of the other. A negative r value
indicates that as one variable increases the other variable decreases, and an r of −1
shows that knowing the value of one variable permits perfect prediction of the other. A
correlation coefficient of 0 indicates no relationship between the variables.

Strong relationship:

For a positive correlation, the square of the correlation (r-squared) is
frequently used to show the strength of the relationship between the two variables. R-squared
ranges from 0 to 1, and since squared values under 1 shrink quickly, a large
value of r-squared implies a strong relationship.

Weak relationship:

Values between 0 and 0.3 (0 and −0.3) indicate a weak positive
(negative) linear relationship.

Moderate relationship:

Values between 0.3 and 0.7 (−0.3 and −0.7) indicate a moderate positive
(negative) linear relationship, while a correlation of 0 indicates no relationship or fit
at all.
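The conventional 0.3 and 0.7 cut-offs described above can be applied in code. This is a small sketch with made-up data; the Pearson r is computed by hand from its definition.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from its definition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def strength(r):
    """Classify |r| using the 0.3 / 0.7 cut-offs described above."""
    a = abs(r)
    if a < 0.3:
        return "weak"
    if a < 0.7:
        return "moderate"
    return "strong"

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)
print(round(r, 2), strength(r))  # 0.77 strong
```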

Q4. Explain the chi-square independence test. In what situations should it be applied?

Chi square independence test:

A chi-square (χ2) independence test is the second significant type
of chi-square test. It is used to investigate the relationship between two categorical variables.
Each of these variables can have two or more categories. It determines whether there is a
significant association between two nominal (categorical) variables. The frequency of one
nominal variable is compared across the different values of the second nominal variable. The
data can be displayed in an R×C contingency table, where R is the number of rows and C is the
number of columns. For instance, suppose the researcher wants to examine the relationship
between gender (male and female) and empathy (high versus low). The researcher will use the
chi-square test of independence. If the null hypothesis is accepted, there is no relationship
between gender and empathy. If the null hypothesis is rejected, the conclusion is that there
is a relationship between gender and empathy (for example, females tend to score higher on
empathy and males tend to score lower).
The chi-square independence test should be applied in the following situations:

There are some broad assumptions which should be met:

1. Random Sample - The sample should be selected using a simple random sampling
method.
2. Variables - Both variables under study should be categorical.
3. Independent Observations - Each person or case should be counted only once,
and none should appear in more than one category or group. The data from
one subject should not influence the data from another subject.

4. If the data are displayed in a contingency table, the expected frequency count for
each cell of the table should be at least 5.
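The gender-by-empathy example above can be run with SciPy's chi-square test of independence. The 2×2 contingency table below is hypothetical, chosen only so every expected cell count is at least 5 (assumption 4).

```python
from scipy.stats import chi2_contingency

# Hypothetical observed counts:
#              high empathy  low empathy
observed = [[40, 10],   # female
            [25, 25]]   # male

# chi2_contingency tests the null hypothesis that the two
# variables (gender and empathy) are independent.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

if p < 0.05:
    print("Reject the null hypothesis: gender and empathy are related.")
```

For a 2×2 table the degrees of freedom are (R−1)(C−1) = 1; with these counts the test comes out significant.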

Q5. Correlation is a prerequisite of regression analysis. Explain.

Introduction:

Correlation measures the degree and direction to which two variables are related.
It does not fit a line through the data points. It does not have to consider cause and
effect. It does not matter which of the two variables is called dependent and which is
called independent. Regression, on the other hand, finds the best line that predicts the
dependent variable from the independent variable. The decision of which variable is called
dependent and which independent is an important matter in regression, as you will get a
different best-fit line if you swap the two variables, i.e. dependent to independent and
independent to dependent. The line that best predicts the independent variable from the
dependent variable will not be the same as the line that predicts the dependent variable
from the independent variable.
Correlation is a prerequisite of regression analysis:

Establishing correlation is a prerequisite for regression because:

1. Correlation analysis describes the present or past situation. It uses sample
data to infer a property of the source population or process. There is no
looking into the future. The purpose of linear regression, on the other hand, is to
define a model (a linear equation) which can be used to predict the
results of a designed experiment.
2. Correlation fundamentally uses the correlation coefficient, r. Regression also
uses r, but uses a variety of other statistics as well.
3. Correlation analysis and linear regression both attempt to determine whether two variables
change in sync. Linear correlation is limited to two variables, which can be plotted on a
2-dimensional x-y graph. Linear regression can extend to three or more
variables/dimensions.
4. Correlation analysis does not attempt to identify a cause-effect relationship; regression
does.
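Point 2 above can be demonstrated directly: correlation and regression report the same r, while regression additionally returns the slope and intercept of the predictive line. The data values below are illustrative.

```python
from scipy.stats import linregress, pearsonr

# Hypothetical, nearly linear data.
x = [1, 2, 3, 4, 5, 6]
y = [2.0, 4.1, 5.9, 8.2, 9.8, 12.1]

# Regression: best-fit line y = slope * x + intercept, plus r.
result = linregress(x, y)

# Correlation: just r (and its p-value).
r, _ = pearsonr(x, y)

print(f"correlation r = {r:.3f}, regression r = {result.rvalue:.3f}")
print(f"best-fit line: y = {result.slope:.2f}x + {result.intercept:.2f}")
```

Note that `linregress(y, x)` would yield a different line, reflecting the point that swapping the dependent and independent variables changes the best fit, while r itself stays the same.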
