
Uzair Talpur

1811162

BBA 4B

Statistical Inference Assignment

Assignment 1

 -Define inferential statistics and its two types: testing of hypotheses and estimation of parameters.

Inferential statistics are used to draw inferences about a population from a sample. Consider
an experiment in which 10 subjects who performed a task after 24 hours of sleep deprivation
scored 12 points lower than 10 subjects who performed after a normal night's sleep. Is the
difference real or could it be due to chance? How much larger could the real difference be
than the 12 points found in the sample? These are the types of questions answered by
inferential statistics. 

There are two main methods used in inferential statistics: estimation and
hypothesis testing. In estimation, the sample is used to estimate a parameter, and a confidence
interval about the estimate is constructed. In the most common use of hypothesis testing, a
"straw man" null hypothesis is put forward and it is determined whether the data are strong
enough to reject it. For the sleep deprivation study, the null hypothesis would be that sleep
deprivation has no effect on performance. 
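The sleep-deprivation example above can be sketched as a two-sample comparison. The scores below are hypothetical numbers invented to match the 12-point difference in the text, and the pooled t statistic is one common way to judge whether such a difference could be due to chance:

```python
import math
import statistics

# Hypothetical scores (illustrative only): 10 sleep-deprived subjects
# and 10 subjects who had a normal night's sleep.
deprived = [50, 52, 48, 55, 47, 51, 49, 53, 50, 45]
rested   = [62, 60, 65, 58, 64, 61, 63, 59, 62, 66]

mean_diff = statistics.mean(rested) - statistics.mean(deprived)

# Pooled two-sample t statistic (equal sample sizes, n = 10 each).
n = len(deprived)
sp2 = (statistics.variance(deprived) + statistics.variance(rested)) / 2
t = mean_diff / math.sqrt(sp2 * (2 / n))

print(mean_diff)      # 12, the observed difference from the example
print(round(t, 2))    # a large t value suggests the difference is real
```

A t value far from zero is evidence against the null hypothesis that sleep deprivation has no effect.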

 -Define unbiasedness, consistency, efficiency, sufficiency

Unbiasedness

A statistic t is said to be an Unbiased Estimator of parameter 𝜃 if the expected value of t is 𝜃:

E(t) = 𝜃

Otherwise, the estimator is said to be ‘biased’. The bias of a statistic in estimating 𝜃 is given as

Bias = E(t) − 𝜃

Let 𝑥1, 𝑥2, …, 𝑥𝑛 be a random sample drawn from a population with mean 𝜇 and variance 𝜎². Then

Sample mean: 𝑥̄ = (𝑥1 + 𝑥2 + … + 𝑥𝑛)/n
Sample variance: 𝑆² = Σ(𝑥𝑖 − 𝑥̄)²/n

The sample mean 𝑥̄ is an unbiased estimator of the population mean 𝜇, because E(𝑥̄) = 𝜇.
The sample variance 𝑆² is a biased estimator of the population variance 𝜎², because

E(𝑆²) = ((n − 1)/n) 𝜎² ≠ 𝜎²

An unbiased estimator of the population variance 𝜎² is given by

s² = Σ(𝑥𝑖 − 𝑥̄)²/(n − 1)

because E(s²) = 𝜎².
The distinction between 𝑆² and s² is that only the denominators are different: 𝑆² is the
variance of the sample observations, but s² is the ‘unbiased estimator’ of the variance (𝜎²)
in the population. 
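The distinction between 𝑆² (divide by n) and s² (divide by n − 1) maps directly onto Python's standard library, which provides both estimators. A small sketch with made-up data:

```python
import statistics

# A small illustrative sample (hypothetical data).
x = [2, 4, 4, 4, 5, 5, 7, 9]

S2 = statistics.pvariance(x)  # divides by n     -> the biased estimator S^2
s2 = statistics.variance(x)   # divides by n - 1 -> the unbiased estimator s^2

print(S2)  # 4
print(s2)  # 32/7 ≈ 4.571, always slightly larger than S^2
```

Averaged over many samples, only s² would center on the true population variance.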

Consistency
A desirable property of good estimator is that its accuracy should increase when the sample
becomes larger. That is, the estimator is expected to come closer to the parameter as the size
of the sample increases.
A statistic 𝑡𝑛 computed from a sample of n observations is said to be a Consistent Estimator
of a parameter 𝜃, if it converges in probability to 𝜃 as n tends to infinity. This means that the
larger the sample size (n), the less is the chance that the difference between 𝑡𝑛 and 𝜃 will
exceed any fixed value. In symbols, given any arbitrarily small positive quantity 𝜖, if

P(|𝑡𝑛 − 𝜃| > 𝜖) → 0 as n → ∞

then the statistic 𝑡𝑛 will be a ‘consistent estimator’ of 𝜃. Consistency is a limiting property. Moreover, several consistent estimators
may exist for the same parameter. For example, in sampling from a normal population
N(𝜇, 𝜎²), both the sample mean and the sample median are consistent estimators of 𝜇.
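Consistency can be illustrated with a quick simulation. In this sketch (seed and sample sizes are arbitrary choices), the sample mean of Uniform(0, 1) draws moves toward the true mean 0.5 as n grows:

```python
import random
import statistics

# Illustrative sketch: the sample mean converges to the true mean 0.5
# as the sample size n increases.
random.seed(0)

for n in (10, 1_000, 100_000):
    sample = [random.random() for _ in range(n)]
    err = abs(statistics.mean(sample) - 0.5)
    print(n, round(err, 4))  # the error tends to shrink as n grows
```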

Efficiency

If we confine ourselves to unbiased estimates, there will, in general, exist more than one
consistent estimator of a parameter. For example, in sampling from a normal population
N(𝜇, 𝜎²), when 𝜎² is known, the sample mean 𝑥̄ is an unbiased and consistent estimator of 𝜇.
From symmetry it follows immediately that the sample median (Md) is an unbiased estimator
of 𝜇, since for a normal population the mean is the same as the median. Also, for large n,
Var(Md) ≈ 𝜋𝜎²/2n, so the median too is an unbiased and consistent estimator of 𝜇, although
its variance is larger than that of the mean (Var(𝑥̄) = 𝜎²/n).

Thus, there is a necessity for some further criterion which will enable us to choose between
estimators with the common property of consistency.
Such a criterion, which is based on the variance of the sampling distribution of the estimators, is
usually known as Efficiency.
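A small simulation makes the efficiency comparison concrete. This is only an illustrative sketch (sample size, replication count, and seed are arbitrary): for repeated normal samples, the sample mean varies less from sample to sample than the sample median.

```python
import random
import statistics

# Compare the sampling variance of the mean and the median
# for samples from N(0, 1). Theory: Var(Md) ≈ πσ²/2n > Var(x̄) = σ²/n.
random.seed(42)
n, reps = 25, 2000

means, medians = [], []
for _ in range(reps):
    sample = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)      # should be near σ²/n = 0.04
var_median = statistics.pvariance(medians)  # should be near πσ²/2n ≈ 0.063

print(var_mean < var_median)  # the mean is the more efficient estimator
```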

Sufficiency

A statistic is said to be a ‘sufficient estimator’ of a parameter 𝜃, if it contains all information


in the sample about 𝜃. If a statistic t exists such that the joint distribution of the sample is
expressible as the product of two factors, one of which is the sampling distribution of t and
contains 𝜃, but the other factor is independent of 𝜃, then t will be a sufficient estimator of 𝜃.
Thus if 𝑥1, 𝑥2, …, 𝑥𝑛 is a random sample from a population whose probability mass function
or probability density function is f(x, 𝜃), and t is a sufficient estimator of 𝜃, then we can write

f(𝑥1, 𝜃) f(𝑥2, 𝜃) ⋯ f(𝑥𝑛, 𝜃) = g(t, 𝜃) · h(𝑥1, 𝑥2, …, 𝑥𝑛)

where g(t, 𝜃) is the sampling distribution of t and contains 𝜃, but h(𝑥1, 𝑥2, …, 𝑥𝑛) is
independent of 𝜃. 
Since every occurrence of the parameter 𝜃 in the joint distribution of all the sample observations
is contained in the distribution of the statistic t, it is said that t alone can provide all
‘information’ about 𝜃 and is therefore “sufficient” for 𝜃. 
Sufficient estimators are the most desirable kind of estimators, but unfortunately they exist in
only relatively few cases. If a sufficient estimator exists, it can be found by the method of
maximum likelihood.
In random sampling from a Normal population N(𝜇, 𝜎²), the sample mean 𝑥̄ is a sufficient
estimator of 𝜇.

Assignment 2

 -define probability and its types with examples

Probability means possibility. It is a branch of mathematics that deals with the occurrence of
a random event; its value is expressed on a scale from zero to one. Probability was introduced
in mathematics to predict how likely events are to happen. The meaning of probability is
basically the extent to which something is likely to happen. This basic probability theory is
also used in probability distributions, where you learn the possibility of the outcomes of a
random experiment. To find the probability of a single event occurring, we must first know
the total number of possible outcomes. Probability is a measure of the likelihood of an event
occurring. Many events cannot be predicted with total certainty; we can only predict the
chance of an event occurring, i.e. how likely it is to happen. Probability ranges from 0 to 1,
where 0 means the event is impossible and 1 indicates a certain event. The probabilities of
all the events in a sample space add up to 1.

For example, when we toss a coin, we get either Head or Tail; only two outcomes are
possible (H, T). But if we toss two coins, there are three distinguishable results if order is
ignored: both coins show heads, both show tails, or one shows heads and one shows tails,
i.e. (H, H), (H, T), (T, T).

Types

Theoretical Probability
It is based on the possible chances of something to happen. The theoretical probability is
mainly based on the reasoning behind probability. For example, if a coin is tossed, the
theoretical probability of getting a head will be ½.
Experimental Probability
It is based on the observations of an experiment. The experimental probability is calculated
as the number of times an outcome occurs divided by the total number of trials. For example,
if a coin is tossed 10 times and heads is recorded 6 times, then the experimental probability
for heads is 6/10 or 3/5.
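The contrast between theoretical and experimental probability can be sketched with a short simulation (the seed and number of trials here are arbitrary, illustrative choices):

```python
import random

# Experimental probability of heads: count heads over many simulated tosses.
random.seed(1)
trials = 1000
heads = sum(random.choice("HT") == "H" for _ in range(trials))

experimental_p = heads / trials
print(experimental_p)  # close to the theoretical probability 1/2
```

As the number of trials grows, the experimental probability tends toward the theoretical value ½.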

Axiomatic Probability
In axiomatic probability, a set of rules or axioms is laid down which applies to all types of
probability. These axioms were set by Kolmogorov and are known as Kolmogorov’s three
axioms. With the axiomatic approach to probability, the chances of occurrence or
non-occurrence of events can be quantified.
Conditional Probability is the likelihood of an event or outcome occurring based on the
occurrence of a previous event or outcome.

 -define events and its types with examples

A probability event can be defined as a set of outcomes of an experiment. In other words, an


event in probability is the subset of the respective sample space. So, what is sample space?
The entire possible set of outcomes of a random experiment is the sample space or the
individual space of that experiment. The likelihood of occurrence of an event is known as 
probability. The probability of occurrence of any event lies between 0 and 1.

Events In Probability
The sample space for the tossing of three coins simultaneously is given by:
S = {(T , T , T) , (T , T , H) , (T , H , T) , (T , H , H ) , (H , T , T ) , (H , T , H) , (H , H, T) ,(H
, H , H)}
Suppose, if we want to find only the outcomes which have at least two heads; then the set of
all such possibilities can be given as:
E = { (H , T , H) , (H , H ,T) , (H , H ,H) , (T , H , H)}
Thus, an event is a subset of the sample space, i.e., E is a subset of S.
There could be a lot of events associated with a given sample space. For any event to occur,
the outcome of the experiment must be an element of the set of event E.
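The three-coin sample space and the event "at least two heads" from above can be enumerated directly:

```python
from itertools import product

# Sample space S for tossing three coins, and the event E = "at least two heads".
S = list(product("HT", repeat=3))
E = [outcome for outcome in S if outcome.count("H") >= 2]

print(len(S))           # 8 outcomes in the sample space
print(len(E))           # 4 outcomes: (H,H,H), (H,H,T), (H,T,H), (T,H,H)
print(len(E) / len(S))  # probability of at least two heads = 0.5
```

E is a subset of S, exactly as the definition of an event requires.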

Types
Impossible and Sure Events
If the probability of occurrence of an event is 0, such an event is called an impossible
event and if the probability of occurrence of an event is 1, it is called a sure event. In other
words, the empty set ϕ is an impossible event and the sample space S is a sure event.

Simple Events
Any event consisting of a single point of the sample space is known as a simple event in
probability. For example, if S = {56 , 78 , 96 , 54 , 89} and E = {78} then E is a simple event.
Compound Events
Contrary to the simple event, if any event consists of more than one single point of the
sample space then such an event is called a compound event. Considering the same example
again, if S = {56 ,78 ,96 ,54 ,89}, E1 = {56 ,54 }, E2 = {78 ,56 ,89 } then, E1 and
E2 represent two compound events.

Independent Events and Dependent Events


If the occurrence of any event is completely unaffected by the occurrence of any other event,
such events are known as an independent event in probability and the events which are
affected by other events are known as dependent events.

Mutually Exclusive Events


If the occurrence of one event excludes the occurrence of another event, such events are
mutually exclusive events i.e. two events don’t have any common point. For example, if S =
{1 , 2 , 3 , 4 , 5 , 6} and E1, E2 are two events such that E1 consists of numbers less than 3
and E2 consists of numbers greater than 4.
So, E1 = {1,2} and E2 = {5,6} .
Then, E1 and E2 are mutually exclusive.

Exhaustive Events
A set of events is called exhaustive if all the events together consume the entire sample space.

Complementary Events
For any event E1 there exists another event E1‘ which represents the remaining elements of
the sample space S.
E1 = S − E1‘
If a dice is rolled then the sample space S is given as S = {1 , 2 , 3 , 4 , 5 , 6 }. If event
E1 represents all the outcomes which is greater than 4, then E1 = {5, 6} and E1‘ = {1, 2, 3,
4}.
Thus E1‘ is the complement of the event E1.
Similarly, the complement of E1, E2, E3……….En will be represented as E1‘, E2‘,
E3‘……….En‘

Events Associated with “OR”


If two events E1 and E2 are associated with OR then it means that either E1 or E2 or both.
The union symbol (∪) is used to represent OR in probability.
Thus, the event E1U E2 denotes E1 OR E2.
If we have exhaustive events E1, E2, E3 ………En associated with sample space S
then,
E1 U E2 U E3U ………En = S

Events Associated with “AND”


If two events E1 and E2 are associated with AND then it means the intersection of elements
which is common to both the events. The intersection symbol (∩) is used to represent AND
in probability.
Thus, the event E1 ∩ E2 denotes E1 and E2.

Event E1 but not E2


It represents the difference between the two events. Event E1 but not E2 consists of all the
outcomes which are present in E1 but not in E2. Thus, the event E1 but not E2 is represented
as
E1 − E2 = E1 ∩ E2‘

 -make 216 outcomes of 3 dice

Just as one die has six outcomes and two dice have 6² = 36 outcomes, the probability
experiment of rolling three dice has 6³ = 216 outcomes. This idea generalizes further for
more dice: if we roll n dice then there are 6ⁿ outcomes.
We can also consider the possible sums from rolling several dice. The smallest possible sum
occurs when all of the dice are the smallest, or one each. This gives a sum of three when we
are rolling three dice. The greatest number on a die is six, which means that the greatest
possible sum occurs when all three dice are sixes. The sum of this situation is 18.
When n dice are rolled, the least possible sum is n and the greatest possible sum is 6n.
There is one possible way three dice can total 3
3 ways for 4
6 for 5
10 for 6
15 for 7
21 for 8
25 for 9
27 for 10
27 for 11
25 for 12
21 for 13
15 for 14
10 for 15
6 for 16
3 for 17
1 for 18

As discussed above, for three dice the possible sums include every number from three to 18.
The probabilities can be calculated by using a counting strategy and recognizing that we are
looking for ways to partition a number into exactly three whole numbers from 1 to 6. For
example, the only way to obtain a sum of three is 3 = 1 + 1 + 1. Since each die is independent
of the others, a sum such as four can be obtained in three different ways:
1+1+2
1+2+1
2+1+1
Further counting arguments can be used to find the number of ways of forming the other
sums. The partitions for each sum follow:
3 = 1 + 1 + 1
4 = 1 + 1 + 2
5 = 1 + 1 + 3 = 1 + 2 + 2
6 = 1 + 1 + 4 = 1 + 2 + 3 = 2 + 2 + 2
7 = 1 + 1 + 5 = 1 + 2 + 4 = 1 + 3 + 3 = 2 + 2 + 3
8 = 1 + 1 + 6 = 1 + 2 + 5 = 1 + 3 + 4 = 2 + 2 + 4 = 2 + 3 + 3
9 = 1 + 2 + 6 = 1 + 3 + 5 = 1 + 4 + 4 = 2 + 2 + 5 = 2 + 3 + 4 = 3 + 3 + 3
10 = 1 + 3 + 6 = 1 + 4 + 5 = 2 + 2 + 6 = 2 + 3 + 5 = 2 + 4 + 4 = 3 + 3 + 4
11 = 1 + 4 + 6 = 1 + 5 + 5 = 2 + 3 + 6 = 2 + 4 + 5 = 3 + 3 + 5 = 3 + 4 + 4
12 = 1 + 5 + 6 = 2 + 4 + 6 = 2 + 5 + 5 = 3 + 3 + 6 = 3 + 4 + 5 = 4 + 4 + 4
13 = 1 + 6 + 6 = 2 + 5 + 6 = 3 + 4 + 6 = 3 + 5 + 5 = 4 + 4 + 5
14 = 2 + 6 + 6 = 3 + 5 + 6 = 4 + 4 + 6 = 4 + 5 + 5
15 = 3 + 6 + 6 = 4 + 5 + 6 = 5 + 5 + 5
16 = 4 + 6 + 6 = 5 + 5 + 6
17 = 5 + 6 + 6
18 = 6 + 6 + 6
When three different numbers form the partition, such as 7 = 1 + 2 + 4, there are 3! (3 × 2 × 1)
= 6 different ways of permuting these numbers, so such a partition counts as six outcomes in
the sample space. When only two different numbers form the partition (one number repeated,
such as 8 = 2 + 3 + 3), there are three different ways of permuting these numbers, and when
all three numbers are the same there is only one outcome.
Specific Probabilities 
We divide the total number of ways to obtain each sum by the total number of outcomes in
the sample space, or 216. The results are:
Probability of a sum of 3: 1/216 = 0.5%
Probability of a sum of 4: 3/216 = 1.4%
Probability of a sum of 5: 6/216 = 2.8%
Probability of a sum of 6: 10/216 = 4.6%
Probability of a sum of 7: 15/216 = 7.0%
Probability of a sum of 8: 21/216 = 9.7%
Probability of a sum of 9: 25/216 = 11.6%
Probability of a sum of 10: 27/216 = 12.5%
Probability of a sum of 11: 27/216 = 12.5%
Probability of a sum of 12: 25/216 = 11.6%
Probability of a sum of 13: 21/216 = 9.7%
Probability of a sum of 14: 15/216 = 7.0%
Probability of a sum of 15: 10/216 = 4.6%
Probability of a sum of 16: 6/216 = 2.8%
Probability of a sum of 17: 3/216 = 1.4%
Probability of a sum of 18: 1/216 = 0.5%
As can be seen, the extreme values of 3 and 18 are least probable. The sums that are exactly
in the middle are the most probable. This corresponds to what was observed when two dice
were rolled.
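The whole table above can be verified by brute-force enumeration of all 216 outcomes:

```python
from collections import Counter
from itertools import product

# Count how many of the 6^3 = 216 ordered outcomes give each sum.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

print(sum(counts.values()))                          # 216 outcomes in total
print(counts[3], counts[9], counts[10], counts[18])  # 1 25 27 1
print(counts[10] / 216)                              # 0.125, i.e. 12.5%
```

The counts agree with the partition-and-permutation argument in the text.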

Assignment 3

 -explain Venn diagrams

The term Venn diagram is not foreign to anyone who has studied Mathematics, especially
Probability and Algebra. For a layman, a Venn diagram is a pictorial representation of all
possible real relations between a collection of varying sets of items. It is made up of several
overlapping circles or oval shapes, with each representing a single set or item.
Venn diagrams depict complex and theoretical relationships and ideas for better and easier
understanding. They are also used professionally: professors use them to display complex
mathematical concepts, scientists use them for classification, and businesses use them to
develop sales strategies.

The first Venn diagram example is in Mathematics; they are commonly used when covering
Set Theory and Probability topics. 
Consider two sets, A = {1, 5, 6, 7, 8, 9, 10, 12} and B = {2, 3, 4, 6, 7, 9, 11, 12, 13}. The
section where the two circles overlap contains the numbers found in both Set A and Set B,
referred to as the intersection of A and B: {6, 7, 9, 12}. The two sets put together give their
union, which comprises all the objects in A or B: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13}. 

Assignment 4

 -define the multiplication rule of probability

The multiplication rule is a way to find the probability of two events happening at the same
time (it is also one of the AP Statistics formulas). There are two multiplication rules. The
general multiplication rule formula is P(A ∩ B) = P(A) P(B|A) and the specific
multiplication rule is P(A and B) = P(A) * P(B). P(B|A) means “the probability of B
happening given that A has occurred”.

The specific multiplication rule, P(A and B) = P(A) * P(B), is only valid if the two events are
independent. In other words, it only works if one event does not change the probability of the
other event.
Examples of independent events :
Owning a cat and getting a weekly paycheck.
Finding a parking space and having a coin for the meter.
Buying a book and then buying a coffee.
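The specific rule for independent events can be checked by enumeration. Here is a sketch with two dice (the choice of events is illustrative): A = "first die shows 6" and B = "second die shows an even number".

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 outcomes of rolling two dice.
space = list(product(range(1, 7), repeat=2))

p_a = Fraction(sum(1 for d1, d2 in space if d1 == 6), len(space))
p_b = Fraction(sum(1 for d1, d2 in space if d2 % 2 == 0), len(space))
p_ab = Fraction(sum(1 for d1, d2 in space if d1 == 6 and d2 % 2 == 0), len(space))

print(p_a, p_b, p_ab)     # 1/6 1/2 1/12
print(p_ab == p_a * p_b)  # True: P(A and B) = P(A) * P(B) for independent events
```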

Assignment 5

 -Define counting techniques with examples and explanation

Fundamental Counting Principle:


In a sequence of events, the total possible number of ways all events can be performed is the
product of the possible number of ways each individual event can be performed.
Factorials
If n is a positive integer, then
n! = n (n-1) (n-2) ... (3)(2)(1)
n! = n (n-1)!
A special case is 0!
0! = 1

Suppose you are ordering a sandwich at the deli. There are 5 choices for bread, 4 choices for
meat, 12 choices for vegetables, and 3 choices for a sauce. How many different sandwiches
can be ordered? If you choose a sandwich at random, what's the probability that you get
turkey and mayonnaise on your sandwich?
In order to answer this probability question you need to know:
The total number of sandwiches that can be ordered.
The number of sandwiches that can be ordered that involve turkey and mayonnaise.
In each case, you can use the fundamental counting principle to help.
A sandwich is made by choosing a bread, a meat, a vegetable, and a sauce. There are 5
outcomes for the event of choosing bread, 4 outcomes for the event of choosing meat, 12
outcomes for the event of choosing vegetables, and 3 outcomes for the event of choosing a
sauce. The total number of sandwiches that can be ordered is: 5⋅4⋅12⋅3=720
A sandwich with turkey and mayonnaise is made by choosing a bread, turkey, a vegetable,
and mayonnaise. There are 5 outcomes for the event of choosing bread, there is 1 outcome for
the event of choosing turkey, there are 12 outcomes for the event of choosing vegetables, and
there is 1 outcome for the event of choosing mayonnaise. The total number of sandwiches
with turkey and mayonnaise that can be ordered is: 5⋅1⋅12⋅1=60
The probability of a sandwich with turkey and mayonnaise is 60/720 = 1/12.
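The sandwich calculation is a direct application of the fundamental counting principle:

```python
from fractions import Fraction

# Choices at the deli, from the example above.
breads, meats, vegetables, sauces = 5, 4, 12, 3

total = breads * meats * vegetables * sauces  # every sandwich combination
favorable = breads * 1 * vegetables * 1       # turkey and mayonnaise are fixed

print(total)                       # 720
print(favorable)                   # 60
print(Fraction(favorable, total))  # 1/12
```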

 -define random variable, two types of random variable

In probability and statistics, a random variable is a variable whose value is subject to


variations due to chance (i.e. randomness, in a mathematical sense). As opposed to other
mathematical variables, a random variable conceptually does not have a single, fixed value
(even if unknown); rather, it can take on a set of possible different values, each with an
associated probability. A random variable's possible values might represent the possible
outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment
whose already-existing value is uncertain (for example, as a result of incomplete information
or imprecise measurements). They may also conceptually represent either the results of an
"objectively" random process (such as rolling a die), or the "subjective" randomness that
results from incomplete knowledge of a quantity. Random variables can be classified as
either discrete (that is, taking any of a specified list of exact values) or as continuous (taking
any numerical value in an interval or collection of intervals). The mathematical function
describing the possible values of a random variable and their associated probabilities is
known as a probability distribution.
Types

Discrete Random Variables

Discrete random variables can take on either a finite or at most a countably infinite set of
discrete values (for example, the integers). Their probability distribution is given by
a probability mass function which directly maps each value of the random variable to a
probability. For example, the value x₁ takes on the probability p₁, the value x₂
takes on the probability p₂, and so on. The probabilities pᵢ must satisfy two
requirements: every probability pᵢ is a number between 0 and 1, and the sum of all the
probabilities is 1 (p₁ + p₂ + ⋯ + pₖ = 1).

Continuous Random Variables

Continuous random variables, on the other hand, take on values that vary continuously within
one or more real intervals, and have a cumulative distribution function (CDF) that is
absolutely continuous. As a result, the random variable has an uncountably infinite number of
possible values, each of which has probability 0, though ranges of such values can have
nonzero probability. The resulting probability distribution of the random variable can be
described by a probability density, where the probability of a range is found by taking the
area under the curve.
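The "area under the curve" idea can be sketched for the most familiar continuous variable, the normal. The standard normal CDF has a closed form in terms of the error function, so no external library is needed:

```python
import math

# Standard normal CDF written via math.erf: Φ(x) = ½(1 + erf(x/√2)).
def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Any single point has probability 0; only ranges carry probability.
print(normal_cdf(0))                         # 0.5: half the area lies below the mean
print(normal_cdf(1.96) - normal_cdf(-1.96))  # ≈ 0.95, the familiar 95% interval
```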

 -define discrete and continuous probability distributions and their types (names only)

Discrete probability distributions

Several specialized discrete probability distributions are useful for specific applications. For
business applications, three frequently used discrete distributions are:
 Binomial
 Geometric
 Poisson
You use the binomial distribution to compute probabilities for a process where only one of
two possible outcomes may occur on each trial. The geometric distribution is related to the
binomial distribution; you use the geometric distribution to determine the probability that a
specified number of trials will take place before the first success occurs. You can use
the Poisson distribution to measure the probability that a given number of events will occur
during a given time frame.

Continuous probability distributions


Many continuous distributions may be used for business applications; two of the most widely
used are:
 Uniform
 Normal
The uniform distribution is useful because it represents variables that are evenly distributed
over a given interval. For example, if the length of time until the next defective part arrives
on an assembly line is equally likely to be any value between one and ten minutes, then you
may use the uniform distribution to compute probabilities for the time until the next defective
part arrives.

The normal distribution is useful for a wide array of applications in many disciplines. In


business applications, variables such as stock returns are often assumed to follow the normal
distribution. The normal distribution is characterized by a bell-shaped curve, and areas under
this curve represent probabilities.

Types of Probability Distributions

1. Discrete probability distributions


2. Continuous probability distributions

Discrete Probability Distributions

1. Bernoulli Distribution
2. Binomial Distribution
3. Hypergeometric Distribution
4. Negative Binomial Distribution
5. Geometric Distribution
6. Poisson Distribution
7. Multinomial Distribution

Assignment 6

 -define binomial distribution, its properties, applications

The binomial distribution is a probability distribution that summarizes the likelihood that a


value will take one of two independent values under a given set of parameters or
assumptions. The underlying assumptions of the binomial distribution are that each trial has
only one of two possible outcomes, that each trial has the same probability of success, and
that the trials are mutually independent of each other. The binomial distribution summarizes
the number of successes in a fixed number of trials, or observations, when each trial has the
same probability of attaining one particular value. It determines the probability of observing
a specified number of successful outcomes in a specified number of trials. The binomial
distribution is often used in social science statistics as a building block for models of
dichotomous outcome variables, such as whether a Republican or Democrat will win an
upcoming election or whether an individual will die within a specified period of time.
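The binomial probabilities described above follow from the formula P(X = k) = C(n, k) pᵏ(1 − p)ⁿ⁻ᵏ, which can be written from first principles with `math.comb`:

```python
from math import comb

# Binomial probability mass function: P(X = k) = C(n, k) p^k (1 - p)^(n - k).
def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 6 heads in 10 fair coin tosses.
p6 = binomial_pmf(6, 10, 0.5)
print(round(p6, 4))  # 0.2051 (that is, 210/1024)

# The probabilities over all possible k sum to 1, as any distribution must.
print(sum(binomial_pmf(k, 10, 0.5) for k in range(11)))
```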

Assignment 7

 Arithmetic mean and standard deviation of a discrete random variable (advantages)

The median is known as a measure of location; that is, it tells us where the data are. We do
not need to know all the exact values to calculate the median; if we made the smallest value
even smaller or the largest value even larger, it would not change the value of the median.
Thus the median does not use all the information in the data, and so it can be shown to be
less efficient than the mean or average, which does use all values of the
data. The range is an important measurement, for the figures at the top and bottom of it denote
the findings furthest removed from the generality. However, they do not give much indication
of the spread of observations about the mean. This is where the standard deviation (SD)
comes in. The theoretical basis of the standard deviation is complex and need not trouble the
ordinary user. A practical point to
note here is that, when the population from which the data arise have a distribution that is
approximately “Normal” (or Gaussian), then the standard deviation provides a useful basis
for interpreting the data in terms of probability. The Normal distribution is represented by a
family of curves defined uniquely by two parameters, which are the mean and the standard
deviation of the population. The curves are always symmetrically bell shaped, but the extent
to which the bell is compressed or flattened out depends on the standard deviation of the
population. However, the mere fact that a curve is bell shaped does not mean that it
represents a Normal distribution, because other distributions may have a similar sort of shape.
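The practical usefulness of the SD for roughly Normal data can be sketched by simulation: about 68% of observations fall within one SD of the mean. The data below are simulated and purely illustrative.

```python
import random
import statistics

# Simulate 10,000 roughly Normal observations (mean 100, SD 15; illustrative).
random.seed(7)
data = [random.gauss(100, 15) for _ in range(10_000)]

m = statistics.mean(data)
sd = statistics.pstdev(data)
within_one_sd = sum(m - sd <= x <= m + sd for x in data) / len(data)

print(round(within_one_sd, 3))  # close to the theoretical 0.683
```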

 Mathematical expectation (advantages)

Mathematical expectation, also known as the expected value, is the probability-weighted
summation of all possible values of a random variable. Each term is the product of a value
the variable can take and the probability, denoted by P(x), of actually observing that value.
The expected value is a useful property of a random variable. E(X) can be computed by
summing over all the distinct values of the random variable, and is given by the formula

E(X) = x₁p₁ + x₂p₂ + ⋯ + xₙpₙ

where x is a random variable with probability function f(x), pᵢ is the probability of
occurrence of the value xᵢ, and n is the number of all possible values.
The mathematical expectation of an indicator variable can be 0 if there is no occurrence of an
event A, and the mathematical expectation of an indicator variable can be 1 if there is an
occurrence of an event A.
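The formula E(X) = Σ xᵢpᵢ can be applied directly to a simple example, the roll of a fair six-sided die:

```python
from fractions import Fraction

# E(X) for a fair die: each face 1..6 has probability 1/6.
values = range(1, 7)
p = Fraction(1, 6)

expected = sum(x * p for x in values)
print(expected)  # 7/2, i.e. 3.5
```

Note that 3.5 is not itself a possible outcome; the expectation is a long-run average, not a value the variable must take.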

Assignment 8
 Take 20 items from a departmental store and their prices, then calculate the
arithmetic mean and standard deviation, and plot the values on a graph.
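One possible worked solution is sketched below. The 20 item prices are hypothetical numbers invented for illustration, and a simple text bar chart stands in for the graph (a plotting library such as matplotlib could be used instead):

```python
import statistics

# 20 hypothetical item prices from a departmental store (illustrative data).
prices = [40, 55, 120, 80, 35, 60, 95, 150, 45, 70,
          85, 110, 65, 50, 75, 90, 130, 100, 25, 120]

mean_price = statistics.mean(prices)
sd_price = statistics.stdev(prices)  # sample standard deviation (n - 1 divisor)

print(mean_price)           # 80
print(round(sd_price, 2))

# A quick text "graph" of the prices, one bar per item.
for i, price in enumerate(prices, start=1):
    print(f"item {i:2d} | {'#' * (price // 10)}")
```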

Assignment 9

 -hypothesis

Hypothesis is an assumption that is made on the basis of some evidence. This is the initial
point of any investigation that translates the research questions into a prediction. It includes
components like variables, population and the relation between the variables. A research
hypothesis is a hypothesis that is used to test the relationship between two or more variables.

 -null and alternative hypothesis


The actual test begins by considering two hypotheses. They are called the null hypothesis and
the alternative hypothesis. These hypotheses contain opposing viewpoints.
H0: The null hypothesis: It is a statement of no difference between sample means or
proportions or no difference between a sample mean or proportion and a population mean or
proportion. In other words, the difference equals 0.
Ha: The alternative hypothesis: It is a claim about the population that is contradictory
to H0, and it is what we conclude when we reject H0.
Since the null and alternative hypotheses are contradictory, you must examine evidence to
decide if you have enough evidence to reject the null hypothesis or not. The evidence is in the
form of sample data.
After you have determined which hypothesis the sample supports, you make
a decision. There are two options for a decision. They are “reject H0” if the sample
information favors the alternative hypothesis or “do not reject H0” or “decline to reject H0” if
the sample information is insufficient to reject the null hypothesis.

 -level of significance
The significance level, also known as alpha or α, is a measure of the strength of the evidence
that must be present in your sample before you will reject the null hypothesis and conclude
that the effect is statistically significant. The researcher determines the significance level
before conducting the experiment.
The significance level is the probability of rejecting the null hypothesis when it is true. For
example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists
when there is no actual difference. Lower significance levels indicate that you require
stronger evidence before you will reject the null hypothesis.
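The role of the significance level in the decision can be sketched as a minimal rule: compare the p-value from a test with the pre-chosen α. The function name and values below are illustrative only.

```python
# Minimal sketch of the decision rule using a significance level alpha.
def decide(p_value: float, alpha: float = 0.05) -> str:
    if p_value < alpha:
        return "reject H0"         # evidence is statistically significant
    return "fail to reject H0"     # evidence is insufficient

print(decide(0.03))  # reject H0
print(decide(0.20))  # fail to reject H0
```

Lowering α (say, to 0.01) makes the rule demand stronger evidence before rejecting H0.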

 -T table (define, applications, properties)


The table control is a container with the basic layout elements of a table window, including a
toolbar with filter and search functionality and icons for previous and next row and page. You
can place a table control within a section, tab, or window control and you can place a table
column control or a section control within a table control.
The table control layout also includes a table column control and a details box where you can
enter a section. The table control is similar to the table window in the List tab of most
applications. The tables enable the user to search for and display records from one or more
tables in a columnar format. You can use the standard filter feature to display only records
that meet specified criteria.
You can insert a section control into the details section of a table control. The details section
is an empty, rectangular box at the bottom of the window.
