Neyman–Pearson lemma

From Wikipedia, the free encyclopedia

In statistics, the Neyman–Pearson lemma, named after Jerzy Neyman and Egon Pearson, states that when performing a hypothesis test between two point hypotheses $H_0 : \theta = \theta_0$ and $H_1 : \theta = \theta_1$, the likelihood-ratio test which rejects $H_0$ in favour of $H_1$ when

$$\Lambda(x) = \frac{L(\theta_0 \mid x)}{L(\theta_1 \mid x)} \leq \eta,$$

where

$$P(\Lambda(X) \leq \eta \mid H_0) = \alpha,$$

is the most powerful test of size $\alpha$ for a threshold $\eta$. If the test is most powerful for all $\theta_1 \in \Theta_1$, it is said to be uniformly most powerful (UMP) for alternatives in the set $\Theta_1$.

In practice, the likelihood ratio is often used directly to construct tests (see Likelihood-ratio test). However, it can also be used to suggest particular test statistics that might be of interest, or to suggest simplified tests; for this, one considers algebraic manipulation of the ratio to see if there are key statistics in it related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
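As a concrete sketch of this recipe (not part of the original article; the function names and the Monte Carlo calibration of the threshold are illustrative assumptions), the following Python snippet builds the likelihood-ratio test for two point hypotheses about the mean of a normal distribution with known variance, choosing the threshold η by simulation so that the test has approximately the requested size α:

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio(x, theta0, theta1, sigma=1.0):
    """Lambda(x) = L(theta0 | x) / L(theta1 | x) for i.i.d. N(theta, sigma^2) data."""
    return np.exp(norm.logpdf(x, theta0, sigma).sum(axis=-1)
                  - norm.logpdf(x, theta1, sigma).sum(axis=-1))

def np_threshold(theta0, theta1, n, alpha=0.05, sims=100_000, seed=0):
    """Monte Carlo estimate of eta such that P(Lambda(X) <= eta | H0) is about alpha."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(theta0, 1.0, size=(sims, n))   # data generated under H0
    return np.quantile(likelihood_ratio(x0, theta0, theta1), alpha)

# Illustrative use: test H0: theta = 0 against H1: theta = 1 with n = 20 observations.
eta = np_threshold(theta0=0.0, theta1=1.0, n=20, alpha=0.05)
x = np.random.default_rng(1).normal(1.0, 1.0, size=20)   # data actually drawn from H1
reject = likelihood_ratio(x, 0.0, 1.0) <= eta
print(f"eta = {eta:.4g}, reject H0: {reject}")
```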

Contents

1 Proof
2 Example
3 See also
4 References
5 External links

Proof
Define the rejection region of the null hypothesis for the NP test as

$$R_{NP} = \left\{ x : \frac{L(\theta_0 \mid x)}{L(\theta_1 \mid x)} \leq \eta \right\}.$$

Any other test will have a different rejection region, which we define as $R_A$. Furthermore, define the probability of the data falling in region $R$, given parameter $\theta$, as

$$P(R, \theta) = \int_R L(\theta \mid x)\, dx.$$

For both tests to have size $\alpha$, it must be true that

$$\alpha = P(R_{NP}, \theta_0) = P(R_A, \theta_0).$$

It will be useful to break these down into integrals over distinct regions:

$$P(R_{NP}, \theta) = P(R_{NP} \cap R_A, \theta) + P(R_{NP} \cap R_A^c, \theta)$$

and

$$P(R_A, \theta) = P(R_{NP} \cap R_A, \theta) + P(R_{NP}^c \cap R_A, \theta).$$

Setting $\theta = \theta_0$ and equating the above two expressions yields

$$P(R_{NP} \cap R_A^c, \theta_0) = P(R_{NP}^c \cap R_A, \theta_0).$$

Comparing the powers of the two tests, $P(R_{NP}, \theta_1)$ and $P(R_A, \theta_1)$, one can see that

$$P(R_{NP}, \theta_1) \geq P(R_A, \theta_1) \iff P(R_{NP} \cap R_A^c, \theta_1) \geq P(R_{NP}^c \cap R_A, \theta_1).$$

Now, by the definition of $R_{NP}$, every point inside $R_{NP}$ satisfies $L(\theta_1 \mid x) \geq \frac{1}{\eta} L(\theta_0 \mid x)$ and every point outside it satisfies the reverse inequality, so

$$P(R_{NP} \cap R_A^c, \theta_1) \geq \frac{1}{\eta} P(R_{NP} \cap R_A^c, \theta_0) = \frac{1}{\eta} P(R_{NP}^c \cap R_A, \theta_0) \geq P(R_{NP}^c \cap R_A, \theta_1).$$

Hence the inequality holds.
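The conclusion of the proof can also be checked numerically. The sketch below (an illustration added here, not part of the original article) simulates, under the alternative, the power of the likelihood-ratio test for a normal mean against an arbitrary competing test of the same size, here one that looks only at the first observation; the likelihood-ratio test's estimated power comes out higher, as the lemma guarantees.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n, mu1, sims = 0.05, 10, 1.0, 200_000

# Data generated under the alternative H1: mu = 1 (sigma = 1 known).
x = rng.normal(mu1, 1.0, size=(sims, n))

# Likelihood-ratio (NP) test: for this problem it reduces to rejecting H0: mu = 0
# when the sample mean exceeds z_{1-alpha} / sqrt(n); its size under H0 is exactly alpha.
np_power = np.mean(x.mean(axis=1) > norm.ppf(1 - alpha) / np.sqrt(n))

# A competing test of the same size: reject when the first observation alone
# exceeds z_{1-alpha}.
other_power = np.mean(x[:, 0] > norm.ppf(1 - alpha))

print(f"NP test power ~ {np_power:.3f}, competing test power ~ {other_power:.3f}")
```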

Example
Let $X_1, \dots, X_n$ be a random sample from the $N(\mu, \sigma^2)$ distribution where the mean $\mu$ is known, and suppose that we wish to test $H_0 : \sigma^2 = \sigma_0^2$ against $H_1 : \sigma^2 = \sigma_1^2$. The likelihood for this set of normally distributed data is

$$L(\sigma^2 \mid \mathbf{x}) \propto (\sigma^2)^{-n/2} \exp\!\left( -\frac{\sum_{i=1}^n (x_i - \mu)^2}{2 \sigma^2} \right).$$

We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome:

$$\Lambda(\mathbf{x}) = \frac{L(\sigma_0^2 \mid \mathbf{x})}{L(\sigma_1^2 \mid \mathbf{x})} = \left( \frac{\sigma_0^2}{\sigma_1^2} \right)^{-n/2} \exp\!\left( -\frac{1}{2} \left( \sigma_0^{-2} - \sigma_1^{-2} \right) \sum_{i=1}^n (x_i - \mu)^2 \right).$$

This ratio only depends on the data through $\sum_{i=1}^n (x_i - \mu)^2$. Therefore, by the Neyman–Pearson lemma, the most powerful test of this type of hypothesis for these data will depend only on $\sum_{i=1}^n (x_i - \mu)^2$. Also, by inspection, we can see that if $\sigma_1^2 > \sigma_0^2$, then $\Lambda(\mathbf{x})$ is a decreasing function of $\sum_{i=1}^n (x_i - \mu)^2$. So we should reject $H_0$ if $\sum_{i=1}^n (x_i - \mu)^2$ is sufficiently large. The rejection threshold depends on the size of the test. In this example, the test statistic can be shown to be a scaled chi-square distributed random variable, and an exact critical value can be obtained.
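To make the last step concrete, here is a minimal sketch (assuming Python with SciPy; the function name and the chosen numbers are illustrative) that computes the exact critical value from the scaled chi-square distribution and applies the test to simulated data; under $H_0$, $\sum_{i=1}^n (X_i - \mu)^2 / \sigma_0^2$ follows a chi-square distribution with $n$ degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

def variance_np_test(x, mu, sigma0_sq, alpha=0.05):
    """Most powerful test of H0: sigma^2 = sigma0^2 against a larger point alternative.

    Under H0, sum((x - mu)^2) / sigma0^2 is chi-square with n degrees of freedom,
    so H0 is rejected when the sum exceeds sigma0^2 * chi2.ppf(1 - alpha, n).
    """
    n = len(x)
    stat = np.sum((x - mu) ** 2)
    critical = sigma0_sq * chi2.ppf(1 - alpha, df=n)
    return stat, critical, stat > critical

# Illustrative data: n = 25 draws with known mean mu = 0 and true variance 2.0,
# testing H0: sigma^2 = 1 against a larger alternative.
rng = np.random.default_rng(42)
x = rng.normal(0.0, np.sqrt(2.0), size=25)
stat, critical, reject = variance_np_test(x, mu=0.0, sigma0_sq=1.0)
print(f"statistic = {stat:.2f}, critical value = {critical:.2f}, reject H0: {reject}")
```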

See also
Statistical power

References
Neyman, J.; Pearson, E. S. (1933). "On the Problem of the Most Efficient Tests of Statistical Hypotheses". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 231 (694–706): 289. doi:10.1098/rsta.1933.0009.
cnx.org: Neyman–Pearson criterion (http://cnx.org/content/m11548/latest/)

External links
Cosma Shalizi, a professor of statistics at Carnegie Mellon University, gives an intuitive derivation of the Neyman–Pearson lemma using ideas from economics (http://cscs.umich.edu/~crshalizi/weblog/630.html)
