This course is a short series of lectures on Introductory Statistics. Topics covered are listed in the Table of Contents. The notes were prepared by Ewa Paszek and Marek Kimmel. The development of this course has been supported by NSF grant 0203396.

Test about proportions

Tests of statistical hypotheses are a very important topic; let us introduce it through an illustration.

Suppose a manufacturer of a certain printed circuit observes that about p = 0.05 of the circuits fail. An engineer and statistician working together suggest some changes that might improve the design of the product. To test this new procedure, it was agreed that n = 200 circuits would be produced using the proposed method and then checked. Let Y equal the number of these 200 circuits that fail. Clearly, if the number of failures, Y, is such that Y/200 is about 0.05, then it seems that the new procedure has not resulted in an improvement. On the other hand, if Y is small, so that Y/200 is about 0.01 or 0.02, we might believe that the new method is better than the old one. Conversely, if Y/200 is 0.08 or 0.09, the proposed method has perhaps caused a greater proportion of failures. What is needed is a formal rule that tells us when to accept the new procedure as an improvement. For example, we could accept the new procedure as an improvement if Y ≤ 5, that is, if Y/n ≤ 0.025. We note, however, that the probability of failure could still be about p = 0.05 even with the new procedure, and yet we could observe 5 or fewer failures in n = 200 trials.
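As a quick numerical illustration, a minimal sketch in Python (assuming the scipy library is available, and using the illustrative Y ≤ 5 rule above) computes the chance of seeing 5 or fewer failures in n = 200 trials even when p is still 0.05:

    from scipy.stats import binom

    n, p0 = 200, 0.05      # number of trials and the old failure probability
    threshold = 5          # illustrative rule: accept the new method if Y <= 5

    # P(Y <= 5) when p is still 0.05, i.e. the chance of "accepting an
    # improvement" that is not really there (a Type I error, defined below).
    prob = binom.cdf(threshold, n, p0)
    print(f"P(Y <= {threshold} | p = {p0}) = {prob:.4f}")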

That is, we would accept the new method as being an improvement when, in fact, it was not. This decision is a mistake which we call a Type I error. On the other hand, the new procedure might actually improve the product so that p is much smaller, say p = 0.02, and yet we could observe y = 7 failures so that y/200 = 0.035. Thus we would not accept the new method as resulting in an improvement when in fact it had. This decision would also be a mistake which we call a Type II error.
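Under the same illustrative Y ≤ 5 rule, a companion sketch (again assuming scipy) shows how one might compute the probability of a Type II error when the new process really has p = 0.02:

    from scipy.stats import binom

    n, p_new = 200, 0.02   # suppose the new procedure really lowers p to 0.02
    threshold = 5          # same illustrative rule: accept the new method if Y <= 5

    # A Type II error occurs when p has improved to 0.02 but Y > 5, so the
    # new method is not accepted:  P(Y > 5) = 1 - P(Y <= 5).
    beta = 1 - binom.cdf(threshold, n, p_new)
    print(f"P(Type II error | p = {p_new}) = {beta:.4f}")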

If we believe these trials, using the new procedure, are independent and have about the same probability of failure on each trial, then Y is binomial, b(200, p). We wish to make a statistical inference about p using the unbiased estimator p̂ = Y/200. We could also construct a confidence interval, say one that has 95% confidence, obtaining p̂ ± 1.96 √( p̂(1 − p̂)/200 ).
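For instance, with the y = 7 failures mentioned above, a short sketch (plain Python, no extra libraries) of this interval, and of the comparison with 0.05 discussed next, might look like:

    from math import sqrt

    n, y = 200, 7                       # observed failures out of n = 200 trials
    p_hat = y / n                       # unbiased estimate of p (here 0.035)

    # Approximate 95% confidence interval: p_hat +/- 1.96 * sqrt(p_hat(1 - p_hat)/n)
    margin = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
    lower, upper = p_hat - margin, p_hat + margin
    print(f"95% CI for p: ({lower:.4f}, {upper:.4f})")

    # The interval suggests an improvement only if its upper limit falls below 0.05.
    print("Upper limit below 0.05:", upper < 0.05)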

This inference is very appropriate and many statisticians simply do this. If the limits of this confidence interval contain 0.05, they would not say the new procedure is necessarily better, at least until more data are taken. If, on the other hand, the upper limit of this confidence interval is less than 0.05, then they feel 95% confident that the true p is now less than 0.05. Here, in this illustration, we are testing whether or not the probability of failure has decreased from 0.05 when the new manufacturing procedure is used.

The no-change hypothesis, H0: p = 0.05, is called the null hypothesis. Since H0: p = 0.05 completely specifies the distribution, it is called a simple hypothesis; thus H0: p = 0.05 is a simple null hypothesis.

Source:  OpenStax, Introduction to statistics. OpenStax CNX. Oct 09, 2007 Download for free at http://cnx.org/content/col10343/1.3