To assess the accuracy of a laboratory scale, a standard weight known to weigh 1 gram is weighed repeatedly, n times in total. How large should n be so that a 95% confidence interval for the mean reading µ has a margin of error of ±0.0001?

Answer:

[tex]n=(\frac{1.960(1)}{0.0001})^2 =384160000[/tex]

So the answer for this case is n = 384160000 (rounded up to the nearest integer).

Step-by-step explanation:

We know the following information:

[tex]ME = 0.0001[/tex] represents the desired margin of error

[tex]\sigma = 1[/tex] is the assumed population standard deviation

The margin of error is given by this formula:

[tex] ME=z_{\alpha/2}\frac{\sigma}{\sqrt{n}}[/tex]    (a)

In this case ME = 0.0001, and we are interested in finding the value of n. Solving equation (a) for n, we get:

[tex]n=(\frac{z_{\alpha/2} \sigma}{ME})^2[/tex]   (b)

The critical value for a 95% confidence interval can be found from the standard normal distribution. Using a standard normal table or Excel, we get [tex]z_{\alpha/2}=1.960[/tex]. Replacing into formula (b), we get:

[tex]n=(\frac{1.960(1)}{0.0001})^2 =384160000[/tex]

So the answer for this case is n = 384160000. We always round up to the next whole number, since rounding down would leave the margin of error above the target.
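To double-check the arithmetic, here is a minimal Python sketch, assuming SciPy is available (the variable names are illustrative, not from the problem):

```python
from math import ceil
from scipy.stats import norm

ME = 0.0001    # desired margin of error (grams)
sigma = 1.0    # assumed population standard deviation (grams)

# Critical value for 95% confidence: z_{alpha/2} = Phi^(-1)(0.975)
z = round(norm.ppf(0.975), 3)   # 1.95996... rounds to 1.960, as used above

# Solve ME = z * sigma / sqrt(n) for n, rounding up to a whole number
n = ceil((z * sigma / ME) ** 2)
print(n)                        # 384160000
```

With the unrounded critical value the result is slightly smaller (about 384.1 million), so the final count is sensitive to how many decimals of [tex]z_{\alpha/2}[/tex] are kept.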
