Type I and Type II errors

When testing a hypothesis, the level of significance of the test (\alpha) is the probability of rejecting the null hypothesis when the null hypothesis is in fact true. This is also the probability of a Type I error (reject H_0 when H_0 is true), so the probability of a Type I error is a conditional probability, i.e., conditioned on the null hypothesis being true. The probability of a Type I error has been well defined for all of the tests of hypothesis that we have done.
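As a concrete sketch (numbers assumed for illustration, not taken from these notes), suppose H_0: \mu = 72, the population standard deviation is \sigma = 9, the sample size is n = 36, and we reject H_0 when \bar{x} exceeds 75. Then \alpha is simply the probability that \bar{x} lands in the rejection region when \mu really is 72:

```python
# Illustrative numbers only: sigma = 9, n = 36, reject H0 when x-bar > 75.
from scipy.stats import norm

mu0 = 72                     # mean under the null hypothesis
sigma, n, c = 9, 36, 75      # assumed sd, sample size, and critical value

se = sigma / n ** 0.5                    # standard error of x-bar
alpha = norm.sf(c, loc=mu0, scale=se)    # P(x-bar > c | mu = mu0) = P(Type I error)
print(f"alpha = {alpha:.4f}")            # about 0.0228 with these numbers
```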

However, with a composite alternative hypothesis such as \mu \neq 72 or \mu > 72, we cannot evaluate the probability of a Type II error (fail to reject H_0 when the alternative hypothesis is true). Assume we were accepting or rejecting H_0 based on whether \bar{x} was less than or greater than 75, respectively. The probability of \bar{x} being less than 75 would be different if \mu = 74 than if \mu = 76 (or any other value allowed by the alternative). So in the context of the problems we have been doing, we cannot quantify the probability of a Type II error.
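To see this numerically, the sketch below (keeping the assumed \sigma = 9, n = 36, and cutoff 75 from above) computes the probability of failing to reject, P(\bar{x} < 75), for several values of \mu permitted by the alternative; each value gives a different answer, so no single \beta exists.

```python
# Same assumed setup as above: fail to reject H0 whenever x-bar < 75.
from scipy.stats import norm

sigma, n, c = 9, 36, 75
se = sigma / n ** 0.5

for mu_true in (73, 74, 76, 78):                 # values allowed by the alternative
    beta = norm.cdf(c, loc=mu_true, scale=se)    # P(fail to reject | this value of mu)
    print(f"mu = {mu_true}: P(x-bar < {c}) = {beta:.4f}")
```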

There are many situations in which we want to distinguish between two populations whose means (and standard deviations) are known (and we know that the populations are normally distributed). For example, we might want to judge whether a person is healthy or sick based on his temperature, whether a coin is genuine or counterfeit based on its weight, or whether a person is male or female based on the person's income. If a mean (and standard deviation) is specified for the alternative hypothesis (e.g., \mu_0 = 72, \mu_A = 76), then the probability of a Type II error can be calculated from \mu_A and the critical value for accepting or rejecting the null hypothesis. (This will of course be a conditional probability, i.e., conditioned on the alternative hypothesis being true.) This (conditional) probability is denoted by \beta, and 1 - \beta is called the power of the test. The power of a test is the probability that you will reject the null hypothesis when the alternative hypothesis is true.
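Once the alternative specifies a mean, \beta and the power follow directly from the critical value. A minimal sketch, again using the assumed \sigma = 9, n = 36, and cutoff \bar{x} = 75, with \mu_0 = 72 and \mu_A = 76:

```python
from scipy.stats import norm

mu0, muA = 72, 76            # means under H0 and the specific alternative
sigma, n, c = 9, 36, 75      # assumed sd, sample size, and critical value
se = sigma / n ** 0.5

alpha = norm.sf(c, loc=mu0, scale=se)    # P(reject H0 | H0 true)
beta = norm.cdf(c, loc=muA, scale=se)    # P(fail to reject H0 | HA true)
power = 1 - beta                         # P(reject H0 | HA true)
print(f"alpha = {alpha:.4f}, beta = {beta:.4f}, power = {power:.4f}")
# alpha ~ 0.023, beta ~ 0.25, power ~ 0.75 with these assumed numbers
```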

Sometimes you are interested in the probability that a randomly selected individual is healthy but is diagnosed as sick. This is readily obtained from the product rule by multiplying the probability of a Type I error (which is the probability of diagnosing the person as sick conditioned on the person being healthy) by the probability that a randomly chosen person is healthy (which must be known -- i.e., given to you in the problem). Other unconditional (absolute) probabilities can be calculated similarly.
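For instance, with assumed numbers: if the probability of a Type I error is \alpha = 0.0228 and 95% of the population is healthy (a figure that would have to be given in the problem), then P(healthy and diagnosed sick) = P(diagnosed sick | healthy) \times P(healthy) = 0.0228 \times 0.95 \approx 0.0217.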