# 10.1 - Bayes Rule and Classification Problem


#### Bayes’ Rule

Consider any two events A and B. To find P(B|A), the probability that B occurs given that A has occurred, Bayes’ Rule states the following:

$P(B|A) = \frac{P(A \text{ and } B)}{P(A)}$

This says that the conditional probability is the probability that both A and B occur divided by the unconditional probability that A occurs. It is a simple algebraic restatement of the multiplication rule for the probability that two events occur together: $P(A \text{ and } B) = P(A)P(B|A)$.
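For a quick numeric illustration (with made-up values), suppose $P(A) = 0.5$ and $P(A \text{ and } B) = 0.2$. Then

$P(B|A) = \frac{0.2}{0.5} = 0.4$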

#### Bayes’ Rule Applied to the Classification Problem

We are interested in $P(\pi_i | \mathbf{x})$, the conditional probability that an observation came from population $\pi_i$, given the observed values of the multivariate vector of variables $\mathbf{x}$. We will classify an observation to the population for which the value of $P(\pi_i | \mathbf{x})$ is greatest. This is the most probable group given the observed values of $\mathbf{x}$.

• Suppose that we have $g$ populations (groups) and that the $i$th population is denoted $\pi_i$.
• Let $p_i = P(\pi_i)$ be the probability that a randomly selected observation is in population $\pi_i$.
• Let $f(\mathbf{x} | \pi_i)$ be the conditional probability density function of the multivariate set of variables $\mathbf{x}$, given that the observation came from population $\pi_i$.

Technical Note: We have to be careful about the word probability in conjunction with our observed vector $\mathbf{x}$. A probability density function for continuous variables does not give a probability, but instead gives a measure of “likelihood.”

Using the notation of Bayes’ Rule above, event A = we observe the vector $\mathbf{x}$ and event B = the observation came from population $\pi_i$. Thus our probability of interest can be found as

$P(\text{ member of } \pi_i | \text{ we observed } \mathbf{x}) = \frac{P(\text{ member of } \pi_i \text{ and we observe } \mathbf{x})}{P(\text{ we observe } \mathbf{x})}$

• The numerator of the expression just given is the likelihood that a randomly selected observation is both from population $\pi_i$ and has the value $\mathbf{x}$. This likelihood is $p_i f(\mathbf{x}|\pi_i)$.
• The denominator is the unconditional likelihood (over all populations) that we could observe $\mathbf{x}$. This likelihood is $\sum_{j=1}^{g} p_j f(\mathbf{x}|\pi_j)$.

Thus the posterior probability that an observation is a member of population $\pi_i$ is

$p(\pi_i|\mathbf{x}) = \frac{p_i f(\mathbf{x}|\pi_i)}{\sum_{j=1}^{g}p_j f(\mathbf{x}|\pi_j)}$

The classification rule is to assign observation $\mathbf{x}$ to the population for which the posterior probability is greatest.

The denominator is the same for all of the posterior probabilities (for the various populations), so it is equivalent to say that we will classify an observation to the population for which $p_i f(\mathbf{x}|\pi_i)$ is greatest.
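As a concrete illustration of this rule, here is a minimal Python sketch assuming two hypothetical populations with multivariate normal densities (the priors, means, and covariances below are made-up numbers; any densities $f(\mathbf{x}|\pi_i)$ could be used in their place):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical setup: g = 2 populations with multivariate normal densities
priors = np.array([0.7, 0.3])                         # p_i = P(pi_i)
means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
covs = [np.eye(2), np.eye(2)]

x = np.array([1.5, 1.0])                              # observed vector x

# Numerator terms p_i * f(x | pi_i), one per population
numerators = np.array([
    p * multivariate_normal.pdf(x, mean=m, cov=c)
    for p, m, c in zip(priors, means, covs)
])

# Dividing by the common denominator gives the posterior probabilities
posteriors = numerators / numerators.sum()
print("posterior probabilities:", posteriors)

# The denominator is shared, so the argmax of the numerators gives the
# same classification as the argmax of the posterior probabilities
print("assign to population", np.argmax(numerators) + 1)
```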

#### Two Populations

With only two populations we can express the classification rule in terms of the ratio of the two posterior probabilities. Specifically, we would classify to population 1 when

$\frac{p_1 f(\mathbf{x}|\pi_1)}{p_2 f(\mathbf{x}|\pi_2)} > 1$

This can be rewritten to say that we classify to population 1 when

$\frac{f(\mathbf{x}|\pi_1)}{f(\mathbf{x}|\pi_2)} > \frac{p_2}{p_1}$
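Continuing the hypothetical two-population sketch above (reusing `x`, `priors`, `means`, and `covs`), the same decision can be written as a density-ratio test against the threshold $p_2/p_1$:

```python
# Classify to population 1 when f(x|pi_1) / f(x|pi_2) > p_2 / p_1
f1 = multivariate_normal.pdf(x, mean=means[0], cov=covs[0])
f2 = multivariate_normal.pdf(x, mean=means[1], cov=covs[1])

if f1 / f2 > priors[1] / priors[0]:
    print("classify to population 1")
else:
    print("classify to population 2")
```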

#### Decision Rule

We are going to classify the sample unit or subject into the population $\pi_i$ that maximizes the posterior probability $p(\pi_i|\mathbf{x})$, that is, the population that maximizes

$f(\mathbf{x}|\pi_i)\,p_i$

We are going to calculate the posterior probabilities for each of the populations, and then assign the subject or sample unit to the population that has the highest posterior probability. Ideally, that posterior probability will be greater than one half; the closer it is to 100%, the better!

Equivalently, we are going to assign it to the population that maximizes the log of this product:

$\log\left[f(\mathbf{x}|\pi_i)\,p_i\right]$

The denominator that appears in the posterior probability above does not depend on the population because it involves summing over all of the populations. So all we really need to do is assign the observation to the population with the largest value of the product $p_i f(\mathbf{x}|\pi_i)$, or equivalently, the largest value of its log. A lot of the time it is easier to work with the log.
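In code, the log form is a small change to the sketch above: scipy's `logpdf` computes $\log f(\mathbf{x}|\pi_i)$ directly, which also avoids numerical underflow when the densities are very small.

```python
# Log form of the rule: maximize log f(x | pi_i) + log p_i
log_scores = np.array([
    np.log(p) + multivariate_normal.logpdf(x, mean=m, cov=c)
    for p, m, c in zip(priors, means, covs)
])
print("assign to population", np.argmax(log_scores) + 1)
```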