Test For Homogeneity


As suggested in the introduction to this lesson, the test for homogeneity is a method, based on the chi-square statistic, for testing whether two or more multinomial distributions are equal. Let's start by trying to get a feel for how our data might "look" if we have two equal multinomial distributions.

Example

A university admissions officer was concerned that males and females were accepted at different rates into the four different schools (business, engineering, liberal arts, and science) at her university. She collected the following data on the acceptance of 1200 males and 800 females who applied to the university:

table

Are males and females distributed equally among the various schools?

Solution. Let's start by focusing on the business school. We can see that, of the 1200 males who applied to the university, 300 (or 25%) were accepted into the business school.  Of the 800 females who applied to the university, 200 (or 25%) were accepted into the business school.  So, the business school looks to be in good shape, as an equal percentage of males and females, namely 25%, were accepted into it. 

Now, for the engineering school. We can see that, of the 1200 males who applied to the university, 240 (or 20%) were accepted into the engineering school.  Of the 800 females who applied to the university, 160 (or 20%) were accepted into the engineering school.  So, the engineering school also looks to be in good shape, as an equal percentage of males and females, namely 20%, were accepted into it. 

We probably don't have to drag this out any further. If we look at each column in the table, we see that the proportion of males accepted into each school is the same as the proportion of females accepted into each school... which therefore happens to equal the proportion of students accepted into each school, regardless of gender. Therefore, we can conclude that males and females are distributed equally among the four schools.
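These proportion checks are easy to reproduce in a few lines of Python. The sketch below uses only the business and engineering counts quoted in the text, since the full table is not reproduced here:

```python
# Acceptance counts quoted in the example text (business and engineering only;
# the liberal arts and science columns are omitted because the table itself
# is not reproduced in the text)
accepted = {
    "business":    {"male": 300, "female": 200},
    "engineering": {"male": 240, "female": 160},
}
n = {"male": 1200, "female": 800}

for school, counts in accepted.items():
    p_male = counts["male"] / n["male"]
    p_female = counts["female"] / n["female"]
    print(f"{school}: males {p_male:.0%}, females {p_female:.0%}")
```

For both schools the two proportions agree (25% for business, 20% for engineering), which is exactly what "equal multinomial distributions" looks like column by column.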

Example

A university admissions officer was concerned that males and females were accepted at different rates into the four different schools (business, engineering, liberal arts, and science) at her university. She collected the following data on the acceptance of 1200 males and 800 females who applied to the university:

table

Are males and females distributed equally among the various schools?

Solution. Let's again start by focusing on the business school. In this case, of the 1200 males who applied to the university, 240 (or 20%) were accepted into the business school.  And, of the 800 females who applied to the university, 240 (or 30%) were accepted into the business school.  So, the business school appears to have different rates of acceptance for males and females, 20% compared to 30%. 

Now, for the engineering school. We can see that, of the 1200 males who applied to the university, 480 (or 40%) were accepted into the engineering school.  Of the 800 females who applied to the university, only 80 (or 10%) were accepted into the engineering school.  So, the engineering school also appears to have different rates of acceptance for males and females, 40% compared to 10%. 

Again, there's no need to drag this out any further. If we look at each column in the table, we see that the proportion of males accepted into each school differs from the proportion of females accepted into each school... and therefore neither matches the overall proportion of students accepted into each school, regardless of gender. Therefore, we can conclude that males and females are not distributed equally among the four schools.
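As before, the quoted counts can be checked in a couple of lines of Python (business and engineering only, since the full table is not reproduced here):

```python
# Acceptance counts quoted in the second example (business and engineering only;
# the remaining columns of the table are not reproduced in the text)
accepted = {
    "business":    {"male": 240, "female": 240},
    "engineering": {"male": 480, "female": 80},
}
n = {"male": 1200, "female": 800}

for school, counts in accepted.items():
    diff = counts["male"] / n["male"] - counts["female"] / n["female"]
    print(f"{school}: male rate minus female rate = {diff:+.0%}")
```

This time the differences are nonzero (20% vs. 30% for business, 40% vs. 10% for engineering), which is what unequal multinomial distributions look like.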

In the context of the two examples above, it quickly becomes apparent that if we wanted to formally test the hypothesis that males and females are distributed equally among the four schools, we'd want to test the hypotheses:

\(H_0 : p_{MB} =p_{FB} \text{ and } p_{ME} =p_{FE} \text{ and } p_{ML} =p_{FL} \text{ and } p_{MS} =p_{FS}\)
\(H_A : p_{MB} \ne p_{FB} \text{ or } p_{ME} \ne p_{FE} \text{ or } p_{ML} \ne p_{FL} \text{ or } p_{MS} \ne p_{FS}\)

where:

(1)  \(p_{Mj}\) is the proportion of males accepted into school \(j = B, E, L\), or \(S\)

(2)  \(p_{Fj}\) is the proportion of females accepted into school \(j = B, E, L\), or \(S\)

In conducting such a hypothesis test, we're comparing the proportions of two multinomial distributions. Before we can develop the method for conducting such a hypothesis test, that is, for comparing the proportions of two multinomial distributions, we first need to define some notation.

Notation

We'll use what I think most statisticians would consider standard notation, namely that:

(1) the letter i will index the h row categories, and

(2) the letter j will index the k column categories

(The text reverses the use of the i index and the j index.)  That said, let's use the framework of the previous examples to introduce the notation we'll use. That is, rewrite the tables above using the following generic notation:

table

with:

(1) \(y_{ij}\) denoting the number falling into the jth category of the ith sample

(2) \(\hat{p}_{ij}=y_{ij}/n_i\) denoting the proportion in the ith sample falling into the jth category

(3) \(n_i=\sum_{j=1}^{k}y_{ij}\) denoting the total number in the ith sample

(4) \( \hat{p}_{j}=(y_{1j}+y_{2j})/(n_1+n_2) \) denoting the (overall) proportion falling into the jth category
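As a quick sketch of this notation, here is how \(y_{ij}\), \(n_i\), \(\hat{p}_{ij}\), and \(\hat{p}_j\) might be computed in Python for a small, entirely hypothetical 2 × 3 table (the counts are made up for illustration):

```python
import numpy as np

# Hypothetical 2 x 3 table: rows i = samples, columns j = categories
y = np.array([[30, 50, 20],
              [40, 40, 20]])

n = y.sum(axis=1)                    # n_i: total count in each sample
p_hat_ij = y / n[:, None]            # p_hat_ij: within-sample proportions
p_hat_j = y.sum(axis=0) / n.sum()    # p_hat_j: pooled (overall) proportions

print(n)        # n_1 = 100, n_2 = 100
print(p_hat_j)  # pooled proportions 0.35, 0.45, 0.20
```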

With the notation defined as such, we are now ready to formulate the chi-square test statistic for testing the equality of two multinomial distributions.

The Chi-Square Test Statistic

Theorem.  The chi-square test statistic for testing the equality of two multinomial distributions:

\[Q=\sum_{i=1}^{2}\sum_{j=1}^{k}\frac{(y_{ij}- n_i\hat{p}_j)^2}{n_i\hat{p}_j}\]

follows an approximate chi-square distribution with k−1 degrees of freedom. Reject the null hypothesis of equal proportions if Q is large, that is, if:

\[Q \ge \chi_{\alpha, k-1}^{2}\]

Proof. For the sake of concreteness, let's again use the framework of our example above to derive the chi-square test statistic. For one of the samples, say for the males, we know that:

\[\sum_{j=1}^{k}\frac{(\text{observed }-\text{ expected})^2}{\text{expected}}=\sum_{j=1}^{k}\frac{(y_{1j}- n_1p_{1j})^2}{n_1p_{1j}} \]

follows an approximate chi-square distribution with k−1 degrees of freedom. For the other sample, that is, for the females, we know that:

\[\sum_{j=1}^{k}\frac{(\text{observed }-\text{ expected})^2}{\text{expected}}=\sum_{j=1}^{k}\frac{(y_{2j}- n_2p_{2j})^2}{n_2p_{2j}} \]

follows an approximate chi-square distribution with k−1 degrees of freedom. Therefore, by the independence of the two samples, we can "add up the chi-squares," that is:

\[\sum_{i=1}^{2}\sum_{j=1}^{k}\frac{(y_{ij}- n_ip_{ij})^2}{n_ip_{ij}}\]

follows an approximate chi-square distribution with k−1+ k−1 = 2(k−1) degrees of freedom.

Oops.... but we have a problem! The pij's are unknown to us. Of course, we know by now that the solution is to estimate the pij's. Now just how to do that? Well, if the null hypothesis is true, the proportions are equal, that is, if:

\[p_{11}=p_{21}, p_{12}=p_{22}, \ldots , p_{1k}=p_{2k}  \]

we would be best served by using all of the data across the sample categories. That is, the best estimate for each jth category is the pooled estimate:

\[\hat{p}_j=\frac{y_{1j}+y_{2j}}{n_1+n_2}\]

We also know by now that because we are estimating some parameters, we have to adjust the degrees of freedom. The pooled estimates \(\hat{p}_j\) estimate the true unknown proportions p1j = p2j = pj. Now, if we know the first k−1 estimates, that is, if we know:

\[\hat{p}_1, \hat{p}_2, ... , \hat{p}_{k-1}\]

then the kth one, that is \(\hat{p}_k\), is determined because:

\[\sum_{j=1}^{k}\hat{p}_j=1\]

That is:

\[\hat{p}_k=1-(\hat{p}_1+\hat{p}_2+ ... + \hat{p}_{k-1})\]

So, we are estimating k−1 parameters, and therefore we have to subtract k−1 from the degrees of freedom. Doing so, we get that

\[Q=\sum_{i=1}^{2}\sum_{j=1}^{k}\frac{(y_{ij}- n_i\hat{p}_j)^2}{n_i\hat{p}_j}\]

follows an approximate chi-square distribution with 2(k−1) − (k−1) = k − 1 degrees of freedom. As was to be proved!
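The statistic we just derived translates directly into code. Below is a minimal sketch (the function name is my own); a table whose two rows have identical category proportions yields a Q of essentially zero:

```python
import numpy as np

def chi_square_homogeneity(y):
    """Q = sum_i sum_j (y_ij - n_i * p_hat_j)^2 / (n_i * p_hat_j),
    with (h - 1)(k - 1) degrees of freedom."""
    y = np.asarray(y, dtype=float)
    n = y.sum(axis=1)                  # n_i: sample totals
    p_hat = y.sum(axis=0) / y.sum()    # pooled p_hat_j
    expected = np.outer(n, p_hat)      # expected counts n_i * p_hat_j
    q = ((y - expected) ** 2 / expected).sum()
    df = (y.shape[0] - 1) * (y.shape[1] - 1)
    return q, df

# Two samples with identical category proportions: Q is ~0, df = k - 1 = 2
q, df = chi_square_homogeneity([[10, 20, 30], [20, 40, 60]])
print(q, df)
```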

Note

Our only example on this page has involved h = 2 samples. If there are more than two samples, that is, if h > 2, then the definition of the chi-square statistic is appropriately modified. That is:

\[Q=\sum_{i=1}^{h}\sum_{j=1}^{k}\frac{(y_{ij}- n_i\hat{p}_j)^2}{n_i\hat{p}_j}\]

follows an approximate chi-square distribution with h(k−1) − (k−1) = (h−1)(k − 1) degrees of freedom.
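For what it's worth, this general statistic is the same one that scipy.stats.chi2_contingency computes for an h × k table, provided the continuity correction is turned off. A quick sketch with hypothetical counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3 x 4 table: h = 3 samples, k = 4 categories
y = np.array([[20, 30, 25, 25],
              [15, 35, 30, 20],
              [25, 25, 25, 25]])

# correction=False matches the uncorrected Q statistic defined above
q, p_value, dof, expected = chi2_contingency(y, correction=False)
print(dof)  # (3 - 1)(4 - 1) = 6 degrees of freedom
```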

Let's take a look at another example.

Example

The head of a surgery department at a university medical center was concerned that surgical residents in training prescribed unnecessary blood transfusions at a different rate than the more experienced attending physicians. Therefore, he ordered a study of the 49 attending physicians and 71 residents in training with privileges at the hospital. For each of the 120 surgeons, the number of blood transfusions prescribed unnecessarily in a one-year period was recorded. Based on the number recorded, a surgeon was identified as prescribing unnecessary blood transfusions Frequently, Occasionally, Rarely, or Never. Here's a summary table (or "contingency table") of the resulting data:

table

Are attending physicians and residents in training distributed equally among the various unnecessary blood transfusion categories?

Solution. We are interested in testing the null hypothesis:

\(H_0 : p_{RF} =p_{AF} \text{ and } p_{RO} =p_{AO} \text{ and } p_{RR} =p_{AR} \text{ and } p_{RN} =p_{AN}\)

against the alternative hypothesis:

\(H_A : p_{RF} \ne p_{AF} \text{ or } p_{RO} \ne p_{AO} \text{ or } p_{RR} \ne p_{AR} \text{ or } p_{RN} \ne p_{AN}\)

The observed data were given to us in the table above. So, the next thing we need to do is find the expected counts for each cell of the table:

table

It is in the calculation of the expected values that you can readily see why we have (2−1)(4−1) = 3 degrees of freedom in this case. That's because we only have to calculate three of the cells directly:

table

Once we do that, the remaining five cells can be calculated by way of subtraction:

table

Now that we have the observed and expected counts, calculating the chi-square statistic is a straightforward exercise:

\[Q=\frac{(2-6.942)^2}{6.942}+ ... +\frac{(5-10.65)^2}{10.65} =31.88  \]

The chi-square test tells us to reject the null hypothesis, at the 0.05 level, if Q exceeds the upper 0.05 critical value of a chi-square distribution with 3 degrees of freedom, that is, if Q > 7.815. Because Q = 31.88 > 7.815, we reject the null hypothesis. There is sufficient evidence at the 0.05 level to conclude that the distribution of unnecessary transfusions differs between attending physicians and residents in training.
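Assuming SciPy is available, the critical value and p-value behind this decision can be reproduced as follows:

```python
from scipy.stats import chi2

q = 31.88                      # the statistic computed above
df = 3                         # (2 - 1)(4 - 1) degrees of freedom
critical = chi2.ppf(0.95, df)  # upper 0.05 critical value, about 7.815
p_value = chi2.sf(q, df)       # tail probability P(chi-square_3 >= Q)
print(round(critical, 3), q > critical)
```

Since Q lands far beyond the critical value, the p-value is far below 0.05, consistent with rejecting the null hypothesis.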

Using Minitab

If you...

(1) ... enter the data (in the inside of the frequency table only) into the columns of the worksheet

(2) ... and select Stat >> Tables >> Chi-square test

then you'll get typical chi-square test output that looks something like this:

minitab output