# 5.4 - Tests for the Scale Parameter

The tests we have considered so far in this chapter deal with testing measures of center (location). In practice, these tests are used when we want to know whether one treatment performs "better" than another. In some studies, however, the interest is not in comparing treatments to see which tends to have higher or lower output, but in determining which treatment is more precise. In other words, it may be of interest to compare the variabilities (scale or spread) of the two treatments.

We will present some of these tests in this section. The graph below illustrates two distributions with the same location parameter (in this case the mean) but different variances. The red line in the graph is a Normal distribution with mean of 0 and variance of 1, N(0,1). The blue line is also a Normal distribution with mean 0, but it has a higher variance of 9. As you can see, distributions with higher variances (higher scale values) will have more extreme values (heavier tails) than ones with smaller variances (smaller scales).
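A plot like the one described can be reproduced in R. The exact figure is not shown here, so the plotting options below are an assumption; the two densities are the ones named in the text:

```r
# Two Normal densities with the same mean (0) but different variances:
# N(0,1) in red and N(0,9) in blue (a variance of 9 means sd = 3).
x <- seq(-10, 10, by = 0.01)
plot(x, dnorm(x, mean = 0, sd = 1), type = "l", col = "red",
     xlab = "x", ylab = "density")
lines(x, dnorm(x, mean = 0, sd = 3), col = "blue")
```

The blue curve is lower at the center but noticeably heavier in the tails, which is exactly the pattern described above.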

### Example: (Ingots Example)

Samples of 16-pound ingots were taken from two different distributors, A and B. The ingots are nominally the same weight, but it is equally important that there be as little variability as possible.

**Research Question**: Is there a difference in variability between the two distributors?

We wish to test:

*H*_{0} : the weight distributions are the same

*H*_{1} : there is a difference in variability

**Measurement**: Weight of ingot.

| A | B |
|------|------|
| 15.7 | 15.4 |
| 16.1 | 16.0 |
| 15.9 | 15.6 |
| 16.2 | 15.7 |
| 15.9 | 16.6 |
| 16.0 | 16.3 |
| 15.8 | 16.4 |
| 16.1 | 16.8 |
| 16.3 | 15.2 |
| 16.5 | 16.9 |
| 15.5 | 15.1 |

**Statistical Model**:

Assume the distributions are Normal and, in this case, that they have the same mean.

\(H_0 : \sigma_{1}^{2} = \sigma_{2}^{2}\)

\(H_1 : \sigma_{1}^{2} \ne \sigma_{2}^{2}\) (two-sided), or one of the one-sided alternatives \(H_1 : \sigma_{1}^{2} < \sigma_{2}^{2}\) or \(H_1 : \sigma_{1}^{2} > \sigma_{2}^{2}\).

Equality of variances across samples is called **homogeneity of variance**.

First, let's take a look at the data. Below is a comparative box plot. To do this in R, use the following commands.

> a=c(15.7, 16.1, 15.9, 16.2, 15.9, 16.0, 15.8, 16.1, 16.3, 16.5, 15.5)

> b=c(15.4, 16.0, 15.6, 15.7, 16.6, 16.3, 16.4, 16.8, 15.2, 16.9, 15.1)

> boxplot(a, b, names=c("A", "B"))

The two distributions appear to be symmetric with the same mean or median. The assumptions of Normality and equal means do not seem to be violated here. The second distribution, B, seems to be more variable than distribution A.
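This visual impression can be checked numerically by computing the two sample variances directly (using the vectors `a` and `b` defined above):

```r
# Ingot weights from the two distributors
a <- c(15.7, 16.1, 15.9, 16.2, 15.9, 16.0, 15.8, 16.1, 16.3, 16.5, 15.5)
b <- c(15.4, 16.0, 15.6, 15.7, 16.6, 16.3, 16.4, 16.8, 15.2, 16.9, 15.1)
var(a)  # 0.08  -- sample variance for distributor A
var(b)  # 0.384 -- sample variance for distributor B, almost five times larger
```

Both samples happen to have mean 16.0, so any difference between the two groups really is a difference in spread.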

One test we can use is the F-test for equal variances.

\(F=\frac{S_{2}^{2}}{S_{1}^{2}}\)

where \(S_{1}^{2}\) is the smaller of the two sample variances, so that \(F \ge 1\).

Recall that \(S^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2/(n-1)\) is the sample variance. Under the assumptions above, *F* ~ *F*_{n-1, m-1}, where *n* is the size of the sample in the numerator and *m* the size of the sample in the denominator. This means that the test statistic, *F*, follows an *F*-distribution with degrees of freedom *n* - 1 and *m* - 1. We can use the *F*-distribution to get the *p*-value.
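In R, this F-test is available as `var.test()` in the base `stats` package. A minimal sketch for the ingots data, with the larger sample variance (distributor B) in the numerator:

```r
a <- c(15.7, 16.1, 15.9, 16.2, 15.9, 16.0, 15.8, 16.1, 16.3, 16.5, 15.5)
b <- c(15.4, 16.0, 15.6, 15.7, 16.6, 16.3, 16.4, 16.8, 15.2, 16.9, 15.1)
out <- var.test(b, a)  # F = var(b) / var(a)
out$statistic          # F = 0.384 / 0.08 = 4.8
out$parameter          # degrees of freedom: 10 and 10
out$p.value            # two-sided p-value from the F(10, 10) distribution
```

The full `var.test()` output also includes a confidence interval for the ratio of variances; the one-sided alternatives can be requested with the `alternative` argument.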

It turns out that the *F*-test performs very poorly if the Normality assumption is violated, even when the distribution is non-Normal but symmetric. Simulations show that it works well when the data come from Normal distributions, but badly for essentially any other distribution. So, what do we do?
