# 8.4.1 - Example - Cheese Tasting


This example is taken from McCullagh and Nelder (1989). Subjects were randomly assigned to taste one of four different cheeses, and each rated the cheese on a nine-point scale, where 1 = strong dislike and 9 = excellent taste.

Counts of individuals giving each rating (columns are response categories 1–9):

| Cheese | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|--------|---|---|----|----|----|---|----|----|----|
| A | 0 | 0 | 1 | 7 | 8 | 8 | 19 | 8 | 1 |
| B | 6 | 9 | 12 | 11 | 7 | 6 | 1 | 0 | 0 |
| C | 1 | 1 | 6 | 8 | 23 | 7 | 5 | 1 | 0 |
| D | 0 | 0 | 0 | 1 | 3 | 7 | 14 | 16 | 11 |

From the above table, which records the number of individuals that rated a particular cheese with a particular score, we can easily see that D is the most preferred and B is the worst; however, between A and C the ordering of preference is not so straightforward. Let us model these data by a proportional-odds cumulative-logit model with three dummy variables to distinguish among the four cheeses. The model will have 8 intercepts (one for each of the logit equations) and 3 slopes, for a total of 11 free parameters. By comparison, the saturated model, which fits a separate 9-category multinomial distribution to each of the four cheeses, has 4 × (9 − 1) = 32 parameters to be estimated. Therefore, the overall goodness-of-fit test will have 32 − 11 = 21 degrees of freedom.

This model can be fit in SAS using PROC LOGISTIC as follows in the SAS program cheese.sas:
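The program itself is not reproduced on this page, so here is a hedged sketch of what cheese.sas might look like. It uses a FREQ statement for the grouped counts and a CLASS statement with cheese A as the reference level; the course file may instead code the three dummy variables by hand, which gives the same fit:

```sas
/* Hedged reconstruction of cheese.sas -- not necessarily the course's exact file */
data cheese;
   input cheese $ response count @@;
   datalines;
A 1 0  A 2 0  A 3 1  A 4 7  A 5 8  A 6 8  A 7 19  A 8 8  A 9 1
B 1 6  B 2 9  B 3 12 B 4 11 B 5 7  B 6 6  B 7 1   B 8 0  B 9 0
C 1 1  C 2 1  C 3 6  C 4 8  C 5 23 C 6 7  C 7 5   C 8 1  C 9 0
D 1 0  D 2 0  D 3 0  D 4 1  D 5 3  D 6 7  D 7 14  D 8 16 D 9 11
;
run;

proc logistic data=cheese order=data descending;
   freq count;                         /* grouped counts */
   class cheese / param=ref ref='A';   /* dummy coding, A = reference */
   model response = cheese;            /* proportional-odds cumulative logit */
run;
```

Note that observations with a zero frequency are simply dropped by the FREQ statement, so the empty cells of the table do no harm.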

In PROC LOGISTIC, the order=data option tells SAS to arrange the response categories in the order in which they first appear in the dataset. The descending option then reverses that ordering, so that 9 becomes the lowest category and 1 the highest; with this coding, a positive β indicates that a higher value of X leads to greater liking.

Other procedures such as PROC GENMOD can also be used; this is left for your exploration.

This model can be fit in R using the polr() function from the MASS package, as in the R program cheese.R; this program reads in the data file, cheese.dat.

As always, you need to change the working directory to wherever you saved cheese.dat locally, or change the path used to access the data. The data has three columns: the first, "Cheese", indicates the type of cheese; the second, "Response", indicates the level of the response; and the third, "N", gives the number of individuals who assigned that rating to that cheese.
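Because cheese.dat is not reproduced on this page, the sketch below embeds the counts from the table directly in a data frame; apart from that, it is a plausible (not verbatim) reconstruction of what cheese.R does:

```r
library(MASS)  # provides polr()

# Counts from the cheese-tasting table, standing in for read.table("cheese.dat", ...)
cheese <- data.frame(
  Cheese   = rep(c("A", "B", "C", "D"), each = 9),
  Response = rep(1:9, times = 4),
  N = c(0, 0,  1,  7,  8, 8, 19,  8,  1,
        6, 9, 12, 11,  7, 6,  1,  0,  0,
        1, 1,  6,  8, 23, 7,  5,  1,  0,
        0, 0,  0,  1,  3, 7, 14, 16, 11)
)

# Treat the response as ordinal: 1 < 2 < ... < 9
cheese$Response <- factor(cheese$Response, ordered = TRUE)

# Drop the empty cells (they carry no information) and fit the
# proportional-odds model to the grouped counts
cheese <- subset(cheese, N > 0)
result <- polr(Response ~ Cheese, data = cheese, weights = N, Hess = TRUE)
summary(result)
```

With this data, the fit reproduces the estimates discussed below (slopes of about −3.352, −1.710, and 1.613 for cheeses B, C, and D).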

Make sure that you treat the response as an ordinal variable, which we do here with `factor(cheese$Response, order=T)`; the option order=T tells R to use the order of the levels of the response as entered in the dataset, which in this case is 1 < 2 < 3 < ... < 8 < 9. Then fit the proportional-odds logistic regression model using the polr() function. If weights=N is not specified, R assumes by default that N = 1 for every row, i.e., that the data are ungrouped. There are other procedures in R that could be used to fit this model; this is left for your exploration.

The logit equations would then be

\begin{array}{rcl}
L_1 &=& \text{log} \dfrac{P(Y \leq 1)}{P(Y > 1)}=\alpha_1+\beta_1X_1+\beta_2X_2+\beta_3X_3\\
L_2 &=& \text{log} \dfrac{P(Y \leq 2)}{P(Y > 2)}=\alpha_2+\beta_1X_1+\beta_2X_2+\beta_3X_3\\
& \vdots & \\
L_8 &=& \text{log} \dfrac{P(Y \leq 8)}{P(Y > 8)}=\alpha_8+\beta_1X_1+\beta_2X_2+\beta_3X_3
\end{array}

where X1 = 1 for cheese B and zero otherwise, X2 = 1 for cheese C and zero otherwise, and X3 = 1 for cheese D and zero otherwise. In this case, a positive coefficient β means that increasing the value of X tends to lower the response categories (i.e., produce greater dislike).

When we fit this model in SAS, the first output section reports a test of the proportional-odds assumption, i.e., a test of whether the slopes of the X-variables are equal across the logit equations. The null hypothesis is that the current model (one slope per dummy variable) is correct, and the alternative is that each logit equation needs its own slopes (8 × 3 = 24 in all). The test therefore has 24 − 3 = 21 degrees of freedom. The alternative model happens to be saturated, so this is also an overall goodness-of-fit test. The test does not indicate any serious lack of fit. The overall fit statistics, shown below, can be evaluated as before and indicate a good fit.
For example, all three statistics in the "Testing Global Null Hypothesis: BETA=0" table indicate that at least one of the regression coefficients is different from zero. From the "Type 3 Analysis of Effects" table, we can see that the effect of cheese is highly significant.

Now let us look at the coefficients. The first parameter, labeled Intercept 9, is the estimated log-odds of falling into category 9 (excellent taste) versus all other categories when all X-variables are zero. Because X1 = X2 = X3 = 0 when cheese = A (the reference level), the estimated log-odds of excellent taste for cheese A are −3.1058, corresponding to odds of exp(−3.1058). From the output, the first estimated logit equation is

\begin{array}{rcl}
l_1 = \text{log} \dfrac{P(Y = 9)}{P(Y < 9)}=\text{log} \dfrac{P(Y > 8)}{P(Y \leq 8)}=-3.1058-3.3517X_1-1.7098X_2+1.6128X_3
\end{array}

which is the estimate of $-L_8 = -\text{log} \dfrac{P(Y \leq 8)}{P(Y > 8)}=-(\alpha_8+\beta_1X_1+\beta_2X_2+\beta_3X_3)$ from the model stated earlier. The next parameter, labeled Intercept 8, is the estimated log-odds of response 8 or 9 versus 1, . . . , 7 when cheese = A, and so on. Its estimated logit equation is

\begin{array}{rcl}
l_2 = -L_7 = \text{log} \dfrac{P(Y \geq 8)}{P(Y < 8)}=-1.5459-3.3517X_1-1.7098X_2+1.6128X_3
\end{array}

The estimated slope for the first dummy variable, labeled cheese B, is −3.3517. This indicates that cheese B does not taste as good as cheese A.
If we were to split the response scale into "better" and "worse" groups, for example taking better = 7, 8, 9 and worse = 1, 2, 3, 4, 5, 6, or taking better = 5, 6, 7, 8, 9 and worse = 1, 2, 3, 4, and then construct a 2 × 2 table comparing cheese A to cheese B, the estimated odds ratio for this table would be exp(−3.3517) = 0.035. PROC LOGISTIC also gives you this value as part of the "Odds Ratio Estimates" output, along with its confidence interval, (0.015, 0.080), which indicates a significant difference in preference between cheeses A and B.

Looking at all three coefficients,

\begin{array}{lcl}
\hat{\beta}_1=-3.3517\\
\hat{\beta}_2=-1.7098\\
\hat{\beta}_3=1.6128
\end{array}

and noting that cheese A is the reference category, so that $\hat{\beta}_1$ compares cheese B to A, $\hat{\beta}_2$ cheese C to A, and $\hat{\beta}_3$ cheese D to A, we see that the implied ordering of the cheeses in terms of quality is D > A > C > B. Furthermore, D is significantly preferred over A, but A is not significantly better than C.

Here is the output for the R program cheese.R. The first part of the output gives the coefficient estimates for the three dummy variables. The estimated slope for the first dummy variable, labeled CheeseB, is −3.352. This again indicates that cheese B does not taste as good as cheese A; for any "better" versus "worse" split of the response scale, the estimated odds ratio comparing cheese A to cheese B is exp(−3.352) = 0.035. The standard error of 0.4287 and the t-value (t-statistic) of −7.819 indicate that there is a significant difference in preference between cheeses A and B.
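As a rough check on the reported interval, a 95% Wald interval for the A-versus-B odds ratio can be recomputed from the coefficient estimate and its standard error of roughly 0.4287 (any small discrepancy in the limits is rounding):

```r
# 95% Wald confidence interval for the A-vs-B odds ratio:
# exp(estimate +/- 1.96 * standard error)
exp(-3.3517 + c(-1, 1) * 1.96 * 0.4287)   # about (0.015, 0.081)
```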
Looking at all three coefficients,

\begin{array}{lcl}
\hat{\beta}_1=-3.352\\
\hat{\beta}_2=-1.710\\
\hat{\beta}_3=1.613
\end{array}

and noting that cheese A is the reference category, so that $\hat{\beta}_2$ compares cheese C to A and $\hat{\beta}_3$ cheese D to A, we see that the implied ordering of the cheeses in terms of quality is again D > A > C > B. Furthermore, D is significantly preferred over A, but A is not significantly better than C. This can also be seen by computing confidence intervals for the coefficients, or more specifically for the odds ratios:

> exp(confint(result))
Waiting for profiling to be done...
Re-fitting to get Hessian
             2.5 %      97.5 %
CheeseB 0.01479494  0.07962504
CheeseC 0.08621686  0.37084182
CheeseD 2.40947358 10.74767430

The next part of the output is the table of estimated intercepts. The first parameter, labeled Intercept 1|2, is the estimated log-odds of falling into category 1 (strong dislike) versus all other categories when all X-variables are zero. Because X1 = X2 = X3 = 0 when cheese = A, the estimated log-odds of the worst rating for cheese A are −5.4674, corresponding to odds of exp(−5.4674). From the output, the first estimated logit equation is

\begin{array}{rcl}
L_1 = \text{log} \dfrac{P(Y \leq 1)}{P(Y > 1)}=-5.4674-3.352X_1-1.710X_2+1.613X_3
\end{array}

which is the estimate of $L_1 = \text{log} \dfrac{P(Y \leq 1)}{P(Y > 1)}=\alpha_1+\beta_1X_1+\beta_2X_2+\beta_3X_3$ stated earlier.

The next parameter, labeled Intercept 2|3, is the estimated log-odds of response 1 or 2 versus 3, . . . , 9 when cheese=A, and so on. Its estimated logit equation is

\begin{array}{rcl}
L_2 = \text{log} \dfrac{P(Y \leq 2)}{P(Y > 2)}=-4.4122-3.352X_1-1.710X_2+1.613X_3\\
\end{array}
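As a quick sanity check on these intercepts, the inverse logit turns them into fitted cumulative probabilities for the baseline cheese A:

```r
# Fitted cumulative probabilities for cheese A (X1 = X2 = X3 = 0),
# from the inverse logit of the estimated intercepts
plogis(-5.4674)   # P(Y <= 1 | A), about 0.004
plogis(-4.4122)   # P(Y <= 2 | A), about 0.012
```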

The last part of the output gives the fit statistics, where the "residual deviance" equals −2 times the maximized log-likelihood of the current model. We can use this value to assess how well this model fits in comparison to other models. For example, to compare it to the intercept-only model ("nullmodel" in the R code):

> deviance(nullmodel)- deviance(result)
[1] 148.4539
> df.residual(nullmodel)- df.residual(result)
[1] 3

We get a likelihood-ratio statistic of about 148 with 3 df, which is highly significant. Thus this model fits much better than the intercept-only model; that is, at least one of the regression coefficients is not equal to zero. The same result can be obtained by calling the anova() function:

> anova(result, nullmodel)
Likelihood ratio tests of ordinal regression models

Response: Response
Model Resid. df Resid. Dev   Test    Df LR stat. Pr(Chi)
1      1       200   859.8018
2 Cheese       197   711.3479 1 vs 2     3 148.4539       0
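The Pr(Chi) of 0 in the last column can be confirmed directly from the chi-square distribution; the tail probability is so small that it prints as zero:

```r
# Upper-tail probability of a chi-square(3) at the LR statistic
pchisq(148.4539, df = 3, lower.tail = FALSE)
```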