# 8.3 - Adjacent-Category Logits

Let us suppose that the response categories 1, 2, . . . , *r* are **ordered** (e.g., the nine response categories in the cheese data). Rather than considering the probability of each category versus a baseline, it now makes sense to consider the probability of

outcome 1 versus 2,

outcome 2 versus 3,

outcome 3 versus 4,

...

outcome *r* − 1 versus *r*.

This comparison of adjacent categories is especially natural for the mortality data example: there, we can consider the logits of "alive vs. dead", "cancer death vs. non-cancer death", etc.

The adjacent-category logits are defined as:

\begin{array}{rcl}
L_1 &=& \log \left(\dfrac{\pi_1}{\pi_2}\right)\\
L_2 &=& \log \left(\dfrac{\pi_2}{\pi_3}\right)\\
& \vdots & \\
L_{r-1} &=& \log \left(\dfrac{\pi_{r-1}}{\pi_r}\right)
\end{array}
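The definition above can be checked numerically. The sketch below, using a hypothetical probability vector over *r* = 4 categories, computes the adjacent-category logits and then inverts them: starting from π_r and multiplying backwards by exp(L_j) recovers the probabilities up to normalization.

```python
import numpy as np

# Hypothetical probability vector over r = 4 ordered categories.
pi = np.array([0.1, 0.2, 0.3, 0.4])

# Adjacent-category logits L_j = log(pi_j / pi_{j+1}), j = 1, ..., r-1.
L = np.log(pi[:-1] / pi[1:])

# Inversion: pi_j is proportional to pi_{j+1} * exp(L_j); work backwards
# from category r and normalize to recover the probabilities.
unnorm = np.ones(len(pi))
for j in range(len(pi) - 2, -1, -1):
    unnorm[j] = unnorm[j + 1] * np.exp(L[j])
pi_back = unnorm / unnorm.sum()
```

This shows that the *r* − 1 adjacent-category logits carry exactly the same information as the original *r* probabilities (which must sum to 1).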

This is similar to a baseline-category logit model, but the baseline changes from one category to the next. Suppose we introduce covariates to the model:

\begin{array}{rcl}
L_1 &=& \beta_{10}+\beta_{11}X_1+\cdots+\beta_{1p}X_p\\
L_2 &=& \beta_{20}+\beta_{21}X_1+\cdots+\beta_{2p}X_p\\
& \vdots & \\
L_{r-1} &=& \beta_{r-1,0}+\beta_{r-1,1}X_1+\cdots+\beta_{r-1,p}X_p
\end{array}

The β-coefficients from this model are linear transformations of the β's from the baseline-category model. To see this, suppose that we create a model in which category 1 is the baseline.

Then

\begin{array}{rcl}
\log \left(\dfrac{\pi_2}{\pi_1}\right)&=& -L_1,\\
\log \left(\dfrac{\pi_3}{\pi_1}\right)&=& -L_2-L_1,\\
& \vdots & \\
\log \left(\dfrac{\pi_r}{\pi_1}\right)&=& -L_{r-1}-\cdots-L_2-L_1
\end{array}
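These identities follow from telescoping: log(π_k/π_1) = log(π_k/π_{k−1}) + ⋯ + log(π_2/π_1) = −L_{k−1} − ⋯ − L_1. A quick numerical check, reusing a hypothetical probability vector:

```python
import numpy as np

pi = np.array([0.1, 0.2, 0.3, 0.4])   # hypothetical probabilities
L = np.log(pi[:-1] / pi[1:])           # adjacent-category logits

# Baseline-category logits with category 1 as the baseline.
baseline = np.log(pi[1:] / pi[0])

# They equal the negative cumulative sums of the adjacent-category logits.
neg_cumsum = -np.cumsum(L)
```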

Without further structure, the adjacent-category model is just a reparametrization of the baseline-category model. But now, let's suppose that the effect of a covariate in each of the adjacent-category equations is the same:

\begin{array}{rcl}
L_1 &=& \alpha_1+\beta_1X_1+\cdots+\beta_p X_p\\
L_2 &=& \alpha_2+\beta_1X_1+\cdots+\beta_p X_p\\
& \vdots & \\
L_{r-1} &=& \alpha_{r-1}+\beta_1X_1+\cdots+\beta_p X_p
\end{array}

What does this model mean? Let us consider the interpretation of β_{1}, the coefficient for *X*_{1}. Suppose that we hold all the other *X*'s constant and change the value of *X*_{1}. Think about the 2 × *r* table that shows the probabilities for the outcomes 1, 2, . . . , *r* at a given value of *X*_{1} = *x*, and at a new value *X*_{1} = *x* + 1.

The relationship between *X*_{1} and the response, holding all the other *X*-variables constant, can be described by a set of *r* − 1 odds ratios, one for each pair of adjacent response categories. The adjacent-category logit model says that each of these adjacent-category odds ratios is equal to exp(β_{1}). That is, by the definition of L_j above, β_{1} is the change in the log-odds of falling into category *j* versus category *j* + 1 when *X*_{1} increases by one unit, holding all the other *X*-variables constant.
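The "equal odds ratios" property can be verified directly. The sketch below uses hypothetical intercepts and slope (not fitted from any data in this section): it builds category probabilities from the common-slope model, increases the covariate by one unit, and checks that every adjacent-category odds ratio changes by the same factor exp(β).

```python
import numpy as np

def adjacent_probs(alpha, beta, x):
    """Category probabilities under L_j = alpha_j + beta * x (hypothetical parameters)."""
    r = len(alpha) + 1
    L = alpha + beta * x                 # adjacent-category logits
    unnorm = np.ones(r)
    for j in range(r - 2, -1, -1):       # pi_j proportional to pi_{j+1} * exp(L_j)
        unnorm[j] = unnorm[j + 1] * np.exp(L[j])
    return unnorm / unnorm.sum()

alpha = np.array([-0.5, 0.2, 1.0])       # hypothetical intercepts, r = 4
beta = 0.7                               # hypothetical common slope
p0 = adjacent_probs(alpha, beta, x=2.0)
p1 = adjacent_probs(alpha, beta, x=3.0)  # x increased by one unit

# Each of the r - 1 adjacent-category odds ratios equals exp(beta).
odds0 = p0[:-1] / p0[1:]
odds1 = p1[:-1] / p1[1:]
OR = odds1 / odds0
```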

For example, in the cheese data, take *j* = 6 and *j* + 1 = 7, and compare cheese A (*x*) with cheese B (*x* + 1). From the observed counts, the sample odds ratio for category 7 versus category 6 is (19 × 6)/(8 × 1) = 14.25, so the sample log-odds ratio is log 14.25 ≈ 2.66.

This adjacent-category logit model can be fit using software for Poisson loglinear regression with a specially coded design matrix, or as a log-linear model when all data are categorical. In this model, the association between the *r*-category response variable and *X*_{1} is represented by the interaction of

- *r* − 1 dummy indicators for the response variable with
- a linear contrast for *X*_{1}.
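To make the coding concrete, here is a minimal NumPy sketch of one common way to lay out such a design matrix. All names and values are illustrative: each Poisson cell is a (group, category) pair, nuisance row effects absorb the group totals, *r* − 1 category dummies give the intercepts, and the association enters as the covariate times a linear score in the category index.

```python
import numpy as np

r = 4                                    # response categories (hypothetical)
x = np.array([0.0, 1.0, 2.0])            # covariate value for each of 3 groups

# One Poisson cell per (group, category) combination.
rows, cols = np.meshgrid(np.arange(len(x)), np.arange(1, r + 1), indexing="ij")
rows, cols = rows.ravel(), cols.ravel()

intercept = np.ones((len(rows), 1))
# Nuisance dummies fixing each group's total count.
group_dummies = (rows[:, None] == np.arange(1, len(x))).astype(float)
# r - 1 dummy indicators for the response category.
cat_dummies = (cols[:, None] == np.arange(2, r + 1)).astype(float)
# Association term: linear contrast in the category index times X_1.
assoc = (x[rows] * cols)[:, None]

X = np.hstack([intercept, group_dummies, cat_dummies, assoc])
```

Fitting a Poisson regression of the cell counts on `X` then yields a single coefficient for the association column, which (up to sign and parameterization of the contrast) plays the role of the common slope β_{1}.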

This model can be fit in SAS using PROC CATMOD or PROC GENMOD, and in R using the `VGAM` package (e.g., `vglm()` with the `acat` family), for example. However, we will not discuss this model further, because it is not nearly as popular as the proportional-odds cumulative-logit model for an ordinal response, which we discuss next.