6.2 - Binary Logistic Regression with a Single Categorical Predictor

Key Concepts

  • Binary Logistic Regression for 2 × J tables
  • Model Fit and Parameter Estimation & Interpretation
  • Link to test of independence 

Objectives

  • Understand the basic ideas behind modeling categorical data with binary logistic regression.
  • Understand how to fit the model and interpret the parameter estimates, especially in terms of odds and odds ratios.

Overview

Binary logistic regression estimates the probability that a characteristic is present (e.g., the probability of "success") given the values of explanatory variables, in this case a single categorical variable: π = Pr(Y = 1 | X = x). Suppose a physician is interested in estimating the proportion of diabetic persons in a population. Naturally she knows that all sections of the population do not have an equal probability of 'success', i.e., of being diabetic. Older individuals, those with hypertension, and those with a family history of diabetes are more likely to be diabetic. Consider the predictor variable X to be any one of the risk factors that might contribute to the disease. The probability of success will then depend on the level of the risk factor.

Variables:

Model:

\[\pi_i=Pr(Y_i=1|X_i=x_i)=\dfrac{\text{exp}(\beta_0+\beta_1 x_i)}{1+\text{exp}(\beta_0+\beta_1 x_i)}\]

or,

\[\begin{align}
\text{logit}(\pi_i)&=\text{log}\left(\dfrac{\pi_i}{1-\pi_i}\right)\\
&= \beta_0+\beta_1 x_i
\end{align}\]

or, with \(k\) predictors, \(\text{logit}(\pi_i)=\beta_0+\beta_1 x_{i1} + \ldots + \beta_k x_{ik}\).

Assumptions:

Model Fit:

Parameter Estimation:

The maximum likelihood estimator (MLE) for (β0, β1) is obtained by finding \((\hat{\beta}_0,\hat{\beta}_1)\) that maximizes:

\(L(\beta_0,\beta_1)=\prod\limits_{i=1}^N \pi_i^{y_i}(1-\pi_i)^{n_i-y_i}=\prod\limits_{i=1}^N \dfrac{\text{exp}\{y_i(\beta_0+\beta_1 x_i)\}}{\{1+\text{exp}(\beta_0+\beta_1 x_i)\}^{n_i}}\)

In general, there are no closed-form solutions, so the ML estimates are obtained by using iterative algorithms such as Newton-Raphson (NR), or Iteratively re-weighted least squares (IRWLS). In Agresti (2013), see section 4.6.1 for GLMs, and for logistic regression, see sections 5.5.4-5.5.5.

More Advanced material (not required):

Brief overview of Newton-Raphson (NR).  Suppose we want to maximize a loglikelihood $l(\theta)$ with respect to a parameter $\theta=(\theta_1,\ldots,\theta_p)^T$. At each step of NR, the current estimate $\theta^{(t)}$ is updated as

\[
\theta^{(t+1)} \,=\,\theta^{(t)} \,+\,
\left[ -l^{\prime\prime}(\theta^{(t)})\right]^{-1}\,
l^\prime(\theta^{(t)}),
\]
where $l^\prime(\theta)$ is the vector of first derivatives $ l^\prime(\theta)\,=\,\left(\partial l/\partial\theta_1,\ldots,\partial l/\partial\theta_p\right)^T$ (also called the score vector), and $l^{\prime\prime}(\theta)$ is the matrix of second derivatives (also called the Hessian). That is, $l^{\prime\prime}(\theta)$ is the $p\times p$ matrix with $(j,k)$th element equal to $\partial^2 l/\partial\theta_j\partial\theta_k$. We repeat this step until convergence, when $\theta^{(t+1)}\approx\theta^{(t)}$. Upon convergence, the inverse of the Hessian provides an estimated covariance matrix,
\[
\hat{V}(\hat{\theta}) \,=\,
\left[- l^{\prime\prime}(\hat{\theta})\right]^{-1}
\]
(the inverse of the "observed information"), whose diagonal elements give the estimated variances of the parameter estimates; their square roots are the standard errors.

Applying NR to logistic regression. For the logit model with $p-1$ predictors $X$, $\beta=(\beta_0, \beta_1, \ldots, \beta_{p-1})$, the likelihood is

\begin{eqnarray}
L(\beta) & = &
\prod_{i=1}^N \frac{n_i!}{y_i!(n_i-y_i)!}\,
\pi_i^{y_i}(1-\pi_i)^{n_i-y_i}\\
& \propto & \prod_{i=1}^N
\left(\frac{\pi_i}{1-\pi_i}\right)^{y_i}\,
(1-\pi_i)^{n_i}\\
& \propto & \prod_{i=1}^N
e^{x_i^T\!\beta y_i}\,
\left(1+e^{x_i^T\!\beta}\right)^{-n_i},
\end{eqnarray}

so the loglikelihood is $
l(\beta) \,=\,
\sum_{i=1}^N
x_i^T\!\beta y_i \, -\,
\sum_{i=1}^N n_i\,\log\left(1 + e^{x_i^T\!\beta}\right).$

The first derivative of $x_i^T\beta$ with respect to $\beta_j$ is $x_{ij}$, thus
\begin{eqnarray}
\frac{\partial l}{\partial\beta_j} & = &
\sum_{i=1}^N y_i x_{ij} \, - \,
\sum_{i=1}^N n_i
\left(\frac{e^{x_i^T\!\beta}}{1+e^{x_i^T\!\beta}}\right) x_{ij} \\
& = &
\sum_{i=1}^N
(y_i - \mu_i)x_{ij},
\end{eqnarray}
where $\mu_i = E(y_i) = n_i\pi_i$.

The second derivatives used in computing the standard errors of the parameter estimates, $\hat{\beta}$, are

\begin{eqnarray}
\frac{\partial^2 l}{\partial\beta_j\partial\beta_k}
& = &
-\sum_{i=1}^N n_ix_{ij}\,\frac{\partial}{\partial\beta_k}
\left(\frac{e^{x_i^T\beta}}{1+e^{x_i^T\!\beta}}\right) \\
& = &
-\sum_{i=1}^N n_i \pi_i(1-\pi_i)x_{ij}x_{ik}.
\end{eqnarray}

For reasons including numerical stability and speed, it is generally advisable to avoid computing matrix inverses directly.  Thus in many implementations, clever methods are used to obtain the required information without directly constructing the inverse, or even the Hessian.
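To make the iteration concrete, here is a minimal R sketch of the Newton-Raphson update for a grouped binomial logit model, written directly from the score and Hessian above. The data (y, n, X) are hypothetical, chosen only for illustration; in practice glm() carries out the equivalent Fisher-scoring iteration with many more safeguards.

## Minimal Newton-Raphson for a grouped binomial logit model (illustrative only).
## Hypothetical data: two groups with y successes out of n trials.
y <- c(10, 25)                 # successes per group
n <- c(40, 50)                 # trials per group
X <- cbind(1, c(0, 1))         # design matrix: intercept + one dummy

beta <- rep(0, ncol(X))        # starting values
for (t in 1:25) {
  eta <- as.vector(X %*% beta)
  pi  <- exp(eta) / (1 + exp(eta))
  mu  <- n * pi
  score   <- t(X) %*% (y - mu)                    # l'(beta)
  hessian <- -t(X) %*% (X * (n * pi * (1 - pi)))  # l''(beta)
  step <- solve(-hessian, score)                  # [-l'']^{-1} l'(beta)
  beta <- beta + as.vector(step)
  if (max(abs(step)) < 1e-10) break               # stop when the update is negligible
}
beta                          # MLEs: here log(10/30) and log(3), i.e. about -1.099 and 1.099
sqrt(diag(solve(-hessian)))   # standard errors from the inverse observed information

Upon convergence, the square roots of the diagonal of the inverted (negative) Hessian reproduce the standard errors that glm() reports.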

 

Interpretation of Parameter Estimates:

The ratio of the odds of success at \(x_{i1}+1\) to the odds at \(x_{i1}\) is

\(\dfrac{\text{exp}(\beta_0+\beta_1(x_{i1}+1))}{\text{exp}(\beta_0+\beta_1 x_{i1})}=\text{exp}(\beta_1)\)

In general, the logistic model stipulates that the effect of a covariate on the chance of "success" is linear on the log-odds scale, or multiplicative on the odds scale.
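As a tiny numeric illustration of this multiplicative interpretation (with arbitrary, hypothetical coefficient values), the ratio of the odds for a one-unit increase in x is exp(β1) regardless of the starting value of x:

## Hypothetical coefficients; under the logit model, odds(x) = exp(b0 + b1*x).
b0 <- -1.5; b1 <- 0.7
odds <- function(x) exp(b0 + b1 * x)
odds(3) / odds(2)   # odds ratio for a one-unit increase in x
exp(b1)             # the same value, exp(beta1), about 2.01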

Inference for Logistic Regression:

_________________________

Example - Student Smoking

The table below classifies 5375 high school students according to the smoking behavior of the student (Z) and the smoking behavior of the student's parents (Y ). We saw this example in Lesson 3 (Measuring Associations in I × J tables, smokeindep.sas and smokeindep.R). 

How many parents smoke?    Student smokes?
                           Yes (Z = 1)    No (Z = 2)
Both (Y = 1)                   400           1380
One (Y = 2)                    416           1823
Neither (Y = 3)                188           1168

The test for independence yields X2 = 37.6 and G2 = 38.4 with 2 df (p-values are essentially zero), so Y and Z are related. It is natural to think of Z as a response and Y as a predictor in this example. We might be interested in exploring the dependency of student's smoking behavior on neither parent smoking versus at least one parent smoking. Thus we can combine or collapse the first two rows of our 3 × 2 table and look at a new 2 × 2 table:

 
                         Student smokes    Student does not smoke
1–2 parents smoke              816                  3203
Neither parent smokes          188                  1168

For the chi-square test of independence, this table has X2 = 27.7, G2 = 29.1, p-value ≈ 0, and estimated odds ratio \(\hat{\theta}=1.58\). Therefore, we estimate that a student is 58% more likely, on the odds scale, to smoke if he or she has at least one smoking parent than if neither parent smokes.

But what if:

These are just some of the possibilities of logistic regression, which cannot be handled within a framework of goodness-of-fit only.

Consider the simplest case:

Arrange the data in our running example like this,

 
                         yi      ni
1–2 parents smoke        816    4019
Neither parent smokes    188    1356

where yi is the number of children who smoke, ni is the number of children for a given level of parents' smoking behaviour, and πi is the probability that a randomly chosen student from group i smokes, given the parents' smoking status. Here i takes values 1 and 2. Thus, we claim that all students who have at least one parent smoking have the same probability of "success", and all students who have neither parent smoking have the same probability of "success", though the success probabilities differ between the two groups. Then the response variable Y has a binomial distribution:

\(y_i \sim Bin(n_i,\pi_i)\)

Explanatory variable X is a dummy variable such that

Xi = 0 if neither parent smokes,
Xi = 1 if at least one parent smokes.

Understanding the use of dummy variables is important in logistic regression.

Think about the following question:

Can you explain to someone what is meant by a "dummy variable"?

Then the logistic regression model can be expressed as:

\(\text{logit}(\pi_i)=\text{log}\dfrac{\pi_i}{1-\pi_i}=\beta_0+\beta_1 X_i\) (1) or 

\(\pi_i=\dfrac{\text{exp}(\beta_0+\beta_1 x_i)}{1+\text{exp}(\beta_0+\beta_1 x_i)}\)  (2)

and it says that the log-odds of a randomly chosen student smoking is β0 when neither parent smokes, and β0 + β1 when at least one parent smokes.
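Before turning to software, it is worth checking this interpretation directly from the collapsed 2 × 2 table; the short R computation below anticipates the fitted values reported in the next sections.

## Collapsed 2 x 2 table: group 1 = at least one parent smokes, group 2 = neither.
y <- c(816, 188)    # students who smoke
n <- c(4019, 1356)  # total students in each group
odds <- (y / n) / (1 - y / n)   # observed odds of smoking in each group

log(odds[2])            # beta0: log-odds for the "neither" group = log(188/1168) = -1.827
log(odds[1] / odds[2])  # beta1: log odds ratio = 0.459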

6.2.1 - Fitting the Model in SAS

There are different ways to do this depending on the format of the data. As before, for details you need to refer to SAS or R help. Here are some general guidelines to keep in mind. Please note that we make a distinction about the way the data are entered into SAS (or R).

If data come in a tabular form, i.e., response pattern is with counts (as seen in the previous example).

  • PROC GENMOD: We need a variable that specifies the number of cases that equals marginal frequency counts or number of trials (e.g. n), and the number of events (y)
  • model y/n = x1 x2 /link=logit dist=binomial [any other options you may want];

  • PROC LOGISTIC: We do need a variable that specifies the number of cases that equals marginal frequency counts
  • model y/n = x1 x2 / [put any other options you may want here];

If data come in a matrix form, i.e., subject × variables matrix with one line for each subject, like a database

  • PROC GENMOD: We need a variable that specifies the number of cases that equals marginal frequency counts or number of trials (e.g. n), and the number of events (y)
  • model y/n = x1 x2 / link = logit dist = binomial [put any other options you may want here];

  • PROC LOGISTIC: We do NOT need a variable that specifies the number of cases that equals marginal frequency counts
  • model y = x1 x2 / [any other options you may want];

For both GENMOD and LOGISTIC, as before, include interaction terms with *, and make sure to include all lower order terms.

We will follow the SAS output through to explain the different parts of model fitting. The output from R will be essentially the same.

Example - Student Smoking

Let's begin with the collapsed 2 × 2 table:

 
                         Student smokes    Student does not smoke
1–2 parents smoke              816                  3203
Neither parent smokes          188                  1168

Let's look at one part of smoke.sas:

SAS program

In the data step, the dollar sign $ as before indicates that S is a character-string variable.

In the logistic step, the statement:

descending
ensures that you are modeling the probability of the "event," which takes value 1; otherwise, by default, SAS models the probability of the "nonevent."
 
class S (ref=first) / param=ref;
says that S should be coded as a categorical variable using the first category as the reference or zero group. (The first category is "nosmoke," because it comes before "smoke" in alphabetical order). You can vary this order by additional options provided by class and/or by entering data in a different order. See SAS online help for details, and the rest of smoke.sas for more options.
 
model y/n = s /scale = none
Because we have grouped data (i.e. multiple trials per line of the data set), the model statement uses the "event/trial" syntax, in which y/n appears on the left-hand side of the equal sign. The predictors go on the right-hand side, separated by spaces if there is more than one. An intercept is added automatically by default.
 
scale=none
option serves two purposes. First, it gives us the overall goodness-of-fit test statistics, the deviance G2 and the Pearson X2. Second, it lets us specify a value for a dispersion parameter in order to correct for over- or under-dispersion; in this case, we are NOT adjusting for either. Overdispersion is an important concept with discrete data. In the context of logistic regression, overdispersion occurs when the discrepancies between the observed responses yi and their predicted values \(\hat{\mu}_i = n_{i}\hat{\pi}_i\) are larger than what the binomial model would predict.

If yi ~ Bin(ni, πi), the mean is μi = ni πi and the variance is μi(ni − μi)/ni. Overdispersion means that the data show evidence that the variance of the response yi is greater than μi(ni − μi)/ni.

If overdispersion is present in a dataset, the estimated standard errors and test statistics for individual parameters and the overall goodness-of-fit will be distorted and adjustments should be made. We will look at this briefly later when we look into continuous predictors, but you can also read Agresti (2013), Sec. 4.7, Agresti (2007), Sec. 4.4.4, and the SAS online description of dispersion and adjustments at: http://support.sas.com/onlinedoc/913/getDoc/en/statug.hlp/logistic_sect35.htm#stat_logistic_logisticod

Now let's review some of the output from the program smoke.sas that uses PROC LOGISTIC ('jump-down links' below):

Model Information

SAS output

This section, as before, tells us which dataset we are manipulating, the labels of the response and explanatory variables, what type of model we are fitting (e.g. binary logit), and the type of scoring algorithm used for parameter estimation. Fisher scoring is a variant of the Newton-Raphson method for ML estimation. In logistic regression they are equivalent.

Response Profile

SAS output

This information tells us how many observations were "successful", e.g., how many "event" observations take value 1, that is, how many children smoke: 816 + 188 = 1004; and how many are "nonevents", i.e., how many do NOT smoke: (4019 − 816) + (1356 − 188) = 4371. Recall that the model is estimating the probability of the "event".

Class Level Information Design

SAS output

From an explanatory categorical (i.e, class) variable S with 2 levels (0,1), we created one dummy variable (e.g. design variable):

X1 = 1 ("smoke") if parent smoking = at least one,
X1 = 0 ("nosmoke") if parent smoking = neither

parent smoking = nosmoke is equal 0 and is the baseline.

Model Convergence Status

SAS output

Since we are using an iterative procedure to fit the model, that is, to find the ML estimates, we need some indication of whether the algorithm converged.

Overall goodness-of-fit testing

Test: H0 : current model vs. HA : saturated model

SAS output

The goodness-of-fit statistics X2 and G2 from this model are both zero, because the model is saturated. However, suppose that we fit the intercept-only model. This is accomplished by removing the predictor from the model statement, like this:

model y/n = / scale=none;

The goodness-of-fit statistics are shown below.

SAS output

The Pearson statistic X2 = 27.6766 is precisely equal to the usual X2 for testing independence in the 2 × 2 table, and the deviance G2 = 29.1207 is precisely equal to the G2 for testing independence in the 2 × 2 table. The intercept-only model, that is, the null logistic regression model, states that student's smoking is unrelated to parents' smoking (i.e., it assumes independence, or odds ratio = 1). But clearly, based on the values of the calculated statistics, this model (i.e., independence) does NOT fit well. This example shows that analyzing a 2 × 2 table for association is equivalent to logistic regression with a single dummy variable. Later on we will compare these tests to the loglinear model of independence; see smokelog.sas and smokelog.lst.

The goodness-of-fit statistics, X2 and G2, are defined as before in the tests of independence and loglinear models (e.g. compare observed and fitted values). For the χ2 approximation to work well, we need the ni's sufficiently large so that the expected values, \(\hat{\mu}_i\) ≥ 5, and ni − \(\hat{\mu}_i\) ≥ 5 for most of the rows i. We can afford to have about 20% of these values less than 5, but none of them should fall below 1.

With real data, we often find that the ni's are not big enough for an accurate test. There is no single best solution for handling this; a reasonable approach depends strongly on the data and the context of the problem.

Analysis of Maximum Likelihood Estimates

Once an appropriate model is fitted, the success probabilities need to be estimated using the model parameters. Note that success probabilities are now NOT simply the ratio of observed number of successes and the number of trials. A model fit introduces a structure on the success probabilities. The estimates will now be functions of model parameters.

What are the parameter estimates? What is the fitted model?

SAS output

The fitted model is \(\text{logit}(\hat{\pi_i})=\text{log}\dfrac{\hat{\pi_i}}{1-\hat{\pi_i}}=\hat{\beta_0}+\hat{\beta_1} X_i=-1.827+0.459\,smoke\) 

where smoke is a dummy variable (e.g. design variable) that takes value 1 if at least one parent is smoking and 0 if neither is smoking, as discussed above.

The estimated \(\hat{\beta}_0=-1.827\) with a standard error of 0.078 is statistically significant; it says that the log-odds of a child smoking versus not smoking when neither parent smokes (the baseline level) is -1.827.

The estimated \(\hat{\beta}_1=0.459\) with a standard error of 0.088 is statistically significant; it says that the log-odds ratio of a child smoking versus not smoking, comparing at least one parent smoking with neither parent smoking (the baseline level), is 0.459. Then \(\text{exp}(0.459)=1.58\) is the estimated odds ratio; compare with our analysis of the 2 × 2 table above.

Thus there is a strong association between parent's and children smoking behavior, and the model fits well.

And the estimated probability of a student smoking given the explanatory variable (e.g. parent smoking behavior):

\(\hat{\pi}_i=\dfrac{exp(-1.826+0.4592X_i)}{1+exp(-1.826+0.4592X_i)}\)

For example, the predicted probability of a student smoking given that neither parent smokes is

\(P(Y_i=1|X_i=0)=\dfrac{exp(-1.826+0.4592\times 0)}{1+exp(-1.826+0.4592\times 0)}=0.14\)

and the predicted probability of a student being a smoker if at least one parent smokes is

\(P(Y_i=1|X_i=1)=\dfrac{exp(-1.826+0.4592)}{1+exp(-1.826+0.4592)}=0.20\)

By including the statement output out=predict pred=prob; in the PROC LOGISTIC call, SAS will write the predicted probabilities to the output dataset:

SAS output

so you do NOT need to do this calculation by hand, but it may be useful to try it out to see if you understand what's going on. In this model, β0 is the log-odds of children smoking for non-smoking parents (Xi = 0). Thus exp(−1.8266) is the odds that a student smokes when the parents do not smoke.

Looking at the 2 × 2 table, the estimated log-odds for nonsmokers is

\(\text{log}\left(\dfrac{188}{1168}\right)=\text{log}(0.161)=-1.8266\)

which agrees with \(\hat{\beta}_0\) from the logistic model.

From the analysis of this example as a 2 × 2 table, exp(β1) should be the estimated odds ratio. The estimated coefficient of the dummy variable,

\(\hat{\beta}_1=0.4592\)

agrees exactly with the log-odds ratio from the 2 × 2 table; that is, ln((816 × 1168)/(188 × 3203)) = ln(1.58) = 0.459, and exp(0.459) = 1.58 is the estimated odds ratio of a student smoking when at least one parent smokes versus when neither does.

To relate this to the interpretation of the coefficients in a linear regression, you could say that for every one-unit increase in the explanatory variable X1 (e.g. changing from non-smoking parents to smoking parents), the odds of "success" πi / (1 − πi) will be multiplied by exp(β1), given that all the other variables are held constant.

This is not surprising, because in the logistic regression model β1 is the difference in the log-odds of children smoking as we move from "nosmoke" (i.e. neither parent smokes) (Xi = 0) to "smoke" (i.e. at least one parent smokes) (Xi = 1), and the difference in log-odds is a log-odds ratio.

Testing Individual Parameters

Testing the hypothesis that the probability of the characteristic depends on the value of the jth variable.

Testing H0 : βj = 0     versus     H1 : βj ≠ 0.

The Wald chi-squared statistics \(z^2=(\hat{\beta}_j/\text{SE}(\hat{\beta}_j))^2\) for these tests are displayed along with the estimated coefficients in the "Analysis of Maximum Likelihood Estimates" section.

The standard error for \(\hat{\beta}_1\), 0.0878, agrees exactly with the standard error that you can calculate from the corresponding 2 × 2 table.

SAS output

The values indicate a significant relationship between the log-odds of a student smoking and the parents' smoking behavior.

Or, we can use the information from "Type III Analysis of Effects" section.

SAS output

Again, this information indicates that parents' smoking behavior is a significant factor in the model. We could compare z2 to a chisquare with one degree of freedom; the p-value would then be the area to the right of z2 under the \(\chi^2_1\) density curve.

A value of z2 (Wald statistic) bigger than 3.84 indicates that we can reject the null hypothesis βj = 0 at the .05-level.

\(\beta_1:\left(\dfrac{0.4592}{0.0878}\right)^2=27.354\)

Confidence Intervals of Individual Parameters:

An approximate (1 − α) × 100% confidence interval for βj is given by

\(\hat{\beta}_j \pm z_{(1-\alpha/2)} \times SE(\hat{\beta}_j)\)

For example, a 95% confidence interval for β1

0.4592 ± 1.96 × 0.0878 = (0.287112, 0.63128)

where 0.0878 is the "Standard Error"' from the "Analysis of Maximum Likelihood Estimates" section. Then, the 95% confidence interval for the odds-ratio of a child smoking if one parent is smoking in comparison to neither smoking is

(exp(0.287112), exp(0.63128)) = (1.3325, 1.880)

Compare this with the output we get from PROC LOGISTIC:

SAS output

When fitting logistic regression, we need to evaluate the overall fit of the model and the significance of individual parameter estimates, and consider their interpretation. For assessing the fit of the model, we also need to consider the analysis of residuals. The definitions of Pearson, deviance and adjusted residuals are as before, and you should be able to carry out this analysis.

If we include the statement

output out=predict pred=phat reschi=pearson resdev=deviance;

in the PROC LOGISTIC call, then SAS creates a new dataset called "predict" that includes all of the variables in the original dataset, the predicted probabilities \(\hat{\pi}_i\), the Pearson residuals and the deviance residuals. Then we can add some code to calculate and print out the estimated expected number of successes  \(\hat{\mu}_i=n_i\hat{\pi}_i\) and failures \(n_i-\hat{\mu}_i=n_i(1-\hat{\pi}_i)\).

For now you can try this on your own, or see the next sections for more analysis and code.

6.2.2 - Fitting the Model in R

There are different ways to run logistic regression depending on the format of the data. As before, since there are many different options, for details you need to refer to R help. Here are some general guidelines to keep in mind with a simple example outlined in dataformats.R where we created two binary random variables with n number of trials, e.g., n=100,

##create two binary vectors of length 100
x=sample(c(0,1),100, replace=T)
y=sample(c(0,1),100, replace=T)

If data come in a tabular form, i.e., response pattern is with counts (as seen in the previous example), that is data are grouped.

> ##create a 2x2 table with counts
> xytab=table(x,y)
> xytab
   y
x    0  1
  0 24 29
  1 26 21

glm(): For grouped data, the response passed to glm() is a two-column object (e.g., count) whose columns hold the counts of "successes" (y = 1) and "failures" (y = 0) out of the n trials. The count table below could equivalently be built from the number of successes and a column computed as n − y.

> count=cbind(xytab[,2],xytab[,1])
> count
  [,1] [,2]
0   29   24
1   21   26

We also need a categorical predictor variable.

    > xfactor=factor(c("0","1"))
    > xfactor
    [1] 0 1
    Levels: 0 1

We need to specify the response distribution and a link, which in this case is done by specifying family=binomial("logit") 

    > tmp3=glm(count~xfactor, family=binomial("logit"))
    > tmp3

    Call:  glm(formula = count ~ xfactor, family = binomial("logit"))

    Coefficients:
    (Intercept)     xfactor1 
         0.1892      -0.4028 

    Degrees of Freedom: 1 Total (i.e. Null);  0 Residual
    Null Deviance:        1.005
    Residual Deviance: -4.441e-15     AIC: 12.72

If data come in a matrix form, i.e., subject × variables matrix with one line for each subject, like a database, where data are ungrouped

> xydata=cbind(x,y)
> xydata ## 100 rows, we are showing first 7
       x y
  [1,] 0 1
  [2,] 0 1
  [3,] 0 0
  [4,] 0 1
  [5,] 1 0
  [6,] 0 1
  [7,] 0 0
.....

glm(): We need a binary response variable y and a predictor variable x, which in this case was also binary. We need to specify the response distribution and a link, which in this case is done by specifying family=binomial("logit")

    > tmp1=glm(y~x, family=binomial("logit"))
    > tmp1

    Call:  glm(formula = y ~ x, family = binomial("logit"))

    Coefficients:
    (Intercept)            x 
         0.1892      -0.4028 

    Degrees of Freedom: 99 Total (i.e. Null);  98 Residual
    Null Deviance:        138.6
    Residual Deviance: 137.6     AIC: 141.6

For more options see the course examples and homework. We will follow the R output through to explain the different parts of model fitting.  The outputs from SAS (or from many other software) will be essentially the same.

Example - Student Smoking

Let's begin with the collapsed 2 × 2 table:

 
                         Student smokes    Student does not smoke
1–2 parents smoke              816                  3203
Neither parent smokes          188                  1168

In R, we can use the glm() function and specify family = binomial(link = logit). See the file smoke.R and the output generated in smoke.out. That R code corresponds to the SAS code discussed in the previous section:

smoke.R program code

Here is the R output for the 2 × 2 table that we will use in R for logistic regression:

smoke.R output

Please Note: the table above is different from the one given from the SAS program. We input y and n as data columns in SAS; here we just input data columns as yes and no.

summary(smoke.logistic) gives the following model information:

smoke.R output

Model Information & model convergence status

Call:
glm(formula = response ~ parentsmoke, family = binomial(link = logit))

(Dispersion parameter for binomial family taken to be 1)

 Number of Fisher Scoring iterations: 2

These sections tell us which dataset we are manipulating, the labels of the response and explanatory variables, what type of model we are fitting (e.g. binary logit), and the type of scoring algorithm used for parameter estimation. Fisher scoring is a variant of the Newton-Raphson method for ML estimation. In logistic regression they are equivalent. Since we are using an iterative procedure to fit the model, that is, to find the ML estimates, we need some indication of whether the algorithm converged.

Overdispersion is an important concept with discrete data. In the context of logistic regression, overdispersion occurs when the discrepancies between the observed responses yi and their predicted values \(\hat{\mu}_i = n_{i}\hat{\pi}_i\) are larger than what the binomial model would predict.

If yi ~ Bin(ni, πi), the mean is μi = ni πi and the variance is μi(ni − μi)/ni. Overdispersion means that the data show evidence that the variance of the response yi is greater than μi(ni − μi)/ni.

If overdispersion is present in a dataset, the estimated standard errors and test statistics for individual parameters and the overall goodness-of-fit will be distorted and adjustments should be made. We will look at this briefly later when we look into continuous predictors, but you can also read Agresti (2013), Sec. 4.7, Agresti (2007), Sec. 4.4.4, and help(summary.glm) 
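As a rough check for overdispersion with grouped data, one can compare the Pearson statistic from an unsaturated fit to its residual degrees of freedom. The sketch below uses hypothetical grouped counts, since our saturated 2 × 2 fit has zero residual degrees of freedom and the check is not meaningful there.

## Hypothetical grouped counts with more groups than parameters, so residual df > 0.
succ <- c(10, 15, 12, 30)
fail <- c(40, 35, 20, 20)
x    <- factor(c("a", "a", "b", "b"))
fit  <- glm(cbind(succ, fail) ~ x, family = binomial)

X2 <- sum(residuals(fit, type = "pearson")^2)  # Pearson goodness-of-fit statistic
X2 / df.residual(fit)  # crude dispersion estimate; values well above 1 suggest overdispersion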

Table of Coefficients:

Coefficients:
             Estimate Std. Error z value Pr(>|z|)   
(Intercept)  -1.82661    0.07858 -23.244  < 2e-16 ***
parentsmoke1  0.45918    0.08782   5.228 1.71e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

 This information gives us the fitted model:

\(\text{logit}(\hat{\pi_i})=\text{log}\dfrac{\hat{\pi_i}}{1-\hat{\pi_i}}=\hat{\beta_0}+\hat{\beta_1} X_i=-1.827+0.459\,parentsmoke1\) 

where parentsmoke1 is a dummy variable (e.g. design variable) that takes value 1 if at least one parent is smoking and 0 if neither is smoking.

X1 = 1 ("parentsmoke1") if parent smoking = at least one,
X1 = 0 ("parentsmoke0") if parent smoking = neither

parent smoking = 0 is the baseline. We are modeling here the probability of "children smoking". 

The estimated \(\hat{\beta}_0=-1.827\) with a standard error of 0.078 is statistically significant; it says that the log-odds of a child smoking versus not smoking when neither parent smokes (the baseline level) is -1.827.

The estimated \(\hat{\beta}_1=0.459\) with a standard error of 0.088 is statistically significant; it says that the log-odds ratio of a child smoking versus not smoking, comparing at least one parent smoking with neither parent smoking (the baseline level), is 0.459. Then \(\text{exp}(0.459)=1.58\) is the estimated odds ratio; compare with our analysis of the 2 × 2 table above.

Testing Individual Parameters

Testing the hypothesis that the probability of the characteristic depends on the value of the jth variable.

Testing H0 : βj = 0     versus     H1 : βj ≠ 0.

The Wald chi-squared statistics \(z^2=(\hat{\beta}_j/\text{SE}(\hat{\beta}_j))^2\) for these tests are displayed along with the estimated coefficients in the "Coefficients" section.

The standard error for \(\hat{\beta}_1\), 0.0878, agrees exactly with the standard error that you can calculate from the corresponding 2 × 2 table. The values indicate a significant relationship between the log-odds of a student smoking and the parents' smoking behavior; that is, parents' smoking behavior is a significant factor in the model. We could compare z2 to a chi-square with one degree of freedom; the p-value would then be the area to the right of z2 under the \(\chi^2_1\) density curve.

A value of z2 (Wald statistic) bigger than 3.84 indicates that we can reject the null hypothesis βj = 0 at the .05-level.

\(\beta_1:\left(\dfrac{0.4592}{0.0878}\right)^2=27.354\)

Confidence Intervals of Individual Parameters:

An approximate (1 − α) × 100% confidence interval for βj is given by

\(\hat{\beta}_j \pm z_{(1-\alpha/2)} \times SE(\hat{\beta}_j)\)

For example, a 95% confidence interval for β1

0.4592 ± 1.96 × 0.0878 = (0.287112, 0.63128)

where 0.0878 is the "Standard Error"' from the "Coefficients" section. Then, the 95% confidence interval for the odds-ratio of a child smoking if one parent is smoking in comparison to neither smoking is

(exp(0.287112), exp(0.63128)) = (1.3325, 1.880)
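The same interval can be reproduced in R from the reported estimate and standard error; confint.default() applied to the fitted glm object returns the same Wald interval.

## Wald 95% CI for beta1, using the estimate and standard error reported above.
est <- 0.4592; se <- 0.0878
ci  <- est + c(-1, 1) * qnorm(0.975) * se
ci        # approximately (0.287, 0.631) on the log-odds scale
exp(ci)   # approximately (1.33, 1.88) on the odds-ratio scale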

 

Thus there is a strong association between parent's and children smoking behavior. But does this model fit?

Overall goodness-of-fit testing

Test: H0 : current model vs. HA : saturated model

Residual deviance: -3.7170e-13  on 0  degrees of freedom
AIC: 19.242

The goodness-of-fit statistic (deviance G2) from this model is zero, because the model is saturated. The fit of the intercept-only model is provided by

Test: H0 : intercept only model vs. HA : saturated model

Null deviance:  2.9121e+01  on 1  degrees of freedom

Suppose that we fit the intercept-only model. This is accomplished by removing the predictor from the model statement, like this:

glm(response~1, family=binomial(link=logit))

The goodness-of-fit statistics are shown below.

Null deviance: 29.121  on 1  degrees of freedom
Residual deviance: 29.121  on 1  degrees of freedom
AIC: 46.362

The deviance G2 = 29.1207 is precisely equal to the G2 for testing independence in this 2 × 2 table. The intercept-only model, that is, the null logistic regression model, states that student's smoking is unrelated to parents' smoking (i.e., it assumes independence, or odds ratio = 1). But clearly, based on the values of the calculated statistics, this model (i.e., independence) does NOT fit well.

Analysis of deviance table: In R, we can test factors’ effects with the anova function to give an analysis of deviance table. We only include one factor in this model. So R dropped this factor (parentsmoke) and fit the intercept-only model to get the same statistics as above, i.e., the deviance G2 = 29.121.
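A one-line way to reproduce this in R (a sketch, assuming the smoke.logistic fit from smoke.R is in the workspace):

## Analysis of deviance table; the drop in deviance is 29.121 on 1 df.
anova(smoke.logistic, test = "Chisq")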

smoke.R output

This example shows that analyzing a 2 × 2 table for association is equivalent to logistic regression with a single dummy variable. We can further compare these tests to the loglinear model of independence, e.g. see smokelog.R in Lesson 10.

The goodness-of-fit statistics, X2 and G2, are defined as before in the tests of independence and loglinear models (e.g. compare observed and fitted values). For the χ2 approximation to work well, we need the ni's sufficiently large so that the expected values, \(\hat{\mu}_i\) ≥ 5, and ni − \(\hat{\mu}_i\) ≥ 5 for most of the rows i. We can afford to have about 20% of these values less than 5, but none of them should fall below 1.

With real data, we often find that the ni's are not big enough for an accurate test. There is no single best solution for handling this; a reasonable approach depends strongly on the data and the context of the problem.

6.2.3 - More on Goodness-of-Fit and Likelihood ratio tests

Suppose two alternative models are under consideration, where one model is simpler or more parsimonious than the other. More often than not, one of the models is the saturated model. Another common situation is to consider 'nested' models, where one model is obtained from the other by setting some of the parameters to zero. Suppose now we test

 H0: reduced model is true vs. HA: current model is true

Notice the difference in the null and alternative hypothesis from the section above. Here to test the null hypothesis that an arbitrary group of k coefficients from the model is set equal to zero (e.g. no relationship with the response), we need to fit two models:

  • the reduced model which omits the k predictors in question, and
  • the current model which includes them.

The likelihood-ratio statistic is

ΔG2 = −2 log L from reduced model
             −(−2 log L from current model)

and the degrees of freedom is k (the number of coefficients in question). The p-value is \(P(\chi^2_k \geq \Delta G^2)\).

To perform the test, we must look at the "Model Fit Statistics" section and examine the value of "−2 Log L" for "Intercept and Covariates." Here, the reduced model is the "intercept-only" model (e.g. no predictors) and "intercept and covariates" is the current model we fitted. For our running example, this would be equivalent to testing "intercept-only" model vs. full (saturated) model (since we have only one covariate).

SAS output

Larger values of ΔG2 ("−2 Log L" ) lead to small p-values, which provide evidence against the reduced model in favor of the current model; you can explore AIC (Akaike Information Criterion) and SC (Schwarz Criterion) on your own through SAS help files or see Lesson 5 for AIC.

For our example, ΔG2 = 5176.510 − 5147.390 = 29.1207 with df = 2 − 1 = 1. Notice that this matches Deviance we got in the earlier text above.

Another way to calculate the test statistic is

ΔG2 = G2 from reduced model
            −G2 from current model,

where the G2's are the overall goodness-of-fit statistics.
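In R, the same nested-model comparison can be carried out by fitting both models and passing them to anova(); a minimal sketch using the collapsed 2 × 2 table:

## Grouped data: y = smokers, n = group totals, x = parents' smoking status (collapsed).
y <- c(816, 188); n <- c(4019, 1356)
x <- factor(c("at least one", "neither"))

fit0 <- glm(cbind(y, n - y) ~ 1, family = binomial)  # reduced (intercept-only) model
fit1 <- glm(cbind(y, n - y) ~ x, family = binomial)  # current model
anova(fit0, fit1, test = "Chisq")                    # Delta G^2 = 29.121 on 1 df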

This value of -2 Log L is useful to compare two nested models which differ by an arbitrary set of coefficients.

Also notice that the ΔG2 we calculated for this example equals to

Likelihood Ratio     29.1207    1    <.0001

from "Testing Global Hypothesis: BETA=0" section (the next part of the output, see below).

 

Testing the Joint Significance of All Predictors.

Testing the null hypothesis that the set of coefficients is simultaneously zero. For example,

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\ldots+\beta_k X_k\)

test H0 : β1 = β2 = ... = βk = 0 versus the alternative that at least one of the coefficients β1, . . . , βk is not zero.

This is like the overall F−test in linear regression. In other words, this is testing the null hypothesis that an intercept-only model is correct,

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0\)

versus the alternative that the current model is correct

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2+\ldots+\beta_k X_k\)

In our example, we are testing the null hypothesis that an intercept-only model is correct,

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0\)

versus the alternative that the current model (in this case saturated model) is correct

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1\)

In the SAS output, three different chisquare statistics for this test are displayed in the section "Testing Global Null Hypothesis: Beta=0," corresponding to the likelihood ratio, score and Wald tests. Recall their definitions from the very first lessons.

SAS output

This test has k degrees of freedom (e.g. the number of dummy indicators (design variables), that is the number of β-parameters (except the intercept)).

Large chisquare statistics lead to small p-values and provide evidence against the intercept-only model in favor of the current model.

The Wald test is based on asymptotic normality of ML estimates of β's. Rather than using the Wald, most statisticians would prefer the LR test.

If these three tests agree, that is evidence that the large-sample approximations are working well and the results are trustworthy. If the results from the three tests disagree, most statisticians would tend to trust the likelihood-ratio test more than the other two.

In our example, the "intercept only" model or the null model says that student's smoking is unrelated to parents' smoking habits. Thus the test of the global null hypothesis β1 = 0 is equivalent to the usual test for independence in the 2 × 2 table. We will see that the estimated coefficients and SE's are as we predicted before, as well as the estimated odds and odds ratios.

In R, the residual deviance is the difference in G2 = −2 logL between a saturated model and the fitted model. A large residual deviance indicates that the model cannot be accepted.

The null deviance is the difference in G2 = −2 logL between a saturated model and the intercept-only model. Here the large null deviance shows that the intercept-only model does not fit.

R output

In our 2 × 2 table smoking example, the residual deviance is almost 0 because the model we built is the saturated model, and notice that its degrees of freedom are 0, too. The null deviance corresponds to the likelihood-ratio line of the "Testing Global Null Hypothesis: BETA=0" section in the SAS output.

For our example, Null deviance = 29.1207 with df = 1. Notice that this matches Deviance we got in the earlier text above.

The Hosmer-Lemeshow Statistic

An alternative statistic for measuring overall goodness-of-fit is the Hosmer-Lemeshow statistic.

Note: we use the one-predictor model here, that is, at least one parent smokes versus neither.

This is a Pearson-like χ2 that is computed after data are grouped by having similar predicted probabilities. It is more useful when there is more than one predictor and/or continuous predictors in the model too. We will see more on this later, and in your homework.

H0 : the current model fits well
HA : the current model does not fit well

To calculate this statistic:

  1. Group the observations according to model-predicted probabilities ( \(\hat{\pi}_i\))
  2. The number of groups is typically determined such that there is roughly an equal number of observations per group
  3. The Hosmer-Lemeshow (HL) statistic, a Pearson-like chi-square statistic, is computed on the grouped data but does NOT have a limiting chi-square distribution, because the observations in groups are not from identical trials. Simulations have shown that this statistic can be approximated by a chi-squared distribution with df = g − 2, where g is the number of groups (one common form of the statistic is given below).
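With G groups formed this way, the statistic may be written (stated here for reference; software conventions can differ slightly in how the groups and averages are formed)

\[HL=\sum_{g=1}^{G}\dfrac{(O_g-n_g\bar{\pi}_g)^2}{n_g\bar{\pi}_g(1-\bar{\pi}_g)}\]

where \(O_g\) is the observed number of successes, \(n_g\) the number of observations, and \(\bar{\pi}_g\) the average model-predicted probability in group \(g\).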

Warning about Hosmer-Lemeshow goodness-of-fit test:

  • It is a conservative statistic, i.e. its value is smaller than what it ought to be, and therefore the probability of rejecting the null hypothesis is smaller.
  • It has low power for detecting certain types of lack of fit, such as nonlinearity in an explanatory variable.
  • It is highly dependent on how the observations are grouped
  • If too few groups are used (e.g. 5 or less) it almost always indicates that the model fits the data; this means that it's usually not a good measure if you only have one or two categorical predictor variables, and it's best used for continuous predictors.

In the model statement, the option lackfit tells SAS to compute HL statistics and print the partitioning.

For our example, because we have small number of groups (e.g, 2), this statistic gives a perfect fit (HL = 0, p-value = 1).

______

Instead of deriving the diagnostics we will look at them from a purely applied viewpoint. Recall the definitions and introductions to the regression residuals and Pearson and Deviance residuals.

Residuals

The Pearson residuals are defined as

\(r_i=\dfrac{y_i-\hat{\mu}_i}{\sqrt{\hat{V}(\hat{\mu}_i)}}=\dfrac{y_i-n_i\hat{\pi}_i}{\sqrt{n_i\hat{\pi}_i(1-\hat{\pi}_i)}}\)

The contribution of the ith row to the Pearson statistic X2 is

\(\dfrac{(y_i-\hat{\mu}_i)^2}{\hat{\mu}_i}+\dfrac{((n_i-y_i)-(n_i-\hat{\mu}_i))^2}{n_i-\hat{\mu}_i}=r^2_i\)

and the Pearson goodness-of fit statistic is

\(X^2=\sum\limits_{i=1}^N r^2_i\)

which we would compare to a  \(\chi^2_{N-p}\) distribution. The deviance test statistic is

\(G^2=2\sum\limits_{i=1}^N \left\{ y_i\text{log}\left(\dfrac{y_i}{\hat{\mu}_i}\right)+(n_i-y_i)\text{log}\left(\dfrac{n_i-y_i}{n_i-\hat{\mu}_i}\right)\right\}\)

which we would again compare to \(\chi^2_{N-p}\), and the contribution of the ith row to the deviance is

\(2\left\{ y_i\text{log}\left(\dfrac{y_i}{\hat{\mu}_i}\right)+(n_i-y_i)\text{log}\left(\dfrac{n_i-y_i}{n_i-\hat{\mu}_i}\right)\right\}\)

We will note how these quantities are derived through appropriate software and how they provide useful information to understand and interpret the models.  For an example see the SAS or R analysis in the next section.
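As a small worked check of these definitions, the R lines below compute the Pearson residuals by hand for the intercept-only (independence) fit to the collapsed 2 × 2 table, where the common fitted probability is (816 + 188)/(4019 + 1356).

## Pearson residuals by hand for the intercept-only (independence) model.
y <- c(816, 188); n <- c(4019, 1356)
pihat <- sum(y) / sum(n)                              # common fitted probability, about 0.187
r <- (y - n * pihat) / sqrt(n * pihat * (1 - pihat))  # Pearson residuals for the two rows
r
sum(r^2)  # Pearson X^2 = 27.68, matching the independence test reported earlier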

6.2.4 - Explanatory Variable with Multiple Levels

The concepts discussed with binary predictors extend to predictors with multiple levels. In this lesson we consider,

  • Yi binary response, Xi discrete explanatory variable (with k = 3 levels), and
  • link to the analysis of 2 × 3 tables

but the basic ideas easily extend to any 2 × J table.

We begin by replicating the analysis of the original 3 × 2 tables with logistic regression.

How many parents smoke?    Student smokes?
                           Yes (Z = 1)    No (Z = 2)
Both (Y = 1)                   400           1380
One (Y = 2)                    416           1823
Neither (Y = 3)                188           1168

First, we re-express the data in terms of yi = number of smoking students and ni = number of students for the three groups based on parents' behavior:

How many parents smoke?    yi     ni
Both                      400   1780
One                       416   2239
Neither                   188   1356

Then we decide on a baseline level for the explanatory variable X, and create k − 1 dummy indicators if X is a categorical variable with k levels.

For our example, let parent smoking = Neither be the baseline, and define a pair of dummy indicators (or design variables), each taking one of two values,

X1 = 1 if parent smoking = One ,
X1 = 0 if otherwise,

X2 = 1 if parent smoking = Both,
X2 = 0 if otherwise.

Let \(\dfrac{\pi}{1-\pi}=\) odds of student smoking; notice that we dropped the index i denoting each individual student to simplify the notation --- we are still modeling each student or rather a randomly chosen student from the same classification cell. Then the model

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2\)

says that the log-odds of a student smoking is β0 for parents smoking = neither, β0 + β1 for parents smoking = one and β0 + β2 for parents smoking = both, and exp(βj) = conditional odds ratio for level j of predictor X versus the baseline. Therefore,

\begin{array}{rcl}
\beta_1 &=& \text{log-odds for one} - \text{log-odds for neither} \\
\text{exp}(\beta_1) &=& \dfrac{\text{odds of smoking}\,|\,\text{one}}{\text{odds of smoking}\,|\,\text{neither}} \\
\beta_2 &=& \text{log-odds for both} - \text{log-odds for neither} \\
\text{exp}(\beta_2) &=& \dfrac{\text{odds of smoking}\,|\,\text{both}}{\text{odds of smoking}\,|\,\text{neither}} \\
\dfrac{\text{exp}(\beta_1)}{\text{exp}(\beta_2)} &=& \text{exp}(\beta_1-\beta_2)=\dfrac{\text{odds of smoking}\,|\,\text{one}}{\text{odds of smoking}\,|\,\text{both}}
\end{array}

and we expect to get \(\hat{\beta}_1=\text{ln}(1.42)\approx 0.349\), where 1.42 = (416 × 1168)/(188 × 1823), and \(\hat{\beta}_2=\text{ln}(1.80)\approx 0.588\), where 1.80 = (400 × 1168)/(188 × 1380). The estimated intercept should be \(\hat{\beta}_0=\text{ln}(188/1168)=-1.826\).
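A quick way to check these anticipated values in R, using the grouped counts above (this is essentially the smoke.R analysis described below, written out compactly):

## Grouped data for the three parent-smoking categories.
y <- c(400, 416, 188)     # students who smoke: both, one, neither
n <- c(1780, 2239, 1356)  # group totals
parentsmoke <- factor(c("both", "one", "neither"), levels = c("neither", "one", "both"))

fit <- glm(cbind(y, n - y) ~ parentsmoke, family = binomial)
coef(fit)  # approximately -1.827 (intercept), 0.349 (one), 0.588 (both)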

Fitting A Multi-level Predictor Model in SAS and R

There are different versions of the SAS program on the course web site to fit these data. Here is one from smoke.sas, where '0' = neither parent smokes, '1' = one smokes, and '2' = both smoke, and we use PROC LOGISTIC; notice we could use PROC GENMOD too.

SAS program

The option param=ref tells SAS to create a set of two dummy variables to distinguish among the three categories, where '0'=neither is a baseline because of option descending and ref=first (see the previous section for details).

Another way of doing the same in smoke.sas, but using character labels (e.g., "both", "one", "neither") rather than numbers for the categories:

SAS program

In the class statement, the option order=data tells SAS to sort the categories of S by the order in which they appear in the dataset rather than alphabetical order. The option ref='neither' makes neither the reference group (i.e. the group for which both dummy variables are zero). Let's look at some relevant portions of the output of smoke.lst that differ from the analysis of the corresponding 2 × 2 table from the previous section of the notes.

SAS output

Fisher scoring is a variant of Newton-Raphson method for ML estimation. In logistic regression they are equivalent. Notice now there are 3 observations since we have 3 groupings by the levels of the explanatory variable.

SAS output

From an explanatory variable S with 3 levels (0,1,2), we created two dummy variables, i.e., design variables:

X1 = 1 if parent smoking = One,
X1 = 0 otherwise,

X2 = 1 if parent smoking = Both,
X2 = 0 otherwise.

Since parent smoking = Neither equals 0 for both dummy variables, that's the baseline.

SAS output

Here, we specified neither to be the reference level. We used the option order=data so that both takes values (1,0) and one takes (0,1). If we didn't use that option, the levels would be ordered alphabetically, but in this case the two orderings coincide.

What are the parameter estimates? What is the fitted model?

SAS output

Here the s1 and s2 correspond to x1 and x2 dummy variables. The saturated model is,

\(\text{logit}(\pi)=-1.8266+0.3491 X_1+0.5882 X_2\) 

and the estimated probability of a child smoking given the explanatory variable:

\(\hat{\pi}_i=\dfrac{\text{exp}(-1.8266+0.3491 X_1+0.5882 X_2)}{1+\text{exp}(-1.8266+0.3491 X_1+0.5882 X_2)}\)

For example, the predicted probability of a student smoking given that only one parent is smoking is:

\begin{align}
P(Y_i=1|neither=0,one=1,both=0) &= P(Y_i=1|X_1=1,X_2=0)\\
&= \dfrac{\text{exp}(-1.8266+0.3491)}{1+\text{exp}(-1.8266+0.3491)}\\
&= 0.1858\\
\end{align}

Residuals

In SAS, if we include the statement

output out=predict pred=phat reschi=pearson resdev=deviance;

then SAS creates a new dataset called "predict" that includes all of the variables in the original dataset, the predicted probabilities \(\hat{\pi}_i\), and the Pearson and deviance residuals. We can add some code to calculate and print out the estimated expected number of successes \(\hat{\mu}_i=n_i\hat{\pi}_i\) (e.g., "shat") and failures \(n_i-\hat{\mu}_i=n_i(1-\hat{\pi}_i)\) (e.g., "fhat").

SAS program

Running this program gives a new output section:

SAS output

Here "s" are the levels of the categorical predictor for parents' smoking behavior, "y" as before the number of students smoking for each level of the predictor, "n" the marginal counts for each level of the predictor", "prob" is the estimated probability of "success" (e.g. a student smoking given the level of the predictor), "shat" and "fhat" expected number of successes and failures respectively, and "pearson" and "deviance" are Pearson and Deviance residuals.

All of the "shat" and "fhat" values, that is predicted number of success and failures are greater than 5.0, so the χ2 approximation is trustworthy.


Below is the R code (smoke.R) that replicates the analysis of the original 2 × 3 table with logistic regression.

r code

 First, let’s see the table we created for the analysis.

 R output

You should again notice the difference in data input. In SAS, we input y and n as data columns; here we just input data columns as yes and no.

Next, the model information is displayed.

R output

If you don't specify the baseline (e.g., by re-ordering the factor levels with relevel()), R will set the first level as the default reference; here, it's "0". The parentsmoke1 and parentsmoke2 terms correspond to the X1 and X2 dummy variables. The saturated model is:

\(\text{logit}(\pi)=-1.8266+0.3491 X_1+0.5882 X_2\) 

and the estimated probability of a child smoking given the explanatory variable:

\(\hat{\pi}_i=\dfrac{\text{exp}(-1.8266+0.3491 X_1+0.5882 X_2)}{1+\text{exp}(-1.8266+0.3491 X_1+0.5882 X_2)}\)

For example, the predicted probability of a student smoking given that only one parent is smoking is:

\begin{align}
P(Y_i=1|neither=0,one=1,both=0) &= P(Y_i=1|X_1=1,X_2=0)\\
&= \dfrac{\text{exp}(-1.8266+0.3491)}{1+\text{exp}(-1.8266+0.3491)}\\
&= 0.1858\\
\end{align}

Residuals

In R, deviance residuals are given directly in the output of the summary() function.

> smoke.logistic<-glm(response~parentsmoke, family=binomial(link=logit))

> summary(smoke.logistic)

Call:
glm(formula = response ~ parentsmoke, family = binomial(link = logit))

Deviance Residuals:
[1]  0  0  0


Coefficients:
             Estimate Std. Error z value Pr(>|z|)   
(Intercept)  -1.82661    0.07858 -23.244  < 2e-16 ***
parentsmoke1  0.34905    0.09554   3.654 0.000259 ***
parentsmoke2  0.58823    0.09695   6.067  1.3e-09 ***

---
We can also obtain the deviance and Pearson residuals by using the residuals.glm() function. Run help(residuals.glm) to study this further.

To obtain deviance residuals

> residuals(smoke.logistic)
[1] 0 0 0

To obtain Pearson residuals:

> residuals(smoke.logistic, type="pearson")
            1             2             3
-2.440787e-13 -4.355932e-13 -1.477321e-11

To obtain the predicted values, use function predict.glm() with which we can specify the type of predicted values we want.

type    is the type of prediction required. The default is on the scale of the linear predictors; the alternative "response" is on the scale of the response variable. Thus for a default binomial model the default predictions are of log-odds (probabilities on logit scale) and type = "response" gives the predicted probabilities. The "terms" option returns a matrix giving the fitted values of each term in the model formula on the linear predictor scale.

For example, the code below gives predicted probabilities. 

> predict(smoke.logistic, type="response")
        1         2         3
0.2247191 0.1857972 0.1386431

All of the predicted numbers of successes and failures are greater than 5.0, so the χ2 approximation is trustworthy.
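To display those expected counts explicitly, here is a short sketch (assuming the response matrix and the smoke.logistic fit from smoke.R are in the workspace):

## Expected numbers of successes ("shat") and failures ("fhat") for each row.
pihat <- predict(smoke.logistic, type = "response")
n     <- rowSums(response)   # group sizes for both / one / neither
cbind(shat = n * pihat, fhat = n * (1 - pihat))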

Overall goodness-of-fit

The goodness-of-fit statistics Χ2 and G2 from this model are both zero, because the model is saturated. Suppose that we fit the intercept-only model as before by removing the predictors from the model statement:

In SAS, the goodness-of-fit statistics are shown below.

SAS output

The Pearson statistic Χ2 = 37.5663 and the deviance G2 = 38.3658 are precisely equal to the usual Χ2 and G2 for testing independence in the 2 × 3 table.

Clearly, in this example, the saturated model fits perfectly and the independence model does not fit well.

We can use the Hosmer-Lemeshow test to assess the overall fit of the model. As in the previous example, the Hosmer and Lemeshow statistic is not very meaningful since the number of groups is small.

SAS output


R output

The residual deviance is almost 0 because the model is saturated. The null deviance gives G2 = 38.366, which is precisely equal to the usual G2 for testing independence in the 2 × 3 table. Since this statistic is large, leading to a small p-value, it provides evidence against the intercept-only model in favor of the current model. You can also find the same result in the output from the anova() part.

Clearly, in this example, the saturated model fits perfectly and the independence model does not fit well.

We can use the Hosmer-Lemeshow test to assess the overall fit of the model. As in the previous example, the Hosmer and Lemeshow statistic is not very meaningful since the number of groups is small.

Here is a link to a program in R that produces this Hosmer and Lemeshow statistic (ROCandHL.R)

> ### To use the Hosmer-Lemeshow statistic (although not relevant here since the number of groups is small)
> ## First run ROCandHL.R script which has function hosmerlem() in it
> ## hosmerlem() takes the vector of successes, predicted vector of success and g=# of groups as input
> ## produce the vector of predicted success "yhat"
> yhat=rowSums(response)*predict(smoke.logistic, type="response")
> yhat
  1   2   3
400 416 188
>
> hosmerlem(response[,1], yhat, g=3) ## here run 3 groups
                    X^2                      Df                 P(>Chi)
"-1.00593633284243e-24"                     "1"                     "."

 

We have shown that analyzing a 2 × 3 table for associations is equivalent to a binary logistic regression with two dummy variables as predictors. For 2 × J tables, we would fit a binary logistic regression with J − 1 dummy variables.

Testing the Joint Significance of All Predictors.

In our model

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2\)

Test H0 : β1 = β2 = 0 versus the alternative that at least one of the coefficients β1, β2 is not zero.

In other words, this is testing the null hypothesis that an intercept-only model is correct,

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0\)

versus the alternative that the current model is correct

\(\text{log}\left(\dfrac{\pi}{1-\pi}\right)=\beta_0+\beta_1 X_1+\beta_2 X_2\)

SAS output

This test has k degrees of freedom (e.g. the number of dummy or indicator variables, that is, the number of β-parameters except the intercept).

Large chisquare statistics lead to small p-values and provide evidence against the intercept-only model in favor of the current model.

Testing for an Arbitrary Group of Coefficients

For the previous test this would be equivalent to testing "intercept-only" model vs. full (saturated) model.

The likelihood-ratio statistic is:

ΔG2 = −2 log L from reduced model
             −(−2 log L from current model)

and the degrees of freedom is k (the number of coefficients in question). The p-value is \(P(\chi^2_k \geq \Delta G^2)\).

SAS output

For our example, ΔG2 = 5176.510 − 5138.144 = 38.3658 with df = 3 − 1 = 2.

Notice that this matches:

Likelihood Ratio      38.3658      2      <.0001

from the "Testing Global Hypothesis: BETA=0" section.

Discussion: How would you test only for X2? Which two models would you compare? How would you interpret this case?

For example, here is the saturated model we just looked at in SAS.

SAS program

Compare this to

model y/n = s1 /scale=none lackfit;

 

Testing Individual Parameters

Testing a hypothesis that the probability of the characteristic depends on the value of the jth variable.

Testing H0 : βj = 0 versus H1 : βj ≠ 0.

A value of z2 bigger than 3.84 indicates that we can reject the null hypothesis βj = 0 at the .05-level.

From the second row of this part of the SAS output:

SAS output

\(\beta_1:\left(\dfrac{0.3491}{0.0955}\right)^2=13.3481\)


R output

A value of z bigger than 1.96 or small p-value indicates that we can reject the null hypothesis βj = 0 at the .05-level.

Confidence Intervals: An approximate (1 − α) × 100% confidence interval for βj is given by

\(\hat{\beta}_j \pm z_{(1-\alpha/2)} \times SE(\hat{\beta}_j)\)

For example, a 95% CI for β1

0.3491 ± 1.96 × 0.0955 = (0.16192, 0.53628)

Then, the 95% CI for the odds-ratio of a student smoking if one parent is smoking in comparison to neither smoking is:

(exp(0.16192), exp(0.53628)) = (1.176, 1.710)

Student and parents' smoking behaviors are dependent. Furthermore,

SAS output

  • estimated conditional odds ratio of a student smoking between one parent smoking and neither smoking: exp(β1) = exp(0.3491) = 1.418,
  • estimated conditional odds ratio of a student smoking between both parents smoking and neither smoking: exp(β2) = exp(0.5882) = 1.801, and
  • estimated conditional odds ratio of a student smoking between both parents smoking and one smoking: exp(β2)/exp(β1) = 1.801/1.418 = 1.27 = exp(β2 − β1) = exp(0.5882 − 0.3491) = 1.27, i.e., a student who has both parents smoking is 27% more likely (on the odds scale) to smoke than a student who has only one parent smoking (see the short R check below).
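These conditional odds ratios, and Wald intervals for them, can also be read off the fitted R model directly; a short check, assuming the smoke.logistic fit from smoke.R:

## Odds ratios from the fitted three-level model and Wald intervals on the odds scale.
exp(coef(smoke.logistic))             # about 0.161 (baseline odds), 1.418 (one), 1.801 (both)
exp(confint.default(smoke.logistic))  # Wald confidence intervals for the odds ratios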