# Unbiased Estimation

On the previous page, we showed that if *X*_{i} are Bernoulli random variables with parameter *p*, then:

\(\hat{p}=\dfrac{1}{n}\sum\limits_{i=1}^n X_i\)

is the maximum likelihood estimator of *p*. And, if *X*_{i} are normally distributed random variables with mean *μ* and variance *σ*^{2}, then:

\(\hat{\mu}=\dfrac{\sum X_i}{n}=\bar{X}\) and \(\hat{\sigma}^2=\dfrac{\sum(X_i-\bar{X})^2}{n}\)

are the maximum likelihood estimators of *μ* and *σ*^{2}, respectively. A natural question then is whether or not these estimators are "good" in any sense. One measure of "good" is "unbiasedness."

If \(E[u(X_1,X_2,\ldots,X_n)]=\theta\), then the statistic \(u(X_1,X_2,\ldots,X_n)\) is an **unbiased estimator** of the parameter *θ*. Otherwise, \(u(X_1,X_2,\ldots,X_n)\) is a **biased estimator** of *θ*.

### Example

If *X*_{i} is a Bernoulli random variable with parameter *p*, then:

\(\hat{p}=\dfrac{1}{n}\sum\limits_{i=1}^nX_i\)

is the maximum likelihood estimator (MLE) of *p*. Is the MLE of *p* an unbiased estimator of *p*?

**Solution.** Recall that if *X*_{i} is a Bernoulli random variable with parameter *p*, then *E*(*X*_{i}) = *p*. Therefore:

\(E(\hat{p})=E\left(\dfrac{1}{n}\sum\limits_{i=1}^nX_i\right)=\dfrac{1}{n}\sum\limits_{i=1}^nE(X_i)=\dfrac{1}{n}\sum\limits_{i=1}^np=\dfrac{1}{n}(np)=p\)

The first equality holds because we've merely replaced \(\hat{p}\) with its definition. The second equality holds by the rules of expectation for a linear combination. The third equality holds because *E*(*X*_{i}) = *p*. The fourth equality holds because when you add the value *p* up *n* times, you get *np*. And, of course, the last equality is simple algebra.

In summary, we have shown that:

\(E(\hat{p})=p\)

Therefore, the maximum likelihood estimator is an unbiased estimator of *p*.
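As a quick numerical sanity check (a simulation sketch, not part of the derivation above), we can approximate \(E(\hat{p})\) by computing \(\hat{p}\) on many simulated Bernoulli samples and averaging; the average should land very close to the true *p*:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def p_hat(sample):
    """MLE of p for a Bernoulli sample: the sample proportion."""
    return sum(sample) / len(sample)

p, n, trials = 0.3, 50, 20000

# Simulate many samples of size n and collect the MLE from each.
estimates = []
for _ in range(trials):
    sample = [1 if random.random() < p else 0 for _ in range(n)]
    estimates.append(p_hat(sample))

# The average of the estimates approximates E(p-hat), which should be near p.
mean_estimate = sum(estimates) / trials
```

With 20,000 replications, `mean_estimate` agrees with *p* = 0.3 to roughly two decimal places, consistent with unbiasedness.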

### Example

If *X*_{i} are normally distributed random variables with mean *μ* and variance *σ*^{2}, then:

\(\hat{\mu}=\dfrac{\sum X_i}{n}=\bar{X}\) and \(\hat{\sigma}^2=\dfrac{\sum(X_i-\bar{X})^2}{n}\)

are the maximum likelihood estimators of *μ* and *σ*^{2}, respectively. Are the MLEs unbiased for their respective parameters?

**Solution.** Recall that if *X*_{i} is a normally distributed random variable with mean *μ* and variance *σ*^{2}, then *E*(*X*_{i}) = *μ* and *Var*(*X*_{i}) = *σ*^{2}. Therefore:

\(E(\bar{X})=E\left(\dfrac{1}{n}\sum\limits_{i=1}^nX_i\right)=\dfrac{1}{n}\sum\limits_{i=1}^nE(X_i)=\dfrac{1}{n}\sum\limits_{i=1}^n\mu=\dfrac{1}{n}(n\mu)=\mu\)

The first equality holds because we've merely replaced \(\bar{X}\) with its definition. Again, the second equality holds by the rules of expectation for a linear combination. The third equality holds because *E*(*X*_{i}) = *μ*. The fourth equality holds because when you add the value *μ* up *n* times, you get *nμ*. And, of course, the last equality is simple algebra.

In summary, we have shown that:

\(E(\bar{X})=\mu\)

Therefore, the maximum likelihood estimator of *μ* is unbiased. Now, let's check the maximum likelihood estimator of *σ*^{2}. First, note that we can rewrite the formula for the MLE as:

\(\hat{\sigma}^2=\left(\dfrac{1}{n}\sum\limits_{i=1}^nX_i^2\right)-\bar{X}^2\)

because:

\(\dfrac{1}{n}\sum\limits_{i=1}^n(X_i-\bar{X})^2=\dfrac{1}{n}\left(\sum\limits_{i=1}^nX_i^2-2\bar{X}\sum\limits_{i=1}^nX_i+n\bar{X}^2\right)=\dfrac{1}{n}\left(\sum\limits_{i=1}^nX_i^2-2n\bar{X}^2+n\bar{X}^2\right)=\left(\dfrac{1}{n}\sum\limits_{i=1}^nX_i^2\right)-\bar{X}^2\)

where the middle equality uses the fact that \(\sum\limits_{i=1}^nX_i=n\bar{X}\). Then, taking the expectation of the MLE, we get:

\(E(\hat{\sigma}^2)=\dfrac{(n-1)\sigma^2}{n}\)

as illustrated here:

\begin{align}

E(\hat{\sigma}^2) &= E\left[\dfrac{1}{n}\sum\limits_{i=1}^nX_i^2-\bar{X}^2\right]=\left[\dfrac{1}{n}\sum\limits_{i=1}^nE(X_i^2)\right]-E(\bar{X}^2)\\

&= \dfrac{1}{n}\sum\limits_{i=1}^n(\sigma^2+\mu^2)-\left(\dfrac{\sigma^2}{n}+\mu^2\right)\\

&= \dfrac{1}{n}(n\sigma^2+n\mu^2)-\dfrac{\sigma^2}{n}-\mu^2\\

&= \sigma^2-\dfrac{\sigma^2}{n}=\dfrac{n\sigma^2-\sigma^2}{n}=\dfrac{(n-1)\sigma^2}{n}\\

\end{align}

The first equality holds from the rewritten form of the MLE. The second equality holds from the properties of expectation. The third equality holds from manipulating the alternative formulas for the variance, namely:

\(Var(X)=\sigma^2=E(X^2)-\mu^2\) and \(Var(\bar{X})=\dfrac{\sigma^2}{n}=E(\bar{X}^2)-\mu^2\)

The remaining equalities hold from simple algebraic manipulation. Now, because we have shown:

\(E(\hat{\sigma}^2) \neq \sigma^2\)

the maximum likelihood estimator of *σ*^{2} is a biased estimator.
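The downward bias is easy to see numerically. The following simulation sketch (illustrative, with arbitrarily chosen *μ*, *σ*, and *n*) averages the MLE \(\hat{\sigma}^2\) over many normal samples; the average settles near \((n-1)\sigma^2/n\) rather than *σ*^{2}:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def mle_var(sample):
    """MLE of sigma^2: divides the sum of squared deviations by n."""
    n = len(sample)
    xbar = sum(sample) / n
    return sum((x - xbar) ** 2 for x in sample) / n

mu, sigma, n, trials = 5.0, 2.0, 10, 20000

# Average the MLE over many samples to approximate E(sigma-hat^2).
estimates = [mle_var([random.gauss(mu, sigma) for _ in range(n)])
             for _ in range(trials)]
mean_estimate = sum(estimates) / trials

# mean_estimate comes out near (n-1)/n * sigma^2 = 0.9 * 4 = 3.6,
# noticeably below the true sigma^2 = 4.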

### Example

If *X*_{i} are normally distributed random variables with mean *μ* and variance *σ*^{2}, what is an unbiased estimator of *σ*^{2}? Is *S*^{2} unbiased?

**Solution.** Recall that if *X*_{i} is a normally distributed random variable with mean *μ* and variance *σ*^{2}, then:

\(\dfrac{(n-1)S^2}{\sigma^2}\sim \chi^2_{n-1}\)

Also, recall that the expected value of a chi-square random variable is its degrees of freedom. That is, if:

\(X \sim \chi^2_{(r)}\)

then *E*(*X*) = *r*. Therefore:

\(E(S^2)=E\left[\dfrac{\sigma^2}{n-1}\cdot \dfrac{(n-1)S^2}{\sigma^2}\right]=\dfrac{\sigma^2}{n-1} E\left[\dfrac{(n-1)S^2}{\sigma^2}\right]=\dfrac{\sigma^2}{n-1}\cdot (n-1)=\sigma^2\)

The first equality holds because we effectively multiplied the sample variance by 1. The second equality holds by the law of expectation that tells us we can pull a constant through the expectation. The third equality holds because of the two facts we recalled above. That is:

\(E\left[\dfrac{(n-1)S^2}{\sigma^2}\right]=n-1\)

And, the last equality is again simple algebra.

In summary, we have shown that, if *X*_{i} is a normally distributed random variable with mean *μ* and variance *σ*^{2}, then *S*^{2} is an unbiased estimator of *σ*^{2}. It turns out, however, that *S*^{2} is *always* an unbiased estimator of *σ*^{2}, that is, for *any* model, not just the normal model. (You'll be asked to show this in the homework.) And, although *S*^{2} is always an unbiased estimator of *σ*^{2}, *S* is *not* an unbiased estimator of *σ*. (You'll be asked to show this in the homework, too.)
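Both facts can be checked by simulation. This sketch (with arbitrarily chosen *σ* and *n*) averages *S*^{2} and *S* over many normal samples: the average of *S*^{2} lands near *σ*^{2}, while the average of *S* falls noticeably short of *σ*:

```python
import random

random.seed(2)  # fixed seed so the run is reproducible

def sample_var(sample):
    """Unbiased sample variance S^2: divides by n - 1, not n."""
    n = len(sample)
    xbar = sum(sample) / n
    return sum((x - xbar) ** 2 for x in sample) / (n - 1)

mu, sigma, n, trials = 0.0, 3.0, 10, 20000

s2_values, s_values = [], []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    s2 = sample_var(sample)
    s2_values.append(s2)
    s_values.append(s2 ** 0.5)  # S is the square root of S^2

mean_s2 = sum(s2_values) / trials  # close to sigma^2 = 9 (unbiased)
mean_s = sum(s_values) / trials    # falls short of sigma = 3 (biased low)
```

The shortfall in `mean_s` reflects Jensen's inequality: since the square root is concave, \(E(S)=E(\sqrt{S^2})<\sqrt{E(S^2)}=\sigma\).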

Sometimes it is impossible to find maximum likelihood estimators in a convenient closed form. Instead, numerical methods must be used to maximize the likelihood function. In such cases, we might consider using an alternative method of finding estimators, such as the "method of moments." Let's go take a look at that method now.