Spectral Density Estimation / Spectral Analysis


Introduction to Spectral Analysis -- Spectral Density

So far, we've been looking at time domain methods. Specifically, we've discussed models that give an explicit formula for the current observation in terms of past observations and past white noise terms. (As Shumway and Stoffer say, this is like a regression of the present on the past.) This is a dynamic definition, a rule for how to move through time. Another way we can look at a stationary process is to view the current observation as a combination of waves, a mixture of objects that are defined for all time. (As Shumway and Stoffer say, a regression of the present on sines and cosines of various frequencies.)

We would like to look at the spectrum to learn about the underlying system. We want to identify dominant frequencies within the data. For example, in weather data the yearly cycle is very important, but there are also multi-year cycles, like El Niño, that are difficult to spot without these tools. Similarly, business data has a yearly cycle, but also multi-year business cycles.

One thing that we've already discussed is the periodogram, which gives the variation in the data attributable to each of a sequence of frequencies that are natural to the sampling rate (the Fourier frequencies). We will later discuss the spectral density, which is the population version of the periodogram: the periodogram (sample) estimates the spectral density (population).

Recall that period and frequency are inversely related. If we have quarterly data, then there are four data points per year (cycle). This corresponds to 0.25 cycles per data point. The notation we've used would then yield

$$\omega = \frac{1}{4} = 0.25 \;\text{cycles per observation} \qquad\Longleftrightarrow\qquad \text{period} = \frac{1}{\omega} = 4 \;\text{observations per cycle}$$

In chapter 2, we discussed periodic functions on the integers that have the following form

$$x_t = A\cos(2\pi\omega t + \phi)$$

for $t = 0, \pm 1, \pm 2, \ldots$, where $\omega$ is the frequency, $A$ is the amplitude, and $\phi$ is the phase. As we discussed before, having the $\phi$ inside the cosine function can be problematic: if we want to do a regression, it would be non-linear in $\phi$.

We solved this by using a trig identity

$$\cos(\alpha + \beta) = \cos(\alpha)\cos(\beta) - \sin(\alpha)\sin(\beta)$$

to rewrite the above as

$$x_t = A\cos(\phi)\cos(2\pi\omega t) - A\sin(\phi)\sin(2\pi\omega t)$$

which we may rewrite as

$$x_t = U_1\cos(2\pi\omega t) + U_2\sin(2\pi\omega t)$$

where $U_1 = A\cos(\phi)$ and $U_2 = -A\sin(\phi)$.

Here's where things are very different from before: we're going to assume that $U_1$ and $U_2$ are iid Gaussian with ZERO mean and fixed variance, $\sigma^2$. This is different from a deterministic periodic trend. For example, in temperature data (in the northern hemisphere), January would be cold and July would be hot. But if these are zero-mean random variables, then the opposite is just as likely.

We can generalize this model to include multiple frequencies with

$$x_t = \sum_{k=1}^{q}\left[U_{k1}\cos(2\pi\omega_k t) + U_{k2}\sin(2\pi\omega_k t)\right]$$

where the $U_{k1}$ and the $U_{k2}$ are independent $N(0, \sigma_k^2)$ random variables and the $\omega_k$ are distinct frequencies.
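To make this concrete, here is a minimal Python sketch (not from the text) that simulates one realization of this model. The frequencies and variances are made-up values chosen only for illustration, and NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)

# Hypothetical frequencies (cycles per observation) and variances sigma_k^2
omegas = [0.05, 0.10, 0.40]
sigma2s = [4.0, 9.0, 1.0]

x = np.zeros(n)
for omega, sigma2 in zip(omegas, sigma2s):
    # U_{k1}, U_{k2} ~ independent N(0, sigma_k^2), drawn once per frequency
    u1, u2 = rng.normal(0.0, np.sqrt(sigma2), size=2)
    x += u1 * np.cos(2 * np.pi * omega * t) + u2 * np.sin(2 * np.pi * omega * t)

print(x[:5])   # one realization of the random sum of sinusoids
```

Each run draws new random amplitudes, so repeated realizations look different even though the frequencies are fixed.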

So, these $x_t$ could be our model for an observed time series instead of some type of ARIMA model. Let's look at the moments of this model. One thing to notice is that all of the $x_t$ are Gaussian, since each is a sum of Gaussian random variables. Also,

$$E(x_t) = 0$$

and

$$\text{var}(x_t) = E\!\left(x_t^2\right) = E\left[\left(\sum_{k=1}^{q}\left(U_{k1}\cos(2\pi\omega_k t) + U_{k2}\sin(2\pi\omega_k t)\right)\right)^{\!2}\,\right]$$

All of the cross terms disappear to yield

$$\text{var}(x_t) = \sum_{k=1}^{q}\sigma_k^2\left[\cos^2(2\pi\omega_k t) + \sin^2(2\pi\omega_k t)\right] = \sum_{k=1}^{q}\sigma_k^2$$

How about covariances?

$$\gamma(t+h, t) = E\!\left(x_{t+h}\,x_t\right) = E\left[\left(\sum_{k=1}^{q} U_{k1}\cos(2\pi\omega_k (t+h)) + U_{k2}\sin(2\pi\omega_k (t+h))\right)\left(\sum_{j=1}^{q} U_{j1}\cos(2\pi\omega_j t) + U_{j2}\sin(2\pi\omega_j t)\right)\right]$$

This yields

$$\gamma(t+h, t) = \sum_{k=1}^{q}\sigma_k^2\left[\cos(2\pi\omega_k (t+h))\cos(2\pi\omega_k t) + \sin(2\pi\omega_k (t+h))\sin(2\pi\omega_k t)\right]$$

We again use a trig identity, $\cos(\alpha - \beta) = \cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta)$, to obtain

$$\gamma(h) = \sum_{k=1}^{q}\sigma_k^2\cos(2\pi\omega_k h)$$

This means that we have stationarity! The mean is constant and the autocovariance depends only on the lag $h$. In spectral analysis we will be viewing a stationary time series through the strength of the variances ($\sigma_k^2$) corresponding to the sinusoidal components ($\cos(2\pi\omega_k t)$, $\sin(2\pi\omega_k t)$).
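As a quick sanity check on this autocovariance, the sketch below (again with made-up frequencies and variances) averages $x_{t_0+h}\,x_{t_0}$ over many independent realizations of the model and compares the result to $\sum_k \sigma_k^2\cos(2\pi\omega_k h)$. The agreement is only approximate because it is a Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
omegas = np.array([0.05, 0.10, 0.40])   # hypothetical frequencies
sigma2s = np.array([4.0, 9.0, 1.0])     # hypothetical variances
t0, hs, reps = 7, np.arange(6), 100_000

# Draw U_{k1}, U_{k2} for many independent realizations at once
U1 = rng.normal(0.0, np.sqrt(sigma2s), size=(reps, 3))
U2 = rng.normal(0.0, np.sqrt(sigma2s), size=(reps, 3))

def x_at(times):
    # x_t for every realization at the given times; result has shape (reps, len(times))
    c = np.cos(2 * np.pi * np.outer(times, omegas))
    s = np.sin(2 * np.pi * np.outer(times, omegas))
    return U1 @ c.T + U2 @ s.T

gamma_hat = np.mean(x_at([t0]) * x_at(t0 + hs), axis=0)            # estimate of E[x_{t0} x_{t0+h}]
gamma_theory = (sigma2s * np.cos(2 * np.pi * np.outer(hs, omegas))).sum(axis=1)
print(np.round(gamma_hat, 2))
print(np.round(gamma_theory, 2))
```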


Spectral Density

The spectral density is a transformation of the autocovariance function. What's a transformation? A very simple transformation is the one from Celsius to Fahrenheit. To move from Celsius to Fahrenheit we use the transform $F = 1.8C + 32$; to move back (the inverse transform) we use $C = (F - 32)/1.8$. Notice that nothing is destroyed in the transform: we can move back and forth at will. In fact, we can move to Fahrenheit, take 10 degrees off, and then convert back.

We can also move between a function on the integers $\ldots, -2, -1, 0, 1, 2, \ldots$ and a function in frequency space. So, for a function $a_t$ on the integers, we can move to frequency space by taking the Fourier transform $A(\omega) = \sum_{t=-\infty}^{\infty} a_t\, e^{-2\pi i\omega t}$, which we will call $A(\omega)$, with $-0.5 \le \omega \le 0.5$. We can then go backwards using $a_t = \int_{-1/2}^{1/2} A(\omega)\, e^{2\pi i\omega t}\, d\omega$ for each $t$. Also note that $e^{-2\pi i\omega t} = \cos(-2\pi\omega t) + i\sin(-2\pi\omega t) = \cos(2\pi\omega t) - i\sin(2\pi\omega t)$.

Now, what is the spectral density? The spectral density is the Fourier transform of the autocovariance function $\gamma(h)$ for $h = \ldots, -1, 0, 1, \ldots$. We denote this by

$$f(\omega) = \sum_{h=-\infty}^{\infty} \gamma(h)\, e^{-2\pi i\omega h}, \qquad -0.5 \le \omega \le 0.5$$

This gives the oscillations in the autocovariance function. It is also sometimes called the power spectrum. We can do this when $\sum_{h=-\infty}^{\infty} |\gamma(h)| < \infty$. Remember that $\gamma(h)$ completely determines the distribution of a stationary Gaussian process. So, $f(\omega)$ also completely determines the distribution of $x_t$.
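Numerically, the definition can be evaluated by truncating the infinite sum. The sketch below uses a made-up, absolutely summable autocovariance, $\gamma(h) = 0.5^{|h|}$, just to show the mechanics; the function name and the truncation point are arbitrary choices. The printed values also preview the properties listed below.

```python
import numpy as np

def spectral_density(gamma, omegas, max_lag=200):
    """Truncated version of f(omega) = sum_h gamma(h) * exp(-2*pi*i*omega*h)."""
    h = np.arange(-max_lag, max_lag + 1)
    g = np.array([gamma(k) for k in h])
    return np.array([np.sum(g * np.exp(-2j * np.pi * w * h)) for w in omegas])

gamma = lambda h: 0.5 ** abs(h)          # made-up, absolutely summable autocovariance
omegas = np.linspace(-0.5, 0.5, 11)
f = spectral_density(gamma, omegas)

print(np.round(f.real, 3))               # real-valued, nonnegative, symmetric in omega
print(np.max(np.abs(f.imag)))            # imaginary part is numerically negligible
```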

Here are a few other properties.

1. f(ω) ≥ 0 (because of positive definiteness).

2. Symmetry: $f(\omega) = f(-\omega)$. Why?

$$f(-\omega) = \sum_{h=-\infty}^{\infty}\gamma(h)\, e^{2\pi i\omega h} = \sum_{h=-\infty}^{\infty}\gamma(-h)\, e^{-2\pi i\omega h} = \sum_{h=-\infty}^{\infty}\gamma(h)\, e^{-2\pi i\omega h} = f(\omega)$$

using the substitution $h \mapsto -h$ and the fact that $\gamma(-h) = \gamma(h)$.

3. Periodicity: $f(\omega) = f(\omega + 1)$. So we only need $\omega$ in $(-0.5, 0.5)$. Note that

$$f(\omega + 1) = \sum_{h=-\infty}^{\infty}\gamma(h)\, e^{-2\pi i(\omega+1)h} = \sum_{h=-\infty}^{\infty}\gamma(h)\, e^{-2\pi i\omega h}\, e^{-2\pi i h} = f(\omega)$$

Also note that $e^{-2\pi i h} = \cos(-2\pi h) + i\sin(-2\pi h) = \cos(2\pi h) - i\sin(2\pi h) = 1$. Remember that when $h$ is an integer, $\cos(2\pi h) = 1$ and $\sin(2\pi h) = 0$.

Let's look at some spectral densities. A white noise is very simple. Recall that for a white noise process the autocovariance function is

$$\gamma(h) = \begin{cases} \sigma^2 & h = 0 \\ 0 & h \ne 0 \end{cases}$$

Therefore,

$$f(\omega) = \sum_{h=-\infty}^{\infty}\gamma(h)\, e^{-2\pi i\omega h} = \gamma(0)\, e^{-2\pi i\omega\cdot 0} = \sigma^2$$

So, it is a constant function. This means that all frequencies are represented equally. If this were light, this means that all colors are in equal power - this is the same as "white".
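As a small check, here is a sketch (with assumed values $n = 2048$ and $\sigma^2 = 4$) that computes the periodogram of simulated white noise at the Fourier frequencies; averaged across frequencies it comes out close to the constant $\sigma^2$, as the flat spectral density suggests.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 2048, 4.0
w = rng.normal(0.0, np.sqrt(sigma2), size=n)

# Periodogram I(j/n) = |sum_t w_t exp(-2*pi*i*j*t/n)|^2 / n at the Fourier frequencies j/n
I = np.abs(np.fft.fft(w)) ** 2 / n
freqs = np.arange(n) / n

keep = (freqs > 0) & (freqs < 0.5)
print(np.mean(I[keep]))          # roughly sigma2 = 4: all frequencies contribute about equally
```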

Here's another simple example. How about an MA(1). Recall that the autocovariance function is

$$\gamma(0) = \sigma^2 + \theta_1^2\sigma^2 = \sigma^2(1 + \theta_1^2)$$

and

$$\gamma(1) = \gamma(-1) = \theta_1\sigma^2$$

So, the spectrum is

$$f(\omega) = \sum_{h=-1}^{1}\gamma(h)\, e^{-2\pi i\omega h} = \theta_1\sigma^2 e^{2\pi i\omega} + \sigma^2(1 + \theta_1^2) + \theta_1\sigma^2 e^{-2\pi i\omega}$$

Using the fact that $\tfrac{1}{2}\left(e^{-i\alpha} + e^{i\alpha}\right) = \cos(\alpha)$, we have

$$f(\omega) = \sigma^2\left[1 + \theta_1^2 + 2\theta_1\cos(2\pi\omega)\right]$$
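A small sketch of this result: the helper below (the function name and the value $\theta_1 = 0.9$ are just illustrative choices) evaluates the MA(1) density and compares band averages of the periodogram of a simulated MA(1) series against it. With a positive $\theta_1$, more power shows up at low frequencies than at high frequencies.

```python
import numpy as np
from scipy.signal import lfilter

def ma1_density(omega, theta1, sigma2=1.0):
    # f(omega) = sigma^2 * (1 + theta1^2 + 2 * theta1 * cos(2*pi*omega))
    return sigma2 * (1 + theta1 ** 2 + 2 * theta1 * np.cos(2 * np.pi * omega))

rng = np.random.default_rng(3)
n, theta1 = 4096, 0.9
x = lfilter([1.0, theta1], [1.0], rng.normal(size=n))    # x_t = w_t + theta1 * w_{t-1}

I = np.abs(np.fft.fft(x)) ** 2 / n                       # periodogram at the Fourier frequencies
freqs = np.arange(n) / n
low  = (freqs > 0.00) & (freqs < 0.10)
high = (freqs > 0.40) & (freqs < 0.50)

print(np.mean(I[low]),  ma1_density(0.05, theta1))       # band average vs density near band center
print(np.mean(I[high]), ma1_density(0.45, theta1))
```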


We could also derive the formula for the spectral density of any ARMA model (see the textbook), but this becomes rather complicated. Let's just take a look at how we derive the formula for the AR(1). First of all, we will take our AR(1):

$$x_t = \phi_1 x_{t-1} + w_t$$

We know something about the spectral density of a white noise process. We will take advantage of this in order to solve the problem for the AR. We rewrite this as:

$$w_t = x_t - \phi_1 x_{t-1}$$

Next, we will relate the autocovariance of the $w_t$ to the autocovariance of the $x_t$. We are trying to relate what we know about the spectral density of white noise to what we want to know about the spectral density of the AR.

$$\gamma_w(h) = \text{cov}(w_{t+h}, w_t) = \text{cov}\!\left(x_{t+h} - \phi_1 x_{t+h-1},\; x_t - \phi_1 x_{t-1}\right) = E\left[(x_{t+h} - \phi_1 x_{t+h-1})(x_t - \phi_1 x_{t-1})\right]$$

In the second step above we can multiply all of this out and then take the expectation of each of the terms below. What we end up with is $\gamma_w(h)$ on the left-hand side and terms involving $\gamma_x(h)$ on the right. (Note that the two autocovariances are delineated by their subscripts, $w$ and $x$.)

$$\gamma_w(h) = (1 + \phi_1^2)\,\gamma_x(h) - \phi_1\,\gamma_x(h-1) - \phi_1\,\gamma_x(h+1)$$

$\gamma_w(h)$ represents the autocovariance for white noise; $\gamma_x(h)$ represents the autocovariance of $x_t$.

The next step involves combining terms to simplify things. Now, this is true for every $h$, so we can multiply both sides by $e^{-2\pi i\omega h}$ and sum over all $h$:

$$\sum_{h=-\infty}^{\infty}\gamma_w(h)\, e^{-2\pi i\omega h} = (1 + \phi_1^2)\sum_{h=-\infty}^{\infty}\gamma_x(h)\, e^{-2\pi i\omega h} - \phi_1\sum_{h=-\infty}^{\infty}\gamma_x(h-1)\, e^{-2\pi i\omega h} - \phi_1\sum_{h=-\infty}^{\infty}\gamma_x(h+1)\, e^{-2\pi i\omega h}$$

and this is how we obtain the spectral densities, since by definition

$$f_w(\omega) = \sum_{h=-\infty}^{\infty}\gamma_w(h)\, e^{-2\pi i\omega h}$$

$$f_x(\omega) = \sum_{h=-\infty}^{\infty}\gamma_x(h)\, e^{-2\pi i\omega h}$$

If we follow this out with one of these summands, we will get the gist of what is going on.

Let's just concentrate on one of the terms. The problem is that the formula for spectral density has an h and this has h-1. What can we do to fix this?

$$\sum_{h=-\infty}^{\infty}\gamma_x(h-1)\, e^{-2\pi i\omega h}$$

Here is where the power of the exponential comes into play. We add and subtract $2\pi i\omega$ in the exponent, writing $e^{-2\pi i\omega h} = e^{-2\pi i\omega}\, e^{-2\pi i\omega(h-1)}$; the factor $e^{-2\pi i\omega}$ has no $h$ in it, so it can be pulled out of the summation.

$$\sum_{h=-\infty}^{\infty}\gamma_x(h-1)\, e^{-2\pi i\omega h} = e^{-2\pi i\omega}\sum_{h=-\infty}^{\infty}\gamma_x(h-1)\, e^{-2\pi i\omega(h-1)} = e^{-2\pi i\omega} f_x(\omega)$$

Now, the summation on the right is just $f_x(\omega)$, re-indexed over $h-1$.

If we had worked through the other summation, the difference would have been an $h+1$ instead. In this case, we would have added and subtracted a positive exponent instead. Here is what results for both cases:

$$\sum_{h=-\infty}^{\infty}\gamma_x(h-1)\, e^{-2\pi i\omega h} = e^{-2\pi i\omega} f_x(\omega) \qquad\qquad \sum_{h=-\infty}^{\infty}\gamma_x(h+1)\, e^{-2\pi i\omega h} = e^{2\pi i\omega} f_x(\omega)$$

What do we have at this point? $f_w(\omega) = \sigma^2$. And the $f_x(\omega)$'s factor out, so now we get:

$$\sigma^2 = f_x(\omega)\left[1 + \phi_1^2 - \phi_1 e^{-2\pi i\omega} - \phi_1 e^{2\pi i\omega}\right]$$

Next, in order to find fx(ω) we can divide and move things around like this:

$$f_x(\omega) = \frac{\sigma^2}{1 + \phi_1^2 - \phi_1 e^{-2\pi i\omega} - \phi_1 e^{2\pi i\omega}}$$

What is wrong with this expression for our formula? Something is here that should not be. How can we plot this with ω on one axis and the value on the other if this is a complex function?

It is more convenient to move back out to sines and cosines. Here are our definitions again:

$$e^{-2\pi i\omega} = \cos(2\pi\omega) - i\sin(2\pi\omega)$$

and

$$e^{2\pi i\omega} = \cos(2\pi\omega) + i\sin(2\pi\omega)$$

And here, writing out these definitions and looking above at what we ended up with for $f_x(\omega)$, the imaginary parts cancel and we essentially have two cosines:

$$f_x(\omega) = \frac{\sigma^2}{1 + \phi_1^2 - 2\phi_1\cos(2\pi\omega)}$$
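Before moving on, here is a tiny numerical check (with made-up values $\phi_1 = 0.6$ and $\sigma^2 = 1$) that the complex-exponential form and this cosine form agree, and that the result is real valued, so it can be plotted against $\omega$.

```python
import numpy as np

phi1, sigma2 = 0.6, 1.0                      # assumed values, for illustration only
omega = np.linspace(-0.5, 0.5, 201)

f_complex = sigma2 / (1 + phi1 ** 2 - phi1 * np.exp(-2j * np.pi * omega)
                                    - phi1 * np.exp(2j * np.pi * omega))
f_cosine  = sigma2 / (1 + phi1 ** 2 - 2 * phi1 * np.cos(2 * np.pi * omega))

print(np.max(np.abs(f_complex.imag)))              # ~0: the density is real after all
print(np.max(np.abs(f_complex.real - f_cosine)))   # ~0: the two forms agree
```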

Something along these lines is always going to happen with an AR. When we move up the order of the AR, increasing the number of cosine terms, we keep increasing the range of lags involved, i.e., $h-1$, $h-2$, $h+1$, $h+2$, etc. In the book, you will find a general formula for how these are derived. We end up with a sum involving more of these terms and cosines, each with its own coefficient. The main point is that for ARMA models these formulas are in closed form and can be written out.

Our work in this section will involve taking a time series and calculating not the autocovariance but the spectral density. We could go back and try to fit the series in the time domain with ARMAs, SARIMAs, etc. What the book gives us is the closed-form formula for, say, an ARMA(2,1); we can plug in the fitted values and see what spectral density they imply. We will then use the methods we will talk about to estimate a spectral density directly from the data. These should line up fairly well with our time-domain model. This is another way to check our time-domain model.
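The sketch below illustrates that kind of cross-check under simplifying assumptions: an AR(1) series is simulated to stand in for observed data, $\phi_1$ is estimated from the lag-1 sample autocorrelation, and band averages of the periodogram are compared with the spectral density implied by the fitted model. The band edges and the simulated $\phi_1 = 0.7$ are arbitrary choices, and a real analysis would use the estimation and smoothing methods discussed later.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
n = 2048
x = lfilter([1.0], [1.0, -0.7], rng.normal(size=n))      # stand-in for an observed series

# Time-domain fit: for an AR(1), the lag-1 sample autocorrelation estimates phi1
x = x - x.mean()
phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)
sigma2_hat = np.var(x) * (1 - phi_hat ** 2)               # implied white-noise variance

# Frequency-domain check: compare periodogram band averages with the fitted AR(1) density
freqs = np.arange(1, n // 2) / n
I = (np.abs(np.fft.fft(x)) ** 2 / n)[1 : n // 2]
f_fit = sigma2_hat / (1 + phi_hat ** 2 - 2 * phi_hat * np.cos(2 * np.pi * freqs))

for lo, hi in [(0.0, 0.1), (0.2, 0.3), (0.4, 0.5)]:
    band = (freqs > lo) & (freqs <= hi)
    print((lo, hi), round(float(np.mean(I[band])), 2), round(float(np.mean(f_fit[band])), 2))
```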


Intuition Behind Spectral Density

Let's take a look at some of the spectral densities and think about the corresponding time series. We have seen that we can calculate spectral densities for simple models. And, here is the form of a spectral density for an AR(1):

$$f_x(\omega) = \frac{\sigma^2}{1 + \phi_1^2 - 2\phi_1\cos(2\pi\omega)}$$

What happens when $\phi_1 = 0$? The model would be $x_t = 0\times x_{t-1} + w_t$, or white noise. What you end up with is $\sigma^2$, a constant, which we already know is the spectral density for white noise.

Let's take a look at a couple of other AR examples. $\sigma^2$ doesn't really influence the inherent shape of the spectral density curve; it only scales it. For the first example, let's set $\phi_1 = 0.8$.

[Plot: spectral density of an AR(1) with $\phi_1 = 0.8$]

The model is: $x_t = 0.8\, x_{t-1} + w_t$

If $x_{t-1}$ is high, what would you expect $x_t$ to be? With $\phi_1$ fairly high and positive, positive values tend to be followed by positive values and negative values by negative values, so you would expect a lot of low frequencies. The plot above indicates this through the spectral density. When the time series is high, it tends to stay high; when the time series is low, it tends to stay low.

What we see is that the sines and cosines at low frequencies are all strong. While it would be difficult to pick out one particular pattern, what we will see is a more general pattern of long oscillations.
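A quick simulation (a sketch only; the seed and series length are arbitrary) shows both sides of this: consecutive observations usually share the same sign, and most of the periodogram's power sits below frequency 0.1.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
n, phi1 = 4096, 0.8
x = lfilter([1.0], [1.0, -phi1], rng.normal(size=n))       # x_t = 0.8 x_{t-1} + w_t

# Time-domain view: high tends to follow high, low tends to follow low
print(np.mean(np.sign(x[1:]) == np.sign(x[:-1])))           # well above 0.5

# Frequency-domain view: share of periodogram power at frequencies below 0.1
freqs = np.arange(1, n // 2) / n
I = (np.abs(np.fft.fft(x)) ** 2 / n)[1 : n // 2]
print(np.sum(I[freqs < 0.1]) / np.sum(I))                   # most of the power is at low frequencies
```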

Let's take a look at $\phi_1 = 0.4$.

[Plot: spectral density of an AR(1) with $\phi_1 = 0.4$]

We are seeing something that still has a pattern, to a certain extent; when we are high, we tend to stay high, and so on, but it is less extreme.

Please note that these spectral density graphs do not go down to zero. In the graph above, the density only goes down to about 0.5. Even at frequencies where the value is very low, those frequency components are still present; it is just that the low-frequency components are stronger.

Let's look at $\phi_1 = -0.6$ just so that we get some contrast.

[Plot: spectral density of an AR(1) with $\phi_1 = -0.6$]

We can see that the series oscillates much more quickly. If we get an observation above zero, it is almost always followed by an observation that is negative. Again, the spectral density does not go down to zero; it is not that we have no low frequencies, it is just that the higher frequencies are about 12 times stronger.

As you move through the spectral densities, notice that not only is the shape changing but the scale is changing as well.
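To see both effects at once, the sketch below evaluates the closed-form AR(1) density at the three values used above (with $\sigma^2 = 1$ assumed, so the numbers will not match the plots exactly) and prints the minimum, the maximum, and their ratio. The minimum is never zero, but the ratio changes dramatically with $\phi_1$.

```python
import numpy as np

def ar1_density(omega, phi1, sigma2=1.0):
    # Closed-form AR(1) spectral density derived above
    return sigma2 / (1 + phi1 ** 2 - 2 * phi1 * np.cos(2 * np.pi * omega))

omega = np.linspace(0, 0.5, 501)
for phi1 in (0.8, 0.4, -0.6):
    f = ar1_density(omega, phi1)
    print(phi1, round(float(f.min()), 2), round(float(f.max()), 2),
          round(float(f.max() / f.min()), 1))
```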


We will talk about ARMAs later. Even if we take a look at just an AR(2), one of the things we can get is something that looks like this:

[Plot: spectral density of an AR(2) with a broad peak between frequencies 1/5 and 1/3]

We can interpret this much as you might for an El Niño event that occurs every three to five years. In this case, the "spike" is spread over a range of frequencies. This is why no one can say that this or that event occurs every three years exactly. Even to say that it is every three to five years is a little misleading, because the gap could be longer. In fact, it is not even clear when an oscillation happens and when it doesn't. Hence we get reports of a strong El Niño or a weak El Niño, because what we are analyzing is a stationary pattern which doesn't have a deliberate "it happened or it did not" character; the frequencies between 1/5 and 1/3 are simply stronger.
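As a final sketch, the general ARMA expression specializes for an AR(2) to $f(\omega) = \sigma^2 / \left|1 - \phi_1 e^{-2\pi i\omega} - \phi_2 e^{-4\pi i\omega}\right|^2$. The coefficients below ($\phi_1 = 0.3$, $\phi_2 = -0.8$) are made up, chosen only because their complex characteristic roots put the peak somewhere between frequencies 1/5 and 1/3, which is the kind of broad, El Niño-like spike discussed above.

```python
import numpy as np

def ar2_density(omega, phi1, phi2, sigma2=1.0):
    # f(omega) = sigma^2 / |1 - phi1*e^{-2*pi*i*omega} - phi2*e^{-4*pi*i*omega}|^2
    z = np.exp(-2j * np.pi * omega)
    return sigma2 / np.abs(1 - phi1 * z - phi2 * z ** 2) ** 2

phi1, phi2 = 0.3, -0.8                       # made-up coefficients with complex roots
omega = np.linspace(0.001, 0.5, 1000)
f = ar2_density(omega, phi1, phi2)

peak = omega[np.argmax(f)]
print(peak, 1 / peak)                        # peak frequency and the corresponding pseudo-period
```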