Here's the plan for this post. We will complete our look at functions undergoing Fourier transformation; there are some really useful relationships to see and to commit to memory (even when the figures are hand drawn for expediency!). Then we will look at the effects of the FT on real signals. We have two issues to consider: 1) a finite sampling window, and 2) digitization. Off we go!

**Fourier pairs**

We saw in the last post (Part Five) how a single frequency - a sinusoid - can be represented by a single line - a delta function - in a frequency domain plot. This is an example of a Fourier pair because the relationship holds both ways, i.e. if you take a delta function in the time domain and FT it you get a sinusoid in the frequency domain, and vice versa:
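You can check this pair numerically with numpy's FFT. This is a minimal sketch, not anything from the figures above; the sampling rate (100 Hz) and tone frequency (5 Hz) are invented for illustration. A pure sinusoid transforms into a single spectral line, the discrete stand-in for a delta function:

```python
import numpy as np

# A pure sinusoid in the time domain transforms to a single spectral
# line (a discrete stand-in for a delta function).
# All parameter values are illustrative, not from the post.
fs = 100.0                      # sampling rate (Hz)
f0 = 5.0                        # sinusoid frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)   # one second of samples
signal = np.cos(2 * np.pi * f0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)  # the single line sits at 5.0 Hz
```

(One second at 100 Hz gives 1 Hz frequency bins, so the 5 Hz tone lands exactly on a bin and the "delta" is a single point.)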

What about other useful Fourier pairs? Here's another important one: an exponential decay Fourier transforms into what's known as a Lorentzian line:

Again, remember that the exponential can be in either the time domain or the frequency domain, although in MRI we generally deal with exponential decays (signals) in the time domain. It's also worth pointing out here that the faster the exponential decay, the broader the Lorentzian line in the other domain. This inverse relationship has a number of practical consequences for fMRI. I'll come back to this point below.
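The inverse relationship is easy to verify numerically. In this sketch the decay rates are invented and `linewidth` is a hypothetical helper that estimates a line's width by counting spectral bins above half the peak height; the faster decay gives the broader line:

```python
import numpy as np

# Sketch of the inverse relationship: the faster the exponential decay
# in the time domain, the broader the line in the frequency domain.
def linewidth(decay_rate, fs=1000.0, duration=10.0):
    """Approximate full width at half maximum (Hz) of the magnitude
    spectrum of a decaying cosine, by counting bins above half max."""
    t = np.arange(0, duration, 1 / fs)
    fid = np.exp(-decay_rate * t) * np.cos(2 * np.pi * 50.0 * t)
    spec = np.abs(np.fft.rfft(fid))
    half = spec.max() / 2
    df = 1 / duration               # frequency resolution (Hz)
    return np.count_nonzero(spec > half) * df

slow = linewidth(decay_rate=5.0)    # slow decay -> narrow line
fast = linewidth(decay_rate=50.0)   # fast decay -> broad line
print(slow, fast)                   # fast decay yields the wider line
```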

I found an interactive online tool on the National High Magnetic Field Lab's website that allows you to change the rate of decay as well as the frequency of an oscillation in the time domain, and see the resulting Lorentzian line in the frequency domain. Tinker with it here. (It's Java, it takes a couple of seconds to load.)

Next, let's look at arguably the most useful Fourier pair for MRI. A boxcar (or top hat) function Fourier transforms into a sinc function, where sinc(x) = sin(x)/x:

Note that, in principle, the side lobes of a sinc function in either the time or frequency domain extend out to infinity (just as an exponential does, in principle). But for most practical purposes the intensity is considered to have decayed to zero by the fifth or sixth side lobe, where the lobes are counted on either side of the sinc's central maximum, including the negative (1st, 3rd, 5th,...) as well as the positive (2nd, 4th, 6th,...) lobes in the total. The sincs in the cartoon above would be considered 5-lobe sincs.
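If you'd like to see the boxcar-sinc pair in code, here's a sketch (the array sizes are invented). For sampled data the DFT of a boxcar follows the periodic cousin of the sinc (the Dirichlet kernel), with the nulls between side lobes spaced every N/width bins:

```python
import numpy as np

# Sketch: the DFT of a boxcar follows a sinc-like envelope
# (the periodic sinc, or Dirichlet kernel). Sizes are illustrative.
N, width = 256, 16
box = np.zeros(N)
box[:width] = 1.0                      # boxcar of `width` samples
spectrum = np.fft.fft(box)

# Compare |spectrum| to the closed form |sin(pi k w / N) / sin(pi k / N)|
k = np.arange(1, N)                    # skip k = 0 (the main-lobe peak)
dirichlet = np.abs(np.sin(np.pi * k * width / N) / np.sin(np.pi * k / N))
assert np.allclose(np.abs(spectrum[k]), dirichlet, atol=1e-9)

# The nulls between side lobes occur every N/width bins
print(np.abs(spectrum[N // width]))    # ~0: first null of the sinc
```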

I've included both time and frequency domain representations of the boxcars and the sincs, to reinforce the point that one begets the other in the Fourier domain. It turns out that we use sincs and boxcars a lot in imaging. For example, sinc-shaped RF pulses in the time domain are used to select slabs of frequency (i.e. slice selection). And, as we'll see in a moment, the fact that we are acquiring a digital signal over a finite acquisition window (a boxcar) actually makes our image pixels sinc-shaped in reality. But more about these issues later in this and subsequent posts.

**Convolution in pictures**

What about the situation when the time domain signal isn't just one of these typical functions, but two or more? For example, real signals tend to die away with time in an exponential fashion. That's equivalent to a persistent (infinite) signal multiplied by an exponential decay function. The result to expect in the conjugate domain is then quite easy to visualize because the FT of the resultant (exponential multiplied by the raw signal) is equivalent to the convolution of the FTs of the individual signals:

An example of convolution in the frequency domain (bottom row), which is equivalent to multiplication in the time domain (top row).

What exactly *is* convolution? If you want to get into the mathematics go right ahead! Or, like me, you could content yourself in "seeing" the blending together of the frequency domain functions. It's quite an intuitive process once you've seen it a handful of times. (There will be other examples later on.)

Look again at the (badly drawn) detail in the above figure. (See Note 1.) Multiplication of the original signal (top-left) with an exponential decay (top-middle) has had two effects in the final frequency domain representation (bottom-right). The first is that the two peaks have been broadened. The second is that the noise level has been decreased (which is why the smaller of the two peaks appears taller in the bottom-right figure - see Note 2). This process is called "line broadening" by NMR spectroscopists and is a standard "denoising" technique, though of course the reduction in noise comes at the expense of reduced (frequency domain) resolution. Do we use similar noise reduction tricks in MRI? Sometimes. We tend to call it "smoothing." We'll cover smoothing at a later date.
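The line broadening trade-off can be demonstrated numerically. This is a sketch, not the figure's actual data; the decay rates, noise level, and the `noise_floor` helper are all invented for illustration. Multiplying the noisy time-domain signal by an extra decaying exponential lowers the spectral noise floor (at the cost of a broader peak, as described above):

```python
import numpy as np

# Sketch of NMR-style "line broadening": multiply the time-domain
# signal by a decaying exponential before the FT. The peak gets
# broader but the noise level drops. All parameters are invented.
rng = np.random.default_rng(0)
fs, duration = 1000.0, 2.0
t = np.arange(0, duration, 1 / fs)

fid = np.exp(-2.0 * t) * np.cos(2 * np.pi * 100.0 * t)  # decaying signal
noisy = fid + 0.05 * rng.standard_normal(t.size)

def noise_floor(x):
    """Median spectral magnitude: a rough stand-in for the noise level,
    since the signal occupies only a handful of bins."""
    return np.median(np.abs(np.fft.rfft(x)))

broadened = noisy * np.exp(-10.0 * t)   # apply extra line broadening
print(noise_floor(noisy), noise_floor(broadened))
# The broadened spectrum has the lower noise floor.
```

The exponential attenuates the late part of the record, which is mostly noise by then, so the total noise power falls faster than the signal does.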

**Heisenberg's uncertainty principle and time-decaying signals**

There is one relationship that we should consider in a little more detail. Above, an exponential function was applied intentionally to a time domain signal to reduce noise, simultaneously causing the peaks in the frequency domain to get broader. This broadening effect is an example of the uncertainty principle, named after Werner Heisenberg, one of the fathers of quantum mechanics. Fortunately, however, we don't need to get into quantum mechanics to apply his principle. For our purposes the uncertainty principle says that the faster a signal decays in the time domain, the broader it becomes in the frequency domain, and vice versa.

The uncertainty principle has profound implications for imaging resolution because the faster the imaging signals decay, the larger the effective pixel dimensions become. Put another way, rapidly decaying signals place restrictions on the maximum available spatial resolution. We will usually seek to preserve the signals - minimize their decay - through processes such as shimming. This is just one of a litany of issues we face with EPI for fMRI, as we will see in a few posts' time.

**Dealing with real signals**

**Finite acquisition periods and signal clipping**

Line broadening is an optional trick. Using a finite acquisition window isn't, however! In the previous example the time domain signal at top-left was considered to have decayed away to near zero intensity by the end of the acquisition period. But what happens if acquisition ends "too soon"? To understand the effects we can employ the convolution theorem again. Take another exponentially decaying wave - this one contains just a single signal frequency, plus noise - but now multiply the wave by a boxcar, resulting in a clipped signal, as shown at top-right in this cartoon:

An example of signal "clipping."

Since we know that the FT of the whole signal should be a nice, sharp Lorentzian peak (bottom-left), and since the FT of a boxcar is a sinc function (bottom-middle), the FT of the clipped signal is a peak with wiggles, or side lobes (bottom-right). The resultant is a blend (a convolution) of a Lorentzian with a sinc.

Spectroscopists call these wiggles "feet," and they might try to remove them via multiplication with, for example, an exponential function in the time domain (at the expense of broadening, as we've already seen). (See Note 3.) While the signals we're considering here are just decaying sinusoids (plus noise), the principle of signal clipping holds for MR imaging, too. We often see these "feet" in anatomical MRI especially, where we call the effect "Gibbs ringing." These sorts of wiggles often aren't much of a problem in EPI scans as used for fMRI, but they are fairly common in anatomical images. Why do you think that might be? (See Note 4 for the answer.) We'll look in more detail at Gibbs ringing in a later post.

**Digitization**

Now a slight change of direction. Even though I didn't say so explicitly, to this point all the functions we've dealt with have been continuous, i.e. non-digital. That's because the Fourier relationships (Fourier pairs, convolutions) we've seen so far hold to good approximations whether the functions are discrete or continuous. In practice, though, we actually detect voltages and must convert them with an analog-to-digital converter (ADC) to digital signals so that we can use computers (which are inherently digital at the present time) to do the processing and yield the desired product: images. You don't need to know how ADCs work, but it is important to recognize that acquiring discrete, digital data points can only approximate the underlying continuous signals being generated by the spins. We'll see a consequence of this approximation below.

The ADC, also called the digitizer, is either off or on. (This was represented in the previous cartoon as a boxcar function in the top-middle part of the figure.) With modern scanners the digitizer is usually run at its maximum speed (see Note 5) and then, if required, digital filtering and other niceties can be applied to the oversampled data. Finally, any redundant data points can be discarded to yield the total number of data points requested by the scanner operator. For MRI, the number of data points translates into the number of pixels in your image. (Technically it's only in one dimension of your image, but let's not split hairs.)

Once we have digital signals representing our continuous functions we need to compute something called the discrete Fourier transform (DFT), and we generally use a fast Fourier transform (FFT) algorithm to do it. You don't need to remember all these different flavors of (digital) FT, and you don't really even need to understand the differences between them; the concepts I've described above and in previous posts will stand you in good stead. There is, however, one exception: digital sampling places a restriction on the amount of information that can be approximated in the discrete representation. One important aspect of this restriction is encapsulated in something called the Nyquist theorem.
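In case the alphabet soup is off-putting: the FFT is nothing more than a fast algorithm for computing the DFT. This sketch (with invented test data) compares a naive O(N²) DFT against numpy's FFT and confirms they agree:

```python
import numpy as np

# Sketch: the FFT is just a fast algorithm for the DFT. A naive
# O(N^2) matrix-based DFT and numpy's FFT give the same answer.
def naive_dft(x):
    N = len(x)
    n = np.arange(N)
    # DFT matrix: W[k, n] = exp(-2*pi*i*k*n/N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
assert np.allclose(naive_dft(x), np.fft.fft(x))
print("naive DFT matches FFT")
```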

**The Nyquist frequency**

Consider the two sinusoids in this figure:

From: http://en.wikibooks.org/wiki/Analog_and_Digital_Conversion/Nyquist_Sampling_Rate

Clearly, they have different frequencies, but if the digitizer is sampling only at the times of the black dots then both waves would be digitized in precisely the same manner; there would be no way to differentiate between them in the digital representation given. The blue wave would *appear* to have the same frequency as the red wave!

In order to capture all frequencies accurately we must digitize at a rate of not less than two data points per cycle for the highest frequency of interest, *i.e.* the sampling frequency must be at least twice the highest frequency to be sampled. If this criterion is not met then, as we've seen above, it may not be possible to differentiate between signals having different frequencies. Let's look at this more closely.
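The figure's point can be reproduced in a few lines. In this sketch the rates and frequencies are invented, chosen so the two tones differ by exactly the sampling rate; once digitized, their samples are identical:

```python
import numpy as np

# Sketch: two sinusoids of different frequency can produce identical
# samples when the sampling rate is too low. Here f2 = f1 + fs, an
# alias pair; all numbers are illustrative.
fs = 10.0                         # sampling rate (Hz); Nyquist = 5 Hz
f1, f2 = 3.0, 13.0                # 13 Hz is well above Nyquist
t = np.arange(20) / fs            # 20 sample times

wave1 = np.sin(2 * np.pi * f1 * t)
wave2 = np.sin(2 * np.pi * f2 * t)
assert np.allclose(wave1, wave2)  # indistinguishable once digitized
print("samples identical")
```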

**Aliasing**

Consider the three (continuous) signals below, given as solid black lines on the top-left part of the figure. Each wave is being digitally sampled at the blue dots and each wave is sampled with the same sampling rate. Joining up these points produces the dashed blue waves shown at top-right. The digital representations of waves 1 and 2 are perfectly adequate, but for wave 3 we clearly have a problem: its digitized frequency appears to be lower than it is in (continuous) reality.

Aliasing of a frequency (in red) that is above the Nyquist frequency (the dashed white lines).

In the frequency domain all frequencies, both positive and negative, that lie between the two dashed white lines would satisfy the Nyquist condition and be properly sampled, thus appearing correctly in the frequency domain. (Remember that a negative frequency is simply one that is phase shifted by 180 degrees compared to its positive counterpart.) Wave 1's frequency is well below the Nyquist frequency limit of two data points per cycle, so it appears correctly as a green line after Fourier transformation. Wave 2 is exactly at the Nyquist frequency so it, too, is correctly represented (in yellow) after FT. But wave 3, in red, is *aliased* (or *folded*) to a lower frequency and actually appears at the position marked with the dashed red line. Oops.
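You can predict where an aliased peak lands: a tone above the Nyquist frequency folds back to |f - fs| in the sampled spectrum. A sketch, with all numbers invented for illustration:

```python
import numpy as np

# Sketch: a tone above the Nyquist frequency folds back to |f - fs|
# in the sampled spectrum. All numbers are illustrative.
fs = 100.0                         # sampling rate; Nyquist = 50 Hz
f_true = 70.0                      # above Nyquist -> will alias
t = np.arange(0, 1.0, 1 / fs)
spectrum = np.abs(np.fft.rfft(np.cos(2 * np.pi * f_true * t)))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

apparent = freqs[np.argmax(spectrum)]
print(apparent)   # 30.0 Hz: the alias of 70 Hz, i.e. |70 - 100|
```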

We're faced with two choices (three if you include "living with it"). If the red line (wave 3) is of interest to us then we could increase the digitization rate so that the Nyquist frequency is increased to beyond the frequency of wave 3. This means we will have increased the bandwidth of our frequency domain; the red line would then fall within the (expanded) position of the dashed white lines, and everything is good again.

Alternatively, if the red line isn't of interest to us we could try to "chop it out" of the bandwidth that we've selected. It may be possible, under certain circumstances, to apply a filter to the bandwidth as specified above, thereby allowing the green and yellow signals to pass through but rejecting the red signal. This would prevent the red signal from being aliased back into the bandwidth of interest. (See Note 6.) But there are limits to filtering, and it's more than I want to get into at this point.

That'll do for today. Next time: magnetic field gradients and simple pulse sequences.

**------------------**

**Notes**

1. There are two frequencies in the interferogram, the smaller with an amplitude about 1/10th of the other. Everything else is noise. This is an example of an NMR spectrum, but it doesn't matter if you don't know what one of those is. Just recognize the two sinusoidal signals and the noise and you're good to go.

2. If you're wondering how the smaller peak seems to have grown relative to the bigger peak, this can be another consequence of line broadening. Note that as the exponential function removes progressively more signal with time, it attenuates more signal from the more intense frequency (as well as noise) than from the lower amplitude frequency; the lower amplitude frequency has effectively died to zero long before the end of the interferogram. Indeed, this is one reason to apply line broadening: it can enhance the visibility of small peaks above the noise, even if it might distort the relative amplitudes of some signals in the process.

3. The NMR spectroscopist's term for removing the wiggles is *apodization* - literally, "removing the feet." There are many approaches, depending on the application. As with noise-reduction strategies in MRI, we tend to throw these approaches under the moniker "smoothing" which, while accurate, isn't nearly as gory! I guess there's a good reason for not chopping feet off in MRI, given its medical role. Same rationale as not calling it *nukular* magnetic resonance imaging, I suppose. Fair enough.

4. In fMRI we are acquiring relatively low amplitude imaging signals using EPI (typically). We're intentionally waiting for BOLD contrast to evolve over TE, which reduces the (maximum) available SNR (compared to, say, a spin echo signal) by an order of magnitude. Furthermore, we then acquire a succession of gradient echoes over tens of milliseconds, each signal getting progressively weaker and weaker... So, while the effects of the finite sampling window are present for EPI, the effects are very hard to see in typical scans acquired for fMRI experiments. EPI has lots of other problems instead! We will look at examples of all of these effects in real data (in EPI as well as anatomical scans) once we've got through these background physics posts. Perhaps now you can see why I didn't just jump right into the examples in real data! It's crucial to understand where the effects come from first!

5. Quiz time! Without emailing your facility physicist, what is the speed and resolution of the ADCs on your MRI scanner? What, you don't know??? Shame on you! But not really. You see, it's very easy to get beguiled by all the pretty pictures rolling off a modern scanner, to the point where you don't really feel the need to ask the performance questions. And as it turns out, it's not often you need to think about it. A typical scanner these days may have a receiver bandwidth (speed) of a MHz or more, meaning it samples data points every microsecond or faster, and a dynamic range (resolution) of 12 bits or more. Yes, more is generally better in both departments, but there are limits and it's also possible to get to a point of diminishing returns. If I come across experimental reasons to think about ADC speed or resolution then I'll consider a dedicated post. As a general rule, though, fMRI isn't greatly hampered by ADC performance. Other hardware specifications tend to play a far larger role.

6. Using a combination of analog and digital filtering, modern MR scanners produce a "pass band" that is approximately a square covering the user-defined bandwidth as given by the white dashed lines in the cartoon. Filters are imperfect so they tend to perform slightly more as trapezoids than as squares, but using clever combinations of analog filtering to produce the first-level pass band, and digital filtering to clean up the edges, from our perspective we can think of these devices as near-perfect squares. With one caveat. Filtering can only be applied to the signals as they are acquired, i.e. as the signals are received and digitized. This limits our ability to reduce or eliminate aliasing to one dimension of 2D echo planar images, as most often used for fMRI; the other dimension can still exhibit aliasing. I will show several examples of aliasing in a dedicated post on the subject.
