In Part Eleven of the series Physics for understanding fMRI artifacts (hereafter referred to as PFUFA) you saw how setting parameters in k-space determined the image field-of-view (FOV) and resolution. In that introduction I kept everything simple, and the Fourier transform from the k-space domain to the image domain worked perfectly. For instance, in one of the examples the k-space step size was doubled in one dimension, thereby neatly chopping the corresponding image domain in half with no apparent problems. At the time, perhaps you wondered where the cropped portions of sky and grass had gone from around the remaining, untouched Hawker Hurricane aeroplane. Or perhaps you didn't.
In any event, you can assume from the fact that this is a post dedicated to something called 'aliasing' that in real world MRI things aren't quite as neat and tidy. Changing the k-space step size - thereby changing the FOV - has consequences depending on the extent of the object being imaged relative to the extent of the image FOV. It's possible to set the FOV too small for the object. Alternatively, it's possible to have the FOV set to an appropriate span but position it incorrectly. (The position of the FOV relative to signal-generating regions of the sample is a settable parameter on the scanner.) Overall, what matters is where signals reside relative to the edges of the FOV.
Now, on a modern MRI scanner with fancy electronics, aliasing is a problem in one dimension only: the phase encoding dimension. (Yeah, the one with all the distortion and the N/2 ghosts. Sucks to be that dimension!) The frequency encoding dimension manages to escape the aliasing phenomenon by virtue of inline analog and digital filtering, processes that don't have a direct counterpart in the phase encoding dimension. Instead, signal that falls outside the readout dimension FOV, either because the FOV is too small or because the FOV is displaced relative to the object, is eliminated. It's therefore important to know what happens where and when as far as both image dimensions are concerned. One dimension gets chopped, the other gets aliased.
I will first cover the signal filtering in the frequency encoding dimension and then deal with aliasing in the phase encoding dimension. Finally, I'll give one example of what can happen when the FOV is set inappropriately for both dimensions simultaneously. At the end of the process you should be able to differentiate the effects with ease. (See Note 1.)
Effects in the frequency encoding dimension
Below are two sets of EPIs of the same object - a spherical phantom - that differ only in the position of the readout FOV relative to the phantom. In the top image the readout FOV is centered on the phantom, whereas in the bottom image the FOV is displaced to the left, causing the left portions of the phantom signal in each slice to be neatly, almost surgically, removed:
|Readout FOV centered relative to the phantom.|
|Readout FOV displaced to the left of the phantom, resulting in elimination of the signal from the left edge of each slice.|
I set the contrast so that you could see the background noise and verify that there's no sneaky trace of the removed signal somewhere else in the image. It's really gone. But where? And why?
The readout dimension is the one for which the k-space sampling is actually performed, using an analog-to-digital converter (ADC). As you will recall from the various posts on k-space, it's only necessary to sample the rows of k-space to fully sample the 2D plane; the 2D k-space matrix is simply a stack of these rows. What this means practically is that, as the name suggests, the readout dimension (a.k.a. frequency encoding dimension) is the one for which data points are obtained from the ADC, the rows of analog signals having been passed to the ADC from the receiver RF coil electronics.
Now, whenever you have a stream of data points you can filter them, e.g. with a bandpass filter that rejects frequencies above and below some passband. This is what happens to each row of k-space data points as they are processed by the ADC. By itself this would tend to attenuate signal (and noise) from frequencies (which we know correspond to spatial positions) that are outside the passband. Except that analog filters are imperfect. So, to clean things up your fancy scanner is also equipped with digital filters that trim the passband until it performs as a near perfect square set at the FOV. This step is achieved by a high degree of oversampling (sampling at multiples of the specified Nyquist frequency), and it's all done invisibly to you, the operator. What we end up with is a neatly trimmed image. No signal (or noise) from outside of the defined readout FOV survives the filtering process. Pretty nifty, huh?
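The filter-then-crop idea can be sketched with a toy one-dimensional simulation. This is a minimal numpy sketch, not the scanner's actual filter chain: I'm standing in for the brick-wall digital filter by reconstructing the oversampled FOV and keeping only the central, prescribed portion (mathematically equivalent for an ideal filter), and the positions, matrix size and oversampling factor are arbitrary illustrative choices:

```python
import numpy as np

fov, n, os = 1.0, 64, 4        # nominal FOV, matrix size, oversampling factor
x_in, x_out = 0.25, 0.75       # point sources: inside and outside the FOV

def image(n_pts, fov_):
    """Simulate a readout of the two point sources, then reconstruct."""
    m = np.arange(n_pts) - n_pts // 2      # sample indices
    k = m / fov_                           # k-space step = 1 / FOV
    sig = np.exp(2j*np.pi*k*x_in) + np.exp(2j*np.pi*k*x_out)
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sig)))

# For contrast: sampling at the nominal rate with no filtering would wrap
# the outside source into the image, just as the phase encoding dimension does
img_naive = image(n, fov)                  # alias appears at x = -0.25

# Oversampled readout (bandwidth = os x nominal), then the idealized
# brick-wall filter: keep only the central, prescribed FOV
img_big = image(n * os, fov * os)
img_filt = img_big[(n*os - n)//2 : (n*os + n)//2]
# The outside source is simply gone from img_filt: removed, not wrapped
```

Cropping the reconstructed oversampled image is the same operation as low-pass filtering the raw data stream and decimating back to the prescribed matrix size, which is closer to what the scanner hardware actually does.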
Effects in the phase encoding dimension
The columns of phase-encoded data points aren't detected in a continuous stream like the frequency-encoded data points, but are instead built up from a succession of readout data points, each row possessing a slightly different phase. (See PFUFA Part Twelve for a review.) The analog-to-digital conversion occurs for just one phase encoding value at a time, for each entire readout period, then the digital results are stacked up to produce the final 2D k-space matrix ready for Fourier transformation. Thus, the properties of the ADC, including analog filtering and in-line digital signal processing, only get an opportunity to operate on one dimension - the frequency encoding dimension - of k-space. The digital properties of the phase encoding dimension can be thought of as a "synthetic" construction of a stack of digital readouts. And this leads to different properties for the two dimensions when it comes to dealing with extraneous signal, i.e. signal that lies outside the defined FOV. (See Note 2.)
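Because the 2D Fourier transform is separable, the reconstruction really can be viewed as a per-row transform (the digitized readouts) followed by a transform along the synthetic stack direction. A toy numpy sketch, with arbitrary array sizes, makes the asymmetry concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.standard_normal((8, 8))    # toy 2D "object"
kspace = np.fft.fft2(obj)            # stand-in for the acquired raw data

# Each readout row is digitized (and could be filtered) one at a time...
rows = np.array([np.fft.ifft(kspace[r, :]) for r in range(kspace.shape[0])])

# ...but the phase-encode dimension is only ever a stack of those rows;
# its transform happens after the fact, with no ADC in the loop
img = np.fft.ifft(rows, axis=0)      # identical to np.fft.ifft2(kspace)
```

Any inline filter can act on each row as it streams out of the ADC, but there is no equivalent hardware stage sitting across the columns.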
Below is a typical example of aliasing in the phase encoding dimension. Displacement of the sample relative to one edge of the phase encoding FOV (and vice versa) leads to aliasing, or wraparound, of part of the signal to the opposite side of the image:
It's almost as if the entire image is on a conveyor-belt. What falls off one side seems to be cycled around and deposited on the other side of the image. (See Note 3.)
Why does aliasing occur in this dimension when, in the readout dimension, the signal outside of the FOV would have been removed by filtering? The algebra for the relationship between the k-space step size (in the phase encoding dimension) and the FOV was given in Note 4 of PFUFA Part Eleven. In brief, for each k-space increment there is a total of 360 degrees of phase, +/- 180 degrees about the center on-resonance (or "carrier") frequency, imparted across the sample in the direction of the applied gradient. (This is true for readout and phase encoding gradients, but let's focus on the phase encoding gradient from now on.) Put another way, at the extreme spatial positions of the FOV - the very edges of the image in the phase encoding dimension - the phase evolution is exactly +180 degrees on one side and exactly -180 degrees on the other. At all spatial positions in between, the phase falls somewhere within this range.
But what if there is some sample residing outside of these extreme positions? Magnetization there still "feels" the effect of the applied gradient, and it still gets a phase imparted to it. The problem is that if, say, the signal is left of the left-hand edge of the FOV, then the imparted phase will be greater than +180 degrees (if we assume the left edge is +180 and the right edge is -180). But phase is modulo 360, which means a phase value of +200 degrees (say) is indistinguishable from a phase of -160 degrees. And you should immediately recognize where that signal past the extreme left position will end up: at the -160 degree position, which is actually on the right-hand side of the image FOV!
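The modulo arithmetic, and the resulting wraparound, can be checked numerically. In this sketch (the source position and matrix size are arbitrary illustrative choices) a point source sits just past one edge of the phase-encode FOV; its per-step phase exceeds 180 degrees, and the reconstructed peak lands near the opposite edge:

```python
import numpy as np

fov, n = 1.0, 64
x0 = 0.5625                      # source just past the +FOV/2 edge at +0.5

# Phase imparted per phase-encode step (k-space step = 1/FOV), in degrees
step_phase = 360.0 * x0 / fov                    # 202.5 deg: beyond +180
wrapped = (step_phase + 180.0) % 360.0 - 180.0   # -157.5 deg: same phase!

# Simulate the phase-encode direction and reconstruct
m = np.arange(n) - n // 2
sig = np.exp(2j * np.pi * m * x0 / fov)  # phase accrued step by step
img = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sig)))
x_axis = (np.arange(n) - n // 2) * fov / n
peak_x = x_axis[np.argmax(np.abs(img))]  # lands at -0.4375: wrapped!
```

Note that the wrapped position, -0.4375, is exactly x0 minus one full FOV: the source reappears a whole field of view away, on the other side of the image.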
Note that it's not just the signal regions that get cycled (aliased) around the FOV; the N/2 ghosts do, too. Indeed, the relative position of the ghosts to the sample hasn't changed because the zigzag phase modulation across the phase encoding k-space (see PFUFA Part Twelve) is simply phase-shifted by a constant amount; the zigzag itself is unchanged.
Effects in both dimensions when the image FOV is too small
Finally for this post let's look at what happens when the FOV is made smaller than the signal extent in both the readout and phase encoding dimensions. Here I've simply made the FOV smaller than the phantom:
By now I hope you can immediately determine which dimension is frequency-encoded and which is phase-encoded. You can look for the aliasing in the phase encoding dimension or, more generally, just look for the N/2 ghosts. (Distortion can be harder to see, and it's impossible to see in this image!) Okay, so phase encoding is the vertical dimension in this image, and it's easy to identify the two overlapped signal regions top and bottom. Note how the aliased signal simply adds to the correctly located signal, increasing the signal intensity in the overlapped regions. You've already identified the N/2 ghosts (right?), but in case you haven't, the easiest parts to see are the two crescents (a smiley and a frowny) right through the middle of the image. The FOV is so small that the ghosts themselves overlap! (You might just be able to make out an interference pattern, established by the aliased ghosts, in a horizontal line across the center of the image.) The readout dimension is neatly cropped at the edges and no aliasing occurs.
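The two behaviors can be caricatured with a toy intensity-folding sketch. This is not a full k-space simulation, and the object and matrix sizes are arbitrary: the readout dimension is simply cropped, while the phase-encode dimension folds back modulo the FOV, with folded signal adding to whatever it lands on:

```python
import numpy as np

n = 8                                    # FOV is n x n pixels
big = np.zeros((2 * n, 2 * n))           # object support: twice the FOV
big[2:14, 2:14] = 1.0                    # square "phantom" larger than the FOV

# Readout (columns): filtered, so out-of-FOV signal is simply cropped away
ro = big[:, n // 2 : n // 2 + n]

# Phase encoding (rows): no filter, so signal folds back modulo the FOV
img = np.zeros((n, n))
for r in range(2 * n):
    img[(r - n // 2) % n, :] += ro[r, :]
# Rows at the top and bottom of img carry doubled intensity where the
# wrapped signal overlaps the true signal, as in the image above
```

The doubled-intensity rows in this toy model correspond to the brighter overlapped bands at the top and bottom of the phantom image.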
Pretty easy stuff. Don't worry if you didn't understand why the frequency encoding dimension gets cropped while the phase encoding dimension aliases. It's fine to commit these features to memory and be ready to interpret the differences whenever signals get close to the edges of your image FOV. Be especially aware if you are using small FOVs, such as 192 mm FOV for axial EPI. It's critical to know if scalp fat, say, will alias or get cropped based on where it falls relative to the image dimensions. You don't want to alias fat signals onto brain!
Next up: Gibbs ringing. No, nothing to do with hand bells or Christmas caroling.
1. What you're reading here applies equally well to other forms of MRI; these aren't just phenomena affecting EPI. However, as this is a blog dedicated to experimental fMRI I am going to use EPIs as the example images and discuss only related issues (such as the aliasing of ghosts) that pertain to EPI.
2. Siemens has a parameter called Phase Oversampling (on the Routine tab) which might look like a way to circumvent aliasing, but it's not really. It simply acquires additional k-space steps in the phase encoding dimension - by the percentage set in the parameter - and in concert increases the FOV invisibly to the user. Then, once the acquisition is complete and the data has been transformed into image space, the image is simply cropped to leave the FOV and resolution you specified in the Resolution tab. In other words, the extraneous data from the enlarged FOV is simply discarded.
Now, given that in EPI the echo train must be extended to increase the number of phase encode steps, this "trick" has a cost. It increases the echo train length, thereby increasing the minimum TE and altering (generally increasing) the blurring due to T2* during the echo train. It also decreases the number of slices permissible in the TR period. That said, it is sometimes possible to set the Phase Oversampling parameter greater than zero without having to change any other parameters, but your images will still have the increased T2* blurring.
This oversampling feature is equivalent to explicitly increasing the desired FOV in the phase encoding dimension, while (explicitly) increasing the number of points in that dimension to maintain a constant nominal pixel resolution and then, once the data is acquired, simply zooming (or cropping) the image back to the originally desired FOV. Thought about this way it becomes obvious that there's no free lunch here, and all that the Phase Oversampling feature really achieves is an indirect way to increase the FOV and phase encode points in concert, then save a tiny amount of space in the database by not storing a larger image. Overall, I don't see much utility in Phase Oversampling for circumventing aliasing in EPI.
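The equivalence is easy to check numerically. A minimal sketch, using the same arbitrary illustrative numbers as earlier (a point source just outside the nominal phase-encode FOV, and 100% oversampling for simplicity):

```python
import numpy as np

n, fov = 64, 1.0
x0 = 0.5625                       # source just outside the nominal FOV

def acquire(n_steps, fov_):
    """Point-source phase-encode acquisition over n_steps for the given FOV."""
    m = np.arange(n_steps) - n_steps // 2
    sig = np.exp(2j * np.pi * m * x0 / fov_)
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sig)))

# Nominal acquisition: the source wraps into the image
img = acquire(n, fov)

# 100% phase oversampling: double the FOV and the number of steps
# (same pixel size, twice the echo train), then crop back to n pixels
img_os = acquire(2 * n, 2 * fov)[n // 2 : n // 2 + n]
# The source now lies inside the enlarged FOV, so nothing wraps into
# the kept region; the extra half of the image is simply thrown away
```

All the oversampled acquisition buys is the larger FOV; the crop afterwards just discards it, which is exactly why the echo-train cost is unavoidable.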
3. It is possible to reconstruct an unaliased image from an aliased image, but it's not good practice. You may find that heating effects or motion in the phase encoding direction leads to "interesting" statistics for the lines of pixels that fall along the edges of the aliased image. But, in a pinch, I'd probably try it to salvage an important data set. Of course, the point of reading this blog is that you become sufficiently skilled that you don't accidentally alias your images!