The Echo Planar Imaging (EPI) pulse sequence
In Part Ten we looked at a pulse sequence and its corresponding k-space representation for a gradient-recalled echo (GRE) imaging method. That sequence used conventional, or spin warp, phase encoding to produce the second spatial dimension of the final image. A single row of the k-space matrix was acquired per RF excitation, with successive rows of (frequency-encoded) k-space being sampled after stepping down (or up) in the 2D k-space plane following each new RF pulse.
One feature of the spin warp imaging scheme should have been relatively obvious: it's slow. Frequency encoding along kx is fast, but stepping through all the ky (phase-encoded) values is some two orders of magnitude slower, resulting in imaging times from tens of seconds (low resolution) to minutes (high resolution). That's not the sort of speed we need if we are to follow blood dynamics associated with neural events.
Instead of acquiring a single row of k-space per RF excitation - a process that is always going to be limited by the recovery time to allow the spins to relax via T1 processes - we need a way to acquire multiple k-space rows per excitation, in a sort of "magnetization recycling" scheme. Ideally, we would be able to recycle the magnetization so much that we could acquire an entire stack of 2D planes (slices) in just a handful of seconds. That's what echo planar imaging (EPI) achieves.
Gradient echo EPI pulse sequence
The objective with the EPI sequence, as for the GRE (spin warp) imaging sequence we saw in Part Ten, is to completely sample the plane of 2D k-space. That objective is unchanged. All we're going to do differently is sample the k-space plane with improved temporal efficiency. Then, once we have completed the plane we can apply a 2D FT to recover the desired image. Pretty simple, eh?
As before, sampling (data readout) need only happen along the rows of the k-space matrix, i.e. along kx. So we need a way to hop between the rows quickly, spending as much time as possible reading out signals under the frequency encoding gradients, Gx, and as little time as possible getting ready to sample the next row. EPI is the original recycled pulse sequence, so I'll color the readout gradient echoes in green:
The first four (and a half) gradient echoes in a gradient echo EPI pulse sequence.
To keep things simple I've omitted slice selection and indicated a 90 degree RF excitation; this could of course be any flip angle in practice. (See Note 1.) I've also shown just the first four (and a half) gradient echoes in the echo train. The full sequence repeats as many times as there are phase-encoded rows in the k-space matrix. A typical EPI sequence for fMRI might use 64 gradient echoes, corresponding to 63 little blue triangles in the train shown in the figure above. But for the example k-space plane below, the k-space grid is 16x16 so assume for the time being that the full echo train would consist of 15 little blue triangles separating eight positive Gx gradient periods and eight negative Gx gradient periods.
Before we consider the 2D k-space representation, let's take a moment to assess a few features of the pulse sequence. First, note that the negative Gx period (in orange) at the start of the sequence is designed to balance half of the subsequent positive Gx period (in green). (See Note 2.) This is a gradient echo, exactly as we saw in Part Eight (and again in Parts Nine and Ten). Furthermore, after the first positive Gx gradient the sign alternates for subsequent Gx periods, meaning that we will end up with a "train" of gradient echoes, each one being sampled (read out) with frequency information. In the absence of any other gradients, all these gradient-recalled echoes would contain essentially the same one-dimensional spatial information. So let's look at the second dimension via the k-space plane.
Here's the k-space representation for a 16x16 matrix, with the trajectory color scheme corresponding to the gradients in the figure above:
You don't see the individual data samples under the read gradients; these are assumed. And as a general rule the dimensions of the matrix can be set somewhat arbitrarily. I say "somewhat" because as the number of data samples under each read gradient is increased, the time between successive phase encode gradients (the blue triangles) increases and so does the overall duration of the pulse sequence. As we will see below, there are practical limits to the amount of time that can be spent sampling the k-space matrix in EPI and you must be judicious in how you spend that time.
Let's add another parameter to the pulse sequence. In Part Ten when we looked at the regular (spin warp) GRE imaging sequence I didn't label the effective echo time, TE. However, I did indicate the point at which the phase imparted by the read gradient is returned to zero, and I mentioned that this point corresponds with a journey through the k-space origin. It is this journey through the k-space origin that we use as our definition of TE.
In both EPI and GRE, and for MR pulse sequences in general, we define the TE as the interval between the creation of transverse magnetization, i.e. the center of the RF excitation pulse, and the moment the k-space trajectory passes through the k-space origin. This isn't an arbitrary designation, even though there are clearly signals (indeed, the majority of signals!) being acquired at times other than TE. Signal intensity at the k-space origin should be maximal - assuming the imaging gradients dominate any artifactual gradients in the sample. And because variations in signal intensity define the image contrast, and it is image contrast that is usually of primary interest (edges and small details tend to be secondary in most imaging applications, not just fMRI), TE is a good compromise time at which to characterize the overall image features.
Here is TE indicated on a complete EPI pulse sequence comprising 32 total echoes:
Courtesy: Karla Miller, FMRIB, University of Oxford.
The TE is of vital importance for understanding EPI artifacts and for setting up BOLD contrast. But it is also worth remembering that most of the signals acquired in the echo train are not actually acquired at TE but at differing offsets from it. As we saw in Part Eleven, high spatial frequencies appear at the extremes of k-space, so we can already see that the early and late echoes in EPI - which are the echoes containing edge information in the phase encoding dimension - will have markedly different contrast properties than the signals towards the k-space center. More on the practical manifestations in the artifact recognition posts to come (but see Note 3 if you fancy trying to predict what we might observe experimentally).
Processing 2D k-space for EPI
So I may have lied. Slightly. It transpires that there's an additional step necessary before the 2D FT can work its magic and obtain an image from the 2D k-space presented above. Did you notice how the green arrows point in alternating directions in the k-space trajectory? In effect, the way we have sampled alternating rows of kx-space is akin to time going forward then backward from the spins' perspective. Phase accrual, which as we know from Part Ten is how the spatial encoding actually happens, appears to be running backwards on alternating rows. We need to reverse the direction of all of the odd (or all of the even) lines to make the phase evolution consistent:
Left: original k-space matrix acquired by the single-shot EPI sampling trajectory. Right: the matrix after reversing alternate rows of k-space.
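If you like to think in code, the row reversal is a one-liner. Here's a minimal numpy sketch (the function name and the 16x16 toy matrix are my own inventions; real reconstruction pipelines do this - and much more - internally):

```python
import numpy as np

def reverse_alternate_rows(kspace):
    """Reverse every other row of a single-shot EPI k-space matrix so that
    the kx (readout) direction is consistent across all rows. Here I assume
    the even-numbered rows were acquired left-to-right and the odd-numbered
    rows right-to-left."""
    fixed = kspace.copy()
    fixed[1::2, :] = fixed[1::2, ::-1]  # flip the odd rows left-right
    return fixed

# A toy 16x16 complex matrix, matching the example k-space grid above
k = np.arange(256, dtype=complex).reshape(16, 16)
k_fixed = reverse_alternate_rows(k)
```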
The necessity of reversing all the odd (or even) rows of k-space prior to the 2D FT would be a mere processing formality were it not for something that you may have picked up on in the previous paragraph. I said that the phase accrual under the readout gradient appears to be forwards in one kx row, then backwards in the next, then forwards again, and so on through the matrix. If the only sources of phase shift were the imaging gradients, and if these gradients were ideal, we'd be in good shape. However, any additional (or erroneous) phase shift, e.g. arising from magnetic susceptibility gradients in the sample, or from imperfections in the imaging gradients themselves, will cause a problem. (See Note 4.)
By way of example let's look at one source of spurious phase shift that we can easily represent in a diagram you've seen before. Consider the trusty one-dimensional gradient echo sequence that we first saw in Part Eight. However, in addition to the imaging gradients that we can control, let's add another background gradient (that we can assume arises from magnetic susceptibility differences across the sample) in the same direction, in this case the x axis. With the imaging gradient and background (susceptibility) gradient in the same direction we can instead consider just the effective gradient, Geff:
An illustration of the effect of a background (magnetic susceptibility) gradient on the echo refocusing time for a gradient echo sequence. The (resultant) effective gradient is shown in orange.
For simplicity I have suggested that the background gradient is linear, just like the imaging gradient, Gx. (Real background gradients can and will have complex spatial dependencies.) And, although real background gradients persist indefinitely - as long as the subject is in the magnet they are "on" - I've drawn the background x gradient only for the period when it coincides with the Gx imaging gradient pulses, because that's when it has its effect on the k-space problem we're considering. What happens before the RF pulse or after the data sampling (analog-to-digital conversion) period is of no consequence for us here.
Clearly, in the absence of the background gradient the echo would refocus at the time of the dashed line, as we saw in previous posts. But the additional positive term in Gx actually causes the echo to refocus late, at the dotted line, because that's the time at which the effective gradients, in orange, are balanced.
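We can check this balance numerically. The sketch below integrates the effective gradient over time and finds when the accumulated phase (gradient moment) returns to zero. The amplitudes and durations are arbitrary illustrative values, and the signs are chosen to match the figure, with the background gradient adding to the prephase lobe so that the echo arrives late:

```python
import numpy as np

# Arbitrary illustrative values, not taken from any real scanner
Gx  = 10.0   # imaging gradient amplitude
Gbg = 1.0    # background (susceptibility) gradient on the same axis
T   = 1.0    # prephase lobe duration; the readout lobe lasts 2*T

dt = 1e-4
t  = np.arange(0, 3 * T, dt)

# Effective gradient: prephase lobe then opposed readout lobe, with the
# constant background gradient added throughout
G_eff = np.where(t < T, Gx + Gbg, -Gx + Gbg)

# The echo forms when the accumulated gradient moment returns to zero
phase = np.cumsum(G_eff) * dt
t_echo = t[t > T][np.argmin(np.abs(phase[t > T]))]

print(f"nominal echo at t = {2 * T:.2f}, actual echo at t = {t_echo:.2f}")
```

With Gbg set to zero the echo lands exactly at t = 2T; with the positive background gradient it arrives at about t = 2.22, i.e. late, just as in the figure.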
What does this late-arriving echo mean for our k-space representation? Well, we intended the center of k-space to be at the dashed line position, whereas the bulk of the signal and the actual position of zero phase (the echo top) corresponding to the center of k-space happens later, at the dotted line. If we were doing a one-dimensional FT to get a 1D profile this isn't a big deal; we accrued a bit of spurious phase but the echo still occurred inside the period of signal detection. (We nearly always look at magnitude images so a little phase twist in the profile would be eliminated.) Likewise, if we were doing a conventional 2D spin warp phase encoding for our 2D image it's another case of having a constant phase offset in each kx row that essentially "falls out" of the 2D FT with no major consequence for the resulting image. (Again, taking the magnitude of the image removes any phase twist.) But EPI is different.
In EPI we use a train of readout gradient episodes, first positive, then negative, then positive again, etc. We are reversing the sign of the readout gradient, but of course the background gradients are unchanged! Any background gradient that is additive to positive read gradients will be subtractive from negative read gradients, causing the echoes collected each time to be either early or late relative to the expected echo tops. (I hope it's clear that if we reversed the signs of the Gx imaging gradients in the previous figure, leaving the background gradient as it was, and then reconsidered the effective Gx gradient, that the echo top would arrive earlier than the dashed line.) The exact ordering - additive or subtractive, early or late - isn't really an issue. All we need to recognize is that we have an alternating pattern for each successive gradient in the EPI readout echo train. And it is this alternating pattern that causes the problem for the resulting image.
We will consider in detail the effect of the alternation in the section on ghosting, below. For now, let's just complete the picture that we get in our 2D k-space for EPI in the presence of a small background gradient. We're going to have a zigzag offset across the ky dimension of k-space, one that persists after reversal of alternating k-space rows:
In this figure the green arrows show the effective k-space matrix overlaid on the ideal matrix. But we're not FTing the ideal matrix, we're FTing the zigzagged k-space matrix in green on the right. More on the practical consequences of doing this in the next section. (See Note 5.)
EPI artifacts
You never get something for nothing. In exchange for the speed of EPI we pay a price in terms of image quality. There are essentially three artifacts that might be considered characteristic of the pulse sequence: ghosting, distortion and dropout. Strictly speaking, the latter - dropout - isn't a characteristic of EPI per se but is instead a consequence of using a relatively long TE (compared to the brain's T2*) to acquire the image. Another gradient echo sequence, such as a conventional (spin warp) phase-encoded gradient echo sequence, would suffer from similar signal dropout were it to be acquired at the same TE.
Anyway, let's look at the three artifacts that will plague every EPI you ever acquire for fMRI and consider in brief some of the factors contributing to the level of artifacts thus produced. One point to keep in mind as you read is that the severity of each artifact will nearly always vary across the image; these aren't usually constant, global effects. The regional variation is a function of the magnetic susceptibility gradients across the sample. The larger the heterogeneity of the magnetic field the stronger the (local) artifacts and the more anisotropic will be their spatial pattern. Put another way, some parts of the brain - occipital and parietal cortices, for example - can be imaged relatively free of major artifacts whereas other parts of the brain - mid-brain structures, frontal and temporal lobes - will be plagued by them.
Ghosting
Often called N/2 ghosts or Nyquist ghosts (for reasons that will become clearer later), these artifacts are an unavoidable consequence of the back and forth sampling trajectory in k-space. Let's look at the appearance of the ghosts, in particular their spatial location, before examining their origins further.
Consider this EPI through a spherical phantom:
The first thing to note is the dimensionality of N/2 ghosts: ghosting occurs in the phase encoding dimension only, because that's the dimension that suffers from the zigzag errors in 2D k-space, as we've seen above. The frequency encoding dimension (left-right in the image above) doesn't show ghosts.
Next, note how the ghosted images appear displaced by half the field-of-view (FOV), with the bottom half of the (real) image generating a ghost image that appears at the top of the FOV and the top half of the (real) image generating a ghost image that appears at the bottom of the FOV. Single-shot EPI ghosts always appear at these locations, which is why they are often referred to as N/2 ghosts, where N indicates the total number of pixels in the phase-encoded dimension. (Clearly if the FOV is defined by N pixels then N/2 defines half the FOV.)
You should also note that portions of the ghosts overlap with the real image. As we will see when we look in-depth at real data, such overlap can have profound implications for your fMRI statistics because, as a general rule, anything that perturbs the time series data, such as subject motion, will affect the ghost intensity, thereby increasing the variance of the ghosts above that for real signal regions. So, if you have a ghost parked on your most critical anatomical region, expect to get crappy statistical power! More on tactics to reduce and relocate ghosts in future posts. (See Note 6.)
A final comment on ghost appearance: the magnitude of N/2 ghosts can vary considerably, but a rough rule of thumb is that they should be no more than 5% of the intensity of the main signal regions on a well-behaved scanner with good parameter selection and minimal subject motion. It's quite feasible to get ghost levels as low as 1% with a modicum of effort.
Okay, now that you know what they look like, let's go back to the zigzag in k-space and get a basic understanding of why the ghost images arise. In essence, we can consider the final echo planar image that emerges from the 2D FT as the sum of an ideal image with an artifactual (ghost) image, the latter arising purely from the offsets producing the zigzag across k-space. Here's a cartoon of this process:
The k-space for the ghosts (right panel) comprises erroneous "zig" offsets that sum with the odd rows of the ideal k-space (center panel), plus "zag" rows that are just zeros, i.e. no signal or noise, as indicated by white dotted lines. Together, the zigzagged erroneous k-space (right panel) adds to the ideal k-space (center panel) to yield the actual k-space (left panel).
Now, since we know from Part Eleven that an image's FOV is defined by the k-space step size, it should be intuitive that the erroneous k-space step in the right panel is effectively 2*delta-ky (where delta-ky is the actual k-space step size in the phase encoding dimension) and will therefore produce an image that has half the FOV of the ideal image. (Bigger step in k-space generates smaller FOV in image space.) In other words, the ghost signals arising from the zigzag in k-space appear at positions appropriate for an image with half the FOV of the actual FOV. The ghosted signal ends up misplaced by that amount - half the FOV - in the final image.
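Here's a little numpy demonstration of the effect. I apply a small constant phase error to alternate rows of a disk phantom's k-space - the simplest stand-in for the residual zigzag - and a ghost pops up displaced by exactly N/2 rows. The matrix size, the phantom and the 0.2 radian error are all arbitrary choices for illustration:

```python
import numpy as np

N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
phantom = (x**2 + y**2 < 0.6**2).astype(float)  # a disk "phantom"

kspace = np.fft.fft2(phantom)

# Constant phase error on every other ky row: the simplest stand-in for
# the residual zigzag left after row reversal
err = np.ones(N, dtype=complex)
err[1::2] = np.exp(1j * 0.2)
img_bad = np.abs(np.fft.ifft2(kspace * err[:, None]))

# The ghost is a copy of the phantom displaced by N/2 rows (half the FOV)
main  = img_bad[N // 2, N // 2]   # center of the true phantom
ghost = img_bad[0, N // 2]        # half a FOV away, where nothing should be
print(f"main signal: {main:.2f}, ghost signal: {ghost:.2f}")
```

With a 0.2 rad alternating error the ghost comes out at about 10% of the main signal - which is why scanners work hard to keep these phase errors tiny.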
So, what can we do about these pesky ghosts? We first spend some time minimizing the experimental processes that lead to the zigzags in k-space, then we usually use a correction step that involves a handful (three is typical) of "navigator" gradient echoes that are inserted into the pulse sequence between the RF excitation pulse and the readout echo train. These navigator echoes are designed to capture information on the zigzags that can be expected in the readout echo train, allowing a post-processing phase correction to be applied to the 2D k-space before the Fourier transform. The correction step is most likely built-in to your scanner's software, so it's not something that you have to worry about unless the default correction proves insufficient for some reason. (See Note 7.)
Distortion
Spatial distortion is probably the most insidious of the EPI artifacts. I label it as insidious because it isn't until the effects become grotesque that most fMRIers acknowledge the problem, even though it's present in all EPIs. Want a good example? Go back and look at the EPI of the sphere above, the one showing the position of the N/2 ghosts. Does that bright white thing in the center of the image look like a circle to you? That's what it's supposed to be because it's a slice through the center of a sphere. Doesn't it look more like a section through a prolate spheroid? Bear in mind that this image was obtained from a phantom that can be shimmed better than any head! (A sphere has an infinite number of symmetry axes, so it's relatively straightforward to homogenize the magnetic field with the three first order and five second order shims that you find on most modern scanners.) In prosaic terms, the level of distortion in your real brain data is going to be worse than what you've just seen in the phantom.
So where does the distortion come from and what can we do about it? It's a consequence of the "recycling" of magnetization through a gradient echo train, i.e. it is due to the time it takes to produce the entire data readout. As fast as EPI is, it's not instantaneous, meaning that background susceptibility gradients (yet again!) can make their unwanted presence felt. Let's look in more detail at the effect by first considering the situation in conventional (anatomical) imaging, where distortion can usually be safely ignored.
In conventional anatomical imaging we acquire one line of (frequency-encoded) k-space per RF excitation, stepping through successive k-space rows with a new RF excitation and a different phase encoding gradient value. Taking the frequency encoding axis first, then, the bandwidth (the spread of the spatially-encoded frequencies imposed by the gradient) of that dimension is given by the inverse of the time between data sampling points (the so-called "dwell time"). This is equivalent to our previous observation in k-space, that small steps in k-space define the image FOV; all that differs here is the conjugate pair under consideration, i.e. time and frequency rather than k-space and real space. So, the time between individual readout data points under the frequency encoding gradient defines the entire bandwidth of that dimension (and also the FOV, except that we're not thinking in terms of k-space at the moment).
What's a typical dwell time, hence bandwidth, for a readout dimension? On a modern scanner with typical gradients we might acquire as fast as one point every 5-10 microseconds. Thus, a typical readout dimension bandwidth is the reciprocal of this range, i.e. 100-200 kHz. Now consider the effects of background magnetic field heterogeneities on this dimension. If I tell you that a typical "bad" region in the brain produces errors in the main magnetic field of around 1-3 parts per million (ppm), this corresponds to errors of one to three millionths of the operating frequency of the scanner, or about 100-300 Hz at 3 T. Compared to the bandwidth of the readout axis, that's pretty small. Let's suppose we acquired 128 data points under the readout gradient. That means each pixel in the readout dimension is 1/128th of the bandwidth, or between 800 and 1,600 Hz per pixel. At most, the magnetic susceptibility gradient causes a spatial error of about a third of a pixel. Pretty tiny.
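The arithmetic is worth doing explicitly. Taking the slower (worst-case) end of the dwell times quoted above:

```python
# Readout (frequency-encoded) axis: numbers from the text
dwell = 10e-6                       # 10 microseconds per sample point
bw_total = 1 / dwell                # 100 kHz total readout bandwidth
n_read = 128                        # data points under the read gradient
bw_per_pixel = bw_total / n_read    # ~780 Hz per pixel

field_error_hz = 300                # worst case, ~3 ppm at 3 T
shift_pixels = field_error_hz / bw_per_pixel
print(f"{bw_per_pixel:.0f} Hz/pixel -> shift of {shift_pixels:.2f} pixels")
```

About a third of a pixel, as claimed.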
Now consider the conventional (spin warp) phase encoded axis in a non-EPI scan, which we saw in Part Ten. Here the situation with respect to distortion is even better! Although the bandwidth in the phase encoding dimension can be defined as the reciprocal of the time underneath the phase encoding gradient episode (given by tp in Part Ten), there's a subtle difference in the way the magnetic susceptibility gradients manifest in the image. Note that if the magnet heterogeneities remain fixed throughout the acquisition of the image - as they should - then each row of k-space will possess precisely the same erroneous phase shift. This constant phase error effectively passes through the Fourier transform, adding a net phase twist to the image that is neatly removed by the expedient of looking at a magnitude image. In effect, we can consider the bandwidth of the conventional phase encoding axis as being infinite, rendering negligible the 100-300 Hz misplacement we saw above. The conventional (spin warp) phase encoding dimension is distortion-free.
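You can convince yourself of this with a couple of lines of numpy: give every k-space row the same phase error and the magnitude image is untouched. The random "image" and the 0.5 rad offset are arbitrary choices for illustration:

```python
import numpy as np

N = 32
img = np.random.default_rng(0).random((N, N))   # an arbitrary test "image"

# The same spurious phase on every k-space row, as in spin warp imaging,
# amounts to a global phase factor on the whole matrix
k_err = np.fft.fft2(img) * np.exp(1j * 0.5)
img_mag = np.abs(np.fft.ifft2(k_err))

print(f"max deviation from original: {np.max(np.abs(img_mag - img)):.1e}")
```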
And then there's poor old EPI. We have the opposite situation from conventional phase encoding. Instead of acquiring one phase encode step per RF excitation, we're going to acquire every single phase encoding blip in succession following a single RF excitation pulse. The time between each phase encode blip, which is the echo spacing as shown in the first figure of this post, defines the bandwidth of the phase encoded axis. A typical echo spacing might be 0.5 ms, yielding a bandwidth of 2,000 Hz spread across, say, 64 pixels. That corresponds to around 30 Hz per pixel. If we have a 100-300 Hz error arising from magnetic susceptibility gradients then we can have a spatial distortion in this dimension of 3-10 pixels. For a typical pixel of 3-4 mm that is quite a lot of displacement!
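And the corresponding arithmetic for the phase-encoded axis, using the same field errors as before:

```python
# Phase-encoded axis in single-shot EPI: numbers from the text
echo_spacing = 0.5e-3               # 0.5 ms between phase encode blips
bw_pe = 1 / echo_spacing            # 2,000 Hz across the whole axis
n_pe = 64                           # phase-encoded pixels
bw_per_pixel = bw_pe / n_pe         # ~31 Hz per pixel

for field_error_hz in (100, 300):   # typical susceptibility-induced offsets
    shift = field_error_hz / bw_per_pixel
    print(f"{field_error_hz} Hz error -> {shift:.1f} pixel displacement")
```

Two orders of magnitude worse than the readout axis, purely because the effective bandwidth per pixel is so much lower.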
Okay, let's recap. In EPI we have a frequency encoding dimension that suffers only modestly from distortion - it will generally be at the sub-pixel level - and we have a phase encoding dimension that suffers extensively from distortion, to a level of several pixels. I'd like to emphasize that the distortion of the phase encoding axis isn't global, rather it is a regional effect that is determined by the local magnetic susceptibility gradients. And, because these background gradients aren't usually linear (indeed, they are nearly always very complex with high spatial order) we don't have a neat linear distortion that is easily remedied. The following example should immediately convince you that this is the case:
The effect of a long echo spacing (left) and halving the echo spacing (right). Halving the echo spacing also halves the distortion in the phase encoding dimension (which is anterior-posterior here).
The weirdly distorted frontal lobe regions are stunningly obvious in the image on the left, especially. While we can't determine by inspection precisely what spatial variations gave rise to the spikes and troughs in the brain signal, it's readily apparent they weren't linear. What's more, the distortion can be a compression or a stretch, depending on the relative sign of the susceptibility gradients in the sample and the phase encoding gradient polarity. In the above image the frontal lobe regions are being mostly, but not exclusively, stretched.
The non-linearity and the combination of stretches and compressions make fixing distortion a tricky proposition. There are methods, typically involving some sort of magnetic field mapping (to determine the susceptibility gradients that gave rise to the distortion), that may, under certain circumstances, be able to provide a rudimentary correction. I'm not going to get into this issue here, except to note two important points. First, it is generally easier to relocate (distortion-correct) pixels that have been stretched from their correct location than it is to relocate pixels that have been compressed. This is because compression causes displaced pixels to coalesce, massively complicating the mathematics of correction. Second, all distortion correction schemes produce only approximations to the anatomically-correct pixel locations. There are numerous assumptions in the methods that make the approximations more or less valid. There is as yet no robust, foolproof way to accurately correct distortions, which is presumably why the use of distortion correction isn't as widespread in the fMRI literature as you might expect. Instead, many fMRIers try as best they can to minimize the distortion at the acquisition stage, then let non-linear warping algorithms (spatial normalization) tackle the distortion at the same time as one is trying to morph the subject's brain anatomy into a standard space (such as Talairach or MNI coordinates). The whole process is complex and fraught, so I'll deal with distortion correction in a future post.
A final note on distortion. It may be easier to minimize distortion than to even try and fix it. As the above example images show, by halving the echo spacing duration the level of distortion is also halved. Of course, there are hardware limits to how short we can make the echo spacing, so other alternatives might use parallel imaging methods (e.g. GRAPPA) or segmented (multi-shot) EPI, to reduce the distortion level. But, as you have come to expect (I hope!), these new methods bring with them their own complexities and artifacts, so it's not like we can simply avoid the distortion by turning on another scan option. Some level of distortion is the cost of doing business with EPI. Sorry.
Signal dropout
As already mentioned, strictly speaking this artifact isn't a pure characteristic of EPI because similar regions of dropout will occur for any gradient echo imaging sequence acquired at the same TE. But, in fMRI we have a requirement to generate BOLD contrast by setting the TE within a certain range of values; the optimum BOLD contrast occurs when the TE matches the local T2* of the tissue of interest. Thus, we don't typically minimize the TE in order to minimize signal dropout, we instead set our TE to provide the BOLD contrast we want and we have to accept the concomitant signal dropout. (See Note 8.)
Let's first consider the origins of the effect, then we can consider tactics to reduce it. As with the other artifact sources, dropout arises because of spurious dephasing caused by magnetic susceptibility gradients. As before, the frontal and temporal lobes are particularly at risk because of the nearby presence of sinuses and air-filled bony cavities (ears). Our EPI sequence is T2*-weighted (by design), meaning that the signal contrast across an image will be due primarily to T2* variations across the brain, with T1 and spin density differences contributing secondary contrast. If we set the TE to be, say, 40 ms in order to match the approximate gray matter T2* in occipital cortex, in the frontal cortex where the T2* might be as low as 20 ms there is going to be pronounced signal attenuation (not to mention sub-optimal BOLD contrast).
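The attenuation follows the usual exponential decay, S(TE) = S0 exp(-TE/T2*). Plugging in the numbers from the paragraph above:

```python
import math

TE = 40e-3   # echo time matched to occipital grey matter T2*
for region, t2star in [("occipital (T2* ~ 40 ms)", 40e-3),
                       ("frontal (T2* ~ 20 ms)", 20e-3)]:
    s = math.exp(-TE / t2star)   # fraction of the TE = 0 signal remaining
    print(f"{region}: signal is {s:.2f} of its TE = 0 value")
```

So the frontal signal is already down to about 14% before we even start, versus 37% in occipital cortex - nearly a factor of three penalty.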
Here are echo planar images for two different TEs acquired at 3 T. Notice how it's the frontal and temporal lobes, as well as the inferior surface, that pay the biggest penalty in terms of signal loss (dropout) at the longer TE, although if you look very closely you can also see that signal is reduced for the edges of the brain in all slices compared to signal at the shorter TE:
What can be done to ameliorate the problem? One tactic is to compromise between the optimum TE for BOLD in regions of the brain that are well shimmed (i.e. that have low magnetic susceptibility gradients), such as occipital and parietal cortices, and a shorter TE that will retain frontal and temporal signals. If you're only interested in occipital cortex at 3 T you might use a TE of 40-50 ms. But if you're interested in other regions, especially inferior frontal and lateral temporal regions, you might opt for a TE of 20-30 ms; the loss of functional contrast in occipital/parietal cortex should be small.
Another simple tactic is to recognize that the majority of dropout is due to phase variation through the imaging plane, so lots of thinner slices will generally provide less dropout than fewer thicker slices. This is illustrated in the following cartoon, where for simplicity I will pretend that the entire slice has a uniform T2*:
In the thick slice (top) the phase variations across the upper and lower portions of the slice tend to cancel, resulting in a small net signal vector, in blue. When the thick slice is divided into two thinner slices (bottom) the partial cancellation of spin phases is reduced, leading to larger net signal vectors for the two thin slices combined. (I've exaggerated the lower blue vectors for emphasis.) Note also that because we nearly always deal with magnitude images - no phase information - the direction of the net vectors is irrelevant; only their magnitude determines image signal level.
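A quick numerical version of the cartoon: put a linear phase ramp through the slice and compare the net (vector-summed) signal from one thick slice against two thin ones covering the same volume. The ramp of one full phase cycle across the thick slice is an arbitrary - and deliberately extreme - choice:

```python
import numpy as np

def slice_signal(z_lo, z_hi, ramp, n=1000):
    """Magnitude of the vector sum across a slice with phase = ramp * z."""
    z = np.linspace(z_lo, z_hi, n)
    return np.abs(np.mean(np.exp(1j * ramp * z))) * (z_hi - z_lo)

ramp = 2 * np.pi  # one full cycle of phase across the thick slice (assumed)

thick = slice_signal(0.0, 1.0, ramp)                      # one thick slice
thin  = slice_signal(0.0, 0.5, ramp) + slice_signal(0.5, 1.0, ramp)

print(f"thick slice signal: {thick:.3f}, two thin slices: {thin:.3f}")
```

The thick slice dephases almost completely, while the two thin slices together retain roughly 64% of the maximum possible signal - at the cost of halving the volume (hence SNR) of each individual slice.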
Of course, acquiring arbitrarily thin imaging slices reaches a practical limit very quickly, not least because coverage in the slice dimension is decreased. Remember that if it takes (a typical) 60 ms to acquire each EPI slice it will take about the same time whether that slice is 2 mm thick or 4 mm thick! If you need whole brain coverage and you don't want to violate the Nyquist condition for the hemodynamic response, you only have a TR < 3000 ms to utilize. But even if you only need a handful of slices you may run into another limit. Remember that the overall image SNR scales with the volume of the signal being sampled. While there may be some regional benefits (reduced dropout) from acquiring two 2 mm slices instead of one 4 mm slice, you should note that the individual image SNR will be reduced by 50% on purely volumetric grounds. The regional benefit of lower dropout might come at the expense of worse global statistical power. It all depends on your application.
Final thoughts
Variability of artifacts
There is a fair degree of anisotropy in the way EPI artifacts behave, their relative severity, and so on. As with nearly all aspects of fMRI acquisitions, it means that you have (limited) choices that may be able to produce better data for your application. These include the slice direction, the quality of the shim, the slice thickness, the sign of the imaging gradients, TE, echo spacing and so on. What you actually observe in any situation will result from the interplay of all of these factors, implying that no single parameter should ever be modified in isolation; you need to consider concomitant effects. We will look at some of these issues as we go through real data and real artifacts.
Alternative EPI pulse sequences
There are several EPI variants in common use that don't acquire an entire 2D k-space plane in a single shot. These include partial Fourier EPI, segmented (multi-shot) EPI and accelerated EPI using parallel imaging methods such as GRAPPA. The methods can be differentiated based on the way they acquire the phase encoding dimension of k-space, and you should find that the background physics you've learned in this series of posts will enable you to understand each one. That's why I'm stopping at this point; you have the background knowledge that you need to comprehend all the EPI variants as and when we encounter them in practical situations.
________________________________
Notes:
1. Some recent work at NIH suggests that we should be using lower flip angles for fMRI than many of us are, and certainly lower than the Ernst angle (which produces maximum SNR per unit time, i.e. for the TR being used). This is because in fMRI we are usually operating in a regime limited by physiologic noise in our time series acquisitions, so we generally don't have to worry nearly as much about thermal noise (unless the scanner has developed a problem). BOLD contrast tends to be invariant to the particular flip angle being used, whereas it is highly dependent on TE. Physiologic noise, on the other hand, tends to scale with raw image SNR, i.e. with flip angle, and it also has a TE dependence. Overall, the NIH group found that one could operate in an optimum regime by reducing the physiologic noise in the time series, even though the raw image SNR was low. This all makes sense to me. I haven't quite got around to recommending that everyone use single digit flip angles, as tested in the paper, but I have started suggesting that people scale the RF down to 30-50 degrees while I investigate further.
"Physiological noise effects on the flip angle selection in BOLD fMRI."
J Gonzalez-Castillo, V Roopchansingh, PA Bandettini and J Bodurka.
NeuroImage 54(4):2764-78 (2011).
On a related note, I was talking to a member of Peter Bandettini's group at a conference earlier this year, and I hear that this reduced flip angle suggestion holds for resting state fMRI as well as task-based experiments as tested in the above reference. This also makes sense to me.
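For reference, the Ernst angle mentioned above is easily computed from cos(θ) = exp(−TR/T1). A minimal sketch (the TR and T1 values below are illustrative assumptions, not recommendations):

```python
import math

def ernst_angle_deg(tr_ms, t1_ms):
    """Ernst angle (degrees): the flip angle giving maximum signal per
    unit time for a given TR and T1, from cos(theta) = exp(-TR/T1)."""
    return math.degrees(math.acos(math.exp(-tr_ms / t1_ms)))

# Illustrative values: TR = 2000 ms, gray matter T1 ~ 1300 ms at 3 T
print(round(ernst_angle_deg(2000, 1300), 1))  # a bit under 80 degrees
```

Note that 30-50 degrees is well below this Ernst angle, which is the point of the suggestion above.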
2. While there are practical consequences arising from the sign of the so-called "dephasing" gradients (colored orange), for the purposes of understanding the EPI sequence and its k-space trajectory the signs of the orange and green gradients don't matter at all, provided they are balanced in the manner I've drawn them. I just happened to start with a negative dephasing gradient for Gx and a positive dephasing gradient in Gy, but I could just as easily have reversed one or both of them. In later posts we will look at the consequences of the read and phase encode gradient signs because our spatial encoding gradients tend to interfere with the intrinsic and anisotropic (and unwanted) gradients arising in your subject's head. So, while there are often practical consequences, at this stage of the game it's not important whether we start with negative or positive gradient lobes.
3. Remember convolution from Part Six? Recall how an exponential decay function applied to a time domain signal causes a broadening of the frequency domain representation? Well, T2* decay during the echo train is going to cause a certain amount of smoothing to the image. If you simply must know more about this subject now, take a look at pages 262-5 of Introduction to Functional Magnetic Resonance Imaging (2nd Edition) by Rick Buxton. I've not yet managed to find a good (free) online description of T2* broadening I'm afraid, but I'll keep searching.
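If you want to see the broadening for yourself, here is a quick numerical sketch (my own toy demo, not from Buxton's book): the magnitude spectrum of a one-sided exponential decay has a full width at half maximum proportional to 1/T2*, so halving T2* doubles the width of the smoothing kernel.

```python
import numpy as np

def linewidth_hz(t2star_s, n=65536, dt=1e-4):
    """Magnitude-mode FWHM (Hz) of the spectrum of a one-sided
    exponential decay, measured numerically from an FFT."""
    t = np.arange(n) * dt
    fid = np.exp(-t / t2star_s)              # on-resonance decay envelope
    spec = np.abs(np.fft.rfft(fid))          # line centered at 0 Hz
    freqs = np.fft.rfftfreq(n, dt)
    above = freqs[spec >= spec.max() / 2]    # frequencies above half maximum
    return 2 * above.max()                   # line is symmetric about 0 Hz

# Halving T2* doubles the linewidth: roughly 11 Hz vs 22 Hz here
print(linewidth_hz(0.05), linewidth_hz(0.025))
```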
5. These zigzags in k-space may be localized or global, depending on the origin of the particular offset. In other words, for many parts of the brain the acquired k-space trajectory may match quite well the ideal (target) trajectory, whereas in other parts (e.g. frontal lobes, temporal lobes) there may be appreciable mismatch. Remember that there is no single k-space trajectory for the entire brain, only a global target (ideal) trajectory. Localized variations in k-space (i.e. zigzags) may be due to magnetic field heterogeneity (i.e. imperfect "shim"), but global variations may result from a mismatch between the ADC sampling periods and the readout gradient waveforms (e.g. because of the time it takes to ramp up the readout gradients). The important point to remember, as you read about the three typical artifacts in EPI, is that there will likely be considerable spatial heterogeneity in the severity of artifacts across the brain. Not all regions of the brain are created equally when it comes to EPI!
6. The obvious tactic of simply increasing the FOV until the ghosts fall into noise regions, thus not overlapping valuable signals, can be employed but it tends to severely penalize spatial resolution in the phase encoding dimension. In most fMRI studies, therefore, the N/2 ghosts are allowed to overlap signal regions and we do our best to minimize ghost intensity.
You might also be interested to learn that there is one version of EPI that is entirely ghost-free. It's usually referred to as "fly-back" EPI on account of using all positive (or all negative) readout gradient periods:
With fly-back EPI the k-space rows are all read with the same polarity, in the same direction, eliminating the need to reverse alternate rows and also eliminating the zigzag phase errors across the phase encoding dimension. Thus, no ghosts.
Wondering why we don't all default to using fly-back EPI for fMRI, negating the ghost problem entirely? It's because the demands on the gradients are far higher for fly-back EPI than for regular EPI. You can only slew (switch) the gradients so fast before you start causing peripheral nerve stimulation in your subject. Furthermore, the time between readout periods is longer for fly-back EPI than for regular EPI, making the distortion problem worse in exchange for eliminating the ghosts. Doh!
8. There are customized EPI pulse sequences that can reduce dropout, such as so-called "z-shimming" methods, but these usually come at the expense of lengthening the pulse sequence (through the incorporation of the compensation scheme) and thereby decreasing the number of slices that can be acquired for a given TR. Few of these custom sequences are offered on commercial scanners anyway. I may deal with z-shimming methods in a future post, but because they're not in widespread use I won't make it a high priority.
Great post. Did you write a blog about how to distortion correct using field maps in the end?
Not yet, I'm afraid. But it is on a long list that will get done eventually! If you have specific questions I'll be glad to try to offer some advice in the meantime.
Thanks for another great post. I'm wondering if you have any recommendations for choosing a phase encoding direction. It sounds like correcting stretches is preferable to correcting compressions and it looks like A-P results in compression of the frontal lobes (although you say it stretches them -- am I seeing it wrong?). Is it then better to use P-A if a study is looking at activation in frontal lobes? Thanks!
Hi Michael, you're correct, stretches are preferable to compressions for remediation with a field map because coalesced signals all parked into a single distorted voxel can't be properly redistributed to their correct locations either in principle or in practice. At least with a distortion one can, in principle, place the signals back to their correct locations. The problem is that there are always both. For axial slices, A-P phase encoding will cause a stretch in the occipital and a compression in the frontal lobe. For P-A it's the reverse. So, which one do you pick? Depends on your primary areas of interest. If you're a vision scientist you probably want stretched occipital, making A-P the natural choice. If you're interested in executive control then perhaps the frontal lobe is primary and P-A is a better choice, as you suggest.
As always, however, before committing it is best to do a quick pilot experiment in case your particular setup might benefit from concomitant tweaks. For example, the slice tilt you've selected to be optimal for A-P may not match that for P-A, all other things held constant. Why? Because in addition to the distortion there is also the issue of in-plane dephasing leading to dropout. You may lose different signals with one p.e. direction versus the other. You also want to check the location of N/2 ghosts and whether they park themselves unfavorably with one p.e. direction or the other. By assessing the impact of p.e. direction collectively, via an overall assessment of fMRI signals, you take into account all the parameters simultaneously. Let the pilot data determine the answer. It will likely be vague, with one option slightly out-performing the other if you have many regions of interest spread across the brain.
"At least with a distortion one can, in principle,..." Oops. I meant to say stretch, not distortion here!
Thanks so much!
Nice post! I still have two doubts:
1. Why is the phase encode bandwidth in conventional MR "infinite"?
2. How can parallel acquisition help with the distortion issue?
Thanks
1. With conventional (spin warp) phase encoding only one phase value is acquired per RF excitation. Thus, no phase errors accrue between phase encode steps because the starting phase is reset with each successive excitation. For this reason we eliminate distortions, and we also eliminate chemical shift artifacts. These two artifacts still apply to the frequency encoding dimension, although the amount of distortion in the frequency encoding dimension is a small fraction of what we see in the phase-encoded dimension of EPI.
2. In-plane parallel imaging methods like GRAPPA and SENSE reduce the total echo train length acquired for EPI. If the acceleration factor R is two, say, then the k-space step size is doubled and the echo train length is halved. Doubling the k-space step halves the final image field-of-view, necessitating an unaliasing step in the image reconstruction, while halving the echo train length means that accrued phase errors are half as big as they would have been. This leads to a reduction of distortion by a factor of two.
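To put numbers on that (a toy sketch; the 64-line matrix and 0.5 ms echo spacing are assumptions, not from any particular protocol):

```python
# Toy sketch (assumed numbers): in-plane acceleration shortens the EPI
# echo train while leaving the echo spacing itself unchanged.

def grappa_readout(n_pe, echo_spacing_ms, r):
    """Echo train length and total phase-encode readout duration for
    acceleration factor r: the k-step size scales by r, the number of
    acquired lines by 1/r, and the echo spacing stays the same."""
    etl = n_pe // r
    duration_ms = etl * echo_spacing_ms
    return etl, duration_ms

full = grappa_readout(64, 0.5, 1)  # 64 echoes, 32 ms of phase encoding
half = grappa_readout(64, 0.5, 2)  # 32 echoes, 16 ms: phase errors halved
print(full, half)
```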
Thanks. I've got my first doubt sorted now, but the second remains unclear to me.
I agree that the k-space step size is doubled when you use R=2, but why is the echo spacing halved?
When you use R=2, you acquire the odd lines of k-space and skip the even ones (or vice versa), correct? And you say that the phase encode bandwidth is calculated as the inverse of the echo spacing, correct? Why does the echo spacing change? As I see it, to acquire only the odd k-space lines you just have to double the blip amplitudes. Where is my thinking wrong?
Thanks in advance
Ok, I see where you're confused. First, recall that k-space and image space features are reciprocals, such that small in k-space is big in image space, and vice versa. Thus, the k-space step size determines the image field-of-view (FOV) while the maximum k values determine the image resolution (or pixel size if you prefer). Before enabling GRAPPA we have a step size delta-k that determines the FOV and a maximum phase encode k value of k(pe)max, giving a pixel with phase encode dimension length x. Now we enable GRAPPA for R=2. If we only halve the number of k steps without changing the step size then we only end up going to k(pe)max/2, causing an image with the same FOV but half the resolution of the original. But if we double the k step size as well as halve the number of k steps then the FOV shrinks to FOV/2 and thus the resolution is returned to that of the original. Now we have the same in-plane resolution but an image that is folded in the phase encode axis because the FOV is halved.
Note that it is the echo train length that is halved, not the echo spacing. The echo spacing remains the same.
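A couple of lines of arithmetic confirm those reciprocal relations (illustrative numbers: the 192 mm FOV target and 64 phase encode lines are my assumptions for the sketch):

```python
# Reciprocal relations, in illustrative units: FOV = 1/dk along the
# phase encode axis, and pixel size = FOV / number of acquired lines.

def image_geometry(dk_per_mm, n_lines):
    """Phase-encode FOV (mm) and pixel size (mm) from the k-space step
    size and the number of acquired k-space lines."""
    fov_mm = 1.0 / dk_per_mm
    pixel_mm = fov_mm / n_lines
    return fov_mm, pixel_mm

full       = image_geometry(1 / 192, 64)  # 192 mm FOV, 3 mm pixels
half_lines = image_geometry(1 / 192, 32)  # same FOV, 6 mm pixels (blurry)
grappa_r2  = image_geometry(2 / 192, 32)  # 96 mm FOV (aliased), 3 mm pixels
print(full, half_lines, grappa_r2)
```

The third case is the GRAPPA R=2 situation described above: resolution preserved, FOV halved, hence the unaliasing step in reconstruction.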
Thanks for the reply. I already understood the reciprocal relationship between k-space and image space that you explain, but the 'distortion reduction using parallel imaging' issue remains unclear to me. In your reply you confirm that the echo spacing remains the same, correct? And in this post (6th paragraph of the "Distortion" section) you say: "The time between each phase encode blip, which is the echo spacing as shown in the first figure of this post, defines the bandwidth of the phase encoded axis".
Ok, if the echo spacing remains the same in a parallel imaging acquisition, as you said in your reply, then the bandwidth remains the same, correct? So why can parallel acquisitions reduce distortion artifacts?
Thanks again, and sorry to bother you with my doubts, but I really want to understand this issue.
Hi Bruno, in the original post I didn't want to get too deeply into parallel imaging. I also wanted a simple way to explain why the phase-encoded axis of EPI is different to the frequency-encoded axis of EPI, and also different again to the (conventional) phase-encoded axis of anatomical scans. Also, on a Siemens scanner at least, the only variable reported in the acquisition parameters that relates to the expected level of distortion is the echo spacing, so I focused everything on that. For single shot EPI it is a trivial matter to compare the expected level of distortion for two different echo spacing times, all other parameters being held constant. But shifting to a discussion of parallel imaging necessitates considering the total echo train length as the actual parameter defining the level of distortion. The echo spacing itself, which was a handy proxy for single shot EPI, now doesn't strictly apply.
In reality what causes the distortion is the total echo train length (ETL) in an EPI acquisition, for a fixed in-plane spatial resolution. It gets complicated because doing interleaved multi-shot (say 2-shot) EPI reduces the ETL by the number of shots, thereby reducing the distortion by the same factor. Parallel imaging also reduces the ETL by the acceleration factor and again reduces distortion by the acceleration factor. However, partial Fourier single shot EPI, which simply omits a fraction of the echoes in the train but leaves the k-space step size and other parameters unchanged, doesn't reduce distortion even though the ETL may have been reduced by, say, 25% (for 6/8ths partial Fourier). A full understanding of the expected distortion level thus requires that we know the echo spacing, the k-space step size, the in-plane resolution and the ETL. In some situations, as for single shot EPI, it's acceptable to default to a simple proxy such as the echo spacing as a way to understand why the distortion level might change as the acquisition is changed. But these simplifications only work under certain circumstances.
I never wrote a post dedicated to parallel imaging so I'm afraid I don't have a more detailed explanation to hand. If you don't have any good text books then you might get some useful information out of Stuart Clare's excellent PhD thesis, which he has made available on the web: http://users.fmrib.ox.ac.uk/~stuart/thesis/ His focus was on multi-shot EPI but the principles are quite similar to parallel imaging as far as distortion reduction is concerned.
Hi, to my understanding, halving the ETL with GRAPPA R=2 doesn't double the total bandwidth in the phase-encode direction, but it does double it at the pixel level, since the per-pixel bandwidth is BW_total/n_encoding. To come back to the example in this article, you would keep an echo spacing of 0.5 ms (unchanged with GRAPPA) and a corresponding total bandwidth of 2000 Hz, but acquire only 32 k-space lines instead of 64, leading to about 60 Hz/pixel instead of 30. Thus a background field offset of 300 Hz would produce only half the pixel displacement with GRAPPA R=2.
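Here is that arithmetic written out (same illustrative numbers as above, with exact rather than rounded per-pixel bandwidths):

```python
# Illustrative numbers: 0.5 ms echo spacing, 64 vs 32 acquired
# phase-encode lines, and a 300 Hz static off-resonance field.

def pixel_shift(offset_hz, echo_spacing_s, n_pe_lines):
    """Displacement (pixels) along the phase-encode axis caused by a
    static off-resonance offset. Per-pixel bandwidth is the total
    phase-encode bandwidth (1/echo spacing) divided by the line count."""
    bw_per_pixel = (1.0 / echo_spacing_s) / n_pe_lines
    return offset_hz / bw_per_pixel

print(pixel_shift(300, 0.5e-3, 64))  # 9.6 pixels unaccelerated
print(pixel_shift(300, 0.5e-3, 32))  # 4.8 pixels with GRAPPA R=2
```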
Thanks for the very useful post. I have a basic question about the notion of bandwidth for the phase encode direction. You define bandwidth as "the spread of the spatially-encoded frequencies imposed by the gradient." For a spin warp sequence with a single frequency-encoded dimension, this seems straightforward to me. However in EPI, the phase encode direction is (obviously) not frequency encoded, and spatial locations along this direction do not differ in frequency during readout. So does it still make sense to interpret bandwidth in the phase encode direction as relating to spreading of frequencies? And if not, is there a physical interpretation of bandwidth in this direction?
Hi Ben, remember that frequency is the rate of change of phase, and so we can define a bandwidth in the phase encoding dimension of EPI just as for the read (frequency encoding) dimension. Think about the dwell time, the time between samples. In the frequency encoding dimension this is the time between individually recorded data points, and the bandwidth in that dimension is the reciprocal of that time. Analogously, you can think of the echo spacing delay as the sampling interval in single shot EPI, and the bandwidth is the reciprocal of that delay. There are practical issues that make frequency and phase encoding different, but they are more similar than the usual nomenclature suggests. Hope that helps!
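To put the analogy in numbers (a sketch with assumed, though fairly typical, timings):

```python
# Illustrative timings (assumed): the bandwidth along each image axis is
# the reciprocal of the effective sampling interval along that axis.

def axis_bandwidth_hz(sampling_interval_s):
    """Bandwidth = 1 / sampling interval. For the read axis the interval
    is the ADC dwell time; for the phase axis of single-shot EPI it is
    the echo spacing."""
    return 1.0 / sampling_interval_s

read_bw  = axis_bandwidth_hz(5e-6)    # 5 us dwell -> 200,000 Hz total
phase_bw = axis_bandwidth_hz(0.5e-3)  # 0.5 ms echo spacing -> 2,000 Hz total
print(read_bw / phase_bw)             # the read axis is ~100x higher BW
```

That hundred-fold difference is why off-resonance effects (distortion, chemical shift) are so much worse along the phase encode axis of EPI.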