Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Monday, August 15, 2011

Physics for understanding fMRI artifacts: Part Eleven

Resolution and the field-of-view as seen in k-space

Understanding how distances in k-space manifest as distances in image space is quite straightforward. All you really need to remember is that the relationships are reciprocal. The discrete steps in k-space define the image field-of-view (FOV), whereas the maximum extents of k-space define the image resolution. In other words, small in k-space determines big in image space, and vice versa. In this post we will look first at the implications of the reciprocal relationship as it affects image appearance. Then we'll look at the simple mathematical relationships between lengths in k-space and their reciprocal lengths in image space.


Spatial frequencies in k-space: what lives where?

I mentioned in the previous post that there's no direct correspondence between any single point in k-space and any single point in real space. Instead, in k-space the spatial properties of the object are "turned inside out and sorted according to type" (kinda) in a symmetric and predictable fashion that leads to some intuitive relationships between particular regions of k-space and certain features of the image.

Here is what happens if you have just the inner (left column) or just the outer (right column) portions of k-space, compared to the full k-space matrix arising from 2D FT of a digital photograph (central column):

An illustration of the effect of nulling different regions of k-space from a full k-space matrix, applied to a digital picture of a Hawker Hurricane aircraft. The full k-space matrix and corresponding image are shown in the central column.

Inner k-space only:

The inner portion of k-space (top-left) possesses most of the signal but little detail, leading to a bright but blurry image (bottom-left). (See Note 1.) Most features remain readily apparent in the blurry image, however, because most contrast is preserved; image contrast is due primarily to signal intensity differences, not edges. If this weren't true we would always go for the highest signal-to-noise MRIs we could get, when in practice what we want is the highest contrast-to-noise images we can get! Imagine an MRI that had a million-to-one SNR but no contrast. How would you tell where the gray matter ends and the white matter begins? Without contrast no amount of signal or spatial resolution would help. So much for SNR alone!

Outer k-space only:

If we instead remove the central portion of k-space (top-right) then we remove most of the signal and the signal-based contrast to leave only the fine detail of the image (bottom-right). Strangely, though, it's still possible for us to make out the main image features because our brains are able to interpret entire objects from just edges. In actuality, however, there is very little contrast between the dark fuselage of the Hurricane, the dark shadow underneath it and the dark sky. Our brain infers contrast because we know what we should be seeing! If we were to try doing fMRI, say, on a series of edges-only images we would run into difficulties because we process the time series pixelwise. With a relatively low and homogeneous signal level you can bet good money the statistics would be grim.

Whole k-space:

The central portion of k-space is important because it provides the bulk of the image signal as well as the signal-based contrast, while the outer portions of k-space provide image detail, in particular establishing the boundaries in image contrast. Having only one or the other might not prevent us, by inspection, from being able to recognize an object in an image, but it may not suffice for pixelwise processing. The objective in fMRI isn't simply to be able to recognize an image as that of a brain!
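
If you want to reproduce this sort of demonstration yourself, here's a minimal Python/numpy sketch. It isn't the code used for the figure above, just an illustration; the synthetic test object and the 32x32 central block are arbitrary choices (substitute any 2D grayscale array, e.g. a photograph, for a nicer demo).

    import numpy as np

    # Synthetic test object: a bright square on a dark background
    image = np.zeros((256, 256))
    image[96:160, 96:160] = 1.0

    # 2D FT of the image, with the k-space origin shifted to the array centre
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Mask selecting the central 32x32 block of k-space
    mask = np.zeros(kspace.shape, dtype=bool)
    cy, cx = kspace.shape[0] // 2, kspace.shape[1] // 2
    mask[cy - 16:cy + 16, cx - 16:cx + 16] = True

    # Inner k-space only: most of the signal, little detail -> bright but blurry
    inner_only = np.abs(np.fft.ifft2(np.fft.ifftshift(np.where(mask, kspace, 0))))

    # Outer k-space only: little signal, but the edges are preserved
    outer_only = np.abs(np.fft.ifft2(np.fft.ifftshift(np.where(mask, 0, kspace))))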

So why bother to categorize k-space in this manner? Well, for starters, in several future posts we will need to consider the effective k-space matrix to understand many properties of an EPI time series as used for fMRI. When we look at spatial smoothing, for example, it will be imperative for you to understand where in k-space the primary effects of a smoothing function are manifest. A second reason concerns artifact recognition. This simple, intuitive partitioning of k-space regions can be extremely useful when it comes to diagnosing certain data artifacts. Because of the reciprocal relationship, a feature that is widespread (spatially) in image space will likely be focal in k-space. Tracking down a focal artifact source can be considerably easier to do. But I digress. We will look at artifact recognition in the next series of posts. For today I am going to focus on clean data and restrict the topic to features in an ideal image.


Why does the signal level change across k-space?

Here I am going to offer you two alternative explanations. First, an MRI explanation considering the action of the imaging gradients. We know that whenever a phase is imparted across a sample there will be some partial signal cancellation (as we saw in Part Eight). Near the center of k-space the signal is high because the amount of phase applied by the imaging gradients to the sample magnetization is low; the degree of signal cancellation is low. The more spatial information (detail) we try to encode, the more phase we need to impart to the signal, and the more the signal level is reduced. In the outer regions of k-space, where the imaging gradients are comparatively large and the concomitant dephasing is relatively large, the signal level will be diminished.

But the pictures I've presented aren't MRIs, they're digital photographs. Thus, an alternative (and technically more correct) explanation is to consider the spatial frequency content of the image. The image of the Hurricane contains more broad areas of relatively uniform intensity - clouds in the sky, the grass, large blobs of camouflage painted on the wings and fuselage, etc. - than it does edges and other fine details. And since we now know that edges live in peripheral (high) k-space regions whereas spatially broad features live towards the center of k-space, we can consider the k-space plot as a kind of "spatial content map." There is simply more image content to map that changes slowly with distance than there is content that changes rapidly with distance (i.e. detail).

In physical terms, going into high k-space regions means we are encoding high spatial frequencies. And an edge is something with a high spatial frequency; the feature changes rapidly over a short distance. At this point we can even make a prediction. It's reasonable to predict that to get more resolution - the ability to resolve finer structure - we will have to push out to higher k values. We'll deal with this last point in the image resolution section, below.
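
As a quick numerical check on this "spatial content map" idea, here's a small numpy sketch. The smooth Gaussian blob is just a stand-in for the broad, slowly varying features of a photograph, and the choice of a central 64x64 block is arbitrary.

    import numpy as np

    # Smooth test object: a broad Gaussian blob, i.e. only low spatial frequencies
    y, x = np.mgrid[-128:128, -128:128]
    image = np.exp(-(x**2 + y**2) / (2 * 40.0**2))

    kspace = np.fft.fftshift(np.fft.fft2(image))

    # How much of the total k-space magnitude sits in the central 64x64 block?
    c = 128
    central = kspace[c - 32:c + 32, c - 32:c + 32]
    frac = np.abs(central).sum() / np.abs(kspace).sum()
    print(f"fraction of |k-space| in the central 1/16th of the matrix: {frac:.3f}")
    # For a smooth object like this it comes out close to 1; add sharp edges
    # to the object and the fraction drops.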


Defining parameters in k-space to yield the image you want

Okay, now that you have a rough idea of what features live where in k-space, it's time to return to entire k-space matrices and learn how to set up k-space to yield an image having the spatial properties you want. Here, in illustrative form, are the spatial parameters we need to consider:


Delta-kx and delta-ky are the steps in each k-space dimension, while 2kxmax and 2kymax are the spans of k-space. In this example the k-space steps and spans are equal so the resulting image is a uniformly sampled square, but that doesn't have to be the case. FOVx and FOVy define the image size while delta-x and delta-y define the pixel size, i.e. the spatial resolution. (See Note 2.)


Image field-of-view

The relationship between k-space and the image FOV is straightforward. The reciprocal of the k-space step, delta-k, defines the image space extent, the FOV, i.e.

FOVx = 1/delta-kx
FOVy = 1/delta-ky

Here is a k-space matrix with small delta-k and its corresponding image:


If the span of k-space (i.e. the maximum k value) is left constant but the step size is changed, the effect on the image is to alter the FOV. If delta-k is increased the result is a reduced FOV, i.e. a zoomed image. Here is the same image with delta-k doubled in the y direction only, resulting in an image that is unchanged in x but that has half the FOV in y (see Note 3):
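
Here's a small numpy sketch of that operation, under the simplifying assumption that doubling delta-ky is the same as discarding every other ky line while keeping kymax fixed. The reconstructed matrix has half as many rows, i.e. half the FOV in y at the same pixel size, and any signal lying outside the reduced FOV wraps around - the aliasing discussed in Note 3.

    import numpy as np

    # Test object: an off-centre rectangle inside a 256x256 FOV
    image = np.zeros((256, 256))
    image[40:80, 100:156] = 1.0

    kspace = np.fft.fftshift(np.fft.fft2(image))      # ky runs along axis 0

    # Double delta-ky: keep every other ky line, leaving the span (2kymax) unchanged
    kspace_coarse = kspace[::2, :]

    # 128 ky lines -> half the FOV in y, same pixel size; x is untouched
    half_fov = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_coarse)))
    print(image.shape, "->", half_fov.shape)          # (256, 256) -> (128, 256)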



You might be wondering why this inverse relationship holds. What is it about the delta-k value that sets (or restricts) the image to a particular size? The relationship arises because of a restricted ability to interpret the phase changes imparted across the magnetization in the sample by the imaging gradients. Between gradient increments - that is, between successive sampling points under the frequency encoding gradient for the x dimension, or for each phase encoding step in y - we can't impart more than 360 degrees of phase because we cannot discriminate between 360 degrees and 0 degrees. Phase changes greater than 360 degrees "alias," so that a change of 450 degrees would be measured as only 90 degrees, and so on. It's the Nyquist sampling theorem, which we saw in Part Six, in another guise. (The algebra demonstrating the 360 degree phase discrimination limit is in Note 4.) All you need to remember is the simple inverse relationships given in the blue box, above.

Since we know that k is proportional to the time integral of the (readout or phase encode) gradient being applied to encode spatial information, this inverse relationship produces an interesting observation: big images are easy to produce, smaller images are more difficult to produce. Obtaining a small delta-k requires just a low amplitude gradient or a short time under a gradient, which is obviously easier to achieve experimentally than either a large amplitude gradient or a protracted gradient period.

At first glance this seems a little counter-intuitive, but that's because it's not the whole story. The image is comprised of a fixed number of pixels (arising from the same number of k-space samples), so getting large images is not the freebie it might appear at first blush. If your image is, say, 64x64 pixels total then an image a meter on a side isn't going to be very useful for brain imaging! We need to know the resolution of the image - the size of the pixels - before we determine whether the k-space matrix is in fact appropriate.


Image resolution

You saw above how restricting the k-space coverage to a small, central region from a larger matrix has the effect of blurring the image. So it should come as no surprise that simply extending the size of the k-space matrix will have the effect of increasing the image resolution.

Defining resolution is straightforward now that we have already got the FOV relationships established. All we need do is divide the FOV in each dimension by the number of pixels defining that dimension to see the k-space relationship:

delta-x = FOVx/Nx = 1/(Nx delta-kx) = 1/(2kxmax)
delta-y = FOVy/Ny = 1/(Ny delta-ky) = 1/(2kymax)

Remember that for an Nx x Ny image we acquire Nx/2 and Ny/2 values of k-space either side of zero, making the total span of k-space equal to 2kxmax and 2kymax.
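
If you like checking such relationships numerically, here's a small numpy sketch using np.fft.fftfreq; the 64-point matrix and 192 mm FOV are arbitrary choices. Note how the k-space samples fall either side of zero, with one of them sitting at k = 0.

    import numpy as np

    N = 64                        # matrix size in one dimension
    fov = 0.192                   # 192 mm field of view

    dx = fov / N                  # pixel size: 3 mm
    k = np.fft.fftshift(np.fft.fftfreq(N, d=dx))   # k-space sample positions, cycles/m

    dk = k[1] - k[0]              # k-space step
    span = N * dk                 # total span of k-space, i.e. 2*kmax

    print(f"delta-k = {dk:.2f} cycles/m  ->  FOV = {1e3 / dk:.0f} mm")
    print(f"2*kmax  = {span:.1f} cycles/m ->  pixel = {1e3 / span:.1f} mm")
    print(f"samples below zero: {np.sum(k < 0)}, at zero: {np.sum(k == 0)}, above: {np.sum(k > 0)}")
    # 32 below, 1 at zero, 31 above: N/2 "either side" with one of them at k = 0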

Here's an example of a k-space matrix having a large extent, yielding a high-resolution image:


If delta-k is maintained, to keep the FOV constant, but the extent of k-space values is restricted then the image resolution decreases:


Here the image is blurred because the unsampled white area around the central (sampled) square of k-space has been "zero-filled." (See Note 5.) This produces a smoothing effect in the image. If the zero filling were not performed then the image would appear pixellated instead of smooth. Either way, the actual resolution of the image is lower than in the previous, high maximum-k situation.
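
Again, a quick numpy sketch if you want to see both options side by side; the test object and the 64x64 central region are arbitrary choices, and this is zero-filling in its simplest form.

    import numpy as np

    image = np.zeros((256, 256))
    image[96:160, 96:160] = 1.0                 # simple test object

    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Keep only the central 64x64 of k-space: delta-k unchanged, kmax reduced 4-fold
    c = 128
    low_k = kspace[c - 32:c + 32, c - 32:c + 32]

    # Option 1: reconstruct the 64x64 matrix directly -> small, pixellated image
    pixellated = np.abs(np.fft.ifft2(np.fft.ifftshift(low_k)))

    # Option 2: zero-fill back to 256x256 before the 2D FT -> same FOV and matrix
    # size as before, but smoothed; the true resolution is still that of a 64x64
    # acquisition
    padded = np.zeros_like(kspace)
    padded[c - 32:c + 32, c - 32:c + 32] = low_k
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(padded)))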

What does this reciprocal relationship between total k-space extent and image resolution mean experimentally? Getting more resolution in the image requires larger k values, requiring either larger amplitude or longer gradient episodes (or some combination of the two). Thus, we can now see that while it is easy to get a large image FOV, it is difficult to get high image resolution! We will have to drive the gradients harder (larger amplitude) or leave them on for longer to attain smaller pixels. Indeed, this is probably the biggest single limit to MRI performance. Gradients can't be made arbitrarily high amplitude for engineering and safety reasons; large, rapidly switched gradients tend to cause peripheral nerve stimulation in the subject, as well as unwanted residual effects (eddy currents) in the magnet hardware, for example. And gradients can't be enabled for arbitrarily long durations or the signal that we're using to encode spatial information is likely to have died away to (near) zero, making our desired high-resolution image very low signal indeed.
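
To put rough numbers on "drive the gradients harder or leave them on for longer," here's a back-of-the-envelope sketch. It assumes gamma/2pi = 42.58 MHz/T for protons and a constant gradient plateau; the 1 mm pixel and 40 mT/m gradient amplitude are purely illustrative.

    # Gradient "area" (amplitude x time) needed to reach kmax for a given pixel size
    gamma_bar = 42.58e6              # gamma/2pi for 1H, in Hz/T

    pixel = 1e-3                     # target pixel size: 1 mm
    kmax = 1.0 / (2 * pixel)         # since pixel = 1/(2*kmax) -> kmax = 500 cycles/m

    # k = gamma_bar * G * t, so the required gradient-time product is:
    area = kmax / gamma_bar          # in T.s/m
    print(f"required G*t to reach kmax: {area * 1e6:.1f} uT.s/m")   # ~11.7

    # At an illustrative 40 mT/m gradient amplitude that takes:
    G = 40e-3                        # T/m
    t_ms = 1e3 * area / G
    print(f"time at 40 mT/m: {t_ms:.2f} ms")    # ~0.29 ms to get from k=0 out to kmax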

In physical terms, the reason why k must be pushed very high to get high spatial resolution in the image is that the imaging gradient must impart a significant phase difference between two nearby spatial positions. For a position x and a nearby position x', we must be able to distinguish their phases in order to resolve the two positions as unique, rather than seeing only the sum of the two. Only when the imaging gradient's time integral gets very large (leading to high k values) does a measurable phase difference develop between such closely spaced positions over each increment in k-space. Sadly, as far as we know today, there is no way around this resolving power limitation. It's not the fault of the pulse sequences per se but a fundamental limitation in the way we encode spatial information with magnetic field gradients.


Okay, that's enough of the general properties of k-space for the time being. In the next post we will return to k-space trajectories and pulse sequences. We're finally ready to see the workhorse of the majority of fMRI experiments: the echo planar imaging (EPI) sequence.

---------------



Notes:

1.  The overall signal level and the image contrast are predominantly established by other aspects of the pulse sequence, such as the excitation flip angle and repetition time, not the k-space coverage scheme. So here all we're considering is where that signal, established prior to spatial encoding, ends up residing in k-space.


2.  Some of you may already be aware that the real shape of pixels in MRI isn't actually rectangular. They are in fact defined by a "point spread function." Without any sort of filtering the PSF is sinc-like because the sampling window is a square; we saw in Part Six that the FT of a square function is a sinc. In EPI we also have some smoothing in at least one dimension arising from T2* relaxation. I don't want to get sidetracked with these issues at this point; instead, we will deal with the true shape of pixels when we consider the Gibbs artifact (or ringing) in a future post, because these are two sides of the same coin.


3.  There's one more FOV consideration: is the image FOV big enough? In Part Six we saw the effects of aliasing when there were insufficient data points to properly sample a waveform. This was encapsulated by the Nyquist theorem. And here's where that data sampling restriction enters into imaging. We will look in detail at aliasing and the image FOV in a later post, early in the series on artifact recognition.


4.  In order to see why the FOV should be inversely proportional to delta-k we need to consider the phase evolution between successive k-space points. For frequency encoding along x we have:

delta-phix = delta-wx delta-t = (gamma Gx FOVx) delta-t = 2pi delta-kx FOVx = 2pi

where delta-t is the sampling (dwell) time between successive readout points, delta-wx = gamma Gx FOVx is the frequency range spanned by FOVx under the readout gradient Gx, and delta-kx = (gamma/2pi) Gx delta-t.

For phase encoding along y we have:

delta-phiy = delta-wy tpe = (gamma delta-Gy FOVy) tpe = 2pi delta-ky FOVy = 2pi

where tpe is the duration of the phase encoding gradient, delta-Gy is the change in phase encoding gradient amplitude between successive ky lines, delta-wy = gamma delta-Gy FOVy, and delta-ky = (gamma/2pi) delta-Gy tpe. In both cases the final equality uses FOV = 1/delta-k from the blue box, above.

There are two observations we can make from the two relationships just derived. Firstly, these relationships reinforce the fact that MRI axes are frequency axes - delta-wx and delta-wy - and not really spatial axes at all. However, the spatial labels can be made appropriate after a little bit of algebra. (That's rather the point with this whole k-space formalism. We use it because it's a more intuitive, convenient way to think about imaging than the time/frequency relationships.)

Secondly, we can now see that the maximum phase shift imparted by the imaging gradients to the extreme spatial positions along each image dimension (i.e. to the edges of FOVx and FOVy, which are defined as the equivalent frequency ranges of delta-wx and delta-wy, respectively) is exactly 2pi radians, or 360 degrees, during each k-space increment. The result is identical for the frequency encoding axis (x) and the phase encoding axis (y). This isn't a coincidence.

Noting that the FOV is really a frequency range with a central "carrier frequency," the 360 degree phase range is really +/- 180 degrees relative to the carrier frequency's phase (which is the nominal zero position). At spatial positions less than the FOV (i.e. within the image, but not at or beyond the limits of the FOV) the phase imparted by each k-space increment is somewhere in this +/- 180 degree range. Outside of the image FOV is where things get interesting. Remember aliasing from Part Six? We cannot meaningfully encode any more phase than 360 degrees because 360 degrees and 0 degrees are indistinguishable; phase is modulo(360).
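
Here's a tiny numeric illustration of that wrapping, with an arbitrary 224 mm FOV: a position at +0.75 FOV acquires the same per-increment phase as a position at -0.25 FOV, so that is where it will appear in the image.

    import numpy as np

    fov = 0.224                              # 224 mm FOV, just an example
    dk = 1.0 / fov                           # k-space increment, cycles/m

    # Phase imparted per k-space increment at a few positions (degrees);
    # the last position lies outside the FOV
    positions = np.array([-0.5, -0.25, 0.0, 0.25, 0.5, 0.75]) * fov
    phase = 360.0 * dk * positions

    # We can only measure phase modulo 360, wrapped into +/-180 degrees
    wrapped = (phase + 180.0) % 360.0 - 180.0
    for x, p, w in zip(positions, phase, wrapped):
        print(f"x = {1e3 * x:+7.1f} mm   true phase = {p:+7.1f} deg   measured = {w:+7.1f} deg")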

We will consider FOV and aliasing again in one of the first posts on artifact recognition, because aliasing is one of the issues that can easily affect fMRI data if the scanner operator isn't careful. (Yeah, you can't blame aliasing on your subject! It's pure pilot error!)


5.  I don't have plans for a post on image smoothing or zero-filling at the moment, but the issues will be covered in part when I do a post on the Gibbs ringing artifact in the artifact recognition series. For fMRI most intentional image smoothing is done offline, in the image domain, as part of a processing pipeline rather than on the scanner (before 2D FT), so it's not really a scanner acquisition issue.

18 comments:

  1. great post. really helped me a lot. keep up the good work...

  2. Yes, great post!
    I didn't understand this sentence "Remember that for an Nx x Ny image we acquire Nx/2 and Ny/2 values of k-space either side of zero, making the total span of k-space equal to 2kxmax and 2kymax."

    Maybe it is obvious, I'm sorry. Why do we acquire Nx/2 points in k-space for Nx points in the image domain? Each point of the real object corresponds to a particular point in k-space, doesn't it? Can you try to explain that to me in other words?
    Thank you in advance,
    f4bry

  3. I have another question : why is it possible to obtain higher spatial resolution with a multi-shot (interleaved?) EPI?

    Thank you,
    f4bry

  4. Hi f4bry,

    On your first question, I think I see the source of your confusion. The crucial phrase is "either side of zero." So we do actually acquire Nx k-space points for Nx points in image space; Nx/2 of them are on the negative side of kx-space, Nx/2 of them are on the positive side of kx-space (if you ignore for simplicity the usual convention of acquiring one point at kx=0, which results in one fewer k-space point acquired on one side of kx when Nx is even).

    On another tack, be careful trying to associate each point in k-space with a single point in image space. In fact, one point in k-space represents every point in image space that contains the particular spatial properties of that point in k-space. That's why it's useful to demonstrate what happens in an image when certain regions of k-space are set to zero, as in the first figure in the post. If just one point of k-space were set to zero it would have a (potentially) universal effect on the image; "potentially" because only if that particular k-space point represents spatial frequencies present in the image would there be a non-zero value at that k-space point.

    Replies
    1. Thank you very much, I've got it!
      f4bry

  5. f4bry, on your second question about multi-shot (interleaved) EPI, the root of the reason is the duration of usable signal after an excitation RF pulse. The typical T2* of brain tissue is in the range 20-60 ms at 3 T. Thus, after a best case of some 100-150 ms the signal level is rapidly approaching zero, and the detected echoes become increasingly dominated by noise.

    Now think about wanting to acquire an EPI that is, say, 256x256 pixels with a field-of-view of 224 mm. That is equivalent to a nominal pixel resolution of 0.875 mm. Acquiring each line of kx-space (the read direction) will take four times longer for 256 readout points than for 64 readout points. If 64 points gives an inter-echo time of 0.5 ms then each line of kx-space takes 2 ms to acquire. We might be able to acquire something like fifty total echoes before we simply run out of signal. We can't hope to acquire all 256 phase-encoded echoes in a single shot! The signal-to-noise would be awful, tending towards unity!

    But if we chop up the 256 phase-encoded echoes into four groups of 64 echoes each, and combine the four shots before doing the 2D FT, then we have a fighting chance of producing a final image with appreciable SNR at the target nominal resolution. Overall, then, it's a signal (or T2*) limiting situation. If there were a way to make T2* longer then, in principle, we could acquire higher spatial resolution in a single shot. Sadly, however, we're inherently limited in our ability to shim the brain and prolong the T2*.

    Cheers!

    Replies
    1. Thanks a lot! It is very useful!

      I understood the idea but I have some problem with the nomenclature and to "see" that in the k-space.

      When you say "inter-echo time" do you mean the time to move from one kx-line to another? And when you say that each line is acquired every 2ms...do you mean a shot trajectory every 2ms?

      As you said in your post the image resolution is FOV/N = 1/(N Dk) = 1/kmax. If we keep the FOV fixed, we should increase the number of points acquired to increase the resolution, shouldn't we? Could we say that with more shots we have more points in the k-space? So, if n are the number of shots, FOV/N=1/n(N Dk) ?

      A multi-shot EPI is not longer than a single-shot EPI?

      I'm sorry to bore you with all these questions, but I have another question ;)
      Multi-shot sequences decrease also the geometric distortions, as you mentioned in one of your post. Also in this case, there is a relationship between the geometric distortions and the number of shots?

      Thanks you very much for all your answers!
      f4bry

    2. "When you say "inter-echo time" do you mean the time to move from one kx-line to another?"

      Yep!

      "And when you say that each line is acquired every 2ms...do you mean a shot trajectory every 2ms?"

      Correct, it would be one entire line of kx (read axis) in my example.

      "As you said in your post the image resolution is FOV/N = 1/(N Dk) = 1/kmax. If we keep the FOV fixed, we should increase the number of points acquired to increase the resolution, shouldn't we?"

      Yes, and to do that we need to push out farther into higher values of k-space.

      "Could we say that with more shots we have more points in the k-space?So, if n are the number of shots, FOV/N=1/n(N Dk) ?"

      Ultimately the number of points in k-space is the same whether those values are acquired in one go - single-shot EPI - or via some sort of multi-shot scheme. Also, the actual k-space values acquired in the final 2D k-space matrix are the same regardless of the number of shots. We're simply chopping up the acquisition into smaller chunks in order to attain some experimental gain. There aren't many good articles on multi-shot EPI that I've found, but Stuart Clare's PhD thesis might make a useful next step:

      http://users.fmrib.ox.ac.uk/~stuart/thesis/chapter_5/section5_2.html


      "A multi-shot EPI is not longer than a single-shot EPI?"

      Yes, multi-shot EPI takes n times longer for n shots, which is a big reason why multi-shot EPI isn't common for fMRI.

      "Multi-shot sequences decrease also the geometric distortions, as you mentioned in one of your post. Also in this case, there is a relationship between the geometric distortions and the number of shots?"

      Yes, all other parameters being equal the geometric distortion extent will be inversely proportional to n. But because of the increased time to acquire each final image, and the increased motion sensitivity that comes with extending the total imaging time, it's also not a common tactic for fMRI.

    3. "Multi-shot sequences decrease also the geometric distortions, as you mentioned in one of your post. Also in this case, there is a relationship between the geometric distortions and the number of shots?"

      In case I haven't been clear, the critical parameters for distortion extent are the inter-echo time, i.e. the time between each line of kx sampling, and the size of the step in ky between each echo. For single-shot EPI, each new echo in the train defines a step in ky (the phase encode dimension) of size delta-ky.

      Any change that decreases the inter-echo time or increases the ky step size between each echo will tend to decrease the distortion, all other parameters held constant. For example, we could simply skip every other ky step, i.e. make the step size R*delta-ky, as is done in parallel imaging methods such as GRAPPA for R-fold acceleration. (The amount of phase evolution between ky samples has been reduced by a factor R, thereby reducing the distortion by a factor R.) Likewise, if one does an interleaved multi-shot EPI scheme then one can achieve similar n-fold reduction of distortion for an n-shot acquisition, but only if the inter-echo time is maintained and the ky step size is increased by the factor n, to n*delta-ky. (The interleaved k-space then has to be combined prior to 2D FT.)

      Other "go faster" EPI methods, such as partial Fourier EPI (post to come), don't decrease the distortion extent in the phase encoding dimension because the ky step size and the inter-echo time are unchanged. And it's also possible to acquire some segmented multi-shot EPI schemes where the ky step is unchanged, and those don't decrease distortion either (although they can permit higher resolution than single-shot EPI).

      I'll add a post on distortion and methods to reduce it to my list. There's too much information to portray here!

  6. f4bry, a postscript: One oft-overlooked point about resolution is the intrinsic blurring of pixels by T2* decay under the EPI acquisition train. This is the so-called "point spread." I'll be doing a separate post on point spread at some point. But it's useful to remember that the nominal pixel size, defined as FOV divided by the number of pixels, is always smaller than the effective pixel size. Pixels are always smoothed somewhat by virtue of the physics of the acquisition.

    Replies
    1. Great! I will read it willingly!
      thanks,
      f4bry

  7. Thank you again, the discussion was very useful for me, could we please continue it by email? Mine is fabrybo@hotmail.com

  8. Your blog rocks!
    I've been puzzling for quite a while about an image artefact that intermittently appears in some of the DW images. Is there any way that I can send you the images for your comments? Thanks in advance.

    Replies
    1. Sure! Be glad to take a look. It's practicalfmri at gmail dot com. I'm traveling thru 6th May but will take a look as soon as I'm able.

  9. Wonderful blog... I learn a lot from it.
    There is one thing that I can't understand. On the scanner, is it possible to change only the FOV (increase/decrease the pixel size) leaving the matrix unchanged?
    Reading the formula pixel size = FOV / matrix, I think it's possible,
    but when I read that FOV = 1 / delta-k, in the image there is the same pixel size. I have changed only the FOV without changing the pixel size. How is that possible?

    Replies
    1. Hi Luca, yes, it is possible to change the FOV and the matrix independently. There are two relationships at work simultaneously, given in the first two blue boxes. The first defines the FOV, the second defines the pixel size in each dimension, as FOV/N where N is the number of pixels in that dimension. So this is the part of the post where pixel size is introduced:

      Defining resolution is straightforward now that we have already got the FOV relationships established. All we need do is divide the FOV in each dimension by the number of pixels defining that dimension to see the k-space relationship:

      But I didn't call the value FOV/N the pixel size. If you re-read the post but always think of FOV/N as the pixel size, does it now make sense? You can see from the blue box defining FOV/N that the FOV and the pixel size are independent parameters, and keeping pixel size constant if FOV changes simply means that N - one dimension of the matrix size - must also change in concert.

  10. Your way of explaining difficult things is amazing! Congrats and thanks for sharing your knowledge!
    As it is a complex issue, even with a brilliant explanation, some doubts still remain in my mind:

    You said that
    "Secondly, we can now see that the maximum phase shift imparted by the imaging gradients to the extreme spatial positions along each image dimension (i.e. to the edges of FOVx and FOVy, which are defined as the equivalent frequency ranges of delta-wx and delta-wy, respectively) is exactly 2pi radians, or 360 degrees, during each k-space increment"

    I had a doubt about this because, reading some papers and attending some classes on parallel imaging, I heard that "gradients modulate the FOV extension with spatial harmonics". I don't know if I understood correctly. For me, for the first harmonic along the phase axis, for example, we apply a gradient that creates a 2*PI difference between Ytop and Ybottom (one peak and one valley), where PHASE_top = 0 and PHASE_bottom = 2*PI. For the second harmonic we apply a gradient that creates a 4*PI difference between Yt and Yb (two peaks and two valleys), i.e., PHASE_top = 0 and PHASE_bottom = 4*PI, and so on. Is this a wrong way to think? If it is wrong, please help me understand where I am confusing myself, because already at the second harmonic we have a difference greater than 2*PI, and that doesn't satisfy the condition 2*PI = DELTA_t * DELTA_Wy.
    Thanks in advance
