Understanding how distances in k-space manifest as distances in image space is quite straightforward. All you really need to remember is that the relationships are reciprocal. The discrete steps in k-space define the image field-of-view (FOV), whereas the maximum extents of k-space define the image resolution. In other words, small in k-space determines big in image space, and vice versa. In this post we will look first at the implications of the reciprocal relationship as it affects image appearance. Then we'll look at the simple mathematical relationships between lengths in k-space and their reciprocal lengths in image space.
Spatial frequencies in k-space: what lives where?
I mentioned in the previous post that there's no direct correspondence between any single point in k-space and any single point in real space. Instead, in k-space the spatial properties of the object are "turned inside out and sorted according to type" (kinda) in a symmetric and predictable fashion that leads to some intuitive relationships between particular regions of k-space and certain features of the image.
Here is what happens if you have just the inner (left column) or just the outer (right column) portions of k-space, compared to the full k-space matrix arising from 2D FT of a digital photograph (central column):
|An illustration of the effect of nulling different regions of k-space from a full k-space matrix, applied to a digital picture of a Hawker Hurricane aircraft. The full k-space matrix and corresponding image are shown in the central column.|
Inner k-space only:
The inner portion of k-space (top-left) possesses most of the signal but little detail, leading to a bright but blurry image (bottom-left). (See Note 1.) Most features remain readily apparent in the blurry image, however, because most contrast is preserved; image contrast is due primarily to signal intensity differences, not edges. If this weren't true we would always go for the highest signal-to-noise MRIs we could get, when in practice what we want is the highest contrast-to-noise images we can get! Imagine an MRI that had a million-to-one SNR but no contrast. How would you tell where the gray matter ends and the white matter begins? Without contrast no amount of signal or spatial resolution would help. So much for SNR alone!
Outer k-space only:
If we instead remove the central portion of k-space (top-right) then we remove most of the signal and the signal-based contrast to leave only the fine detail of the image (bottom-right). Strangely, though, it's still possible for us to make out the main image features because our brains are able to interpret entire objects from just edges. In actuality, however, there is very little contrast between the dark fuselage of the Hurricane, the dark shadow underneath it and the dark sky. Our brain infers contrast because we know what we should be seeing! If we were to try doing fMRI, say, on a series of edges-only images we would run into difficulties because we process the time series pixelwise. With a relatively low and homogeneous signal level you can bet good money the statistics would be grim.
The central portion of k-space is important because it provides the bulk of the image signal as well as the signal-based contrast, while the outer portions of k-space provide image detail, in particular establishing the boundaries in image contrast. Having only one or the other might not prevent us, by inspection, from being able to recognize an object in an image, but it may not suffice for pixelwise processing. The objective in fMRI isn't simply to be able to recognize an image as that of a brain!
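This inner/outer partitioning is easy to reproduce numerically. Below is a minimal sketch using NumPy; the 64x64 test "image" and the 8x8 central mask are arbitrary assumptions for illustration, not anything from an actual scan. It masks a k-space matrix both ways and confirms that the central region carries the largest coefficient and most of the energy:

```python
import numpy as np

# A hypothetical 64x64 test "image": a bright 16x16 square on black.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# 2D FT to k-space, shifting k = 0 to the centre of the matrix.
k = np.fft.fftshift(np.fft.fft2(img))

# Inner-only: keep a small central 8x8 block of k-space, zero the rest.
inner = np.zeros_like(k)
inner[28:36, 28:36] = k[28:36, 28:36]
img_inner = np.abs(np.fft.ifft2(np.fft.ifftshift(inner)))

# Outer-only: the complement - null the same central block.
outer = k.copy()
outer[28:36, 28:36] = 0.0
img_outer = np.abs(np.fft.ifft2(np.fft.ifftshift(outer)))

# The centre of k-space holds the biggest coefficient (the mean signal)
# and most of the energy; the periphery holds only the edges.
print(np.abs(k)[32, 32] == np.abs(k).max())              # True
print((img_inner ** 2).sum() > (img_outer ** 2).sum())   # True
```

The inner-only reconstruction is the bright, blurry version of the square; the outer-only reconstruction is dim everywhere except at the square's edges, just as in the Hurricane figure.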
So why bother to categorize k-space in this manner? Well, for starters, in several future posts we will need to consider the effective k-space matrix to understand many properties of an EPI time series as used for fMRI. When we look at spatial smoothing, for example, it will be imperative for you to understand where in k-space the primary effects of a smoothing function are manifest. A second reason concerns artifact recognition. This simple, intuitive partitioning of k-space regions can be extremely useful when it comes to diagnosing certain data artifacts. Because of the reciprocal relationship, a feature that is widespread (spatially) in image space will likely be focal in k-space. Tracking down a focal artifact source can be considerably easier to do. But I digress. We will look at artifact recognition in the next series of posts. For today I am going to focus on clean data and restrict the topic to features in an ideal image.
Why does the signal level change across k-space?
Here, I am going to offer you two alternative explanations. First, an MRI explanation considering the action of the imaging gradients. We know that whenever a phase is imparted across a sample there will be some partial signal cancellation (as we saw in Part Eight). Near the center of k-space the signal is high because the amount of phase applied by the imaging gradients to the sample magnetization is low; the degree of signal cancellation is low. The more spatial information (detail) we try to encode, the more phase we must impart to the signal, and the further the signal level is reduced. In the outer regions of k-space, where the imaging gradients are comparatively large and the concomitant dephasing is relatively large, the signal level will be diminished.
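The dephasing argument can be sketched numerically. Assuming a hypothetical smooth 1D "object" (a Gaussian spin-density profile, with width and position chosen purely for illustration), the net signal at each k value is the sum of the magnetization with the gradient-imparted phase, and it falls off as k grows:

```python
import numpy as np

# A hypothetical smooth 1D "object": a Gaussian spin-density profile
# (arbitrary width and position, chosen purely for illustration).
n = 256
x = np.arange(n)
rho = np.exp(-0.5 * ((x - 128) / 6.0) ** 2)

# The signal at each k is the sum over the object of the magnetization
# with the gradient-imparted phase exp(-2*pi*i*k*x/n): a discrete FT.
signal = np.abs(np.fft.fft(rho))

# Near k = 0 there is little phase spread and little cancellation;
# at high k the phase spread is large and the net signal is tiny.
print(signal[0] > signal[10] > signal[50])  # True
```

A smooth object produces almost all of its signal at low k; only its sharp features (which this Gaussian lacks) would survive out at high k.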
But the pictures I've presented aren't MRIs, they're digital photographs. Thus, an alternative (and technically more correct) explanation is to consider the spatial frequency content of the image. The image of the Hurricane contains more broad areas of relatively uniform intensity - clouds in the sky, the grass, large blobs of camouflage painted on the wings and fuselage, etc. - than it does edges and other fine details. And since we now know that edges live in peripheral (high) k-space regions whereas spatially broad features live towards the center of k-space, we can consider the k-space plot as a kind of "spatial content map." There is simply more image content to map that changes slowly with distance than there is content that changes rapidly with distance (i.e. detail).
In physical terms, going into high k-space regions means we are encoding high spatial frequencies. And an edge is something with a high spatial frequency; the feature changes rapidly over a short distance. At this point we can even make a prediction. It's reasonable to predict that to get more resolution - the ability to resolve finer structure - we will have to push out to higher k values. We'll deal with this last point in the image resolution section, below.
Defining parameters in k-space to yield the image you want
Okay, now you have a rough idea of what features live where in k-space, it's time to return to entire k-space matrices and learn how to establish k-space to yield an image having the spatial properties that you want. Here, in illustrative form, are the spatial parameters we need to consider:
Delta-kx and delta-ky are the steps in each k-space dimension, while 2kxmax and 2kymax are the spans of k-space. In this example the k-space steps and spans are equal so the resulting image is a uniformly sampled square, but that doesn't have to be the case. FOVx and FOVy define the image size while delta-x and delta-y define the pixel size, i.e. the spatial resolution. (See Note 2.)
The relationship between k-space and the image FOV is straightforward. The reciprocal of the k-space step, delta-k, defines the image space extent, the FOV, i.e.

FOVx = 1/delta-kx and FOVy = 1/delta-ky
Here is a k-space matrix with small delta-k and its corresponding image:
If the span of k-space (i.e. the maximum k value) is left constant but the step size is changed, the effect on the image is to alter the FOV. If delta-k is increased the result is a reduced FOV, i.e. a zoomed image. Here is the same image with delta-k doubled in the y direction only, resulting in an image that is unchanged in x but that has half the FOV in y (see Note 3):
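A quick numerical check of this, under the assumption of an idealized, noise-free k-space matrix: keeping only every second ky line doubles delta-ky, and the reconstructed image spans half the FOV in y, with the object folded (aliased) onto itself:

```python
import numpy as np

# A hypothetical 64x64 test image: an off-centre bright block.
img = np.zeros((64, 64))
img[10:20, 24:40] = 1.0

# Keeping only every second ky line doubles delta-ky. The reconstructed
# image then spans half the FOV in y (a 32x64 matrix).
k = np.fft.fft2(img)
small = np.fft.ifft2(k[::2, :]).real

# The half-FOV image is the original folded (aliased) onto itself in y:
print(small.shape)                               # (32, 64)
print(np.allclose(small, img[:32] + img[32:]))   # True
```

Because this test object sits entirely within the top half of the original FOV, the half-FOV image looks like a clean zoom; an object extending beyond the reduced FOV would wrap around instead.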
You might be wondering why this inverse relationship holds. What is it about the delta-k value that sets (or restricts) the image to a particular size? The relationship arises because of a restricted ability to interpret the phase changes imparted across the magnetization in the sample by the imaging gradients. Between gradient increments - that is, between successive sampling points under the frequency encoding gradient for the x dimension, or for each phase encoding step in y - we can't impart more than 360 degrees of phase because we cannot discriminate between 360 degrees and 0 degrees. Phase changes greater than 360 degrees "alias," so that a change of 450 degrees would be measured as only 90 degrees, and so on. It's the Nyquist sampling theorem, which we saw in Part Six, in another guise. (The algebra demonstrating the 360 degree phase discrimination limit is in Note 4.) All you need to remember is the simple inverse relationships given in the blue box, above.
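The 360 degree ambiguity is easy to demonstrate numerically; here is the 450-degrees-reads-as-90 example from above:

```python
import numpy as np

# Phase is only defined modulo 360 degrees: a 450 degree twist is
# measured as 90 degrees, because 360 and 0 are indistinguishable.
measured = np.rad2deg(np.angle(np.exp(1j * np.deg2rad(450.0))))
print(round(measured))  # 90
```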
Since we know that k is the time integral of the (readout or phase encode) gradient being applied to encode spatial information, this inverse relationship produces an interesting observation: big images are easy to produce, smaller images are more difficult to produce. Obtaining a small delta-k requires just a low amplitude gradient or a small amount of time under a gradient, which is obviously easier to achieve experimentally than either a large amplitude gradient or a protracted gradient period.
At first glance this seems a little counter-intuitive, but that's because it's not the whole story. The image comprises a fixed number of pixels (arising from the same number of k-space samples), so getting large images is not the freebie it might appear at first blush. If your image is, say, 64x64 pixels total then an image a meter on a side isn't going to be very useful for brain imaging! We need to know the resolution of the image - the size of the pixels - before we determine whether the k-space matrix is in fact appropriate.
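To put illustrative (assumed) numbers on this: for a 64x64 matrix, a 1 m FOV gives pixels around 16 mm on a side, far too coarse for a brain, while a 192 mm FOV gives 3 mm pixels, a typical resolution for fMRI:

```python
# Assumed illustrative numbers, not from the post's figures: a 64x64 matrix.
n_pixels = 64

# A 1 m (1000 mm) FOV gives pixels far too coarse for brain imaging:
print(1000.0 / n_pixels, "mm")  # 15.625 mm

# A 192 mm FOV gives 3 mm pixels, a typical resolution for fMRI:
print(192.0 / n_pixels, "mm")   # 3.0 mm
```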
You saw above how restricting the k-space coverage to a small, central region from a larger matrix has the effect of blurring the image. So it should come as no surprise that simply extending the size of the k-space matrix will have the effect of increasing the image resolution.
Defining resolution is straightforward now that we have the FOV relationships established. All we need do is divide the FOV in each dimension by the number of pixels defining that dimension to see the k-space relationship:

delta-x = FOVx/Nx = 1/(Nx · delta-kx) = 1/(2kxmax) and delta-y = FOVy/Ny = 1/(Ny · delta-ky) = 1/(2kymax)
Remember that for an Nx x Ny image we acquire Nx/2 and Ny/2 values of k-space either side of zero, making the total span of k-space equal to 2kxmax and 2kymax.
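As a quick sanity check with assumed numbers (a 64-sample dimension and a 24 cm FOV), the two ways of writing the pixel size, FOV/N and 1/(2kmax), agree:

```python
# Assumed numbers: a 64-sample dimension with delta-k chosen for a 24 cm FOV.
n = 64
fov = 0.24                  # metres
delta_k = 1.0 / fov         # k-space step, cycles per metre

k_max = (n / 2) * delta_k   # n/2 samples either side of k = 0
span = 2 * k_max            # total k-space span = n * delta_k

# Pixel size two ways: FOV/N, and the reciprocal of the k-space span.
delta_x = fov / n
print(delta_x)                             # 0.00375 m, i.e. 3.75 mm pixels
print(abs(delta_x - 1.0 / span) < 1e-12)   # True: the definitions agree
```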
Here's an example of a k-space matrix having a large extent, yielding a high-resolution image:
If delta-k is maintained, to keep the FOV constant, but the extent of k-space values is restricted then the image resolution decreases:
Here the image is blurred because the unsampled white area around the central (sampled) square of k-space has been "zero-filled." (See Note 5.) This produces a smoothing effect in the image. If the zero filling were not performed then the image would appear pixellated instead of smooth. Either way, the actual resolution of the image is reduced from the previous, high maximum-k situation.
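The truncation-plus-zero-fill effect can be sketched as follows, using an arbitrary NumPy test image containing one sharp edge. Keeping only the central 16x16 of k-space preserves delta-k (and hence the FOV) but reduces the maximum k, turning the hard edge into a gradual ramp:

```python
import numpy as np

# A hypothetical 64x64 test image containing one sharp vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0

k = np.fft.fftshift(np.fft.fft2(img))

# Keep only the central 16x16 of k-space, zero-filling the remainder:
# delta-k (and hence the FOV) is unchanged, but k max is reduced.
trunc = np.zeros_like(k)
trunc[24:40, 24:40] = k[24:40, 24:40]
blurred = np.abs(np.fft.ifft2(np.fft.ifftshift(trunc)))

# The hard 0 -> 1 step now ramps up over several pixels: a smoothed image.
print(img[32, 30:34])               # [0. 0. 1. 1.]
print(blurred[32, 30:34].round(2))  # intermediate values across the edge
```

The reconstructed matrix is still 64x64 because of the zero filling, but the true resolution is that of the 16x16 sampled region.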
What does this reciprocal relationship between total k-space extent and image resolution mean experimentally? Getting more resolution in the image requires larger k values, requiring either larger amplitude or longer gradient episodes (or some combination of the two). Thus, we can now see that while it is easy to get a large image FOV, it is difficult to get high image resolution! We will have to drive the gradients harder (larger amplitude) or leave them on for longer to attain smaller pixels. Indeed, this is probably the biggest single limit to MRI performance. Gradients can't be made arbitrarily high amplitude for engineering and safety reasons; large, rapidly switched gradients tend to cause peripheral nerve stimulation in the subject, as well as unwanted residual effects (eddy currents) in the magnet hardware, for example. And gradients can't be enabled for arbitrarily long durations or the signal that we're using to encode spatial information is likely to have died away to (near) zero, making our desired high-resolution image very low signal indeed.
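To make the gradient demand concrete, here is a back-of-the-envelope sketch with assumed hardware numbers (a 20 mT/m gradient applied for 0.5 ms; gamma/2pi = 42.58 MHz/T for protons). The achievable pixel size follows directly from the gradient area, and halving the pixel size requires doubling either the amplitude or the duration:

```python
# Assumed, illustrative hardware numbers - not from the post:
gamma_bar = 42.58e6   # gamma/2pi for protons, Hz per tesla
G = 0.020             # gradient amplitude, 20 mT/m
T = 0.5e-3            # gradient duration, 0.5 ms

# k is (gamma/2pi times) the time integral of the gradient:
k_max = gamma_bar * G * T             # cycles per metre
delta_x = 1.0 / (2.0 * k_max)         # resolution limit, metres
print(round(delta_x * 1e3, 2), "mm")  # ~1.17 mm

# Halving the pixel size requires doubling the gradient area (G or T):
delta_x_2 = 1.0 / (2.0 * gamma_bar * (2 * G) * T)
print(abs(delta_x_2 - delta_x / 2) < 1e-12)  # True
```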
In physical terms, the reason why k must be pushed very high to get high spatial resolution in the image is the limited ability of the imaging gradient to impart a significant phase difference between two nearby spatial positions. For a position x and a nearby position x', we must be able to distinguish their phases in order to resolve the two positions as unique, rather than measuring only their sum. Only when the imaging gradient's time integral gets very large (leading to high k values) does a measurable phase difference develop between such closely spaced positions. Sadly, as far as we know today, there is no way around this resolving power limitation. It's not the fault of the pulse sequences per se but a fundamental limitation in the way we encode spatial information with magnetic field gradients.
Okay, that's enough of the general properties of k-space for the time being. In the next post we will return to k-space trajectories and pulse sequences. We're finally ready to see the workhorse of the majority of fMRI experiments: the echo planar imaging (EPI) sequence.
1. The overall signal level and the image contrast are predominantly established by other aspects of the pulse sequence, such as the excitation flip angle and repetition time, not the k-space coverage scheme. So here all we're considering is where that signal, established prior to spatial encoding, ends up residing in k-space.
2. Some of you may already be aware that the real shape of pixels in MRI isn't actually rectangular. Pixels are in fact defined by a "point spread function." Without any sort of filtering the PSF is sinc-like because the sampling window is a square; we saw in Part Six that the FT of a square function is a sinc. In EPI we also have some smoothing in at least one dimension arising from T2* relaxation. I don't want to get sidetracked with these issues at this point; instead, we will deal with the true shape of pixels when we consider the Gibbs artifact (or ringing) in a future post, because these are two sides of the same coin.
3. There's one more FOV consideration: is the image FOV big enough? In Part Six we saw the effects of aliasing when there were insufficient data points to properly sample a waveform. This was encapsulated by the Nyquist theorem. And here's where that data sampling restriction enters into imaging. We will look in detail at aliasing and the image FOV in a later post, early in the series on artifact recognition.
4. In order to see why the FOV should be inversely proportional to delta-k we need to consider the phase evolution between successive k-space points. For frequency encoding along x, successive samples are separated by the dwell time, delta-t, under a gradient Gx, giving a k-space step delta-kx = (gamma/2pi) · Gx · delta-t. The frequency range across the image is delta-wx = gamma · Gx · FOVx, and the Nyquist criterion limits the phase evolution between samples to delta-wx · delta-t = 2pi, so:

FOVx = 2pi/(gamma · Gx · delta-t) = 1/delta-kx
For phase encoding along y, each step increments the gradient by delta-Gy, applied for a fixed duration tau, giving delta-ky = (gamma/2pi) · delta-Gy · tau. The equivalent frequency range is delta-wy = gamma · delta-Gy · FOVy, and limiting the phase evolution per step to delta-wy · tau = 2pi gives:

FOVy = 2pi/(gamma · delta-Gy · tau) = 1/delta-ky
There are two observations we can make from the two relationships just derived. Firstly, these relationships reinforce the fact that MRI axes are frequency axes - delta-wx and delta-wy - and not really spatial axes at all. However, the spatial labels can be made appropriate after a little bit of algebra. (That's rather the point with this whole k-space formalism. We use it because it's a more intuitive, convenient way to think about imaging than the time/frequency relationships.)
Secondly, we can now see that the maximum phase shift imparted by the imaging gradients to the extreme spatial positions along each image dimension (i.e. to the edges of FOVx and FOVy, which are defined as the equivalent frequency ranges of delta-wx and delta-wy, respectively) is exactly 2pi radians, or 360 degrees, during each k-space increment. The result is identical for the frequency encoding axis (x) and the phase encoding axis (y). This isn't a coincidence.
Noting that the FOV is really a frequency range with a central "carrier frequency," the 360 degree phase range is really +/- 180 degrees relative to the carrier frequency's phase (which is the nominal zero position). At spatial positions less than the FOV (i.e. within the image, but not at or beyond the limits of the FOV) the phase imparted by each k-space increment is somewhere in this +/- 180 degree range. Outside of the image FOV is where things get interesting. Remember aliasing from Part Six? We cannot meaningfully encode any more phase than 360 degrees because 360 degrees and 0 degrees are indistinguishable; phase is modulo(360).
We will consider FOV and aliasing again in one of the first posts on artifact recognition, because aliasing is one of the issues that can easily affect fMRI data if the scanner operator isn't careful. (Yeah, you can't blame aliasing on your subject! It's pure pilot error!)
5. I don't have plans for a post on image smoothing or zero-filling at the moment, but the issues will be covered in part when I do a post on the Gibbs ringing artifact in the artifact recognition series. For fMRI most intentional image smoothing is done offline, in the image domain, as part of a processing pipeline rather than on the scanner (before 2D FT), so it's not really a scanner acquisition issue.