(For the answer to the homework k-space diagram given at the end of Part Nine, see Note 1.)

K-space in two dimensions

As anyone who has encountered MRI professionally knows, whether in research or medicine, there seems to be an endless array of pulse sequences to choose from. The variety can be overwhelming at first. Nor is the situation helped by different vendors using different acronyms - we always use acronyms in MRI! - for what is essentially the same sequence.

It's little wonder, then, that most neophytes' eyes glaze over when it comes to comparing and contrasting any two pulse sequences if the taxonomy appears to be ad hoc. Where on earth to start? But it turns out that most pulse sequences can be categorized fairly easily, and their heritage traced, by separating the part(s) of the sequence responsible for spatial encoding from the part(s) that provide the tissue or functional contrast. Occasionally these two missions overlap within the sequence, but even then it's usually straightforward to understand the spatial encoding and interpret its genesis.

**A useful pictorial representation of imaging pulse sequences**

It turns out that there are only a handful of spatial encoding methods in common use these days, almost all with roots in the late 1970s or early 1980s. While new pulse sequences appear in the literature all the time, when you look at their k-space representations you'll be able to see how each new method has developed from a small number of key ideas from those early years. It's possible to categorize the encoding methods without k-space, but the k-space formalism makes comparisons trivial (in MR terms).

Spatial encoding methods can be separated into families derived from a central idea. For instance, following Lauterbur's original imaging paper in 1973 (which led to the family of projection reconstruction methods), in 1975 Richard Ernst's group came up with a sequence that utilized a 2D Fourier transform to yield the final image. (See Note 2.) It was a remarkable breakthrough and is the grandparent of nearly all medical/biological sequences still in common use today.

Still, even geniuses miss opportunities every now and then. And in 1980 a group at Aberdeen came up with a far more practical implementation of Fourier imaging, using amplitude-modulated gradients in a "constant time" pulse sequence, rather than the fixed amplitude, variable time scheme of Kumar, Welti and Ernst. It is this constant time scheme, which the Aberdeen group termed "spin warp" phase encoding, that provides the basis for most clinical (anatomical) scanning used today. It's also a good scheme to look at when first encountering 2D k-space, so we'll consider it in detail in this post.

**The goal revisited**

In the first part of the last post (see Part Nine) I used two examples of digital images to illustrate how the information content in a 2D plane of image pixels can be equivalently represented in reciprocal 2D space, or k-space. I mentioned that both the images and the k-space comprised 512x512 points, but later on when I started to draw (one-dimensional) k-space trajectories I did so on a k-space plane that was represented by just a set of axes, not discrete points. In case you think that image space and k-space in MRI are continuous, I'm going to spend a moment considering the digital k-space plane explicitly. (Like real space, k-space can also be continuous rather than digital, but that's not how MRI works.)

Here is a 16x16 plane of k-space points (see Note 3) overlaid on some actual signals to reinforce the point that we're digitizing a continuous process:

Courtesy: Karla Miller, FMRIB, University of Oxford.

The goal is to traverse the entire k-space plane, *i.e.* to use our gradients to follow a trajectory that crosses every single point (as defined by the white grid itself), acquiring data (with our receiver coil), one point for each grid coordinate, as we go. Once we have traversed the entire 2D plane (and assuming a suitable data acquisition scheme) we will have 16x16 k-space data points and will then be in a position to apply a 2D FT and get a 16x16 image out. (See Note 4.)
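Since the whole pipeline is "fill the k-space grid, then apply a 2D FT", the reconstruction step can be sketched in a few lines of NumPy (a toy illustration, not scanner code; the simulated object and the 16x16 matrix size are assumptions for demonstration):

```python
import numpy as np

# Toy "object": a 16x16 image plane containing a bright 4x4 square.
N = 16
obj = np.zeros((N, N))
obj[6:10, 6:10] = 1.0

# What the scanner measures: one complex sample per k-space grid point.
# Here we simulate that by Fourier transforming the object.
kspace = np.fft.fft2(obj)

# What the reconstruction does: a 2D FT of the *completed* k-space
# plane recovers the image (up to numerical precision).
image = np.fft.ifft2(kspace)

print(np.allclose(image.real, obj))  # True: the object is recovered
```

No single k-space point maps to a single pixel (see Note 4); it is only the complete plane, transformed as a whole, that yields the image.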

It is worth emphasizing here an important point that is often overlooked when people are thinking about k-space. We aren't actually *doing* a 2D FT to achieve the k-space representation and the pictorial analysis of the gradient actions. Rather, we are simply recognizing that a 2D FT of the image plane *is* its 2D k-space representation; hence, the action of the imaging gradients is to trace through each point in that 2D k-space. Semantics? I don't think so. The 2D FT is ultimately required to recover the actual image, but it is performed on the completed 2D k-space plane, *i.e.* only *after* the k-space plane has been properly sampled by the action of the imaging gradients. We don't need to do the 2D FT in order to understand how the pulse sequence is encoding spatial information! And that's what makes the k-space picture so valuable as a sequence comparison tool.

*All* (2D) imaging sequences, from EPI to RARE/FSE, must achieve the same completed plane of k-space before the final image can be recovered (by 2D FT). Even spiral isn't immune to this requirement! (One of the processing steps for spiral is to "re-grid" the k-space trajectory so that it is rectilinear and can then be fed into a regular 2D FT algorithm.)

With this appreciation of the intuitive meaning of k-space under your belt, it's time to see the action of the imaging gradients as they trace through the entire 2D k-space plane.

**Gradients along the x direction (again)**

Last time out we didn't consider the data acquisition at all, but in this post it's going to be reintroduced. We know we need to acquire a signal corresponding to each point on the k-space grid and an easy way to achieve that is to acquire one line of kx information at a time, then figure out how to move across ky to hit each row of kx points in turn. The sampling process is therefore essentially the same as we saw previously for the gradient echo in Part Eight:

Except that now we can see that the period of signal acquisition (analog-to-digital conversion) coincides with a journey from maximum -kx to maximum +kx, as we saw in Part Nine:

Eight data points are sampled during period 2, then the central value at kx=0, then a further seven data points during period 3. (Yeah, I know, the green arrow should have stopped one square earlier than I drew it. Sorry!) In the next post we will look at the effects of the number of k values, the space between them (delta-k) and the extreme k values attained, because these parameters determine the image resolution and field-of-view (FOV). For now don't worry about them, let's just get *an* image to be going on with.
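The sampling pattern just described can be checked numerically: kx is simply the running area under Gx (the gamma/2pi constant is ignored here). With a prephase lobe of half the readout's area, the sixteen samples land on -8 through +7 in units of delta-k. The gradient amplitudes and timings below are illustrative assumptions, not values from the post:

```python
import numpy as np

N = 16
# Gradient waveform in arbitrary units: a negative prephase lobe with
# half the area of the positive readout lobe (same amplitude, half time).
prephase = -np.ones(N // 2)   # period "1": move out to -kx(max)
readout = np.ones(N)          # periods "2" and "3": sweep -kx -> +kx

g = np.concatenate([prephase, readout])
# kx is proportional to the running integral of Gx; prepend the k = 0
# starting point so we can read off k at the start of each dwell period.
k = np.concatenate([[0.0], np.cumsum(g)])

# Samples are taken only during the readout lobe, one per dwell period.
kx_samples = k[len(prephase):len(prephase) + N]
print(kx_samples)  # eight negative values, kx = 0, then seven positive
```

Eight points on one side of kx = 0 and seven on the other, matching the figure (and the asymmetry discussed in Note 3).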

**Gradients along the y axis**

Clearly, to get off the ky=0 axis we're going to have to apply a gradient, Gy (just as was implied by the homework problem at the end of Part Nine). What's more, if we are interested in sampling a rectilinear grid then there is no point having Gy turned on when readout under Gx is happening, otherwise we will end up with a diagonal trace in k-space (as per the homework example last post) and we will spend a lot of time "missing" the grid points. (I won't discuss non-rectangular sampling in this series of posts. Perhaps I'll do a separate post on spiral scanning and its ilk at some point in the future.)

However, the -Gx period that precedes data acquisition is of no value to the data either; it just gets us to one side of kx space so that we can zip along one entire line of kx points in one go. Why not make the Gy gradient coincident with that? All that matters is that we have moved in ky prior to the readout line along kx. Handy! In this situation the actual vector in k-space - diagonal or otherwise - is of no consequence. We simply want to have arrived at the target (kx, ky) coordinate as quickly as possible prior to the start of data acquisition under +Gx.
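The point that only gradient areas matter, not the path taken, can be demonstrated with a trivial calculation (a sketch in arbitrary units; the waveform shapes are assumptions for illustration):

```python
import numpy as np

def k_endpoint(gx, gy):
    """Net k-space displacement is the time integral (here, the sum) of
    each gradient waveform. Only the areas matter, not the ordering."""
    return np.sum(gx), np.sum(gy)

n = 8  # duration of one gradient lobe, in arbitrary time units

# Option 1: sequential - play -Gx first, then Gy (twice as long overall).
gx_seq = np.concatenate([-np.ones(n), np.zeros(n)])
gy_seq = np.concatenate([np.zeros(n), np.ones(n)])

# Option 2: simultaneous - play -Gx and Gy together (a diagonal k-trace).
gx_sim = -np.ones(n)
gy_sim = np.ones(n)

# Both options arrive at the same (kx, ky) starting corner for readout.
print(k_endpoint(gx_seq, gy_seq) == k_endpoint(gx_sim, gy_sim))  # True
```

The simultaneous version simply gets there in half the time, which is why it is preferred in practice (less relaxation before readout).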

This can be achieved with the following pulse sequence (see Note 5):

Here's the corresponding k-space trajectory:

Now let's repeat the previous pulse sequence, starting from scratch, but this time we will reduce the amplitude of Gy by 1/8th. Look closely for the slight reduction of Gy; the previous value is indicated by a dashed line:

This time through we only get 7/8ths as far in the +ky direction before the data sampling commences along the kx points:

If we keep on repeating this acquisition process, stepping Gy down by a further 1/8th increment each time, then after sixteen total experiments we will have traversed every row of ky as well as every kx point along the sixteen ky rows. Here are the sixteen k-space trajectories on a single diagram (see Note 6):
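The full spin-warp loop can be sketched as a gradient-integration exercise (arbitrary units; the waveform timings are assumptions chosen to match the 16x16 example):

```python
import numpy as np

N = 16
traj = []  # one (kx, ky) path per phase-encode step

for step in range(N):
    # Gy amplitude steps down by 1/8th of maximum each repetition:
    # +1, +7/8, ..., 0, ..., -7/8 (in units of the maximum amplitude).
    gy_amp = (N // 2 - step) / (N // 2)
    # Prephase (-Gx with Gy played simultaneously), then readout (+Gx).
    gx = np.concatenate([-np.ones(N // 2), np.ones(N)])
    gy = np.concatenate([gy_amp * np.ones(N // 2), np.zeros(N)])
    # k is the running integral of each gradient waveform.
    kx = np.concatenate([[0.0], np.cumsum(gx)])
    ky = np.concatenate([[0.0], np.cumsum(gy)])
    traj.append((kx, ky))

# Every shot sweeps the same kx line; ky differs by one row per shot.
print(traj[0][1][-1], traj[1][1][-1])  # 8.0 7.0 (first two ky rows)
```

Plotting all sixteen (kx, ky) paths reproduces the raster pattern in the diagram: identical green readout lines, one per ky row.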

Let's recap for a moment. Even though the data sampling only happens as the *rows* of k-space are being traced out, *i.e.* under the x gradient, it's clear that in stepping down a row each time through the pulse sequence we also hit every *column* and thus completely sample the 2D plane. So, although the green traces above all appear to be identical, in actuality each row of kx values is slightly different. The incremented Gy gradient that precedes the Gx readout gradient (and data acquisition) has imparted a *phase increment* to the sampled magnetization. I'll come back to this point below.

**One dimension's just like the other one**

The mathematics of the second, phase-encoded k-space dimension is quite straightforward. Indeed, to have followed the trajectories above you must have mentally integrated the changing area under Gy, first by recognizing that the initial Gy gradient had the same amplitude and duration as the -Gx episode in the sequence. Thus you deduced (whether you realized it or not) that the areas under Gy and -Gx were initially equal in magnitude, making the first trajectory a diagonal journey from (0,0) to the top-left corner of the k-space matrix. You did the math without even realizing it! (That's the beauty of the k-space formalism. No math required!)

But for those of you who would like to see the equivalence of the two k dimensions, the definition of ky follows naturally from what we saw in Part Nine for kx. Recognizing that all the y gradient is doing physically is adding another phase shift to the signals, we can simply add one more phase term to the time-dependent signal equation and then recast the signal in terms of space and reciprocal space as the conjugate variables. Adding terms for y and ky to what we had in Part Nine for x and kx, we get:
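Written out (a reconstruction in standard textbook notation; rho(x, y) is the spin density, n is the phase-encode step index and tp the phase-encode duration):

```latex
S(k_x, k_y) = \iint \rho(x, y)\, e^{-i 2\pi (k_x x + k_y y)}\, dx\, dy ,
\qquad
k_x = \frac{\gamma}{2\pi}\, G_x t ,
\qquad
k_y = \frac{\gamma}{2\pi}\, n\, g_y\, t_p
```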

While the definition of ky appears different to that of kx, it's actually just because in this particular pulse sequence the y gradient amplitude was stepped in equal increments. If we define the maximum amplitude used as Gy(max) then we can write Gy(max) = (N·gy)/2, where there are N total steps (we had N = 16) and the increment is gy, and the two k variables are seen to correspond. In both cases the definition of k is simply the area under the gradient (which is G·t for a square gradient) multiplied by some constants (gamma/2pi) that we can ignore.

Note that whereas the kx value evolves with the x gradient's *duration*, for the ky (or phase-encoded) dimension the gradient time, tp, is constant and the *amplitude* is changed. In both cases, however, the areas under the gradients change in equal amounts between each point in the k-space matrix, making delta-kx and delta-ky equal. (See Note 7.)

With our k-space plane fully sampled we can now do a 2D FT to get an image. Recalling that a 2D FT is actually two 1D FTs performed in succession, we can see that Fourier transformation along ky gives us the y dimension of an image, I(y):
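The "two 1D FTs performed in succession" statement is easy to verify numerically (a sketch using NumPy's FFT routines; the random test matrix is an arbitrary stand-in for acquired k-space data):

```python
import numpy as np

rng = np.random.default_rng(0)
kspace = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))

# A 2D FT is two 1D FTs performed in succession:
# first along every kx row, then along every ky column.
step1 = np.fft.ifft(kspace, axis=1)  # 1D FT along kx
step2 = np.fft.ifft(step1, axis=0)   # 1D FT along ky

print(np.allclose(step2, np.fft.ifft2(kspace)))  # True
```

Because the 2D FT is separable, the order of the two 1D transforms doesn't matter; either way the second transform delivers the spatial dimension encoded by ky.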

**Gaining an intuitive understanding of phase encoding**

That's it! You've covered a 2D k-space plane completely and have arrived at the point where a 2D FT will yield an image. Job done, right? However, some of you - quite possibly those of you who have attended a class on MRI and have been introduced to phase encoding from a different perspective - might still be wondering what it *means* to do phase encoding, and might be struggling to understand how it is different from frequency encoding. How does phase encoding *work*?

Often it is much easier to see how frequency encoding works - as a projection of the spins in one dimension - because it seems a bit like a shadow cast from an object by a bright light. Well, phase encoding is actually the same! If you took the kx=0 column from the 2D k-space plane and FT'd it you would obtain a 1D projection of the object along the y axis. Sure, it took a lot of steps - sixteen separate acquisitions, one ky value on the kx=0 line per acquisition - to get to the point where you could do this, but the 1D profile of the kx=0 line from our 2D plot would look very similar indeed (discounting experimental issues) to a separate experiment where one used a single *frequency encoding* gradient to get a 1D profile along y.
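This projection property can be checked directly (a toy demonstration; the simulated object is an assumption for illustration):

```python
import numpy as np

# Toy 16x16 object: an off-centre rectangle.
obj = np.zeros((16, 16))
obj[3:7, 5:12] = 1.0

kspace = np.fft.fft2(obj)

# FT of the kx = 0 line of k-space...
profile = np.fft.ifft(kspace[0, :]).real

# ...equals the projection of the object along x: a "shadow" cast onto y.
print(np.allclose(profile, obj.sum(axis=0)))  # True
```

This is the discrete version of the projection property: setting kx = 0 discards all x-dependence, leaving a pure 1D encoding of y.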

How is it that these processes - frequency and phase encoding - are seen to be equivalent in the final analysis? Because, as we saw in Part Four, frequency is just the rate of change of phase! In this case, what we're doing is imparting a stepped phase into a succession of waveforms, so that when we project in a dimension orthogonal to those waveforms (by "joining the dots," or data points) we are actually constructing new waveforms in the orthogonal dimension. The only true differences between the frequency-encoded dimension and the phase-encoded dimension are practical ones, as we will see in the artifact recognition posts to come.

Final thoughts to help you understand phase encoding, and to reinforce the mathematical relationship between frequency and phase:

- A frequency encoding gradient imparts a phase that changes quickly (and continuously) with time, because the frequency encoding gradient is turned on and left on.

- Conversely, the phase encoding gradient is used discretely. A stepped phase shift is imparted to the magnetization and then the phase encoding process is suspended, leaving a "phase memory" in the magnetization for when it is ultimately read out under the frequency encoding gradient.
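The two bullets above can be made concrete with a toy phase calculation: for a spin at position y, the phase imparted by the n-th phase-encode step (amplitude n·gy, fixed duration tp) is identical to the phase accumulated at time t = n·tp under a constant gradient of amplitude gy. The symbols and unit choices below are illustrative assumptions:

```python
import numpy as np

gamma = 1.0  # gyromagnetic ratio folded into arbitrary units
gy = 0.5     # phase-encode increment / frequency-encode amplitude
tp = 2.0     # duration of one phase-encode step
y = 3.0      # spin position along y

n = np.arange(16)  # step index (phase) or sample index (frequency)

# Frequency encoding: constant gradient, phase grows continuously in t.
phase_freq = gamma * gy * y * (n * tp)

# Phase encoding: fixed duration tp, amplitude stepped as n * gy.
phase_phase = gamma * (n * gy) * y * tp

print(np.allclose(phase_freq, phase_phase))  # True
```

Same phases, acquired differently: continuously within one readout for frequency encoding, or one step per repetition for phase encoding.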

Next post, a quick review of image field-of-view and resolution as they appear in k-space. Until then I would strongly urge you to consider the Further Reading, below.

-------------------

**Further Reading:**

*Introduction to Functional Magnetic Resonance Imaging: Principles and Techniques* (2nd edition) by R.B. Buxton.

For a description of how phase encoding works, read the section entitled "Phase encoding" on pages 214-6. Rick shows how frequency and phase encoding are equivalent in gradient terms and explains, with the assistance of some simple figures, how one can consider little chunks of gradient history to understand the similarities of the two processes. If you understand this section you will have developed an intuitive feel for gradient spatial encoding in MRI! Then keep reading for a nice description of k-space on pages 216-9.

*Functional Magnetic Resonance Imaging* (2nd edition) by S.A. Huettel, A.W. Song & G. McCarthy.

Another description of 2D spatial encoding (frequency and phase encoding) using the k-space formalism appears on pages 109-117. You will also be introduced to spatial frequencies in k-space, should this whole consideration of k-space have left you slightly numb and wondering what reciprocal space is all about.

**Notes:**

1. One possible answer to the k-space trajectory problem at the end of Part Nine:

It's a junk sequence, purely to illustrate the convenience of the k-space representation; it doesn't have any specific role or name.

Note that in many k-space diagrams there is no information on the speed through any segment. In my quiz diagram I implied that each arrow represented a similar time for each gradient episode, but that doesn't have to be the case. Thus, another correct pulse sequence diagram could have used these relative magnitudes to keep the time integrals of the gradients consistent with the k-space representation:

In this case the speed through the diagonal trajectory would be half the prior version. Does it make a difference? Experimentally, yes. (There would be more relaxation effects in the longer duration variant.) But the k-space values achieved in each of these two pulse sequences would be the same, leading to similar spatial encoding until the effects of relaxation are considered.

2. Interestingly, in the first Fourier imaging paper the authors presented both 2D and 3D imaging variants. At the time, slice selection hadn't been invented so the 2D method was a bit of a kludge in some respects, at least when compared to today's sequences. The authors used the same type of sample as Lauterbur had used - a parallel arrangement of cylindrical tubes of water - to eliminate the requirement of some sort of slice for the third dimension; symmetry of the sample did the trick instead.

Here is one of the very first 2D Fourier images of water-filled tubes, with "pixel" intensities increasing in the order (blank), **.**, *, A, B, C, D, E:

From: Kumar, Welti & Ernst, NMR Fourier Zeugmatography. *J. Magn. Reson.* 18, 69-83 (1975).

Many of the early imaging papers (1970s and early 1980s) used alphanumeric plots for images because gray scale printers were hard to come by back then. Don't laugh. One day your children will think your iPhone 4 looks like a flint hand axe.

Given that most modern MR imaging methods derive from this first use of a multi-dimensional Fourier transform to convert from signals to images, this seminal work was probably sufficient to merit a Nobel prize by itself. But for largely political reasons, it appears, the Nobel committee waited until 2003 to give a prize for MRI, awarding it jointly to Sir Peter Mansfield and Paul Lauterbur in the category of Physiology or Medicine. Was Richard Ernst miffed? Probably not. He'd already snagged the 1991 Nobel Prize in Chemistry for his contributions to MR spectroscopy, most notably the first use of the Fourier transform (in one dimension) to interpret the signals in an NMR spectrum. (Intriguingly, there is mention of his work in MR imaging in the press release announcing the 1991 Nobel award.)

For those of you interested in the history of MRI, there's a short essay courtesy of the European Magnetic Resonance Forum.

3. While there are 16x16 "pixels" of this k-space plane, we actually sample only at discrete points, as represented by the white grid lines and the yellow border. Thus, there are actually 17x17 points in the grid. In practice we tend to sample the kx=0 and ky=0 axes and then acquire 8 points on one half and 7 on the other, for a total of 16x16. The small asymmetry isn't usually a problem, nor does it usually matter which halves of k-space have the 8 or 7 samples. Why don't we just acquire 17x17 points? These days it's mainly for historical reasons. Back when computers were slow it was generally desirable to apply "fast" FT (FFT) algorithms that require power-of-two input sizes to work. Honestly, don't worry about it. It's a tertiary effect when it comes to creating artifacts in images!

4. There isn't a one-to-one correspondence between any one point in image space and any one point in k-space. It's a more nuanced reciprocal relationship. I'll mention some of these issues in passing in this and subsequent posts, but otherwise I leave these issues to further reading.

5. In practice it is common to start with -Gy values and cover the negative portion of ky-space first. It shouldn't matter which is performed for spin warp imaging. The direction can have important consequences for EPI, but I'll mention those when we get to that point. For now, top-down or bottom-up can be considered equivalent. We just want to fill the plane.

6. As already noted, there will be a small asymmetry in k-space when utilizing the central lines of k-space and an even number of data points per dimension. Thus, in my example I have eight points on one half of k-space and seven on the other, plus the central lines. Note that in drawing the green kx lines I actually overshot by one data point in the figures! In practice that last (right-hand) column of data points wouldn't be acquired.

7. In this example, delta-kx and delta-ky were equal and so were the maximum extents of k-space, i.e. the k-matrix was square. That yields a square (16x16) image with the same field-of-view and resolution in the two dimensions. This doesn't have to be the case, of course. In the next post I will cover the FOV and resolution issues in detail, albeit using square k-space examples. Just remember that images don't have to be square and there can be occasions when a rectangular image is preferable.
