Monday, April 16, 2012

Common persistent EPI artifacts: Receive coil heterogeneity

 
The RF transmit (Tx) and receive (Rx) duties have been performed by separate coils on most commercial clinical scanners for about a decade. These days it's rare to find a combined Tx/Rx coil in use for brain imaging, although they do exist. (We used one at Berkeley until 2008, on a Varian 4 T scanner.) The separation of Tx and Rx is generally regarded as a good thing because it means a large, body-sized coil can be used for Tx, thereby providing a relatively homogeneous transmission field over a region such as a human head, while a smaller (head-sized) coil can be used for Rx, thereby providing the higher intrinsic SNR that comes from using the smallest possible magnetic field detector. (As a general rule, the smaller the coil, the higher its SNR close to the coil, because the sensitivity drops off with the reciprocal of distance.)

Indeed, most modern Rx coils aren't single electronic entities at all, but arrays of smaller coil elements put together in a "phased array." The entire phased array acts as a single coil only when the individual signals from individual channels are combined in post-processing. (Each coil element has its own receiver chain - preamplifier and digitizer - allowing separate treatment of signals until after acquisition is complete.) The details of these phased array coils and the combination of the separate signals aren't important at this point, although in subsequent posts they will become important. All we need to focus on right now is simply the fact that a multitude of individually received signals will be combined to produce the final MR signal. (See Note 1.) So, in this post we will consider the receiver characteristics of having multiple discrete coil elements.


Receive fields for phased array coils

Why is the modern Rx head coil a collection of separate circuits? A head-sized, single-circuit Rx coil would detect noise from the entire head, whereas redesigning the coil into a succession of small elements reduces the noise "field of view" for each element. Then, by combining the elements in an appropriate manner, the signal characteristics can be returned (as if a single circuit coil were being used) but with a reduced total noise level in the final images.

It should be relatively obvious that a small wire loop would detect signal with a localized sensitivity profile. The farther away the coil is positioned from the source of an MR signal - from a brain, say - the lower will be the voltage induced in that coil by the available magnetization. We don't need to know the particular mathematics of the receive profile - it's massively complicated for modern Rx coils in any case - so suffice it to say that the signal-to-noise ratio falls off with the distance of the coil from the magnetization inducing that signal. Closer is better (in SNR terms).

For brain imaging, then, it follows that signal from frontal lobe will primarily be detected by loops at the top of a head RF coil, whereas signal from occipital lobe will primarily be detected by loops at the bottom of the coil. Midbrain regions are where things get most interesting, from an electrical engineering perspective, because we need all the coil's elements combined to get appreciable sensitivity. Thus, we can state another general property of phased array coils: at the spatial scales defined by brain anatomy, a phased array coil offers a heterogeneous receive profile. How heterogeneous? is the important question.

The figure below, taken from Wiggins et al., demonstrates the SNR that can be expected from a typical brain for three different phased arrays. These sensitivity maps don't depict precisely how the Siemens product 12-channel and 32-channel head coils will perform, but we can use this comparison to get a good idea of what we should expect to see in our EPIs, because the general properties are consistent: the larger the phased array (i.e. the higher the number of independent elements), the smaller the individual detecting loops and the more heterogeneous the receive profile:


We see spatial heterogeneity in the SNR because the signals from the individual elements don't add up uniformly. The spatial bias of each element is preserved to some extent in the final image, once all the elements have been added together. Incidentally, the final image is usually attained by taking the square root of the sum-of-squares (root-SoS) of all the individual coil elements/channels. (See Note 2.) Although the combination method will determine the actual spatial signal and noise properties of the overall images, the particular method of combination isn't germane to today's post. All we need to recognize for the time being is that the combination, however it's done, doesn't eliminate the receive field heterogeneity.
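As a concrete illustration of the root-SoS combination just mentioned, here's a minimal NumPy sketch. The channel count and matrix size are arbitrary, and random complex numbers stand in for real channel images - this is just the arithmetic, not anything from the scanner's reconstruction software:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 32 complex channel images of a 64 x 64 slice.
# Each channel sees the object through its own spatial sensitivity.
n_channels, ny, nx = 32, 64, 64
channel_images = (rng.standard_normal((n_channels, ny, nx))
                  + 1j * rng.standard_normal((n_channels, ny, nx)))

# Root sum-of-squares combination over the channel axis: each voxel of
# the final magnitude image pools the magnitudes of all the channels.
combined = np.sqrt(np.sum(np.abs(channel_images) ** 2, axis=0))

print(combined.shape)  # (64, 64)
```

Note that the combination happens voxel-by-voxel: a channel's spatial bias survives into `combined` because nothing in the sum re-weights the channels by position.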


Let's take a look at receive field heterogeneity for two Rx coils that are available today, on a Siemens Trio. The following figure shows 3 mm axial EPI slices through a spherical, homogeneous gel phantom (the FBIRN phantom). The images on the left were acquired with a 12-channel Rx coil, those on the right using a 32-channel Rx coil. A single color scale (Rainbow in Osirix, increasing intensity BGYR) is used for the display so that the signal levels can be compared directly:

Left: 12-channel coil. Right: 32-channel coil. Same color scale for both parts. The slice locations don't match precisely in the logical frame (because the phantom had to be repositioned for the coil swap), but identical acquisition parameters were used and the slice packet was positioned as similarly as possible.

I tried to place the slices in a consistent position relative to the phantom so that individual slices, as well as the coverage in the slice direction (the magnet Z axis, the head-to-foot direction), can be compared between the coils. I generally succeeded, but with a slight offset in the lab frame. Sorry about that. I got close!

The receive field bias is especially strong in the posterior and superior directions - in-plane for these axial slices - for the 32-channel coil, in good agreement with the figure from Wiggins et al. The 12-channel EPIs have pronounced in-plane heterogeneity too, however, and the pattern reveals the locations of the individual coil elements.

Axial heterogeneity - in the head-to-foot direction - is also clearly visible for both coils. Sensitivity low down, where the cerebellum and brainstem would be located in a typical adult brain, is considerably lower than for cortical regions. Again, this drop-off is consistent with the SNR plots in the figure from Wiggins et al. And we should of course expect the sensitivity to decrease markedly towards the front of either coil, for the simple reason that the coil elements terminate there to allow insertion of the subject!

Now, you might be wondering why someone might want to increase the number of elements in a phased array, when doing so means decreasing the size (and receive field) of the individual elements, which results in a more pronounced receive field heterogeneity. The periphery - cortical regions for a brain - clearly benefits from a larger phased array. And, although it is difficult to see in either of the above two figures, it turns out that there is usually an SNR boost for deeper regions, too. At least, that's the theory.


Removing receive field heterogeneity with prescan normalization

It's time for an ugly truth: phased array RF coils were not designed specifically to do fMRI. (Yeah, I know. I was shocked, too.) It turns out that there's a fair bit of money selling MRI scanners to do stuff like radiology for health care. Not everyone who buys an MRI scanner uses it (almost exclusively) for fMRI. Sadly for us in fMRI-land, the criteria to get better diagnostic images aren't necessarily the same as those that are optimal for fMRI. For instance, the SNR boosts available from the separation of Tx and Rx, coupled with the further boost from a phased array Rx coil, make for considerably better anatomical scans than were ever feasible with the best single-channel Tx/Rx coils. Furthermore, phased array coils permit parallel imaging methods (such as GRAPPA and SENSE) that can greatly accelerate time-consuming radiological scans. In short, then, the benefits to medicine of larger phased array coils have been considerable. And that's where the money is.

However, some tricks have been necessary to ensure that radiologists aren't having to fight through Rx coil heterogeneity before they can determine whether a particular feature is biological or not. Specifically, the Rx field heterogeneity is removed in a step called "prescan normalization." This operation is enabled by default for conventional anatomical scans, including the ubiquitous MP-RAGE scan that most fMRIers use underneath their activation maps.

The normalization process involves acquiring an additional pair of low resolution scans, one with the head coil receiving signals and the other with the body coil receiving signals instead. (These scans usually use a fast gradient echo acquisition, such as FLASH. I may write more about the details of the prescan acquisition in a later post.) The body coil is used for RF transmission in both cases. Then, under the assumption that the very large body coil's receive profile is homogeneous across a head-sized object, when the prescan head coil image is divided by the prescan body coil image - perhaps after some smoothing to ensure a good match - the result is essentially an image of the receive field of the head Rx coil. This image may then be used to normalize (divide into) a target image, such as an MP-RAGE or an EPI, thereby removing the receive field heterogeneity in the final image. (See Note 3.)
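To make the arithmetic concrete, here's a minimal NumPy sketch of the division steps. The function name, the threshold, and the toy phantom are all mine - this is not the Siemens implementation, which also smooths the prescans and acquires them at low resolution:

```python
import numpy as np

def prescan_normalize(target, head_prescan, body_prescan, thresh=0.05):
    """Sketch of prescan normalization. Assumes all three images are
    registered and interpolated onto a common grid; the smoothing of
    the prescans is omitted."""
    # Receive field estimate of the head coil: head prescan / body prescan.
    rx_field = np.divide(head_prescan, body_prescan,
                         out=np.ones_like(head_prescan),
                         where=body_prescan > thresh * body_prescan.max())
    # Divide the field out of the target, masking voxels where the
    # field estimate is too small to divide safely.
    mask = rx_field > thresh * rx_field.max()
    return np.where(mask, target / np.where(mask, rx_field, 1.0), 0.0)

# Toy check: a uniform object seen through a made-up receive field
# that peaks near one "coil element" in the corner of the FOV.
ny = nx = 64
y, x = np.mgrid[0:ny, 0:nx].astype(float)
rx_true = 0.3 + np.exp(-((x - 8.0) ** 2 + (y - 8.0) ** 2) / 400.0)
uniform = np.ones((ny, nx))
head = uniform * rx_true      # uniform phantom as seen by the head array
body = uniform.copy()         # assume homogeneous body-coil reception
corrected = prescan_normalize(head, head, body)
print(corrected.min(), corrected.max())  # both 1.0: flat image recovered
```

For a uniform object and a perfectly homogeneous body coil the correction recovers a flat image; with real data the same division leaves behind anything the two prescans have in common, which is exactly why the Tx heterogeneity survives, as discussed below.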

Since this post series is focused on recognizing EPI artifacts I'm not going to show any anatomical scans with and without prescan normalization. (Maybe in a separate post, as I mentioned.) So let's look at EPIs acquired with and without prescan normalization. Again, I have used the FBIRN phantom to provide what should be a homogeneous image.

Here are images acquired with the 12-channel head coil on my Siemens Trio. On the left are the raw EPIs, on the right are the same images normalized by Siemens' default prescan normalization technique (Siemens users see Note 4):

12-channel head coil receive patterns before (left) and after (right) a prescan normalization.

And here are regular and normalized EPIs obtained from the same phantom but using the 32-channel head Rx coil:

32-channel head coil receive patterns before (left) and after (right) a prescan normalization.

The normalized images from the two coils aren't identical, but they are more similar after normalization than before. In both cases, however, the normalized images retain a circularly symmetric residual contrast. We don't quite end up with the perfectly smooth, homogeneous image of a (homogeneous) gel phantom we might want. The short answer to why this arises is that it's the transmit field heterogeneity (plus, perhaps, a tiny bit of magnetic susceptibility). And it's not especially problematic; I set the contrast of the images independently, and intentionally, to highlight the pattern here. But if you're curious why the circular patterns are slightly different for the 12-channel and 32-channel coils, see Note 5.


Prescan normalizing EPI of the brain

Okay, so now you can recognize Rx field heterogeneity in a phantom. What about in a brain, where it's more likely to count? That is the point of this series, after all.

Below are EPIs acquired from a volunteer. (Please ignore the small black spot. It's benign and has been checked out, the subject is fine. We refer to it as his "internal fiducial.") As with the phantom images presented in the previous section, I set the gray scale intensity separately for each block of EPIs, to accentuate the Rx field heterogeneity before (left) and after (right) normalization:

12-channel head coil receive patterns before (left) and after (right) a prescan normalization.

32-channel head coil receive patterns before (left) and after (right) a prescan normalization.

As expected, the results from a brain support the prior observations made in the phantom data. Without prescan normalization, the 12-channel EPIs are more homogeneous than those obtained with the 32-channel coil. It's difficult to make out much receive field heterogeneity in the raw 12-channel images now, because anatomical contrast dominates. We know it's there, though, even if it's difficult to see by eye. How? Because the prescan normalized images are different, and flatter!

The 32-channel coil data clearly benefits a great deal from prescan normalization. But there is more residual heterogeneity in the normalized images from the 32-channel coil than from the 12-channel coil. The prescan normalization is helping, but it's not a perfect fix.


Using prescan normalization for fMRI

And so we get to the end of the post. You should have a burning question in mind by now: "Should I be using prescan normalization of EPI for fMRI?"

There is a small but growing literature on the use of prescan normalization for fMRI, but as yet the benefits haven't been proven beyond a reasonable doubt. (And you know my feelings on avoiding poorly validated methods!) For example, Kaza et al. show that there can be a statistical benefit for deep brain structures, but a penalty for cortical regions when using normalized 32-channel coil EPI versus 12-channel coil EPI for fMRI. And an abstract from Hartwig et al. shows that the effects of motion on 32-channel time series data can be reduced somewhat with prescan normalization, although no fMRI data was presented in that work. (See Note 6.)

At this juncture there is increasing evidence that prescan normalization should be used for fMRI if a phased array coil is used for reception. In a future post by my colleague over at MathematiCal Neuroimaging, we hope to show that using prescan normalization is expected to be of benefit to both 12-channel and 32-channel data, in particular when motion correction (rigid body realignment) is used subsequently on the time series. Without wanting to get too distracted with the punchline in the absence of any supporting data today, our tentative conclusion (at the moment) is that if the typical affine motion correction algorithms are applied to the time series data then, in the presence of appreciable subject motion, a prescan normalization can be useful for 12-channel data and might be considered essential for 32-channel (and higher) data. But, attempting to fix Rx field heterogeneity is a broader topic than simply recognizing that it exists, so I'm going to leave it there for the time being. Consider this post as an introduction to a more involved subject that will be covered in-depth in the months to come.

____________________



Notes:

1.  Parallel imaging methods such as GRAPPA take advantage of these multi-element, or multi-channel, phased array coils, using the inherent receive field heterogeneity as a component of spatial encoding. There is a brief introduction to the GRAPPA method in this post, but you should be warned that GRAPPA-EPI (and other parallel imaging variants of EPI) is more motion-sensitive than single-shot, unaccelerated EPI.

2.  There are alternative channel combination methods to the root-SoS approach, but apparently root-SoS is considered to be best for fMRI applications. I don't really know very much about it, to be honest. I was warned off the Siemens "adaptive combine" method by an applications scientist and haven't yet got around to testing it out properly. However, one general point that should be made is that no matter how the individual channels are combined to yield the final image, there will always be a receive field heterogeneity. It's there by design! The whole idea of a phased array coil is to use small coil elements with each element detecting signal (and noise) from a confined region near to that element. Spatial bias is thus inherent.

3.  There are a host of assumptions underlying the prescan normalization process. The biggie, in common with any other prescan technique, is that there is minimal or no motion, either between the two reference scans or between the reference scans and the target image(s). Motion will cause some amount of mismatch, thereby imperfectly removing the receive field heterogeneity. A related assumption is that the prescans obtain signals from all the points in space for which signal exists in the target image. This assumption can be satisfied by using prescans that have only slight T1 weighting and minimal T2 (or T2*) weighting so that all brain tissues exhibit signals. Note that it's not imperative for the target image to have signal in all brain regions. Dividing zero by a positive number is acceptable; the reverse is not! 

4.  To enable prescan normalization for EPI, go to the Resolution > Filter tab and select the Prescan Normalize option. You can save to the database the raw, un-normalized time series as well as the normalized data by selecting the "Unfiltered images" option underneath Prescan Normalize. I would strongly advise doing this because then you have a risk-free decision with regard to prescan normalization. You get raw data and normalized data, meaning you can decide during post-processing which one to use. There is one caveat: this option won’t save the un-normalized data if you have the MoCo option selected on the BOLD card. In that case both time series saved to the database will be prescan normalized, but the second time series will also have been motion-corrected via a realignment algorithm.

5.  The astute amongst you will spot my sleight of hand just now. If the residual pattern is Tx heterogeneity, and the same Tx coil - the body coil - is used in both cases, then why isn't the circular pattern the same after normalization? Well, it's because the Rx field of the 32-channel coil varies in space far more quickly - it has more pronounced heterogeneity - than the Rx field for the 12-channel coil. Think in one dimension for simplicity. Imagine the Rx profile was linear for a moment. Then, division by a linear correction profile, even an imperfect one, will leave a linear residual function. But as the profile becomes more complicated, with more non-linear terms, errors in the division will leave points of inflection in the residual. So, the 32-channel residual heterogeneity contains the Tx field profile and errors that are slightly larger than for the 12-channel images.
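If you want to convince yourself of this argument, here's a toy 1-D NumPy sketch: divide each of two profiles - one slowly varying (12-channel-like), one rapidly varying (32-channel-like) - by a slightly mis-registered estimate of itself, and compare the leftover heterogeneity. The profile functions and the ~2% shift are made-up numbers, purely for illustration:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)

# Toy 1-D receive profiles (invented, not measured fields):
smooth_profile = 1.0 + 0.5 * x                # slowly varying, 12-ch-like
wiggly_profile = 1.0 + 0.5 * np.sin(6.0 * x)  # rapidly varying, 32-ch-like

def residual_after_division(profile, shift=0.02):
    """Divide a profile by an estimate of itself that is mis-registered
    by a small shift (~2% of the FOV), mimicking an imperfect prescan."""
    estimate = np.interp(x, x + shift, profile)  # shifted copy of the profile
    return profile / estimate

# Peak-to-peak size of the heterogeneity left over after "correction":
print(np.ptp(residual_after_division(smooth_profile)) <
      np.ptp(residual_after_division(wiggly_profile)))  # True
```

The same small registration error leaves a far larger, more structured residual when the profile varies quickly in space, which is the 32-channel coil's situation.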

6.  An issue with the prescan normalization of EPI, as presently implemented on Siemens scanners, concerns distortion. The prescans are acquired distortion-free whereas we know the EPI suffers from severe distortion in the phase encoding dimension. Hartwig et al. solve this problem in an obvious way: they derive their normalizing image from two reference sets of EPI, instead of FLASH, thereby matching the distortion of the normalizing image and the target data. I don't see how using EPI can be any worse than using FLASH, and in the absence of motion I can see distinct advantages to using EPI instead of FLASH. However, a caveat concerns the "regions of support" in the normalizing image. If the EPI is acquired at a typical TE for fMRI then there will be regions of dropout - no signal, therefore no information on the Rx field - in some brain regions. What seems to be needed is an ultra-short TE version of the EPI that will be subsequently acquired for fMRI. This would both match the distortion properties and retain (we hope) signal in all the brain regions that would produce signal in a FLASH scan.


References:

GC Wiggins et al. "96-channel receive-only head coil for 3 tesla: design optimization and evaluation." Magn. Reson. Med. 62(3), 754-62 (2009).

E Kaza, U Klose & M Lotze, "Comparison of a 32-channel with a 12-channel head coil: are there relevant improvements for functional imaging?" J. Magn. Reson. Imaging 34, 173-83 (2011).

A Hartwig et al., "A simple method to reduce signal fluctuations in fMRI caused by the interaction between motion and coil sensitivities." Proceedings ISMRM 19th Annual Meeting, p 3628 (2011).
 
 

12 comments:

  1. Now I get your earlier questions. I think you're overlooking TSNR vs SNR in this post. If prescan normalization is just a division of a constant term across all volumes, it won't change voxel-wise TSNR at all. The images would be visually more appealing, but the time series would be the same.
    A coil with greater SNR inhomogeneity might be more sensitive to motion since a spatial shift of the brain would alter the magnitude of data collected at the same tissue location. Normalization might help with that because the frame of reference is the coil rather than the head, but there are a bunch of assumptions that might limit the benefit.

  2. The crux is when motion correction is used (which nearly everybody does). Then, the presence of receive field heterogeneity "confuses" the algorithm and makes it appear like the coil is moving about the head. It's not just a cosmetic tweak. (The receive field has an "anchoring" effect on the motion correction algorithm.)

    There definitely can be a TSNR benefit in the time series post-motion correction with prescan normalization, and we'll show that soon. But the big question is whether there's a benefit to fMRI stats. That's where the ambiguity in the current (small) literature resides.

    But I don't want to get too deep into the benefits (or otherwise) of prescan norm on time series in this post because we don't have the full story yet. I only mentioned the normalization as a precursor to a much bigger subject! It will build upon the ability to recognize the problem, which is the main intent of the post.

    Cheers!

  3. "The receive field bias is especially strong in the posterior and superior directions - in-plane for these axial slices"...

    I think you mean posterior and anterior here?

  4. Knew I should have stayed away from "medical-speak," James! I was trying to indicate that in-plane the bias is strong towards the top of the coil in some slices but also strong towards the bottom in others, and that transition happens as the slices get "hotter" overall towards the back of the coil. Not very easy to describe, hence the pic! Apologies for any confusion! I should stick to pix and not try to use radiological terms, clearly.

    Incidentally, you have reminded me of a point I didn't make concerning the 32-ch profile especially: it very much depends on sample position. Had the phantom been placed higher up in the coil, where a subject's forehead might sit (because a subject's head is bigger than the FBIRN phantom), the signal from anterior brain regions (frontal lobes) becomes very "hot" indeed relative to the rest of the brain. (There is a concomitant bias against the occipital lobe... much lower signal there than suggested by my one FBIRN phantom result.)

5. There is absolutely no physical reason why you should or should not use prescan normalize. The tsnr, fmri analysis etc will be completely unaffected. EXCEPT, as you point out, for registration - both motion correction and registration to an anatomical scan. This is not a physics/image recon question, this is a post-processing problem and fmri analysis packages need to step up and change the cost-functions they are using for motion correction. For your tests, you should retro-recon a few data sets with/without (or save the unfiltered data), and then run exactly the same data through your pipeline. If the answers are not the same, then the analysis pipeline needs to be fixed.

    1. Anon.

      There are two different effects being produced by the receive field heterogeneity in the presence of motion and motion correction.

      The first is the anchoring effect and it has consequences with respect the performance of motion correction algorithms. This effect is due to the fact that the receive field contrast is fixed relative to the scanner. So the images that are produced are the product of the contrasts fixed relative to the brain and the contrast fixed relative to the scanner. As you know, the retrospective motion correction algorithms try to align all the volumes of a time series to some chosen reference volume. This is done by moving the volumes of the time series until they line up according to some cost function. When a significant part of the overall contrast is motionless (the scanner-fixed contrast) while the rest of the contrast is moving then the motionless contrast, according the the presently used retrospective motion correction methods, would tend to "anchor" the time series - ie make them appear to the motion correction algorithm to have moved less. Perhaps a different cost function based on edge detection for example could eliminate this anchoring effect but my understanding is that there is a good reason why those methods are not used. They may not be reliable.

      The second effect occurs even when motion correction is perfect. We call this the RFC-MoCo effect. When the motion correction is perfect then the times series of image volumes has a multiplicative time-varying contrast due to the receive contrast, motion and motion correction. This time-varying contrast can potentially have consequences with respect to tSNR and spatial temporal correlations.

Now, motion correction is far from perfect, so the degree to which the RFC-MoCo effect is occurring in any given "motion-corrected" data set is not easily determined. If the motion correction is not working well then the RFC-MoCo effect will not arise (which is not good news, because the other motion problems are then still there) and consequently prescan normalization would not be expected to make much difference.

BUT if motion correction were perfect then prescan normalization COULD have a beneficial effect, depending upon how much motion there was. For motion on the scale of a voxel I would expect some benefit, but for motion less than the scale of a voxel I would expect little benefit. Motion of less than a voxel can still cause significant reductions in tSNR and erroneous correlations, but if the prescan and the time series data have resolutions that are not capable of seeing such small motion then no benefit from applying the prescan norm can be expected.

    2. In that last paragraph I should better have written: BUT if motion correction were perfect or close to perfect then ....

  6. @ Anon: I was trying to avoid getting into the deeper question of Rx heterogeneity as it pertains to fMRI - I probably should have stopped with a demonstration of the phenomenon of Rx heterogeneity, for recognition purposes only - but the implication is that the motion correction that people use in the processing stream does seem to be a problem if it conflicts with the anchoring effect of the Rx field. I wanted to avoid pointing the finger at the image processing. But if that's what people are doing, and it's conflicting with the way the data are acquired, someone needs to be looking at it! Way outside of my expertise, I'm afraid!

  7. make your subject swallow the coils ...
    http://onlinelibrary.wiley.com/doi/10.1002/mrm.20365/full

  8. @ basile: Yeah, I guess if you can get someone to accept a shim coil in the mouth then adding an active Rx coil element to a "bite bar" could work! Though I should imagine the number of willing participants would drop... Good thinking, though!

  9. We recently encountered this signal gradient issue with a 64-channel coil, acquiring fMRI datasets without prescan normalize. We would like to acquire future fMRI datasets with prescan normalize. Is it possible to retrospectively estimate and apply this normalization to fMRI datasets already acquired without this parameter enabled?

    Thank you for the very detailed post regarding this topic.

1. Oh boy, that will be tough. I can think of only one scenario whereby you could do a prescan normalization correctly now, but there are several requirements. The first is that the 64ch coil would have had to be located at *precisely* the same point inside the magnet (i.e. relative to the gradient isocenter) for all scans, and you would need to know that position. So if you always placed the coil at isocenter (to within a mm or two) then maybe you're still in business.

      Next, you would need to be able to get from the DICOM headers the gradient coordinates used for each scan, assuming you didn't use a slice prescription fixed in the magnet frame of reference. Common procedure is to move to the patient brain reference frame, so you will need to get those rotations and translations relative to the magnet (lab) reference frame.

If you had all of the above then, in principle, you could put a phantom into the 64ch coil, being careful to get signal in all regions that are detected in any of the EPI data sets - in other words, you need signal to literally fill the entire coil, to be safe - and then you could acquire a prescan normalizing map that could be applied to your old data. That way you would correctly match the receive field map to the receive field that existed when you acquired the EPI data.

      If you know roughly where the 64ch coil was in the magnet, say to within 10-20 mm, which is typical, and you can still decode the brain reference frames from the real data to determine the lab reference frame for the image planes, then you could certainly give this a go. You might do a simple test first, e.g. by intentionally acquiring a new set of EPI without prescan norm (note the image positions to save having to grab them from the header later), then acquire a post hoc receive field map with a phantom and a new setup. Acquiring your own prescan norm map - the receive field - is an easy step. If you decide to give it a go do please drop me a line, I'll gladly help you test it out!
