Saturday, April 9, 2011

Shim and gradient heating effects in fMRI experiments

Another week, another tangent. At least this one is directly related to the artifacts that I promise to get back to soon!

In this post I will review the nature and typical magnitudes of heating effects in a scanner being used for fMRI. Ever wondered why you sometimes observe discontinuities, or 'steps,' in a time series comprising the concatenation of multiple blocks of EPI data? What causes these discontinuities? Are they a problem for fMRI? And are there ways to reduce or eliminate these discontinuities at the acquisition stage? To begin with, some background.

Electrical energy in, thermal and vibrational energy out

When you run the gradients to generate images, a lot of heat is produced through vibrations (friction) of the gradient coils - the Lorentz forces that result from putting electrical current through copper wires immersed in a magnetic field - as well as through direct (resistive) electrical mechanisms. Much of that heat is removed via water cooling inside the gradient set. Water typically enters at about 20 C and may exit the scanner as high as 30 C. Modern gradient designs are pretty efficient at removing heat from the gradient coil. (I've done throwaway tests on my Siemens Trio that suggest the steady state temperature of the return cooling water is achieved after about 15 minutes of continuous scanner operation.) But - and this is the crux of this post - the heat imparted to the scanner isn't removed at precisely the same rate that it is being produced. In other words, the scanner is unlikely to be in a truly steady thermal state while you're using it.

The heating and active cooling of the gradient coils is the first consideration. There are also passive shims in the magnet. These passive shims are long sections of iron placed along the magnet axis, between the inner surface of the cryostat and the outer surface of the gradient coil, and used (along with superconducting shims inside the magnet itself) to provide the first level of homogeneity correction when the magnet is installed. We then use the "room temperature" resistive copper shims of the gradient coil (which are actually the X, Y and Z gradients for the linear shim terms) to clean things up to the few parts-per-million homogeneity that exists in the center of the magnet before a subject is inserted.

So far so good? The problem is, there's no active cooling of those passive iron shims. Instead, the only cooling is also relatively passive, in the form of conductive cooling via the magnet cryostat or back into the gradient coil itself. Of course, the gradient coil is also the source of the heat, so when the scanner is running the amount of conductive cooling available to the passive shims is pretty low. (Convective cooling is very inefficient too because there are few ways for air to get in and out of the passive shim trays.) And all this means that the heating of the passive shims can take a long, long time to build up, and even longer to dissipate.

Slowly does it.

Let's look at some temporal stability data from real systems. The first example, from a paper by El-Sharkawy et al., shows the temporal stability of a 1.5 T GE scanner at "rest," i.e. no gradient pulsing, just the small constant (DC) currents that are powering the room-temperature shims:

From the figure legend in the paper: "The periodic nature is attributable to the air conditioning cycle in the electronics room." The thermal stability and hence the drift of the magnetic field is pretty good, just fractions of a ppm per hour. (The trend in the data is probably the true magnetic field drift, caused either by very slight temperature variations inside the cryostat or perhaps by changes in the atmospheric pressure outside the lab. Yes, superconducting magnets can track the weather!)

Now let's look at the situation when the scanner is acquiring images. In the second figure, from Foerster et al., we see that running the scanner heats it up. Two different EPI sequences were used, one "loud" and one "quiet," i.e. two different gradient patterns that generated different vibration effects (and hence, heating) as well as different acoustics:

Even though the data in the second figure are from a 4 T magnet, it's interesting to note the magnitudes of the y axes on the two figures above; the frequency drift due to the EPI is two orders of magnitude larger than the innate drift of the scanner when it's not running! Note also that the drift continues to increase for the two hours the scanner is running - the true thermal steady state isn't reached - and that it then takes 6 hours from the time imaging ceases until the heat is finally removed from the passive shims and the drift returns to its baseline.

Why should the magnet's frequency shift with heating? Essentially, it's because any change to the magnetic field homogeneity that has a component along Z also shifts the magnet's center frequency (which is directly proportional to B0). Heating is, in effect, changing the shimming of the magnet as well as causing the center frequency to move.

Why might this be an issue for fMRI?

As you might suppose, whether or not a heat-induced frequency shift and shim field alteration is a concern for fMRI depends on the magnitude of the effect. Consider the bandwidth of the phase-encoded axis of your EPI. A typical EPI might be acquired with an echo spacing of 0.5 ms and 64 samples (pixels) in the phase encoding direction. That works out to a bandwidth of (1/0.0005)/64 ~ 31 Hz/pixel. Now look again at the y axis of the figure immediately above. Smell a rat? That's right, the frequency drift due to heating could be at the pixel level. And as soon as you recognize that this frequency shift is equivalent to an actual shift of the image by that amount in the phase encoding axis, I'm sure I've got your attention. Heating is going to cause the image to move (translate) in the phase encoding direction.
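The arithmetic above can be sketched in a few lines. This is just the back-of-the-envelope calculation from the text; the field of view is an assumed, illustrative value.

```python
# Back-of-the-envelope: apparent image shift caused by a frequency drift.
# The echo spacing and matrix size are the example numbers from the text;
# the FOV is an assumed value for illustration.
echo_spacing_s = 0.0005   # 0.5 ms echo spacing
n_phase = 64              # samples (pixels) in the phase encoding direction
fov_mm = 192.0            # assumed field of view along phase encoding

# Bandwidth per pixel along the phase-encoded axis
bw_per_pixel_hz = 1.0 / (echo_spacing_s * n_phase)   # (1/0.0005)/64 = 31.25
print(f"PE bandwidth: {bw_per_pixel_hz:.2f} Hz/pixel")

drift_hz = 10.0           # e.g. a heating-induced frequency drift
shift_pixels = drift_hz / bw_per_pixel_hz
shift_mm = shift_pixels * fov_mm / n_phase
print(f"{drift_hz} Hz drift -> {shift_pixels:.2f} pixels ({shift_mm:.2f} mm) translation")
```

A 10 Hz drift against a ~31 Hz/pixel bandwidth is already a third of a pixel of apparent translation, which is why the drift magnitudes in the figure above should catch your eye.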

The dynamic tests conducted on the 4 T magnet and reported in Foerster et al. used scan times that were several hours long. That's not typical for fMRI, of course. What about the handful of minutes that typical fMRI experiments use? And what about the gaps in between EPI runs, when the gradients aren't pulsing but the water cooling is still active?

In the next figure, courtesy of Ariel Rokem, are the so-called motion parameters (the report produced by a rigid body realignment motion correction algorithm, performed in this case in FSL) for a spherical water-filled phantom experiment where five blocks of 150 volumes of EPI (TR=2000 ms, axial oblique slices) were acquired in a row, allowing exactly one minute of 'rest' (scanner inactivity) between blocks:

The blue trace in the top part of the figure represents the phase encoding axis of the images. The steps are obvious. Note also the fairly consistent within-run drift in each of the five 150-volume blocks.

Why does a step appear at the start of each block, while the drift within a block is usually smaller than the steps? It's because the scanner does an automatic on-resonance adjustment immediately prior to starting each EPI block. This adjustment compensates for the part of the perturbed magnetic field that has a Z component, i.e. the B0 frequency is reset periodically. The magnetic field has shifted slightly since the previous block, so that if it was exactly 123 MHz at the start of the prior block it might now be 123.00001 MHz, say. The next on-resonance adjustment makes the new reference frequency 123.00001 MHz, essentially removing the 10 Hz off-resonance shift.

As described above, an off-resonance condition corresponds to a small difference in spatial location in the phase encoding direction. Placing the scanner back on-resonance at a new frequency (123.00001 MHz) eliminates the frequency shift, thus eliminating the translation of the image in the phase encoding axis. The actual frequency drift being caused by heating (or any other source) isn't affected per se. You can see that in the data above if you look carefully. Imagine that the on-resonance adjustments hadn't been performed before each block and mentally remove the steps to join up the traces. There's a clear trend down over time, and it is relatively smooth. (The 1 minute gaps between blocks cause the exact shapes of the intra-block variation to change slightly across the entire run.) In other words, all that the on-resonance adjustment is doing is periodically bringing the image back to center in the phase encoding direction, whereupon it continues its drifting in precisely the same way.
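The mental exercise of removing the steps to join up the traces can be done numerically. Here is a minimal sketch: each block of a concatenated 1-D motion trace is offset so it starts where the previous block ended. The block length and toy trace below are illustrative, not real scanner data.

```python
import numpy as np

def remove_steps(trace, block_len):
    """Join blocks of a concatenated 1-D motion trace end to end,
    removing the discontinuities at block boundaries."""
    out = trace.astype(float).copy()
    for start in range(block_len, len(trace), block_len):
        # Offset this block so it begins where the previous block ended
        out[start:start + block_len] += out[start - 1] - out[start]
    return out

# Toy example: two 5-volume blocks, each drifting by -0.01 mm per volume,
# with a reset (step) at the start of the second block mimicking the
# on-resonance adjustment.
block = -0.01 * np.arange(5)
trace = np.concatenate([block, block])   # jumps back to 0 at volume 5
joined = remove_steps(trace, block_len=5)
print(joined)   # drift now continues smoothly downward from 0 to -0.08
```

This is only a visualization aid, of course; the underlying frequency drift is unaffected, just as the on-resonance adjustment leaves it unaffected.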

So if the on-resonance adjustment is creating these discontinuities in what would otherwise be nice, smooth curves in the concatenated time series, should we turn it off? No! The on-res adjustment helps to ensure that the Nyquist ghost level remains as low as possible. If a single on-res adjustment were performed at the start of the first block only, you would see the Nyquist ghost level slowly increase with time from the adjustment, thereby slowly degrading your statistical power as the session progresses. The on-res adjustment is helping to preserve data quality.

Step variability

One interesting observation is that the magnitude of the steps is dependent on the time allowed between EPI blocks. There is a lot of variability, as already demonstrated above, but it turns out that if the scanner is run continuously, without gaps between EPI blocks, so that it approaches some sort of thermal steady state, the steps decrease in magnitude.  But they aren't totally eliminated:

(The green trace in the top part of the figure is the phase encoding axis of the EPIs.)

I won't bore you further with explicit description of what happens when gaps of differing durations are allowed between runs. Suffice it to say that the gradient cooling is really quite efficient, and allowing differing periods of inactivity between your EPI blocks - as is almost guaranteed to happen in a real experiment, where the subject may want to talk to you, or your stimulus script throws a wobbly - will produce discontinuities between blocks that will vary in magnitude. As a rough rule of thumb, the steps are likely to increase in size the longer the idle period between blocks.

Why should there be so much variability in the steps' magnitudes? As shown earlier in the post, the passive shims heat and cool with very slow time constants (hours). The gradient coils, which include the (so-called) room-temperature (or active) shims, heat and cool with a much more rapid time constant (minutes). And as if that isn't enough, the chillers providing the cooling have their own dynamics; they don't run in any sort of true steady state. (Review the oscillations shown in the very top figure of this post. I've seen similar fluctuations on my Trio. My chillers seem to cycle with a period of about 200 seconds as they attempt to maintain a narrow temperature range in the return water, not a specific target value.)
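The interplay of fast and slow time constants can be illustrated with a toy two-compartment model: a fast component for the gradient coil and room-temp shims (minutes) and a slow component for the passive shims (hours). The time constants and drift amplitudes below are illustrative guesses, not measured values from any scanner.

```python
import numpy as np

def drift_hz(t_min, scanning,
             tau_fast=5.0, tau_slow=120.0, a_fast=3.0, a_slow=15.0):
    """Toy frequency drift (Hz) after t_min minutes.

    First-order exponential rise while scanning, exponential decay of an
    initially saturated drift while idle. All parameters are hypothetical.
    """
    if scanning:
        fast = a_fast * (1 - np.exp(-t_min / tau_fast))
        slow = a_slow * (1 - np.exp(-t_min / tau_slow))
    else:   # cooling from the saturated state
        fast = a_fast * np.exp(-t_min / tau_fast)
        slow = a_slow * np.exp(-t_min / tau_slow)
    return fast + slow

# The fast component saturates within ~15 min; the slow one keeps building,
# so the total drift continues to grow over a two-hour session:
print(round(drift_hz(15, True), 1), round(drift_hz(120, True), 1))
```

Because the two components evolve on such different timescales, the step size between any two blocks depends on the entire recent history of scanner use, which is exactly why the steps are so variable in practice.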

There is one more source of step variability you're likely to encounter: different parameter settings create different heating and vibration profiles. TR, number of volumes, number of slices, echo spacing, even slice angle, all will determine which gradients are being used and how much. Thus, it is to be expected that users with different protocols should see different thermal instabilities.

So, is it broke? Do you need to fix it?

If you're still awake, this is what you've been waiting for. You've seen how thermal instability in the shims (passive as well as active) produces dynamics that affect temporal stability of your EPI signal. Fortunately this isn't the total disaster that you might think, even if it looks ugly. In a typical fMRI processing stream you will perform a realignment (as done with FSL, above) to correct for some of the subject motion. The apparent translations in the phase encoding dimension that arise from heating, whether the steps between blocks or the slow drifts within blocks, look like and are treated as rigid body motion by the realignment algorithm. For typical inter-block steps and intra-block drifts there will be little to no residual effect in the data once it has been realigned; motion correction does its job. This was verified on my scanner by determining the TSNR across phantom data. The TSNR was found to be consistent between five EPI blocks with no gap and five EPI blocks with a 1 minute gap. It's also worth noting here that the shifts seen in my phantom experiments were sub-pixel; typically 10% or less of a pixel, and almost certainly in the noise level when it comes to realignment.

Unless your scanner has very poor water cooling of the gradient coil you shouldn't need to worry about heating effects. Your facility physicist should be able to tell you if your scanner has this problem or not. (Most modern scanners - say 5 yrs old or less - won't exhibit a problem unless something has failed and hasn't been detected in preventative maintenance or QA.)

If you have residual concern about your scanner's thermal stability, simply take a phantom and do the following test. Take your standard fMRI protocol and run it five times in a row, no gaps, on the phantom. Then run a further five blocks but leave some gaps between blocks. The gaps can be constant duration or not, your choice; make them typical for an fMRI experiment, say 1-3 minutes. Take the two sets of data offline and run a realignment on the concatenated data. (Siemens users - you can activate the MoCo option and get a separate realigned time series on the scanner if you prefer.) You then have two ways to assess the data. If you ran the realignment offline, e.g. using FSL or SPM, then take a look at the magnitudes of the discontinuities and the intra-block drifts in the motion parameter graph generated by the realignment algorithm. If the shifts are a pixel or less then you are already noise-limited, there's nothing to worry about. The other way to assess the data is to compute TSNR images and compare them for the gap and no gap conditions. TSNR should be very similar after realignment. If you see a big difference - more than a few percent - then you might want to investigate further, at the very least mention it to your facility physicist! (Note: don't do this test on a brain! Subject motion will almost certainly dominate! It has to be done on a phantom.)
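If you want to script the TSNR half of that test, here is a minimal sketch. TSNR is simply the voxelwise temporal mean divided by the temporal standard deviation of the realigned 4-D series. In practice you would load your two phantom series with e.g. nibabel (`nib.load("phantom_nogap_mcf.nii.gz").get_fdata()`, with your own file names); the synthetic array at the end is just a stand-in so the sketch runs on its own.

```python
import numpy as np

def tsnr_map(data):
    """data: 4-D array (x, y, z, time). Returns a 3-D TSNR image."""
    mean = data.mean(axis=-1)
    std = data.std(axis=-1)
    return np.where(std > 0, mean / std, 0.0)

def median_tsnr_diff_pct(tsnr_a, tsnr_b):
    """Percent difference in median TSNR (b vs a) within a crude mask."""
    mask = tsnr_a > np.percentile(tsnr_a, 50)   # top half of TSNR values
    med_a, med_b = np.median(tsnr_a[mask]), np.median(tsnr_b[mask])
    return 100.0 * (med_b - med_a) / med_a

# Synthetic stand-in for a phantom series: baseline signal of 1000 with
# unit-variance noise over 150 volumes, so TSNR should come out near 1000.
rng = np.random.default_rng(0)
series = 1000.0 + rng.standard_normal((8, 8, 4, 150))
tsnr = tsnr_map(series)
print(f"median TSNR: {np.median(tsnr):.0f}")
```

Run `median_tsnr_diff_pct` on the gap and no-gap TSNR maps; per the rule of thumb above, a difference of more than a few percent is worth a chat with your facility physicist.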

Let's leave the issue of heating there for today. If you are especially interested, in two appendices below I outline some approaches that might reduce the size of the discontinuities between blocks using acquisition strategies, rather than relying on a realignment algorithm in post-processing. While that fits with the stated objective of this blog, we are in danger of over-analyzing a relatively small instability in real brain fMRI data. Furthermore, as we will see in the next post, one of the strategies to combat heating effects may have joint utility in tackling the age-old, and larger, temporal instability problem: subject motion. Til then!


Appendix 1: Rudimentary approaches to mitigation of heating in EPI time series

It is always nice to be able to do something at the acquisition stage that might improve the signal quality. Why rely on the motion correction algorithm in post-processing if you can reduce the problem during the acquisition? (That's why you're reading this blog!) The first simple tactic for addressing thermal instability is one that's common to many imaging centers: you might try warming the scanner up for half an hour before the first real scan of the day, in an attempt to place the scanner (really, the passive shims) near to the thermal state it should attain during the scanning sessions. But, as the figures above show, 30 minutes might not be sufficient. And what happens if the scanner isn't used for a couple of hours? Must it be warmed up all over again? That's hardly practical for a busy imaging center, or when you have a schedule to keep to! I'm ambivalent. The passive shim heating is slow, but it is also a secondary effect compared to the gradient (and room-temp shim) heating. I can attain a thermal steady state with 15-20 minutes' continuous EPI, but as soon as I stop, even for a minute, my efficient water cooling plunges the temperature down towards 20 C again. In practice there is no such thing as a thermal steady state for an fMRI experiment; it's a pipe dream! On my scanner (which runs > 12 hrs/day, 7 days/week) I don't see (in test data) any significant benefit to a warm-up period. Better to use that time to do a daily QA check, imho.

What else might we do? As suggested by El-Sharkawy et al. for a spectroscopy experiment, simply re-shimming between EPI blocks can remove a lot of the effects of heating, albeit only at discrete times. Some of you will have spotted that the on-resonance adjustment has already negated the Z component of the instability; it has the effect of resetting the shift back to the center of the magnet. Re-shimming between blocks provides a way to tackle the X and Y (and some second-order) effects of heating in a discrete fashion, as well as the Z component. But is it worth doing? I will address this question in my next post, because when we are dealing with real brain data we're not dealing with the thermal instability in isolation; we also have subject motion to consider.

Appendix 2: Advanced approaches to mitigation?

As pointed out by El-Sharkawy et al., a rigorous solution to the passive shim heating problem requires redesigning MRI scanners so that active cooling is used on the passive shims. Similarly, altering the dynamics of the heating and cooling of the gradient coil and the resistive (room-temp) shims to improve the thermal steady state during imaging would require significant modifications to the cooling circuitry. There's a limit to just how steady the steady state can be made! (Think of it like the suspension on a car. At some point there is a diminishing return, the expense and complexity of the engineering overwhelming any further benefit.)

In the absence of redesigned scanner hardware some groups have proposed active solutions that use modifications to the acquisition pulse sequence:
  • Foerster et al. suggested a way to measure the frequency offset due to heating and incorporate it into the k-space data prior to image reconstruction. That involves modifying the image reconstruction stream on your scanner, so it's not trivial.
  • Benner et al. suggested determining the frequency drifts in real time and feeding the information back to the scanner so that the adjustment can be made before the next image is acquired. That involves modifications to the acquisition side of your scanner, although it would probably be simpler than an overhaul of the gradient/shim cooling hardware.
But there is a fundamental issue with both of these proposals: the approaches are inherently linear, i.e. they are trying to use a single frequency correction to offset the disruption of the magnetic field. But, as pointed out by El-Sharkawy et al., the true effects of passive shim heating are far more complicated than simple linear drifts. Heating the passive shims generates all manner of complex changes in the magnetic field, and while a linear approximation might suffice for many applications, it is a patch (like re-shimming). As a patch, re-shimming in between EPI blocks seems to do pretty well (as we'll see in the next post). And for fMRI we must always be cognizant of the magnitude of the problem. Unless and until thermal instabilities approach the level of typical subject motion, any benefit to real fMRI experiments is likely to be marginal. Cost-benefit analysis, anyone?


BU Foerster, D Tomasi & EC Caparelli, Magnetic field shift due to mechanical vibration in functional magnetic resonance imaging. Magn. Reson. Med. 54(5): 1261-7 (2005).

AM El-Sharkawy, M Schar, PA Bottomley & E Atalar, Monitoring and correcting spatio-temporal variations of the MR scanner's static magnetic field. MAGMA 19(5): 223-36 (2006).

T Benner, AJ van der Kouwe, JE Kirsch & AG Sorensen, Real-time RF pulse adjustment for B0 drift correction. Magn. Reson. Med. 56: 204-9 (2006).


  1. It's only taken me about 2.5 years to stumble across this post, but I'm so glad I did!

    OK, first off, my physics is at a comparative Kindergarten level, so while I understand the basic principles, fully 80% of this sailed over my head. But what I did get reinforced the site-planning advice that I've been giving for years.

    The less steel you have in the magnet room construction (particularly the floor, as the closest part of the building to the magnet), the less passive shim material will be needed for that particular magnet. The less thermo-magnetic shim material there is, the less it can be subject to shift resulting from thermal gain.

    Granted, more of my work is in clinical MRI settings than fMRI or NMR spectroscopy, so I'm commenting on just a piece of the overall picture, but I shudder when someone wants to place a clinical MRI system on top of a huge steel beam (which is the structural designer's first thought when you tell them you want to put a 10,000 kg load in the middle of the room). The magnet manufacturers have gotten better and better at shimming magnets that are placed in lousy locations, so they've continuously relaxed their siting criteria. But we hear about magnets where doctors fight to get their patients in in the morning because image quality is noticeably poorer in the late afternoon (after 8 hours of duty cycles accumulating heat in the shim material).

    Siting your magnets smarter (with zero ferromagnetic material in the magnet room floor, if you can) helps protect your data and images throughout the day. If you're spending a few million on a magnet, isn't it worth a few thousand to make sure that the external influences (such as ferrous material and HVAC) are also set up to protect the quality of the data?

    1. You're correct, the thermal stability of ferrous material in the vicinity of an MRI is very important. Hence, as a general rule we don't like windows in the magnet room and if they must be included we do our best to avoid direct sunlight. Diurnal changes can be direct, from sunlight, and indirect, from use of HVAC and power anywhere else in the building (and assuming that you don't have the MRI on its own power conditioner).

      In academic environments especially, one also must consider effects in the other direction. I once had a small bore magnet relocated into a space with steel I-beams above. The beams went the length of the building where, at the other end, a lab had an electron microscope. The steel saturated immediately once the magnet was at field, and the lab's EM went waaaaaay out of spec. It was fixed but not before they had spent some time trying to figure out what on earth had silently and invisibly ruined their measurements!

  2. I've spent a few weeks discovering this on my own...running mcflirt on phantoms, making synthetic data. There's also the fundamental issue that mcflirt is designed to align the same image to itself, not BOLD-signal-changed images to the reference. There's no built-in regularization that I could see to control for "motion" that is only BOLD signal change.

    Why should the phantom show motion...? It's not moving...the intensities may change along time, but there's no time axis in mcflirt.

    I found this today as well:

    1. Hi Daniel,

      So there are a few things going on that cause MCFLIRT, or any other realignment method, to find "motion" in a time series acquired from a stationary phantom. First, although there's no time axis in MCFLIRT, you're presumably applying it to a time series of EPIs and the output plots represent that same time series. These plots represent the (in)stability in the images from frame to frame (i.e. from TR to TR).

      There are significant drifts in thermal properties of the scanner gradients with time (i.e. with ongoing use) that manifest primarily as shifts in the phase encoding axis, i.e. as in-plane translations. However, the realignment algorithm may interpret other instabilities as motion, too, e.g. changes in the center of mass of the signal from thermal instability in the RF amplifier (which on many older scanners is air-cooled). It doesn't differentiate between these different changes; it simply applies its cost function and returns the best fit. We might consider these "systematic shifts" because their underlying causes are real, physical effects that can be reduced with better hardware. Then there is the sampling issue. The typical voxel size is 3 mm on a side. Now check the size of the motion parameters being returned by MCFLIRT. Very much less than a mm! How can that be? It's sampling and statistics. The effects of noise (but probably mainly the systematic physical effects) cause small shifts in the designation of any given voxel as representing "this point in space at this instant in time." Really, any estimate of motion (or apparent motion) that is sub-pixel is an interpolation issue, and we see a lot of that in the relatively high frequency oscillations across the motion plots.

      As for physiologic drifts with very low frequencies, yep, there are plenty. The origins could be varied, too, depending upon the nature of the experiment. I am preparing a blog post on the most common physiologic confounds in fMRI experiments. But on top of all these is the potential for slow neural effects. The MRI scanner is a soporific environment, I see no reason to exclude a priori the possibility of slow neural drifts with attention, with prolonged supine posture, etc. It all goes into the pot, and then we apply motion correction to it to try to tell us what's in the soup. Tricky!