Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!
Friday, December 30, 2011
Common persistent EPI artifacts: Gibbs artifact, or ringing
Don't ask me why there's no apostrophe; it looks possessive to me. Perhaps it's (the) Gibbs artifact rather than Gibbs (his) artifact. Most people simply refer to the effect as ringing anyway, so let's move on. This post concerns a phenomenon that, like aliasing last time, isn't unique to EPI but is a feature of all MRIs that are obtained via Fourier transformation.
In short, ringing is a consequence of using a period of analog-to-digital conversion in order to apply a (discrete) FT to the signals and produce a digital image. Or, to put it another way, we are using a digital approximation to an analog process and thus we can never properly attain the infinite resolution that's required to fully represent every single feature of a real (analog) object. Ringing is an artifact that results from this imperfect approximation.
We had already encountered one consequence of digitization in the Nyquist criterion in PFUFA Part Six. However, for our practical purposes, ringing isn't a direct consequence of digitization like the Nyquist criterion, but instead results from the duration of the digitization (or ADC) period relative to the persistence of the signals being measured. In principle, a signal decaying exponentially decays forever, which is rather a long time to wait for the next acquisition in a time series, so we instead enable the ADC for a window of time that coincides with the bulk - say 99% - of the signal, then we turn it off. This square window imposed over the exponentially decaying signal causes some degree of truncation, and it's this truncation that leads to ringing. (See Note 1.)
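If you want to see truncation ringing for yourself, here's a minimal numpy sketch (my own toy example, not taken from the figures in this post): build a sharp-edged one-dimensional "object", compute its k-space by FT, keep only a central window of points - the truncation - and reconstruct. The ripples at the edges of the truncated reconstruction are the ringing:

```python
import numpy as np
import matplotlib.pyplot as plt

# A sharp-edged 1D "object": a top-hat profile, like a cut through a phantom.
n = 256
obj = np.zeros(n)
obj[96:160] = 1.0

# Full k-space via FT, then keep only the central 64 points, mimicking a
# finite sampling window (truncated k-space coverage).
k_full = np.fft.fftshift(np.fft.fft(obj))
mask = np.zeros(n)
mask[n//2 - 32 : n//2 + 32] = 1.0
k_trunc = k_full * mask

# Reconstruct both; the truncated version rings at the edges of the object.
recon_full = np.abs(np.fft.ifft(np.fft.ifftshift(k_full)))
recon_trunc = np.abs(np.fft.ifft(np.fft.ifftshift(k_trunc)))

plt.plot(recon_full, label='full k-space')
plt.plot(recon_trunc, label='central 64 points only')
plt.legend(); plt.title('Truncation (Gibbs) ringing'); plt.show()
```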
An example of ringing in EPI of a phantom
Let's start with an unambiguous example of ringing by looking at the artifact in a homogeneous, regular phantom. Below is a 64x64 matrix EPI acquired from a spherical gel-filled phantom. You're looking for the wave-like patterns set up inside and outside the edges of the main signal region:
In the left image, whose contrast has been set to highlight ringing artifacts within the signal region itself, the primary ringing artifact appears as a series of concentric circles, each with progressively smaller diameter and lower intensity as you move in from the edge of the phantom. One section of the bright bands is indicated with a red arrow, but you should be able to trace these circles all the way around the image. Also visible is a strong interference pattern (blue arrows) that arises between the aforementioned ringing artifact and the overlapping N/2 ghosts. This is because the ghosts maintain the contrast properties of the main image; they are, after all, simply weak (misplaced) clones of the main image.
Tuesday, December 27, 2011
Another brief explanation of decoding
Here's another short video produced by UC's media people in which Jack Gallant explains in broad terms how his group's recent decoding experiment was conducted:
A good place to go next for more details is the Gallant Lab website. Read the FAQ on that page to gain a basic understanding of what the experiment was, and what it wasn't. Then go read the paper; it's written very accessibly!
Wednesday, December 14, 2011
Common persistent EPI artifacts: Aliasing, or wraparound
In Part Eleven of the series Physics for understanding fMRI artifacts (hereafter referred to as PFUFA) you saw how setting parameters in k-space determined the image field-of-view (FOV) and resolution. In that introduction I kept everything simple, and the Fourier transform from the k-space domain to the image domain worked perfectly. For instance, in one of the examples the k-space step size was doubled in one dimension, thereby neatly chopping the corresponding image domain in half with no apparent problems. At the time, perhaps you wondered where the cropped portions of sky and grass had gone from around the remaining, untouched Hawker Hurricane aeroplane. Or perhaps you didn't.
In any event, you can assume from the fact that this is a post dedicated to something called 'aliasing' that in real world MRI things aren't quite as neat and tidy. Changing the k-space step size - thereby changing the FOV - has consequences depending on the extent of the object being imaged relative to the extent of the image FOV. It's possible to set the FOV too small for the object. Alternatively, it's possible to have the FOV set to an appropriate span but position it incorrectly. (The position of the FOV relative to signal-generating regions of the sample is a settable parameter on the scanner.) Overall, what matters is where signals reside relative to the edges of the FOV.
Now, on a modern MRI scanner with fancy electronics, aliasing is a problem in one dimension only: the phase encoding dimension. (Yeah, the one with all the distortion and the N/2 ghosts. Sucks to be that dimension!) The frequency encoding dimension manages to escape the aliasing phenomenon by virtue of inline analog and digital filtering, processes that don't have a direct counterpart in the phase encoding dimension. Instead, signal that falls outside the readout dimension FOV, either because the FOV is too small or because the FOV is displaced relative to the object, is eliminated. It's therefore important to know what happens where and when as far as both image dimensions are concerned. One dimension gets chopped, the other gets aliased.
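Before getting into the two dimensions separately, here's a minimal numpy sketch of the wraparound principle (a synthetic object, not real scanner data; the variable names are my own): keeping only every other phase-encode line doubles the k-space step, which halves the reconstructed FOV in that dimension, so any signal lying outside the new FOV folds back onto the signal that remains.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "object": a bright ellipse that nearly fills a 128-point FOV
# in the vertical (phase-encode) direction.
n = 128
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
obj = ((x / 40.0)**2 + (y / 55.0)**2 < 1.0).astype(float)

# Full k-space, then keep only every other phase-encode row: the k-space
# step doubles, so the reconstructed FOV halves in that dimension.
k = np.fft.fftshift(np.fft.fft2(obj))
k_under = k[::2, :]

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))

# The object is taller than the new (half) FOV, so its top and bottom
# fold back into the image and overlap the middle: aliasing.
fig, ax = plt.subplots(1, 2)
ax[0].imshow(obj, cmap='gray'); ax[0].set_title('object, full FOV')
ax[1].imshow(recon, cmap='gray'); ax[1].set_title('every 2nd PE line kept')
plt.show()
```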
I will first cover the signal filtering in the frequency encoding dimension and then deal with aliasing in the phase encoding dimension. Finally, I'll give one example of what can happen when the FOV is set inappropriately for both dimensions simultaneously. At the end of the process you should be able to differentiate the effects with ease. (See Note 1.)
Effects in the frequency encoding dimension
Below are two sets of EPIs of the same object - a spherical phantom - that differ only in the position of the readout FOV relative to the phantom. In the top image the readout FOV is centered on the phantom, whereas in the bottom image the FOV is displaced to the left, causing the left portions of the phantom signal in each slice to be neatly, almost surgically, removed:
Readout FOV centered relative to the phantom.
Readout FOV displaced to the left of the phantom, resulting in attenuation of the signal from the left edge of each slice.
Monday, November 28, 2011
Twitter. Damn.
@practiCalfMRI
I thought I could resist, I really did. (I've been off Facebook* for more than two years!) But when Neuroskeptic took the plunge in June I started thinking that maybe I should suck it up, too. I mean, Neuroskeptic blogs ten times more frequently than I do and he still has time to tweet.
Not sure exactly how it will go. I'm going to treat it as an experiment. I can guarantee that there won't be daily tweets let alone hourly ones. I'm not going to bring anyone's cellular network to its knees. But I do come across little things related to fMRI that aren't worth a full blog post. A micro-blog ought to fit the bill, eh? We'll see...
* Okay, so technically I am still on fb. I maintain an account so that I can post comments to websites, merge with other online media, etc. Turns out it's really, really hard not to have fb unless you don't mind registering separately for every online newspaper and music service yet invented. But I never actually look at my fb page, so I apologize if you have a "friend" request suspended somewhere in cyberspace.
Sunday, November 27, 2011
Understanding fMRI artifacts: "Good" coronal and sagittal data
Front, back, side to side
Now that you have an appreciation of "good" axial EPI time series data we should be able to zip through a review of "good" coronal and sagittal EPIs. This isn't the post to get deep into the reasons why you might want to acquire these prescriptions instead of axial or axial-oblique slices, but here's a short list (and some music) for you to be going on with:
Pros
- coronal slices tend to exhibit less signal dropout in the frontal and temporal lobes than axial slices do.
- coronal slices might permit a smaller field-of-view, and hence higher spatial resolution without signal aliasing, than is achievable with other prescriptions, assuming your gradient performance and other pulse sequence parameters can be driven sufficiently hard.
- sagittal slices may also show some improved signal in frontal and temporal lobes compared to axial slices, but the real benefit is the unique coverage afforded. You could acquire a single hemisphere, for instance; could be useful in a handful of situations. Alternatively, if you are interested in the whole brain, including cerebellum and perhaps even brain stem, these structures are naturally included in sagittal slices.
- sagittal slices tend to make the most common type of head motion - chin to chest rotations - an in-plane phenomenon which might lead to improved motion correction in post-processing.
There are, naturally, drawbacks to coronal and sagittal slices, just as there are for axial slices. I'll mention some of these in more detail below, as we consider the individual artifacts, but here's another brief list:
Cons
- safety limits on gradient switching (to avoid peripheral nerve stimulation) tend to force the phase encoding direction to be left-right for coronal slices, rendering the EPIs strongly asymmetric. While the absolute level of distortion may actually be very similar to that present in axial slices, the disruption of left-right symmetry can be a shock to your aesthetic sensibility.
- bizarre distortion is also a "feature" of sagittal slices where, as you'll soon see, the distortion can make the frontal lobes look like a duck's bill! But, as before, the absolute level of distortion may not be significantly different to that in axial slices; it's really the unnatural appearance that shocks us. (We ought to be just as outraged at the symmetric distortions in axial slices!)
- perhaps the biggest limitation to both coronal and sagittal prescriptions is the number of slices required to cover the entire brain in the given TR. Slicing along the longest axis of the brain, as done for coronal slices, is clearly the least efficient way to do it. The efficiency of sagittal slices falls somewhere between coronal and axial. And, of course, anything that leads to more (fixed width) slices means that TR might have to get longer. It all depends on your application.
Okay then, that's the introduction over with. Let's now put aside the justification for using one prescription over another and look at what constitutes "good" data in the case of coronal and sagittal slices. The features should be immediately recognizable from what you saw in the axial data of the last post.
Wednesday, November 16, 2011
Understanding fMRI artifacts: "Good" axial data
Good EPI data has a number of dynamic features that are perfectly normal once a few basic properties of the sample - a person's head - are considered. The task is to differentiate these normal features from abnormal (or abnormally high) artifacts and signal changes. We'll look at axial slices first because these are the most common slice prescription for fMRI. (Axial oblique slices will exhibit much the same features as the axial data considered here.)
The data we will consider in this post were acquired with a single shot, gradient echo EPI sequence on a Siemens Trio/TIM scanner, using the 12-channel head RF coil and a pulse sequence functionally equivalent to the product sequence, ep2d_bold. (See Note 1.) Parameters were typical for whole cortex coverage (the lower portion of the cerebellum tends to get cut off): 34 slices, 3 mm slice thickness, 10% slice gap, TR=2000 ms, TE=28 ms, flip angle = 90 deg, 64x64 matrix over a 22.4 cm field-of-view yielding 3.5 mm resolution in-plane, full k-space with phase encoding oriented anterior-posterior. (See Note 2 for advanced parameters.) The entire time series was 150 volumes in duration but in the movies and statistical images that follow I've considered only the first fifty volumes. (See Note 3 if you want to download the entire raw data and/or the movies and jpeg images.)
Let's start by simply looping through the volumes with the contrast set to reveal anatomy. Play this through a couple of times to familiarize yourself with it, then read on (click the 'YouTube' icon on the video to launch an expanded version in a separate tab/window):
Other than movement of the eyes and some large blood vessels in the inferior slices, at this resolution it's difficult to determine with certainty which regions are fluctuating and which are stationary. So let's zoom in on some of the central slices and replay the cine loop:
Now we can see that there's quite a bit of brain pulsation going on. Indeed, nothing appears stationary now! However, the edges of the brain don't appear to be moving very much so we can be reasonably confident that the pulsation is due to normal physiology and not a fidgety subject.
Tuesday, November 15, 2011
Understanding fMRI artifacts
Introducing the series
The workhorse sequence for fMRI in most labs is single-shot gradient echo echo planar imaging (EPI). As we saw in the final post of the last series, EPI is selected for fMRI because of its imaging speed (and BOLD contrast), not for its ability to produce accurate, detailed facsimiles of brain anatomy. Our need for speed means we are forced to live with several inherent artifacts associated with the sequence.
However, in addition to the "characteristic three" EPI artifacts of ghosting, distortion and dropout, when we're doing fMRI we are more concerned with changes over time than with the artifact level of an individual image. So, in this series we need to assess the sources of changes between images, even if the images themselves appear to be perfectly acceptable (albeit subject to the "characteristic three").
What's the data supposed to look like?
It would be rather difficult for you to determine when something has gone wrong during your fMRI experiment if you didn't have a solid appreciation of what the images ought to look like when things are going well. Accordingly, I'll begin this series with a review of what EPIs are supposed to look like in a time series. We'll look at typical levels of the undesirable features and assess those parts of an image that vary due to normal physiology. This is what we should expect to see, having taken all reasonable precautions with the subject set up and assuming that the entire suite of hardware (scanner and peripherals) is behaving properly.
Good axial data will be the focus of the first post in the series. (Axial oblique images will exhibit qualitatively similar features to the axial slices I'll show.) In the second post I'll show examples of good sagittal and coronal data. Artifacts may appear quite differently and with dissimilar severity merely by changing the slice prescription, so it's important to keep in mind the anisotropic nature of many EPI defects. Motion sensitivity is also different, of course. Motion that was through-plane for an axial prescription is in-plane for sagittal images, for example.
Ooh, that's bad. Is it...?
With a review of good data under our belts it will be time to look at the appearance of EPI when things go tango uniform. I will group artifacts according to their temporal behavior - either persistent or intermittent - and their origins - either from hardware, from the subject, or from operator error. You should then be able to understand and differentiate the various artifacts and be able to properly diagnose (and fix) them when it counts the most: during the data acquisition. Waiting until the subject has left the building before finding a scanner glitch is a bit like doing a blood test on a corpse. Sure, you might be able to determine that it was the swine flu that finished him off, but either way he's dead. Our aim will be to do our “blood tests” while there is still a chance of administering medicine and perhaps achieving a recovery.
Tuesday, November 1, 2011
Physics for understanding fMRI artifacts: Part Twelve
Apologies for the lengthy delay getting this post out. New academic year, teaching, talks, etc. etc. Anyway, I hope that this opus will be the final post in the background physics series for the time being. I reserve the right to append further posts down the road, but with this post I hope you will be in a position to understand the origins of artifacts in real (EPI-based) fMRI data. So, after today we'll change tacks and start reviewing what "good" data should look like. First things first though. Time to put all your k-space knowledge to good use, and review the pulse sequence that the majority of us use for fMRI.
The Echo Planar Imaging (EPI) pulse sequence
In Part Ten we looked at a pulse sequence and its corresponding k-space representation for a gradient-recalled echo (GRE) imaging method. That sequence used conventional, or spin warp, phase encoding to produce the second spatial dimension of the final image. A single row of the k-space matrix was acquired per RF excitation, with successive rows of (frequency-encoded) k-space being sampled after stepping down (or up) in the 2D k-space plane following each new RF pulse.
One feature of the spin warp imaging scheme should have been relatively obvious: it's slow. Frequency encoding along kx is fast but stepping through all the ky (the phase-encoded) values is some two orders of magnitude slower, resulting in an imaging speed from tens of seconds (low resolution) to minutes (high resolution). That's not the sort of speed we need if we are to follow blood dynamics associated with neural events.
Instead of acquiring a single row of k-space per RF excitation - a process that is always going to be limited by the recovery time to allow the spins to relax via T1 processes - we need a way to acquire multiple k-space rows per excitation, in a sort of "magnetization recycling" scheme. Ideally, we would be able to recycle the magnetization so much that we could acquire an entire stack of 2D planes (slices) in just a handful of seconds. That's what echo planar imaging (EPI) achieves.
Gradient echo EPI pulse sequence
The objective with the EPI sequence, as for the GRE (spin warp) imaging sequence we saw in Part Ten, is to completely sample the plane of 2D k-space. That objective is unchanged. All we're going to do differently is sample the k-space plane with improved temporal efficiency. Then, once we have completed the plane we can apply a 2D FT to recover the desired image. Pretty simple, eh?
As before, sampling (data readout) need only happen along the rows of the k-space matrix, i.e. along kx. So we need a way to hop between the rows quickly, spending as much time as possible reading out signals under the frequency encoding gradients, Gx, and as little time as possible getting ready to sample the next row. EPI is the original recycled pulse sequence, so I'll color the readout gradient echoes in green:
The first four (and a half) gradient echoes in a gradient echo EPI pulse sequence.
To keep things simple I've omitted slice selection and indicated a 90 degree RF excitation; this could of course be any flip angle in practice. (See Note 1.) I've also shown just the first four (and a half) gradient echoes in the echo train. The full sequence repeats as many times as there are phase-encoded rows in the k-space matrix. A typical EPI sequence for fMRI might use 64 gradient echoes, corresponding to 63 little blue triangles in the train shown in the figure above. But for the example k-space plane below, the k-space grid is 16x16 so assume for the time being that the full echo train would consist of 15 little blue triangles separating eight positive Gx gradient periods and eight negative Gx gradient periods.
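As a complement to the figure, here's a small sketch (illustrative only, not actual sequence code) of the order in which a single-shot EPI readout visits a 16x16 k-space grid: a back-and-forth raster in which every other row is swept in the reverse kx direction because the readout gradient alternates sign, with the small blips stepping ky to the next row.

```python
import numpy as np

# Illustrative only: the order in which a single-shot EPI readout visits a
# 16x16 k-space grid. The readout gradient alternates sign, so every other
# row is swept in the reverse kx direction, and the small blips between
# readouts step ky to the next row. (Reversed rows must be time-reversed
# before the 2D FT.)
nx = ny = 16
order = np.zeros((ny, nx), dtype=int)
sample = 0
for row in range(ny):
    cols = range(nx) if row % 2 == 0 else range(nx - 1, -1, -1)
    for col in cols:
        order[row, col] = sample    # acquisition index along the echo train
        sample += 1

print(order)   # entry (ky, kx) = when that k-space point was sampled
```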
Tuesday, October 11, 2011
Light relief (to buy me time).... This year's IgNobel in Medicine
Anyone who has ever experienced an fMRI scan knows two things about the effects of the method on a subject: (1) it's soporific, and (2) like a long car journey, you don't need to pee until five minutes after you've started. So this year's IgNobel Prize in Medicine, awarded jointly to two groups, caught my attention. Their work shows how the need to urinate can affect performance on some simple mental tests - just the sort of tests that we use in our fMRI experiments.
Implications for fMRI?
An enjoyable summary of the winning researchers' work is available on this Scientific American blog. According to this summary (Yeah, I haven't got around to reading the papers themselves yet. I'm training to be a mainstream science journalist ;-), needing to pee could have your subjects performing better (yes, better) on delayed gratification tasks, but worse on cognitive tasks. I take these results at face value - I have to, I've not read the papers - but I do want to think a little more about the implications for fMRI studies. It's hard enough keeping people awake, let alone motivated to do a task. And as for providing *additional* motivation for a task... The mind boggles!
"I feel the need, the need to pee!"
So, short of rejecting subjects who rush to the toilet the moment they get out of the scanner, what else could we do to control for the effects? Perhaps we could insist that subjects must be able to sit in a waiting room for 20 minutes post-scan - no pee - and only then opt to retain their scan data.
What else might produce a similar effect in subjects? General discomfort? You have to wonder, given the "need-to-pee" effect, whether a subject's general state of (un)happiness in the scanner might well be interfering with his mental performance. If so, having pressure points in a subject's lower back or whatever could have him significantly altering his task ability.
Alternatively, perhaps the need to pee and general discomfort merely increases a subject's propensity to move. We all know that this is one of the Stages of Having to Pee:
So perhaps this amusing research has some important ramifications for fMRI studies after all. As with so many other state factors - caffeine use, stress, menstrual cycle, etc. - it could just be another in a long litany of issues that contribute to our relatively high inter-subject variability. You know my feeling on the matter: if you can control for it, control it. And if you can't control for it but you can measure it, MEASURE IT! Would it really be the end of the world if you were to ask your subject to rate her "need to pee" as she exits the scanner? How amusing would it be to see your effect disappear having regressed out the "need to pee" score?
PS I really will have a last post on EPI k-space along very soon! I promise!
Tuesday, September 27, 2011
More on decoding - Jack Gallant radio interview
KQED radio had a half hour segment with Jack Gallant this morning, discussing the study published by Shinji Nishimoto et al. last week.
Here's a link to the KQED archive. An MP3 is also available.
Thursday, September 22, 2011
"Reconstructing visual experiences from brain activity evoked by natural movies."
(Waiting for the next post in the physics series? Apologies. New academic year = teaching. Next post, on EPI, will be along within a few days...)
Gallant Lab strikes again!
Another shameless plug for fMRI at Berkeley! A study with the title of this post was published online today in Current Biology. Seems like it's generating as much media buzz as their previous study (Nature, 2008) using still images. Congratulations to Shinji, Joseph (An), Thomas and co. on another fine study. (And people said the Varian didn't work. Ha!)
Some links for more information:
UC Berkeley News Center.
Gallant Lab website. Best place to start, see the FAQ.
YouTube videos of some of the results:
Monday, August 15, 2011
Physics for understanding fMRI artifacts: Part Eleven
Resolution and the field-of-view as seen in k-space
Understanding how distances in k-space manifest as distances in image space is quite straightforward. All you really need to remember is that the relationships are reciprocal. The discrete steps in k-space define the image field-of-view (FOV), whereas the maximum extents of k-space define the image resolution. In other words, small in k-space determines big in image space, and vice versa. In this post we will look first at the implications of the reciprocal relationship as it affects image appearance. Then we'll look at the simple mathematical relationships between lengths in k-space and their reciprocal lengths in image space.
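To put some numbers on those reciprocal relationships (the values below are illustrative, chosen to match a typical fMRI-style protocol), FOV = 1/Δk and pixel size = FOV/N = 1/(N·Δk):

```python
# Reciprocal k-space relationships, with illustrative numbers:
# a 224 mm FOV sampled on a 64-point matrix.
dk = 1.0 / 224.0          # k-space step in 1/mm; small steps -> big FOV
n = 64                    # number of k-space samples in this dimension

fov = 1.0 / dk            # field of view = 1/dk, i.e. 224 mm
dx = fov / n              # pixel size, i.e. 3.5 mm
k_max = (n / 2) * dk      # maximum k-space extent; big k_max -> small pixels
print(fov, dx, 1.0 / (2.0 * k_max))   # approx. 224, 3.5, 3.5 (dx = 1/(2*k_max))
```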
Spatial frequencies in k-space: what lives where?
I mentioned in the previous post that there's no direct correspondence between any single point in k-space and any single point in real space. Instead, in k-space the spatial properties of the object are "turned inside out and sorted according to type" (kinda) in a symmetric and predictable fashion that leads to some intuitive relationships between particular regions of k-space and certain features of the image.
Here is what happens if you have just the inner (left column) or just the outer (right column) portions of k-space, compared to the full k-space matrix arising from 2D FT of a digital photograph (central column):
Inner k-space only:
The inner portion of k-space (top-left) possesses most of the signal but little detail, leading to a bright but blurry image (bottom-left). (See Note 1.) Most features remain readily apparent in the blurry image, however, because most contrast is preserved; image contrast is due primarily to signal intensity differences, not edges. If this weren't true we would always go for the highest signal-to-noise MRIs we could get, when in practice what we want is the highest contrast-to-noise images we can get! Imagine an MRI that had a million-to-one SNR but no contrast. How would you tell where the gray matter ends and the white matter begins? Without contrast no amount of signal or spatial resolution would help. So much for SNR alone!
Outer k-space only:
If we instead remove the central portion of k-space (top-right) then we remove most of the signal and the signal-based contrast to leave only the fine detail of the image (bottom-right). Strangely, though, it's still possible for us to make out the main image features because our brains are able to interpret entire objects from just edges. In actuality, however, there is very little contrast between the dark fuselage of the Hurricane, the dark shadow underneath it and the dark sky. Our brain infers contrast because we know what we should be seeing! If we were to try doing fMRI, say, on a series of edges-only images we would run into difficulties because we process the time series pixelwise. With a relatively low and homogeneous signal level you can bet good money the statistics would be grim.
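If you'd like to reproduce the effect yourself, here's a minimal numpy sketch using a synthetic stand-in image rather than the Hurricane photograph: keeping only the central portion of k-space gives a blurry but high-contrast result, whereas keeping only the periphery leaves little more than the edges.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in image: a bright square on a mid-grey background.
n = 256
img = np.full((n, n), 0.4)
img[96:160, 96:160] = 1.0

k = np.fft.fftshift(np.fft.fft2(img))

# Masks selecting just the central 32x32 of k-space, and everything but it.
inner = np.zeros((n, n))
c = n // 2
inner[c-16:c+16, c-16:c+16] = 1.0
outer = 1.0 - inner

recon_inner = np.abs(np.fft.ifft2(np.fft.ifftshift(k * inner)))  # blurry, keeps contrast
recon_outer = np.abs(np.fft.ifft2(np.fft.ifftshift(k * outer)))  # edges only, little signal

fig, axes = plt.subplots(1, 3)
for ax, im, title in zip(axes, (img, recon_inner, recon_outer),
                         ('full k-space', 'inner k-space only', 'outer k-space only')):
    ax.imshow(im, cmap='gray'); ax.set_title(title); ax.axis('off')
plt.show()
```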
Saturday, August 6, 2011
Physics for understanding fMRI artifacts: Part Ten
(For the answer to the homework k-space diagram given at the end of Part Nine, see Note 1.)
K-space in two dimensions
As anyone who has encountered MRI professionally, whether in research or medicine, will know, there seems to be an endless array of pulse sequences to choose between. The variety can be overwhelming at first. Nor is the situation helped by different vendors using different acronyms - we always use acronyms in MRI! - for what is essentially the same sequence.
It's little wonder, then, that most neophytes' eyes glaze over when it comes to comparing and contrasting any two pulse sequences if the taxonomy appears to be ad hoc. Where on earth to start? But it turns out that most pulse sequences can be categorized fairly easily, and their heritage traced, by separating the part(s) of the sequence responsible for spatial encoding from the part(s) that provide the tissue or functional contrast. Occasionally these two missions overlap within the sequence, but even then it's usually straightforward to understand the spatial encoding and interpret its genesis.
A useful pictorial representation of imaging pulse sequences
It turns out that there are only a handful of spatial encoding methods in common use these days, almost all with roots in the late 1970s or early 1980s. While new pulse sequences appear in the literature all the time, when you look at their k-space representations you'll be able to see how each new method has developed from a small number of key ideas from those early years. It's possible to categorize the encoding methods without k-space, but the k-space formalism makes comparisons trivial (in MR terms).
Spatial encoding methods can be separated into families derived from a central idea. For instance, following Lauterbur's original imaging paper in 1973 (which led to the family of projection reconstruction methods), in 1975 Richard Ernst's group came up with a sequence that utilized a 2D Fourier transform to yield the final image. (See Note 2.) It was a remarkable breakthrough and is the grandparent of nearly all medical/biological sequences still in common use today.
Still, even geniuses miss opportunities every now and then. And in 1980 a group at Aberdeen came up with a far more practical implementation of Fourier imaging, using amplitude-modulated gradients in a "constant time" pulse sequence, rather than the fixed amplitude, variable time scheme of Kumar, Welti and Ernst. It is this constant time scheme, which the Aberdeen group termed "spin warp" phase encoding, that provides the basis for most clinical (anatomical) scanning used today. It's also a good scheme to look at when first encountering 2D k-space, so we'll consider it in detail in this post.
The goal revisited
In the first part of the last post (see Part Nine) I used two examples of digital images to illustrate how the information content in a 2D plane of image pixels can be equivalently represented in reciprocal 2D space, or k-space. I mentioned that both the images and the k-space comprised 512x512 points, but later on when I started to draw (one-dimensional) k-space trajectories I did so on a k-space plane that was represented by just a set of axes, not discrete points. In case you think that image space and k-space in MRI are continuous, I'm going to spend a moment considering the digital k-space plane explicitly. (Like real space, k-space can also be continuous rather than digital, but that's not how MRI works.)
Here is a 16x16 plane of k-space points (see Note 3) overlaid on some actual signals to reinforce the point that we're digitizing a continuous process:
Courtesy: Karla Miller, FMRIB, University of Oxford.
The goal is to traverse the entire k-space plane, i.e. to use our gradients to follow a trajectory that crosses every single point (as defined by the white grid itself), acquiring data (with our receiver coil), one point for each grid coordinate, as we go. Once we have traversed the entire 2D plane (and assuming a suitable data acquisition scheme) we will have 16x16 k-space data points and will then be in a position to apply a 2D FT and get a 16x16 image out. (See Note 4.)
Friday, August 5, 2011
Lessons from epidemiology
Ben Goldacre, psychiatrist, occasional fMRIer and critic of rubbish medical research over at BadScience.net, has produced a radio documentary that covers many of the pitfalls of modern medical science:
Science: From Cradle to Grave
It's aimed at a general audience but there are important reminders for us in fMRI-land.
Confounds abound
Epidemiology is a lot like fMRI when it comes to discriminating correlation from causation. As with many areas of research using human subjects, there are usually limits to the factors that can be controlled between groups, or even across time for an individual subject.
But there are often some simple things that we can measure - like heart and respiration rates during fMRI - and thus control for. Surely we should be measuring (and ideally controlling for) as many parameters as we can get our hands on, especially when the time and expense are comparatively minor. Get as much data as you can!
Resting state fMRI: a motion confound in connectivity studies?
Neuroskeptic has done us a favor and covered a recently accepted paper from Randy Buckner's lab concerning the role of motion when determining connectivity from resting state fMRI. Not only was the amount of motion found to differ systematically between male and female subjects, but this systematic difference was preserved across sessions, suggesting that it is a stable trait. The implications for group studies are discussed in the paper, and Neuroskeptic adds further perspective. It's a warning that all resting state fMRIers should heed.
Non-neural physiology.... again
There are some important limitations to consider, however. While ventricular and white matter regions were used as ways to remove some effects of heart rate and motion, the study did not acquire breathing or heart rate data and so the authors were unable to perform the more advanced BOLD-based model corrections developed by Rasmus Birn and Catie Chang (references below). Instead, they followed what might be considered the "typical" post-processing steps, including global mean signal removal. The methods are fine; my point is to highlight the limitations of the "typical" processing stream in the absence of independent physiological data.
So, could the gender differences be explained with improved physiological corrections? What about the motion correction methods in current use: might they not be up to the job we give them? We'll have to wait for further studies to find out. In the meantime, surely it only makes sense to acquire physiological data with resting state fMRI - heart rate and respiration at the very least, although there are suggestions that time course blood pressure might also be useful - and to try to explain as many confounds as possible before concluding there's a group difference due to brain activity.
References for physiological corrections:
Birn et al., NeuroImage 31: 1536-1548, 2006.
Birn et al., NeuroImage 40: 644-654, 2008.
Chang & Glover, NeuroImage 47: 1381-1393, 2009. Also 1448-1459 in the same issue.
Friday, July 29, 2011
Physics for understanding fMRI artifacts: Part Nine
Conjugate variables redefined
In this post I'm going to provide the first part of a recipe for generating 2D images. It's going to be somewhat algorithmic. I may occasionally mention what a particular step implies, but for the most part I'm going to step through a sequence of events, produce a final recipe for you to follow, then go back and explain what some of the parts mean physically. This isn't the traditional approach to learning about k-space; most text books assume that you need to understand what it all means before you get to learn "the rules of the game." As is my wont, I'm coming at it backwards. My hope is that you will then be able to go back to your text books - I'll tell you where to look for subsequent explanations - and cement a decent understanding of the "why" of k-space, not just the "how."
Conjugate variables revisited
In Part Five of this series I introduced the Fourier transform and conjugate variables. The post focused on the most common pair of conjugate variables: frequency and time. If we have the time domain representation and we want to transform it into its frequency domain equivalent, we apply a (one-dimensional) FT, and vice versa.
But there is another pair of conjugate variables that is more useful and intuitive for imaging applications. (In this case your intuition for one of the variables may not develop until the end of this post, or later! Bear with me.) Whether it's maps, MRIs or architectural plans, the axes of an image are best described in terms of length. If we choose the centimeter as our unit of length, then FTing an axis in cm will yield an axis in 1/cm. You happen to have an intuitive notion of time, frequency and space from everyday life. Don't worry about what the reciprocal of real space means, just accept for now that it exists. We call this reciprocal space k-space because another term for 1/cm is the wavenumber, and the wavenumber is given the symbol k.
Representing pictures in reciprocal space
Let's take a random picture: in this case, a digital photograph of a Hawker Hurricane aeroplane. It's clearly a 2D picture. We have a digital version of it, so we can do mathematical operations on it with a computer. If we do a 2D (digital) FT of the picture we get its representation in 2D k-space:
Saturday, July 16, 2011
Physics for understanding fMRI artifacts: Part Eight
I had initially planned to go into 2D imaging next, but after some consideration I've decided instead to tidy up a few loose ends that follow more naturally from the last post: gradient-recalled echoes and slice selection. Then, in Part Nine I promise to introduce the second in-plane dimension. This route should better allow me to bring everything together at the end of the next handful of posts and permit you to see, and understand, the EPI pulse sequence at a glance. That's the plan. Let's see if we can make it work! (See Note 1.)
Gradient-recalled echoes
In the last post I used a frequency encoding gradient, also called a readout gradient (because it's on while the signal is being recorded, or read out), to produce one-dimensional images - profiles - of water-filled objects. This isn't the typical way that the signal is acquired, however. Instead, it is typical to acquire a refocused, or echoed, signal that has a certain symmetry in time in order to obtain some experimental benefits. I'll mention these benefits later. First, let's see how the gradient echo works.
Here is a simple gradient echo pulse sequence that is adapted from the simple readout gradient-only sequence that was considered in Part Seven:
The first thing to note is that the period of data acquisition (analog-to-digital conversion) has been delayed and now occurs in concert with a readout gradient having a negative sign, rather than being coincident with the positive gradient period labeled 1 in the figure. Also, the duration of data acquisition has been doubled. So instead of acquiring a free induction decay (FID) almost immediately after the 90 degree excitation pulse, we are now acquiring an echo signal at a later time. How and why does this echo form?
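Here's a toy numerical sketch of the answer (the gradient amplitude, timing and spin positions are my own illustrative values, not from any real protocol): spins spread along x dephase under the initial negative gradient lobe, then rephase under the positive readout lobe, and the net signal peaks at the moment the two gradient areas cancel - the gradient-recalled echo.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy gradient-recalled echo (illustrative numbers only, not a real protocol).
gamma = 2 * np.pi * 42.58e6           # proton gyromagnetic ratio, rad/s/T
x = np.linspace(-0.05, 0.05, 201)     # spin positions along x, in metres
dt = 1e-5                             # simulation time step, seconds

# Gradient waveform: a negative lobe for 1 ms, then a positive lobe of equal
# amplitude for 2 ms (the doubled readout period in the figure above).
g = np.concatenate([np.full(100, -2e-3), np.full(200, +2e-3)])   # T/m

phase = np.zeros_like(x)
signal = []
for g_now in g:
    phase += gamma * g_now * x * dt                    # precession at gamma*G*x
    signal.append(abs(np.mean(np.exp(1j * phase))))    # net transverse signal

t_ms = np.arange(1, len(g) + 1) * dt * 1e3
plt.plot(t_ms, signal)
plt.xlabel('time (ms)'); plt.ylabel('|signal| (a.u.)')
plt.title('Echo peaks at 2 ms, when the two gradient areas cancel')
plt.show()
```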
Monday, July 11, 2011
Physics for understanding fMRI artifacts: Part Seven
Magnetic field gradients and one-dimensional MRI
Now that you have a basic understanding of the Fourier transform and some of the practical matters that arise from digital signals, it's time to look at a basic imaging pulse sequence and even make some simple images. We're going to use frequency encoding only for the time being, and for now we're going to make one-dimensional images (also called profiles) so that we can introduce an alternative form of timing diagram to represent a pulse sequence.
A magnetic field gradient alters the local resonance frequency
When a sample is placed into the magnet, all the protons (1-H nuclei) resonate at a near-identical frequency. On a nominal 3 T scanner such as the Trio (whose field is actually 2.89 T) that resonance frequency is approximately 123 MHz, as given by the Larmor equation. If we then impose a magnetic field gradient across the sample - your subject's head, say - instead of having the same resonance frequency uniformly across the brain, there will now be a linear dependence in space (see Note 1):
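In symbols (my shorthand, not the original figure), the resonance frequency becomes position-dependent:

```latex
\nu(x) \;=\; \frac{\gamma}{2\pi}\,\bigl(B_0 + G_x\, x\bigr)
```

where B_0 is the static field and G_x is the strength of the gradient applied along x.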
In a real image we might consider 64 different positions along x. These would define the voxels in one (in-plane) dimension of the image. But for the time being we'll consider just three points in the x direction: the central point, and one point either side.
At the center of the magnet the gradient has no net effect, so the resonance frequency at that point is still 123 MHz. We call this point the null crossing, because all three linear gradients, X, Y and Z, are engineered to have no effect here. (See Note 2.) And to keep things symmetric, the gradient null crossing is placed in the geometric center of the magnet - the isocenter - because that's where the main magnetic field has been engineered to be most homogeneous, and we want to do all our imaging in that location to get the best scanner performance.
Saturday, July 2, 2011
MRI Claymation
It's a long weekend here in the US of A, it's hotter than Hades here in northern California (they said there would be a fog-cooled sea breeze! I want my money back!), and I am only halfway through the next post in the background physics series on account of having spent a very pleasant week in Quebec at the Human Brain Mapping conference. So, in lieu of anything more useful at short notice, I thought I'd share a truly awesome video I just found online, courtesy of Andre van der Kouwe and colleagues at MGH. The first two minutes demonstrate the method - surface renderings from MRIs of clay figures - and then it gets really fun: MRI making an image of itself.
427 views in two years simply doesn't do this work justice. Let's fix that!
Thursday, June 23, 2011
Physics for understanding fMRI artifacts: Part Six
Practical issues arising from the use of the Fourier transform in MRI
Here's the plan for this post. We will complete our look at functions undergoing Fourier transformation; there are some really useful relationships to see and to commit to memory (even when the figures are hand drawn for expediency!). Then we will look at the effects of the FT on real signals. We have two issues to consider: 1) a finite sampling window, and 2) digitization. Off we go!
Fourier pairs
We saw in the last post (Part Five) how a single frequency - a sinusoid - can be represented by a single line - a delta function - in a frequency domain plot. This is an example of a Fourier pair because the relationship holds both ways, i.e. if you take a delta function in the time domain and FT it you get a sinusoid in the frequency domain, and vice versa:
What about other Fourier pairs? Here's another important one. An exponential decay Fourier transforms into what's known as a Lorentzian line:
Again, remember that the exponential can be in either the time domain or the frequency domain, although in MRI we generally deal with exponential decays (signals) in the time domain. It's also worth pointing out here that the faster the exponential decay, the broader the Lorentzian line in the other domain. This inverse relationship has a number of practical consequences for fMRI. I'll come back to this point below.
I found an interactive online tool on the National High Magnetic Field Lab's website that allows you to change the rate of decay as well as the frequency of an oscillation in the time domain, and see the resulting Lorentzian line in the frequency domain. Tinker with it here. (It's Java, it takes a couple of seconds to load.)
Next, let's look at arguably the most useful Fourier pair for MRI. A boxcar (or top hat) function Fourier transforms into a sinc function, where sinc(x) = sin(x)/x:
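If you'd like to check these last two pairs numerically rather than take the hand-drawn figures on trust, here's a short Python sketch (the decay constants and boxcar duration are illustrative values only). It confirms that the faster exponential decay gives the broader Lorentzian, and that the first null of the boxcar's sinc sits at one over the boxcar duration:

```python
import numpy as np

n, dt = 4096, 0.001                    # samples and dwell time (seconds)
t = np.arange(n) * dt
freqs = np.fft.fftshift(np.fft.fftfreq(n, dt))

# 1) Exponential decay <-> Lorentzian: faster decay, broader line.
for T2 in (0.05, 0.2):                 # fast and slow decay constants (s)
    spec = np.real(np.fft.fftshift(np.fft.fft(np.exp(-t / T2))))
    fwhm = np.sum(spec > spec.max() / 2) / (n * dt)     # linewidth in Hz
    print(f"decay constant {T2:.2f} s -> linewidth ~{fwhm:.1f} Hz "
          f"(theory 1/(pi*T2) = {1/(np.pi*T2):.1f} Hz)")

# 2) Boxcar <-> sinc: first null at 1/(boxcar duration).
box = np.zeros(n); box[:200] = 1.0     # a 0.2 s boxcar
sinc_mag = np.abs(np.fft.fftshift(np.fft.fft(box)))
half = n // 2
first_null = freqs[half + np.argmin(sinc_mag[half:half + 30])]
print(f"first sinc null near {first_null:.1f} Hz (expect ~1/0.2 s = 5 Hz)")
```

Quadrupling the decay constant shrinks the linewidth by the same factor - that's the inverse relationship in action.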
Wednesday, June 15, 2011
Physics for understanding fMRI artifacts: Part Five
An introduction to the Fourier transform - what it does and how it works
Moving along from bathroom design and complex numbers, it's time to look at one of the most fundamental mathematical relationships underpinning all of modern MRI: the Fourier transform (FT). Accordingly, I would strongly suggest that you make sure you fully understand everything in this post before continuing into future posts.
You may not believe it by looking at us, but take it from me that MR physicists do "mental FTs" every day as we compare pulse sequences, try to understand artifacts and so on. Many of us (me) are not mathematically inclined, either. But being able to switch your mind between domains, as they are called, is a really useful skill if you want to be good at artifact recognition. An MR physicist will see something in one domain and will immediately try to imagine how it would appear in the alternate domain. Thus, if an image contains an artifact - an intense stripe, say - the MR physicist tries to imagine what signal-acquisition domain feature is implied by it, then tries to track it down. Alternatively, when comparing two different pulse sequences - GRAPPA on versus GRAPPA off, perhaps - the MR physicist will project the implications of each into the image domain in order to comprehend the likely practical consequences. You don't have to remember the equations or even understand the maths itself, but it is really useful to grasp the concepts!
Fourier analysis: the art of decomposition
As you have seen in your introductory texts and in previous video lectures, MRI signals are actually time-varying voltages that are induced in receiver coils as a result of magnetization oscillating (precessing) about the polarizing magnetic field. The detection of time-varying signals has several implications for obtaining MR images, the very first of which is choosing a representation for the information content in the signals.
Consider the two waveforms on the left-hand side of the figure below, which have the same amplitude but differ in their frequency:
Courtesy: Karla Miller, FMRIB, University of Oxford.
It is possible in principle - and in practice for simple examples such as these - to take a ruler and measure the amplitude and frequency and then draw a graph representing the time-varying signals as a plot of amplitude (along y) against frequency (along x), as shown on the right-hand side of the above figure. Piece of cake, right?
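If you'd like to see the ruler-and-graph exercise done numerically, here's a small Python sketch (the 5 and 12 Hz frequencies and unit amplitude are invented for illustration): two sinusoids are summed and the discrete Fourier transform recovers one peak per frequency, with equal heights because the amplitudes match.

```python
import numpy as np

fs, T = 1000.0, 2.0                    # sample rate (Hz) and duration (s)
t = np.arange(0, T, 1/fs)
f1, f2, amp = 5.0, 12.0, 1.0           # example frequencies and amplitude
signal = amp*np.sin(2*np.pi*f1*t) + amp*np.sin(2*np.pi*f2*t)

spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(t)   # one-sided amplitude
freqs = np.fft.rfftfreq(len(t), 1/fs)
print("peaks at", freqs[spectrum > 0.5 * amp], "Hz")  # expect peaks at 5 and 12 Hz
```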
Monday, June 13, 2011
Discriminate equally when recruiting subjects
Inclusion and exclusion criteria possibly don't get the scrutiny in fMRI studies that they should. After all, who reports the complete criteria in the methods sections of articles? At best we get headline exclusions. Without the full questionnaires we, as readers (and reviewers), must trust that no biases were introduced accidentally. Yet, like subtle parameter (mis)settings on the scanner, precisely how experimental and control groups are established is going to have profound effects on your results. They'd better!
A study included in Neuroskeptic's extremely useful weekly roundup of neuroscience makes the point of group bias quite clearly with a hypothetical example, and highlights the possibility that in selecting healthy controls we might accidentally set the bar higher than for the target group, introducing a potential confound to the experiment: Beware the "super well" - why the controls in psychology research are often too healthy.
Obvious? It should be, to a careful experimentalist. But there are insidious ways this selection bias can creep into your studies, making the point worth repeating ad nauseam in my opinion, especially to the waves of newcomers to our field. (Preach this lesson to all incoming students!) I'm going to make the unsubstantiated statement that subject selection (and its alliteration!) ranks above both acquisition and post-processing methods when it comes to biases and the ability to get an incorrect result with fMRI. It's critically important to balance physiological as well as psychological profiles as closely as possible between experimental and control groups.
The insidious biases? Whenever one or other group is difficult to recruit, for demographic reasons or whatever, there is a tendency to let certain things slide in order to net the requisite total. Don't cut this corner! Match as many factors as you possibly can, then note any factors that you can't match and include them in your experiment as covariates of no interest. In this way you might avoid the embarrassment of interpreting a neural difference for something that is better explained by physiology; hematocrit levels, say.
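To make "covariates of no interest" concrete, here's a toy Python sketch with invented numbers (ordinary least squares on a deliberately hematocrit-imbalanced group comparison; this is not any particular fMRI package's model). The unmatched physiology either lands in its own nuisance column or, if omitted, inflates the group effect you care about:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
group = np.r_[np.zeros(n // 2), np.ones(n // 2)]       # 0 = control, 1 = patient
hematocrit = rng.normal(0.45 + 0.02 * group, 0.02)     # unmatched physiology
effect = 0.5 * group + 8.0 * (hematocrit - 0.45) + rng.normal(0, 0.2, n)

# Design matrix: intercept, group (of interest), mean-centred hematocrit (nuisance).
X = np.column_stack([np.ones(n), group, hematocrit - hematocrit.mean()])
b_with, *_ = np.linalg.lstsq(X, effect, rcond=None)
b_without, *_ = np.linalg.lstsq(X[:, :2], effect, rcond=None)

print("group effect, hematocrit as covariate:", round(b_with[1], 2))     # ~0.5
print("group effect, covariate omitted      :", round(b_without[1], 2))  # inflated
```

The arithmetic isn't the point; the point is that anything you couldn't match ends up somewhere, and you'd much rather it end up in a nuisance column than in your group contrast.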
Remember, your fMRI experiment starts when you start recruiting subjects. And the less rigorously you do this fundamental, crucial step the more likely you are to get big error bars, or worse. Even I can't help your data at that point!
Friday, June 10, 2011
If Blogger designed bathrooms...
...you can bet they would insist on power outlets over the bathtub. And two in the shower. (You never know when your laptop might get low on battery.)
Until they move into the construction industry, however, we must content ourselves with their software design skills, such as this gem:
The "Screw your career six ways from Sunday button," a.k.a. the Publish Post button, is carefully placed well away from any button that you might want to use on a repeated basis for other reasons entirely. This layout is cunningly designed for blogging highly contentious posts; the sort where in your draft you might write reminder notes to yourself. Like, say, "Make sure you reference Mike Dood's crappy article on neologisms. Utter bollox!" These notes are, like Tweets from a congressman to female college students, designed to be confidential. You don't want them accidentally distributed to three billion random strangers just because you pushed your finger on the click pad a little too far to the left after that second glass of red wine, for instance. (Yeah, picky I know.) And you'd rather Mike Dood - a colleague in your department - didn't know your true feelings on his work, either. *
Ah, Blogger. Bless. Did James Bond ever have to put up with this sort of crap from Q, I wonder? I don't recall the eject button ever appearing in between the seek button on the radio and the cigarette lighter. Not even on the Lotus Esprit. And I'm sure James would have pointed it out if it had. Not exactly the most robust design, to be honest. ("Ah! Country music! I can't handle that. Let's see what else we can get out here in... Fuuuuuu...!")
* Recovering from accidental publication is as simple as rushing to the Edit Posts page and deleting the offending post, then starting again from scratch now that you have just trashed all your work, all the while praying that not too many people just got e-notified of your new post and managed to see it (and cache it!) before you were able to hit Delete.
Physics for understanding fMRI artifacts: Part F(f)our
(Wondering why the title has F(f)our in it? It's so that Blogger can't trash this post for a third time! I'm giving it a new, unique name. Ha!)
It's finally time to get back to the series of posts on the essential physics concepts that will allow you to interpret and differentiate between acquisition artifacts. There are another five or six posts in this background series, so bear with me. After that we will shift gears and look at "good data," taking some time to assess the normal variations that you can expect to see in time series EPI, and then I promise we'll look at artifacts themselves.
When reality meets imagination
Before we go any further there are a few mathematical properties we need to review. These are actually quite simple relationships that, for the most part, can be explained via a handful of pictures. Like this one....
Maths dude, chillin'.
Complex numbers
The name notwithstanding, complex numbers are quite straightforward to understand from a physical perspective, with a tiny bit of explanation. By the time you've finished reading this post you should have a basic idea of what complex numbers mean and where they come from (they arise quite naturally, as it happens), but for now I am simply going to define some relationships. Hang in there.
We start by defining a so-called imaginary number as any number for which the square is negative. The squares of real numbers - the ones you're used to in everyday life, such as 2, 8.73, -7, pi, and so on - are always positive, whether the number being squared is positive or negative. Thus 2x2 = 4, 8.73x8.73 = 76.2129, -7x-7 = 49 and so on. Squaring a negative number results in a positive number. So how could we possibly get an answer of -4 or -25 out of any square?
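The trick, as you've probably guessed, is to define a new unit, i, whose square is -1; then (2i)x(2i) = -4 and (5i)x(5i) = -25. If you want to poke at this yourself, Python's built-in complex type (which writes the imaginary unit as j, engineers' habit) makes for a thirty-second experiment:

```python
# Squaring a purely imaginary number gives a negative real number:
for z in (1j, 2j, 5j):
    print(z, "squared =", z * z)    # (-1+0j), (-4+0j), (-25+0j)

# A complex number has a real part and an imaginary part; its magnitude
# follows Pythagoras on those two parts:
z = 3 + 4j
print("real", z.real, "imag", z.imag, "magnitude", abs(z))   # 3.0 4.0 5.0
```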
Open letter to Blogger
Blogger, you suck. You utterly suck. You have some major bugs in your software which have caused me to waste inordinate amounts of time recreating posts that oscillate between draft and published status. And when I submit a technical help request I hear nothing. For weeks.
Just now I hit SAVE AS DRAFT on a published post. No big deal you would think, right? I simply went back and hit PUBLISH POST again. And voila! The post showed up on the blog still marked as having been published on Sunday 5th June.... Ah, except that now the post's *content* has reverted to a draft from before 15th May! This, even though I have the archive frequency set to "Daily" after tuifu (Google it) on Friday 13th May. WTF?????
Lucky for you, rather than get in my car and drive down to Mountain View to find out who is in charge of this fiasco, I have a backup of my own. So "all" I have to do is re-type the content and re-upload those images that your bug sought to send back into the ether. Doubly lucky for you, I have an extra few hours of time this morning, having canceled a meeting earlier on. So this isn't nearly the crisis that it could have been, and it is only mildly increasing my blood pressure.
Now that I know what a piece of crap your software really is, I shall be taking other remedial steps to avoid similar snafus in the future; such as never, ever actually publishing a draft with the same name as a real post. Instead, I shall create drafts with names like, oh "Draft," and then when I am ready to actually publish the post I shall create a brand new one, with the intended title, and push that puppy out there. I want to see you bite me in the ass then, Blogger. Come on, give it your best shot!
Love,
practiCalfMRI
Sunday, June 5, 2011
Memorial to an old post
This used to be the post entitled "Physics for understanding fMRI artifacts: Part Four." I had so many problems getting the draft published that I decided I wouldn't risk deleting this actual post, even though I have changed the title and the content. Wherever this thing points inside the "cloud" at Blogger, I want to seal it off like a leaky nuclear reactor and leave it for eternity, hopeful that it will be unable to infect any subsequent posts once it's buried in metaphorical concrete.
The replacement post, "Physics for understanding fMRI artifacts: Part F(f)our" is here.
I hadn't realized until recently just how flaky cloud computing can be. Clearly xkcd is way ahead of me, as usual:
The Cloud.
(Stolen with his blanket permission from xkcd.com)
Tuesday, May 31, 2011
Resting state fMRI: just what can we allow subjects to do?
I'm still waiting to see if Google/Blogger might recover the draft of that fourth post in the series on background physics for fMRI artifact recognition, so in the mean time I thought I'd take a closer look at the only part of the resting-state experiment that I didn't address in detail over the past few months: what we allow the subjects to do during the acquisition.
Potayto, potahto
I'd like to begin with a definition change to assist in understanding the limits of "resting state" fMRI. I'll continue to refer to rs-fMRI as the act of acquiring a block of fMRI data - let's say six minutes' worth - in the absence of any specific externally presented task during the acquisition, with one small exception: we'll assume that the subject is presented with a simple fixation cross and is asked to keep his eyes open. (More on visual and auditory effects on rs-fMRI below.)
Now, though, I'd like to rename the mental activity that is happening during the rs-fMRI acquisition period. The definition of "rest" is tricky because it depends on so many state-dependent factors. What if I'm worried about an upcoming exam? What if I'm hungry and distracted by the need for food? Just because I'm awake during both doesn't make them equivalent periods of "rest," even if I am lying in the scanner staring at a fixation cross in both cases. And, because I want to distinguish between such periods without an explicit task in the discussion to follow, I'm going to term what we do today as "free-thinking state" fMRI, a term used by Cindy Lustig from WashU in a 2003 interview about some of her work.
Okay, so now we have a new definition to work with. How might this state of free thinking be manipulated without fundamentally changing the goal, which is (I assume) to map the largest networks that arise intrinsically, in the absence of explicit, externally driven, goal-directed behavior (in the form of some sort of task presentation)?
Caveats: every fMRI experiment should have some!
Before we look at intentional manipulation of a subject's brain state, let's first review the confounds and limitations that have already been unearthed when it comes to conducting free-thinking state fMRI experiments. These are effects that have been shown to change the networks detected in standard rs-fMRI experiments. (See Note 1.)
Wednesday, May 18, 2011
Blogger bites
So it transpires that Blogger had a Friday the 13th moment and as they are attempting to restore users' accounts and comments they have managed to trash a lot of other drafts in the process. The fourth installment of background physics for fMRI artifact recognition is presently resembling the first few notes I made back in March. Sunday's near-finished version is in the ether somewhere, perhaps. Wish I'd known there was an ongoing problem, I'd have found something else to do that day.
I guess we get what we pay for. Still, this is a powerful lesson for cloud users generally. Don't rely on the cloud!!! Make your own backups!!!!! I have pdf "backups" of all completed posts; I guess it's time to start copy-pasting my own backups as I go, too. Sigh. Opening beer.... :-|
Wednesday, May 4, 2011
Using GRAPPA for fMRI in the presence of subject motion
I've received a few queries about my opinions on the use of GRAPPA for EPI time series, opinions which have been mentioned in passing in earlier posts. In my user training guide/FAQ are some sections that deal with GRAPPA features and performance, but I didn't include an in-depth illustration of the artifacts or the motion sensitivity. So, to help you make a decision on whether GRAPPA is something you should be using in your fMRI experiments, I'm going to post here a few more images and some movies to highlight the problems that can arise in the presence of significant head motion. I'll focus on R=2 accelerated EPI, but the principles hold for higher acceleration factors as well.
Whether or not you ultimately select GRAPPA for your experiment, it is important to make your determination objectively, taking into account your experimental needs, the benefits of the method, its failure modes and prior studies you can rely on for validation. (See "Beware of physicists bearing gifts!")
** Please note that the following information pertains to the GRAPPA implementation available as product on the Siemens Trio/TIM platform with VB15 software. If you have a different Siemens platform or a different vendor's scanner there may be significant differences in the implementation. **
A brief review of the GRAPPA method
If you don't have even a rudimentary understanding of parallel imaging (PI) generally or the GRAPPA method specifically, I would encourage you to stop reading this post now and go read at least one of these articles: Larkman & Nunes (2007) or Blaimer et al. (2004). Then come back when you're ready to proceed with a speedy review.
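In the meantime, as a reminder of why the coil information matters at all, here's a toy Python sketch (a synthetic disc "phantom," not real data, and emphatically not the GRAPPA reconstruction itself): dropping every other phase-encode line, as an R=2 acquisition does, folds the object onto itself half a field of view away. GRAPPA's job is to synthesize those missing lines from neighbouring acquired ones, using the coil sensitivity differences, so that this fold-over never appears.

```python
import numpy as np

N = 64
y, x = np.mgrid[0:N, 0:N]
obj = ((x - N/2)**2 + (y - N/2)**2 < (N/3)**2).astype(float)   # disc phantom

kspace = np.fft.fftshift(np.fft.fft2(obj))
kspace[::2, :] = 0                      # drop alternate phase-encode lines (rows)
aliased = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# The object is halved in intensity and an equally strong copy appears
# displaced by N/2 along the phase-encode axis:
print("centre of phantom :", round(aliased[N//2, N//2], 2))   # ~0.5 (was 1.0)
print("half a FOV away   :", round(aliased[0, N//2], 2))      # ~0.5 (was 0.0)
```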
Thursday, April 21, 2011
Tactical approaches to (re)shimming
In an earlier post I looked at the effects of heating on the temporal stability of EPI data. Particular attention was given to the translations in the phase encoding dimension that arise whenever the scanner drifts off resonance during imaging, through the heating and subsequent cooling of the gradient coil (rapid time constants) as well as of the passive iron shims between the gradient coil and the magnet cryostat (slow time constants). These frequency shifts are most apparent between blocks of EPI as discontinuities, or steps, in a concatenated time series, because of the on-resonance adjustment that precedes the start of each EPI block.
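For a sense of scale, here's a back-of-the-envelope Python calculation (the echo spacing, matrix size and drift values are illustrative examples, not measurements from this scanner). In EPI the per-pixel bandwidth along the phase encoding axis is 1/(number of phase-encode steps x echo spacing), so the apparent translation is simply the frequency offset divided by that bandwidth:

```python
echo_spacing = 0.5e-3     # s between successive phase-encode lines (example)
n_pe = 64                 # phase-encode steps (example)
bw_per_pixel_pe = 1.0 / (n_pe * echo_spacing)      # Hz per pixel along PE

for drift_hz in (5.0, 10.0, 20.0):                 # plausible heating drifts
    print(f"{drift_hz:4.0f} Hz drift -> {drift_hz / bw_per_pixel_pe:.2f} voxel shift along PE")
```

A few tenths of a voxel is small, but it's a bulk shift of the entire image in the phase encoding direction, which is exactly what the realignment algorithm ends up mopping up.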
Fortunately, for a typical modern scanner there is little detrimental effect on the temporal SNR (and statistical power) of the total time series once it has been corrected for motion using a standard rigid-body realignment algorithm. But the outstanding question is this: must we rely so heavily on the realignment algorithm to fix what is really a hardware limitation? Surely, fixing it in software is a hack? (Save the jokes, I've almost certainly heard them! See Note 1, below.) And, as pointed out by El-Sharkawy et al., if the magnetic field is being perturbed sufficiently by heating to cause components with a Z spatial dependence to change, what about all the other spatial dependencies? If the shim is being compromised, why not do something about it?
The standard fMRI protocol
Let's start by reviewing what happens in a standard protocol. On Siemens scanners, at least, the usual approach to an fMRI experiment is to shim at the start of the session and then not re-shim unless there is a substantial change in the prescribed imaging volume (the stack of EPI slices). Shimming is initiated by the first scan that's not a localizer. (See Note 2.) So, if a 3D anatomical, such as an MP-RAGE, is acquired after the localizer and before the first EPI, say, there will be no further shimming during the session (unless requested by the operator).
Assessing the problem
Now, we could continue to investigate shimming as a means to mitigate the effects of heating using experiments on phantoms. That would be a full study in and of itself. To keep this post shorter and more relevant to you, I'm going to jump straight to brain data. That's because when we are talking about shimming (or re-shimming), we are going to mix the effects of scanner heating with our old chum, subject movement. We're going to lump everything together and look at the resultant. Put another way, there's no point in coming up with a putative solution to the heating issue if it could exacerbate the movement issue.
Experimental verification
Very briefly, as part of a vision experiment, shimming was performed (or not) between blocks of 150 volumes of EPI, TR=2 seconds. (Siemens users: See Note 3.) During the first session, shimming was performed between blocks for the first five blocks, then shimming was omitted between blocks for the next five. The time gaps between blocks weren't controlled rigorously; it was whatever was required to set up a new stimulus script plus, when appropriate, the 30-odd seconds to re-shim. A typical inter-block gap was between one and two minutes. In a second session on the same subject the ordering was reversed: shimming was omitted for the first five blocks, then performed between blocks for the final five blocks.
Tuesday, April 19, 2011
Administrative Post: 19 April, 2011 (2/2)
Siemens users may be interested in a user training guide & FAQ that we use at Berkeley to initiate newbies into the ways of the dark side. (Using the Force is often the only way to get an fMRI experiment to work. What, you thought the f stood for functional? Ha!)
The guide is a bit rough - sorry for English-isms and typos - is updated fairly regularly based on popular misconceptions and the like, and is worth exactly what you pay for it. It's free. Use and abuse it however you like. It's a Word document so that you can reorder things, add your own notes, etc. I would appreciate constructive feedback, especially if you find mistakes or have suggestions to improve it, but there's no need to ask permission to use it, change it, replicate it, sell it...
The most recent version of the training guide/FAQ is available from this web page:
http://bic.berkeley.edu/scanning
Locate the file attachment towards the bottom of the page; it's called 3T_user_training_FAQ_19April2011.doc. The most recent contents appear below.
Caveat emptor.
The document is only a component of user training; don't expect to learn how to scan by reading it! Rather, use the tips to extend your understanding, refine your experimental technique and so on. Note also that this document is for a Siemens TIM/Trio (with 32 receive channels) running software VB15. There may be subtle or not-so-subtle differences for the Verio and Skyra platforms, for software VB17, VD11, etc., so keep your wits about you if you're not on a Trio with VB15!
You may have local differences, e.g. custom pulse sequences, that allow you to do things that contradict what you find in this user guide. Talk to your physicist and your local user group before taking anything you find in this guide/FAQ too literally.
Finally, you won't find many (any?) references in this guide/FAQ. It's for the training of newbies, not a comprehensive literature review! If you are seeking further information on something I mention in the guide and you can't find a suitable reference yourself, shoot me an email and I'll do my best to point you in a useful direction.
--------------------------------------
User guide/FAQ contents (as of 19 April, 2011):
Administrative Post: 19 April, 2011 (1/2)
I have renamed the three posts entitled "Diagnosing artifacts in fMRI data: Part x" to be "Physics for understanding fMRI artifacts: Part x." I am developing new posts in the series and through post seven at least the content is all quite theoretical; I'm not actually discussing artifacts or showing data! (But don't worry, I'm limiting the content to the essential concepts required to understand and differentiate fMRI artifacts. It's not going to be an entire MRI physics course!)
Once I've concluded this background series of physics posts (there are another eight or nine posts to come) I'll start a new series that will be entitled something suitable for actual artifact recognition (with data!), along the lines of the original title of the series. Hopefully this re-categorization will allow future readers to establish suitable paths through the posts, when a strictly chronological path probably won't be the best one.
Saturday, April 9, 2011
Shim and gradient heating effects in fMRI experiments
Another week, another tangent. At least this one is directly related to the artifacts that I promise to get back to soon!
In this post I will review the nature and typical magnitudes of heating effects in a scanner being used for fMRI. Ever wondered why you sometimes observe discontinuities, or 'steps,' in a time series comprising the concatenation of multiple blocks of EPI data? What causes these discontinuities? Are they a problem for fMRI? And are there ways to reduce or eliminate these discontinuities at the acquisition stage? To begin with, some background.
Electrical energy in, thermal and vibrational energy out
When you run the gradients to generate images, a lot of heat is produced through vibrations (friction) of the gradient coils - the Lorentz forces that result from putting electrical current through copper wires immersed in a magnetic field - as well as through direct (resistive) electrical mechanisms. Much of that heat is removed via water cooling inside the gradient set. Water typically enters at about 20 C and may exit the scanner as high as 30 C. Modern gradient designs are pretty efficient at removing heat from the gradient coil. (I've done throwaway tests on my Siemens Trio that suggest the steady state temperature of the return cooling water is achieved after about 15 minutes of continuous scanner operation.) But - and this is the crux of this post - the heat imparted to the scanner isn't removed at precisely the same rate that it is being produced. In other words, the scanner is unlikely to be in a truly steady thermal state while you're using it.
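Purely for intuition, here's a toy first-order thermal model in Python (every number below is invented to mimic the fifteen-odd-minute settling I mentioned, not a measurement from any scanner): heat goes in at a constant rate while the cooling water removes it in proportion to the temperature excess, so the return temperature climbs towards a steady state exponentially rather than tracking the instantaneous heat load.

```python
from math import exp

tau = 4.0                  # minutes: assumed effective thermal time constant
T_in, T_rise = 20.0, 8.0   # inlet temperature and assumed steady-state rise (C)

for t in (0, 5, 10, 15, 30):                       # minutes of scanning
    T_out = T_in + T_rise * (1 - exp(-t / tau))
    print(f"t = {t:2d} min : return water ~ {T_out:.1f} C")
```

Stop scanning and the same lag runs in reverse, which is why the scanner spends much of a typical session somewhere between thermal states.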
Tuesday, March 15, 2011
Go faster MRI at Berkeley!
With apologies for the continued delay to the artifact recognition series of posts - I've been distracted with some scanner problems - I thought I'd do a quick post on a recent methodological development that's generated some buzz in the field as well as in the media. The media buzz:
ABC 7 News video
UC Berkeley news center story
And in case you want to read the actual publication, it was published at PLoS ONE in early January. The work is part of the Human Connectome Project, an NIH-funded consortium involving Washington University (St Louis), Oxford, Minnesota and Berkeley. David Feinberg is the Berkeley representative.
The implications of these methodological developments could be quite substantial, possibly allowing better interpretation of brain dynamics than is currently permitted with the typical fMRI temporal resolution of two seconds or so. Of course, there are caveats. One is that the BOLD response is still low-pass filtered. And another is that the new "go faster" method involves several separate steps, each of which tends to exacerbate head motion sensitivity. Still, it looks good on highly motivated volunteers!
Saturday, February 19, 2011
Physics for understanding fMRI artifacts: Part Three
Coffee break! Time for a few tangents
In this post we're going to do a whistle-stop tour of some background concepts that you should have seen before. None of the information in today's series of videos is essential to understanding what's coming up later, when we get to k-space, the EPI pulse sequence and artifacts, but it's interesting and useful to review. Besides, these videos are well made, entertaining and freely available, so we might as well use them! So, if you have the time, go grab a coffee and spend the next hour being reminded of things you probably knew at some point in a dim and distant past. You might even learn something about scanner hardware you didn't know before.
The anatomy of a miniature scanner
Don't worry too much about following every detail in today's first video, which dissects a miniature MRI scanner. It contains the same basic components as your fMRI scanner. Below, I've given a few explanatory notes on the coils and components that are most relevant to us.