Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Sunday, January 13, 2019

Arterial carbon dioxide as an endogenous "contrast agent" for blood flow imaging


I nearly called this post Low Frequency Oscillations - part III since it closely follows the subject material I covered in the last two posts. But this is a slight tangent. Following the maxim "One scientist's noise is another scientist's signal," in this post I want to look at the utility of systemic LFO to map blood flow dynamics, an idea that was suggested in 2013 by Lv et al. based on the earlier work from Tong & Frederick that I reviewed last post. There is also at least one review of this topic, from 2017.

Let me first recap the last post. There is sufficient evidence, supported by multiple direct and indirect lines of inquiry, to suggest a blood-borne contrast mechanism that produces a prominent fluctuation at around 0.1 Hz in resting-state fMRI data. (Here, I assume a standard T₂*-weighted EPI acquisition for the resting-state fMRI data.) Furthermore, the same fluctuation can be found anywhere in the body. That is, the fluctuation is truly systemic. The best explanation to date is that non-stationary arterial CO₂ concentration, brought about by variations in breathing rate and/or depth, produces changes in arterial tone by virtue of the sensitivity of smooth muscle walls to the CO₂ dissolved in arterial blood. I shall assume such a mechanism throughout this post, while noting that the actual mechanism is less critical here than whether there is some utility to be exploited.

In the title I put "contrast agent" in quotes. That's because the CO₂ isn't the actual contrast agent, but a modulator of contrast changes. When the smooth muscle walls of an artery sense a changing CO₂ concentration, they either expand or contract locally, modulating the blood flow through that vessel. In the brain, a change in a blood vessel's diameter causes a concomitant change in cerebral blood volume (CBV), hence in cerebral blood flow (CBF). There may be a local change in magnetic susceptibility corresponding to the altered CBV in the arteries and capillaries. But the altered CBF will definitely produce the well-known change in magnetic susceptibility in and around the venous blood that can be detected downstream of the tissue, i.e. the standard BOLD effect. The actual contrast we detect is by virtue of changes in T₂* (for gradient echo EPI), plus the possibility of some flow weighting of the arterial blood depending on the combination of flip angle (FA) and repetition time (TR) being used. As a shorthand, however, I shall refer to arterial CO₂ as the endogenous contrast agent because whenever an artery senses a change in CO₂ concentration, there will be a concomitant change in vessel tone, and we will see a cascade of signal changes arising from it. (See Note 1 for some fun with acronyms!)


Time shift analysis

Most published studies attempting to exploit systemic LFO have used fixed time shifts, or lags, in their analysis. You just need a few minutes' worth of BOLD fMRI data, usually resting state (task-free). The analysis is then conceptually straightforward:
  1. Define a reference, or "seed," time course;
  2. Perform cross correlations between the "seed" and the time course of each voxel, using a set of time shifts that typically spans a range of 15-20 seconds (based on the expected brain hemodynamics);
  3. Determine for each voxel which time shift gives the largest cross correlation value, and plot that time shift (the delay, in seconds) to produce a lag map.
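The three steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from any of the cited papers; the sign convention, lag range and TR are my own assumptions for the example:

```python
import numpy as np

def lag_map(data, seed, tr=2.0, max_lag_s=10.0):
    """Fixed time shift analysis (steps 1-3 above): for each voxel, find the
    time shift at which its time course best correlates with the seed.

    data : (n_voxels, n_timepoints) array; seed : (n_timepoints,) array.
    Positive lag means the voxel's signal arrives AFTER the seed's.
    Assumes voxel time courses are not constant (nonzero std).
    """
    n_vox, n_t = data.shape
    max_shift = int(round(max_lag_s / tr))        # lag range in units of TR
    best_r = np.full(n_vox, -np.inf)
    best_lag = np.zeros(n_vox)
    for s in range(-max_shift, max_shift + 1):
        # correlate over the samples where the shifted series overlap
        if s >= 0:
            a, b = data[:, s:], seed[:n_t - s]
        else:
            a, b = data[:, :n_t + s], seed[-s:]
        az = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
        bz = (b - b.mean()) / b.std()
        r = (az * bz).mean(axis=1)                # Pearson r per voxel
        better = r > best_r
        best_r[better] = r[better]
        best_lag[better] = s * tr                 # delay in seconds
    return best_lag, best_r
```

Real implementations add interpolation between TR increments and acceptance criteria on the correlation values, as described below.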

There are experimental variables, naturally. The duration of the BOLD time series varies, but most studies to date have used the 5-8 min acquisition that's common for resting-state connectivity. Some studies filter the data before starting the analysis. Different studies also tend to choose different seeds. There are pros and cons for each seed category that I assess in the next section. Time shifts are usually increments of TR, e.g. the lag range might be over +/- 5 TRs for a common TR of 2 sec. And, in producing the final lag maps, some studies apply acceptance criteria to reject low correlations.
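The pre-filtering step is straightforward to implement. Here is a sketch of a zero-phase Butterworth band-pass matching the 0.009 - 0.09 Hz pass-band used by Siegel et al.; the filter order is my assumption, not theirs:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ts, tr=2.0, low=0.009, high=0.09, order=2):
    """Zero-phase Butterworth band-pass of a BOLD time series.

    ts : 1D time series; tr : repetition time in seconds;
    low, high : pass-band edges in Hz (must be below Nyquist, 1/(2*TR)).
    filtfilt runs the filter forwards and backwards, so no phase shift
    is introduced - important when the quantity of interest is a lag.
    """
    fs = 1.0 / tr                                     # sampling frequency, Hz
    b, a = butter(order, [low, high], btype='bandpass', fs=fs)
    return filtfilt(b, a, ts)
```

Note that the zero-phase property matters here: a causal filter would add its own delay to every voxel and bias the lag estimates.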

Let's look at an example time shift analysis, from Siegel et al. (2016). The raw data were filtered with a pass-band of 0.009 - 0.09 Hz. For cross correlations, they used as their seed time course the global gray matter (GM) signal. Cross correlations were computed voxel-by-voxel for nine delays of TR = 2 sec increments, covering +/- 8 sec, followed by interpolation over the lag range. The time shift corresponding to the maximum cross correlation was assigned that voxel's lag value in the final map, as shown here:

Fig. 1 from Siegel et al. (2016).


They define negative time shifts as voxels whose maximum cross correlation leads the mean GM signal - the light blue regions in part (c) - and positive time shifts as voxels whose maximum cross correlation lags the mean GM signal. Dark blue represents zero lag, i.e. mostly the GM region used as the seed time course.

What are we to make of the heterogeneity in the lag map in part (c) above? An asymmetry we can understand because this is from a stroke patient. Even so, there doesn't seem to be any clear anatomical distinction in the image. Certainly, some of the red-yellow voxels could represent large draining veins on the brain surface, but there are deeper brain regions that also show up red. What's going on? We need to explore the seed selection criteria in more detail.


How should we choose the seed time course?

With a fixed seed time course to be used as a voxel-wise regressor for the whole data set, there are essentially three situations to consider: a venous seed, an arterial seed, or a brain tissue seed.

Taking the venous seed first, the superior sagittal sinus (SSS) offers a robust BOLD effect and, being on the surface of the brain, can be identified and segmented reasonably easily. Here is a group average lag map produced by Tong et al. (2017) using a seed in SSS (top row) compared to a time-to-peak (TTP) map derived from the first pass kinetics of a bolus injection of gadolinium contrast agent (i.e. dynamic susceptibility contrast imaging):

Fig. 5 from Tong et al. (2017).

Notice how there are more late - that is, venous - voxels (red-yellow) in the lag map produced from the BOLD data (top row) compared to the DSC data (bottom row). The DSC is more heavily weighted towards early arrival, that is, towards the arterial side, with more blue areas. And this makes sense because the DSC method is aimed at extracting a perfusion index. The kinetic model used in DSC imaging aims to map the blood arriving in the brain tissue, not the blood leaving the tissue. In other words, DSC is intentionally weighted towards the arterial side of the hemodynamics. The problem with the SSS signal is that it is already quite far removed from whatever happened in the brain upstream. After all, it is blood that has already transited brain tissue and is being directed down towards the jugular veins where it will leave the head entirely. Making strong correlations with arterial flow on the upstream side of the brain is thus a tricky proposition. It can be done, but the complications introduced by the brain tissue in between suggest caution.

What happens, then, if we select an arterial seed instead of a venous seed? Such a comparison was presented recently by Tong et al. (2018) using the MyConnectome data from Russ Poldrack: 90 resting-state fMRI scans collected over a two year period. The internal carotid arteries (ICA), the internal jugular veins (IJV) and the SSS were identified on T₁- and T₂-weighted anatomical scans, since these high-resolution 3D images cover the neck as well as the whole brain. Six time courses were used in time shift analyses: left and right ICA, left and right IJV, SSS, and the global mean brain signal (GS). The time shift range was +/- 15 sec, to ensure full passage of the blood through the head. On average, over the 90 sessions, the maximum cross correlations arose for ICA signals leading GS by between 2.8 and 3 seconds, while the SSS time course lagged the GS time course by 3.6 seconds, and the IJV signals lagged GS by around 4.3 seconds. (There was more scatter in left IJV data than in right IJV.) The accumulated delay from ICA to IJV was 7 to 7.5 sec, consistent with full passage of blood through the head.

Fig. 3 from Tong et al. (2018).

There was, however, an interesting finding. While the ICA signal correlated strongly with the other signals, it was always negatively correlated with the GS, the SSS and the IJV (see figure above). That is, the contrast change on the passage of the CO₂ was a signal decrease, not a signal increase as in the downstream regions. This must be a consequence of the particular form of BOLD contrast in the internal carotids. Tong et al. speculate that it is a small change in CBV producing a small extravascular (negative) BOLD signal change from the volume magnetic susceptibility difference between the artery (containing blood near 100% saturated with oxygen) and surrounding neck tissue. This is an interesting technical finding, and it has implications if we want to change the acquisition (see later), but it's also perfectly understandable as a conventional, albeit unusual, form of BOLD contrast.

So, using arterial seeds instead of venous seeds works in a test case. Great! What are the implications for using an arterial seed for perfusion mapping more generally? As with the venous seed, I am primarily concerned with the dynamics once the seed reaches the brain. Clearly, all the blood that is flowing through the internal carotid artery at any moment in time isn't destined for the same brain location or even the same tissue type. Some of the blood in our arterial reference signal ends up in GM, some in WM. The passage of blood is different through these two tissues, imposing different subsequent delay characteristics that are carried through to the venous blood. This is a well-known problem in arterial spin labeling (ASL), where the mean transit time (MTT) is known to differ between GM and WM, as well as with age, and with pathology. In ASL methods, one remedy is to use multiple post-labeling delays and measure a range of MTT rather than relying on a single delay and assuming the entire brain has the same response. Keep this point in mind because I will argue that a fixed lag analysis suffers from the same fundamental problems. Thus, while there are features of an arterial seed that "survive passage of the brain" into the venous system and the draining veins, the brain tissue adds complexity and ambiguity in the form of many potential sources for modulation of the dynamics along the way.

Which brings us to brain tissue as a seed time course. Some groups have used the global mean signal. I am against this on basic physiological grounds: we shouldn't combine the time courses of GM and WM because we know that in a healthy brain the blood flow in GM is 3-5 times higher than in WM. Using a combined GM + WM signal is tantamount to temporal smoothing.

An alternative is to use the GM signal only. This is better, but still not ideal because the GM is modulated by both the sLFO signals that we are trying to measure, plus all sorts of neurovascular modulations due to ongoing brain activity that are the focus of fMRI studies. With a GM seed there is the possibility of feedback effects across the entire GM from changes in arousal, through sympathetic nervous system responses. There will also be local fluctuations depending on the underlying brain activity. Doubtless, some of these fluctuations will be averaged away over the many minutes of a typical acquisition, but we can't assume they will average to zero. Thus, if we take as our reference time course a signal that has neurovascular effects already "baked in," our regression is going to be working simultaneously to assess systemic effects plus at least some fraction of ongoing brain activity. The neurovascular activity is considered "noise" in this interpretation! Lags in GM directly attributable to neural causes are around a second according to Mitra et al. This could be sufficient to cause regional variations that could appear as pathology when assessing patient groups.


Recursive time shift analysis

There is one approach that allows us to overcome many of the aforementioned limitations. And that is to move away from a single seed time course altogether. We need more temporal flexibility, a bit like using multiple transit delays in ASL to compensate for variations in MTT. For lag analysis, the recursive approach developed by Tong & Frederick is an elegant way to "ride along" with the systemic fluctuation as it propagates through the entire vascular system. The basic logic is to look upstream or downstream one TR at a time.

The time course from a single voxel in a large vessel is designated the reference regressor: the regressor with zero lag. After voxel-by-voxel cross correlations with the reference regressor, a new time series regressor is determined. It is the average of the time series of all voxels satisfying a particular cross correlation threshold. The new reference time series has the highest cross correlation with the original (zero lag) regressor at a temporal offset of one TR. This “moves” the regressor through time by one TR, tracking the propagation of the fluctuations inherent in the original time series. The spatial origins of the new regressor don’t matter. The new regressor simply comprises the time series of all voxels that obey an appropriate threshold criterion. A second cross correlation is then performed, searching for voxels that give the highest correlation with the second regressor time series, but at a further offset of one TR (which is now two TRs away from the original time series). The process repeats until the number of voxels selected as the strongest cross correlation, offset by one TR, is less than some predefined number. The algorithm appears in part a of the figure below.

The iterative procedure can be applied in reverse; that is, the temporal offset between the reference regressor and the next time series is set to be –TR. A negative lag simply means that the cross correlation will be maximized for fluctuations in the search time series that precede fluctuations in the reference time series. Thus, one may iterate forwards (positive TR lags) or backwards (negative TR lags) in time, relative to the start point. Refinement of the initial seed selection can also be made based on the results of a first pass through the data. One can even use the time series corresponding to the highest number of voxels obtained in a first pass as the optimal seed regressor for a second analysis; a form of signal averaging. The recursive approach is robust against initial seed conditions. That is, the recursive correlations tend to converge to a similar result whether one starts with mean GM signal, an SSS seed or almost any random seed. In part b of the figure below, a blue circle indicates that the number of voxels sharing fluctuations with a single voxel seed is quite small; only 200-300 voxels. A black circle indicates the set of voxels to be used in a second, optimized analysis. There is a set of 5000 voxels that have common fluctuations in the band 0.05 – 0.2 Hz.
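To make the recursion concrete, here is a minimal sketch of a single forward pass (positive lags only). The correlation threshold, stopping rule and array shapes are illustrative assumptions on my part; real implementations, such as Blaise Frederick's rapidtide, add filtering, sub-TR interpolation and the refinement passes described above:

```python
import numpy as np

def corr_with(data, reg):
    """Pearson correlation of each voxel time course (rows) with a regressor."""
    d = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    r = (reg - reg.mean()) / reg.std()
    return (d * r).mean(axis=1)

def recursive_regressors(data, seed, n_steps=10, thresh=0.5, min_voxels=100):
    """One forward pass of the recursive procedure: at each step, find the
    voxels whose signals lag the current regressor by one TR, then average
    their full time courses to form the next regressor.

    data : (n_voxels, n_timepoints); seed : (n_timepoints,).
    Returns the list of regressors, one per TR step downstream.
    """
    regs = [seed]
    for _ in range(n_steps):
        # correlate data[:, 1:] against reg[:-1]: a high r means the voxel's
        # fluctuation arrives one TR later than the current regressor's
        r = corr_with(data[:, 1:], regs[-1][:-1])
        keep = r > thresh
        if keep.sum() < min_voxels:              # stop when too few voxels survive
            break
        regs.append(data[keep].mean(axis=0))     # new regressor, one TR downstream
    return regs
```

The backward pass is the same idea with the offset reversed (correlate data[:, :-1] against reg[1:]).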

Once a full set of regressor waveforms has been produced recursively, the entire set of regressor time courses is used in a GLM to produce a set of z maps of the voxel locations obtained at each time shift. The entire recursive procedure is shown in the figure below. Example z maps produced from the GLM appear in part c.


Fig. 2 from Tong & Frederick (2014).

 
To view the passage of the systemic flow through the brain, each z map in the set is normalized and can then be played as a movie, one frame for each TR increment assessed. In the movie below we see the z maps obtained at 2.5 frames per second (fps), i.e. TR = 0.4 sec, played back at 6.7 fps, for a changing three-plane view through the brain. The top row was produced with the optimal seed, the bottom row was produced with a local seed. As expected, the results of the recursive procedure converge to similar results regardless of the starting seed.


 (The original Supplemental Movie 1 can be downloaded here.)


The flow pattern in the movie is described by Tong & Frederick thus:
"The LFOs are “piped” into the brain though big arteries (e.g., internal carotid artery) with no phase shift. They then follow different paths (arterioles, capillaries, etc.) as branches of the cerebral vasculature diverge. It is expected that each signal would evolve independently as it travels along its own path. The observation that some of them have evolved in a similar way, and at a similar pace, is probably due to the uniformity in the fundamental structures of the cerebral blood system, likely reflecting the self-invariant properties of fractal structures found throughout biological systems."

An alternative way to view the data is as a lag map which plots the arrival time in seconds, relative to the mean arrival time assigned zero:



 
The regions fed by middle cerebral arteries appear in blue and have the earliest arrival times, while the venous drainage is colored red-yellow. Note also how symmetric the arrival times appear. For a normal, healthy brain, this is as we should expect.

At this point we can go back and revisit the issue of seed selection: fixed time shift analysis or recursive approach? Is there really a benefit to the recursive approach? Aso et al. recorded three 5-minute blocks of BOLD data under conditions of rest, a simple reaction time task (ITI of 6-24 sec), or 10 second breath holds with 90 sec normal breathing. The arrival time maps (for which they use a reversed sign convention; negative values are later arrival) for the three conditions are somewhat similar but have noticeable differences. This is the group averaged response (N=20) using the recursive time shift method:

Fig. 6D from Aso et al. (2017)

The distribution of arterial (early arriving) regions, displayed above in yellow-red, is clearly different even as the general patterns are preserved across conditions. The intra-class correlation coefficient is above 0.7. This fits with our general assumptions about BOLD data: there's a lot going on, and sorting out the parts is like unmaking a sausage!

The most striking result is in their comparison of the recursive procedure to a fixed SSS seed analysis. Here, they show maps of the intra-class correlation coefficient (ICC) for the three conditions. The recursive analysis (right column) yields ICC values significantly greater than with the SSS seed analysis (left column):

Fig. 7 from Aso et al. (2017)

The recursive procedure maintains an ability to track the hemodynamics even as there are behavioral differences imposed on the time series. The SSS seed produces more variable results, consistent with the idea that low frequency fluctuations in a large venous vessel are quite different to the spatial-temporal spread imposed by brain tissue. The recursive method, while still biased towards the venous side of the brain due to greater BOLD sensitivity, does a better job of tracking the blood dynamics upstream, into the brain tissue.


Applications of time shift analysis

Assessing the blood flow patterns in normal brain is very interesting. The extensive work that went into establishing sLFO as a major source of BOLD variability is highly relevant to the many approaches that try to account for physiological variations as sources of "noise" in resting-state fMRI data in particular. And we've seen that the recursive procedure is able to find differences between rest, a simple task and breath holding. So far so good. What else can we do with it?

To date, I have found only three studies that have used the recursive analysis: the original Tong & Frederick paper and Aso et al., both reviewed in the previous section, and a paper by Donahue et al. that I review in the next section because it uses a gas challenge rather than normal breathing. Here, I'll quickly summarize the clinical applications of the fixed seed analysis.

The earliest reference I can find to clinical application is the work of Lv et al. mentioned in the introduction. In addition to the early work from Lv et al. on stroke patients, Amemiya et al., Siegel et al., Ni et al. and Khalil et al. also assessed stroke or chronic hypoperfusion. Chen et al. used time shift analysis to look at reperfusion therapy after acute ischemic stroke. Christen et al. looked at lags in moyamoya patients, and Satow et al. looked at idiopathic normal pressure hydrocephalus. All these studies observed interesting findings in the patient groups, and many compared the time shift analysis of BOLD data to other imaging methods (e.g. MRA, DWI, DSC) for validation. I would encourage you to read the studies if you are interested in the particular pathologies. But as a representative example, I'll dig into the study by Siegel et al. because they compared the time shift analysis of BOLD data to pulsed arterial spin labeling (PASL). (See Note 2.) In regions of hypoperfusion, the regional CBF measured by PASL was observed to decrease monotonically with the BOLD hemodynamic lag in patients at ~2 weeks after a stroke, as shown in part (b) below:

From Siegel et al. (2016).

But what changes might have persisted a year after the stroke? Would the CBF and time shift relationship be the same?
"These results raise the question of whether hypo-perfusion in the acute post-stroke period recovers in parallel with lag. To address this question, we measured change in lag (1 year minus 2 weeks) versus change in rCBF for all ROIs showing lag >0 subacutely. Although a significant relationship was present between recovery of lag, and recovery of rCBF (Pearson’s r = -0.12; P = 0.039), the variance explained by this relationship was small (r² = 0.015). This may be because overall, measures of perfusion did not change significantly between two weeks and one year post-stroke (two-week average = 85.7% of controls, one-year average = 86.4% of controls; paired t-test P = 0.3719). Thus, while a strong relationship between lag and rCBF is present sub-acutely, areas in which lag recovers do not necessarily return to normal perfusion."
The CBF remains depressed, relative to controls, but the lags resolve somewhat, as illustrated below. In this sample, four out of five patients have radically different lag maps at 1 year compared to 1-2 weeks post-stroke:

Part of Fig. 2 from Siegel et al. (2016). Lag maps for five patients at 1-2 weeks (left) and 1 year (right) after stroke.

That is very interesting. It implies that the net delivery of blood - recall that CBF has units of ml blood per 100 g tissue per minute - remains impoverished but the velocity of that blood through the ischemic region has normalized somewhat. Why might this be? If we consider CBF as a rough proxy for metabolic rate, then a simple explanation is that the metabolism of the tissue affected by the stroke is as low at 1 year as it was at 2 weeks. There is probably an infarct - cells that died in the hours after the stroke - creating a persistent lower demand for glucose (and oxygen) within the broader region affected by the stroke. The vascular control mechanisms themselves, on the other hand, appear to have recovered somewhat, so the blood dynamics appear more normal even as the regional CBF remains low. (See Note 3.)

This example illustrates that time shift analysis offers different, complementary information on a vascular disorder than is measured in PASL. Similar utility was found in the other clinical investigations where other forms of imaging, including DSC, diffusion imaging and MR angiography, were compared to the time shift analysis. There really does seem to be some unique information on offer in the time shift analysis. (See Note 4 for a bonus example, using caffeine.)


Can we increase sensitivity to blood dynamics?

The work presented so far has used standard BOLD data. Admittedly, some studies used multi-band EPI to shorten the TR, but the parameter settings were standard for a typical fMRI acquisition. That is, TE was set to generate sensitivity to T₂* changes, the flip angle was typically set to the Ernst angle, and so on. No special consideration was given to the venous bias in the acquisition. As a consequence, the data being analyzed for time lags is always likely to do better on the venous side of the brain than the arterial side, even with the more rigorous recursive time delay method. Does it have to be this way? Can we boost the sensitivity so that the recursive procedure can track arterial and venous dynamics with something approaching equal sensitivity? There are three broad approaches to ponder.

1. Change the arterial CO₂ concentration:

Rather than relying on the endogenous fluctuations of CO₂ during normal breathing, Donahue et al. used a transient hypercapnia challenge to boost arterial and venous changes simultaneously. They delivered alternating 3-minute periods of medical grade air or carbogen (5% CO₂ + 95% O₂) through a mask, a procedure that has been used extensively to study cerebrovascular reactivity. The 3-minute periods during which blood gases are controlled necessitate a change in the temporal lag search window. Donahue et al. assessed lags over the range -20 to +90 seconds relative to the boxcar that describes the five 3-minute periods, using the recursive time shift method with the boxcar as the initial time series.

As we might expect for long duration events, the resulting delay maps are more homogeneous with the carbogen challenge than we've seen using endogenous BOLD fluctuations. The inherent variability of the ongoing physiology is dominated by the response to carbogen. A delay map from a normal volunteer yields almost uniform time-to-peak (TTP) for GM, and a slightly delayed TTP for some WM regions:

Fig. 2 parts (b) and (c) from Donahue et al. (2016).

But the relatively flat normal brain response makes it easy to see changes due to major disruption of the blood supply. The reduced flow through certain arteries in moyamoya patients is immediately evident as changes in TTP:

Fig. 3 parts (e) and (f) from Donahue et al. (2016).


Do we have to use long gas challenges? The 3-minute periods used by Donahue et al. are well suited to major vascular pathology such as stroke and moyamoya disease, where the transit delays can be severely abnormal and conventional measures like ASL are limited. Note the delays of 20+ seconds in the moyamoya patients compared to controls in the last two figures. That sort of disruption would be invisible to ASL methods because the label decays with T₁. If we expect the blood dynamics of interest to be more subtle, such as might arise from a pharmaceutical or a foodstuff, might shorter respiratory challenges be used in order to preserve the dynamics over a shorter range of time delays? I don't see why not. Breath holding might provide an easier alternative than the delivery of gases, too. Another consideration is the pattern of challenge used. Regularizing respiratory responses into a boxcar might not be as informative as using some amount of temporal variability. Perhaps a breathing task that samples short and long breath holds, with changes of normal breathing pace and depth, or a gas delivery paradigm that is more "stochastic." There are plenty of options to test.

2. Change the true contrast agent:

In mapping systemic LFO with BOLD we're using the paramagnetic properties of deoxyhemoglobin on the venous side, and the weak diamagnetism of arterial blood plus perhaps a small amount of inflow weighting on the arterial side. What if we moved to an exogenous contrast agent instead? For example, we might try to measure the re-circulation of a gadolinium contrast agent once the agent is assumed to have attained a steady state blood concentration. The standard approach in DSC is to map the uptake kinetics on the first pass, immediately after contrast injection. At that point, the measurement is considered complete. But it would be interesting to look at the fluctuations arising from changes in respiration rate and depth - the CO₂ sensitivity should be the same - in the minutes afterwards. The blood signal would be fully relaxed by the gadolinium, eliminating BOLD. Essentially, we would have a CBV-weighted signal. This would probably shift the bias from the venous to the arterial side. Sensitivity might take a hit as a result, but that would likely depend on the signal level surrounding arteries and capillaries. The sensitivity to CBV changes could be quite high, given the presence of gadolinium in the blood.

3. Change the pulse sequence parameters:

This is probably where most of us would start: with a standard BOLD resting state approach, no respiratory challenge or exogenous contrast agent. What options do we have in the pulse sequence parameters? Many fMRI studies use the Ernst angle for GM when establishing the flip angle (FA) at the TR being used. Can we boost the arterial signal by increasing inflow sensitivity with higher FA and/or shorter TR? (For an excellent review on inflow effects see Gao & Liu (2012).) We might use MB-EPI to attain a sub-second TR yet maintain a 90 degree excitation for maximum T₁ weighting. On the other hand, MB-EPI has a rather complex excitation pattern along the slice dimension whereas conventional EPI can be applied as either interleaved or contiguous (descending or ascending) slice ordering. Multiband forces a form of spatial interleaving so that the spin history of blood moving along the slice direction is complicated. Still, it's worth a look.
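For reference, the Ernst angle mentioned above is easy to compute. A quick sketch, where the T₁ and TR values are merely illustrative:

```python
import numpy as np

def ernst_angle_deg(tr_s, t1_s):
    """Ernst angle for a spoiled gradient echo sequence: the flip angle that
    maximizes the steady-state signal, given by cos(theta_E) = exp(-TR/T1)."""
    return np.degrees(np.arccos(np.exp(-tr_s / t1_s)))

# gray matter T1 ~ 1.4 s at 3 T (an assumed, literature-typical value)
fa_long = ernst_angle_deg(2.0, 1.4)    # TR = 2 s   -> roughly 76 degrees
fa_short = ernst_angle_deg(0.5, 1.4)   # TR = 0.5 s -> roughly 46 degrees
# fully-relaxed inflowing blood gives maximal signal at 90 degrees regardless
# of TR, so a flip angle above the Ernst angle penalizes static tissue more
# than inflowing spins - one way to add inflow weighting
```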

For gradient echo EPI we generally aim to set TE ~ T₂* for maximum BOLD sensitivity. For lag mapping, shorter TE may reduce the venous bias while simultaneously boosting the SNR. Spin echo EPI is another possible option. SE-EPI is used to refocus extravascular BOLD arising from large veins (check out the recent paper by Ragot & Chen for a comprehensive analysis of SE-EPI BOLD), leaving the intravascular and small vessel extravascular BOLD responses. (The BOLD signal in SE-EPI is typically about half that for GE-EPI at 3 T.) Using spin echoes also changes the T₁ recovery dynamics, something which might help add inflow sensitivity to the final signal. Now, SE-EPI does generally reduce the brain coverage per unit time, because the minimum TE is longer for SE-EPI than for GE-EPI, but multiband approaches could render the coverage acceptable. It may even be the case that 180 degree refocusing at short TR is inefficient, as well as driving up SAR, so lower excitation and refocusing FAs would be worth exploring.
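The TE ~ T₂* rule of thumb follows from a simple single-compartment signal model, S = S₀·exp(−TE·R₂*), in which the signal change for a small ΔR₂* scales as TE·exp(−TE/T₂*). A quick numerical check, using an assumed, typical 3 T gray matter T₂* of 40 ms:

```python
import numpy as np

t2s = 40.0                            # ms; assumed GM T2* at 3 T
te = np.linspace(1.0, 100.0, 991)     # candidate echo times, ms
# BOLD contrast for a small delta-R2* is proportional to TE * exp(-TE/T2*)
contrast = te * np.exp(-te / t2s)
best_te = te[np.argmax(contrast)]     # the curve peaks at TE = T2*
```

Shortening TE below T₂* therefore trades some BOLD contrast for higher raw SNR, which is the trade suggested above for lag mapping.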

Another acquisition issue given only partial consideration in time shift analysis work so far is the duration of a standard BOLD acquisition. How long should we acquire? With ASL methods, a single CBF map typically requires about 4-5 mins of data to attain reasonable SNR. Over this time we assume (usually only implicitly) that the neural activity variations are averaged so that the CBF is a reasonable reflection of the subject's baseline perfusion. For highly aroused or highly caffeinated subjects this assumption could be challenged, but whatever is true for ASL measures should apply equally well (or equally badly) to time shift analysis of BOLD data. Until someone shows us differently, then, I would suggest at least 4 minutes of data.


Conclusions

This post has looked at a method to image vascular dynamics. My intent wasn't for fMRI applications per se, even though there is a lot of overlap when the starting point is resting-state fMRI data. Rather, it's a different interpretation of resting-state data that could be informative for comparison with other blood imaging methods such as ASL. That's what I'm going to be doing with it near term. If your interests are strictly on the neuronal side, however, and you think mapping sLFO has potential for de-noising purposes, I suggest you read the sections entitled "How do systemic LFOs affect resting functional connectivity?" and "How to deal with systemic LFOs in fMRI" in my last post, and then look at the papers by Jahanian et al., Erdogan et al., Anderson et al., and, of course, the recursive method paper by Tong & Frederick. I think the recursive approach has advantages over the fixed seed approach, as I've explained in this post. Code for the recursive lag method is available from Blaise Frederick's github. At least one person took the plunge after my last post.

I'm going to be comparing the recursive lag mapping method to pseudo-continuous ASL (PCASL) in 2019. I'll try to post regular updates as I progress.

_________________________



Notes:

1.  We have Blood Oxygenation-Level Dependent (BOLD) contrast, so shouldn't we simply define Arterial Blood Carbon Dioxide-Level Dependent contrast? That would give us ABCD-LD. Doesn't exactly trip off the tongue.

What about Arterial Blood CO₂-Level Dependent contrast, ABCO₂LD? Messy.

Or, how about ARterial CArbon DIoxide-LEvel DEpendent contrast, ARCADILEDE? Sounds better, but also sounds like a new drug for irritable bowel syndrome. ("Ask your doctor if ARCADILEDE is right for you!" *Side effects may include vacating a lucrative career in industry, multiple grant disappointments, and frequent criticism from Reviewer 3.)


2.  At some point I will do an "introduction to ASL" blog post because there doesn't seem to be as widespread an understanding of the method as I'd once dared to hope:


And there I was thinking you were all just being stubborn! There is sufficient evidence to suggest that a baseline CBF map, computed from a good ASL acquisition, can be a useful normalizing step for fMRI across populations when one expects systematic changes in perfusion, e.g. with aging, disease or on administration of drugs. I will be covering - eventually - this normalizing procedure in the blog post series on modulators of fMRI responses. But I'll do an intro to ASL before it, based on some work I'm doing separately with pseudo-continuous ASL (PCASL).


3.  If you're not familiar with CBF as a measure of perfusion, these last few sentences may appear contradictory. The choice of cerebral blood "flow" as the term describing the volume delivery of blood per unit time to a fixed volume of tissue - that is, perfusion - is a rather unfortunate one, since it is easily confused with the sort of laminar flow we think about for fluids in pipes. Perfusion - CBF - isn't a velocity but a rate of mass replacement. If you're confused, think about the difference in blood delivery between normal GM and WM. GM has 3-5 times the metabolic demand of WM, so its CBF is around 3-5-fold higher. But the GM dynamics, as assessed by time shift analysis, aren't 3-5-fold faster. The mean transit time into WM is only a second or so longer than it is to GM. There's simply less volume replacement of blood happening in the WM tissue. How does that come about? Mostly, it's due to lower vascular density. There are fewer capillaries in the WM. The net speed of blood through GM and WM capillaries can therefore be essentially the same, but the GM perfusion is considerably higher by virtue of the greater density of capillaries.
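The density-versus-speed distinction can be made concrete with some toy numbers. This little sketch (the capillary densities are purely illustrative, not measured values) shows how GM perfusion can be several-fold higher than WM perfusion even when the blood speed through individual capillaries is identical:

```python
# Perfusion as a rate of volume replacement, not a velocity:
#   perfusion ~ (capillary density) x (volume flow per capillary)
# Identical per-capillary flow, different density -> different perfusion.

flow_per_capillary = 1.0    # arbitrary units, assumed the same in GM and WM
gm_capillary_density = 400  # capillaries per mm^3 (illustrative number)
wm_capillary_density = 100  # capillaries per mm^3 (illustrative number)

gm_perfusion = gm_capillary_density * flow_per_capillary
wm_perfusion = wm_capillary_density * flow_per_capillary

print(gm_perfusion / wm_perfusion)  # 4.0: a 4-fold perfusion difference
```

Since the per-capillary speed is unchanged, the transit times through GM and WM can remain similar while the perfusion differs severalfold, which is exactly the GM/WM situation described above.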


4.  For those of you who might be interested in pharmacological manipulations, a very recent study by Yang et al. shows that time shift analysis can detect changes in blood dynamics due to caffeine ingestion. This study again utilized 90 scans available from the MyConnectome project. Yang et al. compared the 45 scans obtained on days when the subject had consumed coffee to the 45 scans conducted caffeine-free. (I leave open the possibility that Russ was simply more grumpy sans coffee and that drove the results ;-) The analysis used the same procedure as described above for the arterial to venous seed comparison (see the third figure in this post, from Tong et al. (2018)). Using seeds in superior sagittal sinus (SSS), internal carotid arteries (ICA) and the global signal (GS), Yang et al. found the transit time from ICA to SSS was almost a second longer without caffeine, comprising a delay of approximately half a second between the ICA and GS and another half a second between the GS and the SSS:

Fig. 2 from Yang et al. (2018).

The response was reasonably uniform across the brain. The results were consistent with vasoconstriction, an expected response to caffeine.

In the discussion section of the paper the authors dig into the implications of slowed blood dynamics. In particular, they try to reconcile the slower dynamics with the reduced CBF that has been reported in earlier studies on caffeine consumption. (There are no ASL data in MyConnectome to make direct comparisons so the comparisons are necessarily between studies.) There is a suggestion that mapping CBF with ASL and blood dynamics with BOLD data will greatly enhance our understanding of the neural and vascular effects under a variety of conditions. Lots of complementary information!
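For readers who haven't seen time shift analysis in code form, the core of the seed-based lag estimate is just a cross-correlation maximized over temporal shifts. Below is a minimal sketch on simulated data; it is not the Tong & Frederick or Yang et al. implementation, and the TR, lag and seed names are assumptions for illustration only:

```python
import numpy as np

# Minimal time shift analysis sketch: find the lag between two "seed"
# time series, e.g. an arterial (ICA) and a venous (SSS) regressor, by
# maximizing their cross-correlation over a range of temporal shifts.

tr = 1.0                   # repetition time in seconds (assumed)
t = np.arange(0, 300, tr)  # 5 minutes of simulated resting-state samples
true_lag_s = 1.0           # simulated ICA -> SSS transit delay

# Idealized ~0.1 Hz systemic LFO; the "SSS" copy arrives 1 s later.
ica = np.sin(2 * np.pi * 0.1 * t)
sss = np.sin(2 * np.pi * 0.1 * (t - true_lag_s))

max_shift = int(5 / tr)    # search lags within +/- 5 s
shifts = np.arange(-max_shift, max_shift + 1)
corrs = [np.corrcoef(ica[max_shift:-max_shift],
                     np.roll(sss, -s)[max_shift:-max_shift])[0, 1]
         for s in shifts]
best_lag = shifts[np.argmax(corrs)] * tr

print(best_lag)  # 1.0: recovers the simulated 1 s delay
```

In real data the sLFO waveform is of course not a clean sinusoid, the signals are band-pass filtered first, and the recursive approach re-seeds as it tracks the waveform through the brain, but the lag-by-maximum-correlation step is the same idea.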

