Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Thursday, February 22, 2024

Core curriculum - Mathematics: Linear algebra V

 

With some understanding of basic matrix manipulations, we're ready to begin using matrices to solve systems of linear equations. In this post, you'll learn a few standard tools for solving small systems - systems defined by a small number of equations - by hand. Naturally, for the larger systems found in fMRI you'll use a computer to solve the equations, but you should understand what's going on when you push the buttons.


A11. Elementary row operations and elimination

 
This is just your standard algebraic manipulation for solving multiple simultaneous equations - e.g. dividing both sides of an equation by some constant to simplify it - but with the equations represented as matrices:
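
Here's the same procedure as a sketch in code (NumPy, with a toy 2x2 system of my own invention, not one from the video):

```python
import numpy as np

# Toy system:
#   2x + 3y = 8
#   4x +  y = 6
# Augmented matrix [A | b]:
M = np.array([[2.0, 3.0, 8.0],
              [4.0, 1.0, 6.0]])

# Row op 1: subtract 2 * row 0 from row 1, eliminating x from the second row.
M[1] = M[1] - 2 * M[0]   # row 1 is now [0, -5, -10]

# Row op 2: divide row 1 by -5 so the y coefficient is 1.
M[1] = M[1] / -5.0       # row 1 is now [0, 1, 2], i.e. y = 2

# Row op 3: subtract 3 * row 1 from row 0, then divide row 0 by 2.
M[0] = M[0] - 3 * M[1]   # row 0 is now [2, 0, 2]
M[0] = M[0] / 2.0        # row 0 is now [1, 0, 1], i.e. x = 1

print(M[:, 2])           # [1. 2.], so x = 1 and y = 2
```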

 


A12. Cramer's Rule for solving small linear systems

According to Wikipedia:

In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations.
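
Translated into code, Cramer's Rule for the same toy system as in the elimination sketch above looks like this (again just a NumPy sketch):

```python
import numpy as np

# The same toy system:
#   2x + 3y = 8
#   4x +  y = 6
A = np.array([[2.0, 3.0],
              [4.0, 1.0]])
b = np.array([8.0, 6.0])

det_A = np.linalg.det(A)          # -10; must be nonzero for a unique solution

# Replace one column of A at a time with b, then take ratios of determinants.
A_x = A.copy(); A_x[:, 0] = b     # column 0 (the x column) replaced by b
A_y = A.copy(); A_y[:, 1] = b     # column 1 (the y column) replaced by b

x = np.linalg.det(A_x) / det_A    # (-10) / (-10) = 1
y = np.linalg.det(A_y) / det_A    # (-20) / (-10) = 2
print(x, y)                       # 1.0 2.0
```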

 



________________




Sunday, February 18, 2024

Core curriculum - Mathematics: Linear algebra IV

 

Before getting back to the lectures from 3Blue1Brown, try this part review, part preview:



Now let's get back into the meaning of these operations in a little more detail.

 

A9. The dot (or scalar) product 

The dot product is a way to measure how much two vectors interact in a common dimension. If the vectors are orthogonal to each other, they don't interact in a common dimension, so their dot product is zero. This is like asking how much north-south movement is involved in an east-west heading: none. If two vectors are perfectly parallel, it's equivalent to both vectors lying on the number line, and our standard (scalar) multiplication rules apply. In between, we use a little trigonometry - the product of the two lengths and the cosine of the angle between the vectors - to determine their (dot) product.
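
If you'd like to see the numbers, here's a quick NumPy sketch of those three cases - orthogonal, parallel and in between (vectors chosen for illustration):

```python
import numpy as np

# Orthogonal vectors: no shared dimension, dot product zero.
east = np.array([3.0, 0.0])
north = np.array([0.0, 2.0])
print(np.dot(east, north))      # 0.0

# Parallel vectors: ordinary scalar multiplication of the lengths.
print(np.dot(east, 2 * east))   # 18.0 = 3 * 6

# In between: |a||b|cos(theta) gives the same answer as the component sum.
a = np.array([3.0, 0.0])
c = np.array([1.0, 1.0])        # 45 degrees away from a
cos_theta = np.dot(a, c) / (np.linalg.norm(a) * np.linalg.norm(c))
print(np.degrees(np.arccos(cos_theta)))                    # 45.0
print(np.linalg.norm(a) * np.linalg.norm(c) * cos_theta)   # 3.0, same as np.dot(a, c)
```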

 


Still lacking an intuition? This excellent summary from Better Explained (slogan: "Learn Right, Not Rote") should do the trick.


A10. The cross (or vector) product

Both the dot and cross products affect dimensionality. With the dot product, we find how much two vectors interact in one dimension. The cross product of two vectors is perpendicular to them both, telling us how much rotation arises in a third dimension.
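
A quick NumPy sketch of that perpendicularity (axes chosen for illustration):

```python
import numpy as np

# The cross product is perpendicular to both inputs.
a = np.array([1.0, 0.0, 0.0])       # x-axis
b = np.array([0.0, 1.0, 0.0])       # y-axis
c = np.cross(a, b)
print(c)                            # [0. 0. 1.], the z-axis
print(np.dot(c, a), np.dot(c, b))   # 0.0 0.0: perpendicular to both

# Parallel vectors produce no rotation in a third dimension: zero vector.
print(np.cross(a, 2 * a))           # [0. 0. 0.]
```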





A useful real-world application of the cross product is computing the torque vector. Torque is the rotating force generated by pulling or pushing on a lever, such as a wrench or a bicycle crank. The lever moves in one plane but produces a rotation orthogonal to that plane.
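
Here's a sketch of the torque computation (the lever length and force are invented for illustration):

```python
import numpy as np

# Torque = r x F. A 0.25 m wrench along x, pushed with 40 N along y:
r = np.array([0.25, 0.0, 0.0])   # lever arm (metres)
F = np.array([0.0, 40.0, 0.0])   # applied force (newtons)
tau = np.cross(r, F)
print(tau)   # [ 0.  0. 10.]: 10 N.m about the z-axis, orthogonal to
             # the x-y plane in which the wrench moves
```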

 

 

Torque is also fundamental to the origins of the MRI signal. We will encounter it later in the physics section. Can you take a guess how torque might be relevant to the MRI signal? Hint: it has to do with the interaction of a nuclear magnet (the protons in H atoms) with an applied magnetic field.

This article from Cuemath covers the rules for computing dot and cross products. And here are a couple of useful visualizations:

 


 

________________



 

 

Saturday, February 17, 2024

Core curriculum - Mathematics: Linear algebra III

 

Now we start to think about transformations between dimensions, e.g. taking a 2D vector into a 3D space. Non-square matrices come up frequently in engineering and research applications, including fMRI analysis, so you'll want a good understanding of their meaning. 

 

A8. Non-square matrices

Let's look at a simple physical interpretation of changing the number of dimensions.
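
Here's a minimal NumPy sketch of a dimension-changing transformation, using a 3x2 matrix of my own choosing:

```python
import numpy as np

# A 3x2 matrix maps 2D vectors into 3D: two columns (inputs), three rows (outputs).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # third output coordinate = sum of the two inputs

v2d = np.array([2.0, 3.0])
v3d = A @ v2d
print(v3d)   # [2. 3. 5.]: a 3D vector, though all outputs lie on a 2D plane in 3D
```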



We previously saw how to invert a square matrix. But how do we invert a non-square matrix?
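
One common answer - whether or not the video goes this route - is the Moore-Penrose pseudoinverse. A hedged NumPy sketch:

```python
import numpy as np

# A non-square matrix has no ordinary inverse; the pseudoinverse is the
# standard workaround.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])     # the 3x2 matrix from the sketch above

A_pinv = np.linalg.pinv(A)     # a 2x3 matrix
print(np.round(A_pinv @ A))    # 2x2 identity: the pseudoinverse undoes A

# For unreachable targets it returns the least-squares best fit, which is
# how model fitting (the GLM) is typically done in fMRI analysis.
b = np.array([2.0, 3.0, 6.0])  # not reachable by A, since 2 + 3 != 6
print(A_pinv @ b)              # least-squares solution, approx [2.33 3.33]
```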



 

________________



Friday, February 9, 2024

Core curriculum - Mathematics: Linear algebra II


Continuing the series on linear algebra using the lectures from 3Blue1Brown, we are getting into some of the operations that will become mainstays of fMRI processing later on. It's entirely possible to do the processing steps in rote fashion as an fMRI practitioner, but understanding the foundations should help you recognize the limits of different approaches.


A4. Matrix multiplication as composition

In this video we see how to treat more than one transformation on a space, and how the order of transformations is important.
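
In code (a NumPy sketch with a rotation and a shear I picked for illustration):

```python
import numpy as np

# Order matters: rotating then shearing is not shearing then rotating.
theta = np.pi / 2   # 90 degrees
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

v = np.array([1.0, 0.0])
# Matrices apply right-to-left, like function composition:
print(np.round(shear @ (rotate @ v)))   # rotate first, then shear: [1. 1.]
print(np.round(rotate @ (shear @ v)))   # shear first, then rotate: [0. 1.]
```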

 



Q: While brains come in all shapes and sizes, we often seek to interpret neuroimaging results in some sort of "average brain" space, or template. We need to account for the variable position and size of anatomical structures. However, we also have the variability of where that brain was located in the scanner, e.g. because of different amounts and types of padding, operator error, and so on. When do you think it makes the most sense to correct for translations and rotations in the scanner: before or after trying to put individual brain anatomy into an "average brain" space? Or does it not matter?


A5. Three-dimensional linear transformations

Now we're going to move on from 2D to 3D spaces. Same basic rationale, just more numbers to track!
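
Here's a small NumPy sketch of that idea, with made-up landing spots for the basis vectors:

```python
import numpy as np

# A 3D linear transformation is pinned down by where it sends the three
# basis vectors; those landing spots become the columns of a 3x3 matrix.
i_hat = np.array([1.0, 0.0, 1.0])   # where (1,0,0) lands
j_hat = np.array([0.0, 2.0, 0.0])   # where (0,1,0) lands
k_hat = np.array([0.0, 0.0, 1.0])   # where (0,0,1) lands
A = np.column_stack([i_hat, j_hat, k_hat])

v = np.array([1.0, 1.0, 1.0])
print(A @ v)   # [1. 2. 2.] = 1*i_hat + 1*j_hat + 1*k_hat
```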

 


A6. The determinant 
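
The video tells the full story; as a brief numerical anchor, the determinant is the factor by which a transformation scales areas (in 2D) or volumes (in 3D). A NumPy sketch with matrices of my own choosing:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 2.0]])   # stretch x by 3 and y by 2
print(np.linalg.det(A))      # 6.0: a unit square becomes a 3 x 2 rectangle

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second column is just twice the first
print(np.linalg.det(B))      # 0.0 (to floating point): the plane collapses to a line
```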

 



A7. Inverse matrices, column space and null space

 


 

Perhaps it's not fully clear why we might need the inverse matrix. It turns out to be the way to achieve the equivalent of division using matrices. To cement this insight, let's look at using an inverse matrix to solve an equation without division. Leaving aside the slightly goofy intro, it's a useful tutorial on the mechanics of determining an inverse matrix.
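
Here's the idea as a NumPy sketch, with a toy system of my own choosing:

```python
import numpy as np

# "Division" with matrices: to solve A x = b, multiply both sides by inv(A).
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
b = np.array([4.0, 11.0])

A_inv = np.linalg.inv(A)
print(A_inv @ b)             # [2. 1.]: the solution x = 2, y = 1

# In practice np.linalg.solve(A, b) is preferred: same answer, computed
# without forming the inverse explicitly.
print(np.linalg.solve(A, b))
```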



________________



 



Saturday, February 3, 2024

Core curriculum - Mathematics: Linear algebra I

 

What is linear algebra? To get us going, I'm going to use the excellent lecture series by 3Blue1Brown and do my best to add some MRI-related questions after each video. Hopefully the connections won't be too cryptic. Don't worry if you can't answer my questions. It's more important that you understand the lectures. No doubt you'll find other material on YouTube and web pages to clarify things.

Let's start with a couple of definitions. While you'll find many examples online, for our purposes we can assume that a linear system is one where the size of the output or outputs scales in proportion to the input or inputs. The take-home pay of a worker paid an hourly rate is linear. They might receive their base amount, say 40 hours per week, plus some amount of overtime at twice their hourly rate. The total is still the linear combination of the base plus overtime amounts.
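
In code, with made-up hours and rates:

```python
# Take-home pay is a linear combination of base and overtime hours.
def weekly_pay(base_hours, overtime_hours, hourly_rate):
    # Scale any input and the output scales in proportion: that's linearity.
    return base_hours * hourly_rate + overtime_hours * (2 * hourly_rate)

print(weekly_pay(40, 0, 25.0))   # 1000.0
print(weekly_pay(40, 5, 25.0))   # 1250.0: base amount plus overtime, added linearly
```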

Non-linear systems don't have this simple proportionality. Gravity is the classic physics example. The strength of the interaction between two massive objects changes as the reciprocal of the squared distance (r^2) between them, that is, as 1/r^2. Finding yourself dangling ten meters in the air above the earth is very different from finding yourself 1000 km above it. In the first case you are about a second away from impacting the ground. In the second case you are at the altitude of satellites, and your more immediate health concerns are lack of oxygen and the cold.
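
A quick sketch of the numbers (constants approximate):

```python
# Gravity scales as 1/r^2 with distance r from Earth's centre, not linearly.
R_EARTH = 6.371e6     # metres, approximate
g_surface = 9.81      # m/s^2 at the surface

def g_at_altitude(h_metres):
    r = R_EARTH + h_metres
    return g_surface * (R_EARTH / r) ** 2

print(g_at_altitude(10))     # ~9.81: ten metres up changes nothing measurable
print(g_at_altitude(1e6))    # ~7.3: even 1000 km up, gravity is still substantial
```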

And what about the term algebra? It's just fancy speak for using symbols to represent the relationships between things that vary. We're going to be interested in changes at different positions in space - points in an image - and so we shall eventually use matrices to perform linear algebra. But we have to build up to a matrix from its skinnier cousin, the vector.


A1. Vectors: Essence of linear algebra

 


Q: We will use both a physicist's and a computer scientist's view of vectors at different points in the fMRI process. Given what you know today, can you guess where these different viewpoints might come up? Hint: fMRI is based on MRI, which is a physical measurement technique, while fMRI is typically the analysis of a time series of a certain type of dynamic MRI scan.

 

A2. Linear combinations, span and basis vectors

Q: Changes of basis are quite common in MRI. Even the way we usually label image axes involves a change of basis. The magnet bore direction is labeled the z-axis, while left-to-right is the x-axis and up-down is the y-axis. We refer to this assignment as the lab (or magnet) frame of reference. Now consider an axial MR image of a person's brain. An axial slice lies in the x-y plane in the magnet basis (or lab frame if you prefer). Yet we don't generally label the image with (x,y) dimensions. Instead we use (L-R, A-P) where L-R is left-to-right and A-P means anterior-to-posterior. This is an anatomical basis. How might an anatomical basis be more useful than using a magnet basis in MRI?


A3. Linear transformations and matrices

 


Q: We usually label images using a basis (or reference frame) related to the subject's anatomy, i.e. with the (orthogonal) axes labeled head-to-foot (HF), left-to-right (LR) and anterior-posterior (AP). This means if a subject's head isn't perfectly straight in the magnet - let's say the head is rotated 20 degrees to the left - the brain still appears straight in the 2D image. But here's the thing. The MRI hardware is controlled using the (x,y,z) "lab" reference frame. The anatomical and lab bases can be related to each other through a rotation matrix. Can you write down what a rotation matrix might look like that relates the subject's anatomical reference frame to the scanner's lab (x,y,z) reference frame?
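
If you want to check your answer afterwards, here's one possible form as a NumPy sketch; the exact signs and axis assignments depend on conventions, which vary by scanner vendor:

```python
import numpy as np

# A head rotated by angle theta about the magnet's z-axis (for a supine
# subject) relates the anatomical and lab frames via a rotation matrix
# acting in the x-y (axial) plane.
theta = np.radians(20)   # the 20-degree rotation from the question
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

anat_LR = np.array([1.0, 0.0, 0.0])   # subject's left-right axis
print(R @ anat_LR)   # approx [0.94 0.34 0.]: tilted away from the lab x-axis
```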

________________