Understanding how distances in k-space manifest as distances in image space is quite straightforward: all you really need to remember is that the relationships are reciprocal. The discrete steps in k-space define the image field-of-view (FOV), whereas the maximum extent of k-space defines the image resolution. In other words, small in k-space determines big in image space, and vice versa. In this post we'll look first at the implications of this reciprocal relationship for image appearance, then at the simple mathematical relationships between lengths in k-space and their reciprocal lengths in image space.
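Written out, the two reciprocal relations are FOV = 1/Δk and Δx = 1/(2·k_max) ≈ FOV/N. Here's a quick numerical sanity check, using an illustrative 24 cm FOV and a 256-point matrix (hypothetical values, not from any particular scanner):

```python
# Reciprocal relationships between k-space sampling and image geometry.
# Illustrative acquisition parameters (not from any particular scanner):
n = 256            # matrix size (number of k-space samples)
fov = 0.24         # field of view in meters (24 cm)

delta_k = 1.0 / fov          # k-space step (cycles/m) sets the FOV
k_max = (n / 2) * delta_k    # maximum sampled spatial frequency
delta_x = 1.0 / (2 * k_max)  # nominal pixel size, set by the k-space extent

# The two relations are consistent with each other: delta_x = fov / n
assert abs(delta_x - fov / n) < 1e-12
```

Note how the pairing works: the *small* quantity in k-space (the step Δk) controls the *big* quantity in image space (the FOV), and the *big* quantity in k-space (k_max) controls the *small* one (the pixel size).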
Spatial frequencies in k-space: what lives where?
I mentioned in the previous post that there's no direct correspondence between any single point in k-space and any single point in real space. Instead, in k-space the spatial properties of the object are "turned inside out and sorted according to type" (kinda) in a symmetric and predictable fashion that leads to some intuitive relationships between particular regions of k-space and certain features of the image.
Here is what happens if you have just the inner (left column) or just the outer (right column) portions of k-space, compared to the full k-space matrix arising from 2D FT of a digital photograph (central column):
|An illustration of the effect of nulling different regions of k-space from a full k-space matrix, applied to a digital picture of a Hawker Hurricane aircraft. The full k-space matrix and corresponding image are shown in the central column.|
Inner k-space only:
The inner portion of k-space (top-left) possesses most of the signal but little detail, leading to a bright but blurry image (bottom-left). (See Note 1.) Most features remain readily apparent in the blurry image, however, because most contrast is preserved; image contrast is due primarily to signal intensity differences, not edges. If this weren't true we would always go for the highest signal-to-noise MRIs we could get, when in practice what we want is the highest contrast-to-noise images we can get! Imagine an MRI that had a million-to-one SNR but no contrast. How would you tell where the gray matter ends and the white matter begins? Without contrast no amount of signal or spatial resolution would help. So much for SNR alone!
Outer k-space only:
If we instead remove the central portion of k-space (top-right) then we remove most of the signal and the signal-based contrast to leave only the fine detail of the image (bottom-right). Strangely, though, it's still possible for us to make out the main image features because our brains are able to interpret entire objects from just edges. In actuality, however, there is very little contrast between the dark fuselage of the Hurricane, the dark shadow underneath it and the dark sky. Our brain infers contrast because we know what we should be seeing! If we were to try doing fMRI, say, on a series of edges-only images we would run into difficulties because we process the time series pixelwise. With a relatively low and homogeneous signal level you can bet good money the statistics would be grim.
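To make the nulling experiment concrete, here is a minimal numpy sketch. A synthetic image (a bright square on a dark background) stands in for the photograph, and the 128×128 matrix and radius-16 mask are illustrative choices, not the figure's actual parameters:

```python
import numpy as np

# Synthetic "object": a bright square on a dark background
img = np.zeros((128, 128))
img[40:88, 40:88] = 1.0

k = np.fft.fftshift(np.fft.fft2(img))  # k-space with DC at the center

# Circular mask selecting the inner (low spatial frequency) region
y, x = np.indices(k.shape)
inner = np.hypot(y - 64, x - 64) <= 16

low = np.fft.ifft2(np.fft.ifftshift(k * inner)).real    # inner only: blurry
high = np.fft.ifft2(np.fft.ifftshift(k * ~inner)).real  # outer only: edges

# Because the FT is linear and the two masks partition k-space,
# the filtered images sum back to the original:
assert np.allclose(low + high, img, atol=1e-10)

# The outer-only image has lost the DC term, hence almost no "signal":
assert abs(high.mean()) < 1e-10
```

Displaying `low` and `high` reproduces the behavior described above: the inner-only image is bright but blurred, while the outer-only image retains the edges but has a mean of essentially zero, which is exactly the low, homogeneous signal level that would make pixelwise fMRI statistics so grim.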