
Realignment / Motion Correction
When the head moves during an experimental run or between runs (within-run vs. between-run movement), some of the images will be acquired with the brain in the wrong location. As a consequence, motion can cause a given voxel to contain signal from two different tissue types, or a loss of data (e.g. at the edges of the imaging volume). Furthermore, movement of the head alters the homogeneity of the magnetic field, which has been shimmed for one particular head position. Finally, head motion can affect the timing and pattern of excitation: each excitation pulse is targeted to one slice at a time, so a moving head passes through different slices during acquisition. The goal of motion correction is therefore to adjust the series of images so that the brain is always in the same position. This is achieved by a process called co-registration.

Correction by Co-registration
The general process of spatially aligning two image volumes is known as co-registration. For motion correction, successive image volumes in the time series are co-registered to a reference volume. A rigid-body transformation is used for this purpose: it assumes that the size and shape of the two volumes to be co-registered are identical (it is the same brain), so one volume can be superimposed on the other by a combination of three translations and three rotations. The rigid-body transformation therefore has three translational parameters and three rotational parameters.
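As an illustration, the following pure-Python sketch (hypothetical values, not taken from any particular package) applies such a six-parameter rigid-body transformation to a single coordinate:

```python
# Illustrative sketch: a rigid-body transformation built from
# three rotation angles and three translations, p' = R p + t.
import math

def rotation_matrix(rx, ry, rz):
    """Combined rotation about the x, y and z axes (angles in radians)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    # One common composition order; conventions differ between packages.
    return matmul(Rz, matmul(Ry, Rx))

def rigid_body(point, tx, ty, tz, rx, ry, rz):
    """Rotate a 3-D point, then translate it."""
    R = rotation_matrix(rx, ry, rz)
    rotated = [sum(R[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [rotated[0] + tx, rotated[1] + ty, rotated[2] + tz]

# A 2 mm translation in x combined with a small rotation about z:
p = rigid_body([10.0, 0.0, 0.0], 2.0, 0.0, 0.0, 0.0, 0.0, math.radians(3))
```

Real packages estimate these six numbers per volume; the composition order of the three rotations is a convention that differs between tools.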

Computer algorithms can identify the set of parameters that provides the best match to the reference volume. A cost function (e.g. the voxel-by-voxel sum of squared or absolute intensity differences) quantifies how well one image matches another; a perfect co-registration between the reference volume and the corrected volume would yield a difference of zero. After a set of realignment parameters has been determined, the original data have to be resampled in order to estimate the values that would have been obtained had no head motion occurred. This process is called spatial interpolation.
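A minimal sketch of such a least-squares cost function, here on flattened toy "volumes" (the intensity values are made up for illustration):

```python
# Sketch of a least-squares cost function: the voxel-wise sum of squared
# intensity differences between a candidate volume and the reference.
def ssd_cost(volume, reference):
    """Lower is better; 0 means a perfect voxel-by-voxel match."""
    return sum((v - r) ** 2 for v, r in zip(volume, reference))

reference = [100.0, 110.0, 95.0, 102.0]
aligned   = [100.0, 110.0, 95.0, 102.0]   # perfectly realigned volume
shifted   = [ 90.0, 100.0, 110.0,  95.0]  # same brain, moved by one voxel

ssd_cost(aligned, reference)  # -> 0.0
ssd_cost(shifted, reference)  # -> 474.0, i.e. motion increases the cost
```

The optimisation then searches over the six rigid-body parameters for the transformation that minimises this cost.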

What is the right reference?
The reference volume can be a particular volume of the session or run, e.g. the first volume, or it can be an average volume. The reason to use the first (usable) volume is that it is usually acquired directly after the anatomical scan, which should result in minimal movement relative to the anatomical scan. An average volume might, however, be preferable because it contains more information and all volumes are more or less equally different from it. On the other hand, the average volume will be built from unaligned volumes, which is somewhat problematic. This can be overcome by two-pass averaging (e.g. in AFNI, see below), where all volumes are first realigned to an unaligned average volume; an "aligned" average is then built and all volumes are realigned to it in a second pass.
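The two-pass idea can be sketched on toy 1-D "volumes", using a brute-force integer shift in place of the full six-parameter estimation (everything here is a deliberate simplification, not any package's actual algorithm):

```python
# Toy sketch of two-pass averaging on 1-D "volumes" (lists of intensities).
def shift(vol, s):
    """Shift a 1-D signal by s samples, zero-padding at the edges."""
    n = len(vol)
    return [vol[i - s] if 0 <= i - s < n else 0.0 for i in range(n)]

def best_shift(vol, ref, max_shift=3):
    """Integer shift that best matches vol to ref (least squares)."""
    def cost(s):
        return sum((m - r) ** 2 for m, r in zip(shift(vol, s), ref))
    return min(range(-max_shift, max_shift + 1), key=cost)

def average(vols):
    return [sum(col) / len(vols) for col in zip(*vols)]

def two_pass_realign(vols):
    # Pass 1: align everything to the (unaligned) average.
    ref = average(vols)
    pass1 = [shift(v, best_shift(v, ref)) for v in vols]
    # Pass 2: rebuild the average from aligned volumes, align again.
    ref2 = average(pass1)
    return [shift(v, best_shift(v, ref2)) for v in vols]

base = [0.0, 0.0, 1.0, 5.0, 9.0, 5.0, 1.0, 0.0, 0.0]
vols = [base, shift(base, 1), shift(base, 2)]   # simulated head drift
realigned = two_pass_realign(vols)              # all three now line up
```

The second pass matters because the first average is blurred by the very motion one is trying to correct.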

Regressing out motion parameters
Alternatively, effects of motion can be reduced by removing motion-related components from the data, i.e. by including the estimated motion parameters as regressors in the GLM (e.g. six nuisance regressors corresponding to three directions of translation and three axes of rotation). In practice, many people do both: realignment during preprocessing plus motion regressors to account for anything that realignment has not resolved. (See the literature for a comparison of different approaches and for the effect of including motion parameters in the regression.)
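As a toy illustration of the regression approach, the following sketch removes a single motion parameter from one voxel's time series using simple linear regression; a real GLM would include all six parameters alongside the task regressors in one design matrix:

```python
# Minimal sketch: regressing one motion parameter out of one voxel's
# time series (values are made up for illustration).
def regress_out(signal, regressor):
    n = len(signal)
    ms = sum(signal) / n
    mr = sum(regressor) / n
    # Ordinary least-squares slope of signal on regressor.
    beta = (sum((r - mr) * (s - ms) for r, s in zip(regressor, signal))
            / sum((r - mr) ** 2 for r in regressor))
    # Subtract the motion-related component (regressor kept mean-centred).
    return [s - beta * (r - mr) for s, r in zip(signal, regressor)]

motion = [0.0, 0.1, 0.3, 0.2, 0.0]         # e.g. x-translation in mm
voxel  = [100 + 20 * m for m in motion]     # signal driven purely by motion
cleaned = regress_out(voxel, motion)        # ~constant after correction
```

Because the toy signal here is entirely motion-driven, the residual is flat; in real data only the motion-correlated variance is removed.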

Resting state fMRI
In resting-state fMRI, motion correction is taken especially seriously because of the global effect of head movements, particularly when comparing healthy controls to children or patients, who often show considerably more head movement. It is therefore standard to apply both realignment and motion regression. Recently, however, the concern has emerged that subtle movement artefacts (< 0.5 mm) that survive these corrections can bias functional connectivity comparisons between individuals and groups. It has therefore been suggested to exclude volumes that show micromovements ("scrubbing") and/or to include a mean movement parameter for each subject as a covariate in the group-level analysis.
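A hedged sketch of such a movement summary: one common convention converts the rotations (in radians) to millimetres on a sphere of assumed radius 50 mm and adds them to the translations; volumes with large values are candidates for scrubbing, and the per-subject mean can serve as a group-level covariate. The exact definition varies between publications:

```python
# Sketch of a framewise-displacement-style motion summary (one common
# convention; definitions differ between papers).
import math

def framewise_displacement(params, radius=50.0):
    """params: one row per volume: 3 rotations (rad) + 3 translations (mm)."""
    fd = [0.0]  # the first volume has no predecessor
    for prev, cur in zip(params, params[1:]):
        diffs = [abs(c - p) for c, p in zip(cur, prev)]
        # Rotations become arc lengths on an assumed 50 mm sphere.
        fd.append(radius * sum(diffs[:3]) + sum(diffs[3:]))
    return fd

rows = [[0.0] * 6,
        [0.001, 0.0, 0.0, 0.1, 0.0, 0.0],   # small movement
        [0.001, 0.0, 0.0, 0.9, 0.0, 0.0]]   # 0.8 mm jump: scrubbing candidate
fd = framewise_displacement(rows)            # approximately [0.0, 0.15, 0.8]
mean_fd = sum(fd) / len(fd)                  # possible group-level covariate
```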

Controversies
For a debate about why simple motion correction might not be enough, see

SPM
Realignment performs intra-subject registration of time-series images using a least-squares approach and a rigid-body transformation. SPM8 provides four modes for this purpose.

* Realign (estimate)
 * Estimation here refers to finding the optimal transformation (normally a rigid-body transformation with six parameters) from the individual images to the reference. SPM provides several parameters for adjusting the algorithm. After this step has been executed, the header of each input file is changed to reflect its relative orientation, a text file named rp_*.txt containing the estimated motion parameters is written, and the six motion parameters are plotted across the image series in the Graphics window. Setting up these parameters is not trivial, and SPM suggests keeping the default settings unless you are clear about their meaning. The following table lists the seven parameters and suggested settings.

* Realign (reslice)
 * Estimation alone generates no new images; it only determines the transformation parameters. During reslicing, the series of registered images is matched voxel-to-voxel to the first selected image, producing a new series of images named after their originals but with the prefix 'r'. Reslicing is essentially an interpolation problem: values at points in the original space are used to infer the values in the new space, and SPM provides several parameters to control the algorithm.

* Realign (Est & Res)
 * This is a combination of steps one and two. More information can be found above.

* Realign & Unwarp

FSL
In FSL, MCFLIRT is used for motion correction (MC). This function is integrated into the FEAT FMRI analysis protocol. The easiest way to perform MC is to click on 'FEAT FMRI analysis' and then check the 'MCFLIRT' option in the Pre-stats tab.



Behind this simple operation, several adjustable parameters are set to defaults automatically by the GUI; they can be set flexibly on the command line. These parameters include:

 * -cost: the cost function used by MCFLIRT to quantify the dissimilarity between the reference image and the target image. Five cost functions are available: mutual information (mutualinfo), correlation ratio (corratio), normalised correlation (normcorr), normalised mutual information (normmi) and least squares (leastsquares); the default is normcorr.
 * -bins: number of histogram bins, with a default value of 256
 * -dof: number of transform degrees of freedom, with a default value of 6
 * -refvol: number of the reference volume; the default is the middle volume
 * -scaling: default value is 6.0
 * -smooth: default value is 1.0; controls the smoothing of the cost function
 * -rotation: specify a scaling factor for rotation optimization tolerances
 * -verbose: a value no smaller than 0; the default value is 0
 * -stages: the registration of the target image to the reference normally takes several stages, from three to four. The default setting is 3 stages, and it can be increased to 4. Different stages use different interpolation algorithms, which can be trilinear, sinc or nearest neighbour.
 * -init: an initial transform matrix to apply to all volumes; the default is none
 * -gdt: run the search on gradient images
 * -edge: run the search on contour images
 * -meanvol: register the time series to the mean volume instead of the reference volume
 * -stats: produce variance and std. dev. images
 * -mats: save transformation matrices in the subdirectory outfilename.mat
 * -plots: save transformation parameters in the file outputfilename.par
 * -report: report progress to screen
 * -help: print this information

The command line for the GUI operation on motion correction is:

mcflirt -in input_func_data -out out_func_data_mcf -mats -plots -refvol 100

The result includes three plots: the estimated rotations over the time series, the estimated translations over the time series, and the mean displacement.
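The saved .par file is plain text with six whitespace-separated columns per volume (three rotations in radians followed by three translations in mm), so it is easy to inspect programmatically. A small sketch with made-up values (the numbers below are illustrative only):

```python
# Sketch: parsing MCFLIRT .par output (three rotation columns in radians,
# then three translation columns in mm, one row per volume).
def read_par(text):
    return [[float(x) for x in line.split()]
            for line in text.strip().splitlines()]

# Two volumes of made-up example data standing in for a real .par file:
par_text = """
0.0001 -0.0002 0.0000  0.05 -0.10 0.02
0.0003 -0.0001 0.0002  0.12 -0.30 0.05
"""
params = read_par(par_text)
# Largest absolute translation across the run (columns 4-6):
max_translation = max(abs(v) for row in params for v in row[3:])  # -> 0.3
```

A quick check like this can flag runs whose motion exceeds, say, one voxel before any further analysis.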



AFNI
3dvolreg applies a rigid-body transformation using six parameters. The reference is defined using the -base option. Another helpful feature is -1Dfile, which creates a file containing the motion parameters; this can later be used for regression. The interpolation method can be adjusted as well; the default is the Fourier method, which is said to be the slowest but most accurate. For more details and options, see the manual page.

To align to an average of the run, one can first use 3dTstat to calculate the average and then pass it as the base in the 3dvolreg command. The following example also saves the realignment parameters and the transformation matrix, which can be useful when you want to combine all spatial transformations and apply them together only at the end. As mentioned above, AFNI's 3dvolreg also offers the option -twopass; check the manual for more details.

3dTstat -mean -prefix MEAN_FILE INPUTFILE
3dvolreg -twopass -1Dfile PARAMETER.1D -1Dmatrix_save MATRIX.1D -base MEAN_FILE -prefix OUTPUTFILE INPUTFILE

One should always check the motion parameters visually. This can be done with the following command: 1dplot -volreg PARAMETERFILE.1D

There is another program called 1d_tool.py which summarizes all motion parameters into a single time series. This can be helpful for detecting overall motion, and the result can also be plotted using 1dplot. The resulting enorm file can be used to censor big outliers (which cannot be handled by realignment) by including them in the regression. This can be done automatically with the option -regress_censor_motion 0.3 in afni_proc.py, where 0.3 is the threshold, which can of course be adjusted. The same can be achieved by setting the motion censor limit in uber_subject.py.
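Conceptually, the "enorm" summary is the Euclidean norm of the volume-to-volume change across all six motion parameters, and a censor list then marks volumes that exceed the limit. The following toy sketch mimics this idea (AFNI's actual computation, e.g. its unit conventions for rotations, may differ):

```python
# Sketch of an enorm-style summary turned into an AFNI-style censor list
# (1 = keep the volume, 0 = censor it).
import math

def enorm_censor(motion_params, limit=0.3):
    """motion_params: one 6-value row per volume (rotations + translations)."""
    keep = [1]  # the first volume has no predecessor
    for prev, cur in zip(motion_params, motion_params[1:]):
        enorm = math.sqrt(sum((c - p) ** 2 for c, p in zip(cur, prev)))
        keep.append(0 if enorm > limit else 1)
    return keep

rows = [[0.0] * 6,
        [0.05, 0, 0, 0.1, 0, 0],     # small change: kept
        [0.05, 0, 0, 0.5, 0.2, 0]]   # large jump: censored
enorm_censor(rows)  # -> [1, 1, 0]
```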

It is also a good practice to look at the volumes in a video before and after the realignment to check if motion has been reduced. In the afni viewer, click on the image you want to see as a video and press v on your keyboard. Press Esc to stop the video mode.

In afni_proc.py, realignment is a standard step: by default it aligns to the third volume of the first run, uses cubic interpolation, and prepares motion parameters for regression across all runs at once. To change these defaults, use the options -volreg_align_to and -volreg_interp.