431-483 Neuroimaging Methods & Applications

Analysis of an fMRI Dataset (FIAC Dataset) Using a General Linear Model Based Approach

By Shuai Deng (student ID 256515); results obtained in collaboration with Cheng-Wen Wang (student ID 255477).

Abstract

Introduction


The human voice is a very common and important sound in our auditory environment. A voice can be regarded as a person's identifier, since it carries a wealth of unique personal information. With billions of different voices in the world, the human brain is able to recognize voices with high accuracy. However, because of limited anatomical information, it is still not possible to fully characterize the complex neural network underlying voice perception [1].

Functional Magnetic Resonance Imaging (fMRI), a specialized MRI technique that measures the hemodynamic response related to neural activity in the brain, has become one of the primary techniques in brain mapping since the early 1990s. It has allowed significant progress in our understanding of the neuronal bases of speech perception. Several experiments have shown that regions of secondary auditory cortex, mostly located along the superior temporal sulcus (STS), exhibit neuronal activity that is not only sensitive but also highly selective to the sound of the human voice [2]. The aim of the fMRI experiment analyzed in this article is to investigate the relationship between structure and function and to identify the brain regions that are sensitive to specific stimulus attributes (sentence and speaker), using a General Linear Model based approach.


Figure 1: Cortical discriminative maps for decoding of vowels and speakers [2].

During the fMRI session, a 3D volume of the subject's head was scanned every 2.5 seconds while the stimuli were presented to the subject. Hundreds of complete low-resolution images are produced over the course of the experiment. These raw images cannot be analyzed directly: they must first be preprocessed through a number of steps such as non-brain removal using BET, motion correction using MCFLIRT, and spatial filtering. Details of the preprocessing steps are discussed later in the article; all of them serve to make the data suitable for analysis.


Method

To set up this experiment, the FIAC dataset was provided for five subjects (subject 0 to subject 4). Four sessions were acquired for each subject: two block-design sessions and two event-related sessions. The experimental protocol follows a 2 x 2 factorial design with a factor 'speaker' (same or different speaker) and a factor 'sentence' (same or different sentence). By varying these factors during the experiment, we obtain a series of data from which the brain regions responsible for each specific effect can be identified. Details of the analysis are given in the Results section.

Block experiment

Figure 2: Simple diagram showing the hemodynamic response in a block design.

Block designs group similar trials together into stimulus blocks and provide excellent statistical power. By contrasting the fMRI signal between different blocks, the regions that respond to each block type can be clearly identified.

Table 1: The four conditions obtained by crossing the two factors.

                             Same Speaker (SSp)                      Different Speaker (DSp)
Same Sentence (SSt)          1: Same sentence, same speaker          2: Same sentence, different speaker
                                (SSt-SSp)                               (SSt-DSp)
Different Sentence (DSt)     3: Different sentence, same speaker     4: Different sentence, different speaker
                                (DSt-SSp)                               (DSt-DSp)

Condition 1 stimulus (SSt-SSp): a given sentence repeated by the same speaker 6 times over 20 seconds.

Condition 2 stimulus (SSt-DSp): the same sentence repeated by 6 different speakers (3 males and 3 females).

Condition 3 stimulus (DSt-SSp): the same speaker producing 6 different sentences.

Condition 4 stimulus (DSt-DSp): 6 different speakers producing 6 different sentences.

Event-related experiment


Figure 3: Simple diagram showing the hemodynamic response in an event-related design.

Event-related designs generally consist of rapidly presented, interleaved trials of multiple event types of short duration. These designs are extremely useful for exploring cognitive processes because they reduce behavioural confounds such as neural habituation and anticipation, which are the main drawbacks of the block paradigm. Sentences were presented every 40 slices, i.e. every 3.33 seconds. Event-related designs therefore give a faster but weaker response than block designs, which makes it possible to look for activation to single specific trial types (usually estimated as the average of many trials). Francisca et al. reported a 35% loss of SNR in a block design compared with a 17-25% loss of SNR in an event-related design [3], a relative difference of almost 50%.

Event-related designs involve rapid changes between conditions. There are four possible types of transition between two consecutive sentences, listed below; a sketch of how such events could be encoded for FEAT follows the list.

• Transition '1': no change from the previous sentence,
• Transition '2': same sentence, different speaker,
• Transition '3': different sentence, same speaker,
• Transition '4': different sentence, different speaker.
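FEAT accepts custom event timings as three-column text files (onset, duration, weight), one file per explanatory variable. The sketch below writes such a file for one hypothetical transition type; the onsets, duration and file name are illustrative placeholders, not the actual FIAC timings.

    import numpy as np

    # Hypothetical onsets (in seconds) of one transition type; each row follows
    # FSL's three-column custom-timing convention: onset, duration, weight.
    onsets = np.array([10.0, 43.3, 76.6])   # placeholder values only
    duration = 3.33                          # one sentence presentation
    rows = [(t, duration, 1.0) for t in onsets]

    with open("transition2_timing.txt", "w") as f:
        for onset, dur, weight in rows:
            f.write(f"{onset:.2f} {dur:.2f} {weight:.1f}\n")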

FMRIB Software Library (FSL)

FSL is a set of tools with a friendly graphical user interface (GUI) that allows users to carry out image and statistical analysis of fMRI, MRI and DTI datasets [4]. The following table lists the FSL tools that were used in this experiment. Details of the setup are discussed later in the article.

Table 2: List of FSL tools used in this experiment

Brain Extraction Tool (BET): Accurate segmentation tool for separating brain from non-brain in structural and functional data; it can also model the scalp and skull surfaces.

FMRI Expert Analysis Tool (FEAT): A powerful model-based fMRI analysis tool which pre-processes brain image datasets, applies general linear modeling for first-level analysis, registers data to structural/standard-space images and provides higher-level statistical analysis. The following tools are a subset of applications integrated into FEAT.

MCFLIRT: An intra-modal motion correction tool which realigns fMRI images to a common reference; applied during the pre-processing stage of the FEAT analysis.

FILM: Prewhitens each voxel's time series for more accurate estimation of the first-level GLM.

FMRIB's Local Analysis of Mixed Effects (FLAME): A more sophisticated tool within FEAT used for higher-level analysis; it models mixed effects for inter-session and inter-subject general linear modeling using Bayesian estimation techniques.

FSLView: 3-D/4-D image display with multiple orthogonal or lightbox views, 3-D rendering, time-series display, image editing and histogram viewing.

Analysis by General Linear Model Based Approach

General linear modeling sets up a model of the expected response and checks how well it fits the measured fMRI time series. It decomposes the measured data into effects and error, and forms statistics from the estimates of those effects and of the error. By carrying out this analysis on each voxel's time series, the regions of the brain that respond to the stimulus can be identified. The analysis steps discussed below show how inferences about specific brain regions are made.
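As a minimal, hypothetical illustration of this decomposition (not the FEAT/FILM implementation, which additionally prewhitens the data), the sketch below fits a GLM to a single synthetic voxel time series with NumPy:

    import numpy as np

    # Synthetic single-voxel example: n time points, a baseline plus two task regressors.
    n = 200
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])   # design matrix
    y = X @ np.array([100.0, 2.0, 0.0]) + rng.standard_normal(n)     # effects + error

    # Ordinary least-squares estimates of the effects, and the residual "error" part.
    beta_hat = np.linalg.pinv(X) @ y
    residuals = y - X @ beta_hat
    sigma2_hat = residuals @ residuals / (n - np.linalg.matrix_rank(X))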

Preparing fMRI data for statistical analysis

As discussed earlier in the article, the data collected from the scanner are raw data. Image preprocessing must be performed to remove various types of artefact and noise. Although statistical analysis is considered the most important part of fMRI analysis, without these preprocessing steps its power is greatly reduced and the results may even become invalid.

Motion Correction:
Even though the subject's head is padded during an fMRI scan, small movements still occur, and they can cause major errors in the analysis: when the head moves, the time series at a voxel no longer samples the same point in the brain. FEAT's built-in tool MCFLIRT corrects for this by estimating and applying realigning transformations (translations and rotations) to each volume [7]. This simple prestatistical processing step provides automatic parameter estimation and optimization at the click of a button.
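A minimal sketch of running this step outside the FEAT GUI, assuming FSL is installed and a 4D functional image named func.nii.gz is in the working directory (the file names are illustrative):

    import subprocess

    # Motion-correct a 4D functional image with MCFLIRT; -plots saves the
    # estimated motion parameters to a text file for inspection.
    subprocess.run(
        ["mcflirt", "-in", "func.nii.gz", "-out", "func_mcf", "-plots"],
        check=True,
    )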


Figure 4: Diagram illustrating the motion correction result

Brain Extraction
BET segments brain from non-brain in structural and functional data, since non-brain tissue is not of interest for the experiment. The images were extracted with BET using 'robust brain centre estimation'. The structural image of subject 3 before and after BET is shown in Figures 5 and 6.
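A minimal sketch of the corresponding command-line call, with illustrative file names; the -R option requests the robust brain centre estimation mentioned above:

    import subprocess

    # Extract the brain from a structural image with BET.
    # -R : robust brain centre estimation (iterative)
    # -f : fractional intensity threshold (0.5 is the default)
    subprocess.run(
        ["bet", "struct.nii.gz", "struct_brain.nii.gz", "-R", "-f", "0.5"],
        check=True,
    )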


Figure 5: Structural image of subject 3 before brain extraction.

Figure 6: Structural image of subject 3 after brain extraction.

Spatial Filtering

Spatial filtering refers to blurring (smoothing) each volume, as blurring can increase the signal-to-noise ratio of the data. The signal-to-noise ratio is generally increased by reducing the noise level while retaining the underlying signal: a blurring function averages out the random noise within the local neighbourhood of an activation region. However, the extent of the blurring must not be larger than the size of the activated region, so spatial filtering should not be carried out if the expected activation region is very small. Figure 7 below shows the effect of spatial filtering.
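A minimal sketch of Gaussian spatial smoothing of a single volume with SciPy, converting a smoothing extent given as a FWHM in millimetres into the sigma expected by the filter; the FWHM, voxel size and data array are illustrative placeholders, and this is not FEAT's exact smoothing implementation:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    fwhm_mm = 5.0                    # illustrative smoothing extent
    voxel_mm = 3.0                   # illustrative isotropic voxel size
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

    volume = np.random.default_rng(0).standard_normal((64, 64, 30))   # placeholder volume
    smoothed = gaussian_filter(volume, sigma=sigma_vox)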



Figure 7: Diagram showing the effect of spatial filtering.

Temporal Filtering

Unlike spatial filtering, which works on each volume separately, temporal filtering works on each voxel's time series separately. The main goal of temporal filtering is to remove unwanted components of the time series without damaging the signal of interest. Voxel time series contain unwanted low-frequency drifts due to physiological effects such as cardiac activity, and these drifts act as noise in the output signal. A high-pass filter is selected in FEAT to remove this low-frequency noise, with a cut-off period of 100 seconds; if the cut-off is set too low, the signal of interest will be reduced or even eliminated.

Low-pass filtering attempts to reduce high-frequency noise in each voxel's time series. In the event-related experiments, however, the signal often changes rapidly in response to brief stimulation, and low-pass filtering might suppress these signals, thus reducing the power of the statistical analysis.
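As a rough illustration of high-pass temporal filtering (not FEAT's own algorithm), the sketch below removes fluctuations slower than the 100-second cut-off from a synthetic voxel time series, using the 2.5-second TR described above:

    import numpy as np
    from scipy.signal import butter, filtfilt

    tr = 2.5                         # repetition time in seconds
    fs = 1.0 / tr                    # sampling frequency of the voxel time series
    cutoff_hz = 1.0 / 100.0          # 100-second cut-off period

    # Second-order Butterworth high-pass filter, applied forwards and backwards.
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="highpass")
    ts = np.random.default_rng(0).standard_normal(200)   # placeholder voxel time series
    ts_filtered = filtfilt(b, a, ts)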


Figure 8: Diagram showing the effect of temporal filtering.

Statistical Analysis

To set up the full model, 8 explanatory variables (EVs) were used in each design matrix: the 4 conditions from Table 1 and their 4 temporal derivatives, which allow for uncertainty in the hemodynamic delay.

Each regressor in the design matrix yields a parameter estimate from the model fit. Defining contrasts of parameter estimates (COPEs) over the EVs quantifies how well each combination of EVs fits the data at each voxel. The resulting COPEs can then be tested for statistical significance by converting them into t and/or F statistics.
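Continuing the NumPy sketch from the GLM section, a contrast vector c turns the parameter estimates into a COPE and a t statistic; the design matrix, data and contrast here are placeholders, and FILM's prewhitening is again omitted:

    import numpy as np

    def cope_and_tstat(X, y, c):
        """OLS fit of y = X @ beta + error, then COPE = c @ beta_hat and its t statistic."""
        beta_hat = np.linalg.pinv(X) @ y
        resid = y - X @ beta_hat
        dof = len(y) - np.linalg.matrix_rank(X)
        sigma2 = resid @ resid / dof
        cope = c @ beta_hat
        var_cope = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
        return cope, cope / np.sqrt(var_cope)

    # Placeholder design: 4 condition EVs plus a constant column.
    rng = np.random.default_rng(1)
    X = np.column_stack([rng.standard_normal((200, 4)), np.ones(200)])
    y = rng.standard_normal(200)
    c = np.array([0.0, 0.0, 1.0, -1.0, 0.0])   # e.g. third EV minus fourth EV
    print(cope_and_tstat(X, y, c))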

Thirteen contrasts (C1-C13) were defined to examine:

• the primary areas of activation under the four conditions (SStSSp, SStDSp, DStSSp, DStDSp; contrasts C1-C4);

• the effect of speaker (under the same-sentence condition, the different-sentence condition, and averaged over these; contrasts C5-C7); of particular interest here is contrast C6, the direct comparison of the DStSSp and DStDSp conditions (voice repetition priming);

• the effect of sentence (under the same-speaker condition, the different-speaker condition, and averaged; contrasts C8-C10); of particular interest here is contrast C9, the direct comparison of the SStDSp and DStDSp conditions (sentence repetition priming);

• the positive (C11) or negative (C12) sentence-speaker interaction, and the maximum effect of repetition suppression (a decrease in BOLD amplitude during SStSSp compared with DStDSp; contrast C13).
In C1, setting the first EV to 1 and the other EVs to 0 shows the primary activation region when the same speaker says the same sentence; following the same principle, C2, C3 and C4 show the primary effects under conditions 2, 3 and 4. C5, C6 and C7 test for the speaker effect: the sentence factor is held constant, so any difference in activation must result from the change of speaker. In C6 the sentence factor is held at the different-sentence condition, so this contrast addresses voice/speaker repetition priming; C7 is the average over C5 and C6. C8, C9 and C10 follow the same principle for the sentence effect, with C9 addressing sentence repetition priming and C10 the average. C11 and C12 test for the positive and negative sentence-speaker interactions respectively, and C13 measures the maximum effect of repetition suppression.
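The sketch below writes a few of these contrasts as weight vectors over the 8 EVs, assuming the condition EVs are ordered SStSSp, SStDSp, DStSSp, DStDSp followed by their temporal derivatives; this ordering and the sign conventions are assumptions for illustration, not read off the actual FEAT setup:

    import numpy as np

    N_EVS = 8   # 4 conditions + 4 temporal derivatives (assumed ordering)

    def contrast(condition_weights):
        """Build a contrast vector; temporal-derivative EVs get zero weight."""
        c = np.zeros(N_EVS)
        c[:4] = condition_weights
        return c

    C1  = contrast([1, 0, 0, 0])      # primary activation for SStSSp
    C6  = contrast([0, 0, 1, -1])     # DStSSp - DStDSp: voice repetition priming
    C9  = contrast([0, 1, 0, -1])     # SStDSp - DStDSp: sentence repetition priming
    C11 = contrast([1, -1, -1, 1])    # sentence-speaker interaction (sign convention assumed)
    C13 = contrast([-1, 0, 0, 1])     # DStDSp - SStSSp: repetition suppression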

Figure 9: First-level design matrices for the GLM analysis. The left panel graphically represents the GLM regressors for an event-related session (subject 3, session 2); the right panel shows a block session (subject 3, session 3).


Figure 10: Covariance of the design matrix. The values below the figure give each contrast's efficiency; low values indicate an efficient design.

Registration

As discussed earlier in the article, a series of low-resolution functional images is acquired to show which brain regions are activated by the experimental stimulation. However, low-resolution images do not provide enough information to determine exact locations in the brain. If we also have a single high-resolution structural MR scan, we can relate an interesting activation at a particular voxel coordinate to its precise location, which is not possible at low resolution. This technique is called registration, and it is the main tool in structural analyses. The basic task of registration is to align two images by moving or reshaping one image to match the other, that is, by finding a relation between the voxel coordinates of one image and those of the other. Registration is extremely useful because it allows the information from both images to be compared and combined.

Figure 11: Final registration of the low-resolution functional image to standard space for subject 3, session 3. A two-stage registration procedure is used: first stage, intra-subject registration (7 degrees of freedom); second stage, transformation to standard space (12 degrees of freedom).
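A minimal command-line sketch of such a two-stage registration using FLIRT outside the FEAT GUI; the file names are illustrative, and FEAT performs the equivalent steps automatically:

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Stage 1: register an example functional volume to the subject's structural image (7 DOF).
    run(["flirt", "-in", "example_func.nii.gz", "-ref", "struct_brain.nii.gz",
         "-omat", "func2struct.mat", "-dof", "7"])

    # Stage 2: register the structural image to the standard template (12 DOF).
    run(["flirt", "-in", "struct_brain.nii.gz", "-ref", "standard_brain.nii.gz",
         "-omat", "struct2standard.mat", "-dof", "12"])

    # Concatenate the two transforms so functional results can be resampled into standard space.
    run(["convert_xfm", "-omat", "func2standard.mat",
         "-concat", "struct2standard.mat", "func2struct.mat"])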

Results

Figure 12: Contrast C6, the direct comparison of the DStSSp and DStDSp conditions
(voice repetition priming).


Figure 13: Contrast C9, the direct comparison of the SStDSp and DStDSp conditions
(sentence repetition priming).


References

1. Elia F., Federico D. M., Milene B., and Rainer G. (2008). "'Who' Is Saying 'What'? Brain-Based Decoding of Human Voice and Speech," Science, Vol. 322, Pp. 970-973.

2. Pascal B., Robert J. Z., and Pierre A. (2002). "Human temporal-lobe response to vocal sounds," Cognitive Brain Research, Vol. 13, Pp. 17-26.

3. Francisca P. L., and Joseph B. M. (2006). "Characterization of event-related designs using BOLD and IRON fMRI," Neuroimage, Vol. 29, Pp. 901-909.

4. Gary E. (2010). Analysis of a fMR dataset: Assignment Instruction Sheet. University of Melbourne. Retrieved on 16/05/2010.

5. Douglas C. N. (2010). Lecture 5 notes: Introduction to functional MRI. Department of Biomedical Engineering, Functional MRI Laboratory, University of Michigan. Retrieved on 16/05/2010.

6. Beckmann C. F., Jenkinson M., Woolrich M. W., Behrens T. E. J., Flitney D. E., Devlin J. T., and Smith S. M. (2006). "Applying FSL to the FIAC data: model-based and model-free analysis of voice and sentence repetition priming," Human Brain Mapping, Vol. 27, Pp. 380.

7. Jenkinson M., Bannister P., Brady M., and Smith S. (2002). "Improved optimization for the robust and accurate linear registration and motion correction of brain images," Neuroimage, Vol. 17, Pp. 825-841.

8. Peter J., Paul M. M., and Stephen M. S. (2001). Functional MRI: An Introduction to Methods. Oxford University Press, Oxford.

9. Belin P., Zatorre R., Lafaille P., Ahad P., and Pike B. (2000). "Voice-selective areas in human auditory cortex," Nature, Vol. 403, Pp. 309-312.
