Title: FMRI Data Analysis II: Connectivity Analyses
1. FMRI Data Analysis II: Connectivity Analyses
- Dr. George Andrew James
- Research Associate
- The Wallace H. Coulter Department of Biomedical Engineering at the Georgia Institute of Technology and Emory University School of Medicine
- Tuesday, November 17, 2009
2. Overview of Connectivity Analyses
- Functional connectivity analyses
- Typically correlational
- Seed analyses
- Component analyses (PCA, ICA, SVD)
- Effective connectivity analyses
- Infer causality
- Structural equation modeling
- Granger causality analysis
- Dynamic causal modeling
3. Connectivity Analyses
- Advantages
- Require no a priori hypotheses
- Can capture influences not related to the model
- Disadvantages
- Computationally and statistically complex
- Less intuitive than model-dependent methods
4. Correlational Analyses
- Measure correlation between a voxel's timecourse and all other voxels' timecourses
- Like a GLM using a voxel's timecourse as the paradigm
[Figure: correlation map highlighting anterior cingulate, orbitofrontal cortex, thalamus, and subgenual cingulate]
5. Seed Analysis
- Pick a region of interest (ROI) as the seed
- Make an SPM where each voxel is color-coded by the strength of the correlation between that voxel and the seed ROI
Correlations can be measured in the absence of a task! Frequency filtering is essential for resting correlations! (A sketch of the computation follows below.)
Peltier 2002
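A minimal numpy/scipy sketch of the seed analysis just described, assuming a 4-D fMRI array `data` (x, y, z, time), a boolean `seed_mask` marking the seed ROI, and a 0.01-0.1 Hz band-pass; the names and the band limits are illustrative assumptions, not taken from the slides.

```python
# Sketch: seed-based correlation map with band-pass filtering.
import numpy as np
from scipy.signal import butter, filtfilt

def seed_correlation_map(data, seed_mask, tr, low=0.01, high=0.1):
    """Correlate every voxel's timecourse with the mean seed timecourse."""
    n_t = data.shape[-1]
    ts = data.reshape(-1, n_t)                      # voxels x time

    # Band-pass filter: per the slide, essential for resting correlations.
    b, a = butter(2, [low, high], btype="band", fs=1.0 / tr)
    ts = filtfilt(b, a, ts, axis=1)

    seed = ts[seed_mask.ravel()].mean(axis=0)       # mean timecourse of seed ROI

    # Pearson correlation of each voxel's timecourse with the seed.
    ts_c = ts - ts.mean(axis=1, keepdims=True)
    seed_c = seed - seed.mean()
    r = (ts_c @ seed_c) / (np.linalg.norm(ts_c, axis=1)
                           * np.linalg.norm(seed_c) + 1e-12)
    return r.reshape(data.shape[:-1])               # map to color-code as an SPM
```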
6. Changing Correlations in Language Network with Task
He et al., HBM 2003
7. Changing Correlations in Language Network with Task
Meaningless pinyins aloud
Meaningful pinyins aloud
Tongue movement
Pinyins: Chinese words written in English letters (e.g., "xiexie" for "thank you")
He, HBM 2003
8. Changes in correlation with time
He, HBM 2003
9. The Good, The Bad, and the Ugly
Brain activation during naturalistic viewing of movie stimuli
Fig. 1. Intersubject correlation during free viewing of an uninterrupted movie segment. (A) Average percentage of functionally correlated cortical surface across all pairwise comparisons between subjects for the entire movie time course (All), for the regionally specific movie time course (after the removal of the nonselective component, Regional), and for the darkness control experiment (In darkness). (B) Voxel-by-voxel intersubject correlation between the source subject (ZO) and the target subject (SN). Correlation maps are shown on unfolded left and right hemispheres (LH and RH, respectively). Color indicates the significance level of the intersubject correlation in each voxel. Black dotted lines denote borders of retinotopic visual areas V1, V2, V3, VP, V3A, V4/V8, and the estimated border of auditory cortex (A1). The face-, object-, and building-related borders (red, blue, and green rings, respectively) are also superimposed on the map. Note the substantial extent of intersubject correlations and the extension of the correlations beyond visual and auditory cortices.
10. The Good, The Bad, and the Ugly
Fig. 2. Nonselective activation across regions. (A) Correlation between the averaged time course of the VOT cortex in one cortical hemisphere (correlation seed marked by the red contour) and the rest of the cortex, shown on unfolded left and right hemispheres. (B) The average nonselective time course across all activated regions obtained during the first 10 min of the movie for all five subjects. Red line represents the across-subject average time course. There is a striking degree of synchronization among different individuals watching the same movie.
Hasson, Science 2004
11. The Good, The Bad, and the Ugly
Reverse correlation: what is the subject viewing during the timecourse below?
Hasson, Science 2004
12. The Good, The Bad, and the Ugly
Reverse correlation: what is the subject viewing during the timecourse below?
Hasson, Science 2004
13. The Good, The Bad, and the Ugly
14. The Good, The Bad, and the Ugly
Intersubject correlation shows us which brain regions are co-activated across subjects watching a movie AND (perhaps more importantly) which brain regions are not correlated across subjects! (A sketch of the voxelwise computation follows below.)
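A minimal numpy sketch of voxelwise intersubject correlation, assuming `subj_a` and `subj_b` are (voxels x time) arrays from two subjects watching the same movie, already registered to a common space; both names and that assumption are illustrative.

```python
# Sketch: voxel-by-voxel correlation between two subjects' timecourses.
import numpy as np

def intersubject_correlation(subj_a, subj_b):
    """One correlation value per voxel between the two subjects."""
    a = subj_a - subj_a.mean(axis=1, keepdims=True)
    b = subj_b - subj_b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return num / den   # low values flag regions not synchronized across subjects
```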
15. Component Analyses
- We have a correlation matrix for many ROIs
- How can we simplify or distill this correlation matrix into a network of regions?
- e.g., visuomotor learning most likely involves several independent networks that are simultaneously co-activated
- We'll use PCA (principal components analysis) to extract these networks
16. Conceptualizing PCA
17. Conceptualizing PCA
[Diagram: the brain's total spatial and temporal variance, composed of the variance of voxels' individual timecourses]
18. Conceptualizing PCA
[Diagram: the brain's total spatial and temporal variance, composed of the variance of voxels' individual timecourses]
BUT, some voxels better explain the brain's overall variance than others! PCA asks: "How can we cluster voxels into components to best explain the brain's variance?"
19. PCA: visuomotor example
[Diagram: visual stimuli and subject responses over time]
Some brain regions (V1, M1, cerebellum, thalamus, SMA) should have greater temporal variability (more variance) than others (Broca's area, Sylvian fissure, amygdala, etc.)
20. Total Variance
[Figure: each voxel's variance as a portion of the brain's total variance]
21. How do we do Component Analyses?
- Linear matrix algebra
- Eigenvector: given a linear transformation, an eigenvector of that transformation is a nonzero vector which, when the transformation is applied to it, may change in length but not direction (see the sketch below)
- Eigenvalue: describes what the transformation does to the eigenvector
- 2: same direction, 2x length
- 1: same direction, same length
- -1: opposite direction, same length
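A small numpy check of the definitions above: applying a transformation A to one of its eigenvectors only rescales it by the eigenvalue. The matrix here is an arbitrary illustrative example.

```python
# Sketch: eigenvectors keep their direction under the transformation.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # a simple linear transformation
vals, vecs = np.linalg.eig(A)

v = vecs[:, 0]                             # first eigenvector
print(np.allclose(A @ v, vals[0] * v))     # True: same direction, scaled length
```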
22. How do we do Component Analyses?
Wikipedia, 2008
23. Principal Components Analysis and Singular Value Decomposition
- Given square matrix A with order r x r
- A principal components analysis of A yields
- A = USU', where U (containing the eigenvectors) is r x r and S is a diagonal r x r matrix containing the eigenvalues
- A = U(1)S(1)U(1)' + U(2)S(2)U(2)' + ... + U(r)S(r)U(r)'
- The computed principal components, or latent variables (LVs), are mutually uncorrelated
- The first LV accounts for the largest part of A (largest variance), and the next LV accounts for the second largest variance not related to the first LV
PLS Workshop 2008, University of Toronto
24. Principal Components Analysis and Singular Value Decomposition
- Eigenvalues
- Indicate the proportion of total variance in the matrix that is captured by each LV
- If λi is the ith eigenvalue from a PCA on a matrix, the proportion of variance captured by LVi is λi / Σj λj
PLS Workshop 2008, University of Toronto
25. Principal Components Analysis and Singular Value Decomposition
- Conceptual regression analogy
- Step 1: Derive a latent variable (LV) that accounts for as much of matrix A as possible
- LV1 = S1(u1X1 + u2X2 + u3X3)
- where S1 is a constant scaling factor (eigenvalue) and uj is the weight for Xj in LV1
- Step 2: Regress LV1 out of matrix A and repeat step 1. Note that because we have removed LV1 from the data, LV2 is necessarily orthogonal to LV1
- Note: this will not work in practice; it's only an analogy
PLS Workshop 2008, University of Toronto
26. Principal Components Analysis and Singular Value Decomposition
Correlation Matrix (A):
1.0000 0.5685 0.2558
0.5685 1.0000 0.2424
0.2558 0.2424 1.0000
PLS Workshop 2008, University of Toronto
27. In other Words
Helpful hint: in Matlab, just use the command [U,S] = eig(A)
A correlation matrix: A = U S U'

A:
1.0000 0.5685 0.2558
0.5685 1.0000 0.2424
0.2558 0.2424 1.0000

U:
0.6402 -0.2898 -0.7115
0.6357 -0.3202  0.7024
0.4313  0.9019  0.0207

S:
1.7369 0      0
0      0.8318 0
0      0      0.4313

U':
 0.6402  0.6357  0.4313
-0.2898 -0.3202  0.9019
-0.7115  0.7024  0.0207

LV1 = U1 S1 U1':
0.7118 0.7068 0.4796
0.7068 0.7019 0.4763
0.4796 0.4763 0.3232
where U1 = [0.6402; 0.6357; 0.4313] and S1 = 1.7369
28. Principal Components Analysis and Singular Value Decomposition
LVi = Ui Si Ui'

LV1:
0.7118 0.7068 0.4796
0.7068 0.7019 0.4763
0.4796 0.4763 0.3232

LV2:
 0.0699  0.0772 -0.2174
 0.0772  0.0853 -0.2402
-0.2174 -0.2402  0.6766

LV3:
 0.2183 -0.2156 -0.0064
-0.2156  0.2128  0.0063
-0.0064  0.0063  0.0002

LV1 + LV2 + LV3 = A:
1.0000 0.5685 0.2558
0.5685 1.0000 0.2424
0.2558 0.2424 1.0000
A numpy check of this decomposition follows below.
PLS Workshop 2008, University of Toronto
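The same decomposition in numpy, as a sketch: the correlation matrix is the one worked through on slides 26-28; `eigh` and the descending sort are standard numpy idioms, not from the slides.

```python
# Sketch: rebuild the slide's correlation matrix from its latent variables.
import numpy as np

A = np.array([[1.0000, 0.5685, 0.2558],
              [0.5685, 1.0000, 0.2424],
              [0.2558, 0.2424, 1.0000]])

vals, U = np.linalg.eigh(A)                 # eigh: A is symmetric
order = np.argsort(vals)[::-1]              # sort LVs by descending eigenvalue
vals, U = vals[order], U[:, order]          # approx. [1.7369, 0.8318, 0.4313]

# Each latent variable is a rank-1 matrix LVi = Si * ui ui'; they sum to A.
LVs = [vals[i] * np.outer(U[:, i], U[:, i]) for i in range(3)]
print(np.allclose(sum(LVs), A))             # True: LV1 + LV2 + LV3 = A
print(vals[0] / vals.sum())                 # proportion of variance in LV1 (~0.58)
```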
29. Principal Components Analysis and Singular Value Decomposition
- For non-square matrices, we use singular value decomposition (SVD) rather than principal components analysis
- Given matrix B that is r x c, an SVD of B yields
- B = USV', where U is r x r, S is a diagonal matrix r x r, and V is c x r (a shape check follows below)
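A quick sketch of SVD on a non-square matrix; the 10 x 4 shape is an illustrative assumption. Note that numpy's economy-mode SVD returns U as r x min(r,c) rather than the full r x r described above.

```python
# Sketch: SVD of a non-square matrix, e.g. voxels x timepoints.
import numpy as np

B = np.random.randn(10, 4)                      # r x c with r != c
U, s, Vt = np.linalg.svd(B, full_matrices=False)
print(U.shape, s.shape, Vt.shape)               # (10, 4) (4,) (4, 4)
print(np.allclose(U @ np.diag(s) @ Vt, B))      # True: B = U S V'
```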
30. PCA limitations
- If task-related fMRI changes are only a small part of total signal variance, capturing the greatest variance in the data may reveal little information about task-related activations (McKeown, 1998)
- e.g., V1 and images of disgust vs. horror
- Components must be orthogonal, making components difficult to conceptualize and less significant as their order increases
31. Independent Component Analysis
- Related to PCA, ICA deconvolves a mixture of signals into sources (see the sketch below)
- Generally accepted as more powerful and sensitive than PCA
- GIFT, Matlab's FastICA
McKeown (1998)
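A minimal sketch of the unmixing idea on toy signals. The slides name GIFT and Matlab's FastICA; scikit-learn's FastICA is substituted here so the example is self-contained, and the sources and mixing matrix are invented for illustration.

```python
# Sketch: ICA recovers independent sources from observed mixtures.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 10, 1000)
s1 = np.sin(2 * t)                          # source 1: slow oscillation
s2 = np.sign(np.cos(5 * t))                 # source 2: square wave
S = np.c_[s1, s2]                           # true sources (unknown in practice)

A_mix = np.array([[1.0, 0.5],
                  [0.7, 1.0]])              # unknown mixing matrix
X = S @ A_mix.T                             # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                # recovered sources (up to sign/scale/order)
```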
32. Another ICA illustration
(McKeown, 1998)
33. Conceptualizing ICA
[Diagram: Axis 1, with Axis 2 as chosen by PCA vs. Axis 2 as chosen by ICA]
34. ICA Comparisons
Three participants performed the Stroop test. ICA yielded multiple components, including one whose timecourse closely matched the paradigm (shown right).
(McKeown, 1998)
35. (McKeown 1998)
Additional comments: A voxel can contribute to multiple components. ICA reveals non-task-specific components. ICA could be valuable for masking unwanted voxels (e.g., slowly-varying activity).
36. Regional Homogeneity
- Regional homogeneity estimates how correlated a voxel is with its immediate neighbors, i.e., a region's homogeneity (a sketch follows below)
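A minimal sketch of the regional homogeneity computation, using Kendall's coefficient of concordance (W), the statistic standard ReHo implementations use; the `neighborhood` array (one timecourse per row for a voxel plus its neighbors, e.g. 27 rows for a 3x3x3 cube) is an assumed input.

```python
# Sketch: Kendall's W over a voxel and its neighbors (no tie correction).
import numpy as np
from scipy.stats import rankdata

def kendalls_w(neighborhood):
    """W in [0, 1]: 1 means the neighboring timecourses rise and fall together."""
    k, n = neighborhood.shape                        # voxels, timepoints
    ranks = np.apply_along_axis(rankdata, 1, neighborhood)
    R = ranks.sum(axis=0)                            # summed ranks per timepoint
    S = ((R - R.mean()) ** 2).sum()
    return 12.0 * S / (k ** 2 * (n ** 3 - n))
```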
37. Regional Homogeneity and Anesthesia
Peltier, Kerssens, Hamann, Sebel, Byas-Smith & Hu. (2005). NeuroReport, 16, 285-288.
Regional homogeneity describes how strongly a brain region communicates with its immediate neighbors. This analysis provides insight into aberrant connectivity patterns within a neural region. We have demonstrated progressive reduction in the local coherence of frontal and sensorimotor cortices with increasing anesthesia.
38. Regional Homogeneity and Epilepsy
James & Drane (unpublished)
Preliminary findings: The hippocampus in the epileptogenic hemisphere shows less regional homogeneity than its counterpart.
39. Effective Connectivity
- Unlike correlational methods (aka functional connectivity), effective connectivity attempts to find causal relationships
- Simultaneous influences among variables (structural equation modeling, dynamic causal modeling)
- Temporal influences among variables (Granger causality analysis)
40. Structural Equation Modeling
- SEM is a statistical technique to assess both the strength and directionality of interactions between variables
- SEM is aka path analysis or causal modeling
- SEM is traditionally confirmatory
- SEM assesses how well a model fits a given dataset, i.e., SEM tests the model, not the data!
41. But correlation doesn't imply causality! (So how does SEM work?)
42. Interpreting Factor Analysis / SEM
Observed covariance matrix:
        Test1  Test2  Test3  Test4
Test1   1      .6     .2     .3
Test2   .6     1      .1     .3
Test3   .2     .1     1      .7
Test4   .3     .3     .7     1
43. Interpreting Factor Analysis
Find values of a, b, etc. so that the predicted covariance best matches the observed covariance.
[Path diagram: Factor 1 loads on Tests 1-4 with weights a, b, c, d; each test has unique variance u, v, w, x]
Predicted covariance matrix:
        Test1   Test2   Test3   Test4
Test1   a²+u    ab      ac      ad
Test2   ab      b²+v    bc      bd
Test3   ac      bc      c²+w    cd
Test4   ad      bd      cd      d²+x
Observed covariance matrix:
        Test1  Test2  Test3  Test4
Test1   1      .6     .2     .3
Test2   .6     1      .1     .3
Test3   .2     .1     1      .7
Test4   .3     .3     .7     1
44. Interpreting Factor Analysis
[Path diagram: Factor 1 (loadings a, b on Tests 1-2) and Factor 2 (loadings c, d on Tests 3-4), with factor correlation r; unique variances u, v, w, x]
Predicted covariance matrix:
        Test1   Test2   Test3   Test4
Test1   a²+u    ab      arc     ard
Test2   ab      b²+v    brc     brd
Test3   arc     brc     c²+w    cd
Test4   ard     brd     cd      d²+x
Observed covariance matrix:
        Test1  Test2  Test3  Test4
Test1   1      .6     .2     .3
Test2   .6     1      .1     .3
Test3   .2     .1     1      .7
Test4   .3     .3     .7     1
45. Interpreting Factor Analysis
r = 0
[Path diagram: as before, but with the factor correlation fixed at r = 0]
Predicted covariance matrix:
        Test1   Test2   Test3   Test4
Test1   a²+u    ab      0       0
Test2   ab      b²+v    0       0
Test3   0       0       c²+w    cd
Test4   0       0       cd      d²+x
Observed covariance matrix:
        Test1  Test2  Test3  Test4
Test1   1      .6     .2     .3
Test2   .6     1      .1     .3
Test3   .2     .1     1      .7
Test4   .3     .3     .7     1
(A toy fitting sketch follows below.)
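A toy sketch of the confirmatory logic above: choose path weights so the model-implied covariance best matches the observed covariance. Real SEM software uses maximum-likelihood fitting; plain least squares via scipy is substituted here for brevity. The r = 0 two-factor structure, the observed matrix, and the parameter names a-d, u-x come from the slides; everything else is illustrative.

```python
# Sketch: fit path weights by minimizing (predicted - observed) covariance.
import numpy as np
from scipy.optimize import minimize

observed = np.array([[1.0, 0.6, 0.2, 0.3],
                     [0.6, 1.0, 0.1, 0.3],
                     [0.2, 0.1, 1.0, 0.7],
                     [0.3, 0.3, 0.7, 1.0]])

def implied(p):
    """Model-implied covariance for the two-factor model with r = 0."""
    a, b, c, d, u, v, w, x = p
    return np.array([[a*a + u, a*b,     0.0,     0.0],
                     [a*b,     b*b + v, 0.0,     0.0],
                     [0.0,     0.0,     c*c + w, c*d],
                     [0.0,     0.0,     c*d,     d*d + x]])

def misfit(p):
    return ((implied(p) - observed) ** 2).sum()

fit = minimize(misfit, x0=np.full(8, 0.5))
print(misfit(fit.x))   # residual misfit: how well this model fits the data
```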
46. Examples of SEM from neuroimaging
Path loading of PP→ITp significantly increases as subjects learn object/spatial associations. (Büchel 1999)
47. Structural Equation Modeling
SMA = supplementary motor area; PM = premotor (R/L); M1 = primary motor (R/L). Zhang et al., 2005
- Path weighting expresses strength of connection (analogous to correlation... but directional!)
48. Granger Causality Analysis
- Directly measures temporal associations
- Given two ROI timecourses
- X(t) = {x1, x2, x3, ..., xN} and Y(t) = {y1, y2, y3, ..., yN}
- Build an autoregressive model so that past values of X(t) predict its future values
- If including past values of Y improves the ability of past values of X to predict the current value of X, then we say Y "Granger-causes" X (see the sketch below)
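A minimal sketch of the autoregressive comparison just described, fit by ordinary least squares; `x` and `y` are assumed 1-D ROI timecourses and the model order p = 2 is arbitrary. This only computes the variance-ratio measure; established toolboxes add significance testing.

```python
# Sketch: does adding y's past improve prediction of x beyond x's own past?
import numpy as np

def granger_improvement(x, y, p=2):
    """Log ratio of residual variances; > 0 suggests y Granger-causes x."""
    n = len(x)
    past_x = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
    past_y = np.column_stack([y[p - i - 1:n - i - 1] for i in range(p)])
    target = x[p:]

    def resid_var(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    var_x = resid_var(past_x)                        # past of x only
    var_xy = resid_var(np.hstack([past_x, past_y]))  # past of x and y
    return np.log(var_x / var_xy)
```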
49. Granger Causality Analysis
Experiment: Subjects engage in a motor fatigue task (sinusoidal contraction of a hand weight)
Activation of many regions: motor, premotor, SMA, cerebellum, S1, parietal
50. Granger Causality Analysis
(C = cerebellum, P = parietal)
               M1   SMA  PM   S1   C    P
Window-1 Cin   19   15   7    9    16   11
Window-1 Cout  8    13   7    25   10   13
Window-2 Cin   15   21   15   9    16   8
Window-2 Cout  8    14   11   23   18   10
Window-3 Cin   11   13   9    9    15   8
Window-3 Cout  7    8    9    18   14   9
51. References
- Huettel SA, Song AW & McCarthy G. (2004). Functional Magnetic Resonance Imaging. Sinauer Associates Inc., Sunderland, Massachusetts, USA.
- Petersen SE, Fox PT, Snyder AZ & Raichle ME. (1990). Activation of the extrastriate and frontal cortical areas by visual words and word-like stimuli. Science, 249(4972), 1041-1044.
- SPM: http://www.fil.ion.ucl.ac.uk/spm/
- AFNI: http://afni.nimh.nih.gov/afni
52. References
- McKeown MJ, Makeig S, Brown GG, Jung T-P, Kindermann SS, Bell AJ & Sejnowski TJ. (1998). Analysis of fMRI data by blind separation into independent spatial components. Human Brain Mapping, 6, 160-188.
- Zhuang JC, LaConte SM, Peltier S, Zhang K & Hu X. (2005). Connectivity exploration with structural equation modeling: an fMRI study of bimanual motor coordination. NeuroImage, 25, 462-470.
53. Thanks!
54. BOLD-fMRI
[Diagram: Stim. → Neural Activation → rCBF, rCBO, VO2 (mediators: K+, H+, NO, PO2, adenosine...?); vasomotor tone adapting...? fast or slow uncoupling]
Empirical:
- deoxyHb → rate of MR signal decay (1/T2*)
- RF pulse sequence sensitive to the decay effect
A(t) = A0 · e^(-TE / T2*(t))
55. Analysis of Variance (ANOVA)
Interaction effects: is the whole greater than the sum of the parts? e.g., thalamic response to simultaneous visual and auditory stimuli