Linear Algebra and Matrices

Methods for Dummies
FIL
November 2011
Narges Bazargani and Sarah Jensen
ONLINE SOURCES
Web Guides
– http://mathworld.wolfram.com/LinearAlgebra.html
– http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html
– http://www.inf.ed.ac.uk/teaching/courses/fmcs1/
Online introduction:
– http://www.khanacademy.org/video/introduction-tomatrices?playlist=Linear+Algebra
What Is MATLAB?
- And why learn about matrices?
MATLAB = MATrix LABoratory
Typical uses include:
• Math and computation
• Algorithm development
• Modelling, simulation, and prototyping
• Data analysis, exploration, and visualization
• Scientific and engineering graphics
• Application development, including Graphical User Interface building
Everything in MATLAB is a matrix!
Zero-dimensional matrix
A scalar (a single number, e.g. 4) is really a 1 x 1 matrix in MATLAB!
One-dimensional matrix
A vector is a 1 x n matrix with 1 row, e.g. [1 2 3]
A matrix is an m x n matrix, with m rows and n columns.
Even a picture is a matrix!
[Figure: a picture as a matrix of pixel values, e.g. (2 7 4; 3 8 9)]
Building matrices in MATLAB with [ ]:
A = [2 7 4]        % the row vector (2 7 4)
A = [2; 7; 4]      % the column vector (2; 7; 4)
A = [2 7 4; 3 8 9] % the 2 x 3 matrix (2 7 4; 3 8 9)
The semicolon ; separates the different rows; spaces (or commas) separate the columns.
Matrix formation in MATLAB
X = [1 2 3; 4 5 6; 7 8 9] gives
$X = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$
Submatrices in MATLAB
Subscripting – each element of a matrix can be addressed with a pair of numbers; row first, column second.
For the matrix X above:
X(2,3) = 6
X(3,:) = (7 8 9), i.e. the whole third row
X([2 3], 2) = $\begin{pmatrix} 5 \\ 8 \end{pmatrix}$, i.e. rows 2 and 3 of column 2
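A runnable version of the indexing above:
X = [1 2 3; 4 5 6; 7 8 9];
X(2,3)        % returns 6
X(3,:)        % returns the row [7 8 9]
X([2 3], 2)   % returns the column [5; 8]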
Matrix addition and subtraction
NB: only matrices of the same size can be added and subtracted; both operations work element by element.
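A minimal MATLAB sketch with illustrative values (not from the original slides):
A = [1 2; 3 4];
B = [5 6; 7 8];
A + B   % elementwise sum:        [6 8; 10 12]
A - B   % elementwise difference: [-4 -4; -4 -4]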
Matrix Multiplication I
Different kinds of multiplication in MATLAB
Scalar multiplication: each element of the matrix is multiplied by the scalar.
Matlab does all this for you!: 3 * A
Matrix multiplication II
Each element of the product is the sum product of the corresponding row of A and column of B.
Matrix multiplication rule: if A is an m x n matrix and B is a k x l matrix, A x B is only viable if n = k, and the result is m x l.
For example, a 4 x 3 matrix A = (a_{ij}) times a 3 x 2 matrix B = (b_{ij}) gives a 4 x 2 matrix with entries
$(AB)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j}$
Matlab does all this for you!: C = A * B
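A quick MATLAB check of the dimension rule, using hypothetical random matrices:
A = rand(4,3);   % m x n, with m = 4 and n = 3
B = rand(3,2);   % k x l, with k = 3 and l = 2; n = k, so A * B is viable
C = A * B;
size(C)          % returns [4 2], i.e. the product is m x l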
Elementwise multiplication
Matrix multiplication rule: the matrices need exactly the same m and n, and each element of A is multiplied by the corresponding element of B: $(A .* B)_{ij} = a_{ij} b_{ij}$.
So a 4 x 3 matrix A = (a_{ij}) cannot be elementwise multiplied by a 3 x 2 matrix B = (b_{ij}), but two matrices of matching size can.
Matlab does all this for you!: A .* B
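A MATLAB sketch with illustrative values, showing the same-size requirement:
A = [1 2; 3 4; 5 6];      % 3 x 2
B = [7 8; 9 10; 11 12];   % 3 x 2, same m and n
A .* B                    % elementwise product: [7 16; 27 40; 55 72]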
Transposition – reorganising matrices
Columns become rows, and rows become columns.
In Matlab: A' gives the transpose $A^T$
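For example, reusing the 2 x 3 matrix from earlier:
A = [2 7 4; 3 8 9];
A'   % the 3 x 2 transpose: [2 3; 7 8; 4 9]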
Identity matrices
A tool for solving equations.
The identity matrix plays a similar role to the number 1 in ordinary multiplication:
$I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Worked example
A I_n = A, for a 3 x 3 matrix:
$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} \times \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1+0+0 & 0+2+0 & 0+0+3 \\ 4+0+0 & 0+5+0 & 0+0+6 \\ 7+0+0 & 0+8+0 & 0+0+9 \end{pmatrix}$
In Matlab: eye(n) produces the n x n identity matrix (eye(r, c) gives an r x c matrix with ones on the main diagonal).
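A quick MATLAB check that multiplying by the identity leaves A unchanged:
A = [1 2 3; 4 5 6; 7 8 9];
isequal(A * eye(3), A)   % returns 1 (true)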
Inverse matrices
Definition: matrix A is invertible if there exists a matrix B such that AB = BA = I. For example:
$\begin{pmatrix} 1 & 1 \\ -1 & 2 \end{pmatrix} \times \begin{pmatrix} \frac{2}{3} & -\frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} \end{pmatrix} = \begin{pmatrix} \frac{2}{3}+\frac{1}{3} & -\frac{1}{3}+\frac{1}{3} \\ -\frac{2}{3}+\frac{2}{3} & \frac{1}{3}+\frac{2}{3} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
• The notation for the inverse of a matrix A is A^{-1}.
• If A is invertible, A^{-1} is also invertible, and A is the inverse matrix of A^{-1}.
• In Matlab: A^{-1} = inv(A)
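A MATLAB check of the worked example above:
A = [1 1; -1 2];
B = inv(A)   % returns [2/3 -1/3; 1/3 1/3]
A * B        % returns the 2 x 2 identity, eye(2)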
Determinants
• The determinant is a function that maps a square matrix to a single number (a scalar).
• A matrix A has an inverse matrix (A^{-1}) if and only if det(A) ≠ 0.
• In Matlab: det(A) computes the determinant of A.
With more than 1 equation and more than 1 unknown
Can use the solution from the single-equation case ($x = a^{-1}b$) to solve. For example:
$2x_1 + 3x_2 = 5$
$x_1 - 2x_2 = -1$
In matrix form:
$\begin{pmatrix} 2 & 3 \\ 1 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 5 \\ -1 \end{pmatrix}$, i.e. AX = B
Need to find the determinant of matrix A (because X = A^{-1}B).
From earlier, $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$, so here:
(2 x -2) – (3 x 1) = -4 – 3 = -7
So the determinant is -7.
To find A^{-1}:
$A^{-1} = \frac{1}{(-7)} \begin{pmatrix} -2 & -3 \\ -1 & 2 \end{pmatrix} = \frac{1}{7} \begin{pmatrix} 2 & 3 \\ 1 & -2 \end{pmatrix}$
So, if B is $\begin{pmatrix} 1 \\ 4 \end{pmatrix}$:
$X = \frac{1}{7} \begin{pmatrix} 2 & 3 \\ 1 & -2 \end{pmatrix} \begin{pmatrix} 1 \\ 4 \end{pmatrix} = \frac{1}{7} \begin{pmatrix} 14 \\ -7 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$
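The same solution in MATLAB (inv(A)*B matches the derivation; the backslash operator is the usual idiom):
A = [2 3; 1 -2];
B = [1; 4];
X = inv(A) * B   % returns [2; -1]
X = A \ B        % same result, and numerically preferable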
Scalars, vectors and matrices in SPM
• Scalar: a variable described by a single number, e.g. the intensity of each voxel in an MRI scan.
• Vector: in physics, a vector is a variable described by magnitude and direction; here we mean a column of numbers, e.g. the intensity of a voxel at different times, or of different voxels at the same time:
$v = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$
• Matrix: a rectangular array of vectors, defined by its number of rows and columns:
$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ \vdots & & & \vdots \\ x_{n1} & \cdots & \cdots & x_{nn} \end{pmatrix}$
Vector Space and Matrix Rank
Vector space: a space that contains vectors and all those that can be obtained by multiplying vectors by a real number and then adding them (linear combination). In other words, because each column of a matrix can be represented by a vector, the ensemble of n column vectors defines a vector space for the matrix.
Rank of a matrix: the number of vectors that are linearly independent of each other. So, if there is a linear relationship between the rows or columns of a matrix, the matrix will be rank-deficient (and the determinant will be zero).
For example, in the graph below there is a linear relationship between x1 and x2 (x1 reaches (1, 2) and x2 reaches (2, 4), so x2 = 2 x1), the determinant is zero, and the vector space defined will have only 1 dimension.
[Figure: the collinear vectors x1 = (1, 2) and x2 = (2, 4) plotted in the plane]
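A MATLAB illustration of rank deficiency, using the collinear vectors from the graph as columns:
M = [1 2; 2 4];   % second column = 2 * first column
rank(M)           % returns 1: only one linearly independent column
det(M)            % returns 0: the matrix is rank-deficient and not invertible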
Eigenvalues and eigenvectors
Eigenvalues are multipliers: numbers that represent how much linear transformation or stretching has taken place. An eigenvalue of a square matrix is a scalar, represented by the Greek letter λ (lambda).
Eigenvectors of a square matrix are non-zero vectors that, after being multiplied by the matrix, remain parallel to the original vector. For each eigenvector, the corresponding eigenvalue is the factor by which the eigenvector is scaled when multiplied by the matrix.
All eigenvalues and eigenvectors satisfy the equation Ax = λx for a given square matrix A; i.e. matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A.
One can represent the eigenvectors of a symmetric matrix A as a set of orthogonal vectors representing different dimensions of the original matrix A (important in Principal Component Analysis, PCA).
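A MATLAB sketch with an illustrative matrix:
A = [2 0; 0 3];
[V, D] = eig(A)   % columns of V are eigenvectors; the diagonal of D holds the eigenvalues
A * V(:,1)        % equals D(1,1) * V(:,1), i.e. A*x = lambda*x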
Matrix Representations of Neural Connections
• Can create a mathematical model of the connections in a neural system.
• Connections are either excitatory or inhibitory.
[Diagram: an excitatory connection and an inhibitory connection, each from an input neuron to an output neuron]
Matrix Representations of Neural Connections
Excitatory = makes it easier for the postsynaptic cell to fire.
Inhibitory = makes it harder for the postsynaptic cell to fire.
[Diagram: neuron #1 connects to neuron #3 with weight +1 (excitatory); neuron #2 connects to neuron #3 with weight -1 (inhibitory)]
We can translate this information into a set of vectors (1-row matrices):
• Input vector = (1 1), relating to the activity of (#1 #2)
• Weight vector = (1 -1), relating to the connection weights of (#1 #2)
Activity of neuron 3 = input x weight:
$\begin{pmatrix} 1 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 \\ -1 \end{pmatrix} = (1 \times 1) + (1 \times -1) = (1) + (-1) = 0$
The inputs cancel out! But it is more complicated than this!
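The same sum product in MATLAB:
act = [1 1];    % activity of neurons #1 and #2
w   = [1; -1];  % connection weights onto neuron #3
act * w         % returns 0: excitation and inhibition cancel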
How are matrices relevant to fMRI data?
Basics of MR Physics
• Angular momentum: neutrons, protons and electrons spin about their axes. The spinning of the nuclear particles produces angular momentum.
• Certain nuclei exhibit magnetic properties. Because a proton has mass, a positive charge, and spin, it produces a small magnetic field. This small magnetic field is referred to as the magnetic moment: a vector quantity with magnitude and direction, oriented in the same direction as the angular momentum.
• Under normal circumstances these magnetic moments have no fixed orientation (so there is no overall magnetic field). However, when exposed to an external magnetic field (B0), nuclei begin to align. To detect the net magnetisation signal, a second magnetic field (B1) is introduced, applied perpendicular to B0 at the resonant frequency.
How are matrices relevant to fMRI data?
Y = X · β + ε
Observed = Predictors x Parameters + Error
BOLD = Design Matrix x Betas + Error
Y
Y is a matrix of BOLD signals. Each column represents a single voxel sampled at successive time points.
• Each voxel is considered an independent observation.
• So we analyse individual voxels over time, not groups over space.
[Figure: MR intensity plotted over time for one voxel. The response variable is a single voxel sampled at successive time points.]
Solve the equation for β: it tells us how much of the BOLD signal is explained by X.
Explanatory variables (X): these are assumed to be measured without error. They may be continuous, indicating levels of an experimental factor.
[Figure: the GLM in matrix form. Y (Observed; one row per scan, N of scans) = X (Predictors; the design matrix) x β (one parameter per column of X) + ε (error)]
Pseudoinverse
In SPM, design matrices are NOT square matrices (they have more rows than columns, especially for fMRI). So there is not a unique solution, i.e. more than one solution is possible.
SPM uses a mathematical trick called the pseudoinverse: an approximation in which the solution is constrained to be the one where the β values are minimal.
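A minimal sketch of the idea, with illustrative sizes (pinv is MATLAB's Moore-Penrose pseudoinverse):
X = rand(100, 3);     % hypothetical design matrix: 100 scans, 3 regressors
Y = rand(100, 1);     % hypothetical BOLD time course for one voxel
beta = pinv(X) * Y;   % least-squares estimate of the betas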
How are matrices relevant to fMRI?
[Figure: the SPM analysis pipeline]
• Image time-series → Realignment → Smoothing (spatial filter)
• Normalisation (against an anatomical reference)
• Design matrix → General Linear Model → parameter estimates
• Statistical Parametric Map → statistical inference (RFT, p < 0.05)
In Practice
• Estimate the MAGNITUDE of signal changes and the MR INTENSITY level for each voxel at various time points.
• The relationship between the experiment and the voxel changes is established.
• The calculations require linear algebra and matrix manipulations.
• SPM builds up the data as a matrix.
• Manipulation of matrices enables unknown values to be calculated.
Thank you!
Questions?