
Localisation microscopy with quantum dots
using non-negative matrix factorisation
Chris Williams
joint work with Ondřej Mandula, Ivana Šumanovac, Rainer
Heintzmann
Institute for Adaptive and Neural Computation
School of Informatics, University of Edinburgh
September 2014
1 / 22
The Problem
[Figure: a time-lapse image sequence d(x, t) (pixels × time) approximated as a sum over k = 1, ..., K of spatial components w_k(x) multiplied by temporal traces h_k(t)]

Localization microscopy with highly overlapping sources using non-negative matrix factorization
2 / 22
- Want to analyse a time-lapse sequence of images of a specimen labelled with fluorophores switching between ON and OFF states, in order to localize the sources
- Conventional methods (PALM, fPALM, STORM, dSTORM) actively drive a large majority of the fluorophores into an OFF state
- This avoids overlaps between individual point spread functions (PSFs), but leads to low throughput
- Quantum dots (QDs) are brighter than alternatives, reducing acquisition times
- However, QD blinking cannot be controlled, so we need to analyze overlapping sources
3 / 22
Quantum dots: blinking
[Figure: intensity time trace of a blinking quantum dot (Time [s] vs Intensity [a.u.]) and a histogram of the intensity counts]
4 / 22
Outline
- The NMF model
- iNMF enhancements
- Competitor methods
- Quantitative evaluation
- Comparisons on simulated and real data
- Localization in depth
- Conclusions
5 / 22
The NMF Model
[Figure: the image sequence d(x, t) decomposed as a sum of K rank-one terms, each a spatial component w_k(x) times a temporal trace h_k(t)]

$d(x, t) \simeq \sum_{k=1}^{K} w_k(x)\, h_k(t), \qquad D \simeq WH$
- D is N × T, W is N × K, H is K × T
- Scale so that $\sum_j w_{jk} = 1$ for each column k
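
To make the dimensions and the scaling convention concrete, here is a minimal NumPy sketch (not the authors' code) that draws a synthetic image sequence from the model D ≈ WH with Poisson noise; all sizes and traces are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, K = 64 * 64, 500, 10           # pixels, frames, sources (illustrative values only)

# Spatial components: one column per source, scaled so each column sums to 1
W = rng.random((N, K))
W /= W.sum(axis=0, keepdims=True)

# Temporal traces: non-negative brightness of each source over time
H = 1000.0 * rng.random((K, T))

# Noise-free image sequence and Poisson-distributed camera counts
rate = W @ H                          # N x T, the model D ~= W H
D = rng.poisson(rate).astype(float)
```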
6 / 22
Fitting the Model
- Poisson likelihood is the natural choice for microscopy:

  $\log p(D \mid W, H) = \sum_{x,t} \left[ d_{xt} \log\!\left( \sum_{k=1}^{K} w_{xk} h_{kt} \right) - \sum_{k=1}^{K} w_{xk} h_{kt} \right] + \text{const}$

- Corresponds to the Kullback-Leibler divergence used by Lee and Seung (2001)
- Multiplicative updates

  $w_{xk} = \frac{w_{xk}}{\sum_{t=1}^{T} h_{kt}} \left[ (D \oslash WH)\, H^{\top} \right]_{xk}$

  $h_{kt} = \frac{h_{kt}}{\sum_{x=1}^{N} w_{xk}} \left[ W^{\top} (D \oslash WH) \right]_{kt}$

  where $\oslash$ denotes the element-wise division of matrices
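
A minimal NumPy sketch of these multiplicative updates, following the Lee-Seung KL/Poisson form above; the iteration count, the small eps guard against division by zero, and the per-iteration rescaling of W's columns are choices of mine rather than details taken from the slides.

```python
import numpy as np

def nmf_poisson(D, K, n_iter=200, eps=1e-12, seed=0):
    """KL/Poisson NMF by multiplicative updates: D (N x T) ~= W (N x K) @ H (K x T)."""
    rng = np.random.default_rng(seed)
    N, T = D.shape
    W = rng.random((N, K)) + eps
    H = rng.random((K, T)) + eps
    for _ in range(n_iter):
        R = D / (W @ H + eps)                              # element-wise division D / (W H)
        W *= (R @ H.T) / (H.sum(axis=1) + eps)             # w_xk update
        R = D / (W @ H + eps)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)    # h_kt update
        s = W.sum(axis=0, keepdims=True)                   # enforce sum_j w_jk = 1,
        W /= s                                             # compensating in H so that
        H *= s.T                                           # W @ H is unchanged
    return W, H
```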
7 / 22
Iterative NMF (iNMF)
- The objective is convex in W and in H separately, but non-convex jointly
- Multiple restarts can be used, but we did not find good solutions with this method
- We exploit the prior knowledge that the w_k (PSFs) are likely to have compact structure
- Rank the columns w_k of W according to their L2 norm
- Columns with larger L2 norm tend to have sparser, more compact structure
- Hoyer (2004) used sparseness as a target constraint, rather than as a ranking
8 / 22
[Flow chart: the iNMF procedure]
1. Initialise W_init, H_init with random positive matrices; set j = 1.
2. Run NMF from (W_init, H_init) to obtain (W, H).
3. If j > K, stop and output (W, H).
4. Sort the columns of W (permuting the rows of H correspondingly).
5. Replace the first j columns of W_init with the first j columns of the sorted W (and correspondingly the first j rows of H_init).
6. Set j = j + 1 and return to step 2.
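
A sketch of the restart scheme in this flow chart, written against a hypothetical nmf_from_init helper (the multiplicative updates from the previous sketch, started from given matrices); the exact bookkeeping of when j is incremented and how many inner NMF iterations are run is my reading of the diagram, not the released code.

```python
import numpy as np

def nmf_from_init(D, W, H, n_iter=200, eps=1e-12):
    """Multiplicative KL/Poisson updates starting from the given W, H (cf. previous sketch)."""
    for _ in range(n_iter):
        R = D / (W @ H + eps)
        W = W * ((R @ H.T) / (H.sum(axis=1) + eps))
        R = D / (W @ H + eps)
        H = H * ((W.T @ R) / (W.sum(axis=0)[:, None] + eps))
        s = W.sum(axis=0, keepdims=True)
        W, H = W / s, H * s.T
    return W, H

def inmf(D, K, n_iter=200, seed=0, eps=1e-12):
    """Iterative NMF: repeatedly restart, keeping the j largest-L2-norm columns as a warm start."""
    rng = np.random.default_rng(seed)
    N, T = D.shape
    W_init = rng.random((N, K)) + eps
    H_init = rng.random((K, T)) + eps
    for j in range(1, K + 1):
        W, H = nmf_from_init(D, W_init.copy(), H_init.copy(), n_iter)
        order = np.argsort(-np.linalg.norm(W, axis=0))   # sort columns by decreasing L2 norm
        W, H = W[:, order], H[order, :]
        W_init[:, :j] = W[:, :j]                         # keep the first j sorted columns
        H_init[:j, :] = H[:j, :]                         # (and the corresponding rows of H)
    return W, H
```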
9 / 22
Choosing K
- We use an over-estimate based on PCA
- We demonstrate that iNMF still recovers the correct number of emitters when K is over-estimated
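
The slides do not give the exact PCA criterion; purely as an illustration, one common way to obtain such an over-estimate is to count the principal components of the centred pixel-by-time matrix whose singular values clear a noise-floor threshold. The thresholding rule below is my assumption, not the rule used in the talk.

```python
import numpy as np

def overestimate_K(D, noise_factor=2.0, K_max=100):
    """Rough upper bound on the number of sources from PCA of the N x T data matrix.

    Illustrative assumption: keep components whose singular value exceeds a
    multiple of the median singular value.
    """
    Dc = D - D.mean(axis=1, keepdims=True)        # centre each pixel's time trace
    s = np.linalg.svd(Dc, compute_uv=False)       # singular values, largest first
    K = int(np.sum(s > noise_factor * np.median(s)))
    return max(1, min(K, K_max))
```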
10 / 22
iNMF in action

[Movie: the iterative iNMF procedure applied to example data]
11 / 22
Handling many sources
[Figure: the field of view divided into a 7 × 7 grid of patches P11-P77, with a PCA-based K estimated separately for each patch (e.g. K = 51, 41, 63, ...)]
- iNMF is applied to each patch, then the results are stitched back together (see the sketch below)
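
A schematic of this patch-wise pipeline, reusing the hypothetical overestimate_K and inmf helpers sketched earlier; the patch size and the absence of any overlap or boundary handling are simplifications of mine, not details from the talk.

```python
import numpy as np

def inmf_by_patches(movie, patch=32):
    """movie: (H, W, T) image stack. Returns a list of (y0, x0, W_patch, H_patch) per patch."""
    Hh, Ww, T = movie.shape
    results = []
    for y0 in range(0, Hh, patch):
        for x0 in range(0, Ww, patch):
            block = movie[y0:y0 + patch, x0:x0 + patch, :]
            D = block.reshape(-1, T)              # pixels x frames for this patch
            K = overestimate_K(D)                 # per-patch K (see earlier sketch)
            W, Hmat = inmf(D, K)                  # iNMF on the patch (see earlier sketch)
            results.append((y0, x0, W.reshape(block.shape[0], block.shape[1], K), Hmat))
    return results
```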
12 / 22
Competitor Methods
- CSSTORM (Zhu et al, 2012). Acts on each frame separately; uses ideas from compressed sensing regarding the spatial sparsity of the sources
- 3B (Bayesian Blinking and Bleaching; Cox et al, 2011). Fits a hidden Markov chain for each source. Expensive MCMC approximations over the location, blur and brightness of each source, with jump moves over the number of sources
- bSOFI (balanced Super-resolution Optical Fluctuation Imaging; Geissbuehler et al, 2012). Does not localize emitters but analyses higher-order statistics of the intensity fluctuations
13 / 22
Simulations: Quantitative Evaluation
- Scatter sources randomly at a given density; time series generated by down-sampling a telegraph process (see the sketch below)
- For each method, measure localization precision and the ability to recover individual sources
- Use the precision-recall curve and calculate the Average Precision (AP)
- Use the methodology from the PASCAL VOC competition to define TPs, FPs and FNs
#$"
!"
%$"
%&"
#$%"&"#'%"
#$%"&"#'("
!"
14 / 22
[Figure: precision-recall curve together with its interpolated precision; the Average Precision is the area under the interpolated curve]

- Sources are ranked according to their mean intensity, mean_t(h_kt)
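
A sketch of the average-precision computation implied by the curve above: detections are ranked by their score (here the mean intensity of h_k), cumulative precision and recall are formed, the precision envelope is made non-increasing, and AP is the area under that curve. The all-point interpolation used here is my assumption about which PASCAL VOC variant was followed.

```python
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """scores: detection scores (e.g. mean_t h_kt); is_true_positive: match outcome per detection."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_ground_truth
    # Interpolated precision: make the precision envelope non-increasing in recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # Area under the precision-recall curve (all-point interpolation)
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))
```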
15 / 22
16 / 22
[Figure: Average Precision [%] as a function of true source density [µm⁻²] (0-60) for iNMF, CSSTORM, bSOFI and 3B]
17 / 22
Comparisons on Simulated and Real Data
[Figure: panels (a) and (b); (b) shows tubulin fibres of a HEp-2 cell immuno-labelled with QDs]
18 / 22
iNMF vs ICA

[Figure: (a) raw data, (b) components w recovered by iNMF, (c) components w recovered by ICA; scale bars 1 µm]
19 / 22
Localization in Depth
[Figure: recovered PSF shapes colour-coded by axial position [µm] (0.0-1.0); scale bar 5 µm]

A neurone with neurotransmitter receptor subunits labeled with QD605. Data kindly supplied by Anja Huss.
20 / 22
Conclusions
- NMF is a natural formulation for localization microscopy with QDs
- Problems with local optima in fitting led to the iNMF algorithm
- Outperforms competitors on the localization and detection task (assessed on synthetic data)
- Promising results on real data
- Access to the shape of each PSF allows localization in 3D
- Code at https://github.com/aludnam/inmf
21 / 22
Acknowledgments
- Work supported in part by grants EP/F500385/1 and BB/F529254/1 to the University of Edinburgh School of Informatics DTC in Neuroinformatics and Computational Neuroscience (www.anc.ac.uk/dtc) from EPSRC, BBSRC and MRC
- Thanks to:
  - Stefan Geissbuehler and Marcel Leutenegger for providing the bSOFI algorithm and help with the bSOFI evaluation
  - David Baddeley and Anja Huss for providing us with data from three-dimensional samples
  - Susan Cox, Martin Kielhorn and Kai Wicker for interesting discussions
22 / 22