Accelerated parameter estimation
of gravitational wave sources
Priscilla Canizares, Cambridge (IoA)
(1) with Scott E. Field, Jonathan Gair, Manuel Tiglio
(2) with (1) + Rory Smith and Vivien Raymond
10th International LISA Symposium
1
Priscilla Canizares (IoA)
2
Motivation

• Matched filtering (correlation): the detector signal (GW + noise) is correlated against GW templates over long observation times (continuous GW signals) and over large parameter spaces (searches target unknown systems).

• The detected data stream is s(·) = h_TR(·) + n(·), where h(·, Θ) is a waveform model chosen to characterise the physical system, Θ = {θ^1, ..., θ^p}, and the argument is either time (· = t) or frequency (· = f). The template h(·, Θ) approximates the true GW signal h_TR concealed in the noise n present in the detected data.

• For discretely sampled noisy data the likelihood is

P(s|Θ) ∝ exp[ −(1/2) ⟨ s(·) − h(·, Θ) | s(·) − h(·, Θ) ⟩ ],   (1)

defined by the weighted inner product (see e.g. [24])

⟨d|h(θ)⟩ = 4 ℜ Σ_{k=1}^{L} Δf d̃*(f_k) h̃(θ; f_k) / S_n(f_k),   (2)

where d̃(f_k) and h̃(θ; f_k) are the discrete Fourier transforms at frequencies {f_k}_{k=1}^{L}, * denotes complex conjugation, and the power spectral density (PSD) S_n(f_k) characterizes the detector's noise. In the continuum limit,

⟨a|b⟩ = 4 ℜ ∫_{f_min}^{f_max} ã*(f) b̃(f) / S̃_n(f) df.   (3)

• For a given observation time T = 1/Δf and detection frequency window (f_high − f_low) there are

L = int[ (f_high − f_low) T ]   (4)

sampling points in the sum (2). When L is large, likelihood evaluations become very expensive.

• Expanding Eq. (1), the likelihood involves ⟨s|s⟩ + ⟨h(·,Θ)|h(·,Θ)⟩ − 2ℜ⟨s|h(·,Θ)⟩. The term ⟨s|s⟩ does not depend on the system parameters and needs to be computed only once, but ℜ⟨s|h(·,Θ)⟩ and ‖h(·,Θ)‖² = ⟨h(·,Θ)|h(·,Θ)⟩ must be evaluated at each point visited in parameter space.

• Parameter estimation requires repeated evaluations of the likelihood across the parameter space, say M times, once for each different set of parameters. The correlation cost scales with the length of the data and the dimension of the parameter space. For the cases considered here there are two major bottlenecks: (i) evaluation of the GW model at each frequency point f_k and (ii) computing the likelihood itself, Eq. (1).

• MCMC algorithms are a useful technique for searching through such large spaces, following a random walk over a set of points {Θ_1, Θ_2, ..., Θ_m} in parameter space, but they become very expensive when dealing with high-dimensional problems.
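To make Eqs. (1)-(3) concrete, here is a minimal numpy sketch (not part of the talk; the array names and flat PSD are illustrative assumptions) of the discrete noise-weighted inner product and the resulting log-likelihood, up to the parameter-independent ⟨s|s⟩ term.

import numpy as np

def inner_product(a_tilde, b_tilde, psd, df):
    # Discrete noise-weighted inner product, Eq. (2):
    # <a|b> = 4 * Re sum_k conj(a_k) b_k / S_n(f_k) * df
    return 4.0 * df * np.real(np.sum(np.conj(a_tilde) * b_tilde / psd))

def log_likelihood(d_tilde, h_tilde, psd, df):
    # Gaussian log-likelihood, Eq. (1), dropping the parameter-independent
    # -0.5 <d|d> constant:  log L = <d|h> - 0.5 <h|h>
    return (inner_product(d_tilde, h_tilde, psd, df)
            - 0.5 * inner_product(h_tilde, h_tilde, psd, df))

# Toy usage: L frequency bins of width df, random "data" and "template",
# flat PSD (all of this is illustrative only).
L, df = 1024, 1.0 / 64.0
rng = np.random.default_rng(0)
d_tilde = rng.normal(size=L) + 1j * rng.normal(size=L)
h_tilde = rng.normal(size=L) + 1j * rng.normal(size=L)
psd = np.ones(L)
print(log_likelihood(d_tilde, h_tilde, psd, df))

Each such evaluation costs O(L) operations, which is the cost the ROQ approach described in the following slides reduces.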
Motivation
• Markov chain Monte Carlo (MCMC) methods are employed to:
  - Assess the relative likelihood of a true detection vs a false noise trigger
  - Estimate the GW parameters by computing the posterior distribution function (PDF)
• MCMC maps the likelihood in parameter space.
• MCMC techniques are computationally expensive: the cost depends on the # of sampling points and the dimensionality of the waveform space.
• Example: Using current Bayesian parameter estimation tools, blind injections from the LIGO and Virgo science runs took several months to analyse ~10s of signals. [arXiv:1304.1775]
3
Goal: Compression of the GW model without loss of information → fewer computational operations.
4
Reduced Order Modelling for GW parameter estimation
• Toy model: Sine-Gaussian burst
• Realistic problem: TaylorF2 for binary neutron stars
Summary
5
Applying ROM for GW PE
Reduced basis (RB) methods efficiently deal with parametrised problems:
• Reduced number of waveforms
• Reduced number of sampling points
Training space (template bank) on a given range of parameters: M templates, each sampled at points j = 1, ..., D.
Basis of waveform templates - the RBs (n << M), evaluated at reduced sampling points j = 1, ..., d (d << D).
6
Applying ROM for GW PE
ROQ parameter estimation recipe
• Step (1) Construct reduced basis: Find a set of templates that can reproduce every template in the model space to a certain specified precision.
OFFLINE
7
OFFLINE
Step (1) Find a Reduced Basis (Greedy algorithm)
Our approach for speeding up correlation computations is based on Reduced Order Quadrature (ROQ), which combines dimensional reduction with the Empirical Interpolation Method (EIM) to produce a nearly optimal quadrature rule for parametrized systems. It exploits smooth dependence with respect to parameter variation to achieve very fast convergence with the number of data samples; the overall likelihood cost is thereby reduced from N samples to m ≪ N, even in the presence of noise.

The greedy algorithm builds the basis iteratively: at each step it adds the training-space waveform with the largest representation error ‖h − h_RB‖², so the maximum error over the training set converges fast, typically exponentially, with the number of basis elements.

Example: Sine-Gaussian burst waveform.
[Figure: greedy points selected for sine-Gaussian waveforms, and RB representation error versus # RB; 54 basis templates out of 180 suffice, with the error decaying exponentially.]

• The greedy algorithm ensures exponential convergence of the representation error with the # of basis elements.
• # of basis elements (GW templates) = # of interpolation points.

PC et al PRD 2013
8
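To illustrate Step (1), here is a minimal Python sketch of a greedy reduced-basis construction. The names, the seed choice, and the plain Euclidean (rather than noise-weighted) inner product are simplifying assumptions for this toy, not the talk's actual implementation.

import numpy as np

def greedy_reduced_basis(training_set, tol=1e-12):
    # Greedy selection of a reduced basis from `training_set` (one normalised
    # template per row).  At each iteration, add the template whose projection
    # error onto the current basis is largest, until the worst error over the
    # whole training set drops below `tol`.
    basis = [training_set[0] / np.linalg.norm(training_set[0])]   # seed
    greedy_indices = [0]
    while True:
        B = np.array(basis)                      # orthonormal rows
        coeffs = training_set @ B.conj().T       # projection coefficients
        residual = training_set - coeffs @ B     # part not captured by the basis
        errors = np.linalg.norm(residual, axis=1) ** 2
        worst = int(np.argmax(errors))
        if errors[worst] < tol:
            break
        new = residual[worst]                    # orthogonal to current basis
        basis.append(new / np.linalg.norm(new))  # Gram-Schmidt step
        greedy_indices.append(worst)
    return np.array(basis), greedy_indices

# Toy training set: sine-Gaussian-like templates on a frequency grid.
f = np.linspace(0.0, 1.0, 200)
train = np.array([np.exp(-(f - f0) ** 2 / 0.01) * np.exp(2j * np.pi * f0 * f)
                  for f0 in np.linspace(0.1, 0.5, 180)])
train /= np.linalg.norm(train, axis=1, keepdims=True)
rb, picks = greedy_reduced_basis(train, tol=1e-10)
print(len(rb), "basis elements selected out of", len(train), "templates")

The number of selected basis elements depends only on the target tolerance and on how smoothly the templates vary with the parameters, which is why the error decays exponentially for smooth waveform families.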
Applying ROM for GW PE
ROQ parameter estimation recipe
• Step (1) Construct reduced basis: Find a set of templates that can reproduce every template in
the model space to a certain specified precision.
OFFLINE
OFFLINE
• Step (2) Find empirical interpolation points: Find a set of points at which to match templates onto the basis.
9
OFFLINE
Step (2) Find the Reduced Basis evaluation points (interpolation points)
The interpolation points for a given set of basis functions are found iteratively: Empirical Interpolation Method (EIM) + Greedy algorithm.
EIM deals with parametrised problems characterised by non-polynomial bases.
The set of EIM points is nested and hierarchical.
Selection of the 1st and 2nd interpolation points:
[Figure: the first two basis functions e1 and e2 versus frequency (Hz), with the interpolant I1[e2] and the residual I1[e2] − e2; the 1st and 2nd interpolation points sit at the maxima of |e1| and of the residual, respectively.]
PC et al PRD 2013
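A Python sketch of the EIM node selection of Step (2), assuming an orthonormal basis from Step (1). The pivot-on-largest-residual loop below is the standard EIM algorithm written for illustration, not the talk's exact code.

import numpy as np

def empirical_interpolation_points(e):
    # Greedy EIM node selection for a basis `e` of shape (m, N).  The first
    # node is where |e_1| is largest; thereafter node j+1 is where the residual
    # between e_{j+1} and its current interpolant I_j[e_{j+1}] is largest.
    # Returns the node indices and the interpolation matrix A, A[i, j] = e_j(F_i).
    m = e.shape[0]
    nodes = [int(np.argmax(np.abs(e[0])))]
    for j in range(1, m):
        A = e[:j, nodes].T                    # j x j matrix with A[i, k] = e_k(F_i)
        c = np.linalg.solve(A, e[j, nodes])   # interpolation coefficients
        residual = e[j] - c @ e[:j]           # e_{j+1} minus its interpolant
        nodes.append(int(np.argmax(np.abs(residual))))
    return np.array(nodes), e[:, nodes].T     # nodes and the full m x m matrix A

# Toy usage: an orthonormalised polynomial basis on a grid (illustration only).
x = np.linspace(-1.0, 1.0, 200)
basis = np.linalg.qr(np.array([x ** k for k in range(5)]).T)[0].T
pts, A = empirical_interpolation_points(basis)
print("EIM nodes:", x[pts])

Because each new node is chosen where the previous interpolant fails most, the node set is nested and hierarchical, exactly as stated on the slide.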
Applying ROM for GW PE
ROQ parameter estimation recipe
• Step (1) Construct reduced basis: Find a set of templates that can reproduce every template in
the model space to a certain specified precision.
OFFLINE
OFFLINE
• Step (2) Find empirical interpolation points: Find a set of points at which to match templates onto the basis.
• Step (3) Construct signal specific weights: Compute the weights to use in the quadrature rule once
data has been collected.
startup
11
startup
Step (3) Construct the signal-specific weights
The cost of evaluating integrals scales linearly with the # of RBs, n.
12
Applying ROM for GW PE

Classical interpolation: given a set of m nodes {x_i}, known evaluations {h_i := h(x_i)}, and a basis e_i = p_i(x), where p_i(x) is a degree i ≤ m − 1 polynomial, find an approximation (the interpolant)

I_m[h](x) = Σ_{i=1}^{m} c_i p_i(x) ≈ h(x),   (14)

where the approximant is required to agree with the function at the set of m nodes,

I_m[h](x_i) = h_i   for i = 1, ..., m.   (15)

For application-specific bases a good set of interpolation nodes is not known a priori; the EIM identifies a nearly optimal set. Finding the EIM interpolant, Eq. (17), is equivalent to solving an m-by-m system A c⃗ = h⃗ for the coefficients c⃗, where

A := ( e_1(F_1)  e_2(F_1)  ···  e_m(F_1)
       e_1(F_2)  e_2(F_2)  ···  e_m(F_2)
       e_1(F_3)  e_2(F_3)  ···  e_m(F_3)
        ...        ...     ···    ...
       e_1(F_m)  e_2(F_m)  ···  e_m(F_m) ).   (18)

The EIM algorithm ensures that the matrix A is invertible, with c⃗ = A⁻¹ h⃗ the unique solution to Eq. (17). As A is parameter independent we have, for all values of the intrinsic parameters λ,

I_m[h](f; λ) = e⃗^T(f) A⁻¹ h⃗(λ),   (19)

where e⃗^T = [e_1(f), ..., e_m(f)] denotes the transpose of the basis vectors, which we continue to view as functions. Substituting the interpolant into the overlap gives

⟨h(λ)|s⟩_d = 4ℜ Σ_{k=0}^{N/2} s*(f_k) h(f_k; λ) Δf
           ≈ 4ℜ Σ_{k=0}^{N/2} s*(f_k) I_m[h(f_k; λ)] Δf
           = 4ℜ [ Σ_{k=0}^{N/2} s*(f_k) e⃗^T(f_k) Δf ] A⁻¹ h⃗(λ)
           = 4ℜ Σ_{k=1}^{m} ω_k h(F_k; λ)
           =: ⟨h(λ)|s⟩_ROQ ,

where the weights ω_k depend on the data (and hence on the extrinsic parameters) but not on the intrinsic parameters λ, so they are computed once per data stream. The empirical interpolant is nearly optimal in the sense that it satisfies

max_λ ‖h(·; λ) − I_m[h(·; λ)]‖² ≤ Λ_m ε_m²,   (20)

where ε_m characterizes the representation error of the basis as defined in Eq. (6) and Λ_m is a computable Lebesgue constant. The ROQ rule's accuracy therefore only depends on the interpolant's accuracy in representing h(f; λ), i.e. on whether the waveform lies inside or outside the space spanned by the RBs.

13
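Translating Eq. (19) and the weight formula above into a short Python sketch (illustrative names, plain numpy, not the LAL implementation): the weights are built once from the data, PSD, basis, and EIM matrix, after which each overlap costs only m template evaluations.

import numpy as np

def roq_weights(d_tilde, psd, df, e, A):
    # Startup stage (Step 3): data-dependent ROQ weights
    #   omega = [ 4 df sum_k conj(d_k) e^T(f_k) / S_n(f_k) ] A^{-1},
    # so that <d|h(theta)> ~ Re sum_j omega_j h(F_j; theta).
    # `e` holds the basis on the full grid (shape m x N); `A` is the m x m EIM matrix.
    proj = 4.0 * df * (np.conj(d_tilde) / psd) @ e.T   # shape (m,)
    return np.linalg.solve(A.T, proj)                  # omega = proj @ A^{-1}

def roq_overlap(weights, h_at_nodes):
    # Online stage: m-term quadrature replacing the full sum in Eq. (2).
    return np.real(np.sum(weights * h_at_nodes))

Once the weights are in hand, each likelihood evaluation only needs the template at the m EIM nodes, which is where the linear-in-n cost quoted on the previous slide comes from.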
Applying ROM for GW PE
ROQ parameter estimation recipe
• Step (1) Construct reduced basis: Find a set of templates that can reproduce every template in
the model space to a certain specified precision.
OFFLINE
OFFLINE
• Step (2) Find empirical interpolation points: Find a set of points at which to match templates onto the basis.
• Step (3) Construct signal specific weights: Compute the weights to use in the quadrature rule once
data has been collected.
startup
• Step (4) Carry out parameter estimation: Evaluate likelihood/posterior over parameter space using
ROQ rule and, e.g., MCMC.
ONLINE
14
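For Step (4), a Python sketch of how the online stage might sit inside a toy random-walk Metropolis sampler. Here h_model, the node frequencies, the ROQ weights, and h_norm (a closed-form or second-rule evaluation of ⟨h|h⟩) are assumed inputs from the previous steps; none of this is the LALInference implementation.

import numpy as np

def roq_log_likelihood(theta, h_model, node_freqs, weights, h_norm):
    # log L = <d|h> - 0.5 <h|h>: the overlap <d|h> uses the m-term ROQ rule
    # (Step 3 weights), while <h|h> comes from `h_norm(theta)` (a closed-form
    # expression where available, or a second ROQ rule).
    h_nodes = h_model(theta, node_freqs)           # m template evaluations
    return np.real(np.sum(weights * h_nodes)) - 0.5 * h_norm(theta)

def metropolis_hastings(log_like, theta0, n_steps=1000, step=0.01, seed=1):
    # Toy random-walk Metropolis sampler driving the (ROQ) likelihood;
    # a flat prior over an unbounded range is implicitly assumed.
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logl = log_like(theta)
    chain = [theta.copy()]
    for _ in range(n_steps):
        proposal = theta + step * rng.normal(size=theta.shape)
        logl_prop = log_like(proposal)
        if np.log(rng.uniform()) < logl_prop - logl:
            theta, logl = proposal, logl_prop
        chain.append(theta.copy())
    return np.array(chain)

# Sanity check of the sampler on a 1-D Gaussian target (illustration only).
chain = metropolis_hastings(lambda t: -0.5 * float(t[0] ** 2),
                            theta0=[3.0], n_steps=5000, step=0.5)
print("posterior mean ~", chain[1000:].mean())

The sampler is unchanged by the ROQ: only the likelihood call becomes cheaper, which is why the speed-ups reported on the following slides carry over directly to the end-to-end run time.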
Burst GW waveform

[Figure: marginalised posterior distributions for the sine-Gaussian burst parameters (amplitude A, frequency f, coalescence time t_c), computed with the ROQ likelihood and with the full likelihood; the two sets of posteriors agree closely.]

15
PC et al PRD 2013
Burst GW waveform

Computing the likelihood requires

⟨s|s⟩ + ⟨h(λ)|h(λ)⟩ − 2ℜ⟨s|h(λ)⟩,   (31)

where the last term is handled with the ROQ rule and the first term only needs to be computed once. For a sine-Gaussian burst waveform in white noise, the norm ⟨h(λ)|h(λ)⟩ has a closed-form expression (32); when closed-form expressions are unavailable, one possibility is to build an additional ROQ rule for the norm, at the cost of extra offline computations.

• Noise does not affect the ROQ: the integration error behaves the same for noise-free data and for the minimum/maximum noise realisations considered.

[Figure: maximum integration error versus number of ROQ nodal points m, for 10,000 randomly selected values of λ; the noise-free case s = h reaches ε_m ≈ 3 × 10⁻⁸ at m = 54, the rule used in the analysis.]

PC et al PRD 2013
16
Burst GW waveform

Coalescence time: introduces a frequency-dependent phase shift, but the basis built for t_c = 0 works well for non-zero t_c.

[Figure: accuracy of the t_c = 0 ROQ rule for non-zero t_c; maximum integration error versus time of arrival (s), for noise-free data and for minimum/maximum noise realisations.]

17
PC et al PRD 2013
Burst GW waveform

MCMC timing and speed-up (four parameters)
[Figure: wall-clock time of the full and ROQ likelihoods, and their ratio, versus the number of MCMC points; the ROQ gives a speed-up of roughly 25-27x for this four-parameter burst model.]

PC et al PRD 2013
18
TaylorF2 waveform (BNS)

• Comparison of both methods for recovering the values of the intrinsic parameters.
• PDF for chirp mass and symmetric mass ratio.
• Speedup: ~30 times faster (standard: 30 hours; ROQ: 1 hour).

We found that, as expected, the ROQ and standard likelihood approaches, through their LAL implementations, produce statistically indistinguishable results for the posterior probability density functions over the full 9-dimensional parameter space. As examples, results for the two intrinsic mass parameters for the injection parameters in Table I are shown in Figure 2.

TABLE I: Intrinsic parameters (chirp mass Mc, symmetric mass ratio η, masses m1 and m2) and signal-to-noise ratio (SNR) of the analysis from Figure 2. The first line gives the injected values; the last two lines give median values and 90% credible intervals [in brackets] for the same parameters recovered with the standard likelihood and with the compressed (ROQ) likelihood. The SNR was computed from Likelihood_max ≈ SNR²/2.

            Mc (M☉)                  η                     m1 (M☉)            m2 (M☉)            SNR
injection   1.2188                   0.25                  1.4                1.4                11.4
standard    1.2188 [1.2184, 1.2189]  0.249 [0.243, 0.250]  1.52 [1.41, 1.66]  1.30 [1.18, 1.39]  12.9
ROQ         1.2188 [1.2184, 1.2189]  0.249 [0.243, 0.250]  1.52 [1.41, 1.66]  1.30 [1.19, 1.39]  12.9

[FIG. 2: probability density functions for the chirp mass Mc and symmetric mass ratio η of a simulated signal in LIGO/Virgo data, as obtained with the standard likelihood (green) and with the ROQ (blue); the injection values are marked in red.]

It is also useful to quantify the fractional difference in the 9D likelihood function computed using ROQs and the standard approach. We observed fractional errors

| 1 − log L_ROQ / log L | ≲ 10⁻⁶

in all cases; that is, both approaches are indistinguishable for all practical purposes. The differences between the two methods are dominated by statistics from computing intervals with a finite number of samples. In our analysis the masses are subject to the constraint m1 < m2, leading to the true values (where m1 = m2) being at the edge of the confidence interval. The coalescence-time range is split into nc = 2,000 sub-intervals, each of size Δtc = 10⁻⁵ s, for each of which a unique set of ROQ weights is constructed; this width ensures that the discretization error is below the measurement uncertainty on the coalescence time, typically ~10⁻³ s [18].

Assuming that analyses with the advanced detectors require at least ~10⁷ likelihood evaluations, this implies upwards of one Petabyte worth of model evaluations with the standard approach; an ROQ will greatly reduce this. With parallelization of the sum in each likelihood evaluation, run-times could be reduced further, to essentially real time.

PC et al arXiv:1404.6284
19
TaylorF2 waveform (BNS)

Results from the LIGO scientific collaboration analysis library (LAL).

Runtimes of 3 months @ one Petabyte reduced to 1 day!

"... by a factor of 4, thus indicating that the speedup for an inspiral-merger-ringdown model might be higher, especially given that not many empirical interpolation nodes seem to be needed for the merger and ringdown regimes [9]."

20
PC et al arXiv:1404.6284
Summary
We have applied ROM methods to PE studies including noise and extrinsic parameters: Burst and TaylorF2 waveforms.

ROQs speed up likelihood evaluations by factors of ~30 to 150, with the same accuracy.

Higher speed-ups are expected for more complex problems.

We are working on extending ROQ methods to LIGO surrogate models and to eLISA data analysis and parameter estimation.
21
Thank you for your attention!
22