On a time-frequency approach to translation on
finite graphs
Matthew Begué
Norbert Wiener Center for Harmonic Analysis and Applications
University of Maryland, College Park
College Park, MD 20742
Email: begue@math.umd.edu
Abstract—The authors of [1] have used spectral graph theory to define a Fourier transform on finite graphs. With this definition, one can use elementary properties of classical time-frequency analysis to define time-frequency operations on graphs, including convolution, modulation, and translation. Many of these graph operators have properties that match our intuition in Euclidean space. The exception lies with the translation operator. In particular, translation does not form a group, i.e., $T_i T_j \neq T_{i+j}$. We prove that graphs whose translation operators exhibit semigroup behavior are those whose eigenvectors of the Laplacian form a Hadamard matrix.
I. INTRODUCTION
Graph theory has developed into a useful tool in applied
mathematics as many modern data sets can be represented as
a graph. In these data sets, vertices correspond to different sensors, observations, or data points and edges represent connections, similarities, or correlations among those points. Some
immediate examples include social network data, electricity
networks, and images. In fact, in many dimension reduction
methods the first step is to create a graph out of the data by
identifying the k-nearest neighbors of each point or by some
other notion of nearness, see [2]–[4]. Spectral graph theory is
also a cornerstone of the field of analysis on fractals [5]–[7].
The document is organized as follows: Section II provides relevant definitions from spectral graph theory and introduces the Laplacian operator. Section III introduces time-frequency operations in the graph domain, namely, the Fourier transform, convolution, modulation, and translation. Section IV discusses the graph translation operator and presents our main result, which gives conditions under which graph translation acts as a semigroup operator.
II. BACKGROUND
A. Graph Definitions
We denote a finite, undirected graph by G = G(V, E) where
$V = \{x_i\}_{i=1}^{N}$ denotes the vertex set and $E$ denotes the edge
set. The edge set, E, consists of ordered pairs that correspond
to edges on a graph. If there is an edge between points x ∈ V
and y ∈ V then we write x ∼ y. Hence, E = {(x, y) : x, y ∈
V and x ∼ y}. For any point x ∈ V , we define the degree of
x, denoted dx , to be the number of edges connected to point
x. In an undirected graph, the edge set E is symmetric, that
is, x ∼ y implies y ∼ x. In other words, we do not distinguish
an incoming vs. outgoing orientation to any edge.
We consider functions defined on the vertex set, $f : V \to \mathbb{R}$, $x_n \mapsto f(x_n)$. We shall occasionally use the shorthand notation $f(n) = f(x_n)$, which shall be clear in context. Since the cardinality of the vertex set is finite, $|V| = N$, we shall sometimes write functions on graphs as vectors in $\mathbb{R}^N$.
B. The Laplacian Operator
The main differential operator that we shall study is the
Laplacian, denoted by L. The pointwise formulation of the
unweighted Laplacian applied to a function f : V → R is
defined as
$$\Delta f(x) = \sum_{y \sim x} \big( f(x) - f(y) \big). \tag{II.1}$$
This is precisely the same formulation given in [8], [9] (up to
a sign).
For a finite graph, Laplace’s operator can be represented as a
matrix. For every point xi ∈ V , (II.1) gives a linear equation
depending on function values at other points in V . Each of
these linear equations can be represented in the rows of the
Laplacian matrix, yielding
$$L(i,j) = \begin{cases} d_{x_i} & \text{if } i = j,\\ -1 & \text{if } x_i \sim x_j,\\ 0 & \text{otherwise.} \end{cases} \tag{II.2}$$
Let the matrix $D$ denote the degree matrix, the diagonal $N \times N$ matrix given by $D = \mathrm{diag}(d_{x_i})$. Let $A$ denote the adjacency matrix, also $N \times N$, where
$$A(i,j) = \begin{cases} 1 & \text{if } x_i \sim x_j,\\ 0 & \text{otherwise.} \end{cases}$$
Then the unweighted Laplacian (II.2) can be written as
$$L = D - A. \tag{II.3}$$
Remark 1. The matrix $L$ in (II.3) is the Laplacian matrix for unweighted graphs (each edge is assigned a weight of 1). Weighted graphs are more general and have Laplacian $L = D - W$, where $W$ is the weight matrix obtained by replacing the 1's of the adjacency matrix with nonnegative weights $\omega(i,j)$, and the degrees become $d_{x_i} = \sum_{j=1}^{N} \omega(i,j)$. While we consider unweighted graphs in this document, the results presented still hold for weighted graphs and their Laplacians.
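As a concrete illustration, the following is a minimal MATLAB sketch (our own, not from [1]) that assembles the unweighted Laplacian (II.3); the adjacency matrix A below is an arbitrary 4-vertex example chosen only for demonstration.

A = [0 1 1 0;
     1 0 1 0;
     1 1 0 1;
     0 0 1 0];           % symmetric adjacency matrix of a small example graph
D = diag(sum(A, 2));      % degree matrix, d_{x_i} = number of neighbors of x_i
L = D - A;                % unweighted graph Laplacian (II.3)
% For a weighted graph, replace A by a nonnegative weight matrix W;
% the same formula L = D - W applies (Remark 1).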
Remark 2. The matrix $L$ is called the unnormalized Laplacian to distinguish it from the normalized Laplacian, $\mathcal{L} = D^{-1/2} L D^{-1/2}$, used in some of the literature on graphs, e.g., [9]. We shall work exclusively with the unnormalized Laplacian and shall henceforth just refer to it as the Laplacian.
Since $L$ is a real symmetric matrix, it has nonnegative eigenvalues $\{\lambda_k\}_{k=0}^{N-1}$ with associated orthonormal eigenvectors $\{\varphi_k\}_{k=0}^{N-1}$. If the graph is connected then $\lambda_0 = 0$ and $\lambda_k > 0$ for all $1 \leq k \leq N-1$. We denote the conjugate transpose of a vector $\varphi$ by $\varphi^*$. Furthermore, let us define $\Phi$ to be the $N \times N$ orthogonal matrix whose $(k+1)$th column is the vector $\varphi_k$. The spectrum of the Laplacian, $\sigma(L)$, is fixed, but one's choice of eigenvectors $\{\varphi_k\}_{k=0}^{N-1}$ can vary. Throughout the document, we assume that the choice of orthonormal eigenvectors is fixed. Furthermore, since $L$ is Hermitian, the eigenvectors can be selected to be entirely real-valued. Unless specifically noted, all of the following theory holds for real or complex eigenvectors.
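A minimal MATLAB sketch of this eigendecomposition, continuing with the Laplacian L from the previous sketch (variable names are ours):

[Phi, Lambda] = eig(L);               % columns of Phi are orthonormal eigenvectors
[lambda, idx] = sort(diag(Lambda));   % sort eigenvalues: 0 = lambda_0 <= lambda_1 <= ...
Phi = Phi(:, idx);                    % reorder so that column k+1 is phi_k
if Phi(1,1) < 0, Phi(:,1) = -Phi(:,1); end   % sign convention: phi_0 = 1/sqrt(N) > 0
norm(Phi'*Phi - eye(size(L,1)))       % ~0: Phi is orthogonal
norm(L - Phi*diag(lambda)*Phi')       % ~0: Phi diagonalizes L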
Figure II.1 shows the first four nontrivial eigenfunctions of the Minnesota Road Network graph [10]. They correspond to the four lowest nonzero eigenvalues. As the eigenvalues increase, the eigenfunctions oscillate more.
[Fig. II.1. The first four nontrivial eigenfunctions (left to right) on the Minnesota Road Network graph. Blue corresponds to lower values while red corresponds to larger values.]
III. GRAPH TIME-FREQUENCY OPERATIONS

A. The Graph Fourier Transform

The classical Fourier transform on the real line is the expansion of a function $f$ in terms of the eigenfunctions of the Laplace operator, i.e. $\hat{f}(\xi) = \langle f, e^{2\pi i \xi t} \rangle$. Analogously, we define the graph Fourier transform, $\hat{f} : \sigma(L) \to \mathbb{R}$, of a function $f : V \to \mathbb{R}$ as the expansion of $f$ in terms of the eigenfunctions of the graph Laplacian. It is defined by
$$\hat{f}(\lambda_l) = \langle f, \varphi_l \rangle = \sum_{n=1}^{N} f(n)\, \varphi_l^*(n). \tag{III.1}$$
This definition of the graph Fourier transform was introduced by Vandergheynst et al. in [1], [11]. Notice that the graph Fourier transform is only defined on values of $\sigma(L)$, which is a discrete finite set in $\widehat{\mathbb{R}}$, and that the graph Fourier transform is completely dependent on the choice of Laplacian eigenvectors. The inverse Fourier transform is then given by
$$f(n) = \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \varphi_l(n). \tag{III.2}$$
If we consider the functions $f$ and $\hat{f}$ as $N \times 1$ vectors, then (III.1) and (III.2) become
$$\hat{f} = \Phi^* f \quad \text{and} \quad f = \Phi \hat{f}.$$
Since $\Phi$ is a unitary matrix, the derivation of (III.2) is a direct result of (III.1). Furthermore, (III.2) implies that every function equals its "Fourier series." In addition, $\Phi$ being unitary immediately gives Parseval's relation, i.e. $\langle f, g \rangle = \langle \hat{f}, \hat{g} \rangle$, hence Plancherel's theorem holds as well, i.e. $\|f\|_2 = \|\hat{f}\|_2$.
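In MATLAB, with the eigenvector matrix Phi from the sketch above (Phi plays the role of Φ), the forward and inverse graph Fourier transforms and the Plancherel identity can be checked as follows; this is a sketch under our naming conventions, not code from [1].

N = size(Phi, 1);
f = randn(N, 1);              % an arbitrary signal on the vertices
fhat = Phi' * f;              % graph Fourier transform (III.1), fhat = Phi^* f
f_rec = Phi * fhat;           % inverse transform (III.2) recovers f
norm(f - f_rec)               % ~0
abs(norm(f) - norm(fhat))     % ~0, Plancherel: ||f||_2 = ||fhat||_2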
With the Fourier transform now defined on a graph, we exploit elementary time-frequency properties of real signals to derive graph convolution, modulation, and translation, as was done in [1].

B. Graph Convolution

For signals $f, g \in L^2(\mathbb{R})$ we define the convolution as $f * g(t) = \int_{\mathbb{R}} f(u)\, g(t-u)\, du$. However, there is no clear analogue of translation in the graph setting, so we exploit the convolution property $\widehat{f * g}(\xi) = \hat{f}(\xi)\hat{g}(\xi)$. Then, by the inverse graph Fourier transform (III.2), we can define convolution in the graph domain. For $f, g : V \to \mathbb{R}$, we define the graph convolution of $f$ and $g$ as
$$f * g(n) = \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \hat{g}(\lambda_l)\, \varphi_l(n). \tag{III.3}$$

Proposition 3 ([1]). For $\alpha \in \mathbb{R}$ and $f, g, h : V \to \mathbb{R}$, the graph convolution defined in (III.3) satisfies the following properties:
1) $\widehat{f * g} = \hat{f}\hat{g}$.
2) $\alpha (f * g) = (\alpha f) * g = f * (\alpha g)$.
3) Commutativity: $f * g = g * f$.
4) Distributivity: $f * (g + h) = f * g + f * h$.
5) Associativity: $(f * g) * h = f * (g * h)$.

The proofs of these properties are relatively straightforward and some follow immediately from the definition (III.3). If we want to express the graph convolution as a vector, then (III.3) is equivalent to $\Phi (\hat{f} \odot \hat{g})$, where $\odot$ is the pointwise multiplication operator (written as .* in MATLAB). In MATLAB (recalling that $\hat{f} = \Phi^* f$), one would express the vector $f * g$ as
Φ*((Φ'*f).*(Φ'*g))
or equivalently
Φ*(diag(Φ'*f)*Φ'*g) or Φ*(diag(Φ'*g)*Φ'*f).
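Continuing the same sketch, the convolution can be wrapped as an anonymous function and the commutativity and associativity claims of Proposition 3 spot-checked numerically (again with Phi in place of Φ):

gconv = @(f, g) Phi * ((Phi'*f) .* (Phi'*g));          % graph convolution (III.3)
f = randn(N,1); g = randn(N,1); h = randn(N,1);
norm(gconv(f, g) - gconv(g, f))                        % ~0, commutativity
norm(gconv(gconv(f, g), h) - gconv(f, gconv(g, h)))    % ~0, associativity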
Example 4. Let us introduce the function $g_0 : V \to \mathbb{R}$ by first defining it in the Fourier (frequency) domain. We set $\hat{g}_0(\lambda_l) = 1$ for all $l = 0, \ldots, N-1$. Then the values of $g_0$ can be obtained by taking the inverse Fourier transform (III.2), giving $g_0(n) = \sum_{l=0}^{N-1} \varphi_l(n)$ (in vector notation, $g_0 = \Phi \mathbf{1}_{N \times 1}$, where $\mathbf{1}_{N \times 1}$ is the $N \times 1$ vector of all ones). The function $g_0$ is shown in Figure III.1.

[Fig. III.1. An overhead (left) and side (right) view of the function $g_0$, which is the unique function that acts as the identity with convolution on the Minnesota graph [10].]

Then, as a result of Proposition 3, we have
$$f(n) = \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \varphi_l(n) = \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \hat{g}_0(\lambda_l)\, \varphi_l(n) = f * g_0(n). \tag{III.4}$$
In other words, $f * g_0 = f$, so convolution with the function $g_0$ is the identity.
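Continuing the sketch above, the identity property of $g_0$ can be verified numerically:

g0 = Phi * ones(N, 1);        % g0 in the vertex domain; ghat_0(lambda_l) = 1 for all l
norm(gconv(f, g0) - f)        % ~0, since f * g0 = f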
C. Graph Modulation

Motivated by the fact that in Euclidean space, modulation of a function is multiplication by a Laplacian eigenfunction, we define for any $k = 0, 1, \ldots, N-1$ the graph modulation operator $M_k : \mathbb{R}^N \to \mathbb{R}^N$ as
$$(M_k f)(n) = \sqrt{N}\, f(n)\, \varphi_k(n). \tag{III.5}$$
We normalize by $\sqrt{N}$ in (III.5) so that $M_0$, that is, multiplication by $\varphi_0 \equiv 1/\sqrt{N}$, is the identity operator. We can express the operator $M_k$ as the diagonal matrix
$$M_k = \sqrt{N} \begin{pmatrix} \varphi_k(1) & & 0\\ & \ddots & \\ 0 & & \varphi_k(N) \end{pmatrix}$$
and in the language of MATLAB we have
Mk=sqrt(N)*diag(Φ(:,k)).
An important remark is that in the classical case, modulation in the time domain represents translation in the frequency domain, i.e. $\widehat{M_\xi f}(\omega) = \hat{f}(\omega - \xi)$. The graph modulation in general does not exhibit this property, due to the fact that the discrete spectral domain does not have a clear structure. However, it is worth noticing the special case when $\hat{g}(\lambda_l) = \delta_0(\lambda_l)$, where
$$\delta_0(\lambda) = \begin{cases} 1 & \text{if } \lambda = 0,\\ 0 & \text{if } \lambda \neq 0, \end{cases}$$
then
$$\widehat{M_k g}(\lambda_l) = \sum_{n=1}^{N} \varphi_l^*(n)\, (M_k g)(n) = \sum_{n=1}^{N} \varphi_l^*(n)\, \sqrt{N}\, \varphi_k(n)\, \frac{1}{\sqrt{N}} = \delta(l, k)$$
by the orthonormality of $\{\varphi_k\}_{k=0}^{N-1}$.
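The special case above can be checked numerically. The following sketch (ours) uses the $\sqrt{N}$ normalization of (III.5) and MATLAB's 1-based indexing, so that $\varphi_k$ is column k+1 of Phi:

k = 2;                                   % any modulation index k in {0,...,N-1}
Mk = sqrt(N) * diag(Phi(:, k+1));        % graph modulation operator M_k as a matrix
g  = Phi * [1; zeros(N-1, 1)];           % the g with ghat = delta_0, i.e. g = phi_0
Mkg_hat = Phi' * (Mk * g);               % Fourier transform of M_k g
norm(Mkg_hat - [zeros(k,1); 1; zeros(N-k-1,1)])   % ~0: (M_k g)^ = delta_k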
D. Graph Translation

In the classical setting, the translation operator $T_u$ translates the function $f : \mathbb{R} \to \mathbb{R}$ by the vector $u$. The translation action can be expressed as a convolution with $\delta_u$, where
$$\delta_u(x) = \begin{cases} 1 & \text{if } x = u,\\ 0 & \text{if } x \neq u. \end{cases}$$
Then in $\mathbb{R}$, by exploiting (III.3) we have
$$(T_u f)(t) = f(t-u) = (f * \delta_u)(t) = \int_{\mathbb{R}} \hat{f}(k)\, \hat{\delta}_u(k)\, \varphi_k(t)\, dk = \int_{\mathbb{R}} \hat{f}(k)\, \varphi_k^*(u)\, \varphi_k(t)\, dk$$
since $\hat{\delta}_u(k) = \int_{\mathbb{R}} \delta_u(x)\, \varphi_k^*(x)\, dx = \varphi_k^*(u)$.
Motivated by this example, for any $f : V \to \mathbb{R}$ we can define the graph translation operator, $T_i$, via the graph convolution with the Dirac delta centered at the $i$th vertex:
$$(T_i f)(n) = \sqrt{N}\, (f * \delta_i)(n) = \sqrt{N} \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \varphi_l^*(i)\, \varphi_l(n). \tag{III.6}$$
We can express $T_i f$ in matrix notation as follows:
$$\begin{pmatrix} \varphi_0^*(i)\varphi_0(1) & \cdots & \varphi_{N-1}^*(i)\varphi_{N-1}(1)\\ \vdots & \cdots & \vdots\\ \varphi_0^*(i)\varphi_0(N) & \cdots & \varphi_{N-1}^*(i)\varphi_{N-1}(N) \end{pmatrix} \begin{pmatrix} \hat{f}(\lambda_0)\\ \vdots\\ \hat{f}(\lambda_{N-1}) \end{pmatrix} = \big( \Phi \odot [\varphi_0^*(i) \cdots \varphi_{N-1}^*(i)] \big)\, \Phi^* f = \frac{1}{\sqrt{N}}\, T_i f,$$
where we introduce the binary operation $\odot$ to signify elementwise multiplication. For example, if
$$A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix}, \qquad b = (b_1, b_2),$$
then $A \odot b$ multiplies the row vector $b$ component-wise with each row of $A$; that is,
$$A \odot b = \begin{pmatrix} a_{11} b_1 & a_{12} b_2\\ a_{21} b_1 & a_{22} b_2 \end{pmatrix}.$$
Similarly, if $b'$ were instead a column vector, i.e.
$$A = \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix}, \qquad b' = \begin{pmatrix} b_1\\ b_2 \end{pmatrix},$$
then $A \odot b'$ multiplies the column vector $b'$ component-wise with each column of $A$; that is,
$$A \odot b' = \begin{pmatrix} a_{11} b_1 & a_{12} b_1\\ a_{21} b_2 & a_{22} b_2 \end{pmatrix}.$$
The MATLAB command for $A \odot b$ is bsxfun(@times,A,b). Therefore, for any $i = 1, \ldots, N$, the MATLAB command for the translation operator is
Ti=sqrt(N)*bsxfun(@times,Φ,Φ(i,:))*Φ'.
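As a usage sketch (with Phi in place of Φ and variable names of our choosing), the matrix form of $T_i$ above can be checked against the spectral formula (III.6):

i = 2;                                                  % translate to vertex i
Ti = sqrt(N) * bsxfun(@times, Phi, Phi(i,:)) * Phi';    % translation operator as a matrix
Tif = sqrt(N) * Phi * (Phi(i,:)' .* (Phi'*f));          % the sum in (III.6) written spectrally
norm(Ti*f - Tif)                                        % ~0: both forms agree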
There are some properties of graph translation that agree with our intuition from classical Euclidean translation. For example, the graph translation operator distributes over convolution, and translation operators commute among themselves.

Proposition 5 ([1]). For any $f, g : V \to \mathbb{R}$ and for any $i, j \in \{1, 2, \ldots, N\}$:
1) $T_i(f * g) = (T_i f) * g = f * (T_i g)$.
2) $T_i T_j f = T_j T_i f$.

Corollary 6. As long as the eigenvectors $\{\varphi_k\}_{k=0}^{N-1}$ are real-valued, then for any $i, n \in \{1, \ldots, N\}$ and for any function $g : V \to \mathbb{R}$ we have
$$T_i g(n) = T_n g(i).$$
Proof. By definition,
$$(T_i g)(n) = \sqrt{N} \sum_{l=0}^{N-1} \hat{g}(\lambda_l)\, \varphi_l^*(i)\, \varphi_l(n).$$
But by assumption, the eigenbasis is entirely real-valued. Therefore, for any $i, n \in \{1, \ldots, N\}$ and any $l \in \{0, \ldots, N-1\}$, we have $\varphi_l^*(i)\varphi_l(n) = \varphi_l^*(n)\varphi_l(i)$, which proves the corollary.
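A quick numerical spot-check of Corollary 6 in MATLAB, continuing with the real eigenvector matrix Phi from the earlier sketches:

T = @(i) sqrt(N) * bsxfun(@times, Phi, Phi(i,:)) * Phi';   % translation to vertex i
g = randn(N, 1);
i = 1; n = 3;
Tig = T(i) * g;  Tng = T(n) * g;
abs(Tig(n) - Tng(i))          % ~0, since T_i g(n) = T_n g(i)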
Corollary 7. Let $\alpha(0) \in \{1, \ldots, N\}$ and $\alpha = (\alpha(1), \alpha(2), \ldots, \alpha(K))$, where $\alpha(j) \in \{1, \ldots, N\}$ for $1 \leq j \leq K$. We let $T_\alpha$ denote the composition $T_{\alpha(1)} \circ T_{\alpha(2)} \circ \cdots \circ T_{\alpha(K)}$. Then for any $f : V \to \mathbb{R}$ and any permutation $\sigma$ of the set $\{0, 1, \ldots, K\}$, we have $T_\alpha f(\alpha(0)) = T_\beta f(\alpha(\sigma(0)))$, where $\beta = (\alpha(\sigma(1)), \ldots, \alpha(\sigma(K)))$.

Proof. Define an equivalence relation on the space $\{1, \ldots, N\}^{K+1}$ by $(a_0, \ldots, a_K) \cong (b_0, \ldots, b_K)$ if and only if
$$T_{a_1} \circ \cdots \circ T_{a_K} f(a_0) = T_{b_1} \circ \cdots \circ T_{b_K} f(b_0).$$
By Corollary 6, $(a_0, a_1, \ldots, a_K) \cong \sigma_1(a_0, a_1, \ldots, a_K) = (a_1, a_0, \ldots, a_K)$, i.e. $\sigma_1$ is the transposition $(1, 2)$. In general, we write $\sigma_i$ to denote the transposition $(i, i+1)$. By Proposition 5(2), $(a_0, a_1, \ldots, a_K) \cong \sigma_i(a_0, a_1, \ldots, a_K)$ for any $i = 2, 3, \ldots, K-1$.
We now have that any transposition $\sigma_i$ for $i = 1, \ldots, K-1$ preserves equivalency. This collection of $K-1$ transpositions generates any permutation $\sigma$, which proves the corollary.
However, the niceties end here; many of the properties of Euclidean translation do not hold with this definition of graph translation. Unlike in the classical setting, graph translation is not an isometric operation, that is, $\|T_i f\|_2 \neq \|f\|_2$. However, we do have the following estimate.
Lemma 8 ([1]). For any $f : V \to \mathbb{R}$,
$$|\hat{f}(0)| \leq \|T_i f\|_2 \leq \sqrt{N} \max_{l \in \{0,1,\ldots,N-1\}} |\varphi_l(i)|\, \|f\|_2 \leq \sqrt{N} \max_{l \in \{0,1,\ldots,N-1\}} \|\varphi_l\|_\infty\, \|f\|_2. \tag{III.7}$$

Proposition 9. In general, the operator $T_i$ is not injective and therefore not invertible.

Proof. Suppose there exist some $k \in \{1, \ldots, N-1\}$ and some $i \in \{1, \ldots, N\}$ for which $\varphi_k(i) = 0$. If $f = \varphi_k$, then $\hat{f}(\lambda_l) = \delta_k(l)$ and
$$T_i f(n) = \sqrt{N} \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \varphi_l^*(i)\, \varphi_l(n) = \sqrt{N}\, \varphi_k^*(i)\, \varphi_k(n) = 0$$
for all $n = 1, \ldots, N$. Therefore, there exist nonzero $f : V \to \mathbb{R}$ such that $T_i f = 0$, hence $T_i$ is not injective and therefore not invertible.

IV. GRAPH TRANSLATION AS A SEMIGROUP

The discrepancies between classical and graph translation continue. For arbitrary graphs, the collection of translation operators does not form a group, i.e., in general $T_i T_j \neq T_{i+j}$, unlike the case in the Euclidean setting. There are some exceptions to this in the very special case of shift-invariant graphs such as the cycle graph (the eigenvector matrix for the cycle graph can be chosen to equal the Discrete Fourier Transform matrix). In fact, we cannot even assert that the translation operators form a semigroup, i.e. $T_i T_j \neq T_k = T_{i \bullet j}$ for some semigroup operator $\bullet : \{1, \ldots, N\} \times \{1, \ldots, N\} \to \{1, \ldots, N\}$.
Theorem 10. Consider the graph $G(V, E)$ with real eigenvector matrix $\Phi = [\varphi_0 | \cdots | \varphi_{N-1}]$. Graph translation on $G$ is a semigroup, i.e. $T_i T_j = T_{i \bullet j}$ for some semigroup operator $\bullet : \{1, \ldots, N\} \times \{1, \ldots, N\} \to \{1, \ldots, N\}$, only if $\Phi = (1/\sqrt{N}) H$, where $H$ is a Hadamard matrix.
Proof. i. We first show that graph translation on $G$ is a semigroup, i.e. $T_i T_j = T_{i \bullet j}$ for some semigroup operator $\bullet : \{1, \ldots, N\} \times \{1, \ldots, N\} \to \{1, \ldots, N\}$, if and only if $\sqrt{N}\, \varphi_l(i)\varphi_l(j) = \varphi_l(i \bullet j)$ for all $l = 0, \ldots, N-1$. By repeating the calculations in the proof of Proposition 5, we have
$$T_i T_j f(n) = N \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \varphi_l^*(j)\, \varphi_l^*(i)\, \varphi_l(n)$$
and by definition
$$T_k f(n) = \sqrt{N} \sum_{l=0}^{N-1} \hat{f}(\lambda_l)\, \varphi_l^*(k)\, \varphi_l(n).$$
Therefore $T_i T_j f = T_{i \bullet j} f$ for any function $f : V \to \mathbb{R}$ if and only if $\sqrt{N}\, \varphi_l(i)\varphi_l(j) = \varphi_l(i \bullet j)$ for all $l \in \{0, \ldots, N-1\}$.
ii. We show next that $\sqrt{N}\, \varphi_l(i)\varphi_l(j) = \varphi_l(i \bullet j)$ for all $l = 0, \ldots, N-1$ only if the eigenvectors have constant amplitude. Assume $\sqrt{N}\, \varphi_l(i)\varphi_l(j) = \varphi_l(i \bullet j)$, which, in particular, implies $\sqrt{N}\, \varphi_l(i)\varphi_l(i) = \sqrt{N}\, \varphi_l(i)^2 = \varphi_l(i \bullet i)$.
Suppose that $|\varphi_l(a_1)| < 1/\sqrt{N}$ for some $a_1 \in \{1, \ldots, N\}$ and for some $l \in \{0, \ldots, N-1\}$. Then $\sqrt{N}\, \varphi_l(a_1)^2 < \varphi_l(a_1)$ and so $a_1 \bullet a_1 = a_2 \neq a_1$. Then by the same argument, $\varphi_l(a_2) = \sqrt{N}\, \varphi_l(a_1)^2 < 1/\sqrt{N}$ and hence $a_2 \bullet a_2 = a_3 \neq a_2$. This procedure can be repeated, producing an infinite number of unique indices $\{a_i\}$ on a graph $G$ with only $N < \infty$ nodes, a contradiction. A similar argument gives a contradiction if $|\varphi_l(i)| > 1/\sqrt{N}$ for any $l, i$. Therefore, the graph translation operators form a semigroup only if $|\varphi_l(i)| = 1/\sqrt{N}$ for all $l = 0, 1, \ldots, N-1$ and $i = 1, \ldots, N$. Since $\Phi$ is an orthogonal matrix, i.e. $\Phi\Phi^* = \Phi^*\Phi = I$, it follows that $\Phi = (1/\sqrt{N}) H$, where $H$ is a Hadamard matrix.
Remark 11. a) The order of a Hadamard matrix must be 1, 2, or a multiple of 4. It remains an open problem to show that there exists a Hadamard matrix of order equal to every multiple of 4 [12].
b) It is shown in [13, Theorem 5] that if $\Phi = (1/\sqrt{N}) H$ for Hadamard $H$, then the spectrum of the Laplacian, $\sigma(L)$, must consist entirely of even integers.
c) The converse to Theorem 10 does not necessarily hold. That is, if the eigenvector matrix $\Phi$ is a renormalized Hadamard matrix, then the translation operators on $G$ need not form a semigroup. For example, consider the real Hadamard matrix $H$ of order 12 given by
$$H = \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & -1\\
1 & -1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1\\
1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 & -1 & -1\\
1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 & 1 & -1 & -1\\
1 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 & 1 & -1\\
1 & -1 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & 1 & 1\\
1 & 1 & -1 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & 1\\
1 & 1 & 1 & -1 & -1 & -1 & 1 & -1 & -1 & 1 & -1 & 1\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & -1 & -1 & 1 & -1\\
1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & -1 & -1 & 1\\
1 & 1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & -1 & -1
\end{pmatrix} \tag{IV.1}$$
Then the second and third columns multiplied componentwise equals the vector
$$[1, -1, 1, -1, -1, 1, 1, -1, 1, 1, -1, -1]^\top,$$
which does not equal any of the columns of $H$.
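The failure described in part (c) can be probed numerically. The sketch below (ours) uses MATLAB's built-in hadamard(12), which may differ from (IV.1) by row and column permutations and sign changes, and tests for every pair (i, j) whether $\sqrt{N}\,\varphi_l(i)\varphi_l(j) = \varphi_l(k)$ for some $k$, the condition derived in the proof of Theorem 10:

N = 12;
Phi = hadamard(N) / sqrt(N);        % renormalized Hadamard matrix as candidate eigenvectors
isSemigroup = true;
for i = 1:N
    for j = 1:N
        target = sqrt(N) * (Phi(i,:) .* Phi(j,:));        % would-be row (phi_l(i•j))_l
        rowMatch = any(all(abs(Phi - repmat(target, N, 1)) < 1e-12, 2));
        if ~rowMatch, isSemigroup = false; end
    end
end
isSemigroup    % false here, so these translations do not form a semigroup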
What kinds of graphs have a Hadamard eigenvector matrix?
The authors of [13] prove that the complete graph on n
vertices, Kn , is one such graph.
Theorem 12 ([13]). Suppose $H$ is a real Hadamard matrix of order $N$. Then $(1/\sqrt{N}) H$ is an eigenvector matrix for the complete graph on $N$ vertices, $K_N$.
Proof. It is a standard result (see [9]) that the complete graph has Laplacian matrix $L = NI - J$, where $J$ is the matrix with all entries 1, with eigenvalues $\lambda_0 = 0$ and $\lambda_i = N$ for $i = 1, \ldots, N-1$. Let $D$ be the diagonal matrix $D(i,i) = \lambda_{i-1}$. We can write the $N \times N$ matrix $H$ as
$$H = \big[\, \mathbf{1} \mid \widetilde{H} \,\big],$$
where $\mathbf{1}$ is the $N \times 1$ vector of all ones and $\widetilde{H}$ is the remaining $N \times (N-1)$ matrix.
If an $N \times N$ matrix $A = [\, a \mid B \,]$, where $a$ is an $N \times 1$ vector, then it is simple to verify that
$$AA^\top = aa^\top + BB^\top.$$
This identity gives
$$NI = HH^\top = \mathbf{1}\mathbf{1}^\top + \widetilde{H}\widetilde{H}^\top,$$
where the first equality holds from the property of $H$ being Hadamard. Additionally, one can compute $HDH^\top = N \widetilde{H}\widetilde{H}^\top$. Thus we have
$$\left( \tfrac{1}{\sqrt{N}} H \right) D \left( \tfrac{1}{\sqrt{N}} H \right)^{-1} = \tfrac{1}{N} HDH^\top = NI - J = L,$$
which completes the proof.
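A short numerical confirmation of Theorem 12 for the complete graph $K_8$ (a sketch; hadamard(8) is MATLAB's Sylvester-type Hadamard matrix, whose first column is all ones):

N = 8;
L = N*eye(N) - ones(N);            % Laplacian of the complete graph K_N
Phi = hadamard(N) / sqrt(N);       % candidate eigenvector matrix (1/sqrt(N)) H
D = diag([0, N*ones(1, N-1)]);     % eigenvalues: 0 once and N with multiplicity N-1
norm(L*Phi - Phi*D)                % ~0: the columns of Phi are eigenvectors of L
norm(Phi'*Phi - eye(N))            % ~0: the columns are orthonormal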
Characterizing non-complete Hadamard graphs remains an open problem and a potential area for future work.
ACKNOWLEDGMENT
The author would like to thank Professor Kasso A. Okoudjou for his guidance and mentorship, as well as for introducing
him to [1]. The author also thanks the Norbert Wiener Center
for Harmonic Analysis and Applications for its support.
REFERENCES
[1] D. I. Shuman, B. Ricaud, and P. Vandergheynst, "Vertex-frequency analysis on graphs," preprint, 2013.
[2] M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
[3] R. R. Coifman and S. Lafon, "Diffusion maps," Applied and Computational Harmonic Analysis, vol. 21, no. 1, pp. 5–30, 2006.
[4] W. Czaja and M. Ehler, "Schroedinger eigenmaps for the analysis of bio-medical data," arXiv preprint arXiv:1102.4086, 2011.
[5] J. Kigami, Analysis on Fractals. Cambridge University Press, 2001, vol. 143.
[6] M. Begué, T. Kalloniatis, and R. S. Strichartz, "Harmonic functions and the spectrum of the Laplacian on the Sierpinski carpet," Fractals, vol. 21, no. 01, p. 1350002, 2013.
[7] J. Needleman, R. S. Strichartz, A. Teplyaev, and P.-L. Yung, "Calculus on the Sierpinski gasket I: polynomials, exponentials and power series," Journal of Functional Analysis, vol. 215, no. 2, pp. 290–340, 2004.
[8] R. S. Strichartz, Differential Equations on Fractals: A Tutorial. Princeton University Press, 2006.
[9] F. R. Chung, Spectral Graph Theory. American Mathematical Society, 1997, vol. 92.
[10] T. Davis, "UF Sparse Matrix Collection," http://www.cise.ufl.edu/research/sparse/matrices/Gleich/.
[11] D. K. Hammond, P. Vandergheynst, and R. Gribonval, "Wavelets on graphs via spectral graph theory," Applied and Computational Harmonic Analysis, vol. 30, no. 2, pp. 129–150, 2011.
[12] A. Hedayat, W. Wallis et al., "Hadamard matrices and their applications," The Annals of Statistics, vol. 6, no. 6, pp. 1184–1238, 1978.
[13] S. Barik, S. Fallat, and S. Kirkland, "On Hadamard diagonalizable graphs," Linear Algebra and its Applications, vol. 435, no. 8, pp. 1885–1902, 2011.