SYNCHRONIZATION OF OSCILLATORS
IN COMPLEX NETWORKS
Louis M. Pecora1 and Mauricio Barahona2
1 Code 6343, Naval Research Laboratory, Washington, DC 20375, USA
2 Department of Bioengineering, Mech. Eng. Bldg., Imperial College of STM, Exhibition Road, London SW7 2BX, UK
ABSTRACT
We introduce the theory of identical, or complete, synchronization of identical
oscillators in arbitrary networks. In addition, we introduce several graph
theory concepts and results that augment the synchronization theory and tie it
closely to random, semirandom, and regular networks. We then use the
combined theories to explore and compare three types of semirandom
networks for their efficacy in synchronizing oscillators. We show that the
simplest k-cycle augmented by a few random edges or links appears to be the
most efficient network that will guarantee good synchronization.
I. INTRODUCTION
In the past several years, interest in networks and their statistics has grown
greatly in applied mathematics, physics, biology, and sociology. Although
networks have been structures of interest in these areas for some time, recent
developments in the construction of what might be called structured or
semirandom networks have provoked increased interest both in studying networks
and their various statistics and in using them as more realistic models for physical or
biological systems. At the same time, developments have progressed to the point
that the networks can be treated not just as abstract entities with the vertices or
nodes as formless place-holders, but as oscillators or dynamical systems coupled
in the geometry of the network. Recent results for such situations have been
developed and the study of dynamics on complex networks has begun.
In 1998 Watts and Strogatz [1] showed that simple cyclical networks called
k-cycles (nodes connected to each other in circles; see Fig. 1 for an example) make
the transition from networks where average distances between nodes are large to
short average distance networks with the addition of surprisingly few edges
randomly rearranged and reattached at random to other nodes in the network. At
the same time the network remained highly clustered in the sense that nodes were
connected in clumps. If we think of connected nodes as friends in a social
network, highly clustered would mean that friends of a particular node would,
with high probability, be friends of each other. Thus with only a few percent or
less of rearranged edges the network shrank in size, as determined by average
distance, but stayed localized in the clustering sense. These networks are referred
to as smallworld networks. See Fig. 2 for a plot of fractional change in average
distance and clustering vs. probability of edge rearrangement.
Fig. 1 Example of a cycle and a semiregular cycle (smallworld à la Watts & Strogatz)
Fig. 2 Plot of L(p)/L(0) (normalized average distance between nodes) and C(p)/C(0)
(normalized clustering) vs. p. Shown at the bottom are typical graphs that obtain at the
various p values, including the complete graph. Note that in the smallworld region we are very far
from a complete graph.
Such smallworld networks are mostly regular with some randomness and can
be referred to as semirandom. The number of edges connecting to each node, the
degree of the node, is fairly uniform in the smallworld system. That is, the
distribution of degrees is narrow, clustered around a well-defined mean. The
Watts and Strogatz paper stimulated a large number of studies [2] and was seminal
in opening interest in networks to new areas of science with a new perspective
on modeling actual networks realistically.
A little later, in a series of papers, Barabási, Albert, and Jeong showed how to
develop scale-free networks which closely matched real-world networks like
co-authorship, protein-protein interactions, and the world-wide web in structure. Such
networks were grown a node at a time by adding a new node with a few edges
connected to existing nodes. The important part of the construction was that the
new node was connected to existing nodes with preference for connections to
those nodes already well-connected, i.e. with high degree. This type of
construction led to a network with a few highly connected hubs and many more
nodes of lower degree (see Fig. 3). In this network the degree of a node has no
well-defined average. The distribution of node degrees is a power law and there is no
typical degree size, in the sense that the degrees are not clustered around some
mean value as in the smallworld case. The network is referred to as scale-free.
The natural law of growth of the scale-free network, the rich get richer in a sense,
seems to fit well into many situations in many fields. As a result interest in this
network has grown quickly along with the cycle smallworld.
Fig. 3 Example of SFN with m=1. Note the hub structure.
In the past decade, in the field of nonlinear dynamics, emphasis on coupled
systems, especially coupled oscillators, has grown greatly. One of the natural
situations to study in arbitrarily connected identical oscillators is that of complete
synchronization, which in a discrete system is the analog of the uniform state in
continuous systems like fluids, where the uniform state would be laminar flow, or
chemical reactions like the BZ reaction, where there is no spatial variation
although temporal evolution can be very complex and/or chaotic. That is, in a
completely synchronized system all the oscillators would be doing the same thing
at the same time. The stability of the uniform state is of great interest, for it
portends the emergence of patterns when its stability is lost and it amounts to a
coherent state when the stability can be maintained. The uniform or completely
synchronized state is then a natural first choice to study in coupled systems.
In the last several years a general theory has been developed for the study of
the stability of the synchronized state of identical oscillators in arbitrary coupling
topologies [3,4]. A natural first step in the study of dynamics on complex or
semirandom networks is the study of synchronization in cycle smallworlds and
scale-free networks of oscillators. In the next section we develop the formalism of
synchronization stability in arbitrary topologies and present some ideas from
networks and graph theory that will allow us to make some broad and generic
conclusions.
II. FORMAL DEVELOPMENT
A. Synchronization in Arbitrary, Coupled Systems
Here we present a formal development of the theory of stability of the
synchronous state in an arbitrary network of oscillators. It is the generality of this
theory that allows us to make broad statements about synchronous behavior
in classes of semirandom networks.
We start with a theory based on linear coupling between the oscillators and
show how this relates to an important quantity in the structure of a network. We
then show how we can easily generalize the theory to a broader class of
nonlinearly coupled networks of oscillators and iterated maps.
Let's start by assuming all oscillators are identical (that is, after all, how we
can get identical or complete synchronization). This means the uncoupled
oscillators have the following equation of motion,

dx^i/dt = F(x^i),    (1)

where the superscript refers to the oscillator number (i=1,...,N) and subscripts on
dynamical variables will refer to components of each oscillator, viz., x^i_j,
j=1,...,m. We can linearly couple the N oscillators in some network by specifying a
connection matrix, G, that determines which oscillators are coupled to
which other ones. We restrict our study to symmetric connections since
our networks will have non-directional edges, hence, G is symmetric.
Generalizations to non-symmetric couplings can be made (see Refs [4,5]).
We also assume all oscillators have an output function, H, that is a vector
function of dimension m of the dynamical variables of each oscillator. Each
oscillator has the same output function, and its output is fed to the other oscillators to
which it is coupled. For example, H might be an m×m matrix that picks out
only one component to couple to the other oscillators.
The coupled equations of motion become [6],

dx^i/dt = F(x^i) − σ Σ_{j=1}^N G_ij H(x^j),    (2)
where σ is the overall coupling strength. Note that G acts on each oscillator as
a whole and only determines which are connected and which are not; H
determines which components are used in the connections. Since we want to
examine the case of identical synchronization, we must have the equations of
motion for all oscillators be the same when the system is synchronized. We can
assure this by requiring that the sum Σ_{j=1}^N G_ij H(x^j) be a constant when all
oscillators are synchronous. The simplest constant is zero, which can be assured by
restricting the connection matrix G to have zero row sums. This works since all
H(x^j) are the same at all times in the synchronous state. It means that when
the oscillators are synchronized they execute the same motion as they do when
uncoupled (Eq. (1)), except all variables are equal at all times. Generalization to
non-zero constants can be done, but it unnecessarily complicates the analysis. A
typical connection matrix is shown in the next equation,
 2 −1 0 ... 0 −1
 −1 2 −1 0 ... 0 


 0 −1 2 −1 ... 0 
G=
,
M


 0 ... 0 −1 2 −1


 −1 0 ... 0 −1 2 
for nearest neighbor, diffusive coupling on a ring or cycle.
(3)
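As a concrete illustration of Eqs. (2) and (3), here is a minimal numerical sketch, not taken from the text: N identical Rössler oscillators on a ring, coupled diffusively through their x-components (the Rössler system, its parameters, the coupling strength σ=0.7, and the RK4 integration are all our own choices for the example).

```python
import numpy as np

# Minimal sketch of Eq. (2) with the ring Laplacian of Eq. (3).  The oscillator
# (a Rossler system), its parameters, sigma, and the step size are illustrative
# assumptions, not values taken from the text.

def rossler(x, a=0.2, b=0.2, c=5.7):
    """Vector field F for one oscillator, x = (x, y, z)."""
    return np.array([-x[1] - x[2], x[0] + a * x[1], b + x[2] * (x[0] - c)])

def ring_laplacian(N):
    """G of Eq. (3): nearest-neighbor diffusive coupling on a cycle."""
    G = 2.0 * np.eye(N)
    for i in range(N):
        G[i, (i - 1) % N] -= 1.0
        G[i, (i + 1) % N] -= 1.0
    return G

def deriv(X, G, sigma):
    """dX/dt = F(x^i) - sigma * sum_j G_ij H(x^j); H picks the x-component."""
    F = np.array([rossler(x) for x in X])
    H = np.zeros_like(X)
    H[:, 0] = X[:, 0]
    return F - sigma * G @ H

def rk4_step(X, G, sigma, dt):
    k1 = deriv(X, G, sigma)
    k2 = deriv(X + 0.5 * dt * k1, G, sigma)
    k3 = deriv(X + 0.5 * dt * k2, G, sigma)
    k4 = deriv(X + dt * k3, G, sigma)
    return X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

N, sigma, dt = 5, 0.7, 0.01
G = ring_laplacian(N)
rng = np.random.default_rng(1)
X = np.array([1.0, 1.0, 1.0]) + 0.1 * rng.standard_normal((N, 3))

err0 = np.max(np.abs(X - X.mean(axis=0)))   # initial desynchronization
for _ in range(20000):                      # 200 time units
    X = rk4_step(X, G, sigma, dt)
err1 = np.max(np.abs(X - X.mean(axis=0)))
print(err0, err1)   # err1 should be far smaller than err0 when sync is stable
```

With these choices all products σγ_l fall inside the stable interval of roughly (0.2, 4.6) reported in the master stability literature for x-coupled Rössler oscillators, so the desynchronization error should decay.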
Our central question is, for what types of oscillators (F), output functions (H),
connection topologies (G), and coupling strengths (σ) is the synchronous state
stable? Or more generally, for what classes of oscillators and networks can we get
the oscillators to synchronize? The stability theory that emerges will allow us to
answer these questions.
In the synchronous state all oscillators' variables are equal to the same
dynamical variable: x^1(t) = x^2(t) = ... = x^N(t) = s(t), where s(t) is a solution of
Eq. (1). The subspace defined by the constraint of setting all oscillator vectors to
the same, synchronous, vector is called the synchronization manifold. We test
whether this state is stable by considering small, arbitrary perturbations ξ^j to each
x^j and seeing whether all the perturbations ξ^j die out or grow. This is accomplished
by generating an equation of motion for each ξ^j and determining a set of
Lyapunov exponents which tell us the stability of the state. The use of Lyapunov
exponents is the weakest condition for the stability of the synchronous state.
Although other stability criteria can be used [5], we will use the Lyapunov
exponents here.
To generate an equation of motion for the set of ξ^j we start with the full
equations of motion for the network (Eq. (2)), insert the perturbed values of the
dynamical variables x^j(t) = s(t) + ξ^j, and expand all functions (F and H) in
Taylor series to 1st order (we are only interested in small ξ^j values). This gives,

dξ^i/dt = Σ_{j=1}^N [DF(s) δ_ij − σ G_ij DH(s)] · ξ^j,    (4)
where DF and DH are the Jacobians of the vector field and the output function.
Eq. (4) is referred to as a variational equation and is often the starting point
for stability determinations. This equation is rather complicated since, given an
arbitrary coupling G, it can be quite high dimensional. However, we can simplify
the problem by noting that the equations are organized in block form. The blocks
correspond to the (ij) indices of G and we can operate on them separately from
the components within each block. We use this structure to diagonalize G. The
first term with the Kronecker delta remains the same. This results in variational
equations in eigenmode form:
dζ^l/dt = [DF(s) − σγ_l DH(s)] · ζ^l,    (5)
where γ_l is the lth eigenvalue of G. We can now find the Lyapunov exponents of
each eigenmode, which corresponds to a "spatial" pattern of desynchronization
amplitudes and phases of the oscillators. It would seem that if all the eigenmodes
are stable (all Lyapunov exponents are negative), the synchronous state is stable,
but as we will see this is not quite right; we can also simplify our analysis so as
not to calculate the exponents of each eigenblock separately. We note that
because of the zero row sum constraint γ=0 is always an eigenvalue, with
eigenmode the major diagonal vector (1,1,...,1). We denote this as the first
eigenvalue γ_1 since by design it is the smallest. The first eigenvalue is associated
with the synchronous state and the Lyapunov exponents associated with it are
those of the isolated oscillator. Its eigenmode represents perturbations that are the
same for all oscillators and hence do not desynchronize the oscillators. The first
eigenvalue is therefore not considered in the stability analysis of the whole
system.
Next, notice that Eq. (5) has the same form for all eigenmodes. Hence, if we
solve a more generic variational equation for a range of couplings, then we can
simply examine the exponents for each eigenvalue for stability. This is clearer if
we show the equations. Consider the generic variational equation,
dζ/dt = [DF(s) − α DH(s)] · ζ,    (6)
where α is a real number (G is symmetric and so has real eigenvalues). If we
know the maximum Lyapunov exponent λmax(α) for α over a range that includes
all the values σγ_l, then we automatically know the stability of all the modes
by looking at the exponent value at each α=σγ_l. We refer to the function
λmax(α) as the master stability function.
For example, Fig. 4 shows a typical stability curve plotting the
maximum Lyapunov exponent vs. α. This particular curve would obtain for a
particular choice of vector field (F) and output function (H). If the spectrum {γ_l}
all falls under the negative part of the stability curve (the deep well part), then all
the modes are stable. In fact we need only look to see whether the largest, γ_max, and
smallest, γ_2, non-zero eigenvalues fall in this range. If there exists a continuous,
negative λmax regime in the stability diagram, say between α_1 and α_2, then it is
sufficient to have the following inequality to know that we can always tune σ to
place the entire spectrum of G in the negative area:
γ_max / γ_2 < α_2 / α_1 .    (7)
We note two important facts: (1) We have reduced the stability problem for
an oscillator with a particular stability curve (say, Fig. 4) to a simple calculation
of the ratio of the extreme, non-zero eigenvalues of G; (2) Once we have the
stability diagram for an oscillator and output function we do not have to recalculate another stability curve if we reconfigure the network, i.e. construct a
new G. We need only recalculate the largest and smallest, non-zero eigenvalues
and consider their ratio again to check stability.
Fig. 4 Stability curve for a generic oscillator. The curve may start at (1) λmax=0 (regular
behavior), (2) λmax>0 (chaotic behavior) or (3) λmax<0 (stable fixed point) and asymptotically
(σ→∞) go to (a) λmax=0, (b) λmax>0, (c) λmax<0. Of course the behavior of λmax at intermediate
σ values is somewhat arbitrary, but typical stability curves for simple oscillators have a single
minimum. Shown are the combinations (1)-(b) and (2)-(c) for some generic simple oscillators.
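The criterion of Eq. (7) reduces the whole stability question to two eigenvalues of G. A short sketch of the resulting check follows; the stability window (α1, α2) = (0.2, 4.6) used below is a hypothetical example value, not one computed from a particular oscillator.

```python
import numpy as np

# Check of Eq. (7): synchronizability depends only on gamma_max/gamma_2
# versus alpha_2/alpha_1.  The window (0.2, 4.6) is a hypothetical example.

def extreme_eigs(G):
    ev = np.sort(np.linalg.eigvalsh(G))
    return ev[1], ev[-1]        # gamma_2 and gamma_max (gamma_1 = 0 dropped)

def can_synchronize(G, alpha1, alpha2):
    """True if some sigma places the whole spectrum sigma*gamma_l in (alpha1, alpha2)."""
    g2, gmax = extreme_eigs(G)
    return gmax / g2 < alpha2 / alpha1

def ring_laplacian(N):
    """1-cycle Laplacian as in Eq. (3)."""
    G = 2.0 * np.eye(N)
    for i in range(N):
        G[i, (i - 1) % N] -= 1.0
        G[i, (i + 1) % N] -= 1.0
    return G

for N in (10, 50):
    g2, gmax = extreme_eigs(ring_laplacian(N))
    print(N, gmax / g2, can_synchronize(ring_laplacian(N), 0.2, 4.6))
# The ring passes the test at N=10 but its eigenratio outgrows the window by N=50.
```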
Finally, we remark that stability curves like Fig. 4 are quite common for
many oscillators in the literature, especially those from the class arising from an
unstable focus. Appendix A gives a heuristic reason for this common shape. For
the rest of this article, we assume that we are dealing with the class of oscillators
which have stability curves like Fig. 4. They may or may not be chaotic. If they
are chaotic, then λmax > 0 at α=0; otherwise λmax = 0 at α=0. Their values for large α
may go positive or not in either the chaotic or the periodic case. We will assume the
most restrictive case, that there is a finite interval where λmax < 0, as in Fig. 4. This,
being the most conservative assumption, will cover the largest class of oscillators,
including those which have multiple, disjoint α regions of stability as can happen
in Turing pattern-generating instabilities [7]. Several other studies of the stability
of the synchronous state have chosen weaker assumptions, including the
assumption that the stability curve λmax(α) becomes negative at some threshold
(say, α_1) and remains negative for all α > α_1. Conclusions of stability in these
cases only require the study of the first non-zero eigenvalue γ_2, but they cover a smaller
class of oscillators and are not as general as the broader assumption of Eq. (7).
B. Beyond Linear Coupling
We can easily generalize the above situation to one that includes the case of
nonlinear coupling. If we write the dynamics for each oscillator as depending,
somewhat arbitrarily, on its input from some other oscillators, then we will have
the equation of motion,

dx^i/dt = F^i(x^i, H{x^j}),    (8)

where here F^i is different for each i because it now contains arbitrary couplings. F^i
takes N+1 arguments with x^i in the first slot and H{x^j} in the remaining N slots.
H{x^j} is short for putting in N arguments which are the result of the output
function H applied in sequence to all N oscillator vectors x^j, viz., H{x^j} = (H(x^1),
H(x^2), ..., H(x^N)). We require the constraint,

F^i(s, H{s}) = F^j(s, H{s}),

for all i and j so that identical synchronization is possible.
The variational equation of Eq. (8) will be given by,
dξ^i/dt = Σ_{j=1}^N [D_0 F^i(s, H{s}) δ_ij + D_j F^i(s, H{s}) · DH(s)] · ξ^j,    (9)
where D_0 is the partial derivative with respect to the first argument and D_j,
j=1,2,...,N, is the derivative with respect to the remaining N argument slots. Eq. (9)
is almost in the same form as Eq. (4). We can regain the Eq. (4) form by restricting
our analysis to systems for which the partial derivatives of the second term act
simply like weighting factors on the outputs from each oscillator. That is,

D_j F^i(s, H{s}) = −σ G_ij 1_m,    (10)

where G_ij is a constant and 1_m is the m×m unit matrix. Now we have recovered Eq.
(4) exactly and all the analysis that led up to the synchronization criterion of Eq.
(7) applies. Note that Eq. (10) need only hold on the synchronization manifold.
Hence, we can use many forms of nonlinear coupling through the F^i and/or H
functions and still use stability diagrams and Eq. (7).
C. Networks and Graph Theory
A network is synonymous with a graph; when the nodes
and/or edges take on more meaning, like oscillators and couplings, respectively,
then we should properly call the structure a network of oscillators, etc. However,
we will just say network here without confusion. Now, what is a graph? A graph
U is a collection of nodes or vertices (generally, structureless entities, but
oscillators herein) and a set of connections or edges or links between some of
them. See Fig. 1. The collection of vertices (nodes) is usually denoted as V(U)
and the collection of edges (links) as E(U). We let N denote the number of
vertices, the cardinality of V(U), written |V(U)|. The number of edges can vary
between 0 (no vertices are connected) and N(N–1)/2, where every vertex is
connected to every other one.
The association of the synchronization problem with graph theory comes
through a matrix that appears in the variational equations and in the analysis of
graphs. This is the connection matrix or G. In graph theory it is called the
Laplacian since in many cases like Eq. (3) it is the discrete version of the second
derivative ∆=∇2. The Laplacian can be shown to be related to some other matrices
from graph theory.
We start with the matrix most studied in graph theory, the adjacency matrix
A. It is the symmetric matrix with A_ij=1 if vertices i and j are
connected by an edge and A_ij=0 otherwise. For example, the graph in Fig. 5
has the following adjacency matrix,
0
1

A = 1
0

0
1 1 0 0
0 1 0 1

1 0 1 0 ,
0 1 0 0

1 0 0 0
(11)
Much effort in the mathematics of graph theory has been expended on
studying the adjacency matrix. We will not cover much here, but point out a few
things and then use A as a building block for the Laplacian, G.
Fig. 5 Simple graph generating the adjacency matrix in Eq. (11)
Synchronization of Oscillators in Complex Networks
73
The components of the powers of A describe the numbers of walks
between any two nodes. Thus, the non-zero components of A^2 show which nodes
are connected by following exactly two links (including traversing back over the
same link). In general, the mth power of A is the matrix whose non-zero
components show which nodes are connected by m steps. Note, if some off-diagonal
component is zero in all the powers A, A^2, ..., A^(N−1), then the graph must be disconnected.
That is, it can be split into two subgraphs whose nodes have no edges
connecting them to the other subgraph. Given these minor observations and the
fact that a matrix must satisfy its own characteristic equation, one can appreciate
that much work in graph theory has gone into the study of the eigenspectrum of
the adjacency matrix. To pursue this further, we recommend the book Ref. [8].
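These walk-counting observations are easy to verify numerically. Here is a small sketch using the Fig. 5 adjacency matrix; for the connectivity test we use (I+A)^(N−1), which accumulates walks of every length up to N−1 (a single power A^(N−1) can have zeros for parity reasons even in a connected graph).

```python
import numpy as np

# Adjacency matrix of the Fig. 5 graph, Eq. (11).  Powers of A count walks:
# (A^m)_ij > 0 iff nodes i and j are joined by some walk of length m.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 1, 0, 0, 0]])

N = A.shape[0]
A2 = A @ A
print(A2)        # nonzero entries: node pairs joined by a 2-step walk;
                 # diagonal entries are the node degrees

# Connectivity test: (I + A)^(N-1) has all entries positive iff the graph is
# connected, since it sums contributions of walks of length 0 through N-1.
reach = np.linalg.matrix_power(np.eye(N, dtype=int) + A, N - 1)
print(np.all(reach > 0))   # True: the Fig. 5 graph is connected
```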
The degree of a node is the number of edges connecting it to other
nodes. Thus, in Fig. 5, node 2 has degree 3. We see that the degree of a node
is just the row sum of the row of A associated with that node. We form the degree
matrix or valency matrix D, which is a diagonal matrix whose diagonal entries are
the row sums of the corresponding rows of A, viz., D_ii = Σ_k A_ik. We now form the
new matrix, the Laplacian G = D − A. For example, for the diffusive coupling of Eq.
(3), A would be a matrix like G, but with 0 replacing 2 on the diagonal, and D
would be the diagonal matrix with 2's on the diagonal. The eigenvalues and
eigenvectors of G are also studied in graph theory [8,9], although not as much as those
of the adjacency matrix. We have seen how G's eigenvalues affect the stability of the
synchronous state, so some results of graph theory on the eigenspectrum of G may
be of interest, and we present several below.
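A minimal sketch of the G = D − A construction for the Fig. 5 graph, checking the properties used above:

```python
import numpy as np

# Build the Laplacian G = D - A for the Fig. 5 graph: D is the diagonal
# degree (valency) matrix with D_ii = sum_k A_ik.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 1, 0, 0, 0]])
D = np.diag(A.sum(axis=1))
G = D - A

print(np.diag(G))          # node degrees: [2 3 3 1 1]
print(G.sum(axis=1))       # zero row sums, as synchronization requires
ev = np.sort(np.linalg.eigvalsh(G))
print(ev[0])               # smallest eigenvalue is 0 (eigenvector (1,...,1))
```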
The Laplacian is a positive, semi-definite matrix. We assume its N eigenvalues
are ordered as γ_1 ≤ γ_2 ≤ ... ≤ γ_N. The Laplacian always has 0 for the smallest
eigenvalue, associated with the eigenvector v_1=(1,1,...,1), the major diagonal. The
latter is easy to see since G must have zero row sums by construction. The next
largest eigenvalue, γ_2, is called the algebraic connectivity. To see why, consider a
graph which can be cleanly divided into two disconnected subgraphs. This means
that by rearranging the numbering of the nodes G can be divided into two blocks
with no non-zero "off-diagonal" elements, since the subgraphs are not connected,

G = [ G_1   0  ]
    [  0   G_2 ] ,    (12)
where we assume G_1 is n_1×n_1 dimensional and G_2 is n_2×n_2 dimensional. In this
latter case we now have two zero eigenvalues, each associated with unique,
mutually orthogonal eigenvectors, namely, v_1=(1,1,...,1,0,0,...,0) with n_1 1's and
v_2=(0,0,...,0,1,1,...,1) with n_2 1's. The multiplicity of the zero eigenvalue equals
the number of connected components; if γ_2>0, the graph is connected. We immediately see
that this has consequences for synchronization stability, since a disconnected set of
oscillators cannot synchronize, and mathematically this shows up as an extra
zero eigenvalue.
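This block structure is easy to demonstrate numerically. The sketch below builds a graph with two components (a 4-node ring and a 3-node complete graph, our own example) and counts components from the multiplicity of the zero eigenvalue:

```python
import numpy as np

# Illustration of Eq. (12): the Laplacian of a disconnected graph is block
# diagonal, and the multiplicity of the zero eigenvalue counts the components.

def ring_laplacian(n):
    G = 2.0 * np.eye(n)
    for i in range(n):
        G[i, (i - 1) % n] -= 1.0
        G[i, (i + 1) % n] -= 1.0
    return G

def complete_laplacian(n):
    # complete graph on n nodes: degree n-1 on the diagonal, -1 elsewhere
    return n * np.eye(n) - np.ones((n, n))

G = np.block([[ring_laplacian(4), np.zeros((4, 3))],
              [np.zeros((3, 4)), complete_laplacian(3)]])

ev = np.sort(np.linalg.eigvalsh(G))
n_components = int(np.sum(np.abs(ev) < 1e-9))
print(n_components)   # 2: gamma_2 = 0, so the graph is not connected
```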
Much work has gone into obtaining bounds on eigenvalues for graphs [8,9].
We will present some of these bounds (actually deriving a few ourselves) and
show how they can lead to some insight into synchronization conditions in
general. In the following, the major diagonal vector (1,1,...,1), which is the
eigenvector of γ_1, is denoted by θ.
We start with a few well-known min-max expressions for the eigenvalues of a
real symmetric matrix, which is G in our case, namely,
γ_1 = min { ⟨Gv, v⟩ / ⟨v, v⟩ : v ≠ 0, v ∈ R^N },    (13)

γ_2 = min { ⟨Gv, v⟩ / ⟨v, v⟩ : v ≠ 0, v ∈ R^N, v ⊥ θ },    (14)

and,

γ_max = max { ⟨Gv, v⟩ / ⟨v, v⟩ : v ≠ 0, v ∈ R^N }.    (15)
Now with the proof of two simple formulas we can start deriving some
inequalities.
First we show that Σ_{i,j=1}^N A_ij (v_i − v_j)^2 = 2⟨Gv, v⟩, where A_ij are the
components of the adjacency matrix. By symmetry of A, we have

Σ_{i,j=1}^N A_ij (v_i − v_j)^2 = 2 Σ_{i,j=1}^N (A_ij v_j^2 − A_ij v_j v_i).

The first term in the sum contains the row sum of A, which is the degree D_jj,
making the expression a matrix product with the Laplacian, so that we have

2 Σ_{i,j=1}^N v_i (D_ii δ_ij − A_ij) v_j = 2⟨Gv, v⟩,

thus proving the first formula.
Second, we show that Σ_{i,j=1}^N (v_i − v_j)^2 = 2N⟨v, v⟩ when v ⊥ θ. We can
easily show that

Σ_{i,j=1}^N (v_i − v_j)^2 = 2 Σ_{i,j=1}^N (v_i^2 − v_i v_j) = 2( N⟨v, v⟩ − (v·θ)^2 ) = 2N⟨v, v⟩,

since v ⊥ θ. Noting that the v_i − v_j terms are not affected if we add a constant c
to each component of v, that is, (v_i − v_j) = ((v_i + c) − (v_j + c)), we can now
write Eq. (14) and Eq. (15) as

γ_2 = N min { Σ_{i,j=1}^N A_ij (v_i − v_j)^2 / Σ_{i,j=1}^N (v_i − v_j)^2 : v ∈ R^N, v ≠ cθ, c ∈ R },    (16)

and,

γ_max = N max { Σ_{i,j=1}^N A_ij (v_i − v_j)^2 / Σ_{i,j=1}^N (v_i − v_j)^2 : v ∈ R^N, v ≠ cθ, c ∈ R }.    (17)
If δ is the minimum degree and ∆ the maximum degree of the graph, then the
following inequalities can be derived:

γ_2 ≤ (N/(N−1)) δ ≤ (N/(N−1)) ∆ ≤ γ_max ≤ max{D_ii + D_jj} ≤ 2∆,    (18)
where Dii+Djj is the sum of degrees of two nodes that are connected by an edge.
We can prove the first inequality by choosing a particular v, namely the standard
basis vector e^(i) = all zeros except for a 1 in the ith position. Putting e^(i) into Eq. (16) we
get

γ_2 ≤ N D_ii / (N−1).

This last relationship is true for all D_ii and so is true for the
minimum, δ. A similar argument gives the first γ_max-∆ inequality. The remaining
inequalities bounding γ_max from above are obtained through the Perron-Frobenius
theorem [9].
There remains another inequality that we will use, but not derive here. It is the
following:

γ_2 ≥ 4 / (N diam(U)),    (19)

where diam(U) is the diameter of the graph. The distance between any two nodes
is the minimum number of edges traversed to get from one node to the other.
The diameter is the maximum of the distances [8]. There are other inequalities,
but we will only use what is presented up to here.
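The bounds of Eq. (18) (and the consequence Eq. (20) below) can be spot-checked numerically. The test graph here, a ring plus a few random chords, is our own illustrative choice:

```python
import numpy as np

# Numerical spot check of the eigenvalue bounds of Eq. (18) and the eigenratio
# bound Delta/delta <= gamma_max/gamma_2 on a semirandom graph.
rng = np.random.default_rng(0)
N = 20
A = np.zeros((N, N), dtype=int)
for i in range(N):                        # 1-cycle backbone keeps it connected
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
for _ in range(10):                       # add 10 random chords
    i, j = rng.choice(N, size=2, replace=False)
    A[i, j] = A[j, i] = 1

deg = A.sum(axis=1)
G = np.diag(deg) - A
ev = np.sort(np.linalg.eigvalsh(G))
g2, gmax = ev[1], ev[-1]
delta, Delta = deg.min(), deg.max()

print(g2 <= N / (N - 1) * delta)            # left bound of Eq. (18)
print(N / (N - 1) * Delta <= gmax + 1e-9)   # Delta bound of Eq. (18)
print(gmax <= 2 * Delta + 1e-9)             # right bound of Eq. (18)
print(Delta / delta <= gmax / g2 + 1e-9)    # eigenratio bound, Eq. (20)
```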
The synchronization criterion is that the ratio γ_max/γ_2 has to be less than some
value (α_2/α_1) for a particular oscillator node. We can see how this ratio will scale
and how much we can bound it using the inequalities. First, we note that the best
we can do is γ_max/γ_2 = 1, which occurs only when each node is connected to all other
nodes. We can estimate how far we are bounded away from 1 by using the first
inequalities of Eq. (18). This gives,

∆/δ ≤ γ_max/γ_2 .    (20)

Hence, if the degree distribution is not narrow, that is, there is a large range of
degrees, then the essential eigenratio γ_max/γ_2 cannot be close to 1, possibly
making the oscillators difficult to synchronize (depending on α_2/α_1). Note that
this does not necessarily mean that if δ/∆ is near 1 we have good synchronization.
We will see below a case where δ/∆=1, but γ_max/γ_2 is large and not near 1.
To have any hope of forcing the eigenratio down we need an upper bound.
We combine Eq. (18) with Eq. (19). This gives,

γ_max/γ_2 ≤ ( N diam(U) max{D_ii + D_jj : i and j are connected} ) / 4 .    (21)
This inequality is not very strong since even if the network has small diameter
and small degrees the eigenratio still scales as N. We want to consider large
networks, and obviously this inequality will not limit the eigenratio as the network
grows. However, it does suggest that if we can keep the degrees low and lower the
diameter, we can go from a high-diameter, possibly hard to synchronize network
to a more easily synchronized one. We will see this is what happens in smallworld
systems.
III. SYNCHRONIZATION IN SEMIRANDOM
SMALLWORLDS (CYCLES)
A. Generation of Smallworld Cycles
The generation of smallworld cycles starts with a k-cycle (defined shortly)
and then either rewires some of the connections randomly or adds new
connections randomly. We have chosen the case of adding more, random
connections since we can explore the cases from pristine k-cycle networks all the
way to complete networks (all nodes connected to all other nodes).
To make a k-cycle we arrange N nodes in a ring and connect each node to each
of its k nearest neighbors to the right. This gives a total of kN edges or
connections. Fig. 1 shows a 2-cycle. We can continue the construction past this
initial point by choosing a probability p, going around the cycle, and at each
node choosing a random number r in (0,1); if r ≤ p we add an edge from the
current node to a randomly chosen node currently not connected to our starting
node. Note that we can guarantee adding an edge to each node by choosing p=1.
We can extend this notion so we can reach a complete network by allowing
probabilities greater than 1 and letting the integer part specify how many times we
go around the cycle adding edges with probability p=1, with the final loop around
the cycle using the remaining fraction of probability. For example, if we choose
p=2.4 we go twice around the cycle adding an edge at each node (randomly to
other nodes) and then a third time around adding edges with probability 0.4. With
this construction we would use a probability p=(N–2k–1)/2 to make a complete
network.
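The construction just described can be sketched as follows (the function names and the edge bookkeeping are our own illustrative choices):

```python
import random

# Sketch of the smallworld construction: start from a pristine k-cycle on N
# nodes, sweep the ring floor(p) full times adding one random edge per node,
# then make a final sweep adding an edge at each node with probability frac(p).

def k_cycle_edges(N, k):
    """Edge set of the pristine k-cycle: kN edges, stored as sorted pairs."""
    return {tuple(sorted((i, (i + j) % N)))
            for i in range(N) for j in range(1, k + 1)}

def smallworld(N, k, p, seed=0):
    rng = random.Random(seed)
    edges = k_cycle_edges(N, k)
    full, frac = int(p), p - int(p)
    for sweep in range(full + 1):
        prob = 1.0 if sweep < full else frac
        for i in range(N):
            if rng.random() >= prob:
                continue
            # candidate endpoints: nodes not yet connected to node i
            candidates = [j for j in range(N)
                          if j != i and tuple(sorted((i, j))) not in edges]
            if candidates:
                edges.add(tuple(sorted((i, rng.choice(candidates)))))
    return edges

N, k = 50, 2
print(len(k_cycle_edges(N, k)))   # pristine: k*N = 100 edges
print(len(smallworld(N, k, 1.0))) # one full sweep adds N = 50 more edges
```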
We refer to the k-cycle without any extra, random edges as the pristine
network. It is interesting to examine the eigenvalues of the pristine Laplacian
especially as they scale in ratio.
B. Eigenvalues of Pristine Cycles
The Laplacian for the pristine k-cycle is shift invariant, or circulant, which
means that the eigenvalues can be calculated from a discrete Fourier transform of
a row of the Laplacian matrix. This gives for the lth eigenvalue,

γ_l = 2 [ k − Σ_{j=1}^k cos( 2π(l−1)j / N ) ].    (22)
For N even, l=1 gives γ_1 (nondegenerate) and l=N/2+1 gives a second nondegenerate
eigenvalue, with the remaining eigenvalues degenerate in pairs, γ_l = γ_{N−l+2}. A plot of the
eigenvalues for k=1, 2, 3, and 4 is shown in Fig. 6. The maximum eigenvalue
(ME) γ_max occurs at different l values for different k values. The second, or first
non-zero, eigenvalue (FNZE) always occurs for l=2.
Fig. 6 Eigenvalues of the pristine k-cycle for k=1, 2, 3, and 4
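Eq. (22) is easy to check against a direct numerical diagonalization; the sketch below does so and also prints the eigenratio discussed next:

```python
import numpy as np

# Check of Eq. (22): the pristine k-cycle Laplacian is circulant, so its
# eigenvalues follow from a discrete Fourier transform of one row.

def k_cycle_laplacian(N, k):
    G = np.zeros((N, N))
    for i in range(N):
        G[i, i] = 2 * k
        for j in range(1, k + 1):
            G[i, (i + j) % N] -= 1.0
            G[i, (i - j) % N] -= 1.0
    return G

def circulant_eigs(N, k):
    """gamma_l = 2[k - sum_j cos(2 pi (l-1) j / N)], l = 1..N (Eq. (22))."""
    l = np.arange(1, N + 1)
    j = np.arange(1, k + 1)
    return 2 * (k - np.cos(2 * np.pi * np.outer(l - 1, j) / N).sum(axis=1))

N, k = 50, 2
numeric = np.sort(np.linalg.eigvalsh(k_cycle_laplacian(N, k)))
formula = np.sort(circulant_eigs(N, k))
print(np.allclose(numeric, formula))   # True

g2, gmax = numeric[1], numeric[-1]
print(gmax / g2)    # eigenratio; grows roughly as N^2/(2k^3+3k^2+k)
```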
What we are mostly interested in here is the eigenratio γ_max/γ_2. We might even
ask whether this ratio is sufficiently small for the pristine lattices that we
do not need extra, random connections at all. Fig. 7 shows the eigenratios as a
function of N for k=1, 2, 3 and 4. The log-log axes of Fig. 7 show that γ_2 scales
roughly as (2k^3+3k^2+k)/N^2, the γ_max values are constant for each k, and, therefore, the
eigenratio γ_max/γ_2 scales as N^2/(2k^3+3k^2+k). The scaling of the eigenratio is
particularly bad in that the ratio gets large quickly with N, and given that for many
oscillators the stability region ratio (α_2/α_1) is between 10 and a few hundred, we see
that pristine k-cycles will generally not produce stable synchronous behavior.
Hence, we must add random connections in the hope of reducing γ_max/γ_2.
Fig. 7 Eigenratios of the pristine k-cycle as a function of N for k=1, 2, 3 and 4.
Fig. 8 Eigenratio for 50 node k-cycle semirandom network as a function of fraction of complete
network.
C. Eigenvalues of Smallworld Cycles
In this section and the next, on SFNs, we look at the trends in eigenratios as
new edges are randomly added, increasing the coupling between oscillators. If
new edges can be added cheaply in a system, then surely the best choice is to
form the complete network and obtain the best situation possible, γmax/γ2=1.
However, we take the view that in many systems there is some expense or cost in
adding extra edges to pristine networks. For biological systems the cost will be in
extra intake of food and extra energy, which will be fruitful only if the new
networks are a great improvement and provide the organisms some evolutionary
advantage. In human-engineered projects, the cost is in materials, time, and
perhaps maintenance. Therefore, the trend in γmax/γ2 as edges are added randomly
will be considered important.
Fig. 8 shows the eigenratio for k-cycle networks as a function of f, the
fraction of the complete graph that obtains when edges are added randomly, for
probabilities from p=0 up to values of p yielding complete graphs, for N=50
nodes. Fig. 9 shows the same for 100 nodes. For now we ignore the plots for SFNs and
concentrate only on the trends in the k-cycle, semirandom networks.
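The construction behind these sweeps can be sketched as follows (our own code, assuming numpy; the f=0.2 stopping point is an illustrative choice, not a value from the text). It builds the pristine k-cycle and then adds random shortcuts until a target fraction of the complete graph is reached, tracking the eigenratio:

```python
import numpy as np

def kcycle_adj(N, k):
    # Adjacency matrix of the pristine k-cycle.
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(1, k + 1):
            A[i, (i + j) % N] = A[(i + j) % N, i] = 1.0
    return A

def eigenratio(A):
    # gamma_max / gamma_2 of the graph Laplacian D - A.
    g = np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))
    return g[-1] / g[1]

rng = np.random.default_rng(0)
N, k = 50, 1
A = kcycle_adj(N, k)
r_pristine = eigenratio(A)

# add random shortcuts until 20% of the complete graph's edges are present
target = int(0.2 * N * (N - 1) / 2)
while A.sum() / 2 < target:
    i, j = rng.integers(N, size=2)
    if i != j and A[i, j] == 0:
        A[i, j] = A[j, i] = 1.0
assert eigenratio(A) < r_pristine  # random shortcuts shrink gamma_max/gamma_2
```

Sweeping the target fraction from the pristine value up to 1 reproduces the qualitative shape of the curves: a rapid initial drop followed by a slow approach to γmax/γ2 = 1 at the complete graph.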
Fig. 9 Eigenratio for 100 node k-cycle semirandom network as a function of fraction of
complete network.
Fig. 10 Distribution of node degrees for SFNs.
We saw in the previous section that the worst case for pristine k-cycles is
k=1. However, Fig. 8 and Fig. 9 show that this becomes the best case of all the
k-cycles once random edges are added, if we compare the networks at the same
fraction-of-complete-graph values, where the total number of edges, or cost to
the system, is the same. Although each case with k>1 starts at lower γmax/γ2
values for the pristine lattices, at the same fraction-of-complete-graph values the k=1
network has lower values. This immediately suggests that a good way to generate
a synchronizable network is to start with a k=1 network and randomly add edges.
We will compare this to starting with other networks below.
In terms of graph diameter we can heuristically understand the lowering of
γmax/γ2. Recall inequality Eq. (21). It strongly suggests that as diam(U)
decreases the eigenratio must decrease. From the original Watts and Strogatz
paper we know the average distance between nodes decreases early, upon the
addition of a few random edges. Although average distance is not the same as
diameter, we suspect that since the added edges are randomly placed, the
diameter also decreases rapidly. Note that the k-cycles are well-behaved with
regard to the other inequality, Eq. (20). In fact δ/∆ starts out with a value
of 1 in the pristine lattices. Of course, the eigenratio is not forced to this
value from above, since the diameter of the pristine networks is large, on the
order of N/(4k), which means the upper bound grows as N^2, the same trend as
the actual eigenratio for the pristine networks. However, adding edges does not
change δ/∆ by much, since the additions are random, but it does force the upper
bound down.
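The claim that randomly placed shortcuts shrink the diameter can be illustrated with a small pure-Python sketch (the node count and number of shortcuts are our own illustrative choices):

```python
import random
from collections import deque

def diameter(adj):
    # Diameter = longest shortest path; BFS from every node.
    diam = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        diam = max(diam, max(dist.values()))
    return diam

N = 100
adj = {i: {(i + 1) % N, (i - 1) % N} for i in range(N)}  # pristine 1-cycle
d0 = diameter(adj)  # exactly N/2 = 50 for the ring
random.seed(1)
for _ in range(10):  # add a handful of random shortcuts
    i, j = random.randrange(N), random.randrange(N)
    if i != j:
        adj[i].add(j)
        adj[j].add(i)
assert diameter(adj) < d0  # a few shortcuts already shrink the diameter
```

With only ten shortcuts on a hundred-node ring the diameter typically falls to a fraction of its pristine value, which is the behavior the heuristic argument above relies on.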
IV. SYNCHRONIZATION IN SCALE-FREE NETWORKS
A. Generation of Scale-Free Networks
Scale-free networks (SFNs) get their name because they have no "typical" or
average degree for a node. The distribution of degrees follows an inverse power
law, first discovered by Albert and Barabási [10], who later showed that one
could derive the power law in the "continuum" limit of large N. Several methods
have been given to generate SFNs, but here we use the original method [11].
The SFN is generated by an iterative process. We start with an initial number
of nodes connected by single edges. The actual initial state does not affect the
final network topology provided the initial state is a very small part of the final
network. We now add nodes one at a time and each new node is connected to m
existing nodes with no more than one connection to each (obviously the initial
number of nodes must be equal to or greater than m). We do this until we have the
desired number of nodes, N. The crucial part is in how we choose the m nodes to
connect each new node to. This is done by giving each existing node a probability
Pi which is proportional to the degree of that node. Normalizing the probabilities
gives,
P_i = d_i / ∑_{j ∈ existing nodes} d_j ,    (23)

where d_j is the degree of the jth node. Using this probability we can form the
cumulative set of intervals, { P1, P1+P2, P1+P2+P3,...}. To choose an existing
node for connection we randomly pick a number from the interval (0,1), say c,
and see which interval it falls into. Then we pick the node of that interval with the
highest index. A little thought will show that the ordering of the nodes in the
cumulative intervals will not matter since c is uniformly distributed over (0,1).
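The cumulative-interval selection described above can be sketched as follows (our own implementation; the initial seed graph, a short chain of m+1 nodes, is an assumption consistent with "an initial number of nodes connected by single edges"):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def scale_free_edges(N, m, seed=0):
    # Grow an SFN: each new node attaches to m distinct existing nodes,
    # chosen with probability proportional to degree as in Eq. (23).
    rng = random.Random(seed)
    edges = [(i, i + 1) for i in range(m)]  # seed graph: chain of m+1 nodes
    degree = [0] * N
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    for new in range(m + 1, N):
        cum = list(accumulate(degree[:new]))  # intervals {P1, P1+P2, ...}
        targets = set()
        while len(targets) < m:  # no more than one connection to each node
            c = rng.random() * cum[-1]  # uniform draw over the total weight
            targets.add(bisect_left(cum, c))  # interval that c falls into
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

edges, deg = scale_free_edges(200, 2)
assert len(edges) == 2 + 197 * 2       # m seed edges + m per added node
assert min(deg) >= 1 and max(deg) >= 4  # hubs emerge (typically far larger)
```

Because c is uniform, the chance of landing in interval i is exactly P_i, so the ordering of nodes in the cumulative list is irrelevant, as noted above.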
Such a network building process is often referred to as "the rich get richer." It
is easy to see that nodes with larger degrees will be chosen preferentially. The
above process forms a network in which a few nodes form highly connected hubs,
more nodes form smaller hubs, etc. down to many single nodes connected only
through their original m edges to the network. This can be quantified by
plotting the distribution, or count, of the degrees vs. the degree values, as
shown in Fig. 10.
Fig. 11 FNZE, ME, and their ratio vs. N for SFNs.
We note that since we are judging the synchronization efficiency of networks,
we want to compare different networks (e.g., cycles and SFNs) that have roughly
the same number of edges, where we view adding edges as adding to the "cost" of
building a network. We can easily compare pristine cycles and SFNs since for
k=m they have almost exactly the same number of edges (excepting the initial
nodes of the SFN) for a given N.
B. Eigenvalues of Pristine Scale-Free Networks
Although we do not start with a fully deterministic network in the SFN case,
we can still characterize the eigenvalues and, to a lesser extent, the
eigenvectors of the original SFN. This is useful when comparing the effect of
adding more, randomly chosen edges, as we do in the next section.
Fig. 12 Average distances between nodes on SW, SFN, and HC networks.
Fig. 11 shows the FNZE, ME, and their ratio vs. N for m values of 1, 2, 3, and
4. For m=1 we have (empirically) γmax ~ N^(1/2) and γ2 ~ 1/N; therefore the
ratio scales as γmax/γ2 ~ N^(3/2). Recall that the pristine cycle's ratio
scaled as γmax/γ2 ~ N^2; hence, for the same number of nodes and k=m, the
pristine cycle eigenratio grows faster than the SFN eigenratio by a factor of
order N^(1/2). Thus, for large networks it would seem that SFNs would be the
network of choice compared to cycles for obtaining the generic synchronization
condition γmax/γ2 ≤ α2/α1 with the same number of edges. However, as we add
random edges we will see that this situation does not persist. For values of
m≥2, the situation is not as clear.
Fig. 13 Clustering coefficient for SW, SFN, and HC networks.
C. Eigenvalues of Scale-Free Networks
Fig. 8 and Fig. 9 also show the eigenratio for SFNs as a function of f, the
fraction of the complete graph. Note that we can directly compare SW networks
and SFNs when m=k, for which they have the same f value. We are now in a
position to draw several conclusions regarding the comparison of SW networks
and SFNs. Such comparisons seem to hold across several values of N; we tested
cases for N=50, 100, 150, 200, and 300.
First, SW semirandom k-cycles start out in their pristine state with a larger
eigenratio than SFNs with m=k. However, with the addition of a small number of
edges, barely changing f, the SW k-cycle eigenratio soon falls below its SFN
counterpart with m=k. The eigenratios for all SFNs fall rapidly with increasing
f, but they do not reach the same low level as the SW networks until around
f=0.5. This implies that the SWs are more efficient than SFNs in reducing the
eigenratio or, equivalently, in synchronizing oscillators.
An interesting phenomenon for SFNs is that changing m only appears to move the
eigenratio up or down the m=1 curve. That is, curves for SFNs with different m
values fall on top of each other, with higher m values starting at higher f
values. This suggests an underlying universal curve to which the m=1 curve is
close. The connection to the previous section, where the level spacing of the
SFN eigenvalues begins to look random-like at higher m values, is mirrored
here: higher m values simply move the network further along the curve into
randomly-added-edge territory.
At f=1 all networks are the same. They are complete networks with N(N–1)/2
edges. At this point the eigenratio is 1. As we move back away from the f=1 point
by removing edges, but preserving the underlying pristine network, the networks
follow a "universal" curve down to about f=0.5. Larger differences between the
networks do not show up until over 50% of the edges have been removed, which
implies that the underlying pristine skeleton does not control the eigenratio
beyond f=0.5. This all suggests that an analytic treatment of networks beyond
f=0.5 might be possible. We are investigating this.
In their original paper Watts and Strogatz [12] suggested that the diminishing
average distances would allow a smallworld network of oscillators to couple more
efficiently to each other and therefore synchronize more readily. In Fig. 12 we
plot the average distance in each network as a function of the fraction of
complete graph f. In Fig. 13 the same type of plot is given for the clustering
coefficient. What we immediately notice is that there seems to be no obvious
relationship to the eigenratios as a function of f. SFNs start out with very
small average distances and have very low clustering coefficients, at least
until many random edges are added
to beyond f=0.1, which is beyond the smallworld regime. Thus, it appears that
neither average distance nor clustering controls the eigenratio. What network
statistics, if any, do affect γmax/γ2?
Fig. 14 The bounded interval containing the eigenratio γmax/γ2.
From the graph theory bounds in section II.C, Eqs. (20) and (21), we can
explain some of the phenomena we find numerically for the eigenratios. We can
immediately explain the higher eigenratio of SFNs using Eq. (20). The
eigenratio is bounded away from 1 (the best case) by the ratio of largest to
smallest degrees (∆/δ). For smallworld k-cycles this ratio starts at 1 (since
all nodes have degree 2k) and does not change much as edges are added randomly.
Hence, it is at least possible for the eigenratio to approach 1, although this
argument does not require it. Conversely, in the SFN the degree distribution is
wide, with many nodes of degree m and other nodes with degree on the order of
some power of N. Hence, for SFNs ∆/δ is large, precluding the possibility that
γmax/γ2 can get close to 1 until many random edges are added.
We can bound the eigenratio from above using Eq. (21). Thus, γmax/γ2 is
sandwiched between ∆/δ and (1/4) N diam(U) max{Dii + Djj}. We show this
schematically in Fig. 14. The ratio ∆/δ provides a good lower bound, but the
upper bound of Eq. (21) is not tight, in that it scales with N. Hence, even in
small networks the upper bound is an order of magnitude or more larger than the
good lower bound. At this stage it does not appear that graph theory can give the full
answer as to why smallworld k-cycles are so efficient for synchronizing
oscillators.
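Both ends of the sandwich can be evaluated directly for a small example (our own sketch, assuming numpy; the test graph, a 20-node ring with two fixed shortcuts, is an illustrative choice):

```python
import numpy as np
from collections import deque

def sandwich(A):
    # Returns (eigenratio, lower bound Delta/delta,
    #          upper bound (1/4) N diam(U) max_{(i,j) in E} (D_ii + D_jj)).
    N = len(A)
    deg = A.sum(axis=1)
    g = np.sort(np.linalg.eigvalsh(np.diag(deg) - A))
    diam = 0
    for s in range(N):  # diameter via BFS from every node
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if int(v) not in dist:
                    dist[int(v)] = dist[u] + 1
                    q.append(int(v))
        diam = max(diam, max(dist.values()))
    edge_deg = max(deg[i] + deg[j]
                   for i in range(N) for j in range(i) if A[i, j])
    return g[-1] / g[1], deg.max() / deg.min(), 0.25 * N * diam * edge_deg

# 20-node ring plus two fixed shortcuts, a smallworld-like test graph
N = 20
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
for i, j in [(0, 10), (5, 15)]:
    A[i, j] = A[j, i] = 1.0
ratio, lower, upper = sandwich(A)
assert lower <= ratio <= upper  # the eigenratio sits inside the sandwich
```

Running this makes the looseness of the upper bound concrete: the lower bound ∆/δ is of order 1 while the upper bound carries the factor N·diam(U), just as argued above.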
V. HYPERCUBE NETWORKS
As a simple third comparison we examine the eigenratio of the hypercube (HC)
network, which we showed in Ref. [13] to be a very efficient network for
obtaining low eigenratios. HC networks are built by connecting N = 2^D nodes in
a topology such that all nodes form the corners of a D-dimensional hypercube
and each node has degree D. Hence, the first graph theory inequality (Eq. (20))
allows the eigenratio to take the minimum allowed value of 1, although this is
not mandatory. Furthermore, it is not hard to show that the maximum eigenvalue
is 2D = 2 log2(N) and the smallest nonzero eigenvalue is always 2, so that
γmax/γ2 = D for the pristine HC network; this increases very slowly with N. We
therefore expect the eigenratio for any HC network to start off small and
decrease to 1.
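These spectral facts are easy to verify numerically (our own sketch, assuming numpy). The Laplacian spectrum of the D-cube is {2m : m = 0, ..., D}, so γ2 = 2 and γmax = 2D:

```python
import numpy as np

def hypercube_laplacian(D):
    # Nodes are D-bit integers; edges join integers differing in one bit.
    N = 2 ** D
    A = np.zeros((N, N))
    for u in range(N):
        for b in range(D):
            A[u, u ^ (1 << b)] = 1.0  # flip bit b to reach a neighbor
    return np.diag(A.sum(axis=1)) - A

D = 4
g = np.sort(np.linalg.eigvalsh(hypercube_laplacian(D)))
assert np.isclose(g[1], 2.0)     # gamma_2 = 2, independent of N
assert np.isclose(g[-1], 2 * D)  # gamma_max = 2D = 2 log2(N)
```

The eigenratio γmax/γ2 = D = log2(N) follows immediately, confirming the very slow growth with N claimed above.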
Fig. 8 and Fig. 9 contain plots of the HC network eigenratio vs. f from the
pristine state to the complete network. Indeed, the initial eigenratios are
low, but this comes at a price with two contributions. First, we must start at
a much higher f value; that is, HC networks in their pristine state require
more edges than the SW k=1 network. Second, the HC must be constructed
carefully, since its pristine state is more complex than that of any other
network considered here.
In the end, one gets the same performance by simply connecting all the nodes in
a loop and then adding a small number of random edges. The construction is
simpler, and the synchronization is as good as that of one of the best
networks, the HC.
VI. CONCLUSIONS
Our earlier work [13] showed that many regular networks were not as
efficient in obtaining synchronization as the smallworld network or fully random
networks. Here we added two other semirandom networks, the SFN+random
edges and the HC+random edges, to our analysis. The results indicate that
elaborately constructed networks offer no advantage, and perhaps even
disadvantages over the simple k-cycle+random edges, especially in the
smallworld regime.
The fully random network [14] can also have ratios γmax/γ2 which are similar
to the smallworld case for large enough probabilities for adding edges. However,
the random networks are never guaranteed to be connected. When disconnected, as
we noted earlier, synchronization is impossible. The observation we can make,
then, is that the 1-cycle smallworld network may be the best compromise between
fully random and fully regular networks. The smallworld network retains the
guaranteed connectedness of the regular networks, but gains all the advantages
of the random networks for efficient synchronization.
REFERENCES
[1] D.J. Watts, Small Worlds (Princeton University Press, Princeton, NJ, 1999).
[2] M.E.J. Newman and D.J. Watts, Physics Letters A 263 (4-6), 341 (1999); M.E.J. Newman, C. Moore, and D.J. Watts, Physical Review Letters 84 (14), 3201 (2000); C. Moore and M.E.J. Newman, Physical Review E 61 (5), 5678 (2000); L.F. Lago-Fernandez, R. Huerta, F. Corbacho et al., Physical Review Letters 84 (12), 2758 (2000); R. Monasson, European Physical Journal B 12 (4), 555 (1999); R.V. Kulkarni, E. Almaas, and D. Stroud, Physical Review E 61 (4), 4268 (2000); M. Kuperman and G. Abramson, Physical Review Letters 86, 2909 (2001); S. Jespersen, I. Sokolov, and A. Blumen, J. Chem. Phys. 113, 7652 (2000); H. Jeong, B. Tombor, R. Albert et al., Nature 407, 651 (2000); M. Barthelemy and L. Amaral, Physical Review Letters 82 (15), 3180 (1999).
[3] Y. Chen, G. Rangarajan, and M. Ding, Physical Review E 67, 026209 (2003); A.S. Dmitriev, M. Shirokov, and S.O. Starkov, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 44 (10), 918 (1997); P.M. Gade, Physical Review E 54, 64 (1996); J.F. Heagy, T.L. Carroll, and L.M. Pecora, Physical Review E 50 (3), 1874 (1994); G. Hu, J. Yang, and W. Liu, Physical Review E 58 (4), 4440 (1998); L. Kocarev and U. Parlitz, Physical Review Letters 77, 2206 (1996); L.M. Pecora and T.L. Carroll, International Journal of Bifurcation and Chaos 10 (2), 273 (2000); C.W. Wu, presented at the 1998 IEEE International Symposium on Circuits and Systems, Monterey, CA, 1998 (unpublished); C.W. Wu and L.O. Chua, International Journal of Bifurcation and Chaos 4 (4), 979 (1994); C.W. Wu and L.O. Chua, IEEE Transactions on Circuits and Systems 42 (8), 430 (1995).
[4] L.M. Pecora and T.L. Carroll, Physical Review Letters 80 (10), 2109 (1998).
[5] K. Fink, G. Johnson, D. Mar et al., Physical Review E 61 (5), 5080 (2000).
[6] Unlike our earlier work, we use a minus sign for the coupling terms so that the definitions of the matrices agree with the graph theory literature.
[7] A. Turing, Philosophical Transactions B 237, 37 (1952).
[8] D.M. Cvetković, M. Doob, and H. Sachs, Spectra of Graphs: Theory and Applications (Johann Ambrosius Barth Verlag, Heidelberg, 1995).
[9] B. Mohar, in Graph Symmetry: Algebraic Methods and Applications, NATO ASI Ser. C, edited by G. Hahn and G. Sabidussi (Kluwer, 1997), Vol. 497, p. 225.
[10] R. Albert and A.-L. Barabási, Physical Review Letters 84, 5660 (2000).
[11] R. Albert and A.-L. Barabási, Reviews of Modern Physics 74, 47 (2002).
[12] D.J. Watts and S.H. Strogatz, Nature 393, 440 (1998).
[13] M. Barahona and L.M. Pecora, Physical Review Letters 89, 054101 (2002).
[14] P. Erdős and A. Rényi, Publicationes Mathematicae (Debrecen) 6, 290 (1959); P. Erdős and A. Rényi, Bull. Inst. Internat. Statist. 38, 343 (1961).