Adjacency Matrix and Incidence Matrix

Adjacency Matrix
Let G be an n-vertex directed graph and let A be the n × n adjacency matrix of G. Element
a_ij = 1 if and only if the edge (i, j) ∈ G; all other elements are zero. Row i of A marks the heads
of the edges leaving vertex i, while column j of A marks the tails of the edges entering vertex j.
Powers of an Adjacency Matrix
Consider the power A^2 of an adjacency matrix. What does element (A^2)_ij mean? Element
(A^2)_ij is the dot product A(i, :)A(:, j). The k-th term in the dot product equals 1 if and only if
A(i, k) = A(k, j) = 1, in other words, if and only if there is a walk i → k → j. The dot product
thus counts the number of walks of length exactly 2 from i to j (a walk, unlike a path, may revisit
vertices). This generalizes to arbitrary nonnegative powers: (A^r)_ij is the number of walks of
length exactly r from i to j.
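A minimal numerical check of this interpretation, as a Python/NumPy sketch; the graph and its edge list are made up for illustration.

    import numpy as np

    # Hypothetical 4-vertex directed graph; an edge (i, j) means i -> j.
    edges = [(0, 1), (1, 2), (2, 3), (0, 2), (2, 0)]
    n = 4

    # Adjacency matrix: A[i, j] = 1 iff (i, j) is an edge.
    A = np.zeros((n, n), dtype=int)
    for i, j in edges:
        A[i, j] = 1

    # (A^r)[i, j] counts the walks of length exactly r from i to j.
    A2 = A @ A
    A3 = np.linalg.matrix_power(A, 3)
    print(A2[0, 3])   # 1: the single length-2 walk 0 -> 2 -> 3
    print(A3)         # all walk counts of length exactly 3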
Relevant questions:
• What if A is symmetric?
If A is symmetric, then the edge (i, j) is present exactly when the edge (j, i) is, so G is in effect an
undirected graph.
• If A^r = 0, then what can we say about the structure of G? And if A^r ≠ 0 for all r?
If A^r = 0, then there are no walks of length exactly r, so the longest path has length < r and the
graph has no cycles. Conversely, A^r ≠ 0 for all r if and only if G has a cycle.
• What is I + A + A^2 + · · · + A^r ?
The sum gives, for every pair of vertices, the number of walks of length at most r between them.
Drop the identity to count only the walks of positive length.
• (I − A)(I + A + A^2 + · · · + A^r) = I + A + · · · + A^r − (A + A^2 + · · · + A^{r+1}) = I − A^{r+1}.
If A^{r+1} = 0 (in particular, when G has no cycles and r is large enough), this implies that
(I − A)^{-1} = I + A + A^2 + · · · + A^r (see the sketch after this list).
• How can we interpret a symmetric permutation P^T AP ?
It is a relabeling of the vertices and does not change the structure of the graph. Two graphs
are isomorphic if and only if there is a symmetric permutation taking the adjacency matrix of
one to the adjacency matrix of the other.
• What if A is (now or after symmetric permutation) strictly upper triangular?
The vertices have been labeled so that every arc goes from a lower label to a higher label. This is
known as a topological ordering, and it implies that the graph has no cycles.
• What if A is (now or after symmetric permutation) banded?
The vertices have been labeled so that the edges are local. The graph is in a sense long and
thin.
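A small Python/NumPy sketch of the bullet on (I − A)^{-1} above; the acyclic graph is made up, and its strictly upper triangular adjacency matrix also illustrates the topological-ordering bullet.

    import numpy as np

    # Hypothetical DAG on 4 vertices, labeled so that A is strictly upper triangular.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])

    # A is nilpotent (A^4 = 0), so the geometric series terminates.
    series = np.eye(4, dtype=int) + A + A @ A + np.linalg.matrix_power(A, 3)
    inverse = np.round(np.linalg.inv(np.eye(4) - A)).astype(int)

    print(np.array_equal(series, inverse))   # True: (I - A)^-1 = I + A + A^2 + A^3
    print(series)                            # entry (i, j) counts the walks of every length from i to j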
Incidence Matrix
An undirected or directed graph G with n vertices and m edges can be represented by an m × n
incidence matrix A. Arbitrarily number the edges 1, 2, . . . , m and the vertices 1, 2, . . . , n (for an
undirected graph, also pick an arbitrary orientation for each edge). For edge i: (j, k), there is an
entry a_ij = −1 and an entry a_ik = 1. All other elements are zero.
Interpretation of Ax
The matrix-vector product Ax can be understood in the following way. The vector x has one
component for each vertex of G. Component i of Ax is the difference between the value of x at
the head of edge i and the value at its tail. The matrix-vector product Ax therefore maps values
on the vertices to differences across the edges.
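As a minimal Python/NumPy sketch (the graph and the vertex values are made up), building the incidence matrix and checking that Ax returns head-minus-tail differences:

    import numpy as np

    # Hypothetical directed graph: edge list of (tail j, head k) pairs.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    n, m = 4, len(edges)

    # Incidence matrix: row i has -1 at the tail and +1 at the head of edge i.
    A = np.zeros((m, n))
    for i, (j, k) in enumerate(edges):
        A[i, j] = -1
        A[i, k] = 1

    x = np.array([10.0, 12.0, 15.0, 15.0])   # one value per vertex
    print(A @ x)                             # per-edge differences: [2. 3. 5. 0.]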
Interpretation of Nullspace
The nullspace of A consists of the vectors x that give zero difference across every edge; for a
connected graph these are exactly the constant vectors.
Interpretation of A^T x
The matrix-vector product A^T x can be understood in the following way. Here the vector x has one
component for each edge of G. The entry (A^T)_ij = −1 iff edge j leaves vertex i, and (A^T)_ij = 1
iff edge j enters vertex i. The matrix-vector product A^T x therefore sums the incoming flow at
each vertex and subtracts the outgoing flow: each component is the net flow into that vertex.
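A short sketch of this in Python/NumPy, reusing the made-up graph from the previous sketch and an arbitrary flow vector:

    import numpy as np

    # Same hypothetical graph as before: 4 vertices, 4 directed edges.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    n, m = 4, len(edges)
    A = np.zeros((m, n))
    for i, (j, k) in enumerate(edges):
        A[i, j], A[i, k] = -1, 1

    w = np.array([1.0, 1.0, 2.0, 3.0])   # a flow on each edge
    # (A^T w)[v] = inflow minus outflow at vertex v.
    print(A.T @ w)                       # [-3.  0.  0.  3.]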
Interpretation of Left Nullspace
The left nullspace of A consists of the edge vectors that give zero net flow at every vertex. The
nontrivial such flows are combinations of constant flows around the cycles of G.
Interpretation of A^T Ax
Since Ax computes the difference across each edge, the product A^T (Ax) computes the net flow
at each vertex induced by the values x at the vertices.
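A Python/NumPy sketch with the same made-up graph as above. With unit weights, A^T A has the vertex degrees on the diagonal and −1 for each edge off the diagonal (the graph Laplacian), and A^T (Ax) is the net flow driven by the vertex values x.

    import numpy as np

    # Same hypothetical graph as in the previous sketches.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    n, m = 4, len(edges)
    A = np.zeros((m, n))
    for i, (j, k) in enumerate(edges):
        A[i, j], A[i, k] = -1, 1

    L = A.T @ A
    print(L)        # diagonal = vertex degrees, off-diagonal = -1 per edge

    x = np.array([10.0, 12.0, 15.0, 15.0])
    print(L @ x)    # net flow at each vertex; the components sum to zero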
Stiffness Matrix
Consider n masses serially connected with springs. Mass m_i for i = 2, 3, . . . , n − 1 is connected
to m_{i-1} and m_{i+1}. The masses m_1 and m_n are possibly connected to a fixed point.
Consider the fixed-fixed case where both ends are connected to a fixed point. Let x be the
vector of displacements of the masses. Let y be the vector of internal forces and c the vector of
spring constants, which we arrange in a diagonal matrix C. The vector e contains elongations of
the springs.
The elongation is a linear function of the displacements. In general,
e_i = x_i − x_{i-1},
where a fixed endpoint is treated as having zero displacement.
With four masses, we get e = Ax for

Afix/fix =
    [  1   0   0   0 ]
    [ −1   1   0   0 ]
    [  0  −1   1   0 ]
    [  0   0  −1   1 ]
    [  0   0   0  −1 ]

Afix/free =
    [  1   0   0   0 ]
    [ −1   1   0   0 ]
    [  0  −1   1   0 ]
    [  0   0  −1   1 ]

Afree/free =
    [ −1   1   0   0 ]
    [  0  −1   1   0 ]
    [  0   0  −1   1 ]
Now use Hooke’s law to find the internal forces y from the spring constants and elongations.
Hooke’s law states that the internal force is directly proportional to the elongation:
y = Ce = CAx.
Finally, we balance the internal forces against the external forces to find the equilibrium
state. For an interior mass,
f_i = m_i g = y_i − y_{i+1}.
In matrix notation,
f = gm = A^T y = A^T CAx.
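A Python/NumPy sketch of assembling K = A^T CA for the fixed-fixed chain of four masses and solving for the displacements; the masses and spring constants are made-up numbers.

    import numpy as np

    g = 9.81
    m = np.array([1.0, 2.0, 3.0, 4.0])            # hypothetical masses
    C = np.diag([10.0, 10.0, 10.0, 10.0, 10.0])   # hypothetical spring constants

    # Fixed-fixed difference matrix: four masses, five springs.
    A = np.array([[ 1,  0,  0,  0],
                  [-1,  1,  0,  0],
                  [ 0, -1,  1,  0],
                  [ 0,  0, -1,  1],
                  [ 0,  0,  0, -1]], dtype=float)

    K = A.T @ C @ A                  # stiffness matrix
    f = g * m                        # external forces
    x = np.linalg.solve(K, f)        # displacements
    e = A @ x                        # elongations
    y = C @ e                        # internal forces (Hooke's law)
    print(x)
    print(np.allclose(A.T @ y, f))   # force balance A^T y = f holds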
Relevant questions:
• What does it mean if K = A^T CA is nonsingular? Singular?
The final equation relates the external forces to the displacements via f = Kx. If K is nonsingular,
then the displacements x are uniquely determined by the external forces. Compare the fixed-free
case (nonsingular) with the free-free case (singular).
• What if A is nonsingular?
Since e = Ax, if A is square and nonsingular (the fixed-free case), then the displacements are
uniquely recovered from the elongations: x = A^{-1}e.
• The matrix Afree/free has a nullspace. What is it and what does it mean?
If A has a nontrivial nullspace, then there are infinitely many displacements that give the
same set of elongations. The nullspace of Afree/free is the span of the vector of all ones. It
means that you may displace all masses the same amount and the elongations will be zero.
• The matrix Afix/fix has a left nullspace. What is it and what does it mean?
The left nullspace of Afix/fix is spanned by the vector of all ones. From f = A^T y we see that the
internal forces are not uniquely determined by the external forces.
• Characterize all static force vectors f in the free-free case.
The system Kx = f is solvable, and the configuration can be static, precisely when f lies in the
column space of K. The column space consists of the vectors orthogonal to the left nullspace, which
in this case is spanned by the vector of all ones. Hence, the valid f vectors are those whose
components sum to zero (see the sketch below).
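A small Python/NumPy sketch of the free-free case (with made-up spring constants): K is singular, its nullspace is spanned by the all-ones vector, and Kx = f is solvable exactly when the components of f sum to zero.

    import numpy as np

    A = np.array([[-1,  1,  0,  0],
                  [ 0, -1,  1,  0],
                  [ 0,  0, -1,  1]], dtype=float)
    C = np.diag([10.0, 20.0, 30.0])        # made-up spring constants
    K = A.T @ C @ A

    print(np.allclose(K @ np.ones(4), 0))  # True: the all-ones vector is in the nullspace
    print(np.linalg.matrix_rank(K))        # 3, so the 4x4 matrix K is singular

    f = np.array([1.0, -2.0, 0.5, 0.5])    # components sum to zero, so Kx = f is solvable
    x, *_ = np.linalg.lstsq(K, f, rcond=None)
    print(np.allclose(K @ x, f))           # True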
Network Flow: Electrical Engineering
Consider the circuit depicted below.

[Figure: a circuit with four nodes, numbered 1–4, and five directed edges, numbered 1–5; edge 1
runs 1 → 3, edge 2 runs 1 → 2, edge 3 runs 3 → 2, edge 4 runs 1 → 4, and edge 5 runs 3 → 4.]
We search for the equilibrium potentials and currents.
The graph structure gives the incidence matrix

A0 =
    [ −1   0   1   0 ]
    [ −1   1   0   0 ]
    [  0   1  −1   0 ]
    [ −1   0   0   1 ]
    [  0   0  −1   1 ].
The framework we use to solve this problem is as follows.
• Potentials at the vertices: u.
• Potential differences over the edges: e.
• Currents along the edges: w.
• Conductance (inverse resistance) of the edges: c.
The potential differences are given by
e = −A0 u.
Notice the minus sign. We use it to conform to the convention that electric current flows from
higher potential to lower potential.
The currents are given by Ohm's law:
w = Ce = −CA0 u.
Finally, Kirchhoff’s Current Law is a balance equation which says that the net current flow is
zero at each node. We get the set of equations
−w1 − w2 − w4 = f1 ,
w2 + w3 = f2 ,
w1 − w3 − w5 = f3 ,
w4 + w5 = f4 .
In matrix form:

    [ −1  −1   0  −1   0 ] [ w1 ]   [ f1 ]
    [  0   1   1   0   0 ] [ w2 ]   [ f2 ]
    [  1   0  −1   0  −1 ] [ w3 ] = [ f3 ]
    [  0   0   0   1   1 ] [ w4 ]   [ f4 ]
                           [ w5 ]

where the 4 × 5 coefficient matrix is A0^T.
It is time to put everything together:

f = −A0^T C A0 u = K0 u

balances the potentials u with the current sources f. These equations do not uniquely determine u
since A0 has a nontrivial nullspace. The physical interpretation is that the potentials are relative
and we need to ground one vertex. We arbitrarily drop the last column of A0 and get

A =
    [ −1   0   1 ]
    [ −1   1   0 ]
    [  0   1  −1 ]
    [ −1   0   0 ]
    [  0   0  −1 ].
Suppose we have batteries on some of the edges. The first equation changes slightly and we
summarize the three equations:

    e = b − Au,
    w = Ce,
    f = A^T w.
If we are only interested in the potentials, we simply eliminate e and w:
f = A^T C(b − Au) = A^T Cb − A^T CAu.
Assume we are interested in the potentials u and the currents w but do not care about the potential
differences. We can eliminate e by solving the second equation for e and substituting into the first
equation:

    C^{-1} w = b − Au,
    f = A^T w.
Collect the unknowns w and u into a single vector and move the current sources to the right:

    [ C^{-1}  A ] [ w ]   [ b ]
    [ A^T     0 ] [ u ] = [ f ].
We may decouple w from u by doing block elimination:

    [ C^{-1}  A ] [ w ]   [ b ]        [ C^{-1}    A      ] [ w ]   [ b          ]
    [ A^T     0 ] [ u ] = [ f ]   ⇔   [ 0       −A^T CA  ] [ u ] = [ f − A^T Cb ].

The multiplier was A^T C. The last block row is an equation for u uncoupled from w:

−A^T CAu = f − A^T Cb.
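A minimal Python/NumPy sketch of the whole pipeline for this circuit; the conductances, the battery vector b, and the current sources f are made-up numbers, and grounding the last node corresponds to dropping the last column of A0.

    import numpy as np

    # Incidence matrix A0 of the circuit (5 edges x 4 nodes); drop the last column to ground node 4.
    A0 = np.array([[-1,  0,  1,  0],
                   [-1,  1,  0,  0],
                   [ 0,  1, -1,  0],
                   [-1,  0,  0,  1],
                   [ 0,  0, -1,  1]], dtype=float)
    A = A0[:, :3]

    C = np.diag([1.0, 2.0, 1.0, 0.5, 2.0])    # hypothetical conductances
    b = np.array([9.0, 0.0, 0.0, 0.0, 0.0])   # hypothetical battery on edge 1
    f = np.array([0.0, 1.0, -1.0])            # hypothetical current sources at nodes 1-3

    # Block elimination: solve -A^T C A u = f - A^T C b, then recover the currents.
    u = np.linalg.solve(-A.T @ C @ A, f - A.T @ C @ b)
    w = C @ (b - A @ u)

    print(u)                            # potentials at the ungrounded nodes
    print(np.allclose(A.T @ w, f))      # Kirchhoff's current law: A^T w = f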
Consumption Matrix
Assume we have n industries in an economy. Suppose that there is a linear relationship between
the inputs and the produced outputs. Say we want to have an output p from the economy. How
do we construct a matrix A such that the matrix-vector product e = Ap gives the required input
to produce the given output p? In particular, e is a linear combination of the columns of A, each
scaled by an element of p. Our matrix A must therefore have as its j-th column, A(:, j), a vector
describing the required inputs of all products to produce one unit of product j.
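A toy Python/NumPy sketch of this construction; the consumption coefficients and the desired output are made-up numbers.

    import numpy as np

    # Hypothetical 3-industry economy. Column j lists the inputs of every product
    # required to produce one unit of product j.
    A = np.array([[0.1, 0.3, 0.2],
                  [0.2, 0.1, 0.1],
                  [0.0, 0.2, 0.1]])

    p = np.array([100.0, 50.0, 80.0])   # desired output of each industry
    e = A @ p                           # inputs consumed while producing p
    print(e)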
Balance Equation
We want to satisfy a demand y. We can produce an output p, but in the process we consume Ap
resources. The net production is therefore p − Ap = (I − A)p. The balance equation is
(I − A)p = y.
When it is solvable with p ≥ 0 for every demand y ≥ 0, we can satisfy any nonnegative demand. A
unique solution exists if and only if (I − A)^{-1} exists. Consider
(I − A)(I + A + A^2 + A^3 + · · · ) = I − A + A − A^2 + A^2 − A^3 + A^3 − · · · .
The telescoping sum converges to I if and only if A^r → 0 as r → ∞. In that case (I − A)^{-1} =
I + A + A^2 + · · · ; since A has nonnegative elements, every term is nonnegative, so the inverse
has nonnegative elements and p = (I − A)^{-1} y ≥ 0 for any demand y ≥ 0.
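Continuing the toy example above, a Python/NumPy sketch that solves the balance equation directly and checks it against the series for (I − A)^{-1}; the matrix A and the demand y are the same made-up numbers.

    import numpy as np

    A = np.array([[0.1, 0.3, 0.2],
                  [0.2, 0.1, 0.1],
                  [0.0, 0.2, 0.1]])
    y = np.array([50.0, 30.0, 20.0])      # demand to satisfy

    # Direct solve of the balance equation (I - A) p = y.
    p = np.linalg.solve(np.eye(3) - A, y)

    # Neumann series I + A + A^2 + ... converges here because A^r -> 0.
    S, term = np.eye(3), np.eye(3)
    for _ in range(200):
        term = term @ A
        S = S + term

    print(p)                              # nonnegative production plan
    print(np.allclose(S @ y, p))          # the series gives the same answer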