MACHINE ALGORITHMS FOR BOOLEAN MATRICES
by
DALE ROBERT COMSTOCK
A THESIS
submitted to
OREGON STATE UNIVERSITY
in
partial fulfillment of
the requirements for the
degree of
MASTER OF SCIENCE
June 1963
APPROVED:

Professor of Mathematics In Charge of Major

Chairman of Department of Mathematics

Chairman of School Graduate Committee

Dean of Graduate School

Date thesis is presented July 18, 1962

Typed by Carol Baker
TABLE OF CONTENTS

Chapter
I.    INTRODUCTION
II.   BOOLEAN ALGEBRA
III.  BOOLEAN MATRICES
IV.   CONTACT NETWORKS
V.    THE ANALYSIS OF CONTACT NETWORKS
VI.   MACHINE ALGORITHMS FOR CONTACT NETWORKS
      BIBLIOGRAPHY
      APPENDIX
MACHINE ALGORITHMS FOR BOOLEAN MATRICES
CHAPTER I
INTRODUCTION
The complex circuitry of the modern electronic digital computer has resulted in detailed studies of methods of analysis and synthesis of switching networks. In many of these studies the desirable outcome is that of simplifying the circuit, so that the final circuit will be as economical as possible. In the past ten years the concept of Boolean matrices has been used with success in switching network theory.

The purpose of this paper is to discuss some of the general theory of Boolean matrices, illustrate their application to contact networks and formulate machine algorithms for answering the question: Is network A equivalent to network B? That is, suppose network A has been redesigned to yield a simpler network B. The simplification may have been done by trial and error or any of the many other methods studied in recent years. We mechanize a method for showing the equivalence of the two networks.

The first section of this paper defines a Boolean algebra and lists some of its elementary properties. The second section presents Boolean matrices and their properties. The next section illustrates the application of Boolean matrices to contact networks. The last section presents some machine algorithms for network analysis with Boolean matrices. An appendix is included which discusses two special cases.
CHAPTER II

BOOLEAN ALGEBRA

Definition 2.1. A set B together with two binary operations, (+) and (·), is a Boolean algebra provided that the following postulates hold for elements a, b, c ∈ B:

P1. The operations (+) and (·) are commutative, i.e. a + b = b + a and ab = ba.

P2. There exist in B distinct identity elements 0 and 1 relative to the operations (+) and (·), respectively, i.e. there exist elements 0, 1 ∈ B such that a + 0 = a and a·1 = a.

P3. Each operation is distributive relative to the other, i.e. a + (b·c) = (a + b)·(a + c) and a·(b + c) = (a·b) + (a·c).

P4. For each a ∈ B, there exists an a' ∈ B such that a + a' = 1 and a·a' = 0. The element a' is called the dual of a.

The above definition of a Boolean algebra is due to E. V. Huntington (4).

Some of the basic properties of a Boolean algebra are given by the following theorem:
Theorem 2.1. For each a, b, c ∈ B, we have

i. 0 and 1 are unique elements of B;
ii. a + a = a and a·a = a;
iii. a + 1 = 1 and a·0 = 0;
iv. a + a·b = a and a·(a + b) = a;
v. a + (b + c) = (a + b) + c and a·(b·c) = (a·b)·c;
vi. (a')' = a;
vii. (a'·b')' = a + b and a·b = (a' + b')';
viii. a + a'·b = a + b.

For a proof of the above theorem and a discussion of other properties of a Boolean algebra see (11).

Definition 2.2. The order relation ≤ (less than or equal to) is defined on B as follows: for each a, b ∈ B, a ≤ b if and only if a + b = b.
Theorem 2.2. The relation ≤ partially orders the set B, i.e. for any a, b, c ∈ B we have

i. a ≤ a;
ii. if a ≤ b and b ≤ a, then a = b;
iii. if a ≤ b and b ≤ c, then a ≤ c.

Theorem 2.3. For a, b ∈ B, if a ≤ b, then a·c ≤ b·c and c·a ≤ c·b for any c ∈ B.

Proof. By P1 and P3 of Definition 2.1, we have a·c + b·c = (a + b)·c. But, if a ≤ b, then a + b = b by Definition 2.2. Hence, a·c + b·c = b·c, which is equivalent to a·c ≤ b·c by Definition 2.2. c·a ≤ c·b follows from P1.
Perhaps the simplest Boolean algebra, denoted by B0, consists of the two elements 0 and 1. Addition (+), multiplication (·), and duality (') are defined by the tables

    +  0  1        ·  0  1        a  a'
    0  0  1        0  0  0        0  1
    1  1  1        1  0  1        1  0

Now to the Boolean algebra B0, we adjoin the variables x1, x2, ..., xm whose common domain is B0, extending to them the postulates and definitions previously made.
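The tables above are small enough to state directly in code. The following is a minimal Python sketch (not part of the thesis; the function names are only illustrative) of the operations of B0 on the set {0, 1}.

def b_add(a, b):
    # Boolean addition (+): 0 + 0 = 0, otherwise 1.
    return a | b

def b_mul(a, b):
    # Boolean multiplication (.): 1 . 1 = 1, otherwise 0.
    return a & b

def b_dual(a):
    # Duality ('): 0' = 1 and 1' = 0.
    return 1 - a

if __name__ == "__main__":
    # Reproduce the three tables of the text.
    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", b_add(a, b), "  ", a, ".", b, "=", b_mul(a, b))
        print(a, "' =", b_dual(a))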
Definition 2.3. A function f(x1, x2, ..., xm) built up from the variables x1, x2, ..., xm and from the elements of B0 by a finite number of applications of the operations (+), (·), and (') will be called a Boolean function of the m variables x1, x2, ..., xm.

Definition 2.4. The set of all Boolean functions of the m variables x1, x2, ..., xm will be called the Boolean algebra Bm.

This particular Boolean algebra has been extensively applied in switching network theory. Except for the more general results concerning Boolean matrices, this paper will be devoted entirely to the Boolean algebra Bm.

As a consequence of the definition of Boolean functions and the equality of functions, we have for any f, g ∈ Bm, f = g if and only if f and g have the same value when x1, x2, ..., xm take on any set of values from B0. The previous remark gives rise to the so-called "identification problem" of the Boolean algebra Bm, i.e. given two functions of Bm, are they equivalent? Several solutions to this problem exist. An obvious solution to the problem is to evaluate the two functions for every set of values of the variables x1, x2, ..., xm. This requires a large amount of computation if m is very large. A second method is to perform an equivalence transformation on the function which transforms it into a unique canonical form. Then we identify the equivalence of two functions by inspection of their representations in canonical form.

In this paper we shall show the equivalence of two functions by the second method mentioned above.
The canonical form that we have selected has come to be known as Quine's canonical form. It is formed in the following manner:

i. transform the function into an equivalent polynomial representation;
ii. if β1 and β2 are two monomials of this polynomial such that β2 ≤ β1, then delete β2 from the polynomial;
iii. if xi·β1 and xi'·β2 are two monomials of the polynomial such that β1·β2 is not less than or equal to any monomial of the polynomial, then add β1·β2 to the polynomial;
iv. if the monomials xi and xi' (i = 1, 2, ..., m) occur in the polynomial, then the canonical form is the Boolean constant 1.

In view of the distributive postulates and Theorem 2.1, vii, any Boolean function can be transformed into a polynomial. Of course, such a transformation may not yield a unique result. Laxdal (5) has shown that after a finite number of applications of ii, iii, and iv, the polynomial representation of a Boolean function has been transformed into Quine's canonical form.
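As an illustration only (this is not the thesis's word-pair machine representation, which is described in Chapter VI, and step i is assumed to have been done already), steps ii, iii, and iv can be sketched in Python by representing a monomial as a pair of frozensets: the variables occurring uncomplemented and those occurring complemented. All names below are assumptions.

def leq(m1, m2):
    # m1 <= m2 for monomials: every literal of m2 is also a literal of m1.
    return m2[0] <= m1[0] and m2[1] <= m1[1]

def quine_reduce(poly, m):
    # Apply steps ii-iv repeatedly until no rule applies (or the constant 1 results).
    poly = set(poly)
    while True:
        # step iv: xi and xi' both occur as monomials -> the Boolean constant 1
        for i in range(1, m + 1):
            if (frozenset([i]), frozenset()) in poly and \
               (frozenset(), frozenset([i])) in poly:
                return 1
        # step ii: delete every monomial b2 with b2 <= some other monomial b1
        absorbed = {b2 for b1 in poly for b2 in poly if b1 != b2 and leq(b2, b1)}
        if absorbed:
            poly -= absorbed
            continue
        # step iii: add the consensus b1*b2 of xi*b1 and xi'*b2 when it is not
        # less than or equal to any monomial already present
        new = set()
        for b1 in poly:
            for b2 in poly:
                for i in b1[0] & b2[1]:
                    c = ((b1[0] - {i}) | b2[0], b1[1] | (b2[1] - {i}))
                    if c[0] & c[1]:        # contains some xj and xj': the product is 0
                        continue
                    if not any(leq(c, b) for b in poly):
                        new.add(c)
        if not new:
            return poly
        poly |= new

# Example: x1*x2' + x1'*x2' reduces (by consensus on x1, then absorption) to x2'.
p = {(frozenset([1]), frozenset([2])), (frozenset(), frozenset([1, 2]))}
print(quine_reduce(p, m=2))    # {(frozenset(), frozenset({2}))}, i.e. the monomial x2'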
CHAPTER III

BOOLEAN MATRICES

Definition 3.1. Let B be an arbitrary Boolean algebra and let M be the set of all n x n matrices, where each element of the matrix is an element of B. We make the following definitions, where A = (aij), B = (bij) are elements of M (i, j = 1, 2, ..., n):

i. A + B = (εij) where εij = aij + bij;
ii. A·B = (εij) where εij = aij·bij;
iii. A' = (εij) where εij = aij';
iv. Z = (εij) where εij = 0;
v. I = (εij) where εij = 1;
vi. A = B if and only if aij = bij.

Theorem 3.1. The set M together with the two operations (+) and (·) as defined in Definition 3.1 is a Boolean algebra.

Proof. The theorem is established immediately since the elements of M satisfy the postulates P1, P2, P3, and P4 of Definition 2.1.

Theorem 3.1 is due to the Russian mathematician A. G. Lunts (8). As a consequence of Theorem 3.1, Boolean matrices have all the properties of a Boolean algebra and, in particular, Theorem 2.1 holds for Boolean matrices.

Definition 3.2. For A, B ∈ M, A ≤ B if and only if aij ≤ bij for i, j = 1, 2, ..., n.

Theorem 3.2. For A, B ∈ M, A ≤ B if and only if A + B = B.

Proof. If A ≤ B, then aij ≤ bij for i, j = 1, 2, ..., n. Hence, by Definition 2.2, aij + bij = bij for every i, j, and so A + B = B. If A + B = B, then aij + bij = bij for i, j = 1, 2, ..., n. Hence, by Definition 2.2, aij ≤ bij for every i, j and so A ≤ B.
As in ordinary matrix theory over a field, we make the following definitions:

Definition 3.3. For A, B ∈ M, we define

i. AB = (εij) where εij = Σ aik·bkj, the sum being taken over k = 1, 2, ..., n;
ii. E = (εij) where εij = 1 if i = j and εij = 0 if i ≠ j;
iii. |A| = Σ a1s1·a2s2 ... ansn, where the summation is taken over all permutations (s1, s2, ..., sn) of (1, 2, ..., n); |A| is called the determinant of A;
iv. if Aij is the submatrix of A obtained by deleting the ith row and the jth column (which intersect in the element aij) of A, then |Aij| is termed the minor of aij in A;
v. A^T = (εij) where εij = aji is called the transpose of A;
vi. Â = (εij) where εij = |Aji| is called the adjoint of the matrix A.
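For concreteness, the following Python sketch (an illustration only, not the thesis's machine representation) carries out Definition 3.3 for matrices whose entries lie in B0 = {0, 1}; entries in Bm would require the polynomial machinery of Chapter VI. All function names are assumptions.

from itertools import permutations

def bmat_mul(A, B):
    # Matrix product of Definition 3.3, i: entry (i, j) is the Boolean sum over k of aik * bkj.
    n = len(A)
    return [[max(A[i][k] & B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bmat_det(A):
    # Boolean determinant (Definition 3.3, iii): sum over all permutations, with no signs.
    n = len(A)
    return max(min(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def bmat_minor(A, i, j):
    # |Aij|: determinant of A with row i and column j deleted.
    sub = [[A[r][c] for c in range(len(A)) if c != j]
           for r in range(len(A)) if r != i]
    return bmat_det(sub)

def bmat_transpose(A):
    return [list(row) for row in zip(*A)]

def bmat_adjoint(A):
    # Adjoint (Definition 3.3, vi): entry (i, j) is the minor |Aji|.
    n = len(A)
    return [[bmat_minor(A, j, i) for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    A = [[1, 1, 0],
         [0, 1, 1],
         [0, 0, 1]]
    print(bmat_mul(A, A))      # the matrix product AA
    print(bmat_det(A))         # the Boolean determinant of A
    print(bmat_adjoint(A))     # the adjoint matrix of A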
Notice the difference between the matrix multiplication of Definition 3.3, i and the Boolean multiplication of Definition 3.1, ii. In the sequel AB will mean matrix multiplication and A·B will mean Boolean multiplication.

The Boolean matrix E appears as the identity element for matrix multiplication, i.e. AE = EA = A for each A ∈ M. The definition of |A| differs from the ordinary definition of determinant only with respect to the Kronecker symbol, which, of course, would have no meaning in a Boolean algebra.

With respect to matrix multiplication many of the properties of ordinary matrices carry over into Boolean matrices. For a proof of these results, any standard text on matrices can be consulted.
Theorem 3.3. For A, B, C ∈ M, A(B + C) = AB + AC and (A + B)C = AC + BC.

Theorem 3.4. For A, B, C ∈ M, (AB)C = A(BC).

Theorem 3.5. For some A, B ∈ M, AB ≠ BA.

Theorem 3.6. For A, B ∈ M, (AB)^T = B^T A^T.

Theorem 3.7. For A ∈ M, |A| = |A^T|.

Theorem 3.8. For A ∈ M, we have

i. |A| is not changed by interchanging rows and columns;
ii. |A| is not changed by the interchange of two rows (or columns);
iii. if all the elements of a row (or column) have a common factor, then it may be taken outside the determinant sign;
iv. if the elements of an arbitrary row (or column) are the sum of two terms, then the determinant appears as the sum of two determinants;
v. |A| may be expanded with respect to the elements of an arbitrary row or column, i.e.

|A| = Σ aik·|Aik| = Σ akj·|Akj|,   the sums taken over k = 1, 2, ..., n,

where |Aik| is the minor of the element aik in A.
ik
We prove some properties of Boolean matrix algebra not
encountered in ordinary matrix algebra.
Theorem 3.9. For AE M, (AT)'
Proof. Let
where E ..=
E
ij
=
A
ai
Hence,
a:...
J1
.
(A
=
=
AT BT.
)T.
) and A' = (a' ).
Then (A' )T =
ij
ij
But AT = (a i) so (AT)' _ (Ei ) where
(a
T)'
J
_
(A')
T
J
.
Theorem 3.10. For A, BE M,
(A B)T
_ (A'
(A +
B)T
=
AT
+
BT and
12
Proof. Let
A
(a..
=
)
and
B
Then A
).
= (b
+ B = (aij
+
b..),
J
so that (A
+
+
B)T
=
(a
Ji
+
b,.). Now AT
=
(a..
(a
)
and BT
= (b
+
ji
AT
+
BT.
(a.1J b..)T
13
=
(a
B)T
+
=
so AT
)
ji
b..). Hence, (A
ji
AT BT, we see that (A B)T
B
=
J1
To show (A B)T
J1
=
AT
T
=
=
(ajibji). Hence, the proof
Theorem 3.11. For A,
Ji
b..).
J1
But, also
is complete.
B E M, if A <
B, then AC
<
BC and
CA < CB for any C E M.
Proof. AC + BC = (A + B)C by Theorem 3.3. Hence, by hypothesis and Theorem 3.2, AC + BC = BC. Thus, AC ≤ BC. CA ≤ CB if A ≤ B is established in the same manner. Notice that CA ≤ CB does not follow from AC ≤ BC, since the operation of forming the matrix product is not commutative.

Theorem 3.12. For A ∈ M, if E ≤ A, then E ≤ A ≤ A^2 ≤ A^3 ≤ A^4 ≤ ....

Proof. This result follows immediately by repeated application of Theorem 3.11, i.e. E ≤ A implies AE = A ≤ A^2, A ≤ A^2 implies A^2 ≤ A^3, etc.

For the next theorem we introduce the notation A* = Σ A^k, the sum taken over k = 1, 2, ..., ∞.

Theorem 3.13. For A, B ∈ M, if E ≤ A and E ≤ B, then (A + B)* = (A*B*)*.

Proof. By Theorem 3.12, A ≤ A* and E ≤ B*. Applying Theorem 3.11, we have AB* ≤ A*B*. Hence, A = AE ≤ AB* ≤ A*B*, and so A ≤ A*B*. In a similar manner B ≤ A*B*. Thus, A + B ≤ A*B* and so (A + B)* ≤ (A*B*)*. Further, A ≤ A + B and B ≤ A + B imply A* ≤ (A + B)* and B* ≤ (A + B)*. Thus, A*B* ≤ (A + B)*(A + B)* ≤ (A + B)* and (A*B*)* ≤ ((A + B)*)* = (A + B)*. Combining (A + B)* ≤ (A*B*)* and (A*B*)* ≤ (A + B)*, we have the desired result.
This result was stated and proved for matrices over a
semiring by Yoeli (13).
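Over B0 the sum A* = A + A^2 + A^3 + ... stabilizes after finitely many terms, so Theorem 3.13 can be checked numerically. The Python sketch below (illustrative names, not from the thesis) computes A* as a fixed point and verifies the identity on random 0/1 matrices with unit diagonal, so that E ≤ A and E ≤ B as the theorem requires.

import random

def bmat_add(A, B):
    n = len(A)
    return [[A[i][j] | B[i][j] for j in range(n)] for i in range(n)]

def bmat_mul(A, B):
    n = len(A)
    return [[max(A[i][k] & B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def star(A):
    # A* = A + A^2 + A^3 + ...; over B0 the sum stabilizes, so iterate to a fixed point.
    S = A
    while True:
        T = bmat_add(S, bmat_mul(S, A))
        if T == S:
            return S
        S = T

def random_reflexive(n):
    # Random 0/1 matrix with 1s on the diagonal, so that E <= A.
    return [[1 if i == j else random.randint(0, 1) for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    random.seed(1)
    for _ in range(100):
        A, B = random_reflexive(4), random_reflexive(4)
        lhs = star(bmat_add(A, B))                 # (A + B)*
        rhs = star(bmat_mul(star(A), star(B)))     # (A* B*)*
        assert lhs == rhs
    print("Theorem 3.13 verified on 100 random matrices")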
Theorem 3.14. For A, B, C ∈ M, A(B·C) ≤ (AB)·(AC) and (A·B)C ≤ (AC)·(BC).

Proof. Let A = (aij), B = (bij) and C = (cij). Then A(B·C) = (εij) where εij = Σ aik·bkj·ckj (the sum over k = 1, 2, ..., n) for i, j = 1, 2, ..., n. Now AB = (Σ aik·bkj) and AC = (Σ aik·ckj), so that (AB)·(AC) = (βij) where

βij = (Σ aik·bkj)·(Σ aik·ckj) = Σ aik·bkj·ckj + Rij,

the sums again taken over k = 1, 2, ..., n. Hence εij + βij = βij for i, j = 1, 2, ..., n. Thus, by Theorem 3.2, A(B·C) ≤ (AB)·(AC).
CHAPTER IV

CONTACT NETWORKS

We now introduce the concept of a network and indicate how networks can be described in terms of Boolean matrices.

Definition 4.1. A contact network is a directed graph consisting of n terminals (or nodes) v1, v2, ..., vn with exactly one branch bij (i ≠ j) from each vi to each other vj, each branch being weighted by an element wij of the set Bm (see Definitions 2.3-2.4) of Boolean functions of the m variables x1, x2, ..., xm.

The variables x1, x2, ..., xm take on the role of contacts in our network with their values coming from B0. When two or more contacts always open and close simultaneously, we denote them by the same circuit variable. Thus, we do not distinguish between occurrences of a contact and the contact itself as a circuit variable. Thus, in Figure 1, x' of branch b14 may be a different contact than x' of branch b35, but they operate simultaneously.

We use the convention in which 1 stands for a closed circuit or contact and 0 for an open circuit. The parallel connection between contacts x and y is represented by x + y while their series connection is represented by x·y. In practice the dot is usually omitted. Thus x·y becomes xy.

Definition 4.2. A directed path of a contact network of length r from vi to vj is a sequence of r branches of the form b_{i k1}, b_{k1 k2}, ..., b_{k_{r-1} j}. If i = j, the path is called closed, otherwise open. If i, k1, k2, ..., k_{r-1}, j are all distinct, the path is called proper. An open path which is not proper is redundant.
Definition 4.3. The weight w of a directed path is called the conductance of the path and is defined as the product of its branch weights, i.e. w = w_{i k1}·w_{k1 k2} ... w_{k_{r-1} j}.

Theorem 4.1. The maximal length of any proper path of a network of n terminals is n - 1 [see (13)].

Proof. Consider an open path p of length r > n - 1. It has the form b_{i k1}, b_{k1 k2}, ..., b_{k_{r-1} j}. There are (r - 1) + 2 = r + 1 subscripts here. But since r + 1 > n, this must mean not all of the subscripts i, k1, k2, ..., k_{r-1}, j are distinct. Hence, p is not a proper path.

Theorem 4.2. If w is the weight of an open path p from vi to vj, then there exists a proper path from vi to vj with weight w* such that w ≤ w* [see (13)].

Proof. Let p have the form b_{i k1}, b_{k1 k2}, ..., b_{k_{r-1} j}. Since p is redundant (if not, there is nothing to prove), there must be a coincidence, say kq = kp. Then p has the form b_{i k1}, ..., b_{k_{q-1} kq}, b_{kq k_{q+1}}, ..., b_{k_{p-1} kp}, b_{kp k_{p+1}}, ..., b_{k_{r-1} j}. We can omit some of the branches of p to obtain a new path of the form b_{i k1}, ..., b_{k_{q-1} kp}, b_{kp k_{p+1}}, ..., b_{k_{r-1} j}. Continuing to eliminate some of the branches, by repeated application we finally obtain a proper path p*. By Theorem 2.3, we have w ≤ w*, where w* is the weight of the proper path p*.
In a contact network without diodes, the conductance from vi to vj is the same as that from vj to vi. In actual practice, a great number of diode elements are used in the network to prevent an unwanted (sneak) circuit and to reduce the number of contacts required. A diode element is any device that allows current to pass in one direction only.

For a given n-terminal contact network A we will associate an n x n matrix, where the i, jth entry is the conductance of a path from vi to vj.

Definition 4.4. The n x n Boolean matrix A = (aij) such that E ≤ A, where aij = wij (i ≠ j), is called the immediate conductance matrix.

In our definitions for conductance matrices, note that aii = 1 (E ≤ A), since a terminal is always connected to itself by a closed circuit.

If in the network A, diode elements are not present, then the conductance matrix A will be symmetric, i.e. aij = aji. The matrix A does not define uniquely an n-terminal network. For example, the three-terminal networks of Figure 1 have the same immediate conductance matrix. This example is due to Hohn and Schissler (2).
Figure 1. Two three-terminal networks having the same immediate conductance matrix (diagrams not reproduced; the first network uses non-terminal nodes v4 and v5, the second a single non-terminal node v4). The common immediate conductance matrix is

    1      x      x'y
    x      1      x'y'
    x'y    x'y'   1
Definition 4.5. Let vi and vj be any two distinct terminals (i ≠ j) of a contact network and aij be the corresponding immediate conductance. Then the complete conductance cij is the sum of the conductances of all paths from vi to vj.

By the definition above, for the complete conductance, we may write cij = Σ a_{i k1}·a_{k1 k2} ... a_{km j}, where k1, k2, ..., km runs through all possible subsets of the terminals 1, 2, ..., i-1, i+1, ..., j-1, j+1, ..., n for m = 0, 1, 2, ..., n-2. The term aij corresponds to m = 0.

Definition 4.6. The n x n Boolean matrix C(A) = (cij) is called the characteristic of the network A, where cij = 1 if i = j and cij is the complete conductance if i ≠ j. A is the immediate conductance matrix of the network.

For the network of Figure 2, we have

[Figure 2. A four-terminal network; diagram not reproduced. Its immediate conductance matrix A and characteristic C(A) follow.]
A =
    1     x     0     y
    x     1     0     z
    0     0     1     w
    y     z     w     1

C(A) =
    1          x + yz     yw + xzw   y + xz
    x + yz     1          zw + xyw   z + xy
    yw + xzw   zw + xyw   1          w
    y + xz     z + xy     w          1

The characteristic C(A) of a network with immediate conductance matrix A in a certain sense characterizes the network. That is, each cij represents all proper paths from vi to vj. This leads us to make a definition for the equivalence of two networks.
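Since the network of Figure 2 has n = 4 terminals, Theorem 5.3 of the next chapter gives C(A) = A^3 under the Boolean matrix product, so the entries above can be checked mechanically by evaluating both sides on every assignment of the contacts. The short Python sketch below does this for the (1,3) entry, yw + xzw; it is an illustration of the definitions only, not part of the thesis.

from itertools import product

def bmat_mul(A, B):
    n = len(A)
    return [[max(A[i][k] & B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def check_c13():
    # For every assignment of the contacts x, y, z, w in B0,
    # compare entry (1,3) of A^3 with the claimed formula yw + xzw.
    for x, y, z, w in product((0, 1), repeat=4):
        A = [[1, x, 0, y],
             [x, 1, 0, z],
             [0, 0, 1, w],
             [y, z, w, 1]]
        A3 = bmat_mul(bmat_mul(A, A), A)     # C(A) = A^(n-1) = A^3
        claimed = (y & w) | (x & z & w)      # yw + xzw
        assert A3[0][2] == claimed
    print("entry c13 = yw + xzw confirmed for all 16 assignments")

check_c13()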
Definition 4.7. Two n-terminal networks with immediate conductance matrices A and B are equivalent (written A ~ B) if and only if C(A) = C(B).

For the networks A and B of Figure 1, C(A) = C(B). Hence, A ~ B and the network B, requiring less hardware for its realization, may be substituted for A.

With a given n-terminal network, we can associate another type of Boolean matrix. We select and number certain nodes in the network n+1, n+2, ..., n+k. We call these nodes non-terminal to distinguish them from the terminal nodes 1, 2, ..., n of the network. The non-terminal nodes are so chosen that between any two of the n+k terminals of the network there appears at most a single contact or group of single contacts in parallel. Further, we assume every contact is included in the connection between some pair of terminals.

Definition 4.8. Let pij be the conductance between terminals vi and vj such that pij = 0 if there is no conductance and pij = 1 if there is a short circuit, but otherwise pij is the symbol denoting a single contact or a sum of such symbols. The (n+k) x (n+k) matrix P = (pij) is called the primitive conductance matrix of the network (2).

In the second network of Figure 1, if we select a single non-terminal node in the obvious way and number it 4, we obtain the primitive conductance matrix

P =
    1    x    0    y
    x    1    0    y'
    0    0    1    x'
    y    y'   x'   1
CHAPTER V

THE ANALYSIS OF CONTACT NETWORKS

The fundamental problem of the analysis of a contact network involves writing a conductance matrix corresponding to the network and then determining the corresponding characteristic. In this section, we present some algebraic methods of analysis developed by the Russian mathematician A. G. Lunts about 1950. Many of the theorems of this section will be presented without proof since they are readily available [see (2), (7), (8), (13)]. These methods with some modifications will be used in describing a computer oriented procedure for analyzing contact networks.

The first method of analysis arises from the concepts of determinant and adjoint for Boolean matrices [see Def. 3.3].

Theorem 5.1. If A is an n x n immediate conductance matrix corresponding to some contact network, then C(A) = Â, i.e. Cij(A) = |Aji|.

Proof. See Lunts (7) [also see (13)]. Yoeli generalizes this result to matrices over a semiring.

The second method of analysis stems from Theorem 4.1 and the following theorem.

Theorem 5.2. If A is an n x n immediate conductance matrix, then A^{n-1} = A^n = A^{n+1} = ....

Proof. See Lunts (7).

Theorem 5.3. If A is an n x n conductance matrix and C(A) the corresponding characteristic, then C(A) = A^{n-1}.

Proof. See Lunts (7).
As an immediate consequence of Theorems 5.1, 5.2, and 5.3, we have the following corollary.

Corollary. If A is an n x n conductance matrix, and Â is the matrix adjoint of A, then AÂ = C(A).

In regard to Theorem 5.3, because of the behavior of the entries of A, it may happen that C(A) = A^r for r < n - 1. In the case where aij ∈ Bm, this occurs frequently. In other words, in determining C(A) in the manner of Theorem 5.3, we can shorten the process by forming AA = A^2, then A^2 A^2 = A^4, continuing until A^{2^k} = A^{n-1}. This will occur when 2^k ≥ n - 1, i.e. for k = [ln (n-1)/ln 2] + 1, where [a] is the greatest integer not greater than a.
Consider the matrix product AA = A^2 where a(2)ij = Σ aik·akj (k = 1, 2, ..., n). Notice that we must preserve the matrix A until the formation of A^2 is complete. We would like to transform A into C(A) without having to maintain a second matrix. We can do this by replacing each aij by Σ aik·akj before proceeding to the next entry. We process the elements of A beginning with a11, proceeding through the first row, then the second row, continuing through each row from left to right, until the last. Let A^(1) be the result of such a transformation on A. Since aij is one of the terms of Σ aik·akj, we will have A^2 ≤ A^(1). Denoting by A^(2) the result of transforming A^(1) into itself in the same manner, we will have A^4 ≤ A^(2). Continuing, we will have A^{n-1} ≤ A^(k) when k = [ln (n-1)/ln 2] + 1. Hence, since the maximum length of a proper path in an n-terminal network is n - 1, we have the following result.
Theorem 5.4. If A is an n x n immediate conductance matrix, then C(A) = A^(k), where k = [ln (n-1)/ln 2] + 1, and the elements of A^(p) are formed by replacing each entry aij of A^(p-1) by Σ aik·akj (k = 1, 2, ..., n), each entry being replaced in place before proceeding to the next.

In the process described in Theorem 5.4, an interesting result occurs. It appears that k could be less than [ln (n-1)/ln 2] + 1. In fact, by direct computation, we have shown that for n ≤ 6, k = 2, while [ln 5/ln 2] + 1 = 3. For large n, considerable saving may occur in the number of iterations we must transform A. For large n, we could maintain A^(p-1) while computing A^(p), then compare A^(p-1) and A^(p). If A^(p-1) = A^(p), then A^(p) = C(A) and no further processing is required. This, of course, would have the disadvantage of requiring storage for an auxiliary matrix.

The method of analysis of Theorem 5.4 will be used in the procedures for machine analysis of networks. It has the advantage of not requiring a duplicate matrix or additional temporary storage locations.
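Restricted to entries in B0, so that + and · become the word operations OR and AND, the in-place transformation of Theorem 5.4 can be sketched in Python as follows. This is an illustration only; the CHAR procedure of the next chapter carries out the same computation on polynomial entries.

from math import floor, log

def char_in_place(A):
    # Transform A (a 0/1 matrix with unit diagonal) into C(A) in place,
    # replacing each entry by the Boolean sum over k of aik * akj
    # before moving on to the next entry.
    n = len(A)
    k = floor(log(n - 1) / log(2)) + 1 if n > 2 else 1
    for _ in range(k):
        for i in range(n):
            for j in range(n):
                A[i][j] = max(A[i][p] & A[p][j] for p in range(n))
    return A

if __name__ == "__main__":
    # Immediate conductance matrix of a four-terminal chain v1-v2-v3-v4.
    A = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 1, 1, 1],
         [0, 0, 1, 1]]
    print(char_in_place(A))   # every pair of terminals is connected: all entries become 1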
Another method of analysis of a contact network depends on the notion of the characteristic function (7).

Definition 5.1. The characteristic function of the n x n conductance matrix A is the Boolean function fA(x1, x2, ..., xn) given by

fA(x1, x2, ..., xn) = Σ aij·xi·xj'   (the sum taken over i, j = 1, 2, ..., n)

where A = (aij).
Theorem 5.5. If A and B are n x n conductance matrices of two contact networks, then C(A) ≤ C(B) if and only if fA(x1, x2, ..., xn) ≤ fB(x1, x2, ..., xn).

The proof of the above result depends on the following two lemmas. For a proof of the theorem and the lemmas see (7).

Lemma 5.1. fA(x1, x2, ..., xn) = f_{C(A)}(x1, x2, ..., xn).

Lemma 5.2. fA(C_{β1}(A), C_{β2}(A), ..., C_{βn}(A)) = 0 for arbitrary β = 1, 2, ..., n.

Thus the characteristic function characterizes the operation of the network just as the characteristic matrix does. To show A and B are equivalent networks we need only establish that fA(x1, x2, ..., xn) = fB(x1, x2, ..., xn).
The characteristic function can also be used for removal of non-terminal nodes of a primitive conductance matrix. Consider a non-terminal node vr in a network with k non-terminal nodes and n terminal nodes. We want to remove vr in such a manner that the characteristic of the network on the remaining terminals will be the same. The procedure was first developed by Lunts (7) [see also (2)]. Conductances air and arj provide a path from vi to vj with conductance air·arj. We replace the conductances of the given network with others such that between each pair of terminals vi, vj (i, j ≠ r), there appears a conductance corresponding to aij + air·arj, and then remove all conductances air and arj between the non-terminal node vr and the other terminals of the network. Thus, we remove the non-terminal node vr, but we retain the same complete conductance on each pair of remaining terminals.

Definition 5.2. The greatest lower bound of a Boolean function f(x) is a variable which is less than or equal to f(x) and is greater than or equal to any variable which is less than or equal to f(x).

Theorem 5.5. The greatest lower bound of a Boolean function, f(x), is f(0)·f(1).
Proof. Since the range of the variable x is the set {0, 1}, we can write f(x) = x·f(1) + x'·f(0). Applying Theorem 2.1, iii, and Definition 2.1, P2, we have

f(x) = x·f(1)·[1 + f(0)] + x'·f(0)·[1 + f(1)]
     = x·f(1) + x'·f(0) + f(0)·f(1)      (Def. 2.1, P3, P4)
     = f(x) + f(0)·f(1).

Hence, by Def. 2.2, f(0)·f(1) ≤ f(x) and so f(0)·f(1) is a lower bound. To show f(0)·f(1) is the greatest lower bound, let b be any lower bound of f(x). Then, b ≤ f(0) and b ≤ f(1). Applying Theorem 2.3 and the transitive property of the relation ≤ (Theorem 2.2, iii), we have b ≤ f(0)·f(1). Thus, f(0)·f(1) is the greatest lower bound.

Definition 5.3. The removal of x from f(x) by replacing f(x) by f(0)·f(1) is called the exclusion of the variable x from f(x) and will be denoted by (Ex)f(x) = f(0)·f(1).
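A tiny illustration (names assumed, not from the thesis): if a Boolean function of x is given as a Python callable over {0, 1}, its greatest lower bound with respect to x, and hence the exclusion (Ex)f(x), is simply the product f(0)·f(1).

def exclude(f):
    # (Ex)f(x) = f(0) * f(1): the greatest lower bound of f over x.
    return f(0) & f(1)

# Example: f(x) = x*a + x'*b for fixed contact values a, b in B0.
for a in (0, 1):
    for b in (0, 1):
        f = lambda x, a=a, b=b: (x & a) | ((1 - x) & b)
        assert exclude(f) == (a & b)    # excluding x from xa + x'b gives ab
print("exclusion check passed")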
Lunts (7) has shown that if we exclude the variable xn from the characteristic function fA(x1, x2, ..., xn), we shall obtain a characteristic function of some (n-1)-terminal network fB(x1, x2, ..., x_{n-1}), where bij = aij + ain·anj (i, j = 1, 2, ..., n-1). In other words, if

A =
    a11          a12          ...   a1,n-1         a1n
    a21          a22          ...   a2,n-1         a2n
    ...
    a_{n-1,1}    a_{n-1,2}    ...   a_{n-1,n-1}    a_{n-1,n}
    an1          an2          ...   a_{n,n-1}      ann

we have

B =
    a11 + a1n·an1              ...   a1,n-1 + a1n·a_{n,n-1}
    ...
    a_{n-1,1} + a_{n-1,n}·an1  ...   a_{n-1,n-1} + a_{n-1,n}·a_{n,n-1}
Furthermore, the following theorem holds:

Theorem 5.6. In the exclusion of the terminal vn from the n-terminal network A, the elements of the characteristic corresponding to the network B, formed from the remaining terminals, remain unchanged, i.e. Cij(A) = Cij(B) for i, j = 1, 2, ..., n-1.

Proof. See (7).

Thus, given a primitive conductance matrix of n + k nodes (n terminal and k non-terminal), we can obtain the immediate conductance matrix by repeated application of the exclusion process until no non-terminal nodes remain.
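The node-exclusion step bij = aij + ain·anj is easy to sketch in Python, again restricted to B0 entries for illustration (the EX procedure of Chapter VI performs the same replacement on polynomial entries); the function name is an assumption.

def exclude_node(P, r):
    # Remove node r (0-based index) from the conductance matrix P,
    # replacing p_ij by p_ij + p_ir * p_rj for the remaining nodes.
    keep = [i for i in range(len(P)) if i != r]
    return [[P[i][j] | (P[i][r] & P[r][j]) for j in keep] for i in keep]

if __name__ == "__main__":
    # A three-terminal network with one non-terminal node (index 3);
    # the contacts are fixed at 0/1 here purely to illustrate the arithmetic.
    P = [[1, 1, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 1, 1],
         [0, 1, 1, 1]]
    print(exclude_node(P, 3))   # [[1, 1, 0], [1, 1, 1], [0, 1, 1]]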
Since for a two-terminal network the immediate conductance matrix is its characteristic, we can obtain Cij(A) by excluding successively the terminals vn, v_{n-1}, ..., v_{j+1}, v_{j-1}, ..., v_{i+1}, v_{i-1}, ..., v1.

We illustrate the exclusion of non-terminal nodes 4 and 5 of the first network of Figure 1. As primitive conductance matrix, we have

    1    x    0    x'   0
    x    1    0    0    y'
    0    0    1    y    x'
    x'   0    y    1    y
    0    y'   x'   y    1

Excluding the terminal v5, we have

    1    x     0     x'
    x    1     x'y'  0
    0    x'y'  1     y
    x'   0     y     1

Excluding the terminal v4, we have

A =
    1     x     x'y
    x     1     x'y'
    x'y   x'y'  1

which is the immediate conductance matrix shown in Figure 1.
CHAPTER VI

MACHINE ALGORITHMS FOR CONTACT NETWORKS

In the preceding sections the general concept of a contact network has been introduced and Boolean matrix methods of analysis of such networks have been discussed. We now describe procedures for a digital computer for Boolean matrices as applied to switching network analysis. The language of these procedures will be ALGOL 60, the International Algorithmic Language (9).

The problem of representing Boolean functions (elements of Bm) in a digital computer has been studied in detail by Witcraft (12). We sketch briefly the representation presented in his paper. For a detailed description consult Chapter IV of his paper.

The computer's arithmetic unit is binary with a word length of at least m bits where m is the number of contact variables in the network. The computer further has the usual capabilities of forming the logical sum and product of two words, forming the one's complement of a word, shifting the bits of a word and counting the number of shifts.

The machine representation of Boolean functions is restricted to polynomials. This is not a great restriction, since from the distributive postulates (Def. 2.1, P3) and the duality theorems (Thm. 2.1, vi and vii) of a Boolean algebra, it follows that any Boolean function can be expressed as a polynomial.
Consider the polynomial P = p1 + p2 + ... + pt where pj is a monomial. P will be represented in the machine as a sequence of word pairs (Mp1, M*p1), ..., (Mpt, M*pt), where each word pair corresponds to a monomial pj. Each monomial pj may have up to m variables representing contacts in the network, so for each of x1, x2, ..., xm we indicate whether xi is a factor of pj, or xi' is a factor of pj, or neither xi nor xi' is a factor of pj, by the following correspondence:

i. [Mpj]i = 1 and [M*pj]i = 0 if xi is a factor of pj;
ii. [Mpj]i = 1 and [M*pj]i = 1 if xi' is a factor of pj;
iii. [Mpj]i = 0 and [M*pj]i = 0 if neither xi nor xi' is a factor of pj.
For example, suppose m = 4 and we want to represent the polynomial x1x2' + x1x2x3. Then, the machine representation will be either (1100, 0100), (1110, 0000) or (1110, 0000), (1100, 0100), the ordering of the set of word pairs being immaterial.
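The word-pair encoding is easy to mimic with machine words. The following Python sketch (illustrative names; the actual routines are Witcraft's, see (12)) packs a monomial into the pair (M, M*) under the correspondence above, with bit i counted from the left as in the example.

def encode_monomial(pos, neg, m):
    # Return (M, M*) as m-bit strings: bit i is 1 in M when x_i occurs as a factor,
    # and 1 in both M and M* when x_i occurs complemented.
    M  = ['0'] * m
    Ms = ['0'] * m
    for i in pos:                 # x_i is a factor
        M[i - 1] = '1'
    for i in neg:                 # x_i' is a factor
        M[i - 1] = '1'
        Ms[i - 1] = '1'
    return ''.join(M), ''.join(Ms)

if __name__ == "__main__":
    print(encode_monomial(pos={1}, neg={2}, m=4))          # x1*x2'      -> ('1100', '0100')
    print(encode_monomial(pos={1, 2, 3}, neg=set(), m=4))  # x1*x2*x3    -> ('1110', '0000')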
In addition, associated with each sequence of word pairs representing the terms of a polynomial is a location word (P, m). P is the memory location of the first monomial, and m is the number of monomials in the polynomial. In case the polynomial is a Boolean constant, i.e. 0 or 1, the location word will contain that information.

The user of the subroutines for handling Boolean polynomials thus need only refer to the location words in writing a program. Hence, our procedures for Boolean matrices will only require two pieces of information for each entry in the matrix, namely, the storage location of the polynomial and the number of monomials of the polynomial.

We now describe briefly the code procedures which we use for manipulating Boolean polynomials. For a detailed description of these routines see (12).
BPP (P1, m1, P2, m2, P3, m3);

comment BPP is a code procedure which forms the product of two Boolean polynomials in machine representation. On entry, the location words (P1, m1) and (P2, m2) of the operands and P3, the location at which the product is to be stored, must be provided. On exit, the location word (P3, m3) of the product is available.

BPA (P1, m1, P2, m2, P3, m3);

comment BPA is a code procedure which forms the sum of two Boolean polynomials in machine representation. Entry and exit data are as described above for BPP.

QCF (P1, m1, P2, m2);

comment QCF is a code procedure which forms Quine's canonical form (see Chapter II) of a Boolean polynomial in machine representation. On entry, the location word (P1, m1) of the operand and P2, the location at which the canonical form is to be stored, must be provided. On exit, the location word of the result, (P2, m2), is available.

BPE (P1, m1, P2, m2, I);

comment BPE is a code procedure which checks two Boolean polynomials in machine representation for equality. The two polynomials must be in Quine's canonical form (QCF). On entry, the location words (P1, m1) and (P2, m2) of the operands must be provided. On exit, the indicator I has a logical value, i.e. true or false, depending on whether P1 = P2 or P1 ≠ P2.
assume there is sufficient random access storage
available in which to store the procedures and all of the data.
35
This assumption is generally not true, so that a limitation
depending on the size of the memory unit of the computer would
have to be placed on either the length of the polynomials in each
entry or on the number
of
terminals in the network.
or output procedures are included.
For
a
No input
description
of input -
output routines consult (12).
We
describe four procedures (subroutines) for computer
manipulation of Boolean matrices as applied to contact networks.
In
these procedures it will be assumed that the integer arrays A
and a corresponding to the storage locations and the number of
monomials, respectively, for each entry in the matrix are already
stored in the computer memory. L is the first memory location
available for storing results, and further results are stored in
sequence in locations L +1, L +2,
...
n is the
.
number of
terminals in the network.
The first procedure, identified by CHAR, transforms an nth order conductance matrix A into its characteristic C(A) according to Theorem 5.4. In addition CHAR forms Quine's canonical form of each entry in C(A). Thus, on exit, C(A) can be compared with C(B) for the equivalence of A and B as given by Definition 4.7.

The second procedure described below is identified by CHAR FUN. Let A = (aij) be an n x n conductance matrix. Let X = (xij) where xij = xi·xj'. CHAR FUN forms the characteristic function of the matrix A by forming A·X (Boolean multiplication, not matrix multiplication), and summing all the elements in the result. In addition, the characteristic function is placed in canonical form. Thus, on exit, fA is available for comparison with some fB (previously determined) to see if A is equivalent to B as given by Theorem 5.5 and Definition 4.7.

The third procedure, identified by EX, excludes the terminal vr from a network by replacing aij by aij + air·arj for i, j ≠ r. Thus, EX forms an (n-1) x (n-1) matrix B from an n x n matrix A, so that Cij(A) = Cij(B) for i, j = 1, 2, ..., n-1. EX is used for removing non-terminal nodes.

The last procedure has the identifier EQ. It compares two Boolean matrices of the nth order for equality. Each entry of the matrices must be in the same canonical form before activating the procedure. Thus, if C(A) and C(B) are the characteristics corresponding to two networks A and B, then EQ answers the question, is A ~ B? The answer will be in the form of a Boolean logical value, i.e. true or false.
MACHINE ALGORITHMS

procedure CHAR (A, a, L, n); value n;
integer array A, a; integer L, n;
begin integer i, j, k, s, m, t;
   integer array P[1:3], p[1:3];
   m := entier (ln(n-1)/ln(2)) + 1;
   s := L;
   t := 0;
LL: t := t + 1;
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
   begin P[1] := s;
      BPP (A[i,1], a[i,1], A[1,j], a[1,j], P[1], p[1]);
      s := s + 2 × p[1];
      for k := 2 step 1 until n do
      begin P[2] := s;
         BPP (A[i,k], a[i,k], A[k,j], a[k,j], P[2], p[2]);
         s := s + 2 × p[2];
         P[3] := s;
         BPA (P[1], p[1], P[2], p[2], P[3], p[3]);
         s := s + 2 × p[3];
         P[1] := P[3]; p[1] := p[3]
      end;
      A[i,j] := P[3]; a[i,j] := p[3]
   end;
   if t < m then go to LL;
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
   begin P[1] := s;
      QCF (A[i,j], a[i,j], P[1], p[1]);
      s := s + 2 × p[1];
      A[i,j] := P[1]; a[i,j] := p[1]
   end;
   L := s
end CHAR
procedure CHAR FUN (A, a, X, x, F, f, L, n); value n;
integer array A, a, X, x; integer F, f, L, n;
begin integer i, j, s;
   integer array P[1:3], p[1:3];
   s := L;
   P[1] := s;
   BPP (A[1,1], a[1,1], X[1,1], x[1,1], P[1], p[1]);
   s := s + 2 × p[1];
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
   begin P[2] := s;
      BPP (A[i,j], a[i,j], X[i,j], x[i,j], P[2], p[2]);
      s := s + 2 × p[2];
      P[3] := s;
      BPA (P[1], p[1], P[2], p[2], P[3], p[3]);
      s := s + 2 × p[3];
      P[1] := P[3]; p[1] := p[3]
   end;
   F := s;
   QCF (P[3], p[3], F, f);
   s := s + 2 × f;
   L := s
end CHAR FUN
procedure EX (A, a, L, r, n); value r, n;
integer array A, a; integer L, r, n;
begin integer i, j, s;
   integer array P[1:2], p[1:2];
   s := L;
   for i := 1 step 1 until r-1, r+1 step 1 until n do
   for j := 1 step 1 until r-1, r+1 step 1 until n do
   begin P[1] := s;
      BPP (A[i,r], a[i,r], A[r,j], a[r,j], P[1], p[1]);
      s := s + 2 × p[1];
      P[2] := s;
      BPA (A[i,j], a[i,j], P[1], p[1], P[2], p[2]);
      s := s + 2 × p[2];
      A[i,j] := P[2]; a[i,j] := p[2]
   end;
   L := s
end EX
procedure EQ (A, a, B, b, I, n); value n;
integer array A, a, B, b; Boolean I; integer n;
begin integer i, j;
   I := true;
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
   begin BPE (A[i,j], a[i,j], B[i,j], b[i,j], I);
      if ¬ I then go to LL
   end;
LL: end EQ
BIBLIOGRAPHY

1. Baker, James J. A note on multiplying Boolean matrices. Communications of the Association for Computing Machinery 5:102. 1962.

2. Hohn, F. E. and L. R. Schissler. Boolean matrices and the design of combinational relay switching circuits. The Bell System Technical Journal 34:177-202. 1955.

3. Hohn, F. E., S. Seshu and D. D. Aufenkamp. The theory of nets. Transactions of the Institute of Radio Engineers EC-6:154-161. 1957.

4. Huntington, E. V. Sets of independent postulates for the algebra of logic. Transactions of the American Mathematical Society 5:288-309. 1904.

5. Laxdal, A. L. A mechanization of Quine's canonical form. Master's thesis. Corvallis, Oregon State College, 1959. 39 numb. leaves.

6. Luce, R. Duncan. A note on Boolean matrix theory. Proceedings of the American Mathematical Society 3:382-388. 1952.

7. Lunts, A. G. Algebraic methods of analysis and synthesis of contact networks. Akademiia Nauk SSSR Izvestia, Ser. Matematika 16:405-426. 1952. (Tr. by H. E. Goheen, Dept. of Mathematics, Oregon State University, Corvallis, Oregon.)

8. Lunts, A. G. The application of Boolean matrix algebra to the analysis and synthesis of relay contact networks. Akademiia Nauk SSSR Doklady 70:421-423. 1950.

9. Naur, Peter (ed.), et al. Report on the algorithmic language ALGOL 60. Communications of the Association for Computing Machinery 3:299-314. 1960.

10. Warshall, Stephen. A theorem on Boolean matrices. Journal of the Association for Computing Machinery 9:11-12. 1962.

11. Whitesitt, J. Eldon. Boolean algebra and its applications. Reading, Massachusetts, Addison-Wesley, 1961. 182 p.

12. Witcraft, D. A. The mechanization of logic II. Master's thesis. Corvallis, Oregon State College, 1960. 108 numb. leaves.

13. Yoeli, M. A note on a generalization of Boolean matrix theory. The American Mathematical Monthly 68:552-557. 1961.
APPENDIX

TWO SPECIAL CASES

Suppose A is a conductance matrix where for each aij ∈ A either aij = 1 or aij = 0. Thus, there are no contacts in the network. There are only closed and open circuits. Baker (1) and Warshall (10) report the use of this type of Boolean matrix in the analysis of flow charts and in handling other types of problems in program topology.
Definition 1. Let A = (Aij) be an n x n Boolean matrix where Aij = 0 or 1. Define B by the following algorithm:

begin
   comment The logical values true and false, instead of their common representation by 1 and 0, respectively, are used. ∨ and ∧ correspond to + and · ;
   integer i, j, k;
   Boolean array A, B[1:n, 1:n];
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
   begin if A[j,i] then
      for k := 1 step 1 until n do
         A[j,k] := A[j,k] ∨ A[i,k]
   end;
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
      B[i,j] := A[i,j]
end

Theorem 1. Let A be transformed into B by the above algorithm. Then B = C(A).

Proof. See (10).
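For comparison, the same computation can be written as a short Python sketch (an illustration only; the claim B = C(A) is Warshall's, see (10)).

def warshall_characteristic(A):
    # Transform a 0/1 conductance matrix (unit diagonal) into C(A),
    # following the algorithm of Definition 1: for each pivot i, OR row i
    # into every row j that currently has a 1 in column i.
    n = len(A)
    B = [row[:] for row in A]   # work on a copy; the ALGOL text copies A into B at the end
    for i in range(n):
        for j in range(n):
            if B[j][i]:
                for k in range(n):
                    B[j][k] |= B[i][k]
    return B

if __name__ == "__main__":
    A = [[1, 1, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 1],
         [0, 0, 0, 1]]
    print(warshall_characteristic(A))   # the upper triangle fills in completely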
An apparently
similar algorithm is described without
proof in a short note by Baker (1). It is actually quite different
and, in fact, more laborious.
Definition 2. Let A = (Aij) be an n x n Boolean matrix where Aij = 0 or 1. Define B by the following algorithm:

begin
   integer i, j, k; Boolean array A, B[1:n, 1:n];
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
      B[i,j] := A[i,j];
LL: for i := 1 step 1 until n do
   for j := 1 step 1 until n do
      if B[i,j] then
         for k := 1 step 1 until n do
            B[i,k] := B[i,k] ∨ B[j,k];
   for i := 1 step 1 until n do
   for j := 1 step 1 until n do
      if ¬ (B[i,j] ≡ A[i,j]) then
      begin
         for i := 1 step 1 until n do
         for j := 1 step 1 until n do
            A[i,j] := B[i,j];
         go to LL
      end
end
Theorem 2. Let A be transformed into B by the above algorithm. Then B = C(A).
Proof. Consider the statement B[i,k] := B[i,k] ∨ B[j,k] in the algorithm above. This is equivalent to replacing Bik by Bik + Bjk for k = 1, 2, ..., n provided Bij = 1. The only way a change can occur in Bik is for Bjk to be 1. But, if Bjk = 1 (since Bij = 1), we have a path from vi to vk through vj, i.e. Bij·Bjk = 1. Hence, Bik should be set to 1. Obviously, the result of such a transformation forms a new matrix greater than or equal to the original matrix (since we replace 0 by 1, but never 1 by 0). Hence, when one complete pass through the matrix has been made without changing it, we must have all possible paths through the network.