Introduction to Financial Mathematics
Algebra Notes
School of Mathematical Sciences
The University of Adelaide
version 2023
Contents

1 Matrices and linear equations
  1.1 Matrices
    1.1.1 Special matrices
  1.2 Matrix operations
    1.2.1 Matrix addition
    1.2.2 Scalar multiplication
    1.2.3 Matrix multiplication
    1.2.4 Matrix powers
    1.2.5 Matrix transpose
  1.3 Example applications
    1.3.1 Warehouse stock problem
    1.3.2 Encryption and decryption - Hill cipher
  1.4 Linear equations
    1.4.1 Defining linear equations
    1.4.2 Linear equations in matrix form
    1.4.3 Geometrical representation of linear equations
    1.4.4 Homogeneous equations
  1.5 Elementary operations
    1.5.1 Elementary operations on linear equations
    1.5.2 Elementary row operations on matrices
    1.5.3 Row echelon form
    1.5.4 Reduced row echelon form
  1.6 The solution set
    1.6.1 Defining the solution set
    1.6.2 Row operations and the solution set
    1.6.3 Reduced row echelon form and the solution set
  1.7 Inverse matrix
    1.7.1 Finding the inverse matrix
    1.7.2 Finding a unique solution
  1.8 Determinants
    1.8.1 Condition for a matrix inverse
    1.8.2 Determinant of a 2 × 2 matrix
    1.8.3 Determinant of a triangular matrix
    1.8.4 Properties of determinants
    1.8.5 Determinant of an n × n matrix

2 Leontief economic models
  2.1 Leontief open economic model
    2.1.1 A profitable industry
    2.1.2 A productive economy
  2.2 Leontief closed economic model

3 Optimisation
  3.1 Inequalities
    3.1.1 Inequalities in one variable
    3.1.2 Inequalities in two variables
  3.2 Optimisation application
  3.3 Linear programming
  3.4 Graphical method of solution
    3.4.1 Classifying the feasible region
    3.4.2 Convex sets
    3.4.3 Standard form and the convex feasible region
    3.4.4 Feasible regions and objective function solutions
  3.5 Algebraic method of solution
    3.5.1 Obtaining vertices algebraically
    3.5.2 Obtaining the optimal solution
  3.6 The simplex algorithm
  3.7 Problem formulation
1 Matrices and linear equations

Contents
1.1 Matrices
  1.1.1 Special matrices
1.2 Matrix operations
  1.2.1 Matrix addition
  1.2.2 Scalar multiplication
  1.2.3 Matrix multiplication
  1.2.4 Matrix powers
  1.2.5 Matrix transpose
1.3 Example applications
  1.3.1 Warehouse stock problem
  1.3.2 Encryption and decryption - Hill cipher
1.4 Linear equations
  1.4.1 Defining linear equations
  1.4.2 Linear equations in matrix form
  1.4.3 Geometrical representation of linear equations
  1.4.4 Homogeneous equations
1.5 Elementary operations
  1.5.1 Elementary operations on linear equations
  1.5.2 Elementary row operations on matrices
  1.5.3 Row echelon form
  1.5.4 Reduced row echelon form
1.6 The solution set
  1.6.1 Defining the solution set
  1.6.2 Row operations and the solution set
  1.6.3 Reduced row echelon form and the solution set
1.7 Inverse matrix
  1.7.1 Finding the inverse matrix
  1.7.2 Finding a unique solution
1.8 Determinants
  1.8.1 Condition for a matrix inverse
  1.8.2 Determinant of a 2 × 2 matrix
  1.8.3 Determinant of a triangular matrix
  1.8.4 Properties of determinants
  1.8.5 Determinant of an n × n matrix

1.1 Matrices
Definition 1.1. An m×n order matrix, with m, n ∈ N , is an array
of numbers (or symbols) arranged across m rows and n columns:
$$A = [a_{ij}] = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix},$$
where aij is the matrix element in the ith row and jth column for
i = 1, 2, . . . m and j = 1, 2, . . . n .
Example 1.1. . . . . . .
Example 1.2. . . . . . .
Definition 1.2. The matrices A and B are equal if they have equal
order and all the corresponding elements of A and B are equal. For
A = [aij ] and B = [bij ] both of order m × n ,
A = B iff aij = bij , for all i = 1, . . . , m and j = 1, . . . , n .
Example 1.3. . . . . . .
1.1.1 Special matrices
Definition 1.3. An m dimensional row vector is a matrix of
order 1 × m . It has one row and m columns:
$$A = \begin{bmatrix} a_{11} & \cdots & a_{1m} \end{bmatrix}.$$
We usually drop the unchanging subscript, write the vector name
in bold font, use lower case and, for clarity, separate elements by
commas:
a = [a1 , . . . , am ] .
Definition 1.4. An m dimensional column vector is a matrix of
order m × 1 . It has one column and m rows. Like row vectors, we
usually drop the unchanging subscript, write the vector name in
bold font and use lower case.
 
$$\mathbf{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_m \end{bmatrix}.$$
Also, to save space when typesetting we will often write column vectors on one line and identify them as column vectors by adding a superscript T to indicate the transpose¹, which swaps rows for columns:
a = [a1 , . . . , am ]T .
Example 1.4. . . . . . .
Definition 1.5. An m × n zero matrix has each element equal to
zero:


$$Z = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}.$$
We usually just write Z = 0 .
¹ The general definition of the transpose is given in Section 1.2.5.
Definition 1.6. A square matrix has the same number of rows as
columns so the order is n × n for some n ∈ N :


$$A = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nj} & \cdots & a_{nn} \end{bmatrix}.$$
Definition 1.7. A square diagonal matrix of order n×n is a square
matrix with all elements not on the diagonal equal to zero:


$$D = \begin{bmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & d_{nn} \end{bmatrix}.$$

Or, D = [dij ] where dij = 0 when i ≠ j for all i, j = 1, 2, . . . , n .
Definition 1.8. An identity matrix is a square diagonal matrix
with all the diagonal elements equal to one:


$$I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{bmatrix}.$$

For I = [Iij ] with order n × n , the elements are Iij = 1 when i = j and 0 otherwise for all i, j = 1, 2, . . . , n .
Example 1.5. . . . . . .
1.2 Matrix operations

1.2.1 Matrix addition
Definition 1.9. For two matrices A = [aij ] and B = [bij ] with
equal order m × n , addition is defined by
A + B = C
where C = [cij ] and cij = aij + bij ,
for all i = 1, . . . , m and j = 1, . . . , n .
Matrix addition has the following properties (assuming A, B, C
and the zero matrix have the same orders):
• (A + B) + C = A + (B + C) (associativity)
• A + B = B + A (commutativity)
• A + 0 = 0 + A = A.
Example 1.6. . . . . . .
1.2.2 Scalar multiplication
Definition 1.10. For some real number c ∈ R (c is called a ‘scalar’)
and some matrix A, to multiply A by c we multiply each element
of A by c:
cA = [caij ] .
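These two operations translate directly into array arithmetic; here is a minimal NumPy sketch (the matrices are made up for illustration and are not from the notes):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Matrix addition (Definition 1.9): element-wise, equal orders required.
C = A + B          # [[ 6  8] [10 12]]

# Scalar multiplication (Definition 1.10): every element multiplied by c.
D = 3 * A          # [[ 3  6] [ 9 12]]
```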
Example 1.7. . . . . . .
Example 1.8. . . . . . .
1.2.3 Matrix multiplication
Before we introduce matrix multiplication we first consider the
multiplication of two vectors.
Definition 1.11. The dot product, also known as the scalar product,
of two vectors v = [v1 , . . . , vm ] and w = [w1 , . . . , wm ] of equal
dimension m is defined by
$$\mathbf{v} \cdot \mathbf{w} = v_1 w_1 + v_2 w_2 + \cdots + v_m w_m = \sum_{i=1}^{m} v_i w_i .$$
Matrix multiplication is very different to addition. The product of
the m × p order matrix A and the q × n order matrix B exists iff
p = q , that is, the number of columns of A must equal the number
of rows of B. The resulting matrix has order m × n .
Definition 1.12. For matrix A with order m × p and matrix B
with order p × n the matrix product AB exists and has order m × n .
The element in the ij position of the product matrix AB is found
by using the dot product to multiply the ith row of A by the
jth column of B. For C = [cij ] = AB ,
cij = [ai1 , ai2 , . . . , aip ] · [b1j , b2j , . . . , bpj ]
= ai1 b1j + ai2 b2j + · · · + aip bpj .
In the following, the ith row of A and the jth column of B produce the element cij of C:

$$AB = \begin{bmatrix} a_{11} & \cdots & a_{1p} \\ \vdots & & \vdots \\ a_{i1} & \cdots & a_{ip} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mp} \end{bmatrix} \begin{bmatrix} b_{11} & \cdots & b_{1j} & \cdots & b_{1n} \\ \vdots & & \vdots & & \vdots \\ b_{p1} & \cdots & b_{pj} & \cdots & b_{pn} \end{bmatrix} = \begin{bmatrix} c_{11} & \cdots & c_{1j} & \cdots & c_{1n} \\ \vdots & & \vdots & & \vdots \\ c_{i1} & \cdots & c_{ij} & \cdots & c_{in} \\ \vdots & & \vdots & & \vdots \\ c_{m1} & \cdots & c_{mj} & \cdots & c_{mn} \end{bmatrix} = C ,$$

and the orders combine as

$$(m \times p)(p \times n) \implies m \times n \text{ is the order of } C ,$$

where the two inner orders (both p) must be equal.
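A minimal numerical sketch of Definition 1.12, with illustrative matrices; NumPy's @ operator performs exactly this row-by-column dot product:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # order 2 x 3 (m = 2, p = 3)
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])         # order 3 x 2 (p = 3, n = 2)

C = A @ B                      # order (2 x 3)(3 x 2) => 2 x 2

# One element computed directly as a dot product of a row of A
# with a column of B (0-indexed below).
c12 = np.dot(A[0, :], B[:, 1])
assert c12 == C[0, 1]
```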
Example 1.9. . . . . . .
Example 1.10. . . . . . .
Remark 1.1. In the above example we see that, in general, AB ≠ BA ; that is, matrix multiplication is non-commutative. Also, the existence of AB does not imply that BA exists.
Matrix multiplication has the following properties (assuming A, B,
C and the identity matrix I have suitable orders):
• (AB)C = A(BC) (associativity)
• (A + B)C = AC + BC (distributivity)
• A(B + C) = AB + AC (distributivity)
• c(AB) = (cA)B = A(cB) for scalar c
• IA = AI = A .
Example 1.11. . . . . . .
Example 1.12. . . . . . .
Remark 1.2. If we obtain zero when multiplying two real numbers a and b, then we know that at least one of a and b is zero. Matrix multiplication is different: if the product of two matrices A and B is a zero matrix, this does not mean that A and/or B must be a zero matrix (although they could be).
Example 1.13. . . . . . .
1.2.4 Matrix powers
A matrix can be multiplied by itself iff it is a square matrix. For
p ∈ N:
$$A^p = \underbrace{AA \cdots A}_{p \text{ times}} .$$
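For a square matrix this repeated product can be computed directly; a small sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])

# A^3 = AAA; matrix_power repeats the matrix product p times.
A3 = np.linalg.matrix_power(A, 3)
assert np.array_equal(A3, A @ A @ A)
```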
Example 1.14. . . . . . .
1.2.5 Matrix transpose
Definition 1.13. The transpose of an m × n matrix A = [aij ] is
the n × m matrix AT = [aji ] . The columns of AT are the rows of A.
That is:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix},$$

$$A^T = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{i1} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{i2} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{1j} & a_{2j} & \cdots & a_{ij} & \cdots & a_{mj} \\ \vdots & \vdots & & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{in} & \cdots & a_{mn} \end{bmatrix}.$$
The transpose of an m dimensional column vector (an m × 1 order
matrix) is an m dimensional row vector (a 1 × m order matrix):


$$\mathbf{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_m \end{bmatrix}, \qquad \mathbf{a}^T = \begin{bmatrix} a_1 & \cdots & a_m \end{bmatrix}.$$

Example 1.15. . . . . . .
The matrix transpose has the following properties (assuming A and B have suitable orders):
• (A + B)T = AT + B T
• (AT )T = A
• (cA)T = cAT for scalar c
• (Ap )T = (AT )p for any p ∈ N
• (AB)T = B T AT .
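The last property, with its reversal of order, is easy to check numerically; a sketch with random illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 3))
B = rng.random((3, 4))

# (AB)^T = B^T A^T: note the reversed order on the right hand side.
assert np.allclose((A @ B).T, B.T @ A.T)
```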
Example 1.16. . . . . . .
1.3 Example applications

1.3.1 Warehouse stock problem
A company supplies garden furniture made from sustainable plantation teak and has warehouses in Adelaide, Melbourne and Sydney. There are three types of furniture, labelled F , G and H. Adelaide holds 10 sets of F , 7 sets of G and 3 sets of H; Melbourne holds 5 sets of F , 9 sets of G and 6 sets of H; Sydney holds 4 sets of F , 8 sets of G and 2 sets of H. The retail prices of F , G and H are $800, $1000 and $1200, respectively.
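The worked example is left for class, but one natural matrix formulation is sketched below: put the holdings in a matrix with one row per warehouse and the prices in a column vector, so a matrix-vector product gives each warehouse's stock value.

```python
import numpy as np

# Rows: Adelaide, Melbourne, Sydney; columns: furniture types F, G, H.
S = np.array([[10, 7, 3],
              [ 5, 9, 6],
              [ 4, 8, 2]])

# Retail prices of F, G, H (dollars), as a column vector.
p = np.array([800, 1000, 1200])

# S @ p: total retail value of the stock held in each warehouse.
print(S @ p)   # [18600 20200 13600]
```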
Example 1.17. . . . . . .
1.3.2 Encryption and decryption - Hill cipher
We want to encrypt the message “good job”, and be able to decrypt
it. To encrypt we use the following steps.
1. Convert the letters into numbers using

   a=1   b=2   c=3   d=4   e=5   f=6   g=7   h=8   i=9
   j=10  k=11  l=12  m=13  n=14  o=15  p=16  q=17  r=18
   s=19  t=20  u=21  v=22  w=23  x=24  y=25  z=26  ‘ ’=27

so “good job” = 7, 15, 15, 4, 27, 10, 15, 2 .
2. Break this list of numbers into column vectors of some chosen order. We use column vectors of order 2 × 1 ,

$$\begin{bmatrix} 7 \\ 15 \end{bmatrix}, \quad \begin{bmatrix} 15 \\ 4 \end{bmatrix}, \quad \begin{bmatrix} 27 \\ 10 \end{bmatrix}, \quad \begin{bmatrix} 15 \\ 2 \end{bmatrix}.$$
3. Multiply the vectors (on the left) by a suitable matrix. We choose the matrix

$$A = \begin{bmatrix} 3 & 5 \\ 4 & 6 \end{bmatrix}.$$

So,

$$\begin{bmatrix} 3 & 5 \\ 4 & 6 \end{bmatrix}\begin{bmatrix} 7 \\ 15 \end{bmatrix} = \begin{bmatrix} 96 \\ 118 \end{bmatrix}, \quad \begin{bmatrix} 3 & 5 \\ 4 & 6 \end{bmatrix}\begin{bmatrix} 15 \\ 4 \end{bmatrix} = \begin{bmatrix} 65 \\ 84 \end{bmatrix},$$

$$\begin{bmatrix} 3 & 5 \\ 4 & 6 \end{bmatrix}\begin{bmatrix} 27 \\ 10 \end{bmatrix} = \begin{bmatrix} 131 \\ 168 \end{bmatrix}, \quad \begin{bmatrix} 3 & 5 \\ 4 & 6 \end{bmatrix}\begin{bmatrix} 15 \\ 2 \end{bmatrix} = \begin{bmatrix} 55 \\ 72 \end{bmatrix}.$$
4. The new column vectors are the encrypted message: 96, 118, 65, 84, 131, 168, 55, 72 .
How do we decrypt this message? To decrypt the message we need
a matrix M which satisfies M A = AM = I . If this matrix M
exists, then it is called the inverse of A. The matrix A does have
an inverse and it is
$$M = -\frac{1}{2}\begin{bmatrix} 6 & -5 \\ -4 & 3 \end{bmatrix}.$$
We check to see if M is the inverse of A
Now, to decrypt the message we use the same method used to
encrypt the message, but starting with the encrypted message and
using the matrix M . We break the encrypted message 96, 118, 65, 84, 131, 168, 55, 72 into column vectors and multiply them by M :
$$-\frac{1}{2}\begin{bmatrix} 6 & -5 \\ -4 & 3 \end{bmatrix}\begin{bmatrix} 96 \\ 118 \end{bmatrix} = \begin{bmatrix} 7 \\ 15 \end{bmatrix}, \quad -\frac{1}{2}\begin{bmatrix} 6 & -5 \\ -4 & 3 \end{bmatrix}\begin{bmatrix} 65 \\ 84 \end{bmatrix} = \begin{bmatrix} 15 \\ 4 \end{bmatrix},$$

$$-\frac{1}{2}\begin{bmatrix} 6 & -5 \\ -4 & 3 \end{bmatrix}\begin{bmatrix} 131 \\ 168 \end{bmatrix} = \begin{bmatrix} 27 \\ 10 \end{bmatrix}, \quad -\frac{1}{2}\begin{bmatrix} 6 & -5 \\ -4 & 3 \end{bmatrix}\begin{bmatrix} 55 \\ 72 \end{bmatrix} = \begin{bmatrix} 15 \\ 2 \end{bmatrix}.$$
The numbers in our new column vectors, 7, 15, 15, 4, 27, 10, 15, 2 ,
are now converted back into letters using the above table, revealing
the original message, “good job”.
Rather than creating many column vectors from the original message, a better option is to create one matrix from the original message:
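A sketch of that one-matrix approach in NumPy, using the key matrix A and inverse M from above; packing the message into a single 2 × 4 matrix is the only new step:

```python
import numpy as np

A = np.array([[3, 5],
              [4, 6]])
M = -0.5 * np.array([[ 6, -5],
                     [-4,  3]])          # M = A^{-1}

# "good job" as numbers, packed column-by-column into one 2 x 4 matrix.
P = np.array([[ 7, 15, 27, 15],
              [15,  4, 10,  2]])

C = A @ P                                # encrypt all columns at once
P_back = np.rint(M @ C).astype(int)      # decrypt; recovers P
print(C)        # columns: (96,118), (65,84), (131,168), (55,72)
print(P_back)
```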
1.4 Linear equations
Many economic and financial systems are approximated by sets of
linear equations. For example,
• 3x1 + 8x2 = 5 is linear in x1 and x2 ,
• 3x + 8y − 7z = 9 is linear in x, y and z,
• 3x^2 + 8y − 7z = 9 is not linear because it contains x^2 ,
• 3√x + 8y − 7z = 9 is not linear because it contains √x .
1.4.1 Defining linear equations
Definition 1.14. A system of m linear equations in n unknowns, x1 , x2 , . . . , xn , is a set of m equations of the form

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 , \\ &\;\;\vdots \\ a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n &= b_i , \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m , \end{aligned}$$

where bi ∈ R and the coefficients are aij ∈ R for i = 1, . . . , m and j = 1, . . . , n .
The above system of m linear equations has solution set (s1 , s2 , . . . , sn ) ∈ R^n if setting x1 = s1 , x2 = s2 , . . . , xn = sn simultaneously satisfies all m equations. A system of linear equations might have an infinite number of solution sets, one solution set or no solution sets.
Example 1.18. . . . . . .
Example 1.19. . . . . . .
Example 1.20. . . . . . .
Definition 1.15. Two systems of linear equations are equivalent
when they have the same solution sets.
Example 1.21. . . . . . .
1.4.2 Linear equations in matrix form
The system of m linear equations in Definition 1.14 is compactly written in matrix form:

$$A\mathbf{x} = \mathbf{b} , \tag{1.1}$$

with column vectors

$$\mathbf{x} = [x_1 , x_2 , \ldots , x_n ]^T \quad\text{and}\quad \mathbf{b} = [b_1 , b_2 , \ldots , b_m ]^T ,$$

and the m × n coefficient matrix

$$A = \begin{bmatrix} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\ \vdots & & \vdots & & \vdots \\ a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix}.$$
The matrix form is convenient for solving for unknowns x, particularly when m and/or n are large.
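For instance, a small system can be set up and solved in a few lines; the system below is invented for illustration, and numpy.linalg.solve performs the elimination internally:

```python
import numpy as np

# Illustrative system Ax = b:
#   x + 2y + z  = 4
#   2x - y + 3z = 9
#   3x + y - z  = 0
A = np.array([[1,  2,  1],
              [2, -1,  3],
              [3,  1, -1]], dtype=float)
b = np.array([4, 9, 0], dtype=float)

x = np.linalg.solve(A, b)     # unique solution since A is invertible
assert np.allclose(A @ x, b)
print(x)
```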
Example 1.22. . . . . . .
1.4.3 Geometrical representation of linear equations
The solution set for a single linear equation in two unknowns, x and y,

ax + cy = b ,

in the x–y plane is the straight line (provided c ≠ 0)

y = b/c − (a/c)x .

Example 1.23. . . . . . .
If we have a set of linear equations in two unknowns we can draw
the corresponding lines and see whether we have: a unique solution;
non–unique solution; no solution.
Example 1.24. . . . . . .
If we have three unknowns, x, y and z, the general form of a linear
equation is:
ax + cy + dz = b
The solution set is not a line in 3–D. What is it?
Example 1.25. . . . . . .
What are the possible forms of solutions if we have three equations
in three unknowns?
Example 1.26. . . . . . .
What is the equation of a line in three dimensions?
If we have more than three unknowns we lose the geometrical
insight, but the algebra follows the same principles.
1.4.4 Homogeneous equations
Definition 1.16. A system of homogeneous linear equations in
matrix form is
Ax = 0 .
The linear equations in Definition 1.14 are a homogeneous system
of m linear equations when bi = 0 for all i = 1, 2, . . . , m . In matrix
form, the system of linear equations Ax = b is homogeneous when
b = 0 = [0, 0, . . . , 0]T but for b ≠ 0 the system of equations is
non-homogeneous.
Definition 1.17. The solution x = 0 = [0, 0, . . . , 0]T is the trivial
solution. That is, for n unknowns xi = 0 for all i = 1, 2, . . . , n .
The trivial solution x = 0 is always a solution of a homogeneous
equation.
A system of homogeneous linear equations either has one solution
(the trivial solution) or an infinite number of solutions.
Example 1.27. . . . . . .
1.5 Elementary operations
We can always use substitution to solve matrix equations of the form Ax = b . However, substitution becomes quite awkward when we have many unknowns. We want a systematic method to find all solution sets of the vector of unknowns x. The elementary operations in this section provide such a method, one that is ideally suited to computer implementation.
1.5.1
Elementary operations on linear equations
Elementary operations are operations performed on a system of linear equations so that the original system of equations is replaced by a new system of equations which is easier to solve. Possible operations are:
• multiply an equation by a nonzero constant;
• interchange two equations;
• replace one equation by the sum of itself and a multiple of
another equation.
Any sequence of elementary operations applied to a system of
linear equations produces a new system of linear equations which
is equivalent to the original (that is, it has the same solution set).
Example 1.28. . . . . . .
1.5.2 Elementary row operations on matrices
Definition 1.18. For a matrix equation Ax = b where matrix A
is order m × n and column vector b is order m × 1 , the augmented
matrix [A|b] is the m × (n + 1) matrix constructed from combining
A and b:


$$[A|\mathbf{b}] = \left[\begin{array}{ccccc|c} a_{11} & \cdots & a_{1j} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & & \vdots & \vdots \\ a_{i1} & \cdots & a_{ij} & \cdots & a_{in} & b_i \\ \vdots & & \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mj} & \cdots & a_{mn} & b_m \end{array}\right].$$
Example 1.29. . . . . . .
Elementary matrix operations on augmented matrices are performed
across an entire row. Possible operations are:
• multiply a row by a nonzero constant;
• interchange two rows;
• replace one row by the sum of itself and a multiple of another
row.
These matrix row operations are essentially the same as the elementary operations on linear equations discussed in Section 1.5.1. Like elementary operations on linear equations, any sequence of elementary matrix row operations applied to an augmented matrix produces a new augmented matrix which is equivalent to the original (that is, it has the same solution set).
Example 1.30. . . . . . .
1.5.3 Row echelon form
Definition 1.19. A matrix is in row echelon form when
• all rows not consisting entirely of zeros have 1 as the first (left
most) nonzero element and this element is called the ‘leading
1’;
• all rows consisting entirely of zeros are grouped together at
the bottom of the matrix;
• the leading 1 in a row occurs further to the right than all
leading ones in higher rows.
For example, the following augmented matrix is in row echelon form:

$$\left[\begin{array}{ccccc|c} 1 & 2 & 1 & -3 & 1 & 2 \\ 0 & 1 & 3 & 3 & -4 & 3 \\ 0 & 0 & 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right].$$
Each row begins with a 1 except for the bottom row which only
contains zeros, and all leading 1s are to the right of the leading 1s
in the rows above them. In the previous example we performed matrix row operations on augmented matrices until row echelon form was obtained. Once the part of the augmented matrix to the left of the vertical line is in row echelon form, determining the solution set is relatively straightforward.
Say we have a matrix with m rows and we want the row echelon form. We usually put the top row in the appropriate form, then the second row, and so on until all m rows are in the appropriate form. Say we have already put the top (i − 1) rows, for some i = 1, 2, . . . , m , into row echelon form. A general procedure for obtaining the ith row in row echelon form is:
1. In the ith row and all rows below it, locate the row that has
the leftmost nonzero element—this is called the pivot row
(there might be many possible pivot rows).
2. Move the pivot row so that it is the ith row in the matrix.
3. The first nonzero element in the pivot row is called the pivot;
multiply the pivot row by a suitable number to make the
pivot equal to 1.
4. Use the pivot row to remove any nonzero numbers in all rows
below the pivot.
These steps are used for all i = 1, 2, . . . , m .
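A sketch of this procedure in NumPy, deliberately direct and following steps 1-4 with no refinements such as partial pivoting for numerical accuracy (the test matrix is illustrative):

```python
import numpy as np

def row_echelon(M, tol=1e-12):
    """Reduce a copy of M to row echelon form using steps 1-4 above."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    i = 0                                     # next row to fix in place
    for j in range(cols):                     # work left to right
        nonzero = np.nonzero(np.abs(M[i:, j]) > tol)[0]
        if nonzero.size == 0:
            continue                          # no pivot in this column
        p = i + nonzero[0]                    # step 1: a pivot row
        M[[i, p]] = M[[p, i]]                 # step 2: move it up
        M[i] /= M[i, j]                       # step 3: make the pivot 1
        for k in range(i + 1, rows):          # step 4: clear below the pivot
            M[k] -= M[k, j] * M[i]
        i += 1
        if i == rows:
            break
    return M

print(row_echelon(np.array([[2, 4, -2], [1, 3, 1], [3, 7, 0]])))
```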
Example 1.31. . . . . . .
Example 1.32. . . . . . .
Remark 1.3. A system of linear equations might not have a
unique row echelon form. Different sequences of row operations may
produce different row echelon matrices. However, these different
row echelon matrices are equivalent (they have the same solution
set).
1.5.4 Reduced row echelon form
Definition 1.20. A matrix is in reduced row echelon form when
• it is in row echelon form; and
• each column containing a leading 1 has zeros elsewhere.
For example, the following augmented matrix is in reduced row
echelon form:


$$\left[\begin{array}{ccccc|c} 1 & 0 & -5 & 0 & 0 & 5 \\ 0 & 1 & 3 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right].$$
Notice that each column containing a leading 1 has zeros elsewhere.
In the previous section we saw how converting an augmented matrix
into row echelon form helps in solving a system of linear equations.
We shall see that reduced row echelon form further simplifies solving
the matrix equation Ax = b .
Like row echelon form, reduced row echelon form is obtained by
applying matrix row operations. Unlike row echelon form, reduced
row echelon form is unique for a given solution set.
Definition 1.21. The procedure for using matrix row operations
to obtain the reduced row echelon form is called Gauss–Jordan
elimination.
A general Gauss–Jordan elimination procedure is:
1. convert the matrix to row echelon form as shown in Section 1.5.3;
2. remove numbers above each pivot by subtracting multiples of
that pivot row.
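Hand computations can be checked with a computer algebra system; for example, SymPy's rref performs Gauss–Jordan elimination exactly (the augmented matrix below is illustrative):

```python
import sympy as sp

# Augmented matrix [A|b] for an illustrative system.
M = sp.Matrix([[1,  2,  1, 4],
               [2, -1,  3, 9],
               [3,  1, -1, 0]])

R, pivot_cols = M.rref()     # Gauss-Jordan elimination
print(R)                     # the reduced row echelon form
print(pivot_cols)            # indices of columns with leading 1s
```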
Example 1.33. . . . . . .
Example 1.34. . . . . . .
1.6 The solution set

1.6.1 Defining the solution set
Definition 1.22. For a system of linear equations written in row
echelon form, the unknowns corresponding to columns with leading
1s are called basic variables and all other unknowns are called free
variables.
When the solution set contains free variables they are assigned
arbitrary values. The basic variables are defined in terms of these
arbitrary values.
Example 1.35. . . . . . .
Definition 1.23. A system of linear equations with at least one
solution is consistent. A system of linear equations with no solutions
is inconsistent.
Example 1.36. . . . . . .
Example 1.37. . . . . . .
1.6.2 Row operations and the solution set
In Section 1.5.2 we claimed that elementary row operations on an
augmented matrix do not change the solution set. We now prove
this claim.
Theorem 1.1. Any sequence of elementary row operations applied
to an augmented matrix produces a new augmented matrix which is
equivalent to the original (they have the same solution set).
Proof. Say we have a system of m linear equations in n unknowns (x1 , x2 , . . . , xn ) of the general form

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 , \\ &\;\;\vdots \\ a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n &= b_i , \\ &\;\;\vdots \\ a_{j1}x_1 + a_{j2}x_2 + \cdots + a_{jn}x_n &= b_j , \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m , \end{aligned}$$

and we know that the solution set is (x1 , x2 , . . . , xn ) = s = (s1 , s2 , . . . , sn ) ∈ R^n . That is, we know (ai1 s1 + ai2 s2 + · · · + ain sn ) = bi for all i = 1, 2, . . . , m . We now consider each of the three possible matrix row operations and prove that they do not change the solution set.
(a) Row operation Ri ↔ Rj : This row operation only exchanges the order of two rows in the augmented matrix, which is equivalent to changing the order of two equations in the original set of linear equations. This new arrangement must have the same solution set as the original arrangement.
(b) Row operation Rj → Rj + kRi for k ∈ R : This row operation gives the new augmented matrix

$$\left[\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} & b_i \\ \vdots & \vdots & & \vdots & \vdots \\ a_{j1}+ka_{i1} & a_{j2}+ka_{i2} & \cdots & a_{jn}+ka_{in} & b_j+kb_i \\ \vdots & \vdots & & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right].$$
The linear equations associated with this augmented matrix
are the same as before, with the exception of the jth equation.
The solution set s satisfies the unchanged equations. The jth equation is changed to

$$(a_{j1} + ka_{i1})x_1 + (a_{j2} + ka_{i2})x_2 + \cdots + (a_{jn} + ka_{in})x_n = b_j + kb_i ,$$

that is,

$$(a_{j1}x_1 + a_{j2}x_2 + \cdots + a_{jn}x_n) + k(a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n) = b_j + kb_i ,$$

but we know that (ai1 s1 + ai2 s2 + · · · + ain sn ) = bi and (aj1 s1 + aj2 s2 + · · · + ajn sn ) = bj so the solution set satisfies the new linear equation.
(c) Row operation Rj → kRj for nonzero k ∈ R (brief proof): In
this case we obtain the new equation
k(aj1 x1 + aj2 x2 + · · · + ajn xn ) = kbj ,
which is satisfied by the solution set s .
1.6.3 Reduced row echelon form and the solution set
In Section 1.5.2 we saw that once the augmented matrix, to the
left of the vertical line, is in reduced row echelon form, then the
solution of the linear system of equations is readily obtainable. The
reduced row echelon form of a linear system of equations is unique
(unlike the row echelon form). The appearance of the reduced row
echelon augmented matrix tells us the type of solution (that is,
consistent or inconsistent, and the number of basic variables).
Say we have a system of m linear equations in n unknowns and
performed Gauss–Jordan elimination to obtain a reduced row echelon augmented matrix with p pivots. Since we have at most one
pivot per row (there are m rows) and at most one pivot per column
(there are n + 1 columns), p ≤ min(m, n + 1) . The position and
number of pivots classifies the solution set:
• for a pivot in the (n + 1)th column (the rightmost column)
there is no solution and the system is inconsistent;
• for no pivot in the (n + 1)th column the system is consistent.
There are p basic variables, (n − p) free variables and
– for p = n there is one unique solution (since number of
free variables is n − p = 0);
– for p < n there is an infinite number of solutions.
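This classification is mechanical once the pivots are known; a sketch using SymPy's rref (0-indexed columns, with an illustrative system that has a dependent row):

```python
import sympy as sp

def classify(A, b):
    """Classify the solutions of Ax = b from the pivots of rref([A|b])."""
    m, n = A.shape
    M = A.row_join(b)                 # augmented matrix [A|b]
    R, pivot_cols = M.rref()
    if n in pivot_cols:               # pivot in the (n+1)th column
        return "inconsistent: no solution"
    p = len(pivot_cols)               # number of basic variables
    if p == n:
        return "consistent: unique solution"
    return f"consistent: infinitely many solutions ({n - p} free variables)"

A = sp.Matrix([[1, 2], [2, 4]])
b = sp.Matrix([3, 6])
print(classify(A, b))                 # one free variable here
```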
Example 1.38. . . . . . .
1.7 Inverse matrix
Definition 1.24. For n × n matrix A the inverse of A is an n × n
matrix B which satisfies
AB = BA = I .
We usually write the inverse of A as B = A−1 .
Not all matrices have inverses. If a matrix has an inverse it is
invertible. Only square matrices are invertible, but not all square
matrices are invertible.
Example 1.39. . . . . . .
The matrix inverse has the following properties:
• If A is invertible, then A−1 is invertible, and (A−1 )−1 = A .
• If A and B are invertible matrices with the same order, then
AB is invertible, and (AB)−1 = B −1 A−1 .
• If A is invertible and k ≠ 0 then kA is invertible, and
(kA)−1 = A−1 /k .
• If A is invertible, then the inverse A−1 is unique.
• (Ap )−1 = (A−1 )p .
• (AT )−1 = (A−1 )T .
• The identity matrix is its own inverse, I −1 = I .
Remark 1.4. For n × n matrix A, if we find an n × n matrix B
such that AB = I then we know that B = A−1 and there is no
need to test if BA = I . Similarly, if we find an n × n matrix B
such that BA = I then we know that B = A−1 and there is no
need to test if AB = I .
Example 1.40. . . . . . .
Example 1.41. . . . . . .
1.7.1 Finding the inverse matrix
Given an n × n matrix A, the inverse A−1 (if it exists) is obtained
by the following steps.
1. Write the augmented matrix [A|I] where I is the n×n identity
matrix.
2. Perform Gauss–Jordan elimination until the left hand side of
the augmented matrix is in reduced row echelon form.
3. If the resulting augmented matrix has the form [I|W ] , then the right hand side, W , is the inverse of A : W = A−1 . If the left hand side is not the identity then A has no inverse.
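A sketch of these three steps with SymPy, applied to the Hill-cipher matrix from Section 1.3.2:

```python
import sympy as sp

A = sp.Matrix([[3, 5],
               [4, 6]])
n = A.shape[0]

M = A.row_join(sp.eye(n))        # step 1: form [A|I]
R, pivots = M.rref()             # step 2: Gauss-Jordan elimination

if R[:, :n] == sp.eye(n):        # step 3: left side reduced to I?
    A_inv = R[:, n:]
    print(A_inv)                 # Matrix([[-3, 5/2], [2, -3/2]])
else:
    print("A has no inverse")
```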
Example 1.42. . . . . . .
Example 1.43. . . . . . .
For a general 2 × 2 matrix we can use a formula to calculate the inverse: for

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$

The inverse does not exist when ad − bc = 0 .
Why does the Gauss–Jordan method work for finding matrix inverses? Because when finding the inverse of an n × n matrix using
Gauss–Jordan elimination we are essentially solving n different
systems of n linear equations in n unknowns.
Example 1.44. . . . . . .
1.7.2 Finding a unique solution
Consider a matrix equation
Ax = b ,
for unknowns x. If A is invertible, then one way to find x is to
multiply both sides by A−1 :
A−1 Ax = A−1 b
Ix = A−1 b
x = A−1 b .
The inverse A−1 is unique so this method provides a unique solution for x iff A is invertible. When A is not invertible we have either no solution or infinitely many solutions.
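A quick numerical check of this, with illustrative values of A and b:

```python
import numpy as np

A = np.array([[3.0, 5.0],
              [4.0, 6.0]])
b = np.array([1.0, 2.0])

x = np.linalg.inv(A) @ b        # x = A^{-1} b
assert np.allclose(A @ x, b)    # multiplying back recovers b
# In practice np.linalg.solve(A, b) is preferred to forming A^{-1}.
print(x)
```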
Example 1.45. . . . . . .
Example 1.46. . . . . . .
1.8 Determinants
In the case that we have a system of n equations in n unknowns,
with coefficients of the unknowns in the n × n matrix A there is a
unique solution if A has an inverse matrix.
1.8.1 Condition for a matrix inverse
For any square n×n matrix A, the determinant det(A), also denoted
|A|, is a scalar function of the n^2 elements. The matrix A has an inverse if, and only if, its determinant is not 0. Now consider a system of n linear equations in n unknowns. If n = 2 the equations
represent lines in a plane and a determinant of 0 indicates that
the lines are parallel. If n = 3 the equations represent planes in
3–D space and a determinant of 0 indicates that at least two of the
three planes are parallel.
1.8.2 Determinant of a 2 × 2 matrix
Definition 1.25. A 2 × 2 order matrix A is:

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.$$

The determinant of A is defined as:

$$|A| = a_{11}a_{22} - a_{12}a_{21} .$$
Example 1.47. . . . . . .
1.8.3 Determinant of a triangular matrix
The determinant of an n × n triangular matrix is the product
of the n elements along the leading diagonal. In particular, the
determinant of a diagonal matrix is the product of elements along
the diagonal.
1.8.4 Properties of determinants
Matrix determinants have the following properties.
1. If A has a row (or column) of zeros, then |A| = 0 .
2. If A has two identical rows (or columns), then |A| = 0 .
3. If two rows (or columns) of matrix A are swapped to obtain matrix B, then |A| = −|B| .
4. |AB| = |A| |B| .
5. A is invertible iff |A| ≠ 0 .
6. If A is invertible, |A−1 | = 1/|A| .
7. |AT | = |A| .
8. For n × n matrix A and scalar c, |cA| = c^n |A| .
9. |I| = 1 .
These properties enable the calculation of determinants for large
matrices.
Example 1.48. . . . . . .
1.8.5 Determinant of an n × n matrix
Definition 1.26. For an n × n matrix A, the determinant of A,
denoted by det(A) or |A|, is a real number which:
• for n = 1 is |A| = a11 ;
• for n > 1 is

$$|A| = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13} - a_{14}M_{14} + \cdots + (-1)^{n+1}a_{1n}M_{1n} = \sum_{j=1}^{n} (-1)^{j+1} a_{1j} M_{1j} ,$$

where the M1j are minors of A.
Definition 1.27. For the n × n matrix A, the minor Mij is the
determinant of the (n − 1) × (n − 1) order matrix obtained from A
by omitting row i and column j.
For example,

$$A = \begin{bmatrix} 4 & -2 & 1 \\ -1 & 3 & 2 \\ -3 & -4 & 6 \end{bmatrix},$$

has minors

$$M_{11} = \begin{vmatrix} 3 & 2 \\ -4 & 6 \end{vmatrix}, \quad M_{22} = \begin{vmatrix} 4 & 1 \\ -3 & 6 \end{vmatrix}, \quad M_{32} = \begin{vmatrix} 4 & 1 \\ -1 & 2 \end{vmatrix}.$$
For n > 1 we have defined the determinant of the n × n matrix A
in terms of determinants of (n − 1) × (n − 1) matrices. These
determinants of (n − 1) × (n − 1) matrices are then defined in terms
of determinants of (n − 2) × (n − 2) matrices, and so on, until we
obtain matrices of order 1 × 1 . This is an example of a recursive
process.
For example, for a general 3 × 3 matrix the determinant is

$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg) .$$
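A direct (and deliberately naive) Python translation of this recursive definition, expanding along the first row; in practice one would call numpy.linalg.det, which is far more efficient:

```python
def det(A):
    """Determinant by cofactor expansion along the first row (Definition 1.26)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[4, -2, 1],
     [-1, 3, 2],
     [-3, -4, 6]]
print(det(A))   # 117
```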
Example 1.49. . . . . . .
2 Leontief economic models

Contents
2.1 Leontief open economic model
  2.1.1 A profitable industry
  2.1.2 A productive economy
2.2 Leontief closed economic model
Wassily Leontief (1906–1999) received the Nobel Memorial Prize in
Economic Sciences in 1973 “for the development of the input-output
method and for its application to important economic problems”.
Leontief used his method to model inter-industry relations within a
country’s economy. The modelling is linear and can be formulated
in terms of matrices.
An economy has several industries which all produce commodities
(output). Some of these commodities are required by other industries in the economy (input) in order to produce their goods. We
construct a matrix to describe the transfer of input and output
between different industries in the economy.
For example, an economy is modelled as consisting of three industries: agriculture (A), manufacturing (M) and fuels (F). The following table, or matrix, describes the inputs and outputs of each industry:

                     one unit of output from
                        A      M      F
  requires      A      0.5    0.1    0.1
  these         M      0.2    0.5    0.3
  inputs        F      0.1    0.3    0.4
Each column describes what each industry requires from all other
industries in order to produce one unit of product, or output. For
example, one unit of output from A requires 0.5 units from A,
0.2 units from M and 0.1 units from F. Similarly, one unit of output
from F requires 0.1 units from A, 0.3 units from M and 0.4 units
from F.
Example 2.1. . . . . . .
In this economy, how many units does M require from the three
industries to produce 200 units?
2.1 Leontief open economic model
Suppose an economy has n industries or sectors, numbered 1, 2, . . . , n .
There is also an unproductive sector (for example, households, charities, government, or some industry outside the economy) which
demands goods but does not produce any goods required by the
n industries in the economy. The n industries produce output to
satisfy the demand from the unproductive sector. However, they
cannot produce this output independently; they require input from
some or all of the other n industries in order to produce their
output.
The presence of the unproductive sector makes this an open model
(as opposed to a closed model).
We measure industry outputs and inputs in terms of some ‘unit’ of monetary value, which we assume does not change.
Definition 2.1. The production vector x = (x1 , x2 , . . . , xn ) describes how many units each of the n industries produces. The demand vector d = (d1 , d2 , . . . , dn ) describes how many units the unproductive sector demands from each of the n industries. The n × n matrix A = [aij ] is the input-output matrix (other names include the Leontief, technological or consumption matrix), where aij is the number of units required by industry j from industry i in order for industry j to make one unit of output. No element of x , d or A can be negative.
[Figure: flow of inputs aij xj , outputs xi and demands di between sector 1, sector 2, sector 3 and the demand sector in an open economic model.]
The above diagram shows the flow of inputs and outputs in an
open economic model with three industries or sectors. The total
output of sector one is x1 and this must equal all the components
which are leaving sector one (to find the outputs just add up all
parts leaving the ‘sector 1’ circle in the above diagram):
x1 = d1 + a11 x1 + a12 x2 + a13 x3 .
The total output for industry i is xi units. The unproductive
sector demands di units of industry i’s output. The remainder of
industry i’s output becomes input for all industries j = 1, 2, . . . n .
Industry j has a total output of xj units and to produce this output
requires aij xj units from industry i. Therefore, to satisfy demand
xi = ai1 x1 + ai2 x2 + · · · + ain xn + di
for all i = 1, 2, . . . , n .
In matrix form:
x = Ax + d .
We rearrange the matrix equation to solve for x:
x − Ax = d
(I − A)x = d
x = (I − A)−1 d ,
provided (I − A) has an inverse. When (I − A) has an inverse there
is one unique solution for x for a given d .
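A sketch of this calculation for the three-industry (A, M, F) input-output matrix given at the start of this chapter, with an invented demand vector:

```python
import numpy as np

# Input-output matrix for the A, M, F economy (columns are industries).
A = np.array([[0.5, 0.1, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.4]])

# Illustrative external demand from the unproductive sector.
d = np.array([100.0, 200.0, 150.0])

# Production needed to satisfy demand: x = (I - A)^{-1} d.
I = np.eye(3)
x = np.linalg.solve(I - A, d)
print(np.round(x, 1))
```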
Example 2.2. . . . . . .
Example 2.3. . . . . . .
Remark 2.1. Like all mathematical models, the Leontief open economic model is a simplification of reality. For example, it assumes that when an industry increases its production, the costs of production increase linearly (that is, if it doubles production, the costs double; if it triples its production, the costs triple). So, as the production for one industry increases, the cost per item stays the same. This assumption ignores ‘economies of scale’, which generally allow the cost per item to fall as the production level increases.
Remark 2.2. The Leontief open economic model depends on the
demand d. If there is no demand, d = 0 = (0, . . . , 0) , then,
assuming (I − A)−1 exists, x = (I − A)−1 d = 0 and there is no
production from the n industries. In this open model, when there
is no demand the n industries do not operate and they produce no
output.
2.1.1 A profitable industry
Definition 2.2. Industry i is profitable when it makes more money than it spends.
The total amount of money made by industry i is xi . To produce
output xi , industry i must purchase goods costing aji xi from industry j for all j = 1, 2, . . . , n . Therefore, for industry i to be
profitable,
a1i xi + a2i xi + . . . + ani xi < xi .
Dividing by xi gives
a1i + a2i + . . . + ani < 1 .
So, industry i is profitable when the ith column of the input-output
matrix sums to less than 1. This is the same as saying industry i is
profitable when the total cost of inputs needed to produce one unit
of output (the sum of column i) is less than one unit of output.
For each industry in the economy to be profitable, each column in the input-output matrix must sum to less than 1.
Industry i will ‘break even’ when it makes the same money as it
spends. In this case the ith column of the input-output matrix
sums to exactly 1.
2.1.2 A productive economy
Definition 2.3. An input-output matrix A is productive when it is able to satisfy any demand d with a realistic production vector x. For the production vector to be realistic, no element can be negative; xi ≥ 0 for all i = 1, 2, . . . , n .

The input-output matrix A will be productive if the matrix equation (I − A)x = d has a solution for realistic x. The matrix equation has a unique realistic solution when the inverse (I − A)−1 exists and has no negative elements.
Why should the inverse (I − A)−1 have only non-negative elements? Say C = [cij ] = (I − A)−1 . We now look at several choices of d and show that cij must be non-negative in order to satisfy any demand. Say d = (d1 , 0, 0, . . . , 0) ; then

x1 = c11 d1 , x2 = c21 d1 , . . . , xn = cn1 d1 ,

and since xi ≥ 0 we must have c11 , c21 , . . . , cn1 ≥ 0 . Now say d = (0, d2 , 0, 0, . . . , 0) ; then

x1 = c12 d2 , x2 = c22 d2 , . . . , xn = cn2 d2 ,

and so c12 , c22 , . . . , cn2 ≥ 0 . In general, for di > 0 and dj = 0 for all j ≠ i ,

x1 = c1i di , x2 = c2i di , . . . , xn = cni di ,

and so c1i , c2i , . . . , cni ≥ 0 for any i = 1, 2, . . . , n . Thus we have shown that all the elements of (I − A)−1 must be non-negative.
Theorem 2.1. The inverse (I − A)−1 exists and has no negative elements when the sum of each column of A is less than 1.
This theorem proves that the matrix equation (I − A)x = d for
any demand d will have a unique and realistic solution for x when
each industry in the economy is profitable. We will not prove the
above theorem formally, but we show that it appears reasonable.
A profitable industry is able to increase or decrease its production
and remain profitable. Also, since each industry in the economy is
profitable, and not simply breaking even, each industry should be
able to make a surplus of goods. Therefore, it seems reasonable that
when all industries are profitable, demand can always be satisfied
and we always have a realistic solution for x .
In the first example of Section 2.1, one column of the input-output
matrix A sums to 1, but we are still able to find the inverse (I −A)−1
and satisfy demand. This does not contradict the above theorem.
The above theorem only states when we must be able to obtain
the inverse (I − A)−1 , it does not say the inverse cannot be found
under other circumstances.
2.2 Leontief closed economic model
Like the Leontief open economic model, the Leontief closed economic
model has n industries which produce output and obtain input
from each other. However, unlike the open model, in the closed
model there is no longer an unproductive sector. In the open model
households are considered unproductive, but in the closed model
households are productive, with their output being labour and their
input being consumer demand.
We use the same notation as for the open model, that is, production
vector x = (x1 , x2 , . . . , xn ) and input-output matrix A = [aij ] .
However, since there is no longer an unproductive sector with
demand vector d we simply set d = 0 . The matrix equation to be
solved is now
x = Ax ,
or, the homogeneous equation
(I − A)x = 0 .
As in the open model we require that all elements of x and A
are non-negative. Unlike the open model, in the closed model all
industry inputs become industry outputs with no surpluses. So,
all outputs from one industry i are spent entirely on all its inputs.
The total output of industry i is xi and the input into industry i
from industry j is aji xi for j = 1, 2, . . . , n . Therefore, since the
output from industry i equals all inputs into industry i:
a1i xi + a2i xi + · · · + ani xi = xi .
Dividing by xi gives
a1i + a2i + · · · + ani = 1 .
This means that every column of A must sum to 1.
[Figure: flow of inputs aij xj and outputs xi between sector 1, sector 2 and sector 3 in a closed economic model; there is no external demand sector.]
The above diagram shows the flow of inputs and outputs in a closed
economic model with three industries or sectors.
Example 2.4. . . . . . .
In general for the closed economic model, |I − A| = 0 and the matrix (I − A) is not invertible. Since (I − A) is not invertible we must find the solution from the row echelon form of the augmented matrix. We can have one solution (which must be the trivial solution x = 0) or an infinite number of solutions.
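A sketch of solving the closed model with SymPy, for an invented 2 × 2 input-output matrix whose columns each sum to 1; the null space of (I − A) contains the non-trivial solutions:

```python
import sympy as sp

# Illustrative closed-model input-output matrix: every column sums to 1.
A = sp.Matrix([[sp.Rational(1, 2), sp.Rational(1, 4)],
               [sp.Rational(1, 2), sp.Rational(3, 4)]])

# Solve (I - A)x = 0: the solutions form the null space of I - A.
N = (sp.eye(2) - A).nullspace()
print(N)   # one basis vector => infinitely many solutions x = t * N[0]
```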
Example 2.5. . . . . . .
Remark 2.3. For the open economic model we showed that when
each column sums to less than one, then each industry is profitable.
In the closed economic model each column sums to one, but for
this model this does not mean that each industry can only break
even. For the closed model we assume that each industry treats
profit as one of the costs of production and so the output price is
set so that the industry makes a profit.
3 Optimisation

Contents
3.1 Inequalities
  3.1.1 Inequalities in one variable
  3.1.2 Inequalities in two variables
3.2 Optimisation application
3.3 Linear programming
3.4 Graphical method of solution
  3.4.1 Classifying the feasible region
  3.4.2 Convex sets
  3.4.3 Standard form and the convex feasible region
  3.4.4 Feasible regions and objective function solutions
3.5 Algebraic method of solution
  3.5.1 Obtaining vertices algebraically
  3.5.2 Obtaining the optimal solution
3.6 The simplex algorithm
3.7 Problem formulation
Optimisation is when we search for the best possible outcome of a
given situation. We typically have a quantity which we either want
to maximise or minimise. For example, we might want to maximise
profits (or minimise a loss) or we might want to minimise waste.
However, the quantity which we want to optimise will be subject to some constraints. For example, a profit will be constrained by legal requirements (such as paying the minimum wage and satisfying safety requirements), the availability of inputs (including labour and capital) and the demand from consumers (which will change as prices vary).
3.1 Inequalities

3.1.1 Inequalities in one variable
The statement ‘f (x) is greater than g(x)’ is written mathematically as f (x) > g(x) . The statement ‘f (x) is greater than or equal to g(x)’ is written mathematically as f (x) ≥ g(x) . Similarly, ‘f (x) is less than g(x)’ is written mathematically as f (x) < g(x) and ‘f (x) is less than or equal to g(x)’ is written mathematically as f (x) ≤ g(x) . These are inequalities in one variable x.
Example 3.1. . . . . . .
3.1.2 Inequalities in two variables
Example 3.2. . . . . . .
Inequalities in two variables, say x and y, describe a two dimensional
problem and can be visualised on the xy-plane.
Consider the linear inequalities in two variables: ax + by ≤ c
or ax + by < c , for a, b, c ∈ R . To represent these inequalities
graphically:
• Plot the equality ax + by = c . If the inequality is < draw a
dashed line (to show that the line is not part of the solution).
If the inequality is ≤ draw a solid line (to show that the line
is part of the solution).
• Choose a point which is not on the line ax + by = c, say
(x, y) = (x1 , y1 ) . Often the best choice is (x, y) = (0, 0)
(provided it doesn’t lie on the line).
• Substitute (x, y) = (x1 , y1 ) into the inequality. If ax1 + by1 < c is true, then the inequality is satisfied by the side of the line which contains (x1 , y1 ) . If ax1 + by1 < c is not true, then the inequality is satisfied by the side of the line which does not contain (x1 , y1 ) . Shade the area of the xy-plane which satisfies the inequality.
[Figure: the half-plane satisfying 2x − y ≤ 1 shaded with a solid boundary line, and the half-plane satisfying 3x + y < −1 shaded with a dashed boundary line.]
Example 3.3. . . . . . .
Remark 3.1. The inequality ax + by ≥ c when multiplied by −1
becomes the inequality −ax − by ≤ −c . Similarly, the inequality
ax + by > c when multiplied by −1 becomes the inequality −ax −
by < −c . So, it does not matter that in the above list we only
mention ‘≤’ and ‘<’. We can always change a ‘≥’ to a ‘≤’ and a
‘>’ to a ‘<’ by multiplying the equation by −1.
Often we have several inequalities and we need to find the region
on the xy-plane where they are all true.
Example 3.4. . . . . . .
Example 3.5. . . . . . .
Remark 3.2. For economic problems we are often only interested
in the region x, y ≥ 0 since many economic quantities are either
positive or zero. For example, the cost per item and the number
of goods produced are always greater than or equal to zero. The
x, y ≥ 0 region of the xy-plane is called the first quadrant.
3.2 Optimisation application
Problem: A cereal manufacturer makes two kinds of muesli, nutty
special and fruity extra. Both mueslis contain the same amount of
oats, but oats are plentiful and very cheap so we ignore this cost.
They also contain raisins and nuts, but in different proportions.
One box of nutty special requires 0.2 boxes of raisins and 0.4 boxes
of nuts. One box of fruity extra requires 0.4 boxes of raisins and
0.2 boxes of nuts. Each day the manufacturer has 10 boxes of
raisins and 14 boxes of nuts. The profit on each box of nutty special
is $8 and the profit on each box of fruity extra is $10.
Assuming that all muesli that is made will be sold, how many boxes
of each cereal should be made to maximise profits?
Solution: Say the number of boxes of nutty special produced is x.
To produce x boxes the manufacturer requires 0.2x boxes of raisins
and 0.4x boxes of nuts.
Say the number of boxes of fruity extra produced is y. To produce y
boxes the manufacturer requires 0.4y boxes of raisins and 0.2y boxes
of nuts.
We define the profit as f (x, y) . We know that
f (x, y) = 8x + 10y .
The problem is to maximise f (x, y) . If x or y are made extremely
large, then the profit will also be extremely large, but we cannot
make x or y extremely large because they are constrained by the
availability of inputs (raisins and nuts).
Now we find the constraints on x and y. The number of boxes
produced must be either zero or a positive number so x ≥ 0 and
y ≥ 0 . The total number of boxes of nuts must be equal to or less
than 14,
0.4x + 0.2y ≤ 14 .
The total number of boxes of raisins must be equal to or less than 10,
0.2x + 0.4y ≤ 10 .
Stated mathematically, the problem is to maximise
f (x, y) = 8x + 10y ,
subject to the constraints
0.4x + 0.2y ≤ 14 ,
0.2x + 0.4y ≤ 10 ,
x ≥ 0,
y ≥ 0.
Definition 3.1. Linear programming (or linear optimisation) is
when a linear function is optimised (that is, either minimised
or maximised). The linear function is called the linear objective
function. The region in which all constraints are satisfied is called
the feasible region. Any solution of the objective function within
the feasible region is a feasible solution, but there is no more than
one maximum feasible solution and one minimum feasible solution.
In our muesli problem f (x, y) is the linear objective function. We
want to maximise this objective function.
First, plot the constraints on the xy-plane to illustrate the feasible
region. In the figure below the feasible region is shaded blue.
Any point (x, y) which lies in the feasible region provides a feasible
solution of the profit f (x, y), but there is only one maximum feasible
solution of f (x, y) .
[Figure: the feasible region (shaded) in the xy-plane, bounded by the axes and the lines y = −2x + 70 and y = −x/2 + 25, with dashed profit lines f (x, y) = 0, 200, 340 and 400; the line f (x, y) = 340 touches the feasible region only at (30, 10). Axes: x (nutty special), y (fruity extra).]
We need to find the maximum profit f (x, y) = 8x + 10y , but it
must lie within the feasible region. We plot some different profit
values, shown as dashed lines in the above figure. For example
f (x, y) = 0 = 8x + 10y is the red dashed line and f (x, y) = 200 =
8x + 10y is the green dashed line.
Notice that f (x, y) = 400 lies completely outside the feasible region, so the profit cannot be that large. Also notice that as the
dashed lines move to the right, the profit increases. Therefore, the
maximum profit is the dashed line furthest to the right which also
crosses the feasible region. Inspection of the figure shows that the
maximum profit is f (x, y) = 340 which only intersects the feasible
region at one point: (x, y) = (30, 10) .
91
3 Optimisation
We found that the maximum profit is $340 which is obtained by
producing x = 30 boxes of nutty special and y = 10 boxes of fruity
extra.
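The same answer can be checked numerically; a sketch using scipy.optimize.linprog, which minimises by convention, so we negate the objective:

```python
from scipy.optimize import linprog

# Maximise f(x, y) = 8x + 10y  <=>  minimise -8x - 10y.
c = [-8, -10]

# Constraints 0.4x + 0.2y <= 14 and 0.2x + 0.4y <= 10, with x, y >= 0.
A_ub = [[0.4, 0.2],
        [0.2, 0.4]]
b_ub = [14, 10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # [30. 10.]
print(-res.fun)   # 340.0
```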
3.3 Linear programming
The n variables x1 , x2 , . . . , xn are constrained by m linear inequalities,

$$\begin{aligned} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &\le b_1 , \\ &\;\;\vdots \\ a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n &\le b_i , \\ &\;\;\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &\le b_m , \end{aligned}$$

where bi ∈ R and aij ∈ R for i = 1, . . . , m and j = 1, . . . , n . In matrix form these linear inequalities are

Ax ≤ b ,

where A = [aij ] is an m × n matrix, b = (b1 , . . . , bm ) and x = (x1 , . . . , xn ) . In many economics problems xi is zero or positive for all i = 1, 2, . . . , n so we have the additional constraints x ≥ 0 .

The objective function is a linear function of all xi ,

f (x) = f (x1 , x2 , . . . , xn ) = c1 x1 + c2 x2 + · · · + cn xn ,

for ci ∈ R for all i = 1, 2, . . . , n .
Definition 3.2. The standard form of a linear programming problem is: find the maximum value of the objective function f (x) for all x ∈ R^n satisfying the constraints Ax ≤ b and x ≥ 0 .

If the original problem asks for a minimum of the objective function, then we must multiply the objective function by −1 to state the problem in standard form. Similarly, for the constraints, all ‘≥’ must be converted to ‘≤’ (except x ≥ 0). Note that the standard form does not allow constraints with ‘<’ or ‘>’.
Example 3.6. . . . . . .
Definition 3.3. When the feasible region is empty there is no
feasible solution of the linear program and the linear program is
called infeasible.
Example 3.7. . . . . . .
3.4 Graphical method of solution

3.4.1 Classifying the feasible region
Definition 3.4. A vertex is a point where two straight lines meet.
In Section 3.2 we used linear programming to find the maximum profit for a cereal manufacturer. In the xy-plane the feasible region had four vertices at (0, 0) , (0, 25) , (30, 10) and (35, 0) .
Example 3.8. . . . . . .
Definition 3.5. A region is bounded when it is fully contained
within a finite region. If a circle can be drawn around a two
dimensional region, then it is bounded (similarly, if a sphere can
be drawn around a three dimensional region, then it is bounded).
Definition 3.6. A region is closed if it contains all its boundary
points. To be closed, all boundaries of the region should be defined
by ‘≤’ or ‘≥’, not ‘<’ and ‘>’ .
Example 3.9. . . . . . .
Now consider the figure:

[Figure: five regions labelled A, B, C, D and E plotted in the first quadrant of the xy-plane, for 0 ≤ x ≤ 20 and 0 ≤ y ≤ 12.]
In the above two dimensional plot for x, y ≥ 0: region A is bounded
but not closed and has four vertices; regions B and C are bounded
and closed, and both have five vertices; region D is not bounded but
closed and E is not bounded and not closed, and both have three
vertices. Note that the regions that are both closed and bounded
are always fully enclosed by solid lines.
3.4.2 Convex sets
Definition 3.7. A subset C of Rn is convex when the straight line segment joining any two points in C lies entirely within C.
[Figure: example shapes, labelled ‘convex’ and ‘not convex’.]
The above figure shows some shapes which are convex and nonconvex subsets of R2 (two dimensions). Some convex subsets of R3
(three dimensions) are a solid sphere, a solid pyramid and a solid
cube. Hollow objects are not convex. Note that convex shapes do
not have to be closed and bounded.
Example 3.10. . . . . . .
For all the non-convex shapes in the above figure, prove that they
are not convex.
Definition 3.8. A vertex of a closed convex set C is a point in C
which does not lie on a straight line joining any two other points
in C.
Example 3.11. . . . . . .
Find all the vertices of the closed convex shapes in the above figure.
3.4.3 Standard form and the convex feasible region
Theorem 3.1. The set {x ∈ Rn | Ax ≤ b , x ≥ 0} is a convex set.
That is, for a linear program written in standard form, the feasible
region is a convex set.
Proof. Let C = {x ∈ Rn | Ax ≤ b , x ≥ 0} for some A and b and
let x, y ∈ C .
We need to show that C is convex. That is, we need to show the
straight line which joins any x and y only contains points which
lie in C.
All points on the straight line between x and y are described by
z(v) = (1 − v)x + vy ,
for 0 ≤ v ≤ 1 (v is a real number) . So, we need to show that
z(v) ∈ C for all 0 ≤ v ≤ 1 . For z(v) to be in C, it needs to satisfy
Az(v) ≤ b and z(v) ≥ 0 .
Since x, y ≥ 0 and v, (1 − v) ≥ 0 we know that
z(v) = (1 − v)x + vy ≥ 0 .
For the other inequality:
Az(v) = A[(1 − v)x + vy]
= (1 − v)Ax + vAy .
But Ax ≤ b so (1 − v)Ax ≤ (1 − v)b . Similarly Ay ≤ b so
vAy ≤ vb . Therefore,
Az(v) = (1 − v)Ax + vAy
≤ (1 − v)b + vb = b.
So, Az(v) ≤ b and z(v) ≥ 0 so z(v) ∈ C for all 0 ≤ v ≤ 1 .
Since all points between x and y lie in C, C is a convex set.
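As a numerical illustration of this proof (a sketch, not one of the notes' examples), the code below takes the constraints of the example in Section 3.5.1 as the set C, picks two feasible points, and checks that sampled points z(v) on the segment between them remain in C.

import numpy as np

# C = {x >= 0 : Ax <= b}, with A and b borrowed from Section 3.5.1.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([70.0, 50.0])

def in_C(p):
    """True when p satisfies Ap <= b and p >= 0 (up to rounding)."""
    return bool(np.all(A @ p <= b + 1e-12) and np.all(p >= -1e-12))

x = np.array([10.0, 30.0])    # a feasible point (a vertex, in fact)
y = np.array([25.0, 0.0])     # another feasible point

assert in_C(x) and in_C(y)
# z(v) = (1 - v)x + vy should stay in C for every 0 <= v <= 1.
for v in np.linspace(0.0, 1.0, 11):
    assert in_C((1 - v) * x + v * y)
print("all sampled points on the segment lie in C")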
Theorem 3.2. For C a convex set, any maximum (or minimum) of a linear objective function f (x) over points x ∈ C will be at a vertex of C.
Theorems 3.1 and 3.2 tell us that, for a linear program which
can be written in standard form, if the objective function has a
maximum (or minimum) it will be at a vertex of the feasible region.
Therefore, when looking for the optimal solution of a linear program
in standard form we only need to test each vertex of the feasible
region to obtain the optimal solution. We do not need to search
the entire feasible region for the optimal solution.
Remark 3.3. The above theorems are true in any dimension.
Example 3.12. . . . . . .
Remark 3.4. Theorems 3.1 and 3.2 only say that if the optimal solution of a linear program exists, it will be at a vertex. The theorems do not tell us whether or not an optimal solution exists. In cases where the convex feasible region is not closed or not bounded an optimal solution is not guaranteed.
Example 3.13. . . . . . .
3.4.4 Feasible regions and objective function solutions
Assuming that the feasible region is not empty (the linear program is not infeasible), the following summarises the possible solutions of a linear optimisation problem, even if it cannot be written in standard form.
• When the feasible region is closed and bounded the objective function has both a maximum and a minimum value.
• When the feasible region is not closed or not bounded the objective function may have both a maximum and a minimum; a maximum but no minimum; a minimum but no maximum; or no solution.
• When there is an optimal solution for the objective function it will occur at a vertex of the feasible region.
• If the optimal solution occurs at two vertices, then the optimal solution also occurs at every point on the line connecting these two vertices. Note that there is never more than one optimal value (either a maximum or a minimum), but this optimal value can be attained at several points in the feasible region.
For example, find both the minimum and maximum of the objective
function f (x, y) = 2y − x subject to the constraints
−x/2 + y ≤ 2 ,
x + y ≤ 3,
3x + y < 6 ,
−x + y ≥ −1 ,
and x, y ≥ 0.
[Figure: the feasible region in the xy-plane, together with the level lines f (x, y) = 4 , 3/2 , 0 , −1/4 and −1 .]
The feasible region is bounded but not closed. The vertices are at (0, 0) , (0, 2) , (2/3, 7/3) , (3/2, 3/2) , (7/4, 3/4) and (1, 0) . In the
above figure, the five level lines of f (x, y) all pass through vertices of the feasible region.
The maximum of f (x, y) within the feasible region is 4. The maximum is obtained at all points between the two vertices (0, 2) and (2/3, 7/3) , described by the line −x + 2y = 4 for 0 ≤ x ≤ 2/3 and 2 ≤ y ≤ 7/3 . The minimum of f (x, y) within the feasible region is −1 and is obtained at the vertex (1, 0) .
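These values can be checked directly (a quick sketch, evaluating the objective at each vertex):

# Evaluate f(x, y) = 2y - x at each vertex of the feasible region.
vertices = [(0, 0), (0, 2), (2/3, 7/3), (3/2, 3/2), (7/4, 3/4), (1, 0)]
for x, y in vertices:
    print(f"f({x:.4g}, {y:.4g}) = {2*y - x:.4g}")
# Prints 0, 4, 4, 1.5, -0.25 and -1: the maximum 4 occurs at both
# (0, 2) and (2/3, 7/3), and the minimum -1 occurs at (1, 0).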
Example 3.14. . . . . . .
3.5 Algebraic method of solution
The graphical method of solution for linear programming is useful
for problems in two variables, but not useful for problems in more
than two variables. For more than two variables it is usually easier
to obtain a solution using algebra. In this section we only consider
problems which are in, or can be written in, standard form.
3.5.1 Obtaining vertices algebraically
Say we have a general linear programming problem written in
standard form. That is, we have n variables, x = (x1 , x2 , . . . , xn ) ≥
0 and m linear inequalities which, in matrix form, are Ax ≤ b
for A = [aij ] an m × n matrix and b an m dimensional vector.
We need to optimise some objective function f (x) , subject to the
constraints Ax ≤ b . We first discuss how to find the vertices of
the feasible region.
Introduce m slack variables, xn+1 , xn+2 , . . . , xn+m , and say that
these slack variables are such that when added to the inequalities,
Ax ≤ b , they give the equalities:
a11 x1 + a12 x2 + · · · + a1n xn + xn+1 = b1 ,
⋮
ai1 x1 + ai2 x2 + · · · + ain xn + xn+i = bi ,
⋮
am1 x1 + am2 x2 + · · · + amn xn + xn+m = bm .
This is a system of m linear equations in (n + m) unknowns. In
the standard form of a linear program, x1 , . . . , xn ≥ 0 . The same
is true for all the slack variables, xn+1 , . . . , xn+m ≥ 0 . Therefore,
xi ≥ 0 for all i = 1, 2, . . . , (n + m) .
Define a new column vector containing the original variables and
the slack variables, x′ = (x1 , x2 , . . . , xn , xn+1 , xn+2 , . . . , xn+m ) .
Also, define a new m × (n + m) matrix,

       [ a11  a12  · · ·  a1n  1  0  · · ·  0 ]
       [ a21  a22  · · ·  a2n  0  1  · · ·  0 ]
A′ =   [  ⋮    ⋮           ⋮          ⋱    ⋮ ]
       [ am1  am2  · · ·  amn  0  0  · · ·  1 ]

that is, the matrix A augmented with the m × m identity matrix.
The system of m linear equations in (n + m) unknowns is written
in matrix form as A′ x′ = b .
For example, for constraints
x1 + 2x2 ≤ 70 ,
2x1 + x2 ≤ 50 ,
x1 , x2 ≥ 0 ,
we have m = 2 and n = 2 . We introduce two slack variables, x3
and x4 , and write two linear equations in four unknowns,
x1 + 2x2 + x3 = 70 ,
2x1 + x2 + x4 = 50 ,
with xi ≥ 0 for i = 1, 2, 3, 4 . The figure below shows the feasible
region.
[Figure: the feasible region in the x1 x2 -plane, bounded by the lines x1 = 0 , x2 = 0 , x3 = 0 (that is, x1 + 2x2 = 70 ) and x4 = 0 (that is, 2x1 + x2 = 50 ), with candidate vertices marked as red dots.]
Each boundary of the feasible region is defined by a line xi = 0 for i = 1, 2, 3, 4 . All variables xi must be non-negative within the feasible region. The vertices (red dots) are where two variables are zero, that is, xi = xj = 0 for i ̸= j and i, j = 1, 2, 3, 4 , but notice that not all of these vertices are vertices of the feasible region.
Example 3.15. . . . . . .
Remark 3.5. An optimisation problem in n unknowns is an n dimensional problem. We add m slack variables to have a total of (n + m) variables. Each candidate vertex is defined by setting n of the variables to zero; the remaining m variables are then found by Gauss–Jordan elimination. There are a total of (n + m)!/(n!m!) candidate vertices, although some of them may not be in the feasible region. In the previous example n = m = 2 and there were 4!/(2!2!) = 6 candidate vertices, but only four were in the feasible region.
Definition 3.9. The m variables which are solved for with Gauss–Jordan elimination are called basic variables. The solution of these m basic variables together with the n zero variables is a basic solution. If all basic variables are zero or positive, then the solution lies in the feasible region and is called a basic feasible solution.
Theorem 3.3. The basic feasible solutions of A′ x′ = b are the
vertices of the convex set
C = {x ∈ Rn | Ax ≤ b , x ≥ 0},
where matrix A is order m × n and matrix A′ is order m × (n + m) .
The following steps summarise how to algebraically find the vertices of a feasible region defined by m constraints in n unknowns,
x1 , x2 , . . . , xn .
1. Introduce m slack variables, xn+1 , . . . , xn+m (one slack variable for each inequality), so that each of the m inequalities (≤) is converted into an equality (=). There are now (n + m) variables.
2. Write the m equalities as the matrix equation A′ x′ = b where A′ is order m × (n + m) , b is an m dimensional vector and x′ is an (n + m) dimensional vector.
3. Choose a vertex by setting n of the (n + m) variables to
zero. The remaining m variables (the basic variables) are
solved using Gauss–Jordan elimination on the augmented
matrix [A′ |b] (that is, just concentrate on those columns
associated with the basic variables).
4. Determine if the vertex describes a basic feasible solution.
Discard the vertex if it is outside the feasible region (that is,
if any xi is negative).
5. Repeat Steps 3 and 4 for all (n + m)!/(n!m!) vertices.
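The following Python sketch carries out these steps for the earlier example (constraints x1 + 2x2 ≤ 70 and 2x1 + x2 ≤ 50 ): it enumerates all 4!/(2!2!) = 6 candidate vertices and keeps the basic feasible solutions. It is an illustration of the procedure, not code from the notes.

from itertools import combinations
import numpy as np

# A'x' = b for the example: x1 + 2x2 + x3 = 70, 2x1 + x2 + x4 = 50.
A_prime = np.array([[1.0, 2.0, 1.0, 0.0],
                    [2.0, 1.0, 0.0, 1.0]])
b = np.array([70.0, 50.0])
total = A_prime.shape[1]      # n + m = 4 variables
m = A_prime.shape[0]          # m = 2 equations

# Steps 3-5: choose m variables to be basic (the rest are set to zero),
# solve for them, and keep the solution only if every variable is >= 0.
for basic in combinations(range(total), m):
    B = A_prime[:, basic]               # columns of the basic variables
    if abs(np.linalg.det(B)) < 1e-12:
        continue                        # no unique solution for this choice
    x = np.zeros(total)
    x[list(basic)] = np.linalg.solve(B, b)
    feasible = bool(np.all(x >= -1e-12))
    print(f"basic variables {basic}: x' = {x}, feasible: {feasible}")

Running the sketch reproduces Remark 3.5: six candidate vertices, of which four, (x1 , x2 ) = (0, 0) , (25, 0) , (0, 35) and (10, 30) , are basic feasible solutions.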
3.5.2 Obtaining the optimal solution
From Theorems 3.1, 3.2 and 3.3 we know that when an optimisation problem is written in standard form, the feasible region must be convex and the optimal solution (if it exists) is at a vertex of the convex feasible region. The vertices of the feasible region are the basic feasible solutions.
To find the optimal solution algebraically we find all basic feasible solutions (that is, the vertices of the feasible region), as shown in the previous section, then evaluate the objective function at each vertex; the optimal solution (if it exists) is the largest of these values.
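Continuing the sketch above with the hypothetical objective f = 3x1 + 2x2 used earlier, the optimal solution is found by evaluating f at each basic feasible solution:

# Evaluate the hypothetical objective f = 3*x1 + 2*x2 at each basic
# feasible solution found by the enumeration sketch above.
feasible_vertices = [(0, 0), (25, 0), (0, 35), (10, 30)]
best = max(feasible_vertices, key=lambda v: 3 * v[0] + 2 * v[1])
print(best, 3 * best[0] + 2 * best[1])   # (10, 30) with f = 90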
Example 3.16. . . . . . .
3.6 The simplex algorithm
An algorithm is a list of instructions for solving a specific problem.
The simplex algorithm solves an optimisation problem algebraically.
The simplex algorithm is similar to the algebraic method discussed
in the previous section, but it is more efficient because we usually
do not need to find all vertices of the feasible region.
We are given an optimisation problem in standard form: maximise
the objective function
f (x) = c1 x1 + c2 x2 + · · · + cn xn ,
subject to constraints
Ax ≤ b ,
x ≥ 0,
where matrix A = [aij ] is order m × n, x = (x1 , . . . , xn ) is an
n dimensional vector and b is an m dimensional vector. In addition
(to avoid some complications) we only consider b ≥ 0 . The steps
below describe the simplex algorithm procedure.
Step 1 (set up the simplex tableau): Introduce m slack variables, xn+1 , . . . , xn+m , so that the constraints become A′ x′ = b for
m × (n + m) order matrix A′ and x′ = (x1 , . . . , xn , xn+1 , . . . , xn+m ) .
Also, introduce the new variable z such that f (x) = z , giving the
new equation
−c1 x1 − c2 x2 − · · · − cn xn + z = 0 .
The simplex tableau is the augmented matrix with m + 1 rows (one for each equation) and n + m + 2 columns (one for each of the n + m + 1 variables and the last column for b):

[  a11   a12  · · ·   a1n   1  0  · · ·  0  0 │ b1 ]
[  a21   a22  · · ·   a2n   0  1  · · ·  0  0 │ b2 ]
[   ⋮     ⋮            ⋮          ⋱         ⋮ │  ⋮ ]
[  am1   am2  · · ·   amn   0  0  · · ·  1  0 │ bm ]
[ −c1   −c2   · · ·  −cn    0  0  · · ·  0  1 │ 0  ]
The columns to the left of the vertical line correspond to variables
x1 , . . . , xn , xn+1 , . . . , xn+m , z , respectively.
Example 3.17. . . . . . .
Step 2 (apply the optimality test): The last row of the simplex
tableau describes the objective function and is called the objective
row. The optimality test is: does the objective row contain no
negative numbers to the left of the vertical line? If the optimality
test is true, then we have found the optimal solution and the simplex
algorithm is complete. The optimal solution is described by the
basic solution of the simplex tableau. If the optimality test is false,
then we proceed to Step 3.
Example 3.18. . . . . . .
Step 3 (find the pivot column): Find the most negative number in the objective row (to the left of the vertical line). We choose this number because a unit increase in its corresponding variable causes a larger increase in the objective function than a unit increase in any other variable. The column containing this most negative number is called the pivot column.
Step 4 (find the pivot row): Find ratios by dividing each
element in the right-most column (to the right of the vertical line)
by the corresponding element in the pivot column. The pivot row
has the smallest non-negative ratio. We are interested in this
row because it describes the strongest constraint on the variable
corresponding to the pivot column. If there is no such row, then the
simplex algorithm stops because there is no optimal solution.
Example 3.19. . . . . . .
Step 5 (perform row operations): Perform row operations so that the element of the simplex tableau that is in both the pivot row and the pivot column is a 1 and the other elements in the pivot column are zero. This makes the variable associated with the pivot column as large as possible, given the constraints. Then return to Step 2.
We repeat Steps 2–5 until the optimality test is true (Step 2) or the
algorithm stops without a solution (Step 4). When we complete
Steps 2–5 once we have completed one iteration. At each iteration
we are maximising one of the variables (by choosing a pivot column),
while taking into account the strongest constraint on that variable
(by choosing a pivot row), and finally performing Gauss–Jordan
elimination to obtain a basic feasible solution.
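The sketch below implements Steps 2–5 for the running example (constraints x1 + 2x2 ≤ 70 and 2x1 + x2 ≤ 50 , hypothetical objective f = 3x1 + 2x2 ). It is a bare-bones illustration of the algorithm exactly as described above, with no handling of degenerate cases.

import numpy as np

# Step 1: simplex tableau for maximising f = 3*x1 + 2*x2 subject to
# x1 + 2*x2 <= 70 and 2*x1 + x2 <= 50 (columns: x1, x2, x3, x4, z | b).
T = np.array([[ 1.0,  2.0, 1.0, 0.0, 0.0, 70.0],
              [ 2.0,  1.0, 0.0, 1.0, 0.0, 50.0],
              [-3.0, -2.0, 0.0, 0.0, 1.0,  0.0]])

while True:
    obj = T[-1, :-1]
    # Step 2: optimality test -- stop when no negative entries remain.
    if np.all(obj >= 0):
        break
    # Step 3: the pivot column holds the most negative objective entry.
    col = int(np.argmin(obj))
    # Step 4: the pivot row has the smallest non-negative ratio b_i / T[i, col].
    ratios = [T[i, -1] / T[i, col] if T[i, col] > 0 else np.inf
              for i in range(T.shape[0] - 1)]
    if np.all(np.isinf(ratios)):
        raise RuntimeError("no optimal solution")   # algorithm stops (Step 4)
    row = int(np.argmin(ratios))
    # Step 5: make the pivot element 1 and clear the rest of its column.
    T[row] /= T[row, col]
    for i in range(T.shape[0]):
        if i != row:
            T[i] -= T[i, col] * T[row]

print(T[-1, -1])   # maximum value of f, here 90 (at x1 = 10, x2 = 30)

After two iterations the objective row contains no negative entries and the algorithm stops with the same optimum, f = 90 at (x1 , x2 ) = (10, 30) , as found by the algebraic method of Section 3.5.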
Example 3.20. . . . . . .
Remark 3.6. The simplex algorithm moves from basic feasible solution to basic feasible solution (or vertex to vertex) of the feasible region and stops when the optimal solution is reached (or the algorithm stops at Step 4). All vertices found by the simplex algorithm are feasible; we do not waste time evaluating vertices outside the feasible region. The optimality test in Step 2 checks whether the current basic feasible solution can still be improved. The row operations in Step 5 move the algorithm to a new feasible vertex (or basic feasible solution).
Example 3.21. . . . . . .
Example 3.22. . . . . . .
Example 3.23. . . . . . .
Example 3.24. . . . . . .
3.7 Problem formulation
When presented with an optimisation problem written entirely
in words, it is sometimes a daunting task to determine how to
formulate the problem mathematically. This section gives some
hints on how to approach such problems.
There are three ingredients in the type of optimisation problem that we have been considering:
• variables;
• objective function;
• constraints.
Each of these ingredients must be defined to formulate the problem.
Skill in problem formulation is best obtained by doing examples;
however, you can approach each problem by asking yourself the
following questions.
• What quantities can you vary in the problem? The question will often ask “How many of these . . . need to be produced?” These quantities will be your variables x1 , x2 , . . . . Note that the units are important (e.g., dollars, kilograms, ‘units’); you must state the units in your answer.
• What are you being asked to maximise or minimise? This is often the profit or cost. When you write it as a linear function of the variables x1 , x2 , . . . , it defines the objective function.
• What factors limit x1 , x2 , . . . ? These factors define the constraints.
Make sure you include all the provided information in the formulation of your problem.
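As a small illustration of collecting the three ingredients (a hypothetical problem, not one from the notes): suppose a factory makes two products, product 1 earning $3 per unit and product 2 earning $2 per unit, with machine time limiting production to x1 + 2x2 ≤ 70 hours and labour to 2x1 + x2 ≤ 50 hours. The formulation records each ingredient directly:

# Hypothetical formulation: variables, objective and constraints.
# Variables: x1, x2 = units of product 1 and product 2 produced.
c = [3, 2]        # objective: profit in dollars, f = 3*x1 + 2*x2
A = [[1, 2],      # machine-time constraint: x1 + 2*x2 <= 70 hours
     [2, 1]]      # labour constraint:       2*x1 + x2 <= 50 hours
b = [70, 50]
# Together with x1, x2 >= 0 this is the standard form of Section 3.3,
# ready for the graphical, algebraic or simplex method.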
Example 3.25. . . . . . .
Example 3.26. . . . . . .