Math 340 Karush-Kuhn-Tucker Conditions

A nonlinear program can be written as

NLP: max f(x)
subject to g1(x) ≤ 0
           g2(x) ≤ 0
           ...
           gk(x) ≤ 0

The functions f and gi are functions from Rⁿ to R. Note that any positivity constraints (e.g. xj ≥ 0, rewritten as −xj ≤ 0) must be included as just regular constraints among the gi.

The name Karush-Kuhn-Tucker is often applied to the following results, but they are often known just as Kuhn-Tucker, or sometimes as Karush-Kuhn-Tucker-Lagrange, a historical reference to Lagrange multipliers, which arise when the constraints are all equalities.

The KKT conditions are:

1) x is feasible (gi(x) ≤ 0 for i = 1, 2, ..., k) and we have multipliers µ = (µ1, µ2, µ3, ..., µk) ≥ 0,

2) ∇f(x) − µ1∇g1(x) − µ2∇g2(x) − ... − µk∇gk(x) = 0,

3) µ1g1(x) + µ2g2(x) + ... + µkgk(x) = 0.

(In condition 3, each term µigi(x) ≤ 0 since µi ≥ 0 and gi(x) ≤ 0, so the sum is 0 exactly when µigi(x) = 0 for each i; this is complementary slackness.)

The two theorems we have stated that refer to the KKT conditions are the following. Define:

Df(x) = {z : z is a feasible direction at x},
Dg(x) = {z : zᵀ∇gi(x) ≤ 0 for all i with gi(x) = 0}.

Theorem. Assume that f and the gi for i = 1, 2, ..., k are differentiable. Assume that the closure of Df(x) is Dg(x). Assume x is an optimal solution to the NLP above. Then there exists a vector µ such that x, µ satisfy the KKT conditions.

A variety of conditions other than "the closure of Df(x) is Dg(x)" also yield this result. In our cases, where the constraints are linear, this condition is automatically satisfied.

We define a function f to be concave if for any x, y and any λ with 0 ≤ λ ≤ 1,

f(λx + (1 − λ)y) ≥ λf(x) + (1 − λ)f(y).

We note that a linear function is concave (the definition holds with equality).

A set of points R is convex if for any x, y ∈ R and any λ with 0 ≤ λ ≤ 1,

λx + (1 − λ)y ∈ R.

We note that the feasible region for our NLP, {x ∈ Rⁿ : gi(x) ≤ 0 for i = 1, 2, ..., k}, will be convex if the constraints are all linear: each linear constraint defines a half-space, and an intersection of half-spaces is convex.

Theorem. Assume that f and the gi for i = 1, 2, ..., k are differentiable. Assume f is concave and that the feasible region is convex. Then if x, µ satisfy the KKT conditions, x is an optimal solution to the NLP above.
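As a small illustration of how the KKT conditions and the second theorem work together (the specific functions here are chosen only for illustration and are not from the results above), consider the following example.

Example. Consider the NLP

max f(x1, x2) = −x1² − x2²
subject to g1(x1, x2) = 1 − x1 − x2 ≤ 0.

Here f is concave and the single constraint is linear, so the feasible region is convex. We have ∇f(x) = (−2x1, −2x2) and ∇g1(x) = (−1, −1). Condition 2 reads

(−2x1, −2x2) − µ1(−1, −1) = (0, 0),

so µ1 = 2x1 = 2x2. Condition 3 reads µ1(1 − x1 − x2) = 0. If µ1 = 0 then x1 = x2 = 0, which is infeasible since g1(0, 0) = 1 > 0; so µ1 > 0 and hence x1 + x2 = 1. Together these give x1 = x2 = 1/2 and µ1 = 1 ≥ 0, and this x is feasible since g1(1/2, 1/2) = 0. All three KKT conditions hold, so by the second theorem x = (1/2, 1/2) is an optimal solution, with f(1/2, 1/2) = −1/2.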