The 5th International Conference on Control and Optimization with Industrial Applications, 27-29 August, 2015, Baku, Azerbaijan

ON WEAK SUBGRADIENTS IN NONCONVEX OPTIMIZATION AND OPTIMALITY CONDITIONS

Refail Kasimbeyli¹

¹ Department of Industrial Engineering, Anadolu University, Eskisehir, Turkey
e-mail: rkasimbeyli@anadolu.edu.tr

Consider the problem of minimizing a function $f: \mathbb{R}^n \to \mathbb{R}$ over a set $S \subseteq \mathbb{R}^n$. The following optimality condition is well known in nonsmooth convex analysis [1, Proposition 1.8.1, page 168]: if $f: \mathbb{R}^n \to \mathbb{R}$ is a convex function, then a vector $\bar{x}$ minimizes $f$ over a convex set $S \subseteq \mathbb{R}^n$ if and only if there exists a subgradient $x^* \in \partial f(\bar{x})$ such that
\[
x^*(x - \bar{x}) \ge 0 \quad \text{for all } x \in S, \tag{1}
\]
where
\[
\partial f(\bar{x}) = \{x^* \in \mathbb{R}^n : f(x) - f(\bar{x}) \ge x^*(x - \bar{x}) \text{ for all } x \in \mathbb{R}^n\} \tag{2}
\]
is the subdifferential of $f$ at $\bar{x}$. Although the above condition is valid for nonconvex functions as well, the subdifferential $\partial f(\cdot)$ is not a particularly helpful tool when $f$ is not convex, chiefly because the set (2) may then be empty. This makes it very tempting to look for definitions of generalized derivatives and subdifferentials for nonconvex functions.

The concepts of generalized differentiability appropriate for applications to optimization were first defined in convex analysis: geometrically, as the normal cone to a convex set, a notion that goes back to Minkowski [2], and, much later, analytically, as the subdifferential of an extended real-valued convex function. The latter notion, inspired by the work of Fenchel [3], was explicitly introduced by Moreau [4] and Rockafellar [5].

It is well known that every convex function $f: X \to \mathbb{R}$ on a Banach space $X$ admits the classical directional derivative
\[
f'(x; h) = \lim_{t \downarrow 0} \frac{f(x + th) - f(x)}{t}
\]
in every direction $h \in X$ at every point $x$ of its effective domain $\mathrm{dom}(f)$. This notion was generalized by many researchers, including Clarke [6], Rockafellar [7,8], and others. Writing $f^d$ for a generic directional derivative of this kind, the corresponding subdifferential of $f$ at $x$ is defined by
\[
\partial^d f(x) = \{x^* \in \mathbb{R}^n : f^d(x; h) \ge x^*(h) \text{ for all } h \in \mathbb{R}^n\}.
\]
This is the standard way to introduce subgradients via directional derivatives. For convex functions it recovers the classical subdifferential of convex analysis:
\[
\partial f(x) = \{x^* \in \mathbb{R}^n : f'(x; h) \ge x^*(h) \text{ for all } h \in \mathbb{R}^n\}.
\]

One of the main purposes of introducing these generalizations was to obtain optimality conditions for nonconvex problems. Note, however, that only the necessary part of the optimality condition (1) can be obtained in the nonconvex case by using the various subdifferential and normal cone generalizations. Since for nonconvex functions these generalizations do not satisfy the defining property (2) of the classical subgradient, one cannot expect an optimality condition similar to (1) in the nonconvex case. For example, let $f: \mathbb{R} \to \mathbb{R}$ be defined by $f(x) = -|x|$. Then Clarke's directional derivative is $f^0(0; h) = |h|$ for all $h \in \mathbb{R}$, and the Clarke subdifferential is $\partial^0 f(0) = [-1, 1]$. It is clear that hyperplanes with normal vectors (slopes, in this case) taken from this subdifferential cannot be used to support the epigraph of the function. A similar interpretation is valid for Mordukhovich's subdifferential, which for this function is $\partial_M f(0) = \{-1, 1\}$.
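Indeed, for $f(x) = -|x|$ no slope $x^* \in \mathbb{R}$ at all, and in particular none from $\partial^0 f(0) = [-1, 1]$, satisfies the subgradient inequality (2) at $\bar{x} = 0$: the inequality would require
\[
-|x| = f(x) - f(0) \ge x^* x \quad \text{for all } x \in \mathbb{R},
\]
yet any $x \ne 0$ with $x^* x \ge 0$ (take $x = x^*$ if $x^* \ne 0$, and any $x \ne 0$ otherwise) gives $-|x| < 0 \le x^* x$. Hence $\partial f(0) = \emptyset$ in the sense of (2), and the epigraph of $f$ admits no supporting hyperplane at $(0, f(0))$.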
The relation (2) for the classical subdifferential expresses the existence of a hyperplane supporting the epigraph of $f$ at the point under consideration. For nonconvex functions, such a supporting hyperplane may fail to exist, as the example above shows. Hence the nonconvex case becomes tractable only by changing the supporting philosophy and using suitable nonlinear supporting surfaces. Using a cone of concave functions (instead of the linear ones used in convex analysis), Gasimov [9] investigated duality relations and obtained optimality conditions for some classes of nonconvex optimization problems, in both single-objective and vector optimization. Azimov and Gasimov [10,11] constructed a duality scheme using a special class of concave functions, namely superlinear conic functions. The graph of such a function is a conical surface, which can serve as a supporting surface for a certain class of nonconvex sets. Using this class of superlinear functions, they introduced the concept of the weak subdifferential and derived a collection of optimality conditions and duality relations for a wide class of nonconvex optimization problems. Superlinear conic functions have also been applied to construct the so-called sharp augmented Lagrangian for nonconvex constrained optimization problems and to derive zero duality gap conditions.

In this paper, we study optimality conditions for nonconvex problems involving a special class of directionally differentiable functions. Using the weak subgradient notion, we generalize the necessary and sufficient optimality condition (1) to the nonconvex case. We show that a point $\bar{x}$ minimizes $f$ over a set $S \subseteq \mathbb{R}^n$ if and only if there exists a weak subgradient $(x^*, \alpha) \in \partial^w f(\bar{x})$ such that
\[
x^*(x - \bar{x}) + \alpha \|x - \bar{x}\| \ge 0 \quad \text{for all } x \in S.
\]
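For illustration, assume, as in [10], that a pair $(x^*, \alpha)$ with $\alpha \ge 0$ is a weak subgradient of $f$ at $\bar{x}$ when
\[
f(x) - f(\bar{x}) \ge x^*(x - \bar{x}) - \alpha \|x - \bar{x}\| \quad \text{for all } x,
\]
so that the graph of the conic function $x \mapsto f(\bar{x}) + x^*(x - \bar{x}) - \alpha\|x - \bar{x}\|$ supports the epigraph of $f$ from below. Consider again $f(x) = -|x|$ with $S = [-1, 1]$; the point $\bar{x} = 1$ is a global minimizer of $f$ over $S$. The pair $(x^*, \alpha) = (-1, 2)$ belongs to $\partial^w f(1)$: the defining inequality reads $1 - x \ge 3 - 3x$ for $x \ge 1$, $1 - x \ge x - 1$ for $0 \le x \le 1$, and $1 + x \ge x - 1$ for $x < 0$, and all three hold. Moreover, $|x - 1| = 1 - x$ for every $x \in S$, so
\[
x^*(x - \bar{x}) + \alpha \|x - \bar{x}\| = (1 - x) + 2(1 - x) = 3(1 - x) \ge 0 \quad \text{for all } x \in S,
\]
and the pair $(-1, 2)$ certifies the minimality of $\bar{x} = 1$ in the sense of the above condition, even though $\partial f(1)$ in the sense of (2) is empty.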
Keywords: directional derivative, weak subgradient, nonconvex optimization, duality, optimality conditions.

AMS Subject Classification: 90C26, 90C30, 90C46.

1. Bertsekas D.P., Nedic A., Ozdaglar A.E., Convexity, Duality and Lagrange Multipliers, Lecture Notes, MIT (2001).
2. Minkowski H., Theorie der konvexen Körper, insbesondere Begründung ihres Oberflächenbegriffs, in: Gesammelte Abhandlungen, II, B.G. Teubner, Leipzig (1911).
3. Fenchel W., Convex Cones, Sets and Functions, Lecture Notes, Princeton University, Princeton, New Jersey (1951).
4. Moreau J.-J., Fonctionnelles sous-différentiables, C. R. Acad. Sci. Paris 257 (1963) 4117–4119.
5. Rockafellar R.T., Convex Functions and Dual Extremum Problems, Ph.D. Dissertation, Department of Mathematics, Harvard University, Cambridge, Massachusetts (1963).
6. Clarke F.H., Generalized gradients and applications, Trans. Amer. Math. Soc. 205 (1975) 247–262.
7. Rockafellar R.T., Directionally Lipschitzian functions and subdifferential calculus, Proc. London Math. Soc. 39 (1979) 331–355.
8. Rockafellar R.T., The Theory of Subgradients and Its Applications to Problems of Optimization: Convex and Nonconvex Functions, Heldermann Verlag, Berlin (1981).
9. Gasimov R.N., Duality in Nonconvex Optimization, Ph.D. Dissertation, Department of Operations Research and Mathematical Modeling, Baku State University, Baku (1992).
10. Azimov A.Y., Gasimov R.N., On weak conjugacy, weak subdifferentials and duality with zero gap in nonconvex optimization, Int. J. Appl. Math. 1 (1999) 171–192.
11. Azimov A.Y., Gasimov R.N., Stability and duality of nonconvex problems via augmented Lagrangian, Cybernet. Systems Anal. 3 (2002) 120–130.