Hindawi Publishing Corporation
Journal of Inequalities and Applications
Volume 2010, Article ID 363012, 13 pages
doi:10.1155/2010/363012

Research Article

ε-Duality Theorems for Convex Semidefinite Optimization Problems with Conic Constraints

Gue Myung Lee and Jae Hyoung Lee

Department of Applied Mathematics, Pukyong National University, Pusan 608-737, South Korea

Correspondence should be addressed to Gue Myung Lee, gmlee@pknu.ac.kr

Received 30 October 2009; Accepted 10 December 2009

Academic Editor: Yeol Je Cho

Copyright © 2010 G. M. Lee and J. H. Lee. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A convex semidefinite optimization problem with a conic constraint is considered. We formulate a Wolfe-type dual problem for the problem for its ε-approximate solutions, and then we prove ε-weak and ε-strong duality theorems which hold between the problem and its Wolfe-type dual problem. Moreover, we give an example illustrating the duality theorems.

1. Introduction

A convex semidefinite optimization problem is a problem of optimizing a convex objective function over a linear matrix inequality. When the objective function is linear and the corresponding matrices are diagonal, this problem becomes a linear optimization problem. For convex semidefinite optimization problems, Lagrangean duality without constraint qualification [1, 2], complete dual characterization conditions of solutions [1, 3, 4], saddle point theorems [5], and characterizations of optimal solution sets [6, 7] have been investigated.

To treat ε-approximate solutions, many authors have established ε-optimality conditions, ε-saddle point theorems, and ε-duality theorems for several kinds of optimization problems [1, 8–16]. Recently, Jeyakumar and Glover [11] gave ε-optimality conditions for convex optimization problems which hold without any constraint qualification. Yokoyama and Shiraishi [16] gave a special case of a convex optimization problem which satisfies ε-optimality conditions. Kim and Lee [12] proved sequential ε-saddle point theorems and ε-duality theorems for convex semidefinite optimization problems which do not have conic constraints.

The purpose of this paper is to extend the ε-duality theorems of Kim and Lee [12] to convex semidefinite optimization problems with conic constraints. We formulate a Wolfe-type dual problem for the problem for its ε-approximate solutions, and then prove an ε-weak duality theorem and an ε-strong duality theorem for the problem and its Wolfe-type dual problem, which hold under a weakened constraint qualification. Moreover, we give an example illustrating the duality theorems.

2. Preliminaries

Consider the following convex semidefinite optimization problem:

$$\text{(SDP)} \qquad \text{Minimize } f(x) \quad \text{subject to } F_0 + \sum_{i=1}^m x_i F_i \succeq 0,\ \ x = (x_1, x_2, \ldots, x_m) \in C, \quad (2.1)$$

where $f : \mathbb{R}^m \to \mathbb{R}$ is a convex function, $C$ is a closed convex cone of $\mathbb{R}^m$, and, for $i = 0, 1, \ldots, m$, $F_i \in S^n$, where $S^n$ is the space of $n \times n$ real symmetric matrices. The space $S^n$ is partially ordered by the Löwner order, that is, for $M, N \in S^n$, $M \succeq N$ if and only if $M - N$ is positive semidefinite. The inner product in $S^n$ is defined by $\langle M, N \rangle = \operatorname{Tr}(MN)$, where $\operatorname{Tr}(\cdot)$ is the trace operation. Let $S := \{ M \in S^n \mid M \succeq 0 \}$. Then $S$ is self-dual, that is,

$$\{ \theta \in S^n \mid \langle \theta, Z \rangle \geq 0 \ \text{for any } Z \in S \} = S. \quad (2.2)$$

Let $F(x) := F_0 + \sum_{i=1}^m x_i F_i$ and $\widehat{F}(x) := \sum_{i=1}^m x_i F_i$ for $x = (x_1, \ldots, x_m) \in \mathbb{R}^m$.
Then $\widehat{F}$ is a linear operator from $\mathbb{R}^m$ to $S^n$ and its dual is defined by

$$\widehat{F}^*(Z) = \big( \operatorname{Tr}(F_1 Z), \ldots, \operatorname{Tr}(F_m Z) \big) \quad (2.3)$$

for any $Z \in S^n$. Clearly, $A := \{ x \in C \mid F(x) \in S \}$ is the feasible set of (SDP).

Definition 2.1. Let $g : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}$ be a convex function.

(1) The subdifferential of $g$ at $a \in \operatorname{dom} g$, where $\operatorname{dom} g = \{ x \in \mathbb{R}^n \mid g(x) < +\infty \}$, is given by

$$\partial g(a) = \{ v \in \mathbb{R}^n \mid g(x) \geq g(a) + \langle v, x - a \rangle,\ \forall x \in \mathbb{R}^n \}, \quad (2.4)$$

where $\langle \cdot, \cdot \rangle$ is the scalar product on $\mathbb{R}^n$.

(2) Let $\epsilon \geq 0$. The ε-subdifferential of $g$ at $a \in \operatorname{dom} g$ is given by

$$\partial_\epsilon g(a) = \{ v \in \mathbb{R}^n \mid g(x) \geq g(a) + \langle v, x - a \rangle - \epsilon,\ \forall x \in \mathbb{R}^n \}. \quad (2.5)$$

Definition 2.2. Let $\epsilon \geq 0$. Then $\bar{x} \in A$ is called an ε-approximate solution of (SDP) if, for any $x \in A$,

$$f(x) \geq f(\bar{x}) - \epsilon. \quad (2.6)$$

Definition 2.3. The conjugate function of a function $g : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}$ is defined by

$$g^*(v) = \sup \{ \langle v, x \rangle - g(x) \mid x \in \mathbb{R}^n \}. \quad (2.7)$$

Definition 2.4. The epigraph of a function $g : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}$, $\operatorname{epi} g$, is defined by

$$\operatorname{epi} g = \{ (x, r) \in \mathbb{R}^n \times \mathbb{R} \mid g(x) \leq r \}. \quad (2.8)$$

If $g$ is sublinear (i.e., convex and positively homogeneous of degree one), then $\partial_\epsilon g(0) = \partial g(0)$ for all $\epsilon \geq 0$. If $\tilde{g}(x) = g(x) - k$, $x \in \mathbb{R}^n$, $k \in \mathbb{R}$, then $\operatorname{epi} \tilde{g}^* = \operatorname{epi} g^* + (0, k)$. It is worth noting that if $g$ is sublinear, then

$$\operatorname{epi} g^* = \partial g(0) \times \mathbb{R}_+. \quad (2.9)$$

Moreover, if $g$ is sublinear and if $\tilde{g}(x) = g(x) - k$, $x \in \mathbb{R}^n$, and $k \in \mathbb{R}$, then

$$\operatorname{epi} \tilde{g}^* = \partial g(0) \times [k, +\infty). \quad (2.10)$$

Definition 2.5. Let $C$ be a closed convex set in $\mathbb{R}^n$ and $\bar{x} \in C$.

(1) Let $N_C(\bar{x}) = \{ v \in \mathbb{R}^n \mid \langle v, y - \bar{x} \rangle \leq 0 \ \text{for all } y \in C \}$. Then $N_C(\bar{x})$ is called the normal cone to $C$ at $\bar{x}$.

(2) Let $\epsilon \geq 0$ and let $N_C^\epsilon(\bar{x}) = \{ v \in \mathbb{R}^n \mid \langle v, y - \bar{x} \rangle \leq \epsilon \ \text{for all } y \in C \}$. Then $N_C^\epsilon(\bar{x})$ is called the ε-normal set to $C$ at $\bar{x}$.

(3) When $C$ is a closed convex cone in $\mathbb{R}^n$, $N_C(0)$ is denoted by $C^*$ and is called the negative dual cone of $C$.

Proposition 2.6 (see [17, 18]). Let $f : \mathbb{R}^n \to \mathbb{R}$ be a convex function and let $\delta_C$ be the indicator function with respect to a closed convex subset $C$ of $\mathbb{R}^n$, that is, $\delta_C(x) = 0$ if $x \in C$ and $\delta_C(x) = +\infty$ if $x \notin C$. Let $\epsilon \geq 0$. Then

$$\partial_\epsilon (f + \delta_C)(\bar{x}) = \bigcup_{\substack{\epsilon_0 \geq 0,\ \epsilon_1 \geq 0 \\ \epsilon_0 + \epsilon_1 = \epsilon}} \big( \partial_{\epsilon_0} f(\bar{x}) + \partial_{\epsilon_1} \delta_C(\bar{x}) \big). \quad (2.11)$$

Proposition 2.7 (see [7]). Let $g : \mathbb{R}^n \to \mathbb{R}$ be a continuous convex function and let $h : \mathbb{R}^n \to \mathbb{R} \cup \{ +\infty \}$ be a proper lower semicontinuous convex function. Then

$$\operatorname{epi} (g + h)^* = \operatorname{epi} g^* + \operatorname{epi} h^*. \quad (2.12)$$

Following the proof of Lemma 2.2 in [1], we can prove the following lemma.

Lemma 2.8. Let $F_i \in S^n$, $i = 0, 1, \ldots, m$. Suppose that $A \neq \emptyset$. Let $u \in \mathbb{R}^m$ and $\alpha \in \mathbb{R}$. Then the following are equivalent:

(i) $\big\{ x \in C \mid F_0 + \sum_{i=1}^m x_i F_i \succeq 0 \big\} \subset \{ x \in \mathbb{R}^m \mid \langle u, x \rangle \geq \alpha \}$;

(ii) $(u, \alpha) \in \operatorname{cl} \Big( \bigcup_{(Z, \delta) \in S \times \mathbb{R}_+} \big[ \big( \widehat{F}^* Z,\ -\operatorname{Tr}(Z F_0) - \delta \big) \big] - C^* \times \mathbb{R}_+ \Big). \quad (2.13)$
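Before turning to the duality theorems, it may help to see the operator $\widehat{F}$, its dual $\widehat{F}^*$, and the feasible set $A$ in concrete form. The following small numerical sketch is not part of the original development; it assumes NumPy, uses matrices $F_0, F_1, F_2$ chosen only for demonstration, and simply checks the defining identity $\langle \widehat{F}^*(Z), x \rangle = \operatorname{Tr}(Z \widehat{F}(x))$ together with membership of a point in $A = \{ x \in C \mid F(x) \in S \}$ for $C = \mathbb{R}^2_+$.

```python
import numpy as np

# Illustrative data (m = 2, n = 2); the matrices are chosen only for demonstration.
F0 = np.array([[1.0, 0.0], [0.0, 1.0]])
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
F2 = np.array([[1.0, 0.0], [0.0, -1.0]])
Fs = [F1, F2]

def F(x):
    # F(x) = F0 + sum_i x_i F_i
    return F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))

def F_hat(x):
    # F_hat(x) = sum_i x_i F_i (the linear part of F)
    return sum(xi * Fi for xi, Fi in zip(x, Fs))

def F_hat_adj(Z):
    # Dual operator: F_hat*(Z) = (Tr(F_1 Z), ..., Tr(F_m Z))
    return np.array([np.trace(Fi @ Z) for Fi in Fs])

def in_S(M, tol=1e-9):
    # Membership in S, i.e., M is positive semidefinite (Loewner order M >= 0)
    return np.min(np.linalg.eigvalsh(M)) >= -tol

# Identity <F_hat*(Z), x> = Tr(Z F_hat(x)) on random data.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
Z = rng.standard_normal((2, 2))
Z = (Z + Z.T) / 2                       # symmetrize so that Z lies in S^n
assert abs(F_hat_adj(Z) @ x - np.trace(Z @ F_hat(x))) < 1e-10

# Feasibility in A = {x in C | F(x) in S} with C = R^2_+.
x_try = np.array([0.3, 0.2])
print(bool(np.all(x_try >= 0) and in_S(F(x_try))))   # True for this data
```

The identity holds up to rounding because $\langle \widehat{F}^*(Z), x \rangle = \sum_i x_i \operatorname{Tr}(F_i Z) = \operatorname{Tr}\!\big( Z \sum_i x_i F_i \big) = \operatorname{Tr}(Z \widehat{F}(x))$ by linearity of the trace.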
3. ε-Duality Theorems

Now we give ε-duality theorems for (SDP). Using Lemma 2.8, we can obtain the following lemma, which is useful in proving our ε-strong duality theorem for (SDP).

Lemma 3.1. Let $\epsilon \geq 0$ and let $\bar{x} \in A$. Suppose that

$$\bigcup_{(Z, \delta) \in S \times \mathbb{R}_+} \big[ \big( \widehat{F}^* Z,\ -\operatorname{Tr}(Z F_0) - \delta \big) \big] - C^* \times \mathbb{R}_+ \quad (3.1)$$

is closed. Then $\bar{x}$ is an ε-approximate solution of (SDP) if and only if there exists $\bar{Z} \in S$ such that, for any $x \in C$,

$$f(x) - \operatorname{Tr}(\bar{Z} F(x)) \geq f(\bar{x}) - \epsilon. \quad (3.2)$$

Proof. (⇒) Let $\bar{x}$ be an ε-approximate solution of (SDP). Then $f(x) \geq f(\bar{x}) - \epsilon$ for any $x \in A$. Let $h(x) = f(x) - (f(\bar{x}) - \epsilon)$. Then $h(x) + \delta_A(x) \geq 0$ for any $x \in \mathbb{R}^m$. Thus we have, from Proposition 2.7,

$$(0, 0) \in \operatorname{epi}(h + \delta_A)^* = \operatorname{epi} h^* + \operatorname{epi} \delta_A^* = \operatorname{epi} f^* + \big( 0, f(\bar{x}) - \epsilon \big) + \operatorname{epi} \delta_A^*, \quad (3.3)$$

and hence $\big( 0, -f(\bar{x}) + \epsilon \big) \in \operatorname{epi} f^* + \operatorname{epi} \delta_A^*$. So there exists $(u, r) \in \operatorname{epi} f^*$ such that $\big( -u, -f(\bar{x}) + \epsilon - r \big) \in \operatorname{epi} \delta_A^*$, and hence there exists $(u, r) \in \operatorname{epi} f^*$ such that $\langle -u, x \rangle \leq -f(\bar{x}) + \epsilon - r$ for any $x \in A$. Since $f^*(u) \leq r$, $\langle u, x \rangle \geq f(\bar{x}) - \epsilon + f^*(u)$ for any $x \in A$, and hence it follows from Lemma 2.8 that

$$\big( u,\ f(\bar{x}) - \epsilon + f^*(u) \big) \in \bigcup_{(Z, \delta) \in S \times \mathbb{R}_+} \big[ \big( \widehat{F}^* Z,\ -\operatorname{Tr}(Z F_0) - \delta \big) \big] - C^* \times \mathbb{R}_+. \quad (3.4)$$

Thus there exist $(Z, \delta) \in S \times \mathbb{R}_+$, $c^* \in C^*$, and $\gamma \in \mathbb{R}_+$ such that

$$u = \widehat{F}^* Z - c^*, \qquad f(\bar{x}) - \epsilon + f^*(u) = -\operatorname{Tr}(Z F_0) - \delta - \gamma. \quad (3.5)$$

This gives

$$\langle \widehat{F}^* Z, x \rangle - \langle c^*, x \rangle - f(x) = \langle u, x \rangle - f(x) \leq f^*(u) = -\operatorname{Tr}(Z F_0) - \delta - \gamma - f(\bar{x}) + \epsilon \quad (3.6)$$

for any $x \in \mathbb{R}^m$. Thus we have

$$
\begin{aligned}
f(\bar{x}) - \epsilon &\leq \langle -u, x \rangle + f(x) - \operatorname{Tr}(Z F_0) - \delta - \gamma \\
&= f(x) - \langle \widehat{F}^* Z, x \rangle + \langle c^*, x \rangle - \operatorname{Tr}(Z F_0) - \delta - \gamma \\
&= f(x) - \operatorname{Tr}(Z F(x)) + \langle c^*, x \rangle - \delta - \gamma \\
&\leq f(x) - \operatorname{Tr}(Z F(x))
\end{aligned} \quad (3.7)
$$

for any $x \in C$.

(⇐) Suppose that there exists $\bar{Z} \in S$ such that, for any $x \in C$,

$$f(x) - \operatorname{Tr}(\bar{Z} F(x)) \geq f(\bar{x}) - \epsilon. \quad (3.8)$$

Then we have, for any $x \in A$,

$$f(x) \geq f(x) - \operatorname{Tr}(\bar{Z} F(x)) \geq f(\bar{x}) - \epsilon. \quad (3.9)$$

Thus $f(x) \geq f(\bar{x}) - \epsilon$ for any $x \in A$. Hence $\bar{x}$ is an ε-approximate solution of (SDP).

Now we formulate the dual problem (SDD) of (SDP) as follows:

$$\text{(SDD)} \qquad \text{Maximize } f(x) - \operatorname{Tr}(Z F(x)) \quad \text{subject to } 0 \in \partial_{\epsilon_0} f(x) - \widehat{F}^* Z + N_C^{\epsilon_1}(x),\ Z \succeq 0,\ \epsilon_0 \geq 0,\ \epsilon_1 \geq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon]. \quad (3.10)$$

We prove ε-weak and ε-strong duality theorems which hold between (SDP) and (SDD).

Theorem 3.2 (ε-weak duality). For any feasible solution $x$ of (SDP) and any feasible solution $(y, Z)$ of (SDD),

$$f(x) \geq f(y) - \operatorname{Tr}(Z F(y)) - \epsilon. \quad (3.11)$$

Proof. Let $x$ and $(y, Z)$ be feasible solutions of (SDP) and (SDD), respectively. Then $\operatorname{Tr}(Z F(x)) \geq 0$, and there exist $v \in \partial_{\epsilon_0} f(y)$ and $\omega \in N_C^{\epsilon_1}(y)$ such that $v = -\omega + \widehat{F}^* Z$. Thus we have

$$
\begin{aligned}
f(x) - \big( f(y) - \operatorname{Tr}(Z F(y)) \big) &\geq \langle v, x - y \rangle - \epsilon_0 + \operatorname{Tr}(Z F(y)) \\
&= \langle -\omega + \widehat{F}^* Z,\ x - y \rangle - \epsilon_0 + \operatorname{Tr}(Z F(y)) \\
&\geq \langle \widehat{F}^* Z, x - y \rangle - \epsilon_0 - \epsilon_1 + \operatorname{Tr}(Z F(y)) \\
&= \langle \widehat{F}^* Z, x \rangle - \langle \widehat{F}^* Z, y \rangle - \epsilon_0 - \epsilon_1 + \operatorname{Tr}(Z F(y)) \\
&= \operatorname{Tr}\Big( Z \sum_{i=1}^m x_i F_i \Big) - \operatorname{Tr}\Big( Z \sum_{i=1}^m y_i F_i \Big) - \epsilon_0 - \epsilon_1 + \operatorname{Tr}(Z F_0) + \operatorname{Tr}\Big( Z \sum_{i=1}^m y_i F_i \Big) \\
&= \operatorname{Tr}(Z F(x)) - \epsilon_0 - \epsilon_1 \\
&\geq -\epsilon_0 - \epsilon_1 \\
&\geq -\epsilon.
\end{aligned} \quad (3.12)
$$

Hence $f(x) \geq f(y) - \operatorname{Tr}(Z F(y)) - \epsilon$.

Theorem 3.3 (ε-strong duality). Suppose that

$$\bigcup_{(Z, \delta) \in S \times \mathbb{R}_+} \big[ \big( \widehat{F}^* Z,\ -\operatorname{Tr}(Z F_0) - \delta \big) \big] - C^* \times \mathbb{R}_+ \quad (3.13)$$

is closed. If $\bar{x}$ is an ε-approximate solution of (SDP), then there exists $\bar{Z} \in S$ such that $(\bar{x}, \bar{Z})$ is a 2ε-approximate solution of (SDD).

Proof. Let $\bar{x} \in A$ be an ε-approximate solution of (SDP). Then $f(x) \geq f(\bar{x}) - \epsilon$ for any $x \in A$. By Lemma 3.1, there exists $\bar{Z} \in S$ such that

$$f(x) - \operatorname{Tr}(\bar{Z} F(x)) \geq f(\bar{x}) - \epsilon \quad (3.14)$$

for any $x \in C$. Letting $x = \bar{x}$ in (3.14), $\operatorname{Tr}(\bar{Z} F(\bar{x})) \leq \epsilon$. Since $F(\bar{x}) \in S$ and $\bar{Z} \in S$, $\operatorname{Tr}(\bar{Z} F(\bar{x})) \geq 0$. Thus, from (3.14),

$$f(x) - \operatorname{Tr}(\bar{Z} F(x)) \geq f(\bar{x}) - \operatorname{Tr}(\bar{Z} F(\bar{x})) - \epsilon \quad (3.15)$$

for any $x \in C$. Hence $\bar{x}$ is an ε-approximate solution of the following problem:

$$\text{Minimize } f(x) - \operatorname{Tr}(\bar{Z} F(x)) \quad \text{subject to } x \in C, \quad (3.16)$$

and so $0 \in \partial_\epsilon \big( f - \langle \widehat{F}^* \bar{Z}, \cdot \rangle + \delta_C \big)(\bar{x})$, and hence, by Proposition 2.6, there exist $\epsilon_0 \geq 0$ and $\epsilon_1 \geq 0$ such that $\epsilon_0 + \epsilon_1 = \epsilon$ and

$$0 \in \partial_{\epsilon_0} f(\bar{x}) - \widehat{F}^* \bar{Z} + N_C^{\epsilon_1}(\bar{x}). \quad (3.17)$$

So $(\bar{x}, \bar{Z})$ is a feasible solution of (SDD). For any feasible solution $(y, Z)$ of (SDD),

$$
\begin{aligned}
f(\bar{x}) - \operatorname{Tr}(\bar{Z} F(\bar{x})) - \big( f(y) - \operatorname{Tr}(Z F(y)) \big) &\geq -\epsilon - \operatorname{Tr}(\bar{Z} F(\bar{x})) \quad (\text{by ε-weak duality}) \\
&\geq -\epsilon - \epsilon \\
&= -2\epsilon.
\end{aligned} \quad (3.18)
$$

Thus $(\bar{x}, \bar{Z})$ is a 2ε-approximate solution of (SDD).

Now we characterize the ε-normal set to $\mathbb{R}^n_+$.

Proposition 3.4. Let $(\bar{x}_1, \ldots, \bar{x}_n) \in \mathbb{R}^n_+$ and $\epsilon \geq 0$. Then

$$N_{\mathbb{R}^n_+}^\epsilon(\bar{x}_1, \ldots, \bar{x}_n) = \bigcup_{\substack{\epsilon_i \geq 0,\ i = 1, \ldots, n \\ \sum_{i=1}^n \epsilon_i = \epsilon}} \prod_{i=1}^n A_i, \quad (3.19)$$

where

$$A_i = \begin{cases} -\mathbb{R}_+ & \text{if } \bar{x}_i = 0, \\[4pt] \Big[ -\dfrac{\epsilon_i}{\bar{x}_i},\, 0 \Big] & \text{if } \bar{x}_i > 0. \end{cases} \quad (3.20)$$

Proof. Let $(\bar{x}_1, \ldots, \bar{x}_n) \in \mathbb{R}^n_+$ and $\epsilon \geq 0$. Then, by Proposition 2.6 applied repeatedly,

$$N_{\mathbb{R}^n_+}^\epsilon(\bar{x}_1, \ldots, \bar{x}_n) = \partial_\epsilon \delta_{\mathbb{R}^n_+}(\bar{x}_1, \ldots, \bar{x}_n) = \partial_\epsilon \Big( \sum_{i=1}^n \delta_{\mathbb{R} \times \cdots \times \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R} \times \cdots \times \mathbb{R}} \Big)(\bar{x}_1, \ldots, \bar{x}_n) = \bigcup_{\substack{\epsilon_i \geq 0 \\ \sum_{i=1}^n \epsilon_i = \epsilon}} \sum_{i=1}^n \partial_{\epsilon_i} \delta_{\mathbb{R} \times \cdots \times \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R} \times \cdots \times \mathbb{R}}(\bar{x}_1, \ldots, \bar{x}_n), \quad (3.21)$$

where in each summand $\mathbb{R}_+$ occupies the $i$th position. Moreover,

$$
\begin{aligned}
&(v_1, \ldots, v_n) \in \partial_{\epsilon_i} \delta_{\mathbb{R} \times \cdots \times \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R} \times \cdots \times \mathbb{R}}(\bar{x}_1, \ldots, \bar{x}_n), \ \text{where } \mathbb{R}_+ \text{ is at the } i\text{th position}, \\
&\iff \text{for any } (y_1, \ldots, y_n) \in \mathbb{R} \times \cdots \times \mathbb{R} \times \mathbb{R}_+ \times \mathbb{R} \times \cdots \times \mathbb{R}, \quad \epsilon_i \geq v_1 (y_1 - \bar{x}_1) + \cdots + v_i (y_i - \bar{x}_i) + \cdots + v_n (y_n - \bar{x}_n), \\
&\iff \text{for any } y_i \in \mathbb{R}_+, \quad \epsilon_i \geq v_i (y_i - \bar{x}_i), \ \text{and } v_j = 0 \ \text{for } j \in \{1, \ldots, n\} \setminus \{i\}, \\
&\iff v_i \in \begin{cases} -\mathbb{R}_+ & \text{if } \bar{x}_i = 0, \\ \big[ -\epsilon_i / \bar{x}_i,\, 0 \big] & \text{if } \bar{x}_i > 0, \end{cases} \quad \text{and } v_j = 0 \ \text{for } j \in \{1, \ldots, n\} \setminus \{i\}.
\end{aligned} \quad (3.22)
$$

Thus we have

$$N_{\mathbb{R}^n_+}^\epsilon(\bar{x}_1, \ldots, \bar{x}_n) = \bigcup_{\substack{\epsilon_i \geq 0 \\ \sum_{i=1}^n \epsilon_i = \epsilon}} \sum_{i=1}^n \{0\} \times \cdots \times \{0\} \times A_i \times \{0\} \times \cdots \times \{0\} = \bigcup_{\substack{\epsilon_i \geq 0 \\ \sum_{i=1}^n \epsilon_i = \epsilon}} \prod_{i=1}^n A_i. \quad (3.23)$$

From Proposition 3.4, we can calculate $N_{\mathbb{R}^2_+}^\epsilon$.

Corollary 3.5. Let $(\bar{x}_1, \bar{x}_2) \in \mathbb{R}^2_+$ and $\epsilon \geq 0$. Then the following hold.

(i) If $(\bar{x}_1, \bar{x}_2) = (0, 0)$, then $N_{\mathbb{R}^2_+}^\epsilon(0, 0) = -\mathbb{R}^2_+$.

(ii) If $(\bar{x}_1, \bar{x}_2) = (\bar{x}_1, 0)$ and $\bar{x}_1 > 0$, then $N_{\mathbb{R}^2_+}^\epsilon(\bar{x}_1, 0) = [-\epsilon / \bar{x}_1, 0] \times (-\infty, 0]$.

(iii) If $(\bar{x}_1, \bar{x}_2) = (0, \bar{x}_2)$ and $\bar{x}_2 > 0$, then $N_{\mathbb{R}^2_+}^\epsilon(0, \bar{x}_2) = (-\infty, 0] \times [-\epsilon / \bar{x}_2, 0]$.

(iv) If $\bar{x}_1 > 0$ and $\bar{x}_2 > 0$, then

$$N_{\mathbb{R}^2_+}^\epsilon(\bar{x}_1, \bar{x}_2) = \bigcup_{\substack{\epsilon_1 \geq 0,\ \epsilon_2 \geq 0 \\ \epsilon_1 + \epsilon_2 = \epsilon}} \Big[ -\frac{\epsilon_1}{\bar{x}_1},\, 0 \Big] \times \Big[ -\frac{\epsilon_2}{\bar{x}_2},\, 0 \Big]. \quad (3.24)$$
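Corollary 3.5 is applied repeatedly in the example below, so a quick sanity check may be useful. The following sketch is not from the paper; it assumes NumPy, fixes the illustrative values $\epsilon = 0.5$ and $\bar{x} = (2, 3)$ (case (iv)), samples $v$ from the right-hand side of (3.24), and verifies the defining inequality $\langle v, y - \bar{x} \rangle \leq \epsilon$ of the ε-normal set on sampled $y \in \mathbb{R}^2_+$.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.5                               # illustrative value of epsilon
x_bar = np.array([2.0, 3.0])            # x_bar_1 > 0, x_bar_2 > 0: case (iv)

def sample_from_iv():
    # Sample v from the right-hand side of (3.24):
    # the union over eps1 + eps2 = eps of [-eps1/x1, 0] x [-eps2/x2, 0].
    eps1 = rng.uniform(0.0, eps)
    eps2 = eps - eps1
    v1 = rng.uniform(-eps1 / x_bar[0], 0.0)
    v2 = rng.uniform(-eps2 / x_bar[1], 0.0)
    return np.array([v1, v2])

# Defining inequality of the eps-normal set: <v, y - x_bar> <= eps for y in R^2_+.
for _ in range(1000):
    v = sample_from_iv()
    y = rng.uniform(0.0, 10.0, size=2)  # a sample point of R^2_+
    assert v @ (y - x_bar) <= eps + 1e-12
print("sampled points of (3.24) satisfy the eps-normal inequality")
```

Sampling from the right-hand side of (3.24) and checking the inequality only illustrates, at finitely many points, that this set is contained in $N_{\mathbb{R}^2_+}^\epsilon(\bar{x})$; the equality of the two sets is what Proposition 3.4 establishes.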
Now we give an example illustrating our ε-duality theorems.

Example 3.6. Consider the following convex semidefinite optimization problem:

$$\text{(SDP)} \qquad \text{Minimize } x_1 + x_2^2 \quad \text{subject to } \begin{pmatrix} 0 & x_1 \\ x_1 & 0 \end{pmatrix} \succeq 0,\ \ (x_1, x_2) \in \mathbb{R}^2_+. \quad (3.25)$$

Let $f(x_1, x_2) = x_1 + x_2^2$,

$$F_0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \qquad F_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad F_2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad (3.26)$$

and let $\epsilon \geq 0$. Then

$$F(x_1, x_2) = \begin{pmatrix} 0 & x_1 \\ x_1 & 0 \end{pmatrix}. \quad (3.27)$$

Then $A := \{ (0, x_2) \in \mathbb{R}^2 \mid x_2 \geq 0 \}$ is the set of all feasible solutions of (SDP), and the set of all ε-approximate solutions of (SDP) is $\{ (x_1, x_2) \in \mathbb{R}^2 \mid x_1 = 0,\ 0 \leq x_2 \leq \sqrt{\epsilon} \}$. Let

$$\widetilde{F} = \Big\{ (x_1, x_2, Z) \;\Big|\; 0 \in \partial_{\epsilon_0} f(x_1, x_2) - \widehat{F}^* Z + N_{\mathbb{R}^2_+}^{\epsilon_1}(x_1, x_2),\ Z \succeq 0,\ \epsilon_0 \geq 0,\ \epsilon_1 \geq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\}.$$

Then $\widetilde{F}$ is the set of all feasible solutions of (SDD). Now we calculate the set $\widetilde{F}$. Throughout the computation, write $Z = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$, so that $Z \succeq 0$ if and only if $a \geq 0$, $c \geq 0$, and $b^2 \leq ac$; moreover, $\widehat{F}^* Z = \big( \operatorname{Tr}(F_1 Z), \operatorname{Tr}(F_2 Z) \big) = (2b, 0)$ and $\partial_{\epsilon_0} f(x_1, x_2) = \{1\} \times \big[ 2 x_2 - 2\sqrt{\epsilon_0},\ 2 x_2 + 2\sqrt{\epsilon_0} \big]$. Using Corollary 3.5, we obtain

$$
\begin{aligned}
\widetilde{A} := {}& \Big\{ (0, 0, Z) \;\Big|\; 0 \in \partial_{\epsilon_0} f(0, 0) - \widehat{F}^* Z + N_{\mathbb{R}^2_+}^{\epsilon_1}(0, 0),\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (0, 0, Z) \;\Big|\; 0 \in \{1\} \times \big[ -2\sqrt{\epsilon_0},\, 2\sqrt{\epsilon_0} \big] - (2b, 0) - \mathbb{R}^2_+,\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (0, 0, Z) \;\Big|\; (2b, 0) \in (-\infty, 1] \times \big( -\infty,\, 2\sqrt{\epsilon_0} \big],\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (0, 0, Z) \;\Big|\; a \geq 0,\ c \geq 0,\ b \leq \tfrac{1}{2},\ b^2 \leq ac \Big\},
\end{aligned}
$$

$$
\begin{aligned}
\widetilde{B} := {}& \Big\{ (0, x_2, Z) \;\Big|\; x_2 > 0,\ 0 \in \partial_{\epsilon_0} f(0, x_2) - \widehat{F}^* Z + N_{\mathbb{R}^2_+}^{\epsilon_1}(0, x_2),\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (0, x_2, Z) \;\Big|\; x_2 > 0,\ 0 \in \{1\} \times \big[ 2 x_2 - 2\sqrt{\epsilon_0},\, 2 x_2 + 2\sqrt{\epsilon_0} \big] - (2b, 0) + (-\infty, 0] \times \big[ -\tfrac{\epsilon_1}{x_2},\, 0 \big],\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (0, x_2, Z) \;\Big|\; x_2 > 0,\ (2b, 0) \in (-\infty, 1] \times \big[ 2 x_2 - \tfrac{\epsilon_1}{x_2} - 2\sqrt{\epsilon_0},\, 2 x_2 + 2\sqrt{\epsilon_0} \big],\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (0, x_2, Z) \;\Big|\; 0 < x_2 \leq \tfrac{\sqrt{\epsilon_0} + \sqrt{\epsilon_0 + 2 \epsilon_1}}{2},\ a \geq 0,\ c \geq 0,\ b \leq \tfrac{1}{2},\ b^2 \leq ac,\ \epsilon_0 \geq 0,\ \epsilon_1 \geq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\},
\end{aligned}
$$

$$
\begin{aligned}
\widetilde{C} := {}& \Big\{ (x_1, 0, Z) \;\Big|\; x_1 > 0,\ 0 \in \partial_{\epsilon_0} f(x_1, 0) - \widehat{F}^* Z + N_{\mathbb{R}^2_+}^{\epsilon_1}(x_1, 0),\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (x_1, 0, Z) \;\Big|\; x_1 > 0,\ 0 \in \{1\} \times \big[ -2\sqrt{\epsilon_0},\, 2\sqrt{\epsilon_0} \big] - (2b, 0) + \big[ -\tfrac{\epsilon_1}{x_1},\, 0 \big] \times (-\infty, 0],\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (x_1, 0, Z) \;\Big|\; x_1 > 0,\ (2b, 0) \in \big[ 1 - \tfrac{\epsilon_1}{x_1},\, 1 \big] \times \big( -\infty,\, 2\sqrt{\epsilon_0} \big],\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (x_1, 0, Z) \;\Big|\; 0 < x_1,\ -\epsilon_1 \leq -x_1 + 2 b x_1 \leq 0,\ a \geq 0,\ c \geq 0,\ b \leq \tfrac{1}{2},\ b^2 \leq ac,\ \epsilon_0 \geq 0,\ \epsilon_1 \geq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\},
\end{aligned}
$$

$$
\begin{aligned}
\widetilde{D} := {}& \Big\{ (x_1, x_2, Z) \;\Big|\; x_1 > 0,\ x_2 > 0,\ 0 \in \partial_{\epsilon_0} f(x_1, x_2) - \widehat{F}^* Z + N_{\mathbb{R}^2_+}^{\epsilon_1}(x_1, x_2),\ Z \succeq 0,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (x_1, x_2, Z) \;\Big|\; x_1 > 0,\ x_2 > 0,\ 0 \in \{1\} \times \big[ 2 x_2 - 2\sqrt{\epsilon_0},\, 2 x_2 + 2\sqrt{\epsilon_0} \big] - (2b, 0) + \big[ -\tfrac{\epsilon_{11}}{x_1},\, 0 \big] \times \big[ -\tfrac{\epsilon_{12}}{x_2},\, 0 \big], \\
& \hphantom{\Big\{ (x_1, x_2, Z) \;\Big|\;} Z \succeq 0,\ \epsilon_{11} \geq 0,\ \epsilon_{12} \geq 0,\ \epsilon_{11} + \epsilon_{12} = \epsilon_1,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (x_1, x_2, Z) \;\Big|\; x_1 > 0,\ x_2 > 0,\ (2b, 0) \in \big[ 1 - \tfrac{\epsilon_{11}}{x_1},\, 1 \big] \times \big[ 2 x_2 - \tfrac{\epsilon_{12}}{x_2} - 2\sqrt{\epsilon_0},\, 2 x_2 + 2\sqrt{\epsilon_0} \big],\ Z \succeq 0,\ \epsilon_{11} + \epsilon_{12} = \epsilon_1,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\} \\
= {}& \Big\{ (x_1, x_2, Z) \;\Big|\; 0 < x_1,\ -\epsilon_{11} \leq -x_1 + 2 b x_1 \leq 0,\ 0 < x_2 \leq \tfrac{\sqrt{\epsilon_0} + \sqrt{\epsilon_0 + 2 \epsilon_{12}}}{2},\ a \geq 0,\ c \geq 0,\ b \leq \tfrac{1}{2},\ b^2 \leq ac, \\
& \hphantom{\Big\{ (x_1, x_2, Z) \;\Big|\;} \epsilon_{11} \geq 0,\ \epsilon_{12} \geq 0,\ \epsilon_{11} + \epsilon_{12} = \epsilon_1,\ \epsilon_0 + \epsilon_1 \in [0, \epsilon] \Big\}.
\end{aligned} \quad (3.28)
$$

Thus $\widetilde{F} = \widetilde{A} \cup \widetilde{B} \cup \widetilde{C} \cup \widetilde{D}$. We can check that, for any $(x_1, x_2) \in A$ and any $\big( y_1, y_2, \begin{pmatrix} a & b \\ b & c \end{pmatrix} \big) \in \widetilde{F}$,

$$f(x_1, x_2) \geq f(y_1, y_2) - \operatorname{Tr}\!\left( \begin{pmatrix} a & b \\ b & c \end{pmatrix} F(y_1, y_2) \right) - \epsilon, \quad (3.29)$$

that is, ε-weak duality holds.

Let $(\bar{x}_1, \bar{x}_2) \in A$ be an ε-approximate solution of (SDP). Then $\bar{x}_1 = 0$ and $0 \leq \bar{x}_2 \leq \sqrt{\epsilon}$. So we can easily check that $\big( \bar{x}_1, \bar{x}_2, \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \big) \in \widetilde{F}$ (it belongs to $\widetilde{A}$ when $\bar{x}_2 = 0$, and to $\widetilde{B}$ with $\epsilon_0 = \epsilon$, $\epsilon_1 = 0$ when $\bar{x}_2 > 0$). Since $\operatorname{Tr}\!\big( \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} F(\bar{x}_1, \bar{x}_2) \big) = 0$, from (3.29),

$$f(\bar{x}_1, \bar{x}_2) \geq f(y_1, y_2) - \operatorname{Tr}\!\left( \begin{pmatrix} a & b \\ b & c \end{pmatrix} F(y_1, y_2) \right) - \epsilon \quad (3.30)$$

for any $\big( y_1, y_2, \begin{pmatrix} a & b \\ b & c \end{pmatrix} \big) \in \widetilde{F}$. So $\big( \bar{x}_1, \bar{x}_2, \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \big)$ is an ε-approximate solution of (SDD). Hence ε-strong duality holds.
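As a numerical cross-check of Example 3.6, one can sample dual feasible points and test inequality (3.29) directly. The script below is not part of the paper; it assumes NumPy, fixes the illustrative value $\epsilon = 0.25$, samples points of the dual feasible set $\widetilde{F}$ through the description of $\widetilde{D}$ above (points on its boundary with $y_2 = 0$ fall into $\widetilde{C}$, so every sampled point is dual feasible), and evaluates the dual objective at the point $(\bar{x}, \bar{Z}) = \big( (0, \sqrt{\epsilon}), 0 \big)$ used in the ε-strong duality argument.

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.25                                         # illustrative value of epsilon
F0 = np.zeros((2, 2))
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
F2 = np.zeros((2, 2))

f = lambda y: y[0] + y[1] ** 2
F = lambda y: F0 + y[0] * F1 + y[1] * F2           # = [[0, y1], [y1, 0]]
dual_val = lambda y, Z: f(y) - np.trace(Z @ F(y))  # objective of (SDD)

def sample_dual_feasible():
    # Sample (y, Z) from F_tilde, using the description of D_tilde above.
    eps0 = rng.uniform(0.0, eps)
    eps11 = rng.uniform(0.0, eps - eps0)
    eps12 = eps - eps0 - eps11                     # eps0 + eps11 + eps12 = eps
    y1 = rng.uniform(1e-3, 5.0)
    y2 = rng.uniform(0.0, (np.sqrt(eps0) + np.sqrt(eps0 + 2.0 * eps12)) / 2.0)
    b = rng.uniform(max((1.0 - eps11 / y1) / 2.0, -1.0), 0.5)  # -eps11 <= (2b-1) y1 <= 0
    a, c = rng.uniform(1.0, 2.0), rng.uniform(1.0, 2.0)        # then b^2 <= 1 <= ac
    return np.array([y1, y2]), np.array([[a, b], [b, c]])

# eps-weak duality (3.29): f(x) >= f(y) - Tr(Z F(y)) - eps for x in A, (y, Z) dual feasible.
for _ in range(1000):
    x = np.array([0.0, rng.uniform(0.0, 5.0)])     # a feasible point of (SDP): x in A
    y, Z = sample_dual_feasible()
    assert f(x) >= dual_val(y, Z) - eps - 1e-10

# The point (x_bar, Z_bar) from the eps-strong duality argument and its dual value.
x_bar, Z_bar = np.array([0.0, np.sqrt(eps)]), np.zeros((2, 2))
print("dual objective at (x_bar, 0):", dual_val(x_bar, Z_bar))  # equals f(x_bar) = eps
```

The remaining pieces $\widetilde{A}$, $\widetilde{B}$, and $\widetilde{C}$ of $\widetilde{F}$ can be sampled analogously. By the argument in Example 3.6, the printed value $f(\bar{x}) = \epsilon$ lies within ε of every dual objective value, which the sampled points illustrate.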
Acknowledgment

This work was supported by the Korea Science and Engineering Foundation (KOSEF) NRL Program grant funded by the Korean government (MEST) (no. R0A-2008-000-20010-0).

References

[1] V. Jeyakumar and N. Dinh, "Avoiding duality gaps in convex semidefinite programming without Slater's condition," Applied Mathematics Report AMR04/6, University of New South Wales, Sydney, Australia, 2004.
[2] M. V. Ramana, L. Tunçel, and H. Wolkowicz, "Strong duality for semidefinite programming," SIAM Journal on Optimization, vol. 7, no. 3, pp. 641–662, 1997.
[3] V. Jeyakumar, G. M. Lee, and N. Dinh, "New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs," SIAM Journal on Optimization, vol. 14, no. 2, pp. 534–547, 2003.
[4] V. Jeyakumar and M. J. Nealon, "Complete dual characterizations of optimality for convex semidefinite programming," in Constructive, Experimental, and Nonlinear Analysis (Limoges, 1999), vol. 27 of CMS Conference Proceedings, pp. 165–173, American Mathematical Society, Providence, RI, USA, 2000.
[5] N. Dinh, V. Jeyakumar, and G. M. Lee, "Sequential Lagrangian conditions for convex programs with applications to semidefinite programming," Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 85–112, 2005.
[6] V. Jeyakumar, G. M. Lee, and N. Dinh, "Lagrange multiplier conditions characterizing the optimal solution sets of cone-constrained convex programs," Journal of Optimization Theory and Applications, vol. 123, no. 1, pp. 83–103, 2004.
[7] V. Jeyakumar, G. M. Lee, and N. Dinh, "Characterizations of solution sets of convex vector minimization problems," European Journal of Operational Research, vol. 174, no. 3, pp. 1380–1395, 2006.
[8] M. G. Govil and A. Mehra, "ε-optimality for multiobjective programming on a Banach space," European Journal of Operational Research, vol. 157, no. 1, pp. 106–112, 2004.
[9] C. Gutiérrez, B. Jiménez, and V. Novo, "Multiplier rules and saddle-point theorems for Helbig's approximate solutions in convex Pareto problems," Journal of Global Optimization, vol. 32, no. 3, pp. 367–383, 2005.
[10] A. Hamel, "An ε-Lagrange multiplier rule for a mathematical programming problem on Banach spaces," Optimization, vol. 49, no. 1-2, pp. 137–149, 2001.
[11] V. Jeyakumar and B. M. Glover, "Characterizing global optimality for DC optimization problems under convex inequality constraints," Journal of Global Optimization, vol. 8, no. 2, pp. 171–187, 1996.
[12] G. S. Kim and G. M. Lee, "On ε-approximate solutions for convex semidefinite optimization problems," Taiwanese Journal of Mathematics, vol. 11, no. 3, pp. 765–784, 2007.
[13] J. C. Liu, "ε-duality theorem of nondifferentiable nonconvex multiobjective programming," Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 153–167, 1991.
[14] J. C. Liu, "ε-Pareto optimality for nondifferentiable multiobjective programming via penalty function," Journal of Mathematical Analysis and Applications, vol. 198, no. 1, pp. 248–261, 1996.
[15] J.-J. Strodiot, V. H. Nguyen, and N. Heukemes, "ε-optimal solutions in nondifferentiable convex programming and some related questions," Mathematical Programming, vol. 25, no. 3, pp. 307–328, 1983.
[16] K. Yokoyama and S. Shiraishi, "An ε-optimal condition for convex programming problems without Slater's constraint qualifications," preprint.
[17] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms I: Fundamentals, vol. 305 of Grundlehren der mathematischen Wissenschaften, Springer, Berlin, Germany, 1993.
[18] J.-B. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods, vol. 306 of Grundlehren der mathematischen Wissenschaften, Springer, Berlin, Germany, 1993.