Hindawi Publishing Corporation, Abstract and Applied Analysis, Volume 2013, Article ID 101974, 9 pages, http://dx.doi.org/10.1155/2013/101974

Research Article

s-Goodness for Low-Rank Matrix Recovery

Lingchen Kong,1 Levent Tunçel,2 and Naihua Xiu1

1 Department of Applied Mathematics, Beijing Jiaotong University, Beijing 100044, China
2 Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, ON, Canada N2L 3G1

Correspondence should be addressed to Lingchen Kong; konglchen@126.com

Received 21 January 2013; Accepted 17 March 2013

Academic Editor: Jein-Shan Chen

Copyright © 2013 Lingchen Kong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Low-rank matrix recovery (LMR) is a rank minimization problem subject to linear equality constraints, and it arises in many fields such as signal and image processing, statistics, computer vision, and system identification and control. This class of optimization problems is generally NP-hard. A popular approach replaces the rank function with the nuclear norm of the matrix variable. In this paper, we extend and characterize the concept of s-goodness for a sensing matrix in sparse signal recovery (proposed by Juditsky and Nemirovski, Math Program, 2011) to linear transformations in LMR. Using the two characteristic s-goodness constants, $\gamma_s$ and $\hat\gamma_s$, of a linear transformation, we derive necessary and sufficient conditions for a linear transformation to be s-good. Moreover, we establish the equivalence of s-goodness and the null space property. Therefore, s-goodness is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization.

1. Introduction

Low-rank matrix recovery (LMR for short) is a rank minimization problem (RMP) with linear constraints, also called the affine matrix rank minimization problem, which is defined as follows:
$$\text{minimize } \operatorname{rank}(X) \quad \text{subject to } \mathcal{A}X = b, \tag{1}$$
where $X \in \mathbb{R}^{m\times n}$ is the matrix variable, $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ is a linear transformation, and $b \in \mathbb{R}^{q}$. Although specific instances can often be solved by specialized algorithms, LMR is NP-hard. A popular approach for solving LMR in the systems and control community is to minimize the trace of a positive semidefinite matrix variable instead of its rank (see, e.g., [1, 2]). A generalization of this approach to nonsymmetric matrices introduced by Fazel et al. [3] is the famous convex relaxation of LMR (1), which is called nuclear norm minimization (NNM):
$$\min \|X\|_* \quad \text{s.t. } \mathcal{A}X = b, \tag{2}$$
where $\|X\|_*$ is the nuclear norm of $X$, that is, the sum of its singular values. When $m = n$ and the matrix $X := \operatorname{Diag}(x)$, $x \in \mathbb{R}^{n}$, is diagonal, LMR (1) reduces to sparse signal recovery (SSR), which is the so-called cardinality minimization problem (CMP):
$$\min \|x\|_0 \quad \text{s.t. } \Phi x = b, \tag{3}$$
where $\|x\|_0$ denotes the number of nonzero entries in the vector $x$ and $\Phi \in \mathbb{R}^{q\times n}$ is a given sensing matrix. A well-known heuristic for SSR is the $\ell_1$-norm minimization relaxation (basis pursuit problem):
$$\min \|x\|_1 \quad \text{s.t. } \Phi x = b, \tag{4}$$
where $\|x\|_1$ is the $\ell_1$-norm of $x$, that is, the sum of the absolute values of its entries. LMR problems have many applications, and they appear in the literature of a diverse set of fields including signal and image processing, statistics, computer vision, and system identification and control.
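The relaxation (2) is a convex problem and is easy to prototype with an off-the-shelf modeling tool. The snippet below is a minimal sketch, assuming cvxpy and numpy are available and that the linear transformation $\mathcal{A}$ is stored as a stack of matrices $A_1,\ldots,A_q$ with $(\mathcal{A}X)_i = \langle A_i, X\rangle$; the function name and the toy instance are illustrative choices made here, not part of the paper.

```python
# Sketch of nuclear norm minimization (2) with cvxpy (assumed available).
import numpy as np
import cvxpy as cp

def recover_low_rank(A_mats, b, m, n):
    """Solve  min ||X||_*  s.t.  <A_i, X> = b_i  for i = 1, ..., q."""
    X = cp.Variable((m, n))
    constraints = [cp.trace(A_i.T @ X) == b_i for A_i, b_i in zip(A_mats, b)]
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    problem.solve()
    return X.value

# Toy instance: rank-1 ground truth, q random Gaussian measurements.
rng = np.random.default_rng(0)
m, n, q = 8, 8, 40
X0 = np.outer(rng.standard_normal(m), rng.standard_normal(n))   # rank 1
A_mats = [rng.standard_normal((m, n)) for _ in range(q)]
b = np.array([np.trace(A_i.T @ X0) for A_i in A_mats])
X_hat = recover_low_rank(A_mats, b, m, n)
print("recovery error:", np.linalg.norm(X_hat - X0, "fro"))
```

With $X = \operatorname{Diag}(x)$ restricted to be diagonal, the same model with $\|x\|_1$ in place of the nuclear norm is exactly the basis pursuit problem (4).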
For more details, see the recent paper [4]. LMR and NNM have been the focus of some recent research in the optimization community; see, for example, [4–15]. Although there are many papers dealing with algorithms for NNM, such as interior-point methods, fixed point and Bregman iterative methods, and proximal point methods, there are fewer papers dealing with the conditions that guarantee the success of low-rank matrix recovery via NNM. For instance, following the program laid out in the work of Candès and Tao in compressed sensing (CS; see, e.g., [16–18]), Recht et al. [4] provided a certain restricted isometry property (RIP) condition on the linear transformation which guarantees that the minimum nuclear norm solution is the minimum rank solution. Recht et al. [14, 19] gave the null space property (NSP), which characterizes a particular property of the null space of the linear transformation and is also discussed by Oymak et al. [20, 21]. Note that NSP states a necessary and sufficient condition for exactly recovering the low-rank matrix via nuclear norm minimization. Recently, Chandrasekaran et al. [22] showed that a fixed s-rank matrix $X_0$ can be recovered if and only if the null space of $\mathcal{A}$ intersects the tangent cone of the nuclear norm ball at $X_0$ only at the origin.

In the setting of CS, there are other characterizations of the sensing matrix, in addition to RIP and the null space property, under which $\ell_1$-norm minimization can be guaranteed to yield an optimal solution to SSR; see, for example, [23–26]. In particular, Juditsky and Nemirovski [24] established necessary and sufficient conditions for a sensing matrix to be "s-good", that is, to allow exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries when no measurement noise is present. They also demonstrated that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact SSR and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is s-good. Furthermore, they established instructive links between s-goodness and RIP in the CS context. One may wonder whether the s-goodness concept can be generalized to LMR while maintaining many of the nice properties established in [24]. Here, we deal with this issue. Our approach is based on the singular value decomposition (SVD) of a matrix and a partition technique generalized from CS.

In the next section, following Juditsky and Nemirovski's terminology, we propose definitions of s-goodness and of the G-numbers, $\gamma_s$ and $\hat\gamma_s$, of a linear transformation in LMR, and then we provide some basic properties of the G-numbers. In Section 3, we characterize s-goodness of a linear transformation in LMR via the G-numbers. We consider the connections between s-goodness, NSP, and RIP in Section 4. We eventually obtain that
$$\delta_{2s} < 0.472 \;\Longrightarrow\; \mathcal{A} \text{ satisfies NSP} \;\Longleftrightarrow\; \hat\gamma_s(\mathcal{A}) < \tfrac12 \;\Longleftrightarrow\; \gamma_s(\mathcal{A}) < 1 \;\Longleftrightarrow\; \mathcal{A} \text{ is s-good}.$$

Let $X \in \mathbb{R}^{m\times n}$, $p := \min\{m, n\}$, and let $X = U\operatorname{Diag}(\sigma(X))V^{T}$ be an SVD of $X$, where $U \in \mathbb{R}^{m\times p}$, $V \in \mathbb{R}^{n\times p}$, and $\operatorname{Diag}(\sigma(X))$ is the diagonal matrix of $\sigma(X) = (\sigma_1(X), \ldots, \sigma_p(X))^{T}$, the vector of singular values of $X$. Also let $\Xi(X)$ denote the set of pairs of matrices $(U, V)$ in the SVD of $X$; that is,
$$\Xi(X) := \{(U, V) : U \in \mathbb{R}^{m\times p},\ V \in \mathbb{R}^{n\times p},\ X = U\operatorname{Diag}(\sigma(X))V^{T}\}. \tag{5}$$
For $s \in \{0, 1, 2, \ldots, p\}$, we say that $X \in \mathbb{R}^{m\times n}$ is an s-rank matrix to mean that the rank of $X$ is no more than $s$.
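As a concrete check of this notation (a small illustrative sketch with names chosen here, not taken from the paper), the following numpy fragment builds an s-rank matrix in the form $X = U\operatorname{Diag}(\sigma(X))V^{T}$ with only $s$ nonzero singular values and verifies that its rank is at most $s$, that the nuclear norm in (2) equals the sum of the entries of $\sigma(X)$, and that the Frobenius norm used later equals the $\ell_2$-norm of $\sigma(X)$.

```python
# Numerical illustration of the SVD notation (numpy assumed available).
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 7, 5, 2
p = min(m, n)

# Factors with orthonormal columns via QR of random Gaussian matrices.
U, _ = np.linalg.qr(rng.standard_normal((m, p)))
V, _ = np.linalg.qr(rng.standard_normal((n, p)))

# Singular value vector with exactly s nonzero entries -> an s-rank matrix.
sigma = np.zeros(p)
sigma[:s] = np.sort(rng.uniform(1.0, 2.0, size=s))[::-1]
X = U @ np.diag(sigma) @ V.T

print(np.linalg.matrix_rank(X) <= s)                                 # s-rank matrix
sv = np.linalg.svd(X, compute_uv=False)
print(np.isclose(sv.sum(), sigma.sum()))                             # ||X||_* = sum of sigma_i(X)
print(np.isclose(np.linalg.norm(X, "fro"), np.linalg.norm(sigma)))   # ||X||_F = ||sigma(X)||_2
```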
For an s-rank matrix $X$, it is convenient to take $X = U_{m\times s}\Sigma_s V_{n\times s}^{T}$ as its SVD, where $U_{m\times s} \in \mathbb{R}^{m\times s}$ and $V_{n\times s} \in \mathbb{R}^{n\times s}$ have orthonormal columns and $\Sigma_s = \operatorname{Diag}((\sigma_1(X), \ldots, \sigma_s(X))^{T})$. For a vector $y \in \mathbb{R}^{q}$, let $\|\cdot\|_d$ be the dual norm of $\|\cdot\|$ specified by $\|y\|_d := \max_v\{\langle v, y\rangle : \|v\| \le 1\}$. In particular, $\|\cdot\|_\infty$ is the dual norm of $\|\cdot\|_1$ for vectors. Let $\|X\|$ denote the spectral or operator norm of a matrix $X \in \mathbb{R}^{m\times n}$, that is, the largest singular value of $X$; in fact, $\|X\|$ is the dual norm of $\|X\|_*$. Let $\|X\|_F := \sqrt{\langle X, X\rangle} = \sqrt{\operatorname{Tr}(X^{T}X)}$ be the Frobenius norm of $X$, which is equal to the $\ell_2$-norm of the vector of its singular values. We denote by $X^{T}$ the transpose of $X$. For a linear transformation $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$, we denote by $\mathcal{A}^*\colon \mathbb{R}^{q} \to \mathbb{R}^{m\times n}$ the adjoint of $\mathcal{A}$.

2. Definitions and Basic Properties

2.1. Definitions. We first go over some concepts related to s-goodness of the linear transformation in LMR (RMP). These are extensions of the concepts given for SSR (CMP) in [24].

Definition 1. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation and $s \in \{0, 1, 2, \ldots, p\}$. One says that $\mathcal{A}$ is s-good if, for every s-rank matrix $W \in \mathbb{R}^{m\times n}$, $W$ is the unique optimal solution to the optimization problem
$$\min_{X \in \mathbb{R}^{m\times n}}\{\|X\|_* : \mathcal{A}X = \mathcal{A}W\}. \tag{6}$$

We denote by $s_*(\mathcal{A})$ the largest integer $s$ for which $\mathcal{A}$ is s-good. Clearly, $s_*(\mathcal{A}) \in \{0, 1, \ldots, p\}$. To characterize s-goodness we introduce two useful s-goodness constants, $\gamma_s$ and $\hat\gamma_s$, which we call the G-numbers.

Definition 2. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation, $\beta \in [0, +\infty]$, and $s \in \{0, 1, 2, \ldots, p\}$. Then we have the following.

(i) The G-number $\gamma_s(\mathcal{A}, \beta)$ is the infimum of $\gamma \ge 0$ such that for every matrix $W \in \mathbb{R}^{m\times n}$ with singular value decomposition $W = U_{m\times s}V_{n\times s}^{T}$ (i.e., $s$ nonzero singular values, all equal to 1), there exists a vector $y \in \mathbb{R}^{q}$ such that
$$\|y\|_d \le \beta, \qquad \mathcal{A}^*y = U\operatorname{Diag}(\sigma(\mathcal{A}^*y))V^{T}, \tag{7}$$
where $U = [U_{m\times s}\ U_{m\times(p-s)}]$, $V = [V_{n\times s}\ V_{n\times(p-s)}]$ have orthonormal columns, and
$$\sigma_i(\mathcal{A}^*y)\begin{cases} = 1, & \text{if } \sigma_i(W) = 1,\\ \in [0, \gamma], & \text{if } \sigma_i(W) = 0,\end{cases} \qquad i \in \{1, 2, \ldots, p\}. \tag{8}$$
If there does not exist such a $y$ for some $W$ as above, we set $\gamma_s(\mathcal{A}, \beta) = +\infty$.

(ii) The G-number $\hat\gamma_s(\mathcal{A}, \beta)$ is the infimum of $\gamma \ge 0$ such that for every matrix $W \in \mathbb{R}^{m\times n}$ with $s$ nonzero singular values, all equal to 1, there exists a vector $y \in \mathbb{R}^{q}$ such that $\mathcal{A}^*y$ and $W$ share the same orthogonal row and column spaces and
$$\|y\|_d \le \beta, \qquad \|\mathcal{A}^*y - W\| \le \gamma. \tag{9}$$
If there does not exist such a $y$ for some $W$ as above, we set $\hat\gamma_s(\mathcal{A}, \beta) = +\infty$. To be compatible with the special case given in [24], we write $\gamma_s(\mathcal{A})$ and $\hat\gamma_s(\mathcal{A})$ instead of $\gamma_s(\mathcal{A}, +\infty)$ and $\hat\gamma_s(\mathcal{A}, +\infty)$, respectively.

From the above definition, we easily see that the set of values that $\gamma$ takes is closed. Thus, when $\gamma_s(\mathcal{A}, \beta) < +\infty$, for every matrix $W \in \mathbb{R}^{m\times n}$ with $s$ nonzero singular values, all equal to 1, there exists a vector $y \in \mathbb{R}^{q}$ such that
$$\|y\|_d \le \beta, \qquad \sigma_i(\mathcal{A}^*y)\begin{cases} = 1, & \text{if } \sigma_i(W) = 1,\\ \in [0, \gamma_s(\mathcal{A}, \beta)], & \text{if } \sigma_i(W) = 0,\end{cases} \qquad i \in \{1, 2, \ldots, p\}. \tag{10}$$
Similarly, for every matrix $W \in \mathbb{R}^{m\times n}$ with $s$ nonzero singular values, all equal to 1, there exists a vector $\hat y \in \mathbb{R}^{q}$ such that $\mathcal{A}^*\hat y$ and $W$ share the same orthogonal row and column spaces and
$$\|\hat y\|_d \le \beta, \qquad \|\mathcal{A}^*\hat y - W\| \le \hat\gamma_s(\mathcal{A}, \beta). \tag{11}$$
Observing that the set $\{\mathcal{A}^*y : \|y\|_d \le \beta\}$ is convex, we obtain that if $\gamma_s(\mathcal{A}, \beta) < +\infty$ then for every matrix $W$ with at most $s$ nonzero singular values and $\|W\| \le 1$ there exist vectors $y$ satisfying (10) and there exist vectors $\hat y$ satisfying (11).

2.2. Basic Properties of G-Numbers. In order to characterize the s-goodness of a linear transformation $\mathcal{A}$, we study the basic properties of the G-numbers. We begin with the result that the G-numbers $\gamma_s(\mathcal{A}, \beta)$ and $\hat\gamma_s(\mathcal{A}, \beta)$ are convex nonincreasing functions of $\beta$.

Proposition 3. For every linear transformation $\mathcal{A}$ and every $s \in \{0, 1, \ldots, p\}$, the G-numbers $\gamma_s(\mathcal{A}, \beta)$ and $\hat\gamma_s(\mathcal{A}, \beta)$ are convex nonincreasing functions of $\beta \in [0, +\infty]$.

Proof. We only need to demonstrate that the quantity $\gamma_s(\mathcal{A}, \beta)$ is a convex nonincreasing function of $\beta \in [0, +\infty]$. It is evident from the definition that $\gamma_s(\mathcal{A}, \beta)$ is nonincreasing for given $\mathcal{A}$, $s$. It remains to show that $\gamma_s(\mathcal{A}, \beta)$ is a convex function of $\beta$. In other words, for every pair $\beta_1, \beta_2 \in [0, +\infty]$, we need to verify that
$$\gamma_s(\mathcal{A}, \alpha\beta_1 + (1-\alpha)\beta_2) \le \alpha\gamma_s(\mathcal{A}, \beta_1) + (1-\alpha)\gamma_s(\mathcal{A}, \beta_2) \qquad \forall \alpha \in [0, 1]. \tag{12}$$
The above inequality follows immediately if one of $\beta_1, \beta_2$ is $+\infty$. Thus, we may assume $\beta_1, \beta_2 \in [0, +\infty)$. In fact, from the argument around (10) and the definition of $\gamma_s(\mathcal{A}, \cdot)$, we know that for every matrix $W = U\operatorname{Diag}(\sigma(W))V^{T}$ with $s$ nonzero singular values, all equal to 1, there exist vectors $y_1, y_2 \in \mathbb{R}^{q}$ such that for $i \in \{1, 2\}$
$$\|y_i\|_d \le \beta_i, \qquad \sigma_j(\mathcal{A}^*y_i)\begin{cases} = 1, & \text{if } \sigma_j(W) = 1,\\ \in [0, \gamma_s(\mathcal{A}, \beta_i)], & \text{if } \sigma_j(W) = 0,\end{cases} \qquad j \in \{1, 2, \ldots, p\}. \tag{13}$$
Moreover, from the above information on the singular values of $\mathcal{A}^*y_1$, $\mathcal{A}^*y_2$, we may set $\mathcal{A}^*y_i = W + M_i$, $i \in \{1, 2\}$, such that
$$\operatorname{rank}(M_i) \le p - s, \qquad WM_i^{T} = 0, \qquad W^{T}M_i = 0, \qquad \|M_i\| \le \gamma_s(\mathcal{A}, \beta_i). \tag{14}$$
This implies that for every $\alpha \in [0, 1]$
$$W^{T}[\alpha M_1 + (1-\alpha)M_2] = 0, \qquad W[\alpha M_1 + (1-\alpha)M_2]^{T} = 0, \tag{15}$$
and hence $\operatorname{rank}[\alpha M_1 + (1-\alpha)M_2] \le p - s$, and $W$ and $[\alpha M_1 + (1-\alpha)M_2]$ have orthogonal row and column spaces. It is immediate from (13) that $\|\alpha y_1 + (1-\alpha)y_2\|_d \le \alpha\beta_1 + (1-\alpha)\beta_2$. Thus, noting that $\mathcal{A}^*[\alpha y_1 + (1-\alpha)y_2] = W + \alpha M_1 + (1-\alpha)M_2$, we obtain that
$$\sigma_i(\mathcal{A}^*(\alpha y_1 + (1-\alpha)y_2)) = \begin{cases} 1, & \text{if } \sigma_i(W) = 1,\\ \sigma_i(\alpha M_1 + (1-\alpha)M_2), & \text{if } \sigma_i(W) = 0,\end{cases} \tag{16}$$
for every $\alpha \in [0, 1]$. Combining this with the fact
$$\|\alpha M_1 + (1-\alpha)M_2\| \le \alpha\|M_1\| + (1-\alpha)\|M_2\| \le \alpha\gamma_s(\mathcal{A}, \beta_1) + (1-\alpha)\gamma_s(\mathcal{A}, \beta_2), \tag{17}$$
we obtain the desired conclusion.

The following observation, that the G-numbers $\gamma_s(\mathcal{A}, \beta)$ and $\hat\gamma_s(\mathcal{A}, \beta)$ are nondecreasing in $s$, is immediate.

Proposition 4. For every $s' \le s$, one has $\gamma_{s'}(\mathcal{A}, \beta) \le \gamma_s(\mathcal{A}, \beta)$ and $\hat\gamma_{s'}(\mathcal{A}, \beta) \le \hat\gamma_s(\mathcal{A}, \beta)$.

We further investigate the relationship between the G-numbers $\gamma_s(\mathcal{A}, \beta)$ and $\hat\gamma_s(\mathcal{A}, \beta)$.

Proposition 5. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation, $\beta \in [0, +\infty]$, and $s \in \{0, 1, 2, \ldots, p\}$. Then one has
$$\gamma := \gamma_s(\mathcal{A}, \beta) < 1 \;\Longrightarrow\; \hat\gamma_s\Big(\mathcal{A}, \tfrac{1}{1+\gamma}\beta\Big) = \tfrac{\gamma}{1+\gamma} < \tfrac12, \qquad \hat\gamma := \hat\gamma_s(\mathcal{A}, \beta) < \tfrac12 \;\Longrightarrow\; \gamma_s\Big(\mathcal{A}, \tfrac{1}{1-\hat\gamma}\beta\Big) = \tfrac{\hat\gamma}{1-\hat\gamma} < 1. \tag{18}$$

Proof. Let $\gamma := \gamma_s(\mathcal{A}, \beta) < 1$. Then, for every matrix $W \in \mathbb{R}^{m\times n}$ with $s$ nonzero singular values, all equal to 1, there exists $y \in \mathbb{R}^{q}$, $\|y\|_d \le \beta$, such that $\mathcal{A}^*y = W + M$, where $\|M\| \le \gamma$ and $W$ and $M$ have orthogonal row and column spaces.
For a given pair $W$, $y$ as above, take $\hat y := \frac{1}{1+\gamma}y$. Then $\|\hat y\|_d \le \frac{1}{1+\gamma}\beta$ and we have
$$\|\mathcal{A}^*\hat y - W\| \le \max\Big\{1 - \tfrac{1}{1+\gamma},\ \tfrac{\gamma}{1+\gamma}\Big\} = \tfrac{\gamma}{1+\gamma}, \tag{19}$$
where the first term under the maximum comes from the fact that $\mathcal{A}^*y$ and $W$ agree on the subspace corresponding to the nonzero singular values of $W$. Therefore, we obtain
$$\hat\gamma_s\Big(\mathcal{A}, \tfrac{1}{1+\gamma}\beta\Big) \le \tfrac{\gamma}{1+\gamma} < \tfrac12. \tag{20}$$

Now, we assume that $\hat\gamma := \hat\gamma_s(\mathcal{A}, \beta) < \tfrac12$. Fix matrices $U \in \mathbb{R}^{m\times p}$, $V \in \mathbb{R}^{n\times p}$ with orthonormal columns. For an s-element subset $J$ of the index set $\{1, 2, \ldots, p\}$, we define a set $S_J$ with respect to $U$, $V$ as
$$S_J := \{x \in \mathbb{R}^{p} : \exists\, y \in \mathbb{R}^{q},\ \|y\|_d \le \beta,\ \mathcal{A}^*y = U\operatorname{Diag}(x)V^{T} \text{ with } |x_i| \le \hat\gamma \text{ for } i \in \bar J\}. \tag{21}$$
In the above, $\bar J$ denotes the complement of $J$. It is immediately seen that $S_J$ is a closed convex set in $\mathbb{R}^{p}$. Moreover, we have the following claim.

Claim 1. The projection of $S_J$ onto the coordinates indexed by $J$, identified with $\mathbb{R}^{s}$, contains the $\|\cdot\|_\infty$-ball of radius $(1-\hat\gamma)$ centered at the origin.

Proof. Note that $S_J$ is closed and convex. Moreover, $S_J$ is the direct sum of its projections onto the pair of subspaces
$$L_J := \{x \in \mathbb{R}^{p} : x_i = 0,\ i \in \bar J\} \qquad \text{and its orthogonal complement} \qquad L_J^{\perp} = \{x \in \mathbb{R}^{p} : x_i = 0,\ i \in J\}. \tag{22}$$
Let $P$ denote the projection of $S_J$ onto $L_J$. Then, $P$ is closed and convex (because of the direct sum property above and the fact that $S_J$ is closed and convex). Note that $L_J$ can be naturally identified with $\mathbb{R}^{s}$, and our claim is that the image $\hat P \subset \mathbb{R}^{s}$ of $P$ under this identification contains the $\|\cdot\|_\infty$-ball $B_s$ of radius $(1-\hat\gamma)$ centered at the origin in $\mathbb{R}^{s}$. For a contradiction, suppose $B_s$ is not contained in $\hat P$. Then there exists $v \in B_s \setminus \hat P$. Since $\hat P$ is closed and convex, by a separating hyperplane theorem, there exists a vector $u \in \mathbb{R}^{s}$, $\|u\|_1 = 1$, such that
$$u^{T}v > u^{T}v' \qquad \text{for every } v' \in \hat P. \tag{23}$$
Let $z \in \mathbb{R}^{p}$ be defined by
$$z_i := \begin{cases} \operatorname{sign}(u_i), & i \in J,\\ 0, & \text{otherwise}, \end{cases} \tag{24}$$
under the identification of $J$ with $\{1, 2, \ldots, s\}$. By the definition of $\hat\gamma = \hat\gamma_s(\mathcal{A}, \beta)$, for the s-rank matrix $U\operatorname{Diag}(z)V^{T}$, there exists $y \in \mathbb{R}^{q}$ such that $\|y\|_d \le \beta$ and
$$\mathcal{A}^*y = U\operatorname{Diag}(z)V^{T} + M, \tag{25}$$
where $\|M\| \le \hat\gamma$ and $M$ and $U\operatorname{Diag}(z)V^{T}$ have orthogonal row and column spaces, so that $\mathcal{A}^*y = U\operatorname{Diag}(x)V^{T}$ for some $x \in \mathbb{R}^{p}$ with $\|x - z\|_\infty \le \hat\gamma$. Together with the definitions of $S_J$ and $\hat P$, this means that $\hat P$ contains a vector $\bar v$ with $|\bar v_i - \operatorname{sign}(u_i)| \le \hat\gamma$ for all $i \in \{1, 2, \ldots, s\}$. Therefore,
$$u^{T}\bar v \ge \sum_{i=1}^{s}|u_i|(1-\hat\gamma) = (1-\hat\gamma)\|u\|_1 = 1-\hat\gamma. \tag{26}$$
By $v \in B_s$ and the definition of $u$, we obtain
$$1-\hat\gamma \ge \|v\|_\infty = \|u\|_1\|v\|_\infty \ge u^{T}v > u^{T}\bar v \ge 1-\hat\gamma, \tag{27}$$
where the strict inequality follows from the facts that $\bar v \in \hat P$ and $u$ separates $v$ from $\hat P$. The above string of inequalities is a contradiction, and hence the desired claim holds.

Using the above claim, we conclude that for every $J \subseteq \{1, 2, \ldots, p\}$ with cardinality $s$, there exists an $x \in S_J$ such that $x_i = (1-\hat\gamma)$ for all $i \in J$. From the definition of $S_J$, there exists $y \in \mathbb{R}^{q}$ with $\|y\|_d \le \beta$ and $\mathcal{A}^*y = U\operatorname{Diag}(x)V^{T}$ with $|x_i| \le \hat\gamma$ for $i \in \bar J$. Scaling by $(1-\hat\gamma)^{-1}$, we obtain a vector $y' := (1-\hat\gamma)^{-1}y$ with $\|y'\|_d \le (1-\hat\gamma)^{-1}\beta$ such that
$$\mathcal{A}^*y' = U\operatorname{Diag}(\sigma(\mathcal{A}^*y'))V^{T}, \tag{28}$$
where $\sigma_i(\mathcal{A}^*y') = (1-\hat\gamma)^{-1}x_i = 1$ if $i \in J$, and $\sigma_i(\mathcal{A}^*y') \le (1-\hat\gamma)^{-1}\hat\gamma$ if $i \in \bar J$. Thus, we obtain that
$$\hat\gamma := \hat\gamma_s(\mathcal{A}, \beta) < \tfrac12 \;\Longrightarrow\; \gamma_s\Big(\mathcal{A}, \tfrac{1}{1-\hat\gamma}\beta\Big) \le \tfrac{\hat\gamma}{1-\hat\gamma} < 1. \tag{29}$$
To conclude the proof, we need to prove that the inequalities we established,
$$\hat\gamma_s\Big(\mathcal{A}, \tfrac{1}{1+\gamma}\beta\Big) \le \tfrac{\gamma}{1+\gamma}, \qquad \gamma_s\Big(\mathcal{A}, \tfrac{1}{1-\hat\gamma}\beta\Big) \le \tfrac{\hat\gamma}{1-\hat\gamma}, \tag{30}$$
are both equations. This is straightforward by an argument similar to the one in the proof of [24, Theorem 1]. We omit it for the sake of brevity.
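As a worked numerical instance of Proposition 5 (not taken from the paper, and valid once the equalities in (30) are granted): if $\gamma = \gamma_s(\mathcal{A}, \beta) = \tfrac13$, then
$$\hat\gamma_s\Big(\mathcal{A}, \tfrac34\beta\Big) = \frac{1/3}{1 + 1/3} = \frac14 < \frac12, \qquad \text{and in turn} \qquad \gamma_s\Big(\mathcal{A}, \tfrac43\cdot\tfrac34\beta\Big) = \gamma_s(\mathcal{A}, \beta) = \frac{1/4}{1 - 1/4} = \frac13,$$
so the two implications recover each other. In general, the maps $\gamma \mapsto \gamma/(1+\gamma)$ and $\hat\gamma \mapsto \hat\gamma/(1-\hat\gamma)$ are mutually inverse increasing bijections between $[0, 1)$ and $[0, \tfrac12)$, which is why the thresholds $1$ and $\tfrac12$ appear side by side in Theorems 7 and 8 below.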
We end this section with a simple argument which illustrates that, for a given pair $(\mathcal{A}, s)$, $\gamma_s(\mathcal{A}, \beta) = \gamma_s(\mathcal{A})$ and $\hat\gamma_s(\mathcal{A}, \beta) = \hat\gamma_s(\mathcal{A})$ for all $\beta$ large enough.

Proposition 6. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation and $\beta \in [0, +\infty]$. Assume that for some $\rho > 0$, the image of the unit $\|\cdot\|_*$-ball in $\mathbb{R}^{m\times n}$ under the mapping $X \mapsto \mathcal{A}X$ contains the ball $B = \{x \in \mathbb{R}^{q} : \|x\|_1 \le \rho\}$. Then for every $s \in \{1, 2, \ldots, p\}$,
$$\beta \ge \tfrac{1}{\rho},\ \gamma_s(\mathcal{A}) < 1 \;\Longrightarrow\; \gamma_s(\mathcal{A}, \beta) = \gamma_s(\mathcal{A}), \qquad \beta \ge \tfrac{1}{\rho},\ \hat\gamma_s(\mathcal{A}) < \tfrac12 \;\Longrightarrow\; \hat\gamma_s(\mathcal{A}, \beta) = \hat\gamma_s(\mathcal{A}). \tag{31}$$

Proof. Fix $s \in \{1, 2, \ldots, p\}$. We only need to show the first implication. Let $\gamma := \gamma_s(\mathcal{A}) < 1$. Then for every matrix $W \in \mathbb{R}^{m\times n}$ with SVD $W = U_{m\times s}V_{n\times s}^{T}$, there exists a vector $y \in \mathbb{R}^{q}$ such that
$$\mathcal{A}^*y = U\operatorname{Diag}(\sigma(\mathcal{A}^*y))V^{T}, \tag{32}$$
where $U = [U_{m\times s}\ U_{m\times(p-s)}]$, $V = [V_{n\times s}\ V_{n\times(p-s)}]$ have orthonormal columns, and
$$\sigma_i(\mathcal{A}^*y)\begin{cases} = 1, & \text{if } \sigma_i(W) = 1,\\ \in [0, \gamma], & \text{if } \sigma_i(W) = 0,\end{cases} \qquad i \in \{1, 2, \ldots, p\}. \tag{33}$$
Clearly, $\|\mathcal{A}^*y\| \le 1$. That is,
$$1 \ge \|\mathcal{A}^*y\| = \max_{X \in \mathbb{R}^{m\times n}}\{\langle X, \mathcal{A}^*y\rangle : \|X\|_* \le 1\} = \max_{X \in \mathbb{R}^{m\times n}}\{\langle u, y\rangle : u = \mathcal{A}X,\ \|X\|_* \le 1\}. \tag{34}$$
From the inclusion assumption, we obtain that
$$\max_{X \in \mathbb{R}^{m\times n}}\{\langle u, y\rangle : u = \mathcal{A}X,\ \|X\|_* \le 1\} \ge \max_{u \in \mathbb{R}^{q}}\{\langle u, y\rangle : \|u\|_1 \le \rho\} = \rho\|y\|_\infty = \rho\|y\|_d. \tag{35}$$
Combining the above two strings of relations, we obtain $\rho\|y\|_d \le 1$, that is, $\|y\|_d \le 1/\rho \le \beta$. Hence the certificate $y$ above is also feasible in the definition of $\gamma_s(\mathcal{A}, \beta)$, so $\gamma_s(\mathcal{A}, \beta) \le \gamma_s(\mathcal{A})$; since $\gamma_s(\mathcal{A}, \beta) \ge \gamma_s(\mathcal{A}, +\infty) = \gamma_s(\mathcal{A})$ always holds, the desired conclusion follows.

3. s-Goodness and G-Numbers

We first give the following characterization of s-goodness of a linear transformation $\mathcal{A}$ via the G-number $\gamma_s(\mathcal{A})$, which explains the importance of $\gamma_s(\mathcal{A})$ in LMR.

Theorem 7. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation, and let $s \in \{0, 1, 2, \ldots, p\}$. Then $\mathcal{A}$ is s-good if and only if $\gamma_s(\mathcal{A}) < 1$.

Proof. Suppose $\mathcal{A}$ is s-good. Let $W \in \mathbb{R}^{m\times n}$ be a matrix of rank $s$. Without loss of generality, let $W = U_{m\times s}\Sigma_s V_{n\times s}^{T}$ be its SVD, where $U_{m\times s} \in \mathbb{R}^{m\times s}$, $V_{n\times s} \in \mathbb{R}^{n\times s}$ have orthonormal columns and $\Sigma_s = \operatorname{Diag}((\sigma_1(W), \ldots, \sigma_s(W))^{T})$. By the definition of s-goodness of $\mathcal{A}$, $W$ is the unique solution to the optimization problem (6).
Using the first-order optimality conditions, we obtain that there exists $y \in \mathbb{R}^{q}$ such that the function $\phi_y(X) = \|X\|_* - y^{T}[\mathcal{A}X - \mathcal{A}W]$ attains its minimum value over $X \in \mathbb{R}^{m\times n}$ at $X = W$. So $0 \in \partial\phi_y(W)$, or $\mathcal{A}^*y \in \partial\|W\|_*$. Using the fact (see, e.g., [27])
$$\partial\|W\|_* = \{U_{m\times s}V_{n\times s}^{T} + M : W \text{ and } M \text{ have orthogonal row and column spaces, and } \|M\| \le 1\}, \tag{36}$$
it follows that there exist matrices $U_{m\times(p-s)}$, $V_{n\times(p-s)}$ such that $\mathcal{A}^*y = U\operatorname{Diag}(\sigma(\mathcal{A}^*y))V^{T}$, where $U = [U_{m\times s}\ U_{m\times(p-s)}]$, $V = [V_{n\times s}\ V_{n\times(p-s)}]$ have orthonormal columns and
$$\sigma_i(\mathcal{A}^*y)\begin{cases} = 1, & \text{if } i \in J,\\ \in [0, 1], & \text{if } i \in \bar J,\end{cases} \tag{37}$$
where $J := \{i : \sigma_i(W) \neq 0\}$ and $\bar J := \{1, 2, \ldots, p\}\setminus J$. Therefore, the optimal objective value of the optimization problem
$$\min_{y,\gamma}\Big\{\gamma : \mathcal{A}^*y \in \partial\|W\|_*,\ \sigma_i(\mathcal{A}^*y) = 1 \text{ if } i \in J,\ \sigma_i(\mathcal{A}^*y) \in [0, \gamma] \text{ if } i \in \bar J\Big\} \tag{38}$$
is at most one. For the given $W$ with its SVD $U_{m\times s}\Sigma_s V_{n\times s}^{T}$, let
$$\Pi := \operatorname{conv}\Big\{Z \in \mathbb{R}^{m\times n} : \text{the SVD of } Z \text{ is } Z = [U_{m\times s}\ U_{m\times(p-s)}]\begin{pmatrix}0_s & 0\\ 0 & \Sigma(Z)\end{pmatrix}[V_{n\times s}\ V_{n\times(p-s)}]^{T}\Big\}. \tag{39}$$
It is easy to see that $\Pi$ is a subspace (indeed, $\Pi = \{Z : U_{m\times s}^{T}Z = 0,\ ZV_{n\times s} = 0\}$) and its normal cone (in the sense of variational analysis; see, e.g., [28] for details) is $\Pi^{\perp}$. Thus, the above problem (38) is equivalent to the following convex optimization problem with a set constraint:
$$\min_{y,Z}\{\|Z\| : \mathcal{A}^*y - U_{m\times s}V_{n\times s}^{T} - Z = 0,\ Z \in \Pi\}. \tag{40}$$
We will show that the optimal value is less than 1. For a contradiction, suppose that the optimal value is one. Then, by [28, Theorem 10.1 and Exercise 10.52], there exists a Lagrange multiplier $D \in \mathbb{R}^{m\times n}$ such that the function
$$L(y, Z) = \|Z\| + \langle D, \mathcal{A}^*y - U_{m\times s}V_{n\times s}^{T} - Z\rangle + \delta_\Pi(Z) \tag{41}$$
has unconstrained minimum in $(y, Z)$ equal to 1, where $\delta_\Pi(\cdot)$ is the indicator function of $\Pi$. Let $(y^*, Z^*)$ be an optimal solution. Then, by the optimality condition $0 \in \partial L$, we obtain that
$$0 \in \partial_y L(y^*, Z^*), \qquad 0 \in \partial_Z L(y^*, Z^*). \tag{42}$$
Direct calculation yields that
$$\mathcal{A}D = 0, \qquad 0 \in -D + \partial\|Z^*\| + \Pi^{\perp}. \tag{43}$$
Then there exist $D_J \in \Pi^{\perp}$ and $D_{\bar J} \in \partial\|Z^*\|$ such that $D = D_J + D_{\bar J}$. Notice that [29, Corollary 6.4] implies that for $D_{\bar J} \in \partial\|Z^*\|$, $D_{\bar J} \in \Pi$ and $\|D_{\bar J}\|_* \le 1$. Therefore, $\langle D, U_{m\times s}V_{n\times s}^{T}\rangle = \langle D_J, U_{m\times s}V_{n\times s}^{T}\rangle$ and $\langle D, Z^*\rangle = \langle D_{\bar J}, Z^*\rangle$. Moreover, $\langle D_{\bar J}, Z^*\rangle \le \|Z^*\|$ by the definition of the dual norm of $\|\cdot\|$. This, together with the facts $\mathcal{A}D = 0$, $D_J \in \Pi^{\perp}$, and $D_{\bar J} \in \partial\|Z^*\| \subseteq \Pi$, yields
$$L(y^*, Z^*) = \|Z^*\| - \langle D_{\bar J}, Z^*\rangle + \langle D, \mathcal{A}^*y^*\rangle - \langle D_J, U_{m\times s}V_{n\times s}^{T}\rangle + \delta_\Pi(Z^*) \ge -\langle D_J, U_{m\times s}V_{n\times s}^{T}\rangle + \delta_\Pi(Z^*). \tag{44}$$
Thus, the minimum value of $L(y, Z)$ is attained, with $L(y^*, Z^*) = -\langle D_J, U_{m\times s}V_{n\times s}^{T}\rangle$, when $Z^* \in \Pi$ and $\langle D_{\bar J}, Z^*\rangle = \|Z^*\|$; we obtain that $\|D_{\bar J}\|_* = 1$. By assumption, $1 = L(y^*, Z^*) = -\langle D_J, U_{m\times s}V_{n\times s}^{T}\rangle$. That is, $\sum_{i=1}^{s}(U_{m\times s}^{T}DV_{n\times s})_{ii} = -1$. Without loss of generality, let the SVD of the optimal $Z^*$ be $Z^* = \widetilde U\begin{pmatrix}0_s & 0\\ 0 & \Sigma(Z^*)\end{pmatrix}\widetilde V^{T}$, where $\widetilde U := [U_{m\times s}\ \widetilde U_{m\times(p-s)}]$ and $\widetilde V := [V_{n\times s}\ \widetilde V_{n\times(p-s)}]$. From the above arguments, we obtain that

(i) $\mathcal{A}D = 0$,
(ii) $\sum_{i=1}^{s}(U_{m\times s}^{T}DV_{n\times s})_{ii} = \sum_{i \in J}(\widetilde U^{T}D\widetilde V)_{ii} = -1$,
(iii) $\sum_{i \in \bar J}(\widetilde U^{T}D\widetilde V)_{ii} = 1$.

Clearly, for every $t \in \mathbb{R}$, the matrices $X_t := W + tD$ are feasible in (6). Note that
$$W = U_{m\times s}\Sigma_s V_{n\times s}^{T} = [U_{m\times s}\ \widetilde U_{m\times(p-s)}]\begin{pmatrix}\Sigma_s & 0\\ 0 & 0\end{pmatrix}[V_{n\times s}\ \widetilde V_{n\times(p-s)}]^{T}. \tag{45}$$
Then, $\|W\|_* = \|\widetilde U^{T}W\widetilde V\|_* = \operatorname{Tr}(\widetilde U^{T}W\widetilde V)$. From the above equations, we obtain that $\|X_t\|_* = \|W\|_*$ for all small enough $t > 0$ (since $\sigma_i(W) > 0$, $i \in \{1, 2, \ldots, s\}$). Noting that $W$ is the unique optimal solution to (6), we have $X_t = W$, which means that $(\widetilde U^{T}D\widetilde V)_{ii} = 0$ for $i \in J$. This is a contradiction, and hence the desired conclusion holds.

We next prove that $\mathcal{A}$ is s-good if $\gamma_s(\mathcal{A}) < 1$. That is, we let $W$ be an s-rank matrix and we show that $W$ is the unique optimal solution to (6). Without loss of generality, let $W$ be a matrix of rank $s' \neq 0$ with SVD $W = U_{m\times s'}\Sigma_{s'}V_{n\times s'}^{T}$, where $U_{m\times s'} \in \mathbb{R}^{m\times s'}$, $V_{n\times s'} \in \mathbb{R}^{n\times s'}$ have orthonormal columns and $\Sigma_{s'} = \operatorname{Diag}((\sigma_1(W), \ldots, \sigma_{s'}(W))^{T})$. It follows from Proposition 4 that $\gamma_{s'}(\mathcal{A}) \le \gamma_s(\mathcal{A}) < 1$. By the definition of $\gamma_{s'}(\mathcal{A})$, there exists $y \in \mathbb{R}^{q}$ such that $\mathcal{A}^*y = U\operatorname{Diag}(\sigma(\mathcal{A}^*y))V^{T}$, where $U = [U_{m\times s'}\ U_{m\times(p-s')}]$, $V = [V_{n\times s'}\ V_{n\times(p-s')}]$, and
$$\sigma_i(\mathcal{A}^*y)\begin{cases} = 1, & \text{if } \sigma_i(W) \neq 0,\\ \in [0, 1), & \text{if } \sigma_i(W) = 0.\end{cases} \tag{46}$$
Now, we have the optimization problem of minimizing the function
$$f(X) = \|X\|_* - y^{T}[\mathcal{A}X - \mathcal{A}W] = \|X\|_* - \langle\mathcal{A}^*y, X\rangle + \|W\|_* \tag{47}$$
over all $X \in \mathbb{R}^{m\times n}$ such that $\mathcal{A}X = \mathcal{A}W$. Note that $\langle\mathcal{A}^*y, X\rangle \le \|X\|_*$ by $\|\mathcal{A}^*y\| \le 1$ and the definition of the dual norm. So $f(X) \ge \|X\|_* - \|X\|_* + \|W\|_* = \|W\|_*$, and this function attains its unconstrained minimum in $X$ at $X = W$. Hence $X = W$ is an optimal solution to (6). It remains to show that this optimal solution is unique. Let $Y$ be another optimal solution to the problem.
Then $0 = f(Y) - f(W) = \|Y\|_* - \langle\mathcal{A}^*y, Y\rangle$. This, together with the fact $\|\mathcal{A}^*y\| \le 1$, implies that there exist SVDs for $\mathcal{A}^*y$ and $Y$ such that
$$\mathcal{A}^*y = \widetilde U\operatorname{Diag}(\sigma(\mathcal{A}^*y))\widetilde V^{T}, \qquad Y = \widetilde U\operatorname{Diag}(\sigma(Y))\widetilde V^{T}, \tag{48}$$
where $\widetilde U \in \mathbb{R}^{m\times p}$ and $\widetilde V \in \mathbb{R}^{n\times p}$ have orthonormal columns, and $\sigma_i(Y) = 0$ if $\sigma_i(\mathcal{A}^*y) \neq 1$. Thus, for all $i \in \{s'+1, \ldots, p\}$ with $\sigma_i(\mathcal{A}^*y) \neq 0$, we must have $\sigma_i(Y) = \sigma_i(W) = 0$. By the two forms of the SVD of $\mathcal{A}^*y$ as above, $U_{m\times s'}V_{n\times s'}^{T} = \widetilde U_{m\times s'}\widetilde V_{n\times s'}^{T}$, where $\widetilde U_{m\times s'}$, $\widetilde V_{n\times s'}$ are the corresponding submatrices of $\widetilde U$, $\widetilde V$, respectively. Without loss of generality, let
$$U = [u_1, u_2, \ldots, u_p], \quad V = [v_1, v_2, \ldots, v_p], \quad \widetilde U = [\tilde u_1, \tilde u_2, \ldots, \tilde u_p], \quad \widetilde V = [\tilde v_1, \tilde v_2, \ldots, \tilde v_p], \tag{49}$$
where $u_i = \tilde u_i$ and $v_i = \tilde v_i$ for the indices $i \in \{i : \sigma_i(\mathcal{A}^*y) = 0,\ i \in \{s'+1, \ldots, p\}\}$. Then we have
$$Y = \sum_{i=1}^{s'}\sigma_i(Y)\,\tilde u_i\tilde v_i^{T}, \qquad W = \sum_{i=1}^{s'}\sigma_i(W)\,u_iv_i^{T}. \tag{50}$$
From $U_{m\times s'}V_{n\times s'}^{T} = \widetilde U_{m\times s'}\widetilde V_{n\times s'}^{T}$, we obtain that
$$\sum_{i=s'+1}^{p}\sigma_i(\mathcal{A}^*y)\,\tilde u_i\tilde v_i^{T} = \sum_{i=s'+1}^{p}\sigma_i(\mathcal{A}^*y)\,u_iv_i^{T}. \tag{51}$$
Therefore, we deduce
$$\sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)\neq 0}}^{p}\sigma_i(\mathcal{A}^*y)\,\tilde u_i\tilde v_i^{T} + \sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)=0}}^{p}\tilde u_i\tilde v_i^{T} = \sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)\neq 0}}^{p}\sigma_i(\mathcal{A}^*y)\,u_iv_i^{T} + \sum_{\substack{i=s'+1\\ \sigma_i(\mathcal{A}^*y)=0}}^{p}u_iv_i^{T} =: \Omega. \tag{52}$$
Clearly, the rank of $\Omega$ is no less than $p - s' \ge p - s$. From the orthogonality properties of $U$, $V$ and $\widetilde U$, $\widetilde V$, we easily derive that
$$\Omega^{T}\tilde u_i\tilde v_i^{T} = 0, \qquad \Omega^{T}u_iv_i^{T} = 0, \qquad \forall i \in \{1, 2, \ldots, s'\}. \tag{53}$$
Thus, we obtain $\Omega^{T}(Y - W) = 0$, which implies that the rank of the matrix $Y - W$ is no more than $s$. Since $\gamma_s(\mathcal{A}) < 1$, there exists $\hat y$ such that
$$\sigma_i(\mathcal{A}^*\hat y)\begin{cases} = 1, & \text{if } \sigma_i(Y - W) \neq 0,\\ \in [0, 1), & \text{if } \sigma_i(Y - W) = 0.\end{cases} \tag{54}$$
Therefore, $0 = \hat y^{T}\mathcal{A}(Y - W) = \langle\mathcal{A}^*\hat y, Y - W\rangle = \|Y - W\|_*$. Then $Y = W$.

For the G-number $\hat\gamma_s(\mathcal{A})$, we directly obtain the following equivalent characterization of s-goodness from Proposition 5 and Theorem 7.

Theorem 8. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation and $s \in \{1, 2, \ldots, p\}$. Then $\mathcal{A}$ is s-good if and only if $\hat\gamma_s(\mathcal{A}) < \tfrac12$.

4. s-Goodness, NSP, and RIP

This section deals with the connections between s-goodness, the null space property (NSP), and the restricted isometry property (RIP). We start by establishing the equivalence of NSP and the condition $\hat\gamma_s(\mathcal{A}) < \tfrac12$. Here, we say that $\mathcal{A}$ satisfies NSP if every nonzero matrix $W \in \operatorname{Null}(\mathcal{A})$, with SVD $W = U\operatorname{Diag}(\sigma(W))V^{T}$, satisfies
$$\sum_{i=1}^{s}\sigma_i(W) < \sum_{i=s+1}^{p}\sigma_i(W). \tag{55}$$
For further details, see, for example, [14, 19–21] and the references therein.

Proposition 9. For the linear transformation $\mathcal{A}$, $\hat\gamma_s(\mathcal{A}) < \tfrac12$ if and only if $\mathcal{A}$ satisfies NSP.

Proof. We first give an equivalent representation of the G-number $\hat\gamma_s(\mathcal{A}, \beta)$. We define a compact convex set first:
$$P_s := \{V \in \mathbb{R}^{m\times n} : \|V\|_* \le s,\ \|V\| \le 1\}. \tag{56}$$
Let $B_\beta := \{y \in \mathbb{R}^{q} : \|y\|_d \le \beta\}$ and $B := \{M \in \mathbb{R}^{m\times n} : \|M\| \le 1\}$. By definition, $\hat\gamma_s(\mathcal{A}, \beta)$ is the smallest $\gamma$ such that the closed convex set $C_{\gamma,\beta} := \mathcal{A}^*B_\beta + \gamma B$ contains all matrices with $s$ nonzero singular values, all equal to 1. Equivalently, $C_{\gamma,\beta}$ contains the convex hull of these matrices, namely, $P_s$. Note that $\gamma$ satisfies the inclusion $P_s \subseteq C_{\gamma,\beta}$ if and only if for every $W \in \mathbb{R}^{m\times n}$
$$\max_{V \in P_s}\langle W, V\rangle \le \max_{X \in C_{\gamma,\beta}}\langle W, X\rangle = \max_{y \in \mathbb{R}^{q},\, M \in \mathbb{R}^{m\times n}}\{\langle W, \mathcal{A}^*y\rangle + \gamma\langle W, M\rangle : \|y\|_d \le \beta,\ \|M\| \le 1\} = \beta\|\mathcal{A}W\| + \gamma\|W\|_*. \tag{57}$$
For the above, we adopt the convention that whenever $\beta = +\infty$, $\beta\|\mathcal{A}W\|$ is defined to be $+\infty$ or 0 depending on whether $\|\mathcal{A}W\| > 0$ or $\|\mathcal{A}W\| = 0$. Thus, $P_s \subseteq C_{\gamma,\beta}$ if and only if $\max_{V \in P_s}\{\langle W, V\rangle\} - \beta\|\mathcal{A}W\| \le \gamma\|W\|_*$ for every $W$. Using the homogeneity of this last relation with respect to $W$, the above is equivalent to
$$\max_{W,V}\{\langle W, V\rangle - \beta\|\mathcal{A}W\| : V \in P_s,\ \|W\|_* \le 1\} \le \gamma. \tag{58}$$
Therefore, we obtain $\hat\gamma_s(\mathcal{A}, \beta) = \max_{W,V}\{\langle W, V\rangle - \beta\|\mathcal{A}W\| : V \in P_s,\ \|W\|_* \le 1\}$. Furthermore,
$$\hat\gamma_s(\mathcal{A}) = \max_{W,V}\{\langle W, V\rangle : V \in P_s,\ \|W\|_* \le 1,\ \mathcal{A}W = 0\}. \tag{59}$$
For $W \in \mathbb{R}^{m\times n}$ with $\mathcal{A}W = 0$, let $W = U\operatorname{Diag}(\sigma(W))V^{T}$ be its SVD. Then, we obtain the sum of the $s$ largest singular values of $W$ as
$$\|W\|_{s,*} = \max_{V \in P_s}\langle W, V\rangle. \tag{60}$$
From (59), we immediately obtain that $\hat\gamma_s(\mathcal{A})$ is the best upper bound on $\|W\|_{s,*}$ over matrices $W \in \operatorname{Null}(\mathcal{A})$ such that $\|W\|_* \le 1$. Therefore, $\hat\gamma_s(\mathcal{A}) < \tfrac12$ implies that the maximum value of the $\|\cdot\|_{s,*}$-norms of matrices $W \in \operatorname{Null}(\mathcal{A})$ with $\|W\|_* = 1$ is less than $\tfrac12$. That is, $\sum_{i=1}^{s}\sigma_i(W) < \tfrac12\sum_{i=1}^{p}\sigma_i(W)$. Thus, $\sum_{i=1}^{s}\sigma_i(W) < \sum_{i=s+1}^{p}\sigma_i(W)$, and hence $\mathcal{A}$ satisfies NSP. Now, it is easy to see that $\mathcal{A}$ satisfies NSP if and only if $\hat\gamma_s(\mathcal{A}) < \tfrac12$.

Next, we consider the connection between restricted isometry constants and the G-numbers of the linear transformation in LMR. It is well known that, for a nonsingular matrix (transformation) $B \in \mathbb{R}^{q\times q}$, the RIP constants of $\mathcal{A}$ and $B\mathcal{A}$ can be very different, as shown by Zhang [30] for the vector case. However, the s-goodness properties of $\mathcal{A}$ and $B\mathcal{A}$ are always the same for a nonsingular transformation $B \in \mathbb{R}^{q\times q}$ (i.e., s-goodness enjoys scale invariance in this sense). Recall that the s-restricted isometry constant $\delta_s$ of a linear transformation $\mathcal{A}$ is defined as the smallest constant such that the following holds for all s-rank matrices $X \in \mathbb{R}^{m\times n}$:
$$(1 - \delta_s)\|X\|_F^2 \le \|\mathcal{A}X\|_2^2 \le (1 + \delta_s)\|X\|_F^2. \tag{61}$$
In this case, we say that $\mathcal{A}$ possesses the RI($\delta_s$)-property (RIP), as in the CS context. For details, see [4, 31–34] and the references therein.

Proposition 10. Let $\mathcal{A}\colon \mathbb{R}^{m\times n} \to \mathbb{R}^{q}$ be a linear transformation and $s \in \{0, 1, 2, \ldots, p\}$. For any nonsingular transformation $B \in \mathbb{R}^{q\times q}$, $\hat\gamma_s(\mathcal{A}) = \hat\gamma_s(B\mathcal{A})$.

Proof. It follows from the nonsingularity of $B$ that $\{W : \mathcal{A}W = 0\} = \{W : B\mathcal{A}W = 0\}$. Then, by the equivalent representation (59) of the G-number $\hat\gamma_s$,
$$\hat\gamma_s(\mathcal{A}) = \max_{W,V}\{\langle W, V\rangle : V \in P_s,\ \|W\|_* \le 1,\ \mathcal{A}W = 0\} = \max_{W,V}\{\langle W, V\rangle : V \in P_s,\ \|W\|_* \le 1,\ B\mathcal{A}W = 0\} = \hat\gamma_s(B\mathcal{A}). \tag{62}$$

For the RIP constant $\delta_{2s}$, Oymak et al. [21] gave the current best bound on the restricted isometry constant, $\delta_{2s} < 0.472$, where they proposed a general technique for translating results from SSR to LMR. Together with the above arguments, we immediately obtain the following theorem.

Theorem 11. $\delta_{2s} < 0.472 \Rightarrow \mathcal{A}$ satisfies NSP $\Leftrightarrow \hat\gamma_s(\mathcal{A}) < \tfrac12 \Leftrightarrow \gamma_s(\mathcal{A}) < 1 \Leftrightarrow \mathcal{A}$ is s-good.

Proof. It follows from [21, Theorem 1], Proposition 9, and Theorems 7 and 8.

The above theorem says that s-goodness is a necessary and sufficient condition for recovering the low-rank solution exactly via nuclear norm minimization.

5. Conclusion

In this paper, we have shown that s-goodness of the linear transformation in LMR is a necessary and sufficient condition for exact s-rank matrix recovery via nuclear norm minimization, and that it is equivalent to the null space property. Our analysis is based on the two characteristic s-goodness constants, $\gamma_s$ and $\hat\gamma_s$, and on the variational properties of matrix norms in convex optimization. This shows that s-goodness is an elegant concept for low-rank matrix recovery, although $\gamma_s$ and $\hat\gamma_s$ may not be easy to compute. Development of efficiently computable bounds on these quantities is left to future work.
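As a first, purely heuristic step in that direction (not part of the paper), the representation (59) suggests a simple numerical probe: sampling directions $W$ in $\operatorname{Null}(\mathcal{A})$ and evaluating the ratio $\|W\|_{s,*}/\|W\|_*$ yields a lower bound on $\hat\gamma_s(\mathcal{A})$, so a value of at least $\tfrac12$ refutes NSP, and hence s-goodness by Theorems 8 and 11, whereas smaller values are only evidence and certify nothing. The sketch below assumes numpy and the stacked-matrix representation of $\mathcal{A}$ used in the earlier sketches; all names are illustrative.

```python
# Heuristic lower bound on gamma_hat_s(A) via sampling in Null(A) (numpy assumed).
import numpy as np

def gamma_hat_lower_bound(A_mats, s, trials=2000, seed=0):
    """A_mats has shape (q, m, n); returns the max over sampled W in Null(A)
    of (sum of s largest singular values of W) / (nuclear norm of W)."""
    q, m, n = A_mats.shape
    A_flat = A_mats.reshape(q, m * n)
    # Orthonormal basis of Null(A), viewing matrices as vectors in R^{m*n}.
    _, sv, Vt = np.linalg.svd(A_flat, full_matrices=True)
    rank = int(np.sum(sv > 1e-10))
    null_basis = Vt[rank:]                      # (m*n - rank) x (m*n)
    if null_basis.shape[0] == 0:
        return 0.0                              # trivial null space
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        W = (rng.standard_normal(null_basis.shape[0]) @ null_basis).reshape(m, n)
        sigma = np.linalg.svd(W, compute_uv=False)
        best = max(best, sigma[:s].sum() / sigma.sum())   # Ky Fan s over nuclear
    return best

# A returned value >= 0.5 shows that NSP fails; values < 0.5 are only suggestive.
```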
Even though we develop and use techniques based on optimization, convex analysis, and geometry, we do not provide explicit analogues of the results of Donoho [35], where necessary and sufficient conditions for the vector recovery special case were derived from the geometric notions of face preservation and neighborliness. The corresponding generalization to low-rank recovery is not known; currently the closest result is [22]. Moreover, it is also important to consider the semidefinite relaxation (SDR) of rank minimization with a positive semidefinite constraint, since the SDR convexifies nonconvex or discrete optimization problems by removing the rank-one constraint. Another future research topic is to extend the main results and the techniques of this paper to the SDR.

Acknowledgments

The authors thank two anonymous referees for their very useful comments. The work was supported in part by the National Basic Research Program of China (2010CB732501), the National Natural Science Foundation of China (11171018), a Discovery Grant from NSERC, a research grant from the University of Waterloo, ONR research Grant N00014-12-10049, and the Fundamental Research Funds for the Central Universities (2011JBM306, 2013JBM095).

References

[1] C. Beck and R. D'Andrea, "Computational study and comparisons of LFT reducibility methods," in Proceedings of the American Control Conference, Philadelphia, Pa, USA, June 1998.
[2] M. Mesbahi and G. P. Papavassilopoulos, "On the rank minimization problem over a positive semidefinite linear matrix inequality," IEEE Transactions on Automatic Control, vol. 42, no. 2, pp. 239–243, 1997.
[3] M. Fazel, H. Hindi, and S. P. Boyd, "A rank minimization heuristic with application to minimum order system approximation," in Proceedings of the American Control Conference, pp. 4734–4739, June 2001.
[4] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, vol. 52, no. 3, pp. 471–501, 2010.
[5] B. P. W. Ames and S. A. Vavasis, "Nuclear norm minimization for the planted clique and biclique problems," Mathematical Programming, vol. 129, no. 1, pp. 69–89, 2011.
[6] J.-F. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
[7] E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, no. 6, pp. 717–772, 2009.
[8] C. Ding, D. Sun, and K. C. Toh, "An introduction to a class of matrix cone programming," Tech. Rep., 2010.
[9] Z. Lin, M. Chen, L. Wu, and Y. Ma, "The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices," http://arxiv.org/abs/1009.5055.
[10] Y. Liu, D. Sun, and K. C. Toh, "An implementable proximal point algorithmic framework for nuclear norm minimization," Mathematical Programming, vol. 133, no. 1-2, pp. 399–436, 2012.
[11] Z. Liu and L. Vandenberghe, "Interior-point method for nuclear norm approximation with application to system identification," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 3, pp. 1235–1256, 2009.
[12] S. Ma, D. Goldfarb, and L. Chen, "Fixed point and Bregman iterative methods for matrix rank minimization," Mathematical Programming, vol. 128, no. 1-2, pp. 321–353, 2011.
[13] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: robust alignment by sparse and low-rank decomposition for linearly correlated images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2233–2246, 2012.
[14] B. Recht, W. Xu, and B. Hassibi, "Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization," in Proceedings of the 47th IEEE Conference on Decision and Control (CDC '08), pp. 3065–3070, Cancun, Mexico, December 2008.
[15] M. Tao and X. Yuan, "Recovering low-rank and sparse components of matrices from incomplete and noisy observations," SIAM Journal on Optimization, vol. 21, no. 1, pp. 57–81, 2011.
[16] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[17] E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[18] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[19] B. Recht, W. Xu, and B. Hassibi, "Null space conditions and thresholds for rank minimization," Mathematical Programming, vol. 127, no. 1, pp. 175–202, 2011.
[20] S. Oymak and B. Hassibi, "New null space results and recovery thresholds for matrix rank minimization," http://arxiv.org/abs/1011.6326.
[21] S. Oymak, K. Mohan, M. Fazel, and B. Hassibi, "A simplified approach to recovery conditions for low-rank matrices," in Proceedings of the International Symposium on Information Theory (ISIT '11), pp. 2318–2322, August 2011.
[22] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, "The convex geometry of linear inverse problems," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing, 2011.
[23] A. d'Aspremont and L. El Ghaoui, "Testing the nullspace property using semidefinite programming," Mathematical Programming, vol. 127, no. 1, pp. 123–144, 2011.
[24] A. Juditsky and A. Nemirovski, "On verifiable sufficient conditions for sparse signal recovery via ℓ1 minimization," Mathematical Programming, vol. 127, no. 1, pp. 57–88, 2011.
[25] A. Juditsky, F. Karzan, and A. Nemirovski, "Verifiable conditions of ℓ1-recovery for sparse signals with sign restrictions," Mathematical Programming, vol. 127, no. 1, pp. 89–122, 2011.
[26] A. Juditsky, F. Karzan, and A. S. Nemirovski, "Accuracy guarantees for ℓ1-recovery," IEEE Transactions on Information Theory, vol. 57, no. 12, pp. 7818–7839, 2011.
[27] G. A. Watson, "Characterization of the subdifferential of some matrix norms," Linear Algebra and Its Applications, vol. 170, pp. 33–45, 1992.
[28] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, New York, NY, USA, 2nd edition, 2004.
[29] A. S. Lewis and H. S. Sendov, "Nonsmooth analysis of singular values. II. Applications," Set-Valued Analysis, vol. 13, no. 3, pp. 243–264, 2005.
[30] Y. Zhang, "Theory of compressive sensing via ℓ1-minimization: a non-RIP analysis and extensions," Tech. Rep., 2008.
[31] E. J. Candès and Y. Plan, "Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements," IEEE Transactions on Information Theory, vol. 57, no. 4, pp. 2342–2359, 2011.
[32] K. Lee and Y. Bresler, "Guaranteed minimum rank approximation from linear observations by nuclear norm minimization with an ellipsoidal constraint," http://arxiv.org/abs/0903.4742.
[33] R. Meka, P. Jain, and I. S. Dhillon, "Guaranteed rank minimization via singular value projection," http://arxiv.org/abs/0909.5457.
[34] K. Mohan and M. Fazel, "New restricted isometry results for noisy low-rank matrix recovery," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '10), Austin, Tex, USA, June 2010.
[35] D. L. Donoho, "Neighborly polytopes and sparse solution of underdetermined linear equations," Tech. Rep., 2004.