Supporting Information Appendix S1........................................................................................................................ 2 The Pareto front is the locus of all points in which the gradients of the performance functions are positive-linearly dependent ....................................................................... 2 Appendix S2...................................................................................................................... 10 The Pareto front associated with 2 tasks in a 2D-mirphospace is a hyperbola ............. 10 Appendix S3...................................................................................................................... 16 The Pareto front of 2 tasks in an n-dimensional morphospace has hyperbolic projections ..................................................................................................................... 16 Appendix S4...................................................................................................................... 19 Calculation of the deviation of the Pareto front from a straight line for 2 tasks in a 2Dmorphospace ................................................................................................................. 19 Appendix S5...................................................................................................................... 22 Each Pareto front of 2 tasks in a 2D morphospace is generated by a 1-dimensional family of norm-pairs ..................................................................................................... 22 Appendix S6...................................................................................................................... 38 Generally, for 3 tasks in a 2D morphospace, the norms can be uniquely determined by the shape of the Pareto front ......................................................................................... 38 Appendix S7...................................................................................................................... 43 The boundary of the 3-tasks Pareto front is composed of the three 2-tasks Pareto fronts ....................................................................................................................................... 43 Appendix S8...................................................................................................................... 55 1 The resulting Pareto front when one of the performance function is maximized in a region ............................................................................................................................ 55 Appendix S9...................................................................................................................... 58 Bounds on the Pareto front for general performance functions show that normally it is located in a region close to the archetype ..................................................................... 58 Appendix S10.................................................................................................................... 63 The Pareto front of r strongly concave performance functions is a connected set of Hausdorff dimension of at most r-1. ............................................................................. 63 References ......................................................................................................................... 
67 Appendix S1 The Pareto front is the locus of all points in which the gradients of the performance functions are positive-linearly dependent In this appendix we will analytically calculate the Pareto front for a system that needs to perform r tasks in an n-dimensional morphospace V. Each performance function has a single maximum - the archetype ๐ฃ๐∗ , and it decreases monotonically with the distance from the archetype, where the distance is derived from a general inner-product norm. Each performance function may depend on a different inner product norm. ⇒ The performance at task i is Pi (v) ๏ฝ pi ( v ๏ญ vi* 2 i ) , where v ๏ญ vi* 2 i ๏ฝ (v ๏ญ v*i )T M i (v ๏ญ v*i ) , M i is a positive definite matrix, and Pi is a monotonically decreasing function of a single argument. We say that v is Pareto optimal relative to P1 ,..., Pr , if for every v ' ๏น v there exists j ๏๏ป1,..., n๏ฝ such that p j ( v ๏ญ v*j 2 j ) ๏พ p j ( v '๏ญ v*j 2 2 j ) . Since P1 ,..., Pr are monotonically decreasing with their argument, it is equivalent to say that ๐ฃ is Pareto optimal relative to P1 ,..., Pr if for every v ' ๏น v there exists j ๏๏ป1,..., n๏ฝ such that ๏ญ v '๏ญ v*j Denote Di (v) ๏ฝ v ๏ญ vi* 2 i 2 j ๏ผ ๏ญ v ๏ญ v*j 2 j . . The Pareto front associated with P1 ,..., Pr is exactly the same as the Pareto front associated with D1 ,..., Dr . Hence, from now on, without loss of generality, we will assume that Pi (v) ๏ฝ Di (v) Here we will show that the Pareto front associated with r tasks in an n-dimensional morphospace V is given by all points ๐ฃ for which: r (I) ๏ค๏ก i ๏ณ 0 , ๏ฅ ๏ก i ๏พ 0 , s.t. i ๏ฝ1 r ๏ฅ ๏ก M (v ๏ญ v ) ๏ฝ 0 . i ๏ฝ1 i i * i Note that since Pi (v) ๏ฝ ๏ญ(v ๏ญ v*i )T M i (v ๏ญ v*i ) , this is equivalent to ๏ฎ r ๏ฅ ๏ก ๏ P (v ) ๏ฝ 0 . i ๏ฝ1 i i ๏ฎ For the rest of this Appendix we will denote gi :๏ฝ ๏ Pi (v) . In this Appendix, we will regard V as an inner-product space, using the standard Euclidean inner product (using the standard basis of measured traits). Do not be confused with the inner products associated with each performance function Pi , which is only used to define Pi by measuring distance from the respective archetype ๐ฃ๐∗ . To show that if v is Pareto optimal it satisfies property (I), we will rely on a theorem from a paper by Gerstenhaber(4). Our approach is to show that if v does not satisfy property (I), it is not Pareto optimal. Note that if v does not satisfy property (I) then ๏ขi : gi ๏น 0 , as if ๏คi : gi ๏ฝ 0 , then property (I) will be satisfied by choosing: ๏ก i ๏ฝ 1 , ๏ขj ๏น i : ๏ก j ๏ฝ 0 . Here are some relevant definitions quoted from (ref): The halfline generated by the vector u ๏V is the set of all points {๏ฌu | ๏ฌ ๏ณ 0} . 3 The convex polyhedral cone framed by u1 ,..., ur ๏V is the convex hull of their respective halflines. Note that this is equivalent to the set of all non-negative linear combinations r {๏ฅ ๏ฌi ui | ๏ฌi ๏ณ 0} . i ๏ฝ1 Let A be a convex polyhedral cone. L(A) is defined to be the convex hull of all linear subspaces contained in A. l(A) is the dimension of L(A). A convex polyhedral cone A is said to be pointed if ๐(๐ด) = 0. Note that A is pointed if and only if L(A) does not contain a nontrivial full line (a 1dimensional linear subspace), if and only if A does not contain a full line. Lemma 1: A convex polyhedral cone A framed by g1 ,..., g r is pointed if and only if g1 ,..., g r do not satisfy property (I) Proof: 1. “Only if”: If A is not pointed, it contains a full line spanned by the vector ๐ฅ ≠ 0. It means that both x and -x are in A. 
r x ๏ A ๏ x ๏ฝ ๏ฅ ๏ก i gi with ๏ก i ๏ณ 0 . i ๏ฝ1 r ๏ญ x ๏ A ๏ ๏ญ x ๏ฝ ๏ฅ ๏ข i g i with ๏ขi ๏ณ 0 . i ๏ฝ1 4 r r r i ๏ฝ1 i ๏ฝ1 i ๏ฝ1 0 ๏ฝ x ๏ญ x ๏ฝ ๏ฅ ๏ก i gi ๏ซ ๏ฅ ๏ขi gi ๏ฝ ๏ฅ (๏ก i ๏ซ ๏ขi ) gi ๏กi ๏ณ 0, ๏ขi ๏ณ 0 ๏ ๏กi ๏ซ ๏ขi ๏ณ 0 r As x ๏น 0 , not all ๏กi are zero, and not all ๏ขi are zero ๏ ๏ฅ ๏ก i ๏พ 0 and i ๏ฝ1 r r r i ๏ฝ1 i ๏ฝ1 i ๏ฝ1 r ๏ฅ๏ข i ๏ฝ1 i ๏พ0 ๏ ๏ฅ (๏ก i ๏ซ ๏ขi ) ๏ ๏ฅ ๏ก i ๏ซ ๏ฅ ๏ขi ๏พ 0 . We showed that g1 ,..., g r satisfy property (I). Hence, if g1 ,..., g r do not satisfy property (I), A is pointed r 2. “If”: If g1 ,..., g r satisfy property (I), there exist ๏ก1 ,..., ๏ก r ๏ณ 0, ๏ฅ ๏ก i ๏พ 0 such that i ๏ฝ1 r ๏ฅ๏ก g i ๏ฝ1 i ๏ฝ 0 . Choose i such that ๏ก i ๏น 0 . Denote x ๏ฝ ๏ก i gi . Then x ๏น 0 . i x ๏ A by definition. r ๏ฅ๏ก j ๏ฝ1 j ๏นi j g j ๏ฝ ๏ญ๏ก i gi ๏ฝ ๏ญ x . r ๏ฅ๏ก g j ๏ฝ1 j ๏นi j j ๏ A by definition ๏ ๏ญ x ๏ A . x ๏น 0, x ๏ A and ๏ญ x ๏ A ๏ A contains the non-trivial full line: {๏ฌ x | ๏ฌ ๏ } ๏ A is not pointed. Hence, if A is pointed, g1 ,..., g r do not satisfy property (I) โ 5 According to theorem 17 in (ref), a convex polyhedral cone A is pointed if and only if there exists a half plane H ๏ V such that, except for the origin, A is contained in the interior of H. The interior of a half plane is defined as {x | h ๏ x ๏พ 0} , where h is the unit vector perpendicular to the hyperplane separating the space into 2 halves. Note that for a convex polyhedral cone ๐ด framed by nonzero vectors g1 ,..., g r , and for every vector h: (๏ข0 ๏น x ๏ A : x ๏ h ๏พ 0) ๏ ๏ขi : gi ๏ h ๏พ 0 . “ ๏ ” is trivial, as gi ๏ A r r i ๏ฝ1 i ๏ฝ1 “ ๏ ” stems from the fact that ๏ข0 ๏น x ๏ A, ๏ค(๏ก i ๏ณ 0, ๏ฅ ๏ก i ๏พ 0) such that x ๏ฝ ๏ฅ ๏ก i gi Putting together all of the above, we get: g1 ,..., g r do not satisfy property (I) ⇔ The convex polyhedral cone A framed by g1 ,..., g r is pointed ⇔ There exists a half plane H ๏ V such that, except for the origin, A is contained in the interior of H ⇔ There exists a vector h such that ๏ขi : h ๏ gi ๏พ 0 (note that ๏ขi : gi ๏น 0 ) Claim 1: If there exists a vector โ, such that ๏ขi : h ๏ gi (v) ๏พ 0 , then v is not Pareto optimal. Proof: Consider a vector v ' ๏ฝ v ๏ซ ๏ฅ h . The performance of task i of v’ is Pi (v ') ๏ฝ Pi (v ๏ซ ๏ฅ h) . Since Pi is smooth, we can approximate: 6 ๏ฎ Pi (v ๏ซ ๏ฅ h) ๏ฝ Pi (v) ๏ซ ๏ฅ h ๏๏ Pi (v) ๏ซ O(๏ฅ 2 ) ๏ฝ Pi (v) ๏ซ ๏ฅ h ๏ g i ๏ซ O (๏ฅ 2 ) By assumption, h ๏ gi ๏พ 0 so for small enough ๏ฅ , ๏ขi : Pi (v ') ๏พ Pi (v) . This implies that v is not Pareto optimal, as required. From all of the above we can deduce that if g1 (v),..., g r (v) do not satisfy property (I), v is not Pareto optimal. Note that we’ve shown that a Pareto optimal point must satisfy property (I), without using any prior assumptions on the nature of the performance functions Pi (v) , beside differentiability. On the other hand, if a point ๐ฃ satisfies property (I), it is Pareto optimal: Consider the following function ( ๏กi are given by property (I)): r r i ๏ฝ1 i ๏ฝ1 f (u ) ๏ฝ ๏ฅ ๏ก i Pi (u ) ๏ฝ ๏ญ ๏ฅ ๏ก i (u ๏ญ vi* )T M i (u ๏ญ vi* ) ๏ฎ ๏ฎ r r v is an extreme point of f as (๏ f )(v) ๏ฝ ๏ฅ ๏ก i ๏ Pi (v) ๏ฝ๏ฅ ๏ก i gi (v) ๏ฝ0 . i ๏ฝ1 i ๏ฝ1 f has a single maximal point, since: ๏ฎ r ๏ฎ ๏ฎ r r i ๏ฝ1 i ๏ฝ1 ๏ ๏ฅ ๏ก i ๏ Pi (v) ๏ฝ ๏ญ ๏ ๏ฅ ๏ก i (u ๏ญ vi* )T M i (u ๏ญ vi* ) ๏ฝ ๏ญ2๏ฅ ๏ก i M i (u ๏ญ vi* ) ๏ฝ 0 i ๏ฝ1 7 implies r vmax ๏ฝ (๏ฅ ๏ก i M i ) ๏ญ1 (๏ก i M i vi* ) i ๏ฝ1 (∑๐ ๐ผ๐ ๐๐ is positive-definite as all ๐๐ are positive definite, all ๐ผ๐ are non-negative and not all of them are zero, which implies ∑๐ ๐ผ๐ ๐๐ is also invertible). 
And r r i ๏ฝ1 i ๏ฝ1 ๏ 2 ๏ฅ ๏ก i Pi (v) ๏ฝ ๏ญ๏ฅ ๏ก i M i which is the negative of a positive definite matrix, and hence a negative definite matrix ๏ vmax is a maximum. Also notice that vmax is a global maximum since f is defined continuously on the entire space and has a single extremum. r So if v satisfies property (I) it maximizes ๏ฅ๏ก P i i i ๏ฝ1 (i.e. v ๏ฝ vmax ). If it is not Pareto optimal, it means that there is a point v ' ๏น v such that ๏ขi : Pi (v) ๏ฃ Pi (v ') and since ๏ขi : ๏ก i ๏ณ 0 we get r r i ๏ฝ1 i ๏ฝ1 ๏ฅ๏กi Pi (v) ๏ผ ๏ฅ๏กi Pi (v ') , in opposed to the maximality of v. So if v satisfies property (I), it is Pareto optimal. โ Conclusion 1: ๐ฃ is Pareto optimal ๏ v satisfies property (I) ⇔ there exists no vector h such that ๏ขi : h ๏ gi ๏พ 0 . Conclusion 2: v is Pareto optimal ๏ v satisfies property (I) ๏ v maximizes r ๏ฅ ๏ก P (u) i ๏ฝ1 (with ๏กi given by property (I)). 8 i i In other words, the set of Pareto optimal points equals the set of all points ๐ฃ for which r there exist ๏ก i ๏ณ 0, ๏ฅ ๏ก i ๏พ 0 , such that i ๏ฝ1 ๏ฎ r ๏ฅ ๏ก ๏ P (v ) ๏ฝ 0 i ๏ฝ1 i i ๐ผ๐ Note that as ∑๐๐=1 ๐ผ๐ ≠ 0, we can define ๐ฝ๐ = ∑๐ ๐=1 ๐ผ๐ ∑๐๐=1 ๐ผ๐ ๐ป๐๐ (๐ฃ) = 0 ⇔ 1 ∑๐ ๐ผ ๐ป๐๐ (๐ฃ) ∑๐๐=1 ๐ผ๐ ๐=1 ๐ . Then ๐ฝ๐ ≥ 0, ∑๐๐=1 ๐ฝ๐ = 1, and: = 0 ⇔ ∑๐๐=1 ๐ฝ๐ ๐ป๐๐ (๐ฃ) = 0 Hence, ∃๐ผ๐ ≥ 0, ∑๐๐=1 ๐ผ๐ > 0 s.t. ∑๐๐=1 ๐ผ๐ ๐ป๐๐ (๐ฃ) = 0 ⇔ ∃ ๐ผ๐ ≥ 0, ∑๐๐=1 ๐ผ๐ = 1, s.t. ∑๐๐=1 ๐ผ๐ ๐ป๐๐ (๐ฃ) = 0 Thus, the set of Pareto optimal points equals the set of all points ๐ฃ for which there exist ๐ผ๐ ≥ 0, ∑๐๐=1 ๐ผ๐ = 1, such that ∑๐๐=1 ๐ผ๐ ๐ป๐๐ (๐ฃ) = 0 We consider performance functions of the form ๐๐ (๐ฃ) = (๐ฃ − ๐ฃ๐∗ )๐ ๐๐ (๐ฃ − ๐ฃ๐∗ ) ⇒ ๐ป๐๐ (๐ฃ) = ๐๐ (๐ฃ − ๐ฃ๐∗ ). Hence, ๐ ๐ ∑ ๐ผ๐ ๐ป๐๐ (๐ฃ) = 0 ⇒ ∑ ๐ผ๐ ๐๐ (๐ฃ − ๐ฃ๐∗ ) = 0 ⇒ ๐ฃ = (∑ ๐ผ๐ ๐๐ ) ๐=1 ๐ ๐=1 −1 (๐ผ๐ ๐๐ ๐ฃ๐∗ ) Conclusion 3: The set of Pareto optimal points, associated with ๐1 , … , ๐๐ is given by: {(∑๐ ๐ผ๐ ๐๐ )−1 ∑๐(๐ผ๐ ๐๐ ๐ฃ๐∗ ) |๐ผ๐ ≥ 0, ∑3๐=1 ๐ผ๐ = 1} 9 Appendix S2 The Pareto front associated with 2 tasks in a 2D-mirphospace is a hyperbola We would like to prove that the Pareto front associated with 2 tasks in a 2-dimensional morphospace is a section of a hyperbola or a line. As explained in Appendix S1, the performance of task ๐ is taken to be: ๐๐ = −(๐ฃ − ๐ฃ๐∗ )๐ ๐๐ (๐ฃ − ๐ฃ๐∗ ) Where ๐๐ is a positive-definite 2× 2 matrix, and ๐ฃ๐∗ = (๐ฅ๐∗ , ๐ฆ๐∗ ) is the archetype for task ๐. Positive definite matrices have positive eigenvalues and are Hermitian, and thus ๐๐ can be diagonalized by a rotation matrix. Thus ๐๐,1 ² ๐๐ = ๐ (๐๐ ) ( 0 0 ๐๐,2 ² ) ๐ (−๐๐ ) , where ๐ (๐) is an orthogonal matrix (rotation matrix by angle ๐) and ๐๐,1 > 0, ๐๐,2 > 0 (i.e. real and non-zero). Note: The contours of such performance functions ๐๐ are concentric ellipses with ๐ eccentricity ๐ = ๐๐,2 which are rotated by an angle of ๐ relative to the y axis. These ๐,2 contours and their parameters are widely used in the main text. As will be explained immediately, we can assume without loss of generality that: 1. ๐1 = I, implying - (๐ฃ − ๐ฃ1∗ )๐ ๐1 (๐ฃ − ๐ฃ1∗ ) = (๐ฃ − ๐ฃ1∗ )๐ (๐ฃ − ๐ฃ1∗ ) . 2. ๐ฃ1∗ = (0,0) 3. ๐ฃ2∗ = (1,0) 4. One of ๐2 ′๐ eigenvalues is 1. Those assumptions will be true in a rotated, translated and rescaled coordinate system. We will show that the Pareto front is a hyperbola. Since a hyperbola remains a hyperbola under such transformations, and all such transformations are invertible, it is enough to work in the coordinate system where the above assumptions hold. 
10 The first and second assumptions are satisfied by the transformation ๐ฃ → ๐๐(๐ฃ1∗ )๐1 ๐ฃ where: ๐1 = ๐ (๐1 ) ( 1 ๐๐,1 0 0 1 ๐๐,2 ) (then translate such that ๐ฃ1∗ is at (0,0), rotate such that ๐1 is diagonal, and then scale such that ๐1 = ๐ผ). Under the above transformation, the second archetype moves to: ฬ {Δ๐ฅ ฬ } = ๐1−1 (๐ฃ2∗ − ๐ฃ1∗ ) Δ๐ฆ The third assumption is satisfied, while keeping assumptions 2, and keeping ๐1 scalar by further applying: ๐2 = ( ฬ Δ๐ฅ ฬ Δ๐ฆ ฬ −Δ๐ฆ ) ฬ Δ๐ฅ (Move the second archetype to the ๐ฅ axis and then scale it to be at (1,0)). ฬ ≠ 0 or Δ๐ฆ ฬ ≠0 This transformation is invertible since Δ๐ฅ We apply the transformation ๐ฃ → ๐2 ๐1 ๐ฃ. In this coordinate system: 2 ฬ 2 + Δ๐ฆ ฬ 2 )๐ฃ ๐ ๐ฃ The functional โ๐ฃ − ๐ฃ1∗ โ1 becomes โ๐2 ๐1 ๐ฃ − ๐ฃ1∗ โ12 = (Δ๐ฅ ๐ 2 The functional โ๐ฃ − ๐ฃ2∗ โ2 becomes โ๐2 ๐1 ๐ฃ − ๐ฃ2∗ โ22 = (๐ฃ − (1,0)) ๐(๐ฃ − (1,0)) Where ๐ = (๐2 ๐1 )๐ ๐2 ๐2 ๐1 Since ๐2 is positive definite and ๐2 ๐1 is invertible, ๐ is positive definite. ๐12 → ๐ = ๐ (๐) ( 0 0 ) ๐ (−๐), with ๐1 > 0, ๐2 > 0 ๐22 11 Finally, assumption 1 and 4 are reached by using the following lemma: Lemma 1: The Pareto front is invariant to scaling of any of the norms Proof: ๐ฃ is Pareto optimal with respect to ๐1 (๐ฃ), ๐2 (๐ฃ) if for every ๐ฃ ′ ≠ ๐ฃ there exists ๐ ∈ 2 2 ๐ ๐ {1,2} such that โ๐ฃ − ๐ฃ๐∗ โ < โ๐ฃ ′ − ๐ฃ๐∗ โ . Let ๐ฝ๐2 > 0 be a constant for each ๐ ∈ {1,2}. 2 2 2 2 ๐ ๐ โ๐ฃ − ๐ฃ๐∗ โ < โ๐ฃ ′ − ๐ฃ๐∗ โ ⇔ ๐ฝ๐2 โ๐ฃ − ๐ฃ๐∗ โ < ๐ฝ๐2 โ๐ฃ ′ − ๐ฃ๐∗ โ , which means that the ๐ ๐ norm-pairs {๐ฝ๐2 โ๐ฃ − 2 ๐ฃ๐∗ โ } ๐ and {โ๐ฃ − 1 2 ๐ฃ๐∗ โ } ๐ result in the same Pareto front. โ 1 We choose ๐ฝ12 = (Δ๐ฅ and ๐ฝ22 = ๐2 and we get that ฬ 2 +Δ๐ฆ ฬ 2) 1 2 ฬ 2 + Δ๐ฆ ฬ 2 )๐ฃ ๐ ๐ฃ , (๐ฃ − (1,0))๐ ๐ (๐) (๐1 (Δ๐ฅ 0 0 ) ๐ (−๐)(๐ฃ − (1,0)) results in the same ๐22 Pareto front as 1 ๐ ๐ฃ ๐ ๐ฃ , (๐ฃ − (1,0)) ๐ (๐) (0 0 ๐22 ) ๐ (−๐)(๐ฃ − (1,0)). Denote ๐2 = ๐21 ๐22 ๐21 . To conclude, we can assume without loss of generality that: 2 ||๐ฃ − ๐ฃ๐∗ ||๐ = (๐ฃ − ๐ฃ๐∗ )๐ ๐๐ (๐ฃ − ๐ฃ๐∗ ) = (๐ฃ − ๐ฃ๐∗ )๐ ๐ (๐) ( 1 0 ) ๐ (−๐)(๐ฃ − ๐ฃ๐∗ ) 0 ๐2๐ With ๐ฃ1∗ = (0,0), ๐ฃ2∗ = (1,0), ๐12 = 1, ๐22 = ๐2 As seen in Appendix S1 (conclusion 1), the Pareto front associated with 2 tasks is given by: {๐ฃ | ∃ 0 ≤ ๐ผ ≤ 1 ๐ . ๐ก. ๐ผ๐1 (๐ฃ) + (1 − ๐ผ)๐2 (๐ฃ) = 0} Where ๐๐ = ๐ป๐๐ (๐ฃ) Thus, the gradients of the 2 performance functions at a point that is Pareto optimal relative to 2 tasks point in opposite directions. 12 Let’s try to give a geometrical intuition to the above statement. The gradients of the performance functions at point ๐ฃ point in opposite directions if and only if ๐ฃ is a tangency point between 2 contours of the performance function. 2 Each point ๐ฃ is on some contour of ๐1 : ๐ถ1 = {๐ฃ ′ | ||๐ฃ ′ − ๐ฃ1∗ ||1 = ||๐ฃ − ๐ฃ1∗ ||1 2 }, and on a 2 contour of ๐2 : ๐ถ2 = {๐ฃ ′ | ||๐ฃ ′ − ๐ฃ2∗ ||2 = ||๐ฃ − ๐ฃ2∗ ||2 2 }. As mentioned in the main text, ๐ถ1 and ๐ถ2 are ellipses. Point ๐ฃ is a common point to ๐ถ1 and ๐ถ2 . It can be an intersection point, an internal tangency point (when the intersection of the interiors of both ellipses is non-empty), or an external tangency point. The gradients of the performance functions at point ๐ฃ point in opposite directions ⇔ ๐ฃ is an external tangency point between the 2 contours. If the gradients of the performance function at a point ๐ฃ do not point in opposite directions, it is either an internal tangency point or an intersection point between ๐ถ1 and 2 2 ๐ถ2 . 
In both cases ๐ถ1 โ ∩ ๐ถ2 โ ≠ ๐, where ๐ถ๐ โ = {๐ฃ ′ | ||๐ฃ ′ − ๐ฃ๐∗ ||๐ < ||๐ฃ − ๐ฃ๐∗ ||๐ } - all points that outperform ๐ฃ in the ๐-th task (see figure S1a). It means that there exists ๐ฃ ′ โ ๐ถ1 โ ∩ ๐ถ2 โ ⇒ ๐1 (๐ฃ ′ ) > ๐1 (๐ฃ) and ๐2 (๐ฃ ′ ) > ๐2 (๐ฃ), i.e., if the gradients at point ๐ฃ don’t point at opposite directions, we can find a point ๐ฃ ′ at the neighborhood of ๐ฃ that performs both tasks better than it. That is why a point for which the gradients don’t point in opposite directions is not Pareto optimal. In case of an external tangency point ๐ฃ, ๐ถ1 โ ∩ ๐ถ2 โ = {๐ฃ}, in which case there are no points that outperform ๐ฃ in both tasks (figure S1b). 13 Figure S1: The Pareto front is composed of tangency points between performance functions’ contours. Archetypes are marked as red dots, Pareto front in blue. (a) A point whose contours intersect is not Pareto optimal; points in the intersection area outperform it in both tasks. (b) A point whose contours are tangent is optimal – there is no intersection area with other outperforming points. Denote the Pareto front by P.F., as seen in Appendix S1 (conclusion 3): ๐. ๐น = {(๐ผ๐1 + (1 − ๐ผ)๐2 )−1 (๐ผ๐1 ๐ฃ1∗ + (1 − ๐ผ)๐2 ๐ฃ2∗ ) | 0 ≤ ๐ผ ≤ 1} ⇒ ∀0 ≤ ๐ผ ≤ 1 ∃๐ฃ ∈ ๐. ๐น โถ ๐ฃ = (๐ผ ๐1 + (1 − ๐ผ)๐2 )−1 (๐ผ ๐1 ๐ฃ1∗ + (1 − ๐ผ)๐2 ๐ฃ2∗ ) = −1 1 0 (๐ (๐) (๐ผ ๐ผ + (1 − ๐ผ) ( )) ๐ (−๐) ) 0 ๐2 1 0 ((1 − ๐ผ) ๐ (๐) ( 0 ) ๐ (−๐ ′ )(1,0)) = ๐2 (−1 + ๐ผ)(−๐ผ + (−2 + ๐ผ)๐2 + ๐ผ(−1 + ๐2 )Cos[2๐]) (−1 + ๐ผ)๐ผ(−1 + ๐2 )Cos[๐]Sin[๐] (− , ) −2๐ผ + 2(−1 + ๐ผ)๐2 ๐ผ + ๐2 − ๐ผ๐2 When ๐ผ → 0, ๐ฃ → (1, 0). When ๐ผ → 1, ๐ฃ → (0,0). Thus, the Pareto front lies on a curve between the archetypes ๐ฃ1∗ ′ and ๐ฃ2∗ ′ . We next characterize this curve. Denote ๐ฃ = (๐ฅ, ๐ฆ). By eliminating ๐ผ, we get that our curve is a quadratic curve that satisfies: 14 ๐ฅ 2 (1 − ๐2 )๐๐๐2 (2 ๐) − 2๐ฅ๐ฆ(1 − ๐2 )๐๐๐(2 ๐)๐ถ๐๐ (2 ๐) − ๐ฆ 2 (1 − ๐2 )๐๐๐2 (2 ๐) − ๐ฅ (1 − ๐2 )๐๐๐2 (2 ๐) + ๐ฆ ((1 + ๐2 − (1 − ๐2 )๐ถ๐๐ (2 ๐)) ๐๐๐(2 ๐) = 0 A general quadratic curve is of the form ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 + 2๐ด๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ + ๐ด=0 If J = ๐ด๐ฅ๐ฅ ๐ด๐ฆ๐ฆ - ๐ด2๐ฅ๐ฆ ๐ด๐ฅ๐ฅ < 0 and Δ = ๐ท๐๐ก ( ๐ด๐ฅ๐ฆ ๐ด๐ฅ ๐ด๐ฅ๐ฆ ๐ด๐ฆ๐ฆ ๐ด๐ฆ ๐ด๐ฅ ๐ด๐ฆ ) ≠ 0 then the equation ๐ด represents a hyperbola. For our curve: J = ๐ด๐ฅ๐ฅ ๐ด๐ฆ๐ฆ , ๐ด2๐ฅ๐ฆ = −(−1 + λ2 )2 sin2 (2๐) , Δ = 4๐2 (−1 + ๐2 ) sin([2๐]4 ) J < 0 and Δ ≠ 0 unless ๐ = 1 ๐๐ ๐ = 0 ๐๐ ๐ 2 and then J = 0 and Δ = 0. This means that the Pareto optimal points lie on a section of a hyperbola between the 2 archetypes, unless ๐ ๐ = 1 ๐๐ ๐ ∈ {0, 2 } and then they lie on the line between the archetypes. This matches previous results [1] since if ๐ = 1 both norms are equal. This also shows that if both ๐ norms’ contours are perpendicular to the line between them (๐๐ ∈ {0, 2 }), the Pareto front is the line between the archetypes. To conclude, we showed that the Pareto front associated with 2 archetypes in a 2-D trait space with performance functions that decrease monotonically with a general innerproduct norm distance from the archetype is a section of a hyperbola (or a line) between the archetypes. 15 Appendix S3 The Pareto front of 2 tasks in an n-dimensional morphospace has hyperbolic projections In this Appendix we will characterize the Pareto front for 2 tasks in an n-dimensional morphospace. 
We would like to prove that the Pareto front in an n-dimensional trait space for a system that needs to perform 2 tasks, when each performance function depends on a different inner-product norm, is a 1-dimensional curve connecting the archetypes whose projections on the principle planes of a certain coordinate system are hyperbolae (or lines). As shown in Appendix S1, the performance functions ๐๐ can be written as ๐๐ (๐ฃ) = −(๐ฃ − ๐ฃ๐∗ )๐ ๐๐ (๐ฃ − ๐ฃ๐∗ ) with positive-definite ๐๐ that can be decomposed as ๐2๐,1 ๐๐ = ๐ ๐ ( 0 0 0 0 … 0 ) ๐ ๐๐ 0 ๐2๐,๐ where ๐ ๐ is an orthogonal matrix. The Pareto front is the locus of all points satisfying ๐ผ1 ๐1 (๐ฃ − ๐ฃ1∗ ) + ๐ผ2 ๐2 (๐ฃ − ๐ฃ2∗ ) = 0, ๐ผ1 + ๐ผ2 = 1 Moreover, there exists a basis in which ๐1 = ๐ผ, simply redefine: ๐๐,1 ๐ฃ โถ= ( 0 0 0 0 … 0 ) ๐ ๐๐ ๐ฃ 0 ๐๐,๐ In which ๐2 (describing ๐2 ) is still positive-definite (as it does not depend on the basis). Denote ๐ด๐ = ๐ผ๐ ๐๐ , ๐ด = ๐ด1 + ๐ด2 , ๐ต๐ = ๐ด−1 ๐ด๐ 16 ๐ต1 + ๐ต2 = ๐ด−1 (๐ด1 + ๐ด2 ) = ๐ด−1 ๐ด = ๐ผ ๐ฃ = ๐ด−1 (๐ด1 ๐ฃ1∗ + ๐ด2 v2∗ ) = ๐ต1 ๐ฃ1∗ + ๐ต2 ๐ฃ2∗ The eigenvalues of ๐๐ are positive. The eigenvalues of ๐ด๐ are non-negative as ๐ผ๐ ≥ 0. As ๐ผ๐ are real, ๐ด๐ are symmetric. ๐ด1 has a single eigenvalue - ๐ผ1 , since ๐1 is assumed to be the identity matrix. The eigenvalues of ๐ด2 are ๐ผ2 ๐๐2 . As ๐ด1 is scalar, the eigenvectors of ๐ด = ๐ด1 + ๐ด2 are the same as those of ๐ด2 , with eigenvalues ๐ผ1 + ๐ผ2 ๐๐2 . These eigenvalues are positive since at least one ๐ผ๐ is positive, therefore A is invertible. The eigenvalues of ๐ด−1 are ๐ผ 1 1 + ๐ผ2 ๐2๐ . ๐ด is symmetric as a sum of 2 symmetric matrices. As ๐ด1 is scalar, it commutes with ๐ด2 . [๐ด1 , ๐ด2 ] = ๐ด1 ๐ด2 − ๐ด2 ๐ด1 = 0 ⇒ [๐ด, ๐ด1 ] = 0 and [๐ด, ๐ด2 ] = 0 ⇒ [๐ด−1 , ๐ด1 ] = 0 ๐ด๐ ๐ด๐ −1 ๐ [๐ด−1 , ๐ด2 ] = 0 ⇒ ๐ต๐๐ = (๐ด−1 ๐ด๐ )๐ = ๐ด๐๐ ๐ด−1 = and = ๐ด๐ ๐ด−1 = ๐ต๐ . Also, ๐ด−1 has the same eigenvectors as ๐ด, ๐ด1 and ๐ด2 so the eigenvalues of ๐ต1 are ๐ผ1 ๐ผ1 +๐ผ2 ๐2๐ = ๐ผ ๐ผ+(1−๐ผ) ๐2๐ and of ๐ต2 are 1−๐ผ ๐ผ 1−๐ผ+ 2 ๐ ๐ ๐ต1 and ๐ต2 are mutually diagonalizable: Let D be such that ๐ท๐ต1 ๐ท−1 = Λ1 . Now ๐ต2 = ๐ผ − ๐ต1 ⇒ ๐ท๐ต2 ๐ท−1 = ๐ท(๐ผ − ๐ต1 )๐ท−1 = ๐ผ − Λ1 ๐ฃ = ๐ท−1 Λ1 ๐ท ๐ฃ1∗ + ๐ท−1 (๐ผ − Λ1 )๐ท ๐ฃ2∗ = ๐ฃ2∗ + ๐ท−1 Λ1 ๐ท (๐ฃ1∗ − ๐ฃ2∗ ) → ๐ท(๐ฃ − ๐ฃ2∗ ) = Λ1 ๐ท(๐ฃ1∗ − ๐ฃ2∗ ) Consider the rotated coordinate system, ๐ฃ โ ๐ท−1 ๐ฃ, and rename ๐ฃ2∗ โถ= ๐ท−1 ๐ฃ2∗ . In this ๐ผ system ๐ฃ − ๐ฃ2∗ = Λ1 (๐ฃ − ๐ฃ1∗ ) ๐ผ+(1−๐ผ) ๐21 =( โฎ 0 … 0 โฑ … โฎ ๐ผ ) (๐ฃ − ๐ฃ1∗ ). ๐ผ+(1−๐ผ) ๐2๐ ∗ ∗ ∗ ∗ Denote ๐ฃ = (๐ฃ1 , … , ๐ฃ๐ ), ๐ฃ1∗ = (๐ฃ1,1 , … , ๐ฃ1,๐ ), ๐ฃ2∗ = (๐ฃ2,1 , … , ๐ฃ2,๐ ) For every 2 coordinates ๐, ๐ we get that: 17 ๐ฃ๐ ๐ฃ๐ (๐๐ − ๐๐ ) + ๐ฃ๐ (๐ฃ1,๐ ๐๐ − ๐ฃ2,๐ ๐๐ ) + ๐ฃ๐ (๐ฃ2,๐ ๐๐ − ๐ฃ1,๐ ๐๐ ) + ๐ฃ1,๐ ๐ฃ2,๐ ๐๐ − ๐ฃ1,๐ ๐ฃ2,๐ ๐๐ =0 Thus, the projection of the Pareto front on any main plane is a quadratic curve, with parameters: 2 J = ๐ด๐ฅ๐ฅ ๐ด๐ฆ๐ฆ − ๐ด2๐ฅ๐ฆ = −(๐๐ − ๐๐ ) , Δ = −(๐๐ − ๐๐ ) 2 For components for which ๐๐ ≠ ๐๐ , ๐ฝ < 0 and Δ ≠ 0, so the projection on their plane is a hyperbola. For components for which ๐๐ = ๐๐ , ๐ฝ = 0 and Δ = 0, so the projection on their plane is a line. To conclude, we get that the Pareto front associated with 2 archetypes in an ndimensional trait space is a 1-dimensional curve between the 2 archetypes. There exists a coordinate system such that the projection of this curve on each principal plane is a section of a hyperbola or a line between the projections of the archetypes. 
18 Appendix S4 Calculation of the deviation of the Pareto front from a straight line for 2 tasks in a 2D-morphospace In this Appendix, we calculate the maximal deviation of the Pareto front associated with 2 tasks in a 2D morphospace from the line between the archetypes (which is the Pareto front in case the norms are equal). The deviation of the front from the line between the archetypes is defined as the maximal height โ of a point on the front with respect to the line, divided by the Euclidean distance between the archetypes, D. 1 1 We can assume without loss of generality that the archetypes are at (− 2 , 0) and (2 , 0), since this assumption can be satisfied by a combination of translations/rotations and isometric scaling, all of which preserve distance ratios. Notice that in this case ๐ท = 1 so โ the ratio ๐ท is simply โ. Norm ๐ depends on the parameters ๐๐ , ๐๐ , i.e - ๐๐ = R(θi ) ( 1 0 ) ๐ (−θi ). During the 0 ๐2๐ ๐ solution we assume that 0 ≤ ๐๐ < 2 . This is possible since the Pareto front is symmetric ๐ 1 under the transformation: ๐๐ , ๐๐ → ๐๐ + 2 , ๐ (see Appendix S5) ๐ Appendix S1 (Conclusion 3) gives a parametric representation of the front, {๐ฅ(๐ข), ๐ฆ(๐ข)}, with a parameter 0 ≤ ๐ข ≤ 1. As the line between the archetypes lies on the ๐ฅ axis, the maximal deviation is given by max |๐ฆ(๐ข) |. max |๐ฆ(๐ข) | = |๐ฆ(๐ข๐๐๐ฅ )|, for ๐ข๐๐๐ฅ with ๐๐ข ๐ฆ|๐ข๐๐๐ฅ = 0 (The maximum is not obtained at the edges, since ๐ฆ(0) = ๐ฆ(1) = 0). So, from Appendix S1: ๐ฆ(๐ข) = − 1 (−1 + ๐ข)๐ข (−(−1 + ๐21 )Sin[2θ1 ](Cos[θ2 ]2 + ๐22 Sin[θ2 ]2 ) − (−1 + ๐22 )(−1 − ๐21 + (−1 + ๐21 )Cos[2θ1 ])Sin[2θ2 ]) 2 2๐22 + ๐ข(1 + ๐21 + (−3 + ๐21 )๐22 + ๐ข(−1 + ๐21 + ๐22 − ๐21 ๐22 )) + (−1 + ๐ข)๐ข(−1 + ๐21 )(−1 + ๐22 )Cos[2(θ1 − θ2 )] Straightforward calculation shows that ๐๐ข ๐ฆ = 0 for ๐ข๐๐๐ฅ = 1 λ 1+ 1 λ2 19 Substituting this into the expression for ๐ฆ, we get that the maximal deviation for given parameters ๐1 , ๐1 and ๐2 , ๐2 is: โ(๐1 , ๐1 ๐2 , ๐2 ) = |− (−1+๐21 )(1+๐22 )Sin[2θ1 ]−(−1+๐22 )((−1+๐21 )Sin[2(θ1 −θ2 )]+(1+๐21 )Sin[2θ2 ]) 2(1+๐21 +4λ1 λ2 +๐22 +๐21 ๐22 −(−1+๐21 )(−1+๐22 )Cos[2(θ1 −θ2 )]) | It is bounded by: |− ≤ = (−1 + ๐21 )(1 + ๐22 )Sin[2θ1 ] − (−1 + ๐22 )((−1 + ๐21 )Sin[2(θ1 − θ2 )] + (1 + ๐21 )Sin[2θ2 ]) | 2(1 + ๐21 + 4λ1 λ2 + ๐22 + ๐21 ๐22 − (−1 + ๐21 )(−1 + ๐22 )Cos[2(θ1 − θ2 )]) |(−1 + ๐21 )(1 + ๐22 )Sin[2θ1 ]| + |(−1 + ๐22 )(−1 + ๐21 )Sin[2(θ1 − θ2 )]| + |(1 + ๐21 )(−1 + ๐22 )Sin[2θ2 ]| 2||1 + ๐21 + 4λ1 λ2 + ๐22 + ๐21 ๐22 | − |(−1 + ๐21 )(−1 + ๐22 )Cos[2(θ1 − θ2 )]|| |(−1 + ๐21 )(1 + ๐22 )Sin[2θ1 ]| + |(−1 + ๐22 )(−1 + ๐21 )Sin[2(θ1 − θ2 )]| + |(1 + ๐21 )(−1 + ๐22 )Sin[2θ2 ]| 2(1 + ๐21 + 4λ1 λ2 + ๐22 + ๐21 ๐22 − (1 − ๐21 )(1 − ๐22 )|Cos[2(θ1 − θ2 )]|) ≤ (1 − ๐21 )(1 + ๐22 ) + (1 − ๐22 )(1 − ๐21 ) + (1 + ๐21 )(1 − ๐22 ) 2(1 + ๐21 + 4λ1 λ2 + ๐22 + ๐21 ๐22 − (1 − ๐21 )(1 − ๐22 )) = − −3 + ๐21 + ๐22 + ๐21 ๐22 4(λ1 + λ2 )2 So, the maximal deviation from the line between the archetypes, for any given ๐1 , ๐2 is bounded by 3−๐21 −๐22 −๐21 ๐22 . 4(λ1 +λ2 )2 Let’s focus on the special case when one of the performance functions depends on Euclidean norm, i.e. – λ1 = 1. In this case the upper bound becomes โ ≤ 1−๐2 โ(1, ๐1 , ๐2 , ๐2 ) = When setting ๐2 = ๐ 4 , and this is a tight bound: 2(1+๐2 ) (1 − ๐2 )๐๐๐[2๐2 ] 2(1 + ๐2 ) this bound is obtained (as expected, ๐1 is irrelevant when ๐1 = 1). 
Hence, when task 1 depends on Euclidean norm, the maximal deviation of the front per given ๐2 is obtained for ๐2 = ๐ 4 and is given by: โ(1, ๐1 , ๐2 , ๐2 ) ≤ 20 (1 − ๐2 ) 2(1 + ๐2 ) In this case, the deviation of the Pareto front from the line between the archetypes, maximized on all ๐2 ′๐ , and on all ๐2 ′ ๐ is half the distance between the archetypes. The deviation approaches this value as ๐2 → 0 (the contours of the second norm becomes more and more eccentric). 21 Appendix S5 Each Pareto front of 2 tasks in a 2D morphospace is generated by a 1-dimensional family of norm-pairs Here, we would like to deal with the following question: Let PF be a Pareto front associated with 2 tasks in a 2D-morphospace, where each performance decays from its archetype with a different inner-product norm. In Appendix S2 we showed that under the above assumptions, the Pareto front is a segment of a hyperbola (or a line) that connects the archetypes ๐ฃ1∗ and ๐ฃ2∗ . In that case, can we deduce the norms that the performance functions decay with from the exact shape of the Pareto front? When approaching the question presented above, we assume that the position of the archetypes is known. This is a reasonable assumption – for a given hyperbola/line-shaped data set, the 2 edge points of the front are assumed to be the archetypes. We will show that not all hyperbola-shaped datasets can be explained by the model with its current assumptions – there are hyperbolae that are not generated by any pair of norms. However, if for a hyperbola-shaped dataset there exists a pair of norms that generates it, then there exists a one-dimensional family of norm-pairs that generate it. This is expected. A hyperbola is a quadratic curve defined by an equation of the form: ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 + 2๐ด๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ + ๐ด = 0. Hence, it is defined by 5 free parameters (we can normalize the equation by one of the coefficients). We assume that the position of the archetypes is known. A hyperbola-shaped front must pass through the 2 archetypes, leaving it with 3 free parameters (each point yields a single equation the hyperbola’s coefficients must satisfy, reducing the number of free parameters by one). The norms that the performance functions depend on are represented by the matrices ๐1 , ๐2 , where ๐๐ = ๐ (๐๐ ) ( 1 0 ) ๐ (−๐๐ ) (Appendix S2). Hence, besides the position 0 ๐2๐ of the archetype ๐ฃ๐∗ , each norm can be describes by 2 parameters - the ratio between the eigenvalues of the norm’s matrix, ๐2๐ , and the angle of the rotation matrix that 22 1 diagonalizes it, ๐๐ (๐๐ = −(๐ฃ − ๐ฃ๐∗ )๐ ๐ (๐๐ ) ( 0 0 ) ๐ (−๐๐ )(๐ฃ − ๐ฃ๐∗ )). It means that if the ๐2๐ location of the archetypes is known, the 2 performance functions together have 4 free parameters. Thus, the problem of deducing the norms from the shape of the Pareto front is expectedly degenerate, since we try to determine 4 free parameters (the norms of the performance functions) using only 3 observed parameters (the hyperbola). Also, we expect the family of norm-pairs that generate each hyperbola to depend only on 1 parameter – i.e. - to be 1-dimensional. Consider a hyperbola/line-shaped dataset that is generated by a pair of norms ๐1 and ๐2 , with archetypes ๐ฃ1∗ , ๐ฃ2∗ . We can transform to a coordinate system where ๐ฃ1∗ = (0,0) and ๐ฃ2∗ = (1,0), ๐1 = ๐ผ and ๐2 is a positive definite matrix, using a transformation under which a hyperbola/line remains a hyperbola/line. Such transformation was shown to exist in Appendix S2. 
There is a 1-to-1 correspondence between norm-pairs that generate the transformed front in the transformed coordinate system and the norm-pairs that generate the front in the original coordinate system. Hence, if we show that there is a 1dimensional family of norm pairs that generate the front in the transformed system, the conclusion will also hold in the original system. Assume that in the transformed coordinate system, the quadratic curve that the Pareto front lies on is represented by: ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 + 2๐ด๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ + ๐ด = 0 We constrain the curve to go through the archetypes at (0,0) and (1,0), and get that ๐ด = 0 and ๐ด๐ฅ๐ฅ = − 2๐ด๐ฅ So we expect the Pareto front to be of the form (๐๐น) ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 − ๐ด๐ฅ๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ = 0 Each quadratic curve is associated with 2 parameters, Δ and ๐ฝ, defined by: J = ๐ด๐ฅ๐ฅ ๐ด๐ฆ๐ฆ - ๐ด2๐ฅ๐ฆ ๐ด๐ฅ๐ฅ and Δ = det ( ๐ด๐ฅ๐ฆ ๐ด๐ฅ ๐ด๐ฅ๐ฆ ๐ด๐ฆ๐ฆ ๐ด๐ฆ 23 ๐ด๐ฅ ๐ด2 ๐ด ๐ด๐ฆ ) = − ๐ฅ๐ฅ ๐ฆ๐ฆ − ๐ด๐ฅ๐ฅ ๐ด๐ฅ๐ฆ ๐ด๐ฆ − ๐ด๐ฅ๐ฅ A2y 4 ๐ด For a hyperbola ๐ฝ < 0 and Δ ≠ 0. For a line, ๐ฝ = 0 and Δ = 0. We would like to find all norm-pairs (๐1 , ๐2 ) that generate (๐๐น), given that the pair (๐ผ, ๐2 ) generates it. Let ๐1 depend on parameters ๐1 , ๐1 and ๐2 depend on parameters ๐2 , ๐2 . Namely 1 ๐1 = ๐ (๐1 ) ( 0 0 1 ) ๐ (−๐1 ) ; ๐2 = ๐ (๐2 ) ( ๐12 0 0 ) ๐ (−๐2 ) ๐22 As will be shown in Appendix S7 (Lemma 1), the quadratic curve on which lies a Pareto front associated with 2 tasks can be given by the equation: ๐๐ง (๐1 (๐ฃ) × ๐2 (๐ฃ)) ≡ 0, where ๐๐ (๐ฃ) = ∇๐๐ (๐ฃ) = ๐๐ (๐ฃ − ๐ฃ๐∗ ). Denote ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) = ๐๐ง (๐1 (๐ฃ) × ๐2 (๐ฃ)) ⇒ ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) = ๐ฆ (1 + λ12 + λ22 + λ12 λ22 + (−1 + λ12 )(1 + λ22 )Cos[2θ1 ] − (−1 + λ12 )(−1 + λ22 )Cos[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Cos[2θ2 ]) + 2๐ฅ๐ฆ(−(−1 + λ12 )(1 + λ22 )Cos[2θ1 ] + (1 + λ12 )(−1 + λ22 )Cos[2θ2 ]) + ๐ฅ((−1 + λ12 )(1 + λ22 )Sin[2θ1 ] − (−1 + λ1)(−1 + λ22 )Sin[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) + ๐ฆ 2 ((−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) + ๐ฅ 2 (−(−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) The Pareto front associated with ๐1 , ๐2 is thus given by ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) = 0. 1 ๐ Here we observe the invariance of the Pareto front under ๐๐ → ๐ , ๐๐ → ๐๐ + 2, which is ๐ inherent to the problem since the norms are symmetric under this transformation. This is only a “technical” degeneracy rather than a genuine one, since ๐๐ (๐๐ , ๐๐ ) is essentially the 1 ๐ same as ๐๐ (๐ , ๐๐ + 2 ). The resulting norms only differ by a factor of ๐, and generate the ๐ 24 same contours and the same Pareto front (see Appendix S2). Hence, we can assume 0 ≤ ๐ ๐ < 2. We would like to know when ๐๐น = 0 and ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) = 0 represent the same line/hyperbola. This happens if and only if these quadratic forms are equal up to a factor, i.e. - ๐๐น = ๐ฝ ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ). We know that ๐๐น and ๐๐น(๐1 = 1, ๐, ๐1 = 0, ๐) represent the same quadratic curve. Hence, ∃๐ฝ ๐ . ๐ก. ๐๐น = ๐ฝ ๐๐น(๐1 = 1, ๐, ๐1 = 0, ๐). Examining the expression for ๐๐น(๐1 = 1, ๐, ๐1 = 0, ๐), we find out that the coefficients of ๐ฅ 2 and of ๐ฆ 2 are opposite to one another. That is – if the hyperbola was indeed generated by the given norm-pair, it mush have ๐ด๐ฅ๐ฅ = −๐ด๐ฆ๐ฆ . First, let’s find which norm pairs generate ๐๐น given that it is a line. If the front is a line, then both ๐ฝ = 0 and Δ = 0. 
As ๐ด๐ฅ๐ฅ = −๐ด๐ฆ๐ฆ and ๐ฝ = −๐ด2๐ฆ๐ฆ − ๐ด2๐ฅ๐ฆ we get that ๐ด๐ฆ๐ฆ = 0 and ๐ด๐ฅ๐ฆ = 0 We look for all other ๐1 , ๐2 , ๐1 , ๐2 , for which exists ๐ฝ such that ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) = ๐ฝ๐๐น. We know that when this happens, the coefficients of ๐ฅ 2 and of ๐ฆ 2 are opposite to one another. This results in the equation: (๐ด) − 2(−1 + λ1 )(−1 + λ2 )Sin[2(θ1 − θ2 )] = 0 ๐ This equation is satisfied if λ1 = 1, λ2 = 1 or ๐1 = ๐2 as all angles are taken modulo 2 . Also, we know that the coefficients of ๐ฅ 2 , ๐ฆ 2 and ๐ฅ๐ฆ are zero. This results in the following equation: (๐ต)((−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) = 0 Substituting into (๐ต) each of the solutions found for equation (๐ด), we get that: 25 If ๐1 = 1, equation (B) becomes: −2(−1 + λ2 )Sin[2θ2 ] = 0. It is satisfied if λ2 = 1 or ๐ θ2 = 0 (as all angles are taken modulo 2 ). If ๐2 = 1, equation (B) becomes: 2(1 − λ1 )Sin[2θ1 ] = 0. It is satisfied if λ1 = 1 or θ1 = 0. If ๐1 = ๐2 , equation (B) becomes: 2(๐1 − λ2 )Sin[2θ2 ] = 0. It is satisfied if θ2 = 0, or ๐1 − λ2 = 0 To conclude, equations (๐ด) and (๐ต), which must be satisfied in order that ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) will represents the same line as ๐๐น, are satisfied if and only if: (I) ๐1 = ๐2 and ๐1 = ๐2 ๐๐ (II) ๐1 = ๐2 = 0 Note that if ๐๐ = 1, ๐๐ has no effect on the norm and we can take it to be whatever fits. On the other hand, it can be easily checked that if one of those conditions holds, ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ) represents a line (or intersecting lines) that the line between the archetypes lies on. It means that 2 norms generate the line between the archetypes if and only if one of the above conditions holds. The meaning of (I) is that ๐1 = ๐2 , and (II) means that ๐1 ′๐ and ๐2 ′๐ elliptic contours each have an axis parallel to the ๐ฅ axis (which is parallel to ๐ฃ1∗ − ๐ฃ2∗ ) Note that if a dataset is shaped like a line, 2 Euclidean norms will always generate it. This means that in order for the above conclusions to hold, we only need to translate, rotate and isometrically scale the coordinate system such that the archetypes are at (0,0) and (1,0). This implies that the above conclusions also hold in the original coordinate system. Equality of matrices is not affected by change of basis, which covers (I). Regarding case (II) – notice that the contours’ axes transform like regular vectors when applying translations / rotations, and parallel vectors remain parallel under any linear transformation. 26 To summarize – for a given archetype pair (๐ฃ1∗ , ๐ฃ2∗ ), the norm-pair (๐1 , ๐2 ) generates a Pareto front which is the line between them if and only if ๐1 = ๐2 or both ๐1 and ๐2 have elliptic contours with an axis that is parallel to ๐ฃ1∗ − ๐ฃ2∗ . After taking care of which norms generate a line-shaped front, we will assume from now on that the front is not a straight line but a hyperbola. 
We saw that on the frame we work on, the Pareto front lies on a curve given by the equation: ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ − ๐ด๐ฅ๐ฅ ๐ฆ 2 − ๐ด๐ฅ๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ = 0 We know that ๐ด๐ฅ๐ฅ ≠ 0, since otherwise ๐ฅ = 0 and the resulting Pareto front is not a hyperbola, so we can normalize the coefficient of ๐ฅ 2 to be 1, and equate both (normalized) representations: ๐ฅ 2 − ๐ฅ + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ − ๐ฆ 2 + 2๐ด๐ฆ ๐ฆ = 0 And ๐ฅ2 − ๐ฅ + ๐ฆ (1 + λ12 + λ22 + λ12 λ22 + (−1 + λ12 )(1 + λ22 )Cos[2θ1 ] − (−1 + λ12 )(−1 + λ22 )Cos[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Cos[2θ2 ]) (−(−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) + 2๐ฅ๐ฆ + ๐ฆ2 (−(−1 + λ12 )(1 (−(−1 + λ12 )(1 + λ22 )Cos[2θ1 ] + (1 + λ12 )(−1 + λ22 )Cos[2θ2 ]) + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) ((−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) (−(−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) =0 These normalized representations describe the same hyperbola if and only if the coefficients are equal. This results in 3 equations (equation set E1): (−1+λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ21 )(−1+λ22 )Sin[2(θ1 −θ2 )]−(1+λ21 )(−1+λ22 )Sin[2θ2 ] (I) −1 + (II) −2๐ด๐ฆ + (III) −๐ด๐ฅ๐ฆ + (1−λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ21 )(−1+λ22 )Sin[2(θ1 −θ2 )]+(1+λ21 )(−1+λ22 )Sin[2θ2 ] =0 (1+λ21 )(1+λ22 )+(−1+λ21 )(1+λ22 )Cos[2θ1 ]−(−1+λ21 )(−1+λ22 )Cos[2(θ1 −θ2)]−(1+λ21 )(−1+λ22 )Cos[2θ2 ] (1−λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ22 )((−1+λ21 )Sin[2(θ1 −θ2 )]+(1+λ21 )Sin[2θ2 ]) (1−λ21 )(1+λ22 )Cos[2θ1 ]+(1+λ21 )(−1+λ22 )Cos[2θ2 ] 2 2 (1−λ1 )(1+λ2 )Sin[2θ1 ]+(−1+λ22 )((−1+λ21 )Sin[2(θ1 −θ2 )]+(1+λ21 )Sin[2θ2 ]) Consider equation (I): 27 =0 =0 2(−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] =0 −(−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ22 )((−1 + λ12 )Sin[2(θ1 − θ2 )] + (1 + λ12 )Sin[2θ2 ]) It is satisfied when λ12 = 1 or λ22 = 1 or θ1 = θ2 . Substituting either ๐๐ = 1 into equations (I) and (III) yields equations that don’t depend on ๐๐ (this is expected since in case ๐๐ = 1 the respective norm is Euclidean and ๐๐ is meaningless). So it is safe to assume that in this case it is true, in a sense, that ๐1 = ๐2 . From now on we’ll simply use ๐ to denote the common angle. Equations (I) and (II) now become (regardless of which condition satisfied equation (I)): −2๐ด๐ฅ๐ฆ + 2 cot[2๐] = 0 2๐ด๐ฆ (λ12 − λ22 ) + λ12 cot[๐] + λ22 tan[๐] =0 −λ12 + λ22 This means that ๐ โ θ1 = θ2 = arccot[๐ด๐ฅ๐ฆ ] 2 For the second equation we see that it reduces to 2๐ด๐ฆ (1 − λ22 λ22 ) + cot[๐] + tan[๐] = 0 λ12 λ12 2 λ 2 λ It means that from this equation we can only deduce (λ2 ) . Denote ๐ถ = (λ2 ) . C depends 1 1 on the parameters of the hyperbola: arccot[๐ด๐ฅ๐ฆ ] ] 2 ๐ถ= arccot[๐ด๐ฅ๐ฆ ] −2๐ด๐ฆ + tan[ ] 2 −2๐ด๐ฆ − cot[ Note that for consistency, the value deduced for C from equation (II) must be positive. If ๐ถ ≤ 0, it means that there is no pair of norms that generate this hyperbola. We know that the Pareto front is generated by ๐1 = ๐ผ, ๐2 = ๐2 , so their parameters must satisfy 28 equation (II). arccot[๐ด๐ฅ๐ฆ ] ๐ tan[ 2 arccot[๐ด๐ฅ๐ฆ ] ๐1 = 1, ๐2 = ๐, ๐1 = 0, ๐2 = ๐ ⇒ 2๐ด๐ฆ (1 − ๐) + cot[ 2 ]+ ]=0 arccot[๐ด๐ฅ๐ฆ ] ] 2 ⇒ ๐2 = =๐ถ arccot[๐ด๐ฅ๐ฆ ] −2๐ด๐ฆ + tan[ ] 2 −2๐ด๐ฆ − cot[ ๐2 > 0 , therefore ๐ถ > 0. This means that for every choice of ๐1 > 0, choosing ๐22 = ๐ถ ๐12 will result in ๐22 greater than zero, and hence ๐2 that depends on ๐2 will represent an inner-product norm. 
Hence, the parameters ๐1 = ๐2 = arccot[๐ด๐ฅ๐ฆ ] , ๐1 , ๐2 = √๐ถ๐1 2 define 2 norms, ๐1 and ๐2 that generate (PF), for every ๐1 > 0. This means that there is a 1-dimensional family of norm-pairs that generate PF, parameterized by ๐1 . Note that choosing ๐1 determines ๐2 uniquely. However, note that there exist hyperbola-shaped data sets that cannot be describes as a Pareto front associated with 2 tasks. This is since the existence of any solution relies on C being positive. C is defined by the parameters of the hyperbola (๐ถ = arccot[๐ด๐ฅ๐ฆ ] ] 2 arccot[๐ด๐ฅ๐ฆ ] −2๐ด๐ฆ +tan[ ] 2 −2๐ด๐ฆ −cot[ ). We can find a hyperbola with parameters that define a negative C, which means that it is not the Pareto front of any norm pairs. This also means that given a hyperbola, it is easy to check whether or not it is generated by a norm-pair by simply calculating ๐ถ. An example for a hyperbola with ๐ถ < 0 is ๐ฅ 2 − 2๐ฅ๐ฆ − ๐ฆ 2 − ๐ฅ + 2๐ฆ = 0. So, knowing at least one norm pair that generates the Pareto front, we can deduce all norm pairs. However, another question can be asked: can we determine from the parameters of the hyperbola whether there are norm pairs that generate the hyperbola, and 29 if so what are the norms? We will show that in the common case, it is numerically possible. To approach this question, assume we have data shaped like a hyperbola section. From the data, we can identify the edge points of the hyperbola, and change coordinate system such that the edge points are at (0,0) and (1,0). Those points are assumed to be the 2 archetypes - ๐ฃ1∗ = (0,0) and ๐ฃ2∗ = (1,0). The new coordinate system results from the old coordinate system by a rotation, translation and isometric scaling. Note the difference between this transformation and the one described earlier – in both of them we transform the archetypes to be at (0,0) and (1,0). However, earlier we transformed such that one of the norms that generates the front is Euclidean. Here we can’t do so as we don’t know which norm pairs generate the hyperbola. As before, we get that under such transformations, a hyperbola remains a hyperbola, and there is a 1-to-1 correspondence between norm-pairs that generate the transformed front in the transformed coordinate system and the norm-pairs that generate the front in the original coordinate system. Hence, once we find which norms generate the front in the transformed coordinate system, we can transform back and find the norms that generate the front in the original coordinate system. If we find that there isn’t a norm pair that generates the hyperbola in the transformed coordinate system, this conclusion will hold in the original coordinate system. In this coordinate system, the hyperbola is given by the equation ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 − ๐ด๐ฅ๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ = 0 The ๐ฅ coefficient and free parameter are determined since the hyperbola has to pass through the archetypes at (0,0) and (0,1). We would like to know which pairs of norms, if exist, generate a Pareto front with those parameters. We search for parameters ๐1 , ๐2 , ๐1 , ๐2 such that ๐๐น(๐1 , ๐2 , ๐1 , ๐2 ), defined before, describes the same curve as ๐ด๐ฅ๐ฅ ๐ฅ 2 + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 − ๐ด๐ฅ๐ฅ ๐ฅ + 2๐ด๐ฆ ๐ฆ = 0. 
30 We know that ๐ด๐ฅ๐ฅ ≠ 0, since otherwise ๐ฅ = 0 and the Pareto front is a line, so we can normalize the coefficient of ๐ฅ 2 to be 1, and equate both (normalized) representations: ๐ฅ 2 − ๐ฅ + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 + 2๐ด๐ฆ ๐ฆ = 0 and ๐ฅ2 − ๐ฅ + ๐ฆ (1 + λ12 + λ22 + λ12 λ22 + (−1 + λ12 )(1 + λ22 )Cos[2θ1 ] − (−1 + λ12 )(−1 + λ22 )Cos[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Cos[2θ2 ]) (−(−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) + 2๐ฅ๐ฆ + ๐ฆ2 (−(−1 + λ12 )(1 (−(−1 + λ12 )(1 + λ22 )Cos[2θ1 ] + (1 + λ12 )(−1 + λ22 )Cos[2θ2 ]) + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) ((−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] − (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) (−(−1 + λ12 )(1 + λ22 )Sin[2θ1 ] + (−1 + λ12 )(−1 + λ22 )Sin[2(θ1 − θ2 )] + (1 + λ12 )(−1 + λ22 )Sin[2θ2 ]) =0 These normalized representations describe the same hyperbola if and only if the coefficients are equal. This results in 3 equations (equation set E2): (I) −๐ด๐ฆ๐ฆ + (II) −2๐ด๐ฆ + (III) −๐ด๐ฅ๐ฆ + (−1+λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ21 )(−1+λ22 )Sin[2(θ1 −θ2 )]−(1+λ21 )(−1+λ22 )Sin[2θ2 ] (1−λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ21 )(−1+λ22 )Sin[2(θ1 −θ2 )]+(1+λ21 )(−1+λ22 )Sin[2θ2 ] =0 (1+λ21 )(1+λ22 )+(−1+λ21 )(1+λ22 )Cos[2θ1 ]−(−1+λ21 )(−1+λ22 )Cos[2(θ1 −θ2)]−(1+λ21 )(−1+λ22 )Cos[2θ2 ] (1−λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ22 )((−1+λ21 )Sin[2(θ1 −θ2 )]+(1+λ21 )Sin[2θ2 ]) (1−λ21 )(1+λ22 )Cos[2θ1 ]+(1+λ21 )(−1+λ22 )Cos[2θ2 ] (1−λ21 )(1+λ22 )Sin[2θ1 ]+(−1+λ22 )((−1+λ21 )Sin[2(θ1 −θ2 )]+(1+λ21 )Sin[2θ2 ]) =0 =0 The solution of this equation set behaves quite differently depending on ๐ด๐ฆ๐ฆ . First assume that ๐ด๐ฆ๐ฆ = −1. In that case, equation set E2 becomes equation set E1. Note that as mentioned under equation set E1, this scenario happens if ๐1 = ๐2 , ๐1 = 1 or ๐2 = 1 (one of the norms is Euclidean). We know that the resulting solution is as calculated above for E1, and that it is valid only if the hyperbola’s parameters are such that C > 0. Here, given only the fit of the hyperbola, we can determine if there are norms that generate the hyperbola, and what are the norms if they exist. When ๐ด๐ฆ๐ฆ ≠ −1, solving equations (E2.I)+(E2.III) results in: λ12 = 1 + 2(1 + ๐ด๐ฆ๐ฆ ) −1 − ๐ด๐ฆ๐ฆ + (−1 + ๐ด๐ฆ๐ฆ )Cos[2θ1 ] + 2๐ด๐ฅ๐ฆ Sin[2θ1 ] 31 λ22 = 1 + 2(1 + ๐ด๐ฆ๐ฆ ) −1 − ๐ด๐ฆ๐ฆ + (−1 + ๐ด๐ฆ๐ฆ )Cos[2θ2 ] + 2๐ด๐ฅ๐ฆ Sin[2θ2 ] Substituting this into equation (II) we get: 2 (IV) Sin[2θ1 ] (4(๐ด๐ฅ๐ฆ + ๐ด๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฆ )Cos[2θ2 ] + (−4๐ด๐ฅ๐ฆ 2 + (1 + ๐ด๐ฆ๐ฆ ) ) Sin[2θ2 ]) + 4 Cos[2θ1 ] (๐ด๐ฆ๐ฆ Cos[2θ2 ] − (๐ด๐ฅ๐ฆ ๐ด๐ฆ๐ฆ + ๐ด๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฆ )Sin[2θ2 ]) = 0 ๐ For every ๐2 , we will show that there exists a single angle ๐1 (modulo 2 ), such that equation (IV) is satisfied. (An example for specific parameters ๐ด๐ฆ๐ฆ , ๐ด๐ฅ๐ฆ , ๐ด๐ฆ is shown in Fig.S2) 2 For convenience denote ๐ โ (1 + ๐ด๐ฆ๐ฆ ) − 4๐ด๐ฅ๐ฆ 2 , ๐ = 4(๐ด๐ฅ๐ฆ + ๐ด๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฆ ) There are 2 cases: (A) ๐2 ≠ ๐ 4 ๐ (๐๐๐ 2 ): Now Cos[2θ2 ] ≠ 0 and we can divide the equation by it: (Sin[2θ1 ](๐ + ๐ Tan[2θ2 ]) + 4 Cos[2θ1 ] (๐ด๐ฆ๐ฆ Cos[2θ2 ] − (๐ด๐ฅ๐ฆ ๐ด๐ฆ๐ฆ + ๐ด๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฆ )Sin[2θ2 ])) = 0 Again, there are 2 cases: ๐ (A.1) Tan[2θ2 ] = − ๐ Substituting this into equation (IV) we get the equation: 2 8(1 + ๐ด๐ฆ๐ฆ ) (๐ด๐ฆ๐ฆ + 4๐ด๐ฆ (๐ด๐ฅ๐ฆ + ๐ด๐ฆ )) ( ) Cos[2θ1 ] = 0 ๐ 2 (1 + ๐ด๐ฆ๐ฆ ) (๐ด๐ฆ๐ฆ + 4๐ด๐ฆ (๐ด๐ฅ๐ฆ + ๐ด๐ฆ )) ≠ 0. 
This is because we assume that ๐ด๐ฆ๐ฆ ≠ −1, and, in addition, the Δ property of the quadratic curve is ๐ฅ = 4(๐ด๐ฆ๐ฆ + 4๐ด๐ฆ (๐ด๐ฅ๐ฆ + ๐ด๐ฆ )) and for a hyperbola ๐ฅ ≠ 0. Hence, cos[2θ1 ] must equal 0 for the equation to be satisfied, ๐ ๐ meaning ๐1 = 4 (๐๐๐ 2 ). 32 ๐ (A.2) Tan[2θ2 ] ≠ − ๐ : ๐ ๐ In that case, we can assume that ๐1 ≠ 4 . This is because when ๐1 = 4 , Cos[2θ1 ] = 0, Sin[2θ1 ] = 1, and ๐1 = ๐ 4 solves the equation only if ๐ + ๐ Tan[2θ2 ] = 0. Note that either ๐ ≠ 0 or ๐ ≠ 0, since if both ๐ = 0 and ๐ = 0, it can be shown that the Δ property of the quadratic curve becomes 0. This means that this equation has a solution only if ๐ Tan[2θ2 ] = − ๐ , which is assumed not to be case. ๐ So we assume ๐1 ≠ 4 . In that case, Cos[2๐1 ] ≠ 0 and we can divide the equation by it. The equation becomes: 8(๐ด๐ฆ๐ฆ − (๐ด๐ฅ๐ฆ ๐ด๐ฆ๐ฆ + ๐ด๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฆ )) Tan[2θ2 ] + Tan[2θ1 ](๐ + ๐ Tan[2θ2 ]) = 0 Note that if ๐ + ๐ Tan[2θ2 ] ≠ 0, as was shown earlier to be the case here, then ๐1 such that: Tan[2θ1 ] = −4๐ด๐ฆ๐ฆ + 4(๐ด๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ด๐ฆ ) Tan[2θ2 ] (๐ + ๐ Tan[2θ2 ]) solves equation (IV). ๐ Note that once Tan[2๐1 ] is determined, ๐1 is determined modulo 2 . However, we assume ๐ that 0 ≤ ๐1 , ๐2 < 2 , so determining ๐1 up to ๐ ๐ 2 is enough. Moreover, we know that if 1 ๐1 → ๐1 + 2 , then if also ๐1 → ๐ the Pareto front doesn’t change. Examining the term for 1 ๐1 , we see this symmetry. ๐ To conclude, for each ๐2 ≠ 4 , there is a single ๐1 in the range 0 ≤ ๐1 < ๐1 , ๐2 , ๐(๐1 ), ๐(๐2 ) generate the hyperbola. ๐ (B) ๐2 = 4 : In that case, ๐1 = ๐ 4 is a solution to the equation if and only if ๐ = 0. 33 ๐ 2 such that ๐ Otherwise, ๐1 ≠ 4 , and we can divide by cos[2๐1 ] to get: 2(๐ + ๐ tan[2θ1 ]) = 0 Again, for a hyperbola, it is not possible that both ๐ and ๐ are zero, so the only way that ๐ the equation will be satisfied is if tan[2θ1 ] = ๐ . To conclude, for every value of ๐2 , we can determine a unique value for ๐1 in the range ๐ [0, 2 ) such that the hyperbola on which lies the Pareto front that is generated by the 2 norms ๐1 (๐1 (๐1 ), ๐1 ), ๐2 (๐2 (๐2 ), ๐2 ) is the same as the hyperbola described by ๐ฅ 2 − ๐ฅ + 2๐ด๐ฅ๐ฆ ๐ฅ๐ฆ + ๐ด๐ฆ๐ฆ ๐ฆ 2 + 2๐ด๐ฆ ๐ฆ = 0. The ๐1 that matches ๐2 is given by: θ1 = ๐ If ๐2 ≠ 4 , tan[2๐2 ] ≠ 4๐ด๐ฆ๐ฆ + ๐ tan[2θ2 ] 1 arctan[ ] (๐ + ๐ tan[2θ2 ]) 2 ๐ ๐ θ1 = ๐ ๐ 4 ๐ If ๐2 ≠ 4 , tan[2๐2 ] = ๐ , ๐ ≠ 0 θ1 = If ๐2 = 1 ๐ arctan [ ] 2 ๐ ๐ 4 Note that this describes a 1-dimensional continuous curve. When ๐2 ≠ ๐ 1 ๐๐๐[2๐2 ] → ๐ , then ๐1 = 2 ๐ด๐๐๐๐๐ [ 4๐ด๐ฆ๐ฆ +๐ tan[2θ2 ] (๐+๐ tan[2θ2 ]) ๐ 1 ]→ 2 ๐ 4 and ๐ด๐๐๐๐๐[∞] = ๐ 4 ๐ which is the solution when ๐2 ≠ 4 , ๐๐๐[2๐2 ] = ๐ . ๐ ๐ 1 When ๐2 → 4 , then the solution when ๐2 ≠ 4 , θ1 = 2 arctan[ 1 ๐ ๐ θ1 = 2 arctan [๐ ] which is the solution when ๐2 = 4 . 34 4๐ด๐ฆ๐ฆ +๐ tan[2θ2 ] (๐+๐ tan[2θ2 ]) ] goes to Note that ๐2 ≠ ๐1 since we assume ๐ด๐ฆ๐ฆ ≠ −1, so when ๐2 → ๐ 4 ๐ 4 it is okay to assume ๐1 ≠ . To conclude, for every ๐2 , there is a single ๐1 (๐2 ), such that norms with the parameters ๐1 (๐2 ), ๐2 , ๐12 (๐1 (๐2 )), ๐22 (๐2 ) generate the hyperbola. However, those parameters will represent matrices of inner-product norms only if ๐12 (๐1 (๐2 )) > 0 and ๐22 (๐2 ) > 0. Hence, if there exists ๐2 such that ๐12 (๐1 (๐2 )) > 0 and ๐22 (๐2 ) > 0, the norms represented by ๐1 (๐2 ), ๐2 , ๐12 (๐1 (๐2 )), ๐22 (๐2 ) will generate the hyperbola. We can find all norms that generate a given hyperbola by considering all ๐2 ′๐ for which ๐12 (๐1 (๐2 )) > 0 and ๐22 (๐2 ) > 0. 
In that case, we know that there is a 1-dimensional family of norms that generate this hyperbola. If there doesn’t exist such ๐2 , it means that there doesn’t exist a pair of norms that generates the given hyperbola. We can examine the existence of such ๐2 numerically: we plot ๐12 (๐1 (๐2 )) and ๐22 (๐2 ), and check if there is an area where both ๐12 and ๐22 are positive. An example for such a plot is given in Fig S3. To conclude, we showed that we cannot uniquely determine norm pairs that generated a given front. Some hyperbolae cannot be described as the Pareto front of 2 tasks in 2D. For those who can be described, there is a one-dimensional family of norm-pairs that generate them. However, it is generally enough to determine a single parameter of one of the norms, to completely determine both norms. If such a parameter could be obtained by other means (e.g. – a biomechanical model, etc.), the above method can be used to exactly determine the norms, and therefore the relative importance of each trait to the performance. 35 Fig S2: When considering the set of norm-pairs parameters that generate a given hyperbola, ๐ฝ๐ is a function of ๐ฝ๐ . Here, an example is shown for the hyperbola with parameters as displayed in the figure. Fig S3: ๐๐๐ (๐ฝ๐ (๐ฝ๐ )) and ๐๐๐ (๐ฝ๐ ), for 2 different hyperbolae. a) A plot of ๐๐๐ (๐ฝ๐ (๐ฝ๐ )) and ๐ ๐๐๐ (๐ฝ๐ ) for a hyperbola with the parameters: ๐จ๐๐ = −๐, ๐จ๐๐ = −๐, ๐จ๐ = − . It can be ๐ seen that there exist ๐ฝ๐ ′๐ for which both ๐๐๐ (๐ฝ๐ (๐ฝ๐ )) and ๐๐๐ (๐ฝ๐ ) are positive. This means 36 that there are norms whose Pareto front is the above hyperbola. b) A plot of ๐๐๐ (๐ฝ๐ (๐ฝ๐ )) and ๐ ๐๐๐ (๐ฝ๐ ) for a hyperbola with the parameters: ๐จ๐๐ = ๐, ๐จ๐๐ = ๐, ๐จ๐ = − . It can be seen ๐ that there isn’t any ๐ฝ๐ such that both ๐๐๐ (๐ฝ๐ (๐ฝ๐ )) and ๐๐๐ (๐ฝ๐ ) are positive. This means that this hyperbola cannot be described as the Pareto front of any norm. 37 Appendix S6 Generally, for 3 tasks in a 2D morphospace, the norms can be uniquely determined by the shape of the Pareto front Let us assume that we have 3 tasks in a 2๐ท-morphospace. The matrices that describe the norms that the performance functions depend on are denoted by ๐๐ , for ๐ ∈ {1,2,3}. Denote the parameters that ๐๐ depends on with ๐๐ , ๐๐ . The Pareto front in this case is given by (Appendix S1): 3 ๐ฃ = (∑ ๐ผ๐ ๐๐ ) ๐=1 −1 3 3 (∑ ๐ผ๐ ๐๐ ๐ฃ๐∗ ๐=1 ) ๐ . ๐ก. 0 ≤ ๐ผ๐ ≤ 1, ∑ ๐ผ๐ = 1 ๐=1 As proven in Appendix S7, the boundary of ๐1,2,3, the 3-tasks Pareto front, is composed of the three 2-tasks fronts. Hence, given a dataset, we can take its boundary and assume that it represents 3 hyperbolae. Denote by ๐๐,๐ the 2-tasks front between ๐ฃ๐∗ and ๐ฃ๐∗ . Since there are 3 tasks, each one is associated with two 2-tasks fronts. From the front ๐๐,๐ we can deduce the 1-dimensional family of norm-pairs that generates it, as explained in ๐ Appendix S5. Denote this family by โฑ๐,๐ . Denote by โฑ๐,๐ the family of norms that are associated with task ๐ and were calculated from the front ๐๐,๐ . Thus, there are 2 families of ๐ ๐ potential norms that task ๐ might depend on - โฑ๐,๐ and โฑ๐,๐ (๐ ≠ ๐ ≠ ๐). However, each task can depend only on one norm. It means that if the hyperbolae triad is indeed ๐ generated by a norms-triad, then there must be at least one common member between โฑ๐,๐ ๐ ๐ ๐ and โฑ๐,๐ . Denote โฑ ๐ = โฑ๐,๐ ∩ โฑ๐,๐ , then โฑ ๐ ≠ ๐. Choosing a member ๐๐ ∈ โฑ ๐ determines (see Appendix S5) a member ๐๐,๐ of โฑ๐,๐ and a member ๐๐,๐ of โฑ๐,๐ . Those in turn determine a member ๐๐ ∈ โฑ๐ and ๐๐ ∈ โฑ ๐ . 
Define ๐๐,๐ = {(๐๐ , ๐๐ )|๐๐ ∈ โฑ ๐ }. ๐๐,๐ includes all pairs of norms on which the performance of task ๐ and task ๐ can depend, deduced from looking on task ๐. On the other hand, task j and task k generate ๐๐,๐ . This means that the existence of a triplet of norms that generate the Pareto front requires that ๐๐,๐ ∩ ๐๐,๐ ≠ ๐. 38 Hence, three cases are possible: (I) |๐๐,๐ ∩ โฑ๐,๐ | = 1. In that case, there is only one triplet of norms that generates the given Pareto front. (II) |๐๐,๐ ∩ โฑ๐,๐ | > 1. In that case, the norms-triplet cannot be determined uniquely from the front. (III) |๐๐,๐ ∩ โฑ๐,๐ | = 0. In that case, the hyperbole-bound triangular shape corresponds to no triplet of norms and hence cannot be explained in the scope of the current model. Let’s check under what conditions each of the above cases occur. But first, we prove the following lemma: Lemma 1: Let ๐๐,๐ be the Pareto front associated with tasks ๐ and ๐. If one of the following occurs: ๐ ๐ (I) ๐ผ ∈ โฑ๐,๐ or ๐ผ ∈ โฑ๐,๐ (where ๐ผ is identity matrix, representing the Euclidean norm) (II) ∃(๐1 , ๐2 ) ∈ โฑ๐,๐ such that ๐1 = ๐2 . Then for every ๐ > 0 there is a norm-pair that generate ๐๐,๐ given by 1 0 ๐1 = ๐ [๐๐,๐ ] ( 0 ) ๐ [−๐๐,๐ ] ๐2 1 0 ๐2 = ๐ [๐๐,๐ ] (0 ๐ถ ๐2 ) ๐ [−๐๐,๐ ] ๐,๐ where ๐ถ๐,๐ > 0 and ๐๐,๐ are constant determined by the parameters of ๐๐,๐ . ฬ๐∗ = (0,0), ๐ฃ๐∗ โฆ ฬ Proof: We transform to a coordinate system where ๐ฃ๐∗ โฆ ๐ฃ ๐ฃ๐∗ = (1,0). This transformation is a combination of a translation, a rotation by an angle ๐ฬ, and an isomorphic scaling. Under those transformations, the parameters of the norms transform in the following way: 39 ๐1 โฆ ๐ฬ1 = ๐1 , ๐2 โฆ ๐ฬ2 = ๐2 , ๐1 โฆ ๐ฬ1 = ๐1 + ๐ฬ, ๐2 โฆ ๐ฬ2 = ๐2 + ๐ฬ It means that if condition (I) or (II) holds in the original coordinate system, it will hold in the transformed coordinate system. So either way, from Appendix S5 we know that in the transformed coordinate system, ๐๐,๐ โฆ ๐ฬ ๐,๐ whose parameter ๐ด๐ฆ๐ฆ = −1. In that scenario, ฬ the family of norm pairs that generate ๐ฬ ๐,๐ , โฑ๐,๐ , was fully classified in Appendix S5: ฬ2 ฬ2 ฬ ฬ ฬ โฑ ฬ 1, ๐ ฬ 2 ) ๐ . ๐ก. ๐ฬ1 = ๐ฬ2 = ๐ฬ ๐,๐ = (๐ ๐,๐ , ๐2 = ๐ถ๐,๐ ๐1 , for ๐๐,๐ , ๐ถ๐,๐ > 0 that are determined by the parameters of the hyperbola ๐ฬ ๐,๐ . Going back to the original coordinate system, we ฬ can deduce that โฑ๐,๐ = (๐1 , ๐2 ) ๐ . ๐ก. ๐1 = ๐2 = ๐๐,๐ , ๐22 = ๐ถ๐,๐ ๐12, for ๐๐,๐ = ๐ฬ ๐,๐ − ๐ , ๐ถ๐,๐ = ๐ถฬ ๐,๐ . โ Consider a hyperbole-bound triangular shape with vertices ๐ฃ1∗ , ๐ฃ2∗ , ๐ฃ3∗ . Denote โฑ๐,๐ as above. For now we assume that there is a triplet of norms – ๐1 , ๐2 , ๐3 - that generate that shape, i.e. โฑ๐,๐ ∩ ๐๐,๐ ≠ ๐ท. We will attempt to characterize when this norm-triplet is unique, and show that if it is not unique, there exists a 1-dimensional family of normtriplets that generate the same ๐1,2,3. We choose coordinate system such that ๐1 = ๐ผ. This is possible as shown in Appendix S2. In that case, from lemma 1 we know that โฑ1,2 - the family of norm pairs that generate ๐1,2 , contains norms for which ๐1 = ๐2 =: ๐1,2 and λ22 λ21 =: ๐ถ1,2 , where ๐ถ1,2 > 0, ๐1,2 are defined by the parameters of ๐1,2 . Lemma 1 also tell us that โฑ1,3- the family of norm pairs that generate ๐1,3 - contains norms for which ๐1 = ๐3 =: ๐1,3 and λ23 λ21 =: ๐ถ1,3 , where ๐ถ1,3 > 0, ๐1,3 are defined by the parameters of ๐1,3 . There are 2 cases, either ๐1,3 ≠ ๐1,2 or ๐1,3 = ๐1,2 . Claim 1: If ๐1,3 ≠ ๐1,2 , then โฑ 1 = {๐1 } = {๐ผ}, and the norm-triplet is unique. 1 1 Proof: Let ๐′ ∈ โฑ 1 ⇒ ๐′ ∈ โฑ1,2 and ๐′ ∈ โฑ1,3 . 
From the above conclusions on ℱ_{1,2} and ℱ_{1,3} we deduce that, on the one hand, θ_{M′} = θ_{1,3} and, on the other hand, θ_{M′} = θ_{1,2}. If θ_{1,3} ≠ θ_{1,2}, the only way to resolve the conflict is if λ_{M′} = 1 and then M′ = I, since in this case the norm is Euclidean and θ is meaningless. M_1 is thus determined uniquely, and since M_2 is determined uniquely by M_1 and M_3 is also determined uniquely by M_1, there is only one triplet of norms that generates the Pareto front ⇒ we can uniquely deduce the norms that the performance functions depend on from the Pareto front. ∎

Claim 2: If θ_{1,3} = θ_{1,2}, then there is a 1-dimensional family of norm-triplets that generates the given P_{1,2,3}.

Proof: In that case ℱ_{1,2}^1 = ℱ_{1,3}^1. This is true since, according to Lemma 1, ℱ_{1,2}^1 contains all norms with θ = θ_{1,2} and any λ > 0, and ℱ_{1,3}^1 contains all norms with θ = θ_{1,3} and any λ > 0. Since ℱ^1 = ℱ_{1,2}^1 ∩ ℱ_{1,3}^1, we get that ℱ^1 = ℱ_{1,2}^1 = ℱ_{1,3}^1. ℱ_{1,2}^1 is infinite since ℱ_{1,2} is infinite (Appendix S5), and hence ℱ^1 is infinite. Under the current assumptions, there is M_1 ∈ ℱ^1 with (M_2, M_3) ∈ ℱ_{2,3} (i.e. the norm M_2 that is paired with M_1 in ℱ_{1,2} and the norm M_3 that is paired with M_1 in ℱ_{1,3} generate P_{2,3}). Note that M_2 and M_3 both have the same angle θ. In that case, from Lemma 1 we get that (M_2, M_3) ∈ ℱ_{2,3} ⇔ θ_2 = θ_3 = θ_{2,3} and λ_3^2 = C_{2,3} λ_2^2, for θ_{2,3} and C_{2,3} determined by the parameters of the hyperbola. (M_2, M_3) generates P_{2,3}, and thus λ_{M_3}^2/λ_{M_2}^2 = C_{2,3} and θ_{M_2} = θ_{M_3} = θ_{2,3}. In addition,
(M_1, M_2) ∈ ℱ_{1,2} ⇒ λ_{M_2}^2 = C_{1,2} λ_{M_1}^2,
(M_1, M_3) ∈ ℱ_{1,3} ⇒ λ_{M_3}^2 = C_{1,3} λ_{M_1}^2,
⇒ C_{2,3} = λ_{M_3}^2/λ_{M_2}^2 = (C_{1,3} λ_{M_1}^2)/(C_{1,2} λ_{M_1}^2) = C_{1,3}/C_{1,2}, and θ_{M_1} = θ_{M_2} = θ_{1,2} = θ_{1,3} = θ_{M_3} = θ_{2,3}.
Now take another M_1′ ∈ ℱ^1. We know that λ_{M_2′}^2 = C_{1,2} λ_{M_1′}^2 and λ_{M_3′}^2 = C_{1,3} λ_{M_1′}^2 ⇒ λ_{M_3′}^2/λ_{M_2′}^2 = (C_{1,3} λ_{M_1′}^2)/(C_{1,2} λ_{M_1′}^2) = C_{1,3}/C_{1,2} = C_{2,3}. We also know that θ_{M_1′} = θ_{M_2′} and θ_{M_1′} = θ_{M_3′} ⇒ θ_{M_2′} = θ_{M_3′} ⇒ (M_2′, M_3′) ∈ ℱ_{2,3}. Hence, every M_1′ ∈ ℱ^1 is part of a norm-triplet that generates the hyperbolae triplet. Since ℱ^1 is infinite, there is an infinite number of norm-triplets that generate the given hyperbolae-bound shape. The family is defined by one of the λ parameters, since all the θ's are known and every other λ is determined by the respective C constant. ∎

Those conclusions hold in the frame where M_1 = I – i.e. where there is a norm triplet that generates the Pareto front in which the norm associated with task 1 is Euclidean. We can go back to the original coordinate system by rotating, translating and rescaling the space. Those transformations are invertible, so there is a 1-to-1 correspondence between norm-triplets that generate the shape in the transformed coordinate system and norm-triplets that generate the shape in the original coordinate system.

It can be seen that most hyperbolae-bound triangular shapes that correspond to norm triplets correspond to a single triplet, since it is much more common that θ_{1,2} ≠ θ_{1,3}.

This method can be used to find the norms that generate a given hyperbolae-bound triangular shape. We can fit P_{1,2} and P_{1,3} and find ℱ_{1,2} and ℱ_{1,3}. Then, we find ℱ_{1,2}^1, ℱ_{1,3}^1 and ℱ^1 = ℱ_{1,2}^1 ∩ ℱ_{1,3}^1. If ℱ^1 = ∅ we deduce that the shape corresponds to no norm-triplet. Otherwise, we choose M_1 ∈ ℱ^1. We change the coordinate system such that M_1 = I while v_1^*, v_2^* are at (0,0) and (1,0), respectively. From the transformed P_{1,2} and P_{1,3}, we find θ_{1,2} and θ_{1,3}.
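A minimal numerical sketch of these last steps is given below. It assumes that the family parameters θ_{1,2}, C_{1,2} and θ_{1,3}, C_{1,3} have already been extracted from the fitted hyperbolae in the transformed frame (the extraction itself, described in Appendix S5, is not shown), and the numerical values used are purely hypothetical.

import numpy as np

def rot(t):
    # 2D rotation matrix R[t]
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def norm_matrix(theta, lam_sq):
    # M = R[theta] diag(1, lambda^2) R[-theta], the parametrization of Lemma 1 above
    return rot(theta) @ np.diag([1.0, lam_sq]) @ rot(-theta)

# Family parameters assumed extracted from the fitted fronts P_{1,2} and P_{1,3}
# (hypothetical values, for illustration only).
theta_12, C_12 = 0.4, 2.0
theta_13, C_13 = 0.7, 0.5

if not np.isclose(theta_12, theta_13):
    lam1_sq = 1.0                        # Claim 1: only the Euclidean norm fits task 1
    M1 = np.eye(2)
else:
    lam1_sq = 1.5                        # Claim 2: any lambda_1 > 0 gives a valid triplet
    M1 = norm_matrix(theta_12, lam1_sq)

M2 = norm_matrix(theta_12, C_12 * lam1_sq)   # lambda_2^2 = C_{1,2} lambda_1^2
M3 = norm_matrix(theta_13, C_13 * lam1_sq)   # lambda_3^2 = C_{1,3} lambda_1^2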
Examining if ๐1,2 = ๐1,3 determines if the solution is unique or degenerate. 42 Appendix S7 The boundary of the 3-tasks Pareto front is composed of the three 2-tasks Pareto fronts In this Appendix, we will show that the boundary of the Pareto front associated with 3 tasks in a 2D morphospace is composed of the 3 Pareto fronts associated with each pair of tasks Consider the Pareto front defined by tasks i, j , denoted by ๐๐,๐ . As seen before - this front is a section of a hyperbola (or a line as a special case). Denote the hyperbola branch that contains ๐๐,๐ by ๐ต๐,๐ and the entire hyperbola by ๐ป๐,๐ (in case ๐๐,๐ is a line, ๐ต๐,๐ = ๐ป๐,๐ ). The hyperbola branch divides the space into 3 parts – one side of ๐ต๐,๐ , ๐ต๐,๐ itself and the other side of ๐ต๐,๐ . Denote by ๐1,2,3 the Pareto front associated with all three tasks. First note that ∀๐, ๐: ๐๐,๐ ⊆ ๐1,2,3 : If ๐ฃ ∈ ๐๐,๐ , then for every ๐ฃ ′ ≠ ๐ฃ, there is ๐ ∈ {๐, ๐} such that ๐๐ (๐ฃ ′ ) < ๐๐ (๐ฃ), and specifically there is ๐ ∈ {1,2,3} such that ๐๐ (๐ฃ ′ ) < ๐๐ (๐ฃ) ⇒ ๐ฃ ∈ ๐1,2,3 ⇒ ๐๐,๐ ⊂ ๐1,2,3 . For convenience sake, and without loss of generality, we assume that the gradient ๐๐ = ๐ป๐๐ is normalized (๐๐ ≠ 0 except at the archetype ๐ฃ๐∗ , everywhere else we can redefine ๐ ๐๐ โ โ๐๐ โ). ๐ For now, assume that for ๐, ๐, ๐ ∈ {1,2,3} such that ๐ ≠ ๐ ≠ ๐, ๐ป๐,๐ ≠ ๐ป๐,๐ . The case where there are overlapping hyperbolae will be discussed later. We will implicitly consider the 2-dimensional morphospace V as embedded in a 3dimensional vector space in the trivial way ((๐ฅ, ๐ฆ) โฆ (๐ฅ, ๐ฆ, 0)) for the use of operations such as cross product. So expressions such as ๐๐ × ๐๐ should be understood as operations in the 3-dimensional space, while operations such as ๐๐ should be understood as operations in the original 2-dimensional space. 43 Consider the functional ๐๐,๐ = ๐๐ง โ (๐๐ × ๐๐ ), where ๐๐ง is the standard projection function ๐๐ง (๐ฅ, ๐ฆ, ๐ง) = ๐ง. ๐๐ × ๐๐ = (0,0, โ๐๐ โโ๐๐ โ sin[๐๐,๐ ]), where ๐๐,๐ is the angle between ๐๐ and ๐๐ , measured anticlockwise from ๐๐ to ๐๐ , so we get ๐(๐ฃ) = โ๐๐ (๐ฃ)โโ๐๐ (๐ฃ)โ sin[๐๐,๐ ] Lemma 1: On ๐ต๐,๐ , ๐๐,๐ ≡ 0. On one side of ๐ต๐,๐ , close enough to it, ๐๐,๐ > 0, and on the other side of ๐ต๐,๐ , close enough to it, ๐๐,๐ < 0. Proof: First note that ๐ is continuous as a projection of a cross product of 2 continuous functions. As we demonstrated before – on the Pareto front ๐๐,๐ , the gradients point in opposite directions, meaning sin[๐๐,๐ ] = 0, which implies ๐๐,๐ = 0. We’ve also seen that ๐๐,๐ is part of a hyperbola (or line) ๐ป๐,๐ . ๐ป๐,๐ is given by a quadratic (or linear) form ๐ป such that ๐ป๐,๐ = {๐ฃ|๐ป(๐ฃ) = 0} ๐๐,๐ is also a quadratic (or linear) form (as a cross product of two linear forms), and ๐๐,๐ |๐ ≡ ๐ป|๐๐,๐ ≡ 0, which implies ๐ = ๐ผ๐ป for some ๐ผ ∈ โ. ๐,๐ In case ๐๐,๐ is a line – ๐๐,๐ is a linear functional and it is trivial that it is negative on one half space, and positive on the other. In the case where ๐๐,๐ is a hyperbola – by definition ๐๐,๐ = ๐ผ๐ป = 0 on the hyperbolae (both branches). We’ll show that it changes sign between the 3 connected components of โ2 โ ๐ป๐,๐ - let h be the line between the 2 foci of the hyperbola. h intersects each branch 44 of the hyperbola exactly once. Consider the function ๐๐,๐ โ โ - it is a quadratic real function of a single parameter (since ๐๐,๐ : โ → โ2 is quadratic in 2 variables and โ: โ → โ is linear). ๐๐,๐ โ โ = 0 on both intersections of โ with ๐ป๐,๐ . It means that ๐|โ has to change sign once it passes the hyperbola. 
So on one side of a branch of the hyperbola, d_{i,j}|_h is positive, and on the other side it is negative. Of course, d_{i,j} is continuous and d_{i,j} ≠ 0 on ℝ² \ H_{i,j}, so its sign is constant across the connected components. Hence, on one side of B_{i,j}, d_{i,j} is positive, and on the other side d_{i,j} is negative (as long as the other branch of the hyperbola is not approached). All in all, in a neighborhood of B_{i,j} (and hence of P_{i,j}), d_{i,j} is positive on one side of B_{i,j}, negative on the other side of B_{i,j}, and 0 on B_{i,j} itself. ∎

d_{i,j} > 0 implies that g_j is less than π radians anticlockwise from g_i; d_{i,j} < 0 implies that g_j is more than π radians anticlockwise from g_i.

We would like to show that P_{i,j} is at the boundary of P_{1,2,3}, the Pareto front associated with all 3 tasks. To do so, we need to show that
∀v ∈ P_{i,j} and ∀ε > 0: ∃v′ ∈ B_0(v, ε) s.t. v′ ∈ ℝ² \ P_{1,2,3},
where B_0(v, ε) is an open ball of radius ε around v.

Theorem 2: ∀i, j (i ≠ j): P_{i,j} ⊆ ∂P_{1,2,3}.

Proof: Assume without loss of generality that i = 1, j = 2 (the proof is identical for any pair i, j as long as i ≠ j). Assume by negation that P_{1,2} is not at the boundary of P_{1,2,3}; this means that there exist v ∈ P_{1,2} and ε > 0 such that B_0(v, ε) ⊂ P_{1,2,3}.

Let ε > 0, and let u ∈ B_0(v, ε).

Claim 2: For u ∈ P_{1,2,3}, d_{i,j}(u) (i ≠ j) and d_{i,k}(u) (k ≠ i, j) cannot be both positive, nor both negative.

Proof: Assume d_{i,j} > 0 ⇒ g_j is less than π radians anticlockwise from g_i. In this case d_{i,k} cannot be positive: if d_{i,k} > 0, it means that g_k is less than π radians anticlockwise from g_i. In this case all three gradients lie in the same half-space. Choose the gradient g_l that has the maximal angle with g_i (l = argmax_m φ_{i,m}). The angle between g_i and g_l is smaller than π (because both g_j and g_k are less than π radians anticlockwise from g_i). Choose û to be the vector bisecting the angle between g_i and g_l, û = (ĝ_i + ĝ_l)/2. The angle between û and each gradient is smaller than π/2. Then ∀n: û · g_n > 0. However, we showed in Appendix S1 that u is Pareto optimal if and only if there does not exist a vector û such that ∀n: û · g_n > 0. Since u ∈ P_{1,2,3} (i.e., u is Pareto optimal), we must conclude that d_{i,k} ≯ 0. To show that d_{i,j} and d_{i,k} cannot both be negative, follow the above proof while changing the word "anticlockwise" to "clockwise". ∎

Corollary 1: For u ∈ P_{1,2,3} such that ∀i, j: u ∉ H_{i,j},
sign(d_{1,2}(u)) = sign(d_{2,3}(u)) = sign(d_{3,1}(u)).

Proof: This results directly from the claim, the anti-symmetry of d_{i,j}, and the fact that d_{i,j}(u) = 0 ⇔ u ∈ H_{i,j}.

Claim 3: If u ∉ P_{1,2,3} and u ∉ H_{i,j} for any i, j, then not all of d_{1,2}(u), d_{2,3}(u), d_{3,1}(u) have the same sign (see Figure S4).

Proof: According to Conclusion 1 from Appendix S1, since u is not Pareto optimal, all three gradients g_1, g_2, g_3 lie in the same half-space. Choose a vector w on the line separating the two half-spaces such that the π_z(w × g_n) are all positive (they are either all positive or all negative, since the gradients all lie in the same half-space). If we order the gradients according to their anticlockwise angle from w (these angles are all smaller than π) and name them g_a, g_b, g_c, we get that d_{a,b} > 0 and d_{b,c} > 0 but d_{c,a} < 0, since the anticlockwise angles from g_a to g_b, from g_b to g_c and from g_a to g_c are smaller than π radians (and they are all non-zero, since u is not on any hyperbola), and since d is anti-symmetric, d_{a,c} ≥ 0 ⇒ d_{c,a} ≤ 0. This proves the claim for every assignment of the labels 1, 2, 3 to a, b and c.
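The quantities d_{i,j} used in Claims 2 and 3 are straightforward to evaluate numerically. The sketch below assumes performance functions of the Appendix S1 form, P_i(v) = −(v − v_i^*)^T M_i (v − v_i^*), with hypothetical archetypes and norm matrices chosen only for illustration; the sign test it implements is the one formalized in Corollary 2 below.

import numpy as np

# Hypothetical archetypes and positive-definite norm matrices (illustration only).
v_star = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.3, 1.0])]
M = [np.eye(2),
     np.array([[2.0, 0.5], [0.5, 1.0]]),
     np.array([[1.0, -0.3], [-0.3, 2.0]])]

def g(i, v):
    # Normalized gradient of P_i(v) = -(v - v_i*)^T M_i (v - v_i*)
    grad = -2.0 * M[i] @ (v - v_star[i])
    n = np.linalg.norm(grad)
    return grad / n if n > 0 else grad

def d(i, j, v):
    # d_{i,j}(v) = pi_z(g_i x g_j): z-component of the cross product of the
    # 2D gradients, viewed as embedded in 3D
    gi, gj = g(i, v), g(j, v)
    return gi[0] * gj[1] - gi[1] * gj[0]

def same_sign(v):
    # Off the hyperbolae H_{i,j}, v is Pareto optimal iff d_{1,2}, d_{2,3} and d_{3,1}
    # share a sign (Corollary 2 below); indices are 0-based here
    s = [d(0, 1, v), d(1, 2, v), d(2, 0, v)]
    return all(x > 0 for x in s) or all(x < 0 for x in s)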
Corollary 2: a point ๐ข ∈ ๐, which is not on any hyperbola ๐ป๐,๐ is Pareto optimal if, and only if, ๐ ๐๐ (๐1,2 (๐ข)) = ๐ ๐๐ (๐2,3 (๐ข)) = ๐ ๐๐ (๐3,1 (๐ข)). Now, it is clear that if ๐ข ∈ ๐๐,๐ , in every neighborhood of ๐ข there are points from both sides of ๐๐,๐ , meaning points with ๐๐,๐ > 0 and points with ๐๐,๐ < 0. This means that every point on ๐๐,๐ that is not any other hyperbola ๐ป๐,๐ (i.e. not an intersection point), if it has Pareto optimal points on one side of ๐๐,๐ , on the other side there are no Pareto optimal points (since only ๐๐,๐ changes sign on ๐ป๐,๐ ). If the point ๐ข ∈ ๐๐,๐ is an intersection of two or more hyperbolae ๐ป๐,๐ and some ๐ป๐,๐ (not necessarily on ๐๐,๐ ), then in every neighborhood there are points on ๐๐,๐ that are not 47 intersection points (the number of intersection points between 2 hyperbolae/lines is finite) and therefore in that neighborhood there are points that are not Pareto optimal. Note that the archetypes, for example, are at the intersection of ๐๐,๐ with ๐๐,๐ . Thus – any point ๐ข ∈ ๐๐,๐ for every pair of ๐, ๐, is not in the interior of ๐1,2,3, but since it is in ๐1,2,3 it must be in ๐๐1,2,3. โ Theorem 3: For every ๐ข ∈ ๐1,2,3 , if ๐ข ∉ ๐๐,๐ for any ๐, ๐, then ๐ข ∉ ๐๐1,2,3 Proof 3: First, we will show that ๐ข is not on any hyperbola. If it were on some ๐ป๐,๐ , then ๐๐ (๐ข) × ๐๐ (๐ข) = 0. But since ๐ข ∉ ๐๐,๐ then the gradients ๐๐ and ๐๐ cannot point to opposite directions, so they must point in the same direction. i.e. ๐๐ + ๐ผ๐๐ = 0 but ๐ผ < 0. Since ๐ข is Pareto optimal, then the third gradient ๐๐ (๐ ≠ ๐, ๐) must point in the opposite direction (otherwise there was โ s.t. ∀๐: โ ⋅ ๐๐ > 0, namely โ = ๐๐ +๐๐ 2 ). So we get that ๐๐ and ๐๐ must point in opposite directions, which means that ๐ข ∈ ๐๐,๐ , which is in contradiction to the assumption. So ๐ข is not on any hyperbola, and hence there is a neighborhood of ๐ข that doesn’t include any point of ๐ป๐,๐ for each i,j. If ๐ข is Pareto optimal, then the signs of ๐1,2 , ๐2,3 , ๐3,1 are all equal on ๐ข, but since all ๐′๐ change signs only on the hyperbolae, they all have the same sign in a neighborhood of ๐ข, which means that there is a neighborhood of ๐ข which is Pareto optimal, i.e. – ๐ข is in the interior of ๐1,2,3.โ Theorem 4: ๐๐1,2,3 = ๐1,2 ∪ ๐2,3 ∪ ๐3,1 We showed that ∀๐, ๐: ๐๐,๐ ⊂ ๐๐1,2,3, and that for every ๐ข ∈ ๐1,2,3, such that ๐ข ∉ ๐๐,๐ for any ๐, ๐, then ๐ข ∉ ๐๐1,2,3. All that remains to be shown is that if ๐ข ∉ ๐1,2,3, then ๐ข ∉ ๐๐1,2,3. That is true because ๐1,2,3 is a closed set. In Appendix S1 we showed that: 48 ๐1,2,3 = {(∑๐ ๐ผ๐ ๐๐ )−1 ∑๐(๐ผ๐ ๐๐ ๐ฃ๐∗ ) |๐ผ๐ ≥ 0, ∑3๐=1 ๐ผ๐ = 1} So ๐1,2,3 is the image of the (compact) unit triangle (0 ≤ ๐ผ1 + ๐ผ2 ≤ 1) under the continuous mapping −1 ๐ผ1 , ๐ผ2 โฆ ๐ผ1 , ๐ผ2 , 1 − ๐ผ1 − ๐ผ2 โฆ (∑ ๐ผ๐ ๐๐ ) ๐ ∑(๐ผ๐ ๐๐ ๐ฃ๐∗ ) ๐ (It is continuous since ∑๐ ๐ผ๐ ๐๐ is a non negative combination of positive definite matrices, and not all coefficients are zero, and hence it is positive definite, and hence always invertible). The image of compact sets are compact, and specifically closed, so there cannot be any points outside ๐1,2,3 in ๐๐1,2,3. โ Lemma: For every ๐ ≠ ๐ ≠ ๐, ๐ป๐,๐ and ๐ป๐,๐ can intersect only once except in the archetype ๐ด๐ . Proof: First note that ๐ป๐,๐ goes through any intersection point between ๐ป๐,๐ and ๐ป๐,๐ . Let ๐ก be an intersection point between ๐ป๐,๐ and ๐ป๐,๐ . ๐ก ∈ ๐ป๐,๐ . As shown earlier this implies that ๐๐ (๐ก) โฅ ๐๐ (๐ก). ๐ก ∈ ๐ป๐,๐ ⇒ ๐๐ (๐ก) โฅ ๐๐ (๐ก). 
From the above two conclusions we can conclude that ๐๐ (๐ก) โฅ ๐๐ (๐ก) ⇒ ๐ก is on ๐ป๐,๐ . We see that except the archetypes, each intersection point between 2 of the hyperbolae is actually an intersection point between all 3 hyperbolae. Two different hyperbolae can intersect only 4 times. As each pair of hyperbolae intersect at an archetypes, it means that excluding the archetypes, there are at most 3 intersection points between the hyperbolae, and specifically a total of at most 3 intersection points between any ๐๐,๐ and any ๐๐,๐ . We showed that ๐๐1,2,3 = ๐1,2 ∪ ๐2,3 ∪ ๐3,1. We now want to show that there are points ๐ฃ ∈ ๐1,2,3 such that ๐ฃ ∉ ๐๐,๐ for any {๐, ๐} ∈ {1,2,3} (i.e. – there are Pareto optimal points beside the 3 2-tasks Pareto fronts). 49 Lemma: Let ๐ฃ ∈ ๐๐,๐ such that ๐ฃ is not an intersection point between ๐ป๐,๐ and any other ∗) hyperbola ๐ป๐,๐ . Then, if ๐ฃ = (∑๐ ๐ผ๐ ๐๐ )−1 ∑๐(๐ผ๐ ๐๐ ๐ฃ๐ for ๐ผ๐ ≥ 0, ∑3๐=1 ๐ผ๐ = 1, then ๐ผ๐ = 0, for ๐ ∈ {1,2,3}, ๐ ≠ ๐, ๐. ∗) ∗ Proof: ๐ฃ = (∑๐ ๐ผ๐ ๐๐ )−1 ∑๐(๐ผ๐ ๐๐ ๐ฃ๐ ⇒ ∑๐ ๐ผ๐ ๐๐ (๐ฃ − ๐ฃ๐ ) = 0 . However, we showed that we can ∗ )๐ ∗ ๐๐ (๐ฃ) = −(๐ฃ − ๐ฃ๐ ๐๐ (๐ฃ − ๐ฃ๐ ). Hence, assume that ∑๐ ๐ผ๐ ๐๐ (๐ฃ) = 0. ๐ฃ ∈ ๐๐,๐ , so ๐๐ ||๐๐ ⇒ ๐ผ๐ ๐๐ + ๐ผ๐ ๐๐ = ๐ ๐๐ ⇒ ๐ ๐๐ + ๐ผ๐ ๐๐ = 0 (๐ ∈ {1,2,3}, ๐ ≠ ๐, ๐). Since ๐ฃ is not an intersection point between ๐ป๐,๐ and any other hyperbola, ๐ฃ is not an archetype, so ๐๐ (๐ฃ) ≠ 0. Also, it means that ๐๐ (๐ฃ) โฆ ๐๐ (๐ฃ), as otherwise ๐ฃ would be on ๐ป1,3. It means that ๐ ๐๐ + ๐ผ๐ ๐๐ = 0 ⇔ ๐ = 0 ๐๐๐ ๐ผ๐ = 0. โ Theorem 5: Let ๐ฃ ∈ ๐๐,๐ for any ๐ ≠ ๐. Let ๐ be an open environment of ๐ฃ. Then, there is ๐ฃ ′ ∈ ๐ such that ๐ฃ ′ ∈ ๐1,2,3 โ ๐1,2 ∪ ๐2,3 ∪ ๐3,1 . Proof: As mentioned earlier, ๐1,2,3 is the image of the triangle ๐ = {0 ≤ ๐ผ1 + ๐ผ2 ≤ 1} under the map: −1 ๐: ๐ผ1 , ๐ผ2 โฆ (∑ ๐ผ๐ ๐๐ ) ๐ ∑(๐ผ๐ ๐๐ ๐ฃ๐∗ ) ๐ As shown earlier, ๐ is continuous. ๐({๐ผ1 , ๐ผ2 }) = ๐ฃ ⇒ ๐ฃ = (∑๐ ๐ผ๐ ๐๐ )−1 ∑๐(๐ผ๐ ๐๐ ๐ฃ๐∗ ) where ๐ผ3 = 1 − ๐ผ1 − ๐ผ2 Let ๐ฃ ∈ ๐๐,๐ . Either ๐ฃ is an intersection point between ๐ป๐,๐ and another hyperbola ๐ป๐,๐ , or it is not. First assume that ๐ฃ is not an intersection point between ๐ป๐,๐ and any other hyperbola ๐ป๐,๐ . Let ๐ be an open environment of ๐ฃ that doesn’t intersect any hyperbola besides ๐1,2 . The above lemma implies that for every ๐ฃ ′ such that ๐ฃ ′ ∈ ๐๐,๐ ∩ ๐ , ๐ −1 (๐ฃ ′ ) ⊆ ๐ can’t 50 contain points for which ๐ผ๐ + ๐ผ๐ ≠ 1. Let ๐ผฬ๐ , ๐ผฬ๐ be such that ๐ผฬ๐ ๐๐ (๐ฃ) + ๐ผฬ๐ ๐๐ (๐ฃ) = 0. Such ๐ผฬ๐ , ๐ผฬ๐ exist since ๐ฃ ∈ ๐๐,๐ . Since ๐ is continuous, the origin of ๐ is open in ๐. {๐ผฬ๐ , ๐ผฬ๐ } ∈ ๐ −1 (๐) ⇒ ๐ −1 (๐) is a non-empty open environment of {๐ผฬ๐ , ๐ผฬ๐ } and hence must contain points from the interior of the triangle ๐, i.e there are {๐ผ๐′ , ๐ผ๐′ } ∈ ๐ −1 (๐), ๐ผ๐′ + ๐ผ๐′ < 1 such that ๐({๐ผ๐′ , ๐ผ๐′ }) ∈ ๐. ๐({๐ผ๐′ , ๐ผ๐′ }) can’t be on ๐๐,๐ . Otherwise, as ๐ doesn’t intersect any other hyperbola, we would have gotten by the lemma that ๐ผ๐′ = 0 , or ๐ผ๐′ + ๐ผ๐′ = 1, in contradiction to the assumption. So, ๐({๐ผ1′ , ๐ผ2′ }) is not on ๐๐,๐ but also not on any other hyperbola, as by assumption ๐ does not intersect them. However, ๐({๐ผ1′ , ๐ผ2′ }) ∈ ๐1,2,3 . So, we get that ๐({๐ผ1′ , ๐ผ2′ }) ∈ ๐1,2,3 \๐1,2 ∪ ๐2,3 ∪ ๐3,1 If ๐ฃ is an intersection point between ๐ป๐,๐ and any other hyperbola ๐ป๐,๐ , then since the number of intersection points between the different hyperbolae is finite, there exists ๐ข ∈ ๐ ∩ ๐๐,๐ that is not an intersection point between ๐๐,๐ and any other hyperbola. 
Let ๐ ⊆ ๐ be an open environment of ๐ข, then by the above proof there exists ๐ฃ ′ ∈ ๐ ⊆ ๐ such that ๐ฃ ′ ∈ ๐1,2,3 โ (๐1,2 ∪ ๐2,3 ∪ ๐3,1 ). This is the required ๐ฃ′โ So we showed that the boundary of the 3-tasks Pareto front is exactly the 3 2-tasks Pareto fronts, but they are not identical (i.e. it is not ‘empty’). Now we would like to better characterize the 3-tasks Pareto front. Examine ℜ2 ⁄โ๐,๐ ๐ป๐,๐ . It is divided into connected components, ๐๐ . Theorem 6: Each component ๐๐ is either entirely Pareto optimal or not Pareto optimal at all. Proof: Let ๐ฅ๐ ∈ ๐๐ . By corollary 2, ๐ฅ๐ is Pareto optimal ⇔ ๐1,2 (๐ฅ๐ ) = ๐2,3 (๐ฅ๐ ) = ๐3,1 (๐ฅ๐ ). By lemma 1, ๐๐,๐ changes signs only on ๐ป๐,๐ , which means that the sign of each ๐๐,๐ is constant across each ๐๐ . Hence, if ๐ฅ๐ is Pareto optimal, ๐๐ is entirely Pareto optimal, and if ๐ฅ๐ is not Pareto optimal, ๐๐ is entirely not Pareto optimal. 51 Theorem 7: If ๐๐ is unbounded it is not Pareto optimal Proof: The Pareto front is compact as a continuous image of a compact set (the triangle ๐), and hence it is bounded. Theorem 8: 2 adjacent connected components, ๐๐ and ๐๐ , cannot be both Pareto optimal. Proof: Assume ๐๐ is Pareto optimal. It means that in this area, ๐1,2 = ๐2,3 = ๐3,1. Assume that without loss of generality that ๐๐ and ๐๐ are separated by ๐ป1,2. It means that ๐1,2 (and only ๐1,2 ) has a different sign on ๐๐ and on ๐๐ . Hence, ๐1,2 โข ๐2,3 = ๐3,1 on ๐๐ , so ๐๐ is not Pareto optimal. Theorem 9: Let ๐ข ∈ ๐๐,๐ . Then ๐ข is at the boundary of a Pareto optimal region. Proof: Each ๐ข is at the boundary of 2 connected components ๐๐ . Theorem 5 shows that in every open environment ๐ of ๐ข there are Pareto optimal points that are not on ๐๐,๐ . This implies that one of the connected components is entirely Pareto optimal. According to theorem 8, the second one is necessarily not Pareto optimal. This implies that ๐ข is at the boundary of a Pareto optimal region. Theorem 10: Let ๐ข ∈ ๐ป๐,๐ โ ๐1,2 ∪ ๐2,3 ∪ ๐3,1 . Then the connected components on which boundary ๐ข is, are not Pareto optimal. Proof: ๐ข cannot be Pareto optimal. Since ๐ข ∈ ๐ป๐,๐ ⁄๐๐,๐ , ๐๐ and ๐๐ point to the same direction. ๐๐ (๐ ≠ ๐ ≠ ๐) must also point to the same direction (otherwise ๐๐ would point to the opposite direction than ๐๐ and ๐๐ , implying that ๐ข ∈ ๐๐,๐ , ๐๐,๐ in contradiction). So, all three gradients are in the same half plane. As shown in Appendix S1, this means ๐ข is not Pareto optimal. Since the Pareto front is compact, it is also closed, and has an open compliment. Hence, there is an environment U of ๐ข which is not Pareto optimal. If ๐ข is on the boundary of a connected component ๐๐ , then ๐ ∩ ๐๐ ≠ Φ, meaning that ๐๐ contains points that are not Pareto optimal, and hence ๐๐ is not Pareto optimal. 52 We showed that ๐๐1,2,3 = ๐1,2 ∪ ๐2,3 ∪ ๐3,1, if for every ๐ ≠ ๐ ≠ ๐ ∈ {1,2,3}, ๐ป๐,๐ ≠ ๐ป๐,๐ , yet this statement is true also if there are such ๐, ๐, ๐. Assume that ๐ป๐,๐ = ๐ป๐,๐ . As shown, ๐ป๐,๐ = {๐ข|๐๐,๐ (๐ข) = 0 ⇔ ๐๐ (๐ข) × ๐๐ (๐ข) = 0}, and ๐ป๐,๐ = {๐ข|๐๐ (๐ข) × ๐๐ (๐ข) = 0}. It means that on ๐ป๐,๐ , ๐๐ is parallel to ๐๐ , and ๐๐ is parallel to ๐๐ ⇒ ๐๐ is parallel to ๐๐ ⇒ ๐๐ × ๐๐ = 0 on ๐ป๐,๐ ⇒ ๐๐,๐ = 0 on ๐ป๐,๐ . As explained earlier, this implies that ๐ป๐,๐ = ๐ป๐,๐ . ๐ป๐,๐ divides the space into 2 (if it’s a line) or 3 (if it’s a hyperbola) parts. Each of ๐๐,๐ , ๐๐,๐ , ๐๐,๐ changes sign when passing any branch of ๐ป๐,๐ . So, either they always have the same sign (expect on ๐ป๐,๐ itself), or they never have the same sign. 
As seen before - a point outside of ๐ป๐,๐ is Pareto optimal if and only if ๐๐,๐ , ๐๐,๐ and ๐๐,๐ all have the same sign. So either the entire space (maybe except parts of ๐ป๐,๐ ) is Pareto optimal, or none of it is. However, we showed that the Pareto front is compact, and hence bounded, and hence it cannot be the entire space. So, the Pareto front is placed on ๐ป๐,๐ (which has no interior), i.e. - ๐1,2,3 = ๐๐1,2,3. To show that ๐๐1,2,3 = ๐1,2 ∪ ๐2,3 ∪ ๐3,1 we need to show that ๐1,2,3 = ๐1,2 ∪ ๐2,3 ∪ ๐3,1 . On one hand, ๐1,2 ∪ ๐2,3 ∪ ๐3,1 ⊆ ๐1,2,3. On the other hand, ๐1,2,3 ⊆ ๐ป๐,๐ , so if ๐ข ∈ ๐1,2,3, ๐ข ∈ ๐ป๐,๐ and then ๐1 (๐ข), ๐2 (๐ข), ๐3 (๐ข) are either all aligned, or one is pointing away from the other two. The first case happens if and only if the point is not Pareto optimal, so for ๐ข ∈ ๐1,2,3 the latter must happen. However, in that case ๐ข ∈ ๐๐,๐ for ๐, ๐ ∈ {1,2,3} ⇒ ๐ข ∈ ๐1,2 ∪ ๐2,3 ∪ ๐3,1 ⇒ ๐1,2,3 ⊆ ๐1,2 ∪ ๐2,3 ∪ ๐3,1 ⇒ ๐1,2,3 = ๐1,2 ∪ ๐2,3 ∪ ๐3,1 . 53 Figure S4: For 3 tasks in 2D, the Pareto front is the locus of all points such that all ๐๐,๐ have the same sign. Each of the 2 tasks Pareto fronts, ๐ท๐,๐ (in blue), lies on a hyperbola ๐ฏ๐,๐ . ๐๐,๐ switches sign when passing ๐ฏ๐,๐ . ๐ฏ๐,๐ , ๐ฏ๐,๐ and ๐ฏ๐,๐ are plotted in green, gray and pink, respectively. The Pareto front associated with the three tasks, ๐ท๐,๐,๐ is plotted in light blue. 54 Appendix S8 The resulting Pareto front when one of the performance function is maximized in a region Consider again the case of 3 tasks in ๐ dimensions, each with a point archetype ๐๐ , and a performance function ๐ฬ๐ that decays with an inner-product norm from the archetype. Denote by ๐ the resulting Pareto front. Now consider the case where ๐ฬ1 is truncated – instead of a point archetype there is a region, ๐ด1 , that maximizes performance. Denote the truncated performance function by ๐1 . Since ๐1 is truncated, it means that outside the archetypal region, its contours are identical to those of ๐ฬ1. For convenience - denote ๐ฬ2 , ๐ฬ3 by ๐2 , ๐3 . We would now like to calculate the Pareto front relative to ๐1 , ๐2 , ๐3 . Theorem: The Pareto front relative to ๐1 , ๐2 , ๐3 is composed of ๐2,3 ∪ (๐⁄๐ด1° ), where ๐2,3 is the Pareto front related to ๐2 and ๐3 . Proof: Lemma: If ๐ฅ ∉ ๐, it is not Pareto optimal. Proof: ๐ฅ ∉ ๐. In Appendix S1 we saw that this means that there is ๐ฆ in the vicinity of ๐ฅ such that ∀๐: ๐ฬ๐ (๐ฅ) < ๐ฬ๐ (๐ฆ). This implies that ๐2 (๐ฅ) < ๐2 (๐ฆ), ๐3 (๐ฅ) < ๐3 (๐ฆ). When instead of considering ๐ฬ1 we consider ๐1 , it is possible that instead of performing task 1 better than ๐ฅ, ๐ฆ performs task 1 the same as ๐ฅ (This happens if both ๐ฅ and ๐ฆ are on ๐ด1 ). Hence, ๐ฅ is not Pareto optimal also relative to ๐1 , ๐2 and ๐3 . Lemma: If ๐ฅ ∈ ๐ and ๐ฅ ∉ ๐ด1° , then ๐ฅ is Pareto optimal. Proof: Let ๐ฅ ∈ ๐, ๐ฅ ∉ ๐ด1° . If there was a point ๐ฆ that dominated ๐ฅ, it would mean that ๐๐ (๐ฅ) ≤ ๐๐ (๐ฆ), with at least one proper inequality. ๐2 (๐ฅ) ≤ ๐2 (๐ฆ) or ๐3 (๐ฅ) ≤ 55 ๐3 (๐ฆ) ⇒ ๐ฬ2 (๐ฅ) ≤ ๐ฬ2 (๐ฆ) or ๐ฬ3 (๐ฅ) ≤ ๐ฬ3 (๐ฆ). If ๐1 (๐ฅ) < ๐1 (๐ฆ), it means that ๐ฅ ∉ ๐ด1 and certainly ๐ฬ1 (๐ฅ) < ๐ฬ1 (๐ฆ). For ∈ ๐๐ด1 ๐1 (๐ฅ) = ๐1 (๐ฆ) ⇒ ๐ฬ1 (๐ฅ) ≤ ๐ฬ1 (๐ฆ), and for ๐ฅ ∈ ๐ด1 ๐1 (๐ฅ) = ๐1 (๐ฆ) ⇒ ๐ฬ1 (๐ฅ) = ๐ฬ1 (๐ฆ). So, for ๐ฅ ∉ ๐ด1° , ๐1 (๐ฅ) = ๐1 (๐ฆ) ⇒ ๐ฬ1 (๐ฅ) ≤ ๐ฬ1 (๐ฆ). Thus, ๐๐ (๐ฅ) ≤ ๐๐ (๐ฆ) ⇒ ๐ฬ๐ (๐ฅ) ≤ ๐ฬ๐ (๐ฆ), and ๐๐ (๐ฅ) < (๐ฆ) ⇒ ๐ฬ๐ (๐ฅ) < (๐ฆ), so if ๐ฅ is not Pareto optimal relative to {๐๐ } it is not Pareto optimal relative to {๐ฬ๐ }. 
Since ๐ฅ ∈ ๐, it is Pareto optimal relative to {๐ฬ๐ }, and hence it is Pareto optimal relative to {๐๐ }. Lemma: If ๐ฅ ∈ ๐ ∩ ๐ด1° , then it is either on ๐2,3 or it is not Pareto optimal. Proof: Let ๐ฅ ∈ ๐ ∩ ๐ด10 . Assume ๐ฅ is not on ๐2,3 . We would like to show that in this case, ๐ฅ is not Pareto optimal. There is ๐1 > 0 such that ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ๐ต0 (๐ฅ, ๐1 ) ⊆ ๐ด1 , since ๐ฅ ∈ ๐ด1° . There is ๐2 > 0 such that ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ๐ต0 (๐ฅ, ๐2 ) does not intersect ๐2,3 since ๐ฅ is not on that front (It was shown in Appendix S7 that the Pareto front is closed. The argument brought there applies to any number of dimensions). Let ๐ = min(๐1 , ๐2 ). ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ๐ต0 (๐ฅ, ๐) ⊆ ๐ด1 and does not intersect ๐2,3 . ๐ฅ is not Pareto optimal relative to ๐2 and ๐3 since it is not on ๐2,3 . We saw in Appendix S1 that if x is not Pareto optimal (when the performance functions decay we a norm from a point archetype), there is a direction such that points in this direction close enough to the point dominate it. This means that there is ๐ฆ ∈ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ๐ต0 (๐ฅ, ๐) that performs tasks 2 ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ ฬ and 3 better than ๐ฅ does. ๐ฆ ∈ ๐ต 0 (๐ฅ, ๐) ⊆ ๐ด1 ⇒ ๐ฆ ∈ ๐ด1 ⇒ ๐ฆ performs task 1 the same as ๐ฅ does since they are both on the archetype of task 1 ⇒ ๐ฅ is dominated by ๐ฆ ⇒ ๐ฅ is not Pareto optimal. Lemma: If ๐ฅ ∈ ๐2,3 , then ๐ฅ is pareto optimal (even if it is on ๐ด1 ) Proof: If ๐ฅ ∈ ๐2,3 it means that no other point performs both task 2 and task 3 better than it, which in turn means that no other point performs all 3 tasks better than it, without regard to performance 1. Hence, ๐ฅ is Pareto optimal. 56 When considering the case of performance functions that decay with Euclidean norm, we get a circular shaped archetypal region, and a resulting Pareto front as depicted in Fig. 12 in the main text. For the special case where ๐2 = ๐3 , ๐2 = ๐3 , we effectively get the case of 2 tasks, one maximized in a region and one at a single point. Such a case, where ๐1 and ๐2 /๐3 decay with different inner product norms, is depicted in Fig. 11 in the main text. 57 Appendix S9 Bounds on the Pareto front for general performance functions show that normally it is located in a region close to the archetype We would like to find conditions as general as possible under which the Pareto front will be constrained to an area close to the line between the archetypes. We consider the case of more general performance functions, and of archetypes that are regions instead of points. We start with the case of 2 point-like archetypes: ๐1 and ๐2 . Denote by ๐ถ21 the contour of the first performance function, ๐1 , on which lies ๐2 . Define ๐ต21 as the set of points in which performance 1 is greater or equal to its value on the contour (i.e. ๐ต21 = {๐ฅ|๐1 (๐ฅ) ≥ ๐1 (๐2 )}). In the same manner, denote by ๐ถ12 the contour of the second performance function, ๐2 , on which lies ๐1 , and ๐ต12 = {๐ฅ|๐2 (๐ฅ) ≥ ๐2 (๐1 )}. Claim: Under these conditions, the Pareto front is bounded in ๐ต21 ∩ ๐ต12. Proof: Let ๐ง ∉ ๐ต21 ∩ ๐ต12. There are 3 options: ๐ง ∈ ๐ต21⁄๐ต12 , ๐ง ∈ ๐ต12⁄๐ต21 , ๐ง ∉ ๐ต12 ∪ ๐ต21 . If ๐ง ∈ ๐ต21⁄๐ต12 , it is dominated by ๐1 as ๐2 (๐ง) < ๐2 (๐1 ) since ๐2 ∈ ๐ต12 and ๐1 (๐ง) < ๐1 (๐1 ) since ๐1 is the archetype of task 1 so it maximizes the performance of that task. The same argument shows that if ๐ง ∈ ๐ต12⁄๐ต21 , then ๐ง is dominated by ๐2 . If ๐ง ∉ ๐ต12 ∪ ๐ต21 , it is dominated by any point in ๐ต12 ∪ ๐ต21. Hence, the Pareto front is restricted to the area ๐ต12 ∩ ๐ต21. 
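As an illustration, the bound just derived amounts to a simple membership test for B_2^1 ∩ B_1^2. The sketch below uses hypothetical archetypes and performance functions, chosen only for illustration; the test itself applies to any performance functions.

import numpy as np

def in_bound(z, a1, a2, P1, P2):
    # Membership in B_2^1 ∩ B_1^2, where B_2^1 = {x : P1(x) >= P1(a2)} and
    # B_1^2 = {x : P2(x) >= P2(a1)}; by the claim above, the Pareto front lies here
    return P1(z) >= P1(a2) and P2(z) >= P2(a1)

# Hypothetical archetypes and performance functions (illustration only).
a1, a2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
P1 = lambda x: -np.sum((x - a1) ** 2)
P2 = lambda x: -2.0 * np.sum((x - a2) ** 2)
print(in_bound(np.array([0.5, 0.2]), a1, a2, P1, P2))   # True: inside the bound
print(in_bound(np.array([3.0, 3.0]), a1, a2, P1, P2))   # False: cannot be Pareto optimal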
In this area the Pareto front does not have to be connected. Such scenario can arise when the performance functions are not monotonic, or do not depend monotonically on strictly concave functions (See Fig S5). 58 Consider now the case of ๐ point-like archetypes. Denote by ๐ถ๐๐ the contour of the ๐ ๐กโ performance function, ๐๐ , on which ๐๐ lies. Denote by ๐ต๐๐ the set of points in which performance ๐ is greater or equal to its value on the contour (๐ต๐๐ = {๐ฅ|๐๐ (๐ฅ) ≥ ๐๐ (๐๐ )}). For each ๐, the Pareto front must lie in โ๐๐=1, ๐ต๐๐ , since each point outside this set is ๐≠๐ dominated by ๐๐ . If ๐ฅ ∉ โ๐๐=1, ๐ต๐๐ ⇒ ∀๐ ≠ ๐: ๐ฅ ∉ ๐ต๐๐ ⇒ ๐ฅ performs task ๐ for ๐ ≠ ๐ worse ๐≠๐ than any phenotype in ๐ต๐๐ . As ∀๐: ๐๐ ∈ ๐ต๐๐ , ๐ฅ performs task ๐ worse than ๐๐ . ๐ฅ performs task ๐ worse than ๐๐ . Since ๐๐ is the archetype of task ๐, ๐ฅ performs task ๐ worse than ๐๐ ⇒ ๐ฅ performs all tasks worse than ๐๐ ⇒ ๐ฅ is not Pareto optimal. Conclusion: ๐ฅ ∈ Pareto front ⇒ ๐ฅ ∈ โ๐๐=1, ๐ต๐๐ for every ๐. ๐≠๐ ⇒Pareto front ⊆ โ๐๐=1 โ๐๐=1, ๐ต๐๐ ๐≠๐ Consider again the case of 2 archetypes. Assume now that instead of a point, one of the archetypes is a region. We would like to find a bound to the Pareto front in this scenario. We assume the first archetype ๐ด1 is a closed bounded region. Let ๐ฅ1,2 be the point on ๐ด1 with the best performance of task 2. Since ๐ด1 is compact and ๐2 is continuous, such points exist. As before, let ๐ต21 be the set of points outperforming ๐2 in task 1, ๐ต21 = {๐ฅ|๐1 (๐ฅ) ≥ ๐1 (๐2 )}. Let ๐ต12 be the set of points outperforming ๐ฅ1,2 in task 2, ๐ต12 = {๐ฅ|๐2 (๐ฅ) ≥ ๐2 (๐ฅ1,2 )}. Take a point ๐ง outside of ๐ต21 ∩ ๐ต12. Again there are 3 options: ๐ง ∈ ๐ต12⁄๐ต21 , ๐ง ∈ ๐ต21⁄๐ต12 , ๐ง ∉ ๐ต21 ∪ ๐ต12. If ๐ง ∈ ๐ต12⁄๐ต21 , then ๐1 (๐ง) < ๐1 (๐2 ) since ๐2 ∈ ๐ถ21 and ๐ง ∉ ๐ถ21 , and ๐2 (๐ง) < ๐2 (๐2 ) since ๐2 is the archetype of task 2 so it maximizes the performance of that task. It means that ๐ง is dominated by ๐2 , and hence it is not Pareto optimal. If ๐ง ∈ ๐ต21⁄๐ต12 , then ๐2 (๐ง) < ๐2 (๐ฅ1,2 ) since ๐ฅ1,2 ∈ ๐ต12 and ๐ง ∉ ๐ต12 , and ๐1 (๐ง) ≤ ๐1 (๐ฅ1,2 ) since ๐ฅ1,2 is on ๐ด1 , the archetype of task 1 so it maximizes the performance of that task. Therefore, ๐ง is dominated by ๐ฅ12 and is not Pareto optimal. If ๐ง ∉ ๐ต21 ∪ ๐ต12, it is 59 dominated by any point in ๐ต21 ∪ ๐ต12. Hence, the Pareto front is restricted to the area ๐ต21 ∩ ๐ต12 in this case as well. We further ask what happens in the case where there are 3 archetypes – the first archetype, ๐ด1 is a region, and the 2 other are points - ๐2 , ๐3 . Let ๐ฅ1,2 and ๐ฅ1,3 be the point 3 2 on ๐ด1 with the best performance of task 2 and 3, respectively. Let ๐ต1,2 , ๐ต1,2 be the set of 3 2 points that outperform ๐ฅ1,2 in task 2 and 3, respectively. Let ๐ต1,3 , ๐ต1,3 be the set of points ๐ that outperform ๐ฅ1,3 in task 2 and 3, respectively. By definition - ∀๐ฅ ∈ ๐ต1,๐ , ๐ง ∉ ๐ ๐ต1,๐ : ๐๐ (๐ง) < ๐๐ (๐ฅ). 3 2 In that case, the Pareto front must be contained in ๐ต1,2 ∪ ๐ต1,2 : Any point outside of this set is dominated by ๐ฅ1,2 , since ๐ฅ1,2 is contained in both contours, it performs tasks 2 and 3 better than any point outside of those contours, and task 1 better than any other point 3 2 since it is on ๐ด1 . The same argument applies to ๐ต1,3 ∪ ๐ต1,3 . From the same arguments brought earlier, the front is contained in ๐ต21 ∪ ๐ต23 and in ๐ต31 ∪ ๐ต32. 
Hence, the front is bounded in (B_{1,2}^3 ∪ B_{1,2}^2) ∩ (B_{1,3}^3 ∪ B_{1,3}^2) ∩ (B_2^1 ∪ B_2^3) ∩ (B_3^1 ∪ B_3^2).

If all 3 archetypes are regions, each term can be replaced according to (B_i^j ∪ B_i^k) → (B_{i,j}^j ∪ B_{i,j}^k) ∩ (B_{i,k}^j ∪ B_{i,k}^k), resulting in the Pareto front being bounded by
⋂_{i=1}^{3} ⋂_{j,k=1; j≠i≠k}^{3} (B_{i,j}^j ∪ B_{i,j}^k) ∩ (B_{i,k}^j ∪ B_{i,k}^k).

Conclusion: the Pareto front ⊆ ⋂_{i=1}^{3} (B_{i,j}^j ∪ B_{i,j}^k), where j, k are the two indices different from i.

Figure S5: The Pareto front does not have to be connected for non-monotonic performance functions. A and B: plots of the 2 chosen non-monotonic performance functions, P_1 and P_2; each is a sum of several terms of the form a·e^{−((x−x_0)^2+(y−y_0)^2)^p} with different amplitudes, centers and exponents (the exact coefficients are as specified in the figure). C: The Pareto front related to those performance functions is not connected. C_2^1 – the contour of performance function 1 going through a_2, the archetype of task 2 – is in thick purple. C_1^2 – the contour of performance function 2 going through a_1, the archetype of task 1 – is in thin blue. a_1 and a_2 are red dots. The Pareto front is plotted in red. It can be seen that it is not connected.

Appendix S10
The Pareto front of r strongly concave performance functions is a connected set of Hausdorff dimension of at most r−1

We would like to calculate the Pareto front of a system that needs to perform r tasks in an n-dimensional trait space V = ℝ^n. Each performance function P_i(v) has a single maximum – the archetype v_i^* – and P_i is assumed to be smooth and strongly concave. In Appendix S1 we have shown that for such a system, at a Pareto optimal point v there exists a convex linear combination of {∇P_i(v)}_{i=1}^{r} that equals zero.

For performance functions P_i(v) that decay with inner-product norms, we have shown that if there exists a convex linear combination of {∇P_i(v)}_{i=1}^{r} that equals zero, then v is Pareto optimal. This remains true for strongly concave performance functions:

Lemma 1: For strongly concave {P_i}, if there exists a convex linear combination of {∇P_i(v)}_{i=1}^{r} that equals zero, then v is Pareto optimal.

Proof: By assumption, there exist {α_i ≥ 0}_{i=1}^{r} with Σ_{i=1}^{r} α_i = 1 such that Σ_{i=1}^{r} α_i ∇P_i(v) = 0. Consider the function f = Σ_{i=1}^{r} α_i P_i. Since all the P_i are strongly concave and the α_i are non-negative and not all zero, f is strictly concave. By our assumption ∇f(v) = Σ_{i=1}^{r} α_i ∇P_i(v) = 0, and together these imply that v maximizes f. Since f is monotonic in all the P_i, this implies that v is Pareto optimal.

Hence, v is Pareto optimal iff there is a convex combination of the ∇P_i(v) that equals zero. In addition, for each convex weight vector {α_i}_{i=1}^{r} there exists a v such that Σ_{i=1}^{r} α_i ∇P_i(v) = 0. This is because each of the P_i is strongly concave on ℝ^n, and hence decays to negative infinity. This implies that Σ_{i=1}^{r} α_i P_i decays to negative infinity, so it must attain a maximum. This maximum point v satisfies Σ_{i=1}^{r} α_i ∇P_i(v) = 0. Moreover, by strict concavity, only a single point v satisfies this relation.

To conclude, we found that for every convex weight vector {α_i}_{i=1}^{r} there is a single point v such that Σ_{i=1}^{r} α_i ∇P_i(v) = 0, that this v is Pareto optimal, and that these are all the Pareto optimal points. Hence, the set {α_1, …, α_r | α_i ≥ 0, Σ_{i=1}^{r} α_i = 1} fully determines the Pareto front.
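In practice, this correspondence can be evaluated numerically by maximizing the weighted sum for a chosen weight vector. A minimal Python sketch is given below; the strongly concave performance functions in it are hypothetical, chosen only for illustration.

import numpy as np
from scipy.optimize import minimize

def front_point(alphas, Ps, x0):
    # For strongly concave P_i, each convex weight vector picks out the unique
    # maximizer of sum_i alpha_i P_i; by the argument above this point is Pareto
    # optimal, and all Pareto-optimal points arise this way.
    objective = lambda x: -sum(a * P(x) for a, P in zip(alphas, Ps))
    return minimize(objective, x0).x

# Hypothetical strongly concave performance functions (quadratic peaks).
Ps = [lambda x: -np.sum((x - np.array([0.0, 0.0])) ** 2),
      lambda x: -2.0 * np.sum((x - np.array([1.0, 0.0])) ** 2),
      lambda x: -np.sum((x - np.array([0.3, 1.0])) ** 2)]

# Sweep a slice of the weight simplex to trace out part of the front.
front = [front_point((a, (1.0 - a) / 2.0, (1.0 - a) / 2.0), Ps, np.zeros(2))
         for a in np.linspace(0.0, 1.0, 25)]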
Denote by T := {α_1, …, α_r | α_i ≥ 0, Σ_{i=1}^{r} α_i = 1} the unit (r−1)-simplex; T can be identified with a subset of ℝ^{r−1} (via its first r−1 coordinates). By the above considerations, we can define a function h: T → ℝ^n by assigning to each {α_1, …, α_r} the maximal point of Σ_{i=1}^{r} α_i P_i. By the above results, we know that the image of T under h equals the Pareto front.

Lemma 2: h is continuously differentiable.

Proof: Take α ≡ (α_1, …, α_r) ∈ T, and take v = h(α) ∈ ℝ^n. Then k(α, v) = 0, where k(x, y) ≡ Σ_{i=1}^{r} x_i ∇P_i(y). k is continuously differentiable. Denote k = (k_1, …, k_n). Then the Jacobian of k with respect to y is

(∂k_a/∂y_b)_{a,b=1}^{n} = Σ_{i=1}^{r} α_i ∇^2 P_i(y).

Since P_i is strongly concave, ∇^2 P_i is negative definite for every i, and hence Σ_{i=1}^{r} α_i ∇^2 P_i is negative definite, and hence not singular. According to the implicit function theorem, this implies that there exist an open environment U of α (U itself is not necessarily contained in T), an open environment V of v, and a unique continuously differentiable function g: U → V such that ∀x ∈ U: k(x, g(x)) = 0. However, we know that ∀α′ ∈ T there exists a single v′ such that k(α′, v′) = 0. This implies that h|_{U∩T} = g|_{U∩T} ⇒ h is continuously differentiable at α. This is true for every α ∈ T, and hence h is continuously differentiable on T.

Corollary: For strongly concave and smooth performance functions, the Pareto front is connected.

Proof: We saw that the Pareto front is the image of T under h. T is connected (as the unit (r−1)-simplex) and h is continuous, and hence the Pareto front is connected.

Corollary: The Pareto front has Hausdorff dimension of at most r−1.

Proof: This follows since T has Hausdorff dimension r−1 (as the unit (r−1)-simplex) and h is C^1. ∎

Note that v is Pareto optimal relative to P_1, …, P_r ⇔ ∄v′ ≠ v s.t. ∀i: P_i(v) < P_i(v′) ⇔ ∄v′ ≠ v s.t. ∀i: q(P_i(v)) < q(P_i(v′)), where q: ℝ → ℝ is a monotonically increasing function, ⇔ v is Pareto optimal relative to q∘P_1, …, q∘P_r.

Corollary: The Pareto front relative to performance functions that are each a smooth, monotonically increasing function of a strongly concave function is connected and has Hausdorff dimension of at most r−1.

References
1. Grant PR, Abbott I, Schluter D, Curry RL, Abbott LK (1985) Variation in the size and shape of Darwin's finches. Biological Journal of the Linnean Society 25:1–39.
2. Barber CB, Dobkin DP, Huhdanpaa H (1996) The quickhull algorithm for convex hulls. ACM Trans Math Softw 22:469–483.
3. Klee V, Laskowski MC (1985) Finding the smallest triangles containing a given convex polygon. Journal of Algorithms 6:359–375.
4. Gerstenhaber M (1951) Theory of convex polyhedral cones. In: Activity Analysis of Production and Allocation (Cowles Commission Monograph No. 13), ed Koopmans TC, Wiley, New York, Chap. XVIII.