[Figures: two canonical biological control systems. E. coli chemotaxis: ligand binding to MCPs, fast CheA/CheW/CheB/CheR/CheY/CheZ signaling with slow methylation (+CH3/−CH3) adaptation, ATP/ADP/Pi turnover, and the CW-rotating flagellar motor. The sigma-32 heat-shock response: heat stabilizes sigma-32; transcription and translation of the rpoH gene, with feedback and feedforward loops regulating transcription and translation of the heat-shock proteins, the chaperones DnaK/GroEL/GroES and the proteases FtsH/Lon.]

Collaborators and contributors (partial list)
• Theory: Parrilo, Carlson, Paganini, Papachristodoulou, Prajna, Goncalves, Fazel, Lall, D'Andrea, Jadbabaie, many current and former students, …
• Web/Internet: Low, Willinger, Vinnicombe, Kelly, Zhu, Yu, Wang, Chandy, Effros, …
• Biology: Csete, Yi, Arkin, Simon, AfCS, Borisuk, Bolouri, Kitano, Kurata, Khammash, El-Samad, Gross, Endelman, Sauro, Hucka, Finney, …
• Physics: Mabuchi, Doherty, Barahona, Reynolds, Asimakapoulos, …
• Turbulence: Bamieh, Dahleh, Bobba, Gharib, Marsden, …
• Engineering CAD: Ortiz, Murray, Schroder, Burdick, …
• Disturbance ecology: Moritz, Carlson, Robert, …
• Finance: Martinez, Primbs, Yamada, Giannelli, …
(A mix of Caltech faculty, other Caltech researchers, and outside collaborators.)
For more details: www.cds.caltech.edu/~doyle and www.aut.ee.ethz.ch/~parrilo
And thanks to Carla Gomes for helpful discussions.
Subthemes of this program
• Scalability of algorithms and protocols
– Large network and physical problems
– Decentralized, asynchronous, multiscale
– Computational complexity: P/NP/coNP
• Approaches
– Duality
– Randomness
• Workshop II is part of this program
• Workshop last week on "Phase Transitions and Algorithmic Complexity"

The Internet hourglass
Applications: Web, FTP, Mail, News, Video, Audio, ping, napster
Transport protocols: TCP, SCTP, UDP, ICMP
IP (the waist of the hourglass)
Link technologies: Ethernet, 802.11, power lines, ATM, optical, satellite, Bluetooth
"Everything on IP; IP on everything." (From Hari Balakrishnan)

Towards a theory of the Internet
• The well-known original design principles are a rudimentary "theory of the Internet."
• This is a nearly pure robustness theory (little else is being optimized).
• Can we provide a "deep," complete, and coherent theory of internetworking? (Like standard communications and controls.)
• If we can't say something systematic about the Internet protocols, we're probably kidding ourselves about our ability to treat more complex problems.
• Nevertheless, this is just a "warm-up" for a theory of ubiquitous embedded software, protocols, and networks for real-time control of everything, everywhere.

Network protocols: HTTP, files, TCP, IP, packets; routing; provisioning.
Vertical decomposition: the protocol stack (HTTP / TCP / IP / Routing / Provisioning).
Horizontal decomposition: each level is decentralized and asynchronous.

Vertical decomposition
• "Breaks" standard communications and control theories.
• A coherent, complete theory is missing but possible. A first cut is nearly done.
• In what sense, if any, is this optimal?
• What needs to be done to fix it?
Horizontal decomposition: Routing, Provisioning.

Key elements of the new theory
• Primal/dual vertical and horizontal decomposition (Kelly et al., Low et al.)
• Source coding into mice and elephants. (Appears to be "universal," but needs more study.)
• Congestion control for bandwidth utilization and minimal delay. The proofs use relaxations (but are still handcrafted).
• How bad is shortest-path (low delay for mice) routing for elephants in a "well-provisioned" network? Conjecture: not bad.
• Vertical and horizontal integration can be made "nearly" optimal in an asymptotic sense. (In what sense?)
• Lots of people here are working out the details (the IPAM team!). Stay tuned.

Vertical and horizontal decomposition
• Networking protocols
• Multiscale physics
• Biological networks
• Business, finance, and economic organization
• A unifying theoretical framework?

What's next?
• Scalable, integrated robustness analysis and software/protocol verification for hybrid control of nonlinear systems.
• New extensions to robust control using sums of squares and semidefinite programming (SOS/SDP) offer extraordinary promise.
• Already demonstrated on a wide array of complex problems (controls, MAX CUT, quantum entanglement).
• Potentially deep connections between verification and robustness.
• Huge implications for biology and physics.
• That's the good news.
Communications and computing
[Diagram: compute/act/sense loops closed through the environment; computation, devices, control, dynamical systems.]
From: software to/from humans; human in the loop.
To: software to software; full automation; integrated control, communications, and computing; closer to the physical substrate.
• New capabilities and robustness
• New fragilities and vulnerabilities

Good news, bad news, good news
• Good: powerful new capabilities enabled by "embedded, everywhere"
• Bad: frightening new potential for massive cascading failure events
• Good: the need for new mathematical tools for verifying robustness of embedded networking
– Embedded: ubiquitous, sensing, actuating
– Networking: connected, distributed, asynchronous
• This represents an enormous change, the impact of which is not fully appreciated
• Robustness and verifiability of highly autonomous control systems with embedded software is the central challenge
• Until recently, there were no promising methods for addressing this full problem
• Even very special cases have had limited theoretical support for systematic verification of robustness
• Everything has changed!

Vertical decomposition (HTTP / TCP / IP)
• "Breaks" standard communications and control theories.
• Duality as a method for decomposition
• Distributed and asynchronous control
• Other applications: robustness analysis; a posteriori error bounds for PDEs
Horizontal decomposition: Routing, Provisioning.

Robust hybrid/nonlinear systems theory of embedded networks?
• "Theory" without scalable algorithms.
• Linear theory plus bounds, with scalable algorithms.
• Hacking (scalable algorithms without theory).
• Most research: not scalable, no theory.
• The goal: theory with scalable algorithms, i.e. provably robust, scalable protocols for control over embedded networks.
Robustness verification of embedded control software/hardware: theory with scalable algorithms, and provably robust, scalable Internet protocols (rather than hacking).

Key issues
1. Robustness/fragility: uncertainty in components, environment, modeling, assumptions, and computational approximations
2. Verifiability: short proofs of robustness
3. Complexity: extreme, highly structured internal complexity is typically needed to produce verifiably robust behavior
4. Scarce resources: all tradeoffs are aggravated by efficiency and scarce resources

Robustness, evolvability/scalability, verifiability
[Diagram: ideal performance vs. typical design, with IP marked; axes: robustness, evolvability, verifiability.]
• Relative to "nominal" performance under ideal conditions, robust performance typically requires
– greater internal complexity
– some loss of nominal performance
• Tradeoffs between robustness, evolvability, and verifiability seem less severe (e.g. IP)
• That a system is not merely robust, but verifiably so, is an important engineering requirement and a major research challenge
• There is much anecdotal evidence, and some new theoretical support as well, for the compatibility of robustness, evolvability, and verifiability
• Verifiability in forward engineering translates into comprehensibility in reverse engineering of biological systems
• This research direction may be good news for understanding complex biological processes

Computational complexity
Assume you already know: P/NP and NP-completeness; SAT and 3-SAT. …but not necessarily: NP vs. coNP; duality and relaxations.

∃x ∈ Rⁿ : F(x) ≥ 0?  Typically NP-hard.
• If true, there is always a short proof (a feasible x).
• Which may be hard to find.

∀x ∈ Rⁿ : F(x) ≥ 0?  Typically coNP-hard.
• The more important problem.
• Short proofs may not exist.

Fundamental asymmetries*: between P and NP, and between NP and coNP. (*Unless they're the same…)

What makes a problem "harder"?
∃x ∈ Rⁿ : F(x) ≥ 0? (Satisfiable or feasible.) Easy to find solutions?
∀x ∈ Rⁿ : F(x) ≥ 0? (Unsatisfiable or infeasible.) Easy to find proofs?
Sweeping a threshold γ = 0, 1, …, k, k+1 in "∃x : F(x) ≥ γ?" gives a trivially sharp "phase transition" at γ = max_x F(x). Complexity?

Example: Satisfiability
• SAT: given a formula in propositional calculus, is there an assignment to its variables making it true?
• We consider clausal form, e.g.: (a OR (NOT b) OR c) AND (b OR d) AND (b OR (NOT d) OR a)
• a, b, c, and d are Boolean (True/False) variables.
• The problem is NP-complete (Cook 1971).
• Shows the surprising "power" of SAT for encoding computational problems.

Generating hard random formulas
• Key: use the fixed-clause-length model (Mitchell, Selman, and Levesque 1992).
• Critical parameter: the ratio of the number of clauses to the number of variables.
• The hardest 3-SAT problems occur at ratio ≈ 4.3.

Hardness of 3-SAT
[Plots: number of DP calls vs. clause-to-variable ratio for 20-, 40-, and 50-variable formulas, showing an easy-hard-easy pattern peaking near ratio 4.3; and the fraction of formulas satisfiable vs. ratio, crossing 50% near 4.3. Mitchell, Selman, and Levesque 1991.]
• At low ratios: few clauses (constraints), many satisfying assignments, easily found.
• At high ratios: many clauses, and inconsistencies are easily detected.

The 4.3 point
• Referred to as both a SAT transition and a complexity transition.
• Is the SAT transition either necessary or sufficient for the complexity transition?
• Connections with phase transitions in statistical physics?
• Are the transitions "sharp" in the large-size limit?

Theoretical status of the threshold
• A very challenging problem…
• Current status: the 3-SAT threshold lies between 3.45 and 4.6 (Motwani et al. 1994; Achlioptas et al. 2001; Kirousis 2002; Broder and Suen 1993; Dubois 2000; Achlioptas and Beame 2001; Friedgut 1997; etc.)
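The fixed-clause-length experiment described above is easy to reproduce at small scale. A minimal sketch (the function names and parameters are mine, and the solver is brute force, so it is only usable for small variable counts):

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Fixed-clause-length model: each clause picks 3 distinct variables,
    each negated with probability 1/2 (Mitchell-Selman-Levesque style)."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def is_satisfiable(n_vars, clauses):
    """Brute force over all 2^n assignments (fine for small n).
    A positive literal v is true when variable v is assigned True."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

def sat_fraction(n_vars, ratio, trials=50, seed=0):
    """Estimate the fraction of random formulas that are satisfiable at a
    given clause-to-variable ratio; for 3-SAT the 50% crossing is near 4.3."""
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    hits = sum(is_satisfiable(n_vars, random_3sat(n_vars, n_clauses, rng))
               for _ in range(trials))
    return hits / trials
```

Sweeping `ratio` from 2 to 8 at a dozen variables reproduces the picture qualitatively, though the sharp easy-hard-easy peak needs larger instances and a real DP/DPLL solver.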
• Other problems are better characterized (e.g. NPP, number partitioning).

SAT phase transitions vs. complexity?

Quasigroups or Latin squares
• A quasigroup (Latin square) of order n is an n-by-n matrix such that each row and column is a permutation of the same n colors. [Example: order 4 with 32% preassignment. Gomes and Selman 96]

Quasigroup with holes (QWH)
• Given a full quasigroup, "punch" holes into it (e.g. 32% holes).
• Always completable (satisfiable), so there is no SAT transition.
• Yet it appears to have a complexity transition (easy-hard-easy).
• There are lots of problems with the statistical-physics story.

Why may it be reasonable that math, algorithms, and randomness are so effective?
• Robust systems are verifiably so?
• Do only robust systems persist as coherent, structured objects of study (universes, solar systems, planets, life forms, protocols, …)?
• If so, then mostly robust (and verifiably so) systems are around for us to study.

Lattice models
What can we do with lattices that will be easy to understand, yet relevant to the "real" computational complexity problems that we most care about? Key abstractions:
1. Robustness/fragility
2. Verifiability
3. Complexity

[Diagram: square lattices at densities .2, .4, .6, .8, where density = fraction of occupied sites (black); low densities are not connected, high densities are connected.]

Focus on "horizontal" paths. Some (nonstandard) definitions:
• "Horizontal" paths connect only on edges (4 neighbors; ordinary square site percolation).
• "Vertical" paths in empty sites are allowed to connect through corners or edges (8 neighbors).
• Critical phase transition at density ≈ .59…

• Robustness is provided by barriers in some state space. These prevent cascading failure events.
• Lattices offer a crude abstraction, in that paths can be thought of as barriers, with robustness to perturbations in the lattice.
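The "horizontal path" question is mechanical to check for a given lattice. A sketch (my code, not from the talk): breadth-first search over occupied sites with 4-neighbor, edge-only connectivity, i.e. ordinary square site percolation:

```python
from collections import deque

def has_horizontal_path(grid):
    """True iff the occupied sites (1s) connect the left edge to the right
    edge through 4-neighbor (edge-sharing) steps."""
    n_rows, n_cols = len(grid), len(grid[0])
    queue = deque((r, 0) for r in range(n_rows) if grid[r][0])
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == n_cols - 1:          # reached the right edge
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < n_rows and 0 <= cc < n_cols
                    and grid[rr][cc] and (rr, cc) not in seen):
                seen.add((rr, cc))
                queue.append((rr, cc))
    return False
```

Sampling random grids over a sweep of densities and plotting the fraction with a left-right path reproduces the sharpening transition near density .59 as the lattice grows.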
• Verification complexity is measured by the length of the proof required to verify robustness.
• Lattices offer a variety of crude abstractions of this as well. The length of a minimal path is a simple measure of "proof length."

Very special features of lattices:
• The dual and primal problems are "essentially" the same.
• There is no duality gap.

Caution: a potential source of confusion. In general, barriers are (d−1)-dimensional (dual) cuts stopping 1-dimensional (primal) paths in a d-dimensional lattice. Barriers in 3-d lattices are 2-d cuts; barriers in 1-d lattices are 0-d cuts.

Critical phase transition at density ≈ .59…

Lattices offer pedagogically useful but potentially dangerously misleading simplifications, which are thus both strengths and weaknesses:
1. Internal vs. external complexity: real biology and technology use extremely complex hierarchical organization in order to create robust and verifiably simple behavior. Lattices allow no distinction between complex organization and complex behavior. This can be very misleading.
2. Computational complexity: most lattice computational problems are in P and thus easily explored, but they fail to illustrate the P/NP asymmetry. We will rely on notions of complexity that are good analogies, but not precisely comparable.
3. Duality: duality is greatly simplified and transparent. This makes exposition easy but hides the NP/coNP asymmetry, which is central to the general problem.

Lattices offer enormous (and potentially dangerous) simplifications:
• Robustness problem = existence of a horizontal path
• Verification = prove the existence of a horizontal path
• Complexity = minimum horizontal path length (length of proof)
• Model fragility = minimum number of site changes needed to break all horizontal paths (= create a vertical path)
Note: I'm going to draw small lattices and rely on your imagination for what large lattices would look like.

An alternative definition of "complexity":
• The "computer" is you, looking at the lattice and determining by inspection whether there is a path or not.
• This can be easy or hard, depending on the density.
• This is not exactly the same as minimal path length, but close enough for now.
• We'll do a very informal story first, and then make it rigorous.

[Diagram: "Does a horizontal path exist?" across densities .2 to .8: No/Easy at low density, Yes/Easy at high density, Hard near the middle.]

For random lattices there are 4 regimes, covering all combinations of Easy/Hard and Yes/No. The hard cases correspond to lattices of intermediate density, near the critical point. The easy cases are either high or low densities, which correspond to Yes or No, respectively.

It is much easier to see with all the clusters colored. But that's cheating, because determining the clusters is essentially the computational problem.

The orthodox story: hard problems are associated in some way with the phase transition.

The counterexamples, exactly the opposite of criticality: low-density/Yes lattices that are easy and robust, and high-density/No lattices that are hard and fragile. Four attributes:
1. Yes or no
2. Easy or hard
3. Low or high density
4. Robust or fragile (to perturbations)
That gives 16 possible combinations a priori; restricting to lattices with a path (Yes, low density but connected, or high density) leaves 8. Hard implies fragile (we'll prove this later).
So only 6 of the 8 possibilities exist, and the critical density is nothing special. We will prove that these, and only these, implications hold.

[Diagrams: example lattices for each combination of Easy/Hard and Robust/Fragile, at low density (but connected) and at high density.]

All interesting real-world problems are in this regime: efficient, highly structured, rare configurations, using scarce (limited) resources. Automotive examples: air bags, EGR control, electronic fuel injection, electronic ignition, electric power steering (PAS), temperature control, anti-lock brakes, active suspension, electronic transmission, cruise control.

Hard and robust: impossible (hard implies fragile). Easy and robust at low density: improbable in random lattices.

Theorem: Fragility ≥ Complexity + Scarcity. (Proof tonight.)
Random lattices are complex (and fragile) only at the critical phase transition.

Definitions. Assume there is a connected (horizontal) path of minimal length ℓ.
n = length of side
ρ = density
ℓ = MinPath length (a typical "minimal" path through occupied sites)
b = MinCut barrier length (a typical "minimal" cut is a vertical path of length b through empty sites)
Assume a path exists. (Otherwise L = F = ∞.)
Necessarily ρ ≥ 1/n and n ≤ ℓ ≤ n². Define
S = −log ρ    (resource scarcity),      0 ≤ S ≤ log n
L = log(ℓ/n)   (path-length complexity),  0 ≤ L ≤ log n
F = log(n/b)   (path fragility),        0 ≤ F ≤ log n
where ℓ = MinPath length and b = MinCut barrier length.

Theorem: F ≥ L + S (Fragility ≥ Complexity + Scarcity).

Proof (Vinnicombe & sushi): to provide robustness to b changes, there must be at least b independent paths, which by assumption each have length at least ℓ. Necessarily ρn² ≥ ℓb, or equivalently ρ·(n/b) ≥ ℓ/n. Take logs of both sides.

This is "maximally tight" in the sense that lattices and paths can be:
1. Resources: scarce or rich
2. Existence of a path: yes or no
3. Complexity: hard or easy
4. Perturbations: fragile or robust
and anything is possible, consistent with the theorem. We'll just consider the 8 cases with paths.

[Cube diagram: axes F = log(n/b) (robust to fragile), L = log(ℓ/n) (easy to hard), S = −log ρ (rich to scarce).]

Example: a single straight path (one occupied row) has L = 0 and F = S. This is the most robust configuration possible at that scarcity: the theorem's bound F ≥ L + S is met with equality.
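With the definitions above, S, L, and F are one-liners, and the example lattices can be checked numerically (a sketch; the function name is mine):

```python
import math

def slf(n, rho, path_len, barrier):
    """Scarcity, path-length complexity, and fragility for an n-by-n
    lattice with density rho, minimal horizontal path length path_len (l),
    and minimal cut barrier length barrier (b)."""
    S = -math.log(rho)           # resource scarcity,       0 <= S <= log n
    L = math.log(path_len / n)   # path-length complexity,  0 <= L <= log n
    F = math.log(n / barrier)    # path fragility,          0 <= F <= log n
    return S, L, F
```

A full lattice (rho = 1, l = n, b = n) gives S = L = F = 0: rich, easy, robust. A single straight row (rho = 1/n, l = n, b = 1) gives L = 0 and F = S = log n, meeting the theorem's bound F ≥ L + S with equality.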
Easy and fragile: F = log n > S, L = 0 (a short path in a rich lattice).

Hard and fragile, with F = S + L: to construct asymptotically tight cases where ρn² ≈ ℓb, consider a lattice built from m serpentine cells of width d separated by barriers of width b, so that n = m(b + d) + d and the minimal path has length ℓ = n + m(n − 2b − 1) ≈ nm. Taking the limits n/m ≫ 1 and (b + d)/n ≪ 1 gives ρn² ≈ ℓb. By constructing such lattices with n ≫ m ≫ 1, any ρn² ≈ ℓb with ρ < 1 is achievable.

The fragile face: F = S + L. The four corners: most fragile (F ≫ S) and most robust (F = S), scarce and rich, with easy and hard in between. Random lattices are complex and fragile only at criticality; efficient and robust is far from random. (Again the automotive examples: air bags, EGR control, electronic fuel injection, electronic ignition, electric power steering, temperature control, anti-lock brakes, active suspension, electronic transmission, cruise control.)

Fragility ≥ Complexity + Scarcity:
• How general is this?
• It seems to hold in every theory where it has been investigated.
• There is an extensive literature on ill-conditioning in LPs and numerical linear algebra.
• Anecdotally, it seems to capture the essence of many complexity problems.
• It needs to be combined with laws constraining net system fragility.

Phase transitions vs. complexity: bad news and good news
• Bad news? Some hoped-for connections between phase transitions and complexity are not there.
• Good news? The ideas are still interesting.
• Lots more really good news!
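A family that achieves the bound exactly: whenever the occupied set is a single self-avoiding path, b = 1 and ℓ = ρn², so F = L + S with equality. A sketch using a monotone staircase path (this construction is mine, simpler than the serpentine lattice sketched in the talk):

```python
import math

def staircase_slf(n):
    """S, L, F for an n-by-n lattice whose only occupied sites form a
    staircase path (i, i) and (i, i+1) from the left edge to the right."""
    cells = {(i, i) for i in range(n)} | {(i, i + 1) for i in range(n - 1)}
    length = len(cells)       # minimal path length l = 2n - 1
    barrier = 1               # removing any single path cell cuts it
    rho = length / n**2       # every occupied cell lies on the path
    S = -math.log(rho)
    L = math.log(length / n)
    F = math.log(n / barrier)
    return S, L, F
```

Here F = log n exactly equals L + S for every n, so these scarce, easy, maximally fragile lattices sit on the theorem's boundary.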
• The alternative is much richer and more useful, and connects in interesting ways with phase transitions.
• New algorithms, new mathematics, new practical applications, …
• And deep implications for physics.

Physics and the edge of chaocritiplexity
• Internet traffic and topology
• Biological and ecological networks
• Evolution and extinction
• Earthquakes and forest fires
• Finance and economics
• Social and political systems
A rich new unifying theory of complex control, communication, and computing systems:
• Ubiquity of power laws
• Coherent structures in shear-flow turbulence
• Macro dissipation and irreversibility vs. micro reversibility
• Quantum entanglement, measurement, and the QM/classical transition
• A growing group of physicists and experimentalists is joining this effort (Carlson, Mabuchi, Doherty, Gharib, …)

More powerful bounds for the co-NP side: semialgebraic geometry + convex optimization (SDP)
• Polynomial-time computation.
• Never worse than the standard.
• Exhausts co-NP.

∃x ∈ Rⁿ : F(x) ≥ 0? For polynomial F this is an NP-hard problem. Sweeping a threshold γ, there is a trivially sharp "phase transition" at γ = max_x F(x). Complexity?

Special case: scalar QP
M = {x ∈ Rⁿ | x′Ax + b′x + c ≥ 0}
  = {x ∈ Rⁿ | [1 x′] [c b′/2; b/2 A] [1; x] ≥ 0}
Assume for nontriviality that A ≺ 0.
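In one dimension the scalar QP is just the quadratic formula: for F(x) = ax² + bx + c with a < 0, the global maximum is c − b²/(4a), attained at x* = −b/(2a), so feasibility of F(x) ≥ 0 flips exactly at c = b²/(4a), the 1-D case of c = b′A⁻¹b/4. A quick numeric check (my code):

```python
def qp_max(a, b, c):
    """Global maximum of F(x) = a*x**2 + b*x + c, assuming a < 0
    (the 1-D version of the slide's nontriviality condition A < 0)."""
    assert a < 0
    x_star = -b / (2.0 * a)
    return c - b * b / (4.0 * a), x_star

def feasible(a, b, c):
    """'Phase transition': {x : F(x) >= 0} is nonempty iff max F >= 0,
    i.e. iff c >= b**2/(4a). (Note b**2/(4a) <= 0 since a < 0.)"""
    return qp_max(a, b, c)[0] >= 0
```

This echoes the slide's point that where the transition sits (c relative to b′A⁻¹b/4) and how hard the question is (the conditioning of b′A⁻¹b) are only trivially related.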
Special case: scalar QP
For A ≺ 0, M = {x ∈ Rⁿ | x′Ax + b′x + c ≥ 0} is nonempty iff max_x F(x) = c − b′A⁻¹b/4 ≥ 0.
1) "Phase transition" when c = b′A⁻¹b/4.
2) Complexity depends only on b′A⁻¹b.
3) 1) and 2) are only trivially related.

∃x ∈ Rⁿ : F(x) ≥ 0?
• Polynomial functions: an NP-hard problem.
• A "simple" relaxation (Shor): find the minimum γ such that γ − F(x) is a sum of squares (SOS).
• An upper bound on the global maximum.
• Solvable using SDP, in polynomial time.
• A concise proof of nonnegativity.
• Surprisingly effective (Parrilo & Sturmfels 2001).
• Exactly as in the QP case, a SAT "phase transition" does not imply complexity.
• SOS/SDP relaxations are much faster than standard algebraic methods (quantifier elimination, Gröbner bases, etc.).
• Before SOS/SDP, one might have conjectured that this was an example of phase-transition-induced complexity.
• SOS/SDP gives a certified upper bound in polynomial time.
• If exact, one can recover an optimal feasible point.
• Surprisingly effective: in more than 10000 "random" problems, always the correct solution…
• Bad examples do exist (otherwise NP = co-NP), but they are "rare":
– Variations of the Motzkin polynomial.
– Reductions of hard problems (e.g. NPP is nice).
– None could be found using random search…

Sums of squares (SOS)
A sufficient condition for nonnegativity: ∃ fᵢ(x) : p(x) = Σᵢ fᵢ²(x)?
• A convex condition (Shor, 1987).
• Efficiently checked using SDP (Parrilo): write p(x) = zᵀQz with Q ⪰ 0, where z is a vector of monomials. Expanding and equating sides, one obtains linear constraints among the Qᵢⱼ. Finding a PSD Q subject to these conditions is exactly a semidefinite program (LMI).

Nested families of SOS (Parrilo): ∀x, p(x) ≥ 0 iff ∃ g(x) SOS and fᵢ(x) such that g(x)·p(x) = Σᵢ fᵢ²(x). Nested families gₖ(x), e.g. (Σᵢ xᵢ²)ᵏ, exhaust co-NP.

Conjectures on why such a boring "phase transition":
• One polynomial is generically robust, therefore no complexity.
• QPs capture the essence of this.
• One can make up other "phase transitions" which create fragilities, and thus the possibility of complexity.

∃x ∈ Rⁿ, f(x) ≥ 0?
where f is a multivariate polynomial.

M ⊇ M₁ ⊇ M₂ ⊇ … : a nested family of model sets. P ⊆ P₁ ⊆ P₂ ⊆ … : a nested family of proof sets. Searching for a counterexample (an x ∈ M) is the NP side; searching for a proof (that M is empty) is the coNP side.

Positivstellensatz: {x ∈ Rⁿ : fᵢ(x) ≥ 0, gᵢ(x) = 0} is empty iff ∃ f ∈ cone(fᵢ), g ∈ ideal(gᵢ) : f + 1 + g = 0.
• Convex, but infinite-dimensional.
• Efficient (P-time) search of subsets (relaxations) using SOS/SDP (Parrilo).
• Guaranteed to converge.
Search for a simple counterexample; search for a short proof.

Special case: LP.
M = {x ∈ Rⁿ | Ax + b ≥ 0}
P = {λ ∈ R^(1×m) | λ ≥ 0, λb < 0, λA = 0}
cone(Ax + b) ⊇ {λ(Ax + b) | λ ≥ 0}, and λ(Ax + b) + 1 = 0 with λA = 0, λb = −1 certifies emptiness.

NPP (number partitioning): given data v₁, …, vₙ, is there a partition into S₁ and S₂ with Σ_{k∈S₁} vₖ = Σ_{k∈S₂} vₖ? Choose the vₖ to be random n-bit integers: the problem is complex and fragile (fragile = large changes in the solution from small changes in the data).

As a semialgebraic feasibility problem: {x ∈ Rⁿ : xₖ² − 1 = 0, Σₖ xₖvₖ = 0}, where xₖ = ±1 encodes the partition. For random n-bit integers vₖ this is very unlikely to be feasible. Contrast with a random polynomial.

• Complexity is caused by fragility (ill-conditioning).
• Another example: purely satisfiable QCP.
• Phase transitions are, in general, unrelated to complexity.
• Random scalar QP problems are generically robust (well-conditioned) and thus simple:
M = {x ∈ Rⁿ | x′Ax + b′x + c ≥ 0}
1) "Phase transition" when c = b′A⁻¹b/4.
2) Complexity depends only on b′A⁻¹b.
3) 1) and 2) are only trivially related.

Phase transitions vs. complexity. More powerful bounds for the co-NP side: semialgebraic geometry + convex optimization (SDP).
• Polynomial-time computation.
• Never worse than the standard.
• Exhausts co-NP.

A key insight: think of LMIs as quadratic forms, not as matrices.
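The NPP feasibility question above can be explored directly for small n by brute force over the ±1 sign vectors (my code; exponential, like the problem itself):

```python
from itertools import product

def best_gap(values):
    """Minimum of |sum_k x_k * v_k| over all sign vectors x_k in {-1, +1}.
    A gap of 0 means the signs encode an exactly balanced partition, i.e.
    the set {x : x_k**2 = 1, sum_k x_k v_k = 0} is nonempty."""
    return min(abs(sum(x * v for x, v in zip(signs, values)))
               for signs in product((-1, 1), repeat=len(values)))
```

For random n-bit integers vₖ a zero gap is very unlikely (as the slide notes), and the minimizing sign vector changes drastically under small changes to the data: fragility.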
LMIs: quadratic forms that are positive definite. SOS: multivariable forms that are sums of squares.
• General forms, not necessarily quadratic.
• Instead of nonnegativity (NP-hard), use sums of squares.

Nested families of model sets M ⊇ M₁ ⊇ … and proof sets P ⊆ P₁ ⊆ …: search for a counterexample vs. search for a proof.
• Models describe sets of possible (uncertain) behaviors intersected with sets of unacceptable behaviors (failures).
• Thus verification of robustness (of protocols, embedded systems, dynamics, etc.) involves showing that a set is empty.
• Searching for an element x ∈ M is in NP, since checking whether a given x ∈ M is typically in P.
• Proving that M is empty is in coNP, and there may not be short proofs.

{x ∈ Rⁿ : fᵢ(x) ≥ 0, gᵢ(x) = 0} is empty iff ∃ f ∈ cone(fᵢ), g ∈ ideal(gᵢ) : f + 1 + g = 0.
• Convex, but infinite-dimensional.
• Efficient (P-time) search of subsets (relaxations) using SOS/SDP.
• Guaranteed to converge.
(Special case LP, as before.)

Search for a simple counterexample; search for a short proof. Failure to find a short proof implies that some relaxed model is nonempty (which is bad).

Sums of squares (SOS)
A sufficient condition for nonnegativity: ∃ fᵢ(x) : p(x) = Σᵢ fᵢ²(x)?
• A convex condition (Shor, 1987).
• Efficiently checked using SDP (Parrilo): write p(x) = zᵀQz with Q ⪰ 0, where z is a vector of monomials. Expanding and equating sides, one obtains linear constraints among the Qᵢⱼ.
Finding a PSD Q subject to these conditions is exactly a semidefinite program (LMI).

Nested families of SOS (Parrilo): ∀x, p(x) ≥ 0 iff ∃ g(x) SOS and fᵢ(x) such that g(x)·p(x) = Σᵢ fᵢ²(x). Nested families gₖ(x), e.g. (Σᵢ xᵢ²)ᵏ, exhaust co-NP.

A few applications
• Nonlinear dynamical systems: Lyapunov function computation, the Bendixson-Dulac criterion, robust bifurcation analysis
• Continuous and combinatorial optimization: polynomial global optimization, graph problems (e.g. MAX CUT), problems with mixed continuous/discrete variables
• Hybrid??? Let's see some examples…

Continuous global optimization
• Polynomial functions: an NP-hard problem.
• A "simple" relaxation (Shor): find the maximum γ such that f(x) − γ is a sum of squares.
• A lower bound on the global optimum.
• Solvable using SDP, in polynomial time.
• A concise proof of nonnegativity.
• Surprisingly effective (Parrilo & Sturmfels 2001): in more than 10000 "random" problems, always the correct solution…
• Much faster than exact algebraic methods (quantifier elimination, Gröbner bases, etc.).
• Provides a certified lower bound; if exact, one can recover an optimal feasible point.
• Bad examples do exist (otherwise NP = co-NP), but they are "rare": variations of the Motzkin polynomial; reductions of hard problems; none could be found using random search…

More general framework: a model co-NP problem is to check emptiness of semialgebraic sets. Obtain LMI sufficient conditions; these can be made arbitrarily tight, with more computation. Polynomial-time-checkable certificates.

Semialgebraic sets
• Semialgebraic: a finite number of polynomial equalities and inequalities, fᵢ(x) ≥ 0, gᵢ(x) = 0.
• Continuous, discrete, or a mixture of variables.
• Is a given semialgebraic set empty? (Feasibility of polynomial equations is NP-hard…)
• Search for bounded-complexity emptiness proofs, using SDP.
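The Gram-matrix step can be illustrated without an SDP solver. For p(x) = x⁴ + 2x² + 1 with monomial vector z = (1, x, x²), the "linear constraints among the Qᵢⱼ" leave the one-parameter family Q(t) below; every member satisfies p(x) = zᵀQ(t)z, and any PSD member certifies nonnegativity. The example and its parameterization are mine:

```python
def gram(t):
    """Gram matrices for p(x) = x**4 + 2x**2 + 1 with z = (1, x, x**2).
    The x**2 coefficient receives Q[1][1] + 2*Q[0][2] = (2 - 2t) + 2t = 2,
    so z'Qz matches p for every t; the SDP's job is to pick a PSD member."""
    return [[1.0, 0.0, t],
            [0.0, 2.0 - 2.0 * t, 0.0],
            [t, 0.0, 1.0]]

def quad_form(Q, x):
    """Evaluate z'Qz at the monomial vector z = (1, x, x**2)."""
    z = [1.0, x, x * x]
    return sum(Q[i][j] * z[i] * z[j] for i in range(3) for j in range(3))

def is_psd(t):
    """Q(t) is PSD iff 2 - 2t >= 0 and the (1, x**2) block [[1, t], [t, 1]]
    is PSD, i.e. iff -1 <= t <= 1."""
    return -1.0 <= t <= 1.0
```

At t = 1 the matrix factors as a single square, recovering the decomposition p(x) = (x² + 1)².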
(Parrilo 2000)

Positivstellensatz (real Nullstellensatz)
{x ∈ Rⁿ : fᵢ(x) ≥ 0, gᵢ(x) = 0} is empty if and only if ∃ f ∈ cone(fᵢ), g ∈ ideal(gᵢ) : f + 1 + g = 0.
• Stengle, 1974.
• Generalizes Hilbert's Nullstellensatz and LP duality.
• Infeasibility certificates for polynomial equations over the real field.
• Parrilo: bounded-degree solutions computed via SDP!
• A nested family of polytime relaxations: for quadratics, the first level is the S-procedure…

Combinatorial optimization: MAX CUT
• Given a graph, partition the nodes into two subsets so as to maximize the number of edges between the two subsets.
A mathematical formulation: maximize (1/2) Σᵢⱼ wᵢⱼ(1 − yᵢyⱼ) over yᵢ ∈ {−1, 1}.
A hard combinatorial problem (NP-complete). Compute upper bounds using convex relaxations.
Standard semidefinite relaxation (a dual pair): min trace(WY) over Y ⪰ 0, Yᵢᵢ = 1, paired with a max-trace(D) dual over diagonal D.
This is just a first step. We can do better! The new tools provide higher-order relaxations, and tighter bounds are obtained: never worse than the standard relaxation; in some cases (the n-cycle, the Petersen graph) provably better; still polynomial time.

MAX CUT on the Petersen graph: the standard SDP upper bound is 12.5; the second relaxation bound is 12. The improved bound is exact, and a corresponding coloring (cut) achieves it.

Finding Lyapunov functions
V̇ = ∇V·f < 0, V > 0.
• A ubiquitous, fundamental problem.
• Algorithmic LMI solution: the conditions are convex in V, but polynomial nonnegativity is NP-hard, so test using SOS and SDP.
Given a polynomial vector field (ẋ, ẏ) = f(x, y), propose V(x, y) = Σᵢⱼ cᵢⱼxⁱyʲ with i + j ≤ 4. After optimization: the coefficients of V, i.e. a Lyapunov function V that proves stability.
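The Petersen-graph claim above is small enough to verify exhaustively: brute force over all 2¹⁰ bipartitions finds the cut of 12 that the second relaxation certifies as optimal (my code; the edge list uses the standard outer-cycle / spokes / inner-pentagram labeling):

```python
from itertools import product

# Petersen graph: outer 5-cycle (vertices 0-4), spokes to the inner
# vertices (5-9), and the inner pentagram.
PETERSEN = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
            (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),
            (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]

def max_cut(n, edges):
    """Exhaustive MAX CUT: the largest number of edges crossing a
    bipartition, over all 2^n assignments (fine only for small n)."""
    return max(sum(1 for u, v in edges if bits[u] != bits[v])
               for bits in product((0, 1), repeat=n))
```

Odd cycles behave similarly: the 5-cycle's true max cut is 4, while the basic SDP bound exceeds it, which is why the slides call out the n-cycle as a case where the higher-order relaxation is provably better.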
Conclusion: a certificate of global stability.
[Figure: level sets V = c for c = 1, 2.164, 10 overlaid on the vector field.]

[Figure: trajectories in the (x1, x2) plane.] Global stability of a switching system, using 4th-order multiple Lyapunov functions defined on 6 equiangular partitions.

DS applications: Bendixson–Dulac
Find ρ with ∇·(ρf) > 0.
• In 2D, rules out periodic orbits.
• Higher-dimensional generalizations (Rantzer) provide:
  – A weaker stability criterion than Lyapunov (allowing a zero-measure set of divergent trajectories).
  – Convexity for synthesis.
How to search for ρ?

DS applications: Bendixson–Dulac
• Restrict to polynomial (or rational) solutions, use SOS.
• As for Lyapunov, now a fully algorithmic procedure.
Given: ẋ = y, ẏ = -x - y + x² + y².
Propose: ρ = a + bx + cy.
After optimization: explicit rational values of a, b, c for which ∇·(ρf) > 0 everywhere.

Conclusion: a certificate of the inexistence of periodic orbits.
[Figure: phase portrait of ẋ = y, ẏ = -x - y + x² + y², showing the saddle and the stable equilibrium; no periodic orbits.]

Stronger µ upper bounds
• The structured singular value µ is NP-hard to compute (as a general QP).
• The standard µ upper bound can be interpreted:
  – As a computational scheme.
  – As an intrinsic robustness analysis question (time-varying uncertainty).
  – As the first step in a hierarchy of convex relaxations.
• For the four-block Morton & Doyle counterexample:
  – Standard upper bound: 1
  – Second relaxation: 0.895
  – Exact µ value: 0.8723

What is the message?
Even if short proofs are not guaranteed to exist, in many cases they do. What happens in the broader setting of robustness and verification?

Line of Attack
• Want to decouple system complexity from the complexity of verification.
[Figure: nominal system and a "bad" region in uncertainty space.]
• Even for extremely complex systems, there may exist simple robustness proofs. Try to look for those first…

What is the message?
Even if short proofs are not guaranteed to exist, in many cases they do. WHY?
• Partly intrinsic (as in optimization problems), but can also be a consequence of design.
• Robustness, verifiability, and complexity are inextricably linked.
• Lots of circumstantial evidence:
  – All our previous experience in robustness analysis and optimization: µ upper bounds, etc.
  – Hard mathematical results linking complexity with the distance to the set of ill-posed instances (Smale, etc.).

What are these short proofs?
They admit multiple interpretations:
  – Alternative reformulations (perhaps more natural).
  – Relaxation of assumptions (LTI → LTV, commutativity, etc.).
  – Purely computational schemes.
  – Bounded-depth derivations.

About synthesis…
• Everything discussed is for analysis or verification: ∀x P(x)?
• Synthesis is a much more complicated beast: ∃y ∀x P(x, y)?
• In general, synthesis sits in higher complexity classes, harder than NP-hard (Tierno & Doyle 1995).
• Alternating quantifiers, relativized Turing machines: the polynomial-time hierarchy.

The polynomial-time hierarchy
… Σ₃/Π₃ ⊇ Σ₂/Π₂ ⊇ Σ₁/Π₁ ⊇ Σ₀ = Π₀ = P
• Analysis: ∀x P(x)? is co-NP (Π₁); ∃x P(x)? is NP (Σ₁).
• Synthesis: ∃y ∀x P(x, y)? sits at the second level.

Why are LMIs ubiquitous?
∃y ∀x: P(x, y) ≥ 0
• In general this is Σ₂-hard; no current hope of solving it efficiently.
• But when P(x, y) is quadratic in x and affine in y…
• …it drops two levels, to P: polynomial time!

P(x, y) quadratic in x and affine in y: ∃y ∀x P(x, y) ≥ 0
• State-feedback synthesis: (A + BK)X⁻¹ = AX⁻¹ + B(KX⁻¹), so with the new variable Y = KX⁻¹ the closed loop is affine and the synthesis condition becomes an LMI.
• Output feedback: e.g. H∞, multiobjective, LPV synthesis.
• Lyapunov functions for nonlinear 1-dimensional systems.
• Backstepping: lower-triangular structure.

• Synthesis results depend on hand-crafted "tricks" that we don't fully understand yet.
• Until recently we could say the same about analysis, where custom techniques abound.
• For analysis there is now a method in the madness: earlier results unified and expanded.

Entangled Quantum States (Doherty, Parrilo, Spedalieri 2001)
• Entangled states are one of the most important distinguishing features of quantum physics.
• Bell inequalities: hidden-variable theories must be nonlocal.
• Teleportation: entanglement + classical communication.
• Quantum computing: some computational problems may have lower complexity if entangled states are available.

How to determine whether or not a given state is entangled?
• A QM state is described by a PSD Hermitian matrix ρ.
• States of multipartite systems are described by operators on the tensor product of the vector spaces.
• Product states: ρ = ρA ⊗ ρB (each subsystem is in a definite state).
• Separable states: ρ = Σ_i p_i ρA_i ⊗ ρB_i with p_i ≥ 0, Σ_i p_i = 1 (a convex combination of product states).
• Entangled states: those that cannot be written as a convex combination of product states.

Decision problem: find a decomposition of ρ as a convex combination of product states, or prove that no such decomposition exists.

(Hahn–Banach Theorem)
[Figure: a hyperplane Z separating ρ from the convex set of separable states.]
Z is an "entanglement witness," a generalization of Bell's inequalities.

Theorem (Horodecki 1996): a state ρ is entangled if and only if there exists a Hermitian Z such that
    Tr(ρZ) < 0,  and for all x, y:  Tr((xx* ⊗ yy*) Z) = Σ_{ij;kl} Z_{ij;kl} x_i x_j* y_k y_l* ≥ 0.
Hard!

First Relaxation
Restrict attention to a special type of Z, for which the bihermitian form is a sum of squared magnitudes:
    Σ_{ij;kl} Z_{ij;kl} x_i x_j* y_k y_l* = Σ_m |Σ_{ik} G_{m;ik} x_i y_k|² + Σ_m |Σ_{ik} H_{m;ik} x_i y_k*|²
so that Tr(ρZ) = Tr(ρG) + Tr(ρ^{T₂} H), where T₂ denotes the partial transpose.
    minimize Tr(ρG + ρ^{T₂} H) subject to G ⪰ 0, H ⪰ 0
If the minimum is less than zero, ρ is entangled.

First Relaxation
• Equivalent to a known condition: the Peres–Horodecki criterion (1996), known as PPT (Positive Partial Transpose).
• Exact in low dimensions.
• Counterexamples in higher dimensions.

Further relaxations
Broaden the class of allowed Z to those for which
    (Σ_i |x_i|²) · Σ_{ij;kl} Z_{ij;kl} x_i x_j* y_k y_l*
is a sum of squared magnitudes.
• Also a semidefinite program.
• Strictly stronger than PPT.
• Can directly generate a whole hierarchy of tests.
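The PPT test above can be run by hand on the two-qubit Bell state |Φ⁺⟩ = (|00⟩ + |11⟩)/√2, a standard entangled state chosen here for illustration (it is not from the slides). Its partial transpose is not positive semidefinite, which is witnessed below by a single vector on which the quadratic form is negative.

```python
# PPT (Peres-Horodecki) check for the two-qubit Bell state
# |Phi+> = (|00> + |11>)/sqrt(2); basis order |00>, |01>, |10>, |11>.
# Illustrative example, not from the slides.

# Density matrix rho = |Phi+><Phi+|.
rho = [[0.5, 0.0, 0.0, 0.5],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.5, 0.0, 0.0, 0.5]]

def partial_transpose(m):
    # Transpose the second qubit: entry ((a,b),(c,d)) -> ((a,d),(c,b)).
    out = [[0.0] * 4 for _ in range(4)]
    for a in range(2):
        for b in range(2):
            for c in range(2):
                for d in range(2):
                    out[2*a + b][2*c + d] = m[2*a + d][2*c + b]
    return out

rho_pt = partial_transpose(rho)

# The singlet vector (|01> - |10>)/sqrt(2) exposes a negative direction.
v = [0.0, 2**-0.5, -(2**-0.5), 0.0]
quad = sum(v[i] * rho_pt[i][j] * v[j] for i in range(4) for j in range(4))
print(quad)  # negative (about -0.5): rho_pt is not PSD, so the state is entangled
```

Since the state is 2x2, PPT is exact here; the higher relaxations in the hierarchy only become necessary for the larger-dimensional counterexamples mentioned above.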
Second Relaxation
Write (Σ_i |x_i|²) · Σ_{ij;kl} Z_{ij;kl} x_i x_j* y_k y_l* as a sum of squared magnitudes,
    Σ_m |Σ_{ijk} G_{m;ijk} x_i y_k x_j|² + Σ_m |Σ_{ijk} H_{m;ijk} x_i* y_k x_j|² + Σ_m |Σ_{ijk} K_{m;ijk} x_i y_k* x_j|²
and solve:
    minimize Tr(ρZ) subject to G ⪰ 0, H ⪰ 0, K ⪰ 0
If the minimum is less than zero, then ρ is entangled. Detects all the non-PPT entangled states tried…

Quantum entanglement and robust control
• Exact problem is hard: "Is ρ entangled?" / "Is the system robust?"
• Known sufficient condition: PPT / the µ upper bound. Both are exact in low dimensions.
• Counterexamples: Horodecki–Choi / Morton–Packard–Doyle.
• Higher-order relaxations: equivalent on both sides.

Higher-order relaxations
• Nested family of SDPs.
• Guaranteed to converge to the true answer.
• No uniform bound (or P = NP).
• Tighter tests for entanglement; improved upper bounds in robust control.
• Special cases of the general approach.

All of this is the work of Pablo Parrilo (PhD, Caltech, 2000; now Professor at ETHZ).
• My contribution: I kept out of his way.

Summary
• A single framework with substantial advances in:
  – Testing entanglement
  – MAX CUT
  – Global continuous optimization
  – Finding Lyapunov functions for nonlinear systems
  – Improved robustness analysis upper bounds
  – Many other applications
• This is just the tip of a big iceberg.

Nested relaxations and SDP
• Exact problem is hard: robustness is a co-NP-hard problem ("Robust?": ∀x P(x)?).
• Known sufficient condition: the standard upper bound, usually a known relaxation.
• Higher-order relaxations: sharper sufficient conditions, converging to the exact solution.

• Huge breakthroughs…
• …but also a "natural" culmination of more than 2 decades of research in robust control.
• Initial applications focus has been CS and physics…
• …but there is substantial promise for "persistent mysteries" in controls and dynamical systems.
• Completely changes the possibilities for:
  – Robust hybrid/nonlinear control
  – Interactions with CS and physics
• Unique opportunities for the controls community:
  – Resolve old difficulties within controls
  – Unify and integrate fragmented disciplines within
  – Unify and integrate without: comms and CS
  – Impact on physics and biology
• Unique capabilities of the controls community:
  – New tools, but built on robust control machinery
  – Unique talent and training

Setup: dynamics ẋ = f(x, p, w), with parameters p ∈ P ⊂ R^m, initial conditions x(0) ∈ X ⊂ R^n, and noise/disturbances w ∈ W.
Problem: in general, computation grows exponentially with m and n.
Key idea: systematic search for short proofs.

Chemical oscillator (Prajna, Papachristodoulou)
Reactions: 2X + Y → 3X, A → X, B + X → Y, with the concentrations of A and B held constant.
Nondimensional state equations (the Brusselator):
    ẋ = a - (b + 1)x + x²y
    ẏ = bx - x²y
Equilibrium: (x̄, ȳ) = (a, b/a). Limit cycle for b > 1 + a².
[Figure: the (b, a) parameter plane with the limit-cycle region; sample phase portraits for (a, b) = (0.1, 0.13), (0.6, 1.1), and (1, 2).]

Shift coordinates to the equilibrium and ask: for what (a, b) does there exist V(x, y), here of 4th order, such that V > 0 and V̇ = ∇V·f < 0?
[Figure: the region of the (b, a) plane where a 4th-order Lyapunov certificate exists.]

Features of the new approach (Parrilo)
• SOS/SDP: based on sum-of-squares (SOS) decompositions and semidefinite programming (SDP).
• There exist "gold standard" relaxation algorithms for canonical co-NP-hard problems, such as:
  – MAX CUT
  – Quantum entanglement
  – The robustness (µ) upper bound
• All are special cases of the first step of SOS/SDP.
• Further steps (all in P) converge to the answer.
• No uniform bound (or P = NP).
• Standard
tools of robust (linear) control:
  – Unmodeled dynamics, nonlinearities, and IQCs
  – Noise and disturbances
  – Real parameter variations
  – D-K iteration for µ-synthesis
• …are all treated much better…
• …and generalized to:
  – Nonlinear
  – Hybrid
  – DAEs
  – Constrained

Caveats
• Inherits difficulties from robust control: high state dimension means large LMIs.
• Must find ways to exploit structure, symmetries, and sparseness.
• Note: many researchers don't want to get rid of the ad hoc, handcrafted core of their approaches to control (why take the fun out of it?).
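The chemical-oscillator example earlier can be sanity-checked numerically. Assuming the standard nondimensional Brusselator form ẋ = a − (b+1)x + x²y, ẏ = bx − x²y (my reading of the garbled slide), the equilibrium is (a, b/a) and the linearization loses stability exactly when b crosses 1 + a², which is where the limit cycle appears.

```python
# Assumed standard Brusselator form of the slide's chemical oscillator:
#   x' = a - (b + 1)*x + x^2 * y
#   y' = b*x - x^2 * y
# Equilibrium (a, b/a); Hopf bifurcation at b = 1 + a^2.

def field(x, y, a, b):
    return (a - (b + 1) * x + x * x * y, b * x - x * x * y)

def jacobian_trace(a, b):
    # Trace of the Jacobian at the equilibrium (a, b/a):
    #   d(x')/dx = -(b+1) + 2*x*y = -(b+1) + 2*b = b - 1
    #   d(y')/dy = -x^2            = -a^2
    # (The determinant there is a^2 > 0, so the trace sign decides stability.)
    return (b - 1) - a * a

# Equilibrium check for a sample parameter point.
a, b = 0.6, 1.1
fx, fy = field(a, b / a, a, b)
assert abs(fx) < 1e-12 and abs(fy) < 1e-12

# Stability of the linearization flips exactly at b = 1 + a^2.
print(jacobian_trace(0.6, 1.1) < 0)  # True: stable, since b < 1 + a^2 = 1.36
print(jacobian_trace(1.0, 3.0) > 0)  # True: unstable -> oscillation, since b > 2
```

This only checks the local bifurcation picture; the point of the SOS machinery in the slides is to certify the stable parameter region globally, with a single polynomial Lyapunov function per (a, b).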