EPCC: KT in novel computing
Professor Arthur Trew, Director, EPCC
A.S.Trew@ed.ac.uk
+44 131 650 5025

leading Europe
• mission – "to be the premier European computational science centre"
  – the European equivalent of one of the big US centres, e.g. NCSA, University of Chicago
• think globally, act locally

building the vision
• Vital statistics:
  – 75 staff
  – £4.5M turnover, (almost) all from external sources
• Multidisciplinary and multi-funded
  – … with a large spectrum of activities
  – … and a critical mass of expertise
• Supports and undertakes research at UoE through:
  – access to facilities
  – training (MSc and some undergraduate)
  – HPC-Europa visitor programme
  – collaborative research (e.g. NAIS, RealityGrid …)
[Diagram: EPCC activities – R&D, training, HPC facilities + skills, databases + Grid, expertise and excellence, partnerships linking industry and academia, European leadership]

HECToR
• 4th generation national facility managed by Edinburgh
• HPCx + HECToR = £150M
• HECToR: 250 Tflops peak
  – most powerful computer in UK academia
  – shortly to be upgraded to 350 Tflops
• … used for a wide variety of physical, engineering, environmental and biological projects
• … opportunities for industry involvement
  – through facility access, or direct collaboration: win-win-win

the second age of parallelism
• the end of Moore's law at the core level
  – multi-core chips ← parallelism → very many processors
[Chart: hardware threads per socket rising from 1 (HT) towards 100 over 2003–2013, from the multi-core era of scalar and parallel applications to the many-core era of massively parallel applications]
• … today's top-end HPC techniques will have widespread applicability tomorrow

making an economic impact
• projects based on delivering business benefit
  – not pushing a particular technology
  – based within the University, we have access to a wide range of leading-edge expertise
• … on time, on budget and to specification
• we now need to diversify from bespoke consultancy
• EPCC Industry Hub to be launched in 2009
  – use ISVs to target wider markets
  – make facilities available as a paying service
  – seeking SE and industry support
[Diagram: the spectrum from academic R&D to commercial exploitation]

The challenges
• the end (or not) of Moore's Law
  – levels of parallelism may increase, but can we use them?
  – will, say, MPI scale to Exascale?
  – how can we create fault-tolerant applications?
  – is there really an economic basis for HPC based on commodity components?
• escalating infrastructure costs
  – power, space, cooling …
• verification of results
  – rigorous testing of QCD has shown numerous hardware problems
  – choosing appropriate algorithms is essential
  – widespread training is required if computational science is truly to stand alongside theory and experiment

… and our response
• Exascale Technology Centre
  – funded by the University in collaboration with Cray to investigate key scalability problems
  – hybrid programming models (a minimal sketch follows this slide)
  – PGAS languages
  – GPU-based architectures
• Numerical Algorithms & Intelligent Software (NAIS)
  – collaboration with numerical analysts and computer scientists
  – 5-year project to develop new algorithms designed to be parallel
  – written to be WORA (Write Once, Run Anywhere)
  – … with increased information to aid the compiler in generating highly efficient code
• high-impact demonstrators
  – e.g. real-time simulation of fire spread in the Olympic Stadium
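To give a flavour of what "hybrid programming models" means in practice, the sketch below combines MPI between nodes with OpenMP threading within a node, the combination commonly explored for multi-core clusters. It is a minimal illustration under our own assumptions, not code from the Exascale Technology Centre: the array size N_LOCAL, the ramp data and the reduction are invented for the example; only the MPI and OpenMP calls themselves are standard.

/* Minimal hybrid MPI + OpenMP sketch (illustrative only):
 * each MPI rank sums its local portion of a distributed array using
 * OpenMP threads, then the per-rank results are combined with MPI.
 * Compile, e.g.: mpicc -fopenmp hybrid_sum.c -o hybrid_sum
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_LOCAL 1000000   /* elements owned by each rank (assumed size) */

int main(int argc, char **argv)
{
    int rank, size, provided;
    static double x[N_LOCAL];
    double local_sum = 0.0, global_sum = 0.0;

    /* Request thread support suitable for OpenMP inside MPI ranks */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Fill the local portion with some data (here, a simple ramp) */
    for (int i = 0; i < N_LOCAL; i++)
        x[i] = (double)(rank * N_LOCAL + i);

    /* Intra-node parallelism: OpenMP threads reduce the local array */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N_LOCAL; i++)
        local_sum += x[i];

    /* Inter-node parallelism: MPI combines the per-rank partial sums */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d global sum=%e\n",
               size, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}

The design point is simply that the two levels of parallelism are handled by different models: MPI maps onto the distributed memory between nodes, while OpenMP exploits the shared memory of the cores within a node, reducing the number of MPI ranks the application must scale across.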
the need for partnership
• the Exascale challenge is beyond any one institution
• … perhaps even beyond any one country
• so, there is a clear desire to collaborate
• the G8 funding call is a clear opportunity to build on the Tsukuba-Edinburgh collaboration
  – the scale may be small, but it will grow …
http://www.dfg.de/en/research_funding/international_cooperation/research_collaboration/g8-initiative/index.html