National Ignition Facility laser to be controlled by Ada software distributed over CORBA

John P. Woodruff
woodruff1@llnl.gov
Lawrence Livermore National Lab

Presented at the Software Technology Conference, Salt Lake City, Utah, May 5, 1999


The National Ignition Facility is a $1.2 billion project to be completed in 2003

[Facility cutaway diagram: optics assembly building, master oscillator room, control room, pre-amplifier modules, amplifiers and amplifier power conditioning modules, power conditioning transmission lines, spatial filters, cavity mirror mount assembly, Pockels cell assembly, periscope polarizer mount assembly, switchyard support structure, transport turning mirrors, beam control & laser diagnostic systems, target chamber, final optics system, diagnostics building]

NIF Project Team
• Lawrence Livermore National Lab
• Los Alamos National Lab
• Sandia National Lab
• Univ. of Rochester / Lab for Laser Energetics


Integrated Computer Control System is a distributed system that does not have hard real-time requirements

• Supervisory software is event driven
  — Operator-initiated actions and scripted sequences do not require specific response times
  — Status information is propagated from the laser to updates on graphic user screens
  — Speed requirements derive from operator needs for interactive response
• No process-related hard deadlines must be met
  — Several hours of preparation precede a shot
  — The shot executes in microseconds, controlled by dedicated hardware
  — Data gathering and reporting occur in minutes after the shot
• Some process controls are encapsulated in front ends
  — Automatic alignment
  — Capacitor charging


The hardware boundary is the solid ground on which we build our software architecture

• Each of 192 beamlines has the same control points
  — 60_000 points in all
• The control points are relatively inflexible
• But the experimental plans will evolve during 30 years of operation

[Beamline schematic: master oscillator, preamplifier module, injection, four-pass amplifier, cavity spatial filter, deformable mirror, LM2 mirror, plasma electrode Pockels cell, polarizer, two-pass amplifier, transport spatial filter, final optics assembly, target]


ICCS software is distributed on some 500 computers in two layers

[Two-layer architecture diagram. Supervisory Control Layer: upper-level computers hosting client software objects, operator consoles, online database, configuration, and name server. ORBexpress distribution connects them to the Real-time Control Layer: front end processors hosting server software objects, attached by field bus to the control points (sensors & actuators)]


The ICCS computer system and network architecture as deployed for NIF

[Network diagram: 8 supervisory consoles (two dual-monitor workstations per console), servers, and users & external databases behind a firewall, attached to a core 100 Mb/s Ethernet switch and an ATM 155 Mb/s switch; 13 edge Ethernet switches fan out to the front end processors; remote workstations; automatic alignment servers with 24 camera video digitizers serving 500 CCD cameras; 2,150 loops. Legend: ATM OC-3 155 Mb/s (fiber), Ethernet 100 Mb/s, Ethernet 10 and 100 Mb/s]


The software architecture distributes control over a facility-wide software bus

[Software bus diagram. Supervisory console: operator controls, status display, event log. Integration services: system manager, device hierarchy, access control. Database: history, shots, configuration. These connect over the software distribution bus (an object request broker existing on the network) to each front end processor, which layers device control, status monitor, controller interface, and driver above the hardware]

This software framework is reused to construct 8 supervisory applications and 18 types of front end processors.
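Before turning to the framework services, the sketch below suggests one plausible Ada shape for the front-end-processor boxes just described. It is illustrative only: every package, type, and subprogram name is hypothetical rather than taken from the ICCS code. It shows an abstract Device root (the class that later slides say has roughly 250 subclasses and 60_000 instances), a Sample operation that each subclass implements over its controller interface and driver, and a co-located monitoring operation that polls locally and forwards only significant changes toward the supervisory layer.

--  Illustrative sketch only; not the actual ICCS framework interfaces.
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

package FEP_Framework is

   type Health is (Offline, Ready, Busy, Faulted);

   type Device_Status is record
      State   : Health  := Offline;
      Changed : Boolean := False;   --  set by Sample when the change is "significant"
      Detail  : Unbounded_String;
   end record;

   --  Root of the device hierarchy; concrete subclasses (amplifiers,
   --  mirrors, Pockels cells, ...) derive from it in each FEP.
   type Device is abstract tagged limited private;

   procedure Initialize (D : in out Device) is abstract;
   --  Each instance is initialized from configuration at system start-up.

   function Sample (D : Device) return Device_Status is abstract;
   --  Reads the control point through the subclass's controller interface.

   --  A supervisory subscriber registers a callback; it is invoked only
   --  when a significant change is seen, so routine polling stays local.
   type Change_Listener is access procedure (S : Device_Status);

   procedure Poll_And_Notify
     (D        : in out Device'Class;
      Listener : Change_Listener);

private
   type Device is abstract tagged limited null record;
end FEP_Framework;

package body FEP_Framework is

   procedure Poll_And_Notify
     (D        : in out Device'Class;
      Listener : Change_Listener)
   is
      S : constant Device_Status := Sample (D);
   begin
      --  Local polling never leaves the FEP; only significant changes are
      --  pushed across the network to the supervisory layer.
      if S.Changed and then Listener /= null then
         Listener (S);
      end if;
   end Poll_And_Notify;

end FEP_Framework;

Co-locating the monitor with the device in this way is the design choice behind the later slide on economizing message traffic: the per-device polling loop runs inside the FEP, and only collated change reports cross the software bus.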
Framework components are reused across the NIF

• Common services provided anywhere on the network
  — Operator control and equipment status
  — Database archiving and trending
  — Integration services
    – Orderly starting of the software
    – Device-level access controls
    – Plug and play communications
  — Steps in preparation for an experimental shot
  — Logging of control system events
  — Operator-programmable checklists
• These services strive to be "decentralized"
  — Clients should know little about their location
  — Services should be largely independent of each other


CORBA provides decentralized distribution services

• A standard model of distributed objects resolves a major development risk
  — Anticipate a 30-year life of the standard
  — ICCS software engineers are freed from building a "homebrew" communication infrastructure
• CORBA defines loose coupling between objects
  — Communication becomes nearly invisible
    – Neither clients nor servers depend directly on communication infrastructure
    – Names of communicating objects hide locations
  — Transparent interoperability
    – IDL specifications are language-neutral interfaces
    – Data marshalling hides differences between hosts
• Allocation of object implementations to processes can be deferred


Using IDL to define interfaces implies some compromises

• Interfaces must be declared in terms of IDL types
  — These types "diffuse" into the rest of the system
  — The IDL type model is less strict than Ada's
    – No range constraints
    – No initial values for record components
  — No default parameter values
  — No operator overloading in interfaces
• Configuration management must accommodate the possibility that implementation details might be loaded into client processes


The majority of NIF's CORBA objects are long-lived

• 60_000 objects implement the class Device in front-end processors
  — About 250 subclasses
  — Each instance is initialized at system start-up
  — A framework manages data and naming
    – An Oracle database maintains configuration
    – Persistence broker objects implement SQL queries on behalf of CORBA clients
• A dead server is an error to be diagnosed and recovered
  — Failover to a replacement of the same class is not automated


Efforts to economize on message traffic

• Status of every Device must be observable at multiple consoles
  — Some status reports require latency as small as 0.1 second
  — Monitor objects are co-located with Devices
    – Local polling in the FEP
    – Notification of "significant" change
• Supervisory objects collect and collate change reports
  — GUIs that display "broad view" status subscribe to these supervisors
  — GUIs receive their status updates via "data push" from the supervisor


Measurements of ORBexpress 2.0.1 confirm adequate performance

• Network is 100 Mb/s Ethernet
• Both client and server are 2-processor Sun Enterprise 3000s
  — Client runs 40 Ada tasks; server runs 5
  — Runtime is Apex 3.0
    – GNAT 3.11 is roughly 10% faster

[Chart: message rate over 100 Mbit, % client CPU, % server CPU, and % network utilization plotted against message size in bytes]


Server performance is comparable on the front-end processor

• Server is a 300 MHz PowerPC in a VME crate
• Runtime is the VxWorks operating system with ApexWorks 3.0
• Same tasking implementation as the previous data

[Chart: message rate over 100 Mbit, % client CPU, and % network utilization plotted against message size in bytes]
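As a rough illustration of how round-trip message-rate figures like the ones above can be gathered from Ada (this is not the benchmark code behind the charts), the sketch below times a loop of calls whose remote leg is abstracted behind a generic formal procedure. In a real measurement that formal would wrap a call through a CORBA stub generated from the IDL; every name in the sketch is hypothetical.

--  Illustrative measurement harness; the remote call is a stand-in.
with Ada.Real_Time; use Ada.Real_Time;
with Ada.Text_IO;   use Ada.Text_IO;

procedure Measure_Message_Rate is

   type Payload is array (Positive range <>) of Character;

   generic
      with procedure Echo (Message : Payload);  --  stand-in for one round trip
   procedure Run (Message_Size : Positive; Iterations : Positive);

   procedure Run (Message_Size : Positive; Iterations : Positive) is
      Message : constant Payload (1 .. Message_Size) := (others => 'x');
      Start   : constant Time := Clock;
      Elapsed : Duration;
      Rate    : Float;
   begin
      for I in 1 .. Iterations loop
         Echo (Message);                        --  one message per iteration
      end loop;
      Elapsed := To_Duration (Clock - Start);
      if Elapsed > 0.0 then
         Rate := Float (Iterations) / Float (Elapsed);
         Put_Line ("Size" & Positive'Image (Message_Size)
                   & " bytes: " & Float'Image (Rate)
                   & " messages per second");
      else
         Put_Line ("Run too fast to time; increase Iterations");
      end if;
   end Run;

   --  Loopback stand-in so the sketch runs by itself; replace with a call
   --  through an ORB-generated stub to exercise the real client/server path.
   procedure Local_Echo (Message : Payload) is null;

   procedure Run_Local is new Run (Echo => Local_Echo);

begin
   for Size in 1 .. 4 loop
      Run_Local (Message_Size => 256 * Size, Iterations => 10_000);
   end loop;
end Measure_Message_Rate;

In the configurations measured above, the Echo formal would be bound to an ORBexpress stub invocation across the 100 Mb/s network, with CPU and network utilization sampled separately on each host.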
Simulation models show how resource usage scales to full-scale operation

[Pie chart, automatic alignment CPU utilization per processor: image processing 50%, unused 17%, control logic 10%, control context switching 10%, image input 6%, alert calculations 3%, CORBA 2%, other frameworks 2%]


Numerous online sources offer further information

• The National Ignition Facility
  — http://lasers.llnl.gov/lasers/nif.html
• The NIF Integrated Computer Control System
  — http://lasers.llnl.gov/lasers/ICCS/
• Controlling the World's Most Powerful Laser
  — Science & Technology Review, November 1998
  — http://www.llnl.gov/str/11.98.html
• A Large Distributed Control System Using Ada in Fusion Research
  — Proceedings of the ACM SIGAda International Conference, 1998
  — UCRL-JC-130569 (*)
• Integrated Computer Control System CORBA-Based Simulator
  — UCRL-ID-133243 (*)
• Evaluation of CORBA for Use in Distributed Control Systems
  — UCRL-ID-133254 (*)

(*) Numbered reports are accessible from http://www.llnl.gov/tid/opac.html


DISCLAIMER
UCRL-MI-133457

This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes.

Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.