Accelerator Modeling: Present capabilities, future prospects, and applications to the HEP Program

(with emphasis on SciDAC)
Robert D. Ryne
Lawrence Berkeley National Laboratory
with contributions from
Kwok Ko (SLAC) and Warren Mori (UCLA)
Presented to the HEPAP AARD Subpanel
December 21, 2005
SciDAC Accelerator Science & Technology
(AST) Project: Overview
• Goals:
— Develop a new generation of parallel accelerator modeling codes to solve
the most challenging and important problems in 21st-century accelerator science & technology
— Apply the codes to improve existing machines, design future facilities,
help develop advanced accelerator concepts
• Sponsored by DOE/SC HEP in collaboration with ASCR
• Primary customer: DOE/SC, primarily its HEP program, also NP
— codes have also been applied to BES projects
• Funding: $1.8M/yr (HEP), $0.8M/yr (ASCR/SAPP)
— Strong leveraging from SciDAC ISICs
• Duration: Currently in 5th (final) year
• Participants:
— Labs: LBNL, SLAC, FNAL, BNL, LANL, SNL
— Universities: UCLA, USC, UC Davis, RPI, Stanford
— Industry: Tech-X Corp.
SciDAC Accelerator Science & Technology
(AST) Project: Overview cont.
• Management:
— K. Ko and R. Ryne, co-PIs
— Senior mgmt team: K. Ko, R. Ryne, W. Mori, E. Ng
• Oversight and reviews by DOE/HEP program mgrs
— Vicky White
— Irwin Gaines
— Craig Tull (present)
• The project must
— advance HEP programs (R. Staffin)
— through synergistic collaboration w/ ASCR that advances the state-of-the-art in advanced scientific computing (M. Strayer)
SciDAC AST Overview: Focus Areas
• Organized into 3 focus areas:
— Beam Dynamics (BD), R. Ryne
— Electromagnetics (EM), K. Ko
— Advanced Accelerators (AA), W. Mori
• All supported by SciDAC Integrated Software Infrastructure Centers (ISICs)
and ASCR Scientific Application Partnership Program (SAPP)
• Most funding goes to BD and EM; AA is very highly leveraged
Why do we need SciDAC?
• Why can't our community do code development on its own, as it has in the past?
• Why can't it be done simply as an activity tied to accelerator projects?
• Why can't our community follow “business as usual?”
Computational Issues
• Large scale:
—Simulations approaching a billion particles and mesh points
—Huge data sets
—Advanced data mgmt & visualization
• Extremely complex 3D geometry (EM codes)
• Complicated hardware with multiple levels of memory hierarchy, > 100K processors
• Parallel issues
—Load balancing
—Parallel sparse linear solvers
—Parallel Poisson solvers (see the sketch after this list)
—Particle/field managers
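To make the Poisson-solver item above concrete, here is a minimal, serial sketch of an FFT-based Poisson solve on a periodic grid, written in Python/NumPy purely for illustration. The production SciDAC solvers are 3D, parallel (distributed FFTs and transposes), and support open boundary conditions; the grid size, spacing, and Gaussian test charge below are made up.

```python
# Illustrative sketch only: a serial, periodic-box FFT Poisson solver,
# standing in for the parallel 3D space-charge solvers in the SciDAC
# beam-dynamics codes (which are distributed and handle open boundaries).
import numpy as np

def solve_poisson_periodic(rho, dx):
    """Solve laplacian(phi) = -rho/eps0 on a periodic cubic grid via FFT."""
    eps0 = 8.8541878128e-12
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wave numbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    rho_hat = np.fft.fftn(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0.0
    phi_hat[nonzero] = rho_hat[nonzero] / (eps0 * k2[nonzero])  # phi_k = rho_k/(eps0 k^2)
    # Zeroing the k = 0 mode corresponds to a neutralizing uniform background.
    return np.real(np.fft.ifftn(phi_hat))

if __name__ == "__main__":
    # Made-up example: a Gaussian charge blob on a 64^3 periodic grid.
    n, dx = 64, 1.0e-3
    x = (np.arange(n) - n / 2) * dx
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    rho = np.exp(-(X**2 + Y**2 + Z**2) / (2 * (5 * dx) ** 2))
    phi = solve_poisson_periodic(rho, dx)
    print("peak potential:", phi.max())
```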
Close collaboration w/ ASCR researchers
(ISICs, SAPP) is essential
• A hallmark of the SciDAC project is that it is built on collaboration of application and computational scientists with mathematicians, computer scientists, parallel performance experts, visualization specialists, and other IT experts.
• The AST project collaborates with several ISICs:
—TOPS (Terascale Optimal PDE Solvers)
—APDEC (Applied Partial Differential Equations Center)
—TSTT (Terascale Simulation Tools & Technologies)
—PERC (Performance Evaluation Research Center)
Overview of the 3 focus areas
• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)
SciDAC Codes: Beam Dynamics
• Set of parallel, 3D multi-physics codes for modeling beam
dynamics in linacs, rings, and colliders
—IMPACT suite: includes 2 PIC codes (s-based, t-based);
mainly for electron and ion linacs
—BeamBeam3D: strong-weak, strong-strong, multi-slice,
multi-bunch, multi-IP, head-on, crossing-angle, long-range
—MaryLie/IMPACT: hybrid app combines MaryLie+IMPACT
—Synergia: multi-language, extensible framework; hybrid app involves portions of IMPACT+MXYZPTLK
—Langevin3D: particle code for solving Fokker-Planck
equation from first principles
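As a rough illustration of the Langevin/Fokker-Planck connection behind Langevin3D, the sketch below integrates a 1D Langevin equation with an Euler-Maruyama step; the ensemble it evolves obeys the corresponding Fokker-Planck equation. This is not the Langevin3D algorithm: the drag and diffusion coefficients here are fixed, made-up constants rather than the self-consistently computed ones.

```python
# Illustrative sketch only: Euler-Maruyama integration of a 1D Langevin
# equation dv = -gamma*v*dt + sqrt(2*D*dt)*xi, whose ensemble obeys a
# Fokker-Planck equation. Langevin3D solves the 3D problem with
# self-consistent coefficients; gamma and D here are made-up constants.
import numpy as np

def langevin_step(v, gamma, D, dt, rng):
    """One step: deterministic drag plus a Gaussian stochastic kick."""
    return v - gamma * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(v.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gamma, D, dt = 1.0, 0.5, 1.0e-3
    v = np.zeros(100_000)                 # start the ensemble at rest
    for _ in range(20_000):
        v = langevin_step(v, gamma, D, dt, rng)
    # The steady-state velocity variance should relax toward D/gamma = 0.5.
    print("measured <v^2> =", v.var(), " expected ~", D / gamma)
```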
IMPACT suite becoming widely used;
> 300 email contacts in FY05, > 100 already in FY06
RAL, SLAC, PSI, LBNL, GSI, LANL, KEK, Tech-X, FNAL, ANL, ORNL, MSU, BNL, JLab, Cornell, NIU
SciDAC code development involves large, multidisciplinary teams.
Example: MaryLie/IMPACT code
Development, reuse, and synthesis of code components.
Examples: Synergia, e-cloud capability
New algorithms and methodologies are key. Examples: (1) high aspect ratio
Poisson solver; (2) self-consistent Langevin/Fokker-Planck
Figures: electric field error vs. distance; self-consistent diffusion coefficients vs. velocity, compared with the Spitzer approximation.
Error in the computed electric field of a Gaussian charge distribution (x = 1 mm, y = 500 mm): even on a 64x8192 grid, the standard method (blue curve) is less accurate than the Integrated Green Function method (purple) on a 64x64 grid.
First-ever 3D self-consistent Langevin/Fokker-Planck simulation.
SciDAC beam dynamics applications
benefit DOE/SC programs, esp. HEP
• Beam-beam simulation of Tevatron, PEP-II, LHC, RHIC
• ILC damping rings (space-charge, wigglers)
• FNAL Booster losses
• CERN PS benchmark study
• RIA driver linac modeling
• SNS linac modeling
• LCLS photoinjector modeling
• CERN SPL (proposed proton driver) design
• J-PARC commissioning
• Publications:
— 23 refereed papers since 2001 (including 5 Phys. Rev. Lett., 10 PRST-AB, 4 NIM-A, 2 J. Comp. Phys., Computer Physics Comm.), numerous conference proceedings papers
• USPAS course on computational methods in beam dynamics
Examples: Collider modeling using BeamBeam3D
LHC beam-beam
simulation
νx1 = νx2 = νy1 = νy2 = 0.31, ξ0 = -0.0034
First-ever 1M-particle, 1M-turn strong-strong beam-beam simulation (J. Qiang, LBNL).
PEP-II luminosity calculation shows the importance of multi-slice modeling (J. Qiang, Y. Cai, SLAC; K. Ohmi, KEK).
Parameter studies of antiproton
lifetime in Tevatron
Code scalability depends strongly on
parallelization methodology (J. Qiang)
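For orientation, the toy tracking sketch below applies the standard round-Gaussian beam-beam kick followed by a linear one-turn rotation, the bare bones of a weak-strong model. BeamBeam3D's strong-strong, multi-slice, multi-bunch, crossing-angle, and multi-IP capabilities go far beyond this, and every parameter below (tunes, beam size, bunch charge, energy) is illustrative rather than taken from any machine named above.

```python
# Illustrative sketch only: a weak-strong beam-beam map for a round Gaussian
# beam, alternated with a linear one-turn rotation. BeamBeam3D's strong-strong,
# multi-slice, multi-IP, and crossing-angle models are far richer; every
# parameter below (tunes, sigma, bunch charge, energy) is made up.
import numpy as np

def beam_beam_kick(x, y, xp, yp, N, r0, gamma_rel, sigma):
    """Thin-lens, head-on kick from a round Gaussian strong beam."""
    r2 = x**2 + y**2
    small = r2 < 1e-30
    safe_r2 = np.where(small, 1.0, r2)
    # (1 - exp(-r^2/2sigma^2)) / r^2, with the r -> 0 limit 1/(2 sigma^2).
    factor = np.where(small, 1.0 / (2 * sigma**2),
                      (1.0 - np.exp(-safe_r2 / (2 * sigma**2))) / safe_r2)
    coef = -2.0 * N * r0 / gamma_rel      # focusing sign for opposite charges
    return xp + coef * factor * x, yp + coef * factor * y

def one_turn(u, up, nu):
    """Linear one-turn rotation by phase 2*pi*nu (beta = 1 for simplicity)."""
    c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)
    return c * u + s * up, -s * u + c * up

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    npart, sigma = 10_000, 1.0e-4
    x = rng.normal(0, sigma, npart); xp = rng.normal(0, sigma, npart)
    y = rng.normal(0, sigma, npart); yp = rng.normal(0, sigma, npart)
    N, r0, gamma_rel = 1.0e11, 1.535e-18, 1000.0   # protons, made-up energy
    nux, nuy = 0.31, 0.32
    for _ in range(1000):                          # track 1000 turns
        xp, yp = beam_beam_kick(x, y, xp, yp, N, r0, gamma_rel, sigma)
        x, xp = one_turn(x, xp, nux)
        y, yp = one_turn(y, yp, nuy)
    print("rms x after 1000 turns:", x.std())
```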
ILC damping ring modeling using ML/I
Results of MaryLie/IMPACT
simulations of an ILC “dog-bone”
damping ring (DR) design showing
space-charge induced emittance
growth using different space-charge
models. Space charge is important for
the ILC DR in spite of the high energy
because of the combination of small
emittance and large (16 km)
circumference. Top (nonlinear space
charge model): the beam exhibits
small emittance growth. Bottom
(linear space charge model): the
beam exhibits exponential growth due
to a synchro-betatron resonance. The
instability is a numerical artifact
caused by the simplified (linear)
space-charge model. (M. Venturini,
LBNL)
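Emittance-growth curves like the ones described above are built from the standard rms (statistical) emittance of the macro-particle ensemble, eps = sqrt(<x^2><x'^2> - <x x'>^2). A minimal version of that diagnostic is sketched below; it is generic Python, not code from MaryLie/IMPACT.

```python
# Illustrative sketch only: the standard rms (statistical) emittance
# diagnostic used to produce emittance-growth plots. Generic Python,
# not taken from MaryLie/IMPACT; the test bunch parameters are made up.
import numpy as np

def rms_emittance(x, xp):
    """Unnormalized rms emittance of a particle ensemble (centroid removed)."""
    dx, dxp = x - x.mean(), xp - xp.mean()
    return np.sqrt(dx.var() * dxp.var() - np.mean(dx * dxp) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Made-up Gaussian bunch: 1 mm size, 0.1 mrad divergence, uncorrelated.
    x = rng.normal(0.0, 1.0e-3, 200_000)
    xp = rng.normal(0.0, 1.0e-4, 200_000)
    print("rms emittance ~", rms_emittance(x, xp), "m-rad (expect ~1e-7)")
```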
FNAL booster modeling using Synergia
FNAL booster simulation results using Synergia showing the merging of 5
microbunches. SciDAC team members are working closely with experimentalists
at the booster to help understand and improve machine performance.
(P. Spentzouris and J. Amundson, FNAL; J. Qiang and R. Ryne, LBNL)
Beam Dynamics under SciDAC 2
(HEP program)
• Support/maintain/extend successful codes developed
under SciDAC 1 (BD, EM, AA)
• Develop new capabilities to meet HEP priorities: LHC, ILC,
Tevatron, PEP-II, FNAL main injector, booster, proton driver
— Self-consistent 3D simulation of: e-cloud, e-cooling, IBS, CSR
— Start-to-end modeling with all relevant physical effects
• Enable parallel, multi-particle beam dynamics design &
optimization
• Performance and scalability optimization on platforms up
to the petascale (available by the end of the decade)
• Couple parallel beam dynamics codes to commissioning,
operations, and beam experiments
Overview of the 3 focus areas
• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)
SciDAC AST – Electromagnetics
Under SciDAC AST, the Advanced Computations Dept. @
SLAC is in charge of the Electromagnetics component to:
• Develop a comprehensive suite of parallel electromagnetic codes for the design and analysis of accelerators (Ron's talk)
• Apply the new simulation capability to accelerator projects across SC, including those in HEP, NP and BES (Ron's talk)
• Advance computational science to enable terascale computing through ISICs/SAPP collaborations (this talk)
ACD’s ISICs/SAPP Collaborations
ACD is working with the TOPS, TSTT, PERC ISICs as
well as SAPP researchers on 6 computational science
projects involving 3 national labs and 6 universities.
• Parallel Meshing – TSTT (Sandia, U Wisconsin/PhD thesis)
• Adaptive Mesh Refinement – TSTT (RPI)
• Eigensolvers – TOPS (LBNL), SAPP (Stanford/PhD thesis, UC Davis)
• Shape Optimization – TOPS (UT Austin, Columbia, LBNL), TSTT (Sandia, U Wisconsin)
• Visualization – SAPP (UC Davis/PhD thesis)
• Parallel Performance – PERC (LBNL, LLNL)
Parallel Meshing & Adaptive Mesh Refinement
Parallel meshing is needed for generating LARGE meshes to
model multiple cavities in the ILC superstructure & cryomodule
Figure: example mesh partitioned across processors 1-4.
Adaptive Mesh Refinement improves accuracy & convergence
of frequency and wall loss calculations
Figures: RIA RFQ convergence studies: wall-loss Q (approx. 5750-6100) and frequency (approx. 54.3-55.2 MHz) vs. number of unknowns (up to 4,000,000).
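As a toy illustration of error-driven refinement, the 1D sketch below repeatedly splits intervals where a curvature-based error indicator exceeds a tolerance. The production capability is 3D, finite-element, and coupled to Omega3P through TSTT tools; the test function and tolerance here are invented.

```python
# Illustrative sketch only: toy 1D h-refinement driven by a local error
# indicator h^2 * |f''|, standing in for the 3D finite-element adaptive
# mesh refinement used with Omega3P. Test function and tolerance are made up.
import numpy as np

def refine(nodes, f, tol, max_passes=20):
    """Repeatedly split intervals whose curvature-based indicator exceeds tol."""
    for _ in range(max_passes):
        x = np.asarray(nodes)
        h = np.diff(x)
        mid = 0.5 * (x[:-1] + x[1:])
        # Second-difference estimate of f'' at each interval midpoint.
        fpp = np.abs(f(x[:-1]) - 2.0 * f(mid) + f(x[1:])) / (0.5 * h) ** 2
        flagged = h**2 * fpp > tol
        if not flagged.any():
            break
        nodes = np.sort(np.concatenate([x, mid[flagged]]))  # split flagged cells
    return nodes

if __name__ == "__main__":
    f = lambda x: np.tanh(50.0 * (x - 0.5))        # sharp feature near x = 0.5
    mesh = refine(np.linspace(0.0, 1.0, 11), f, tol=1e-3)
    print(len(mesh), "nodes; smallest spacing =", np.diff(mesh).min())
```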
Eigensolvers & Shape Optimization
Complex eigensolver for treating external coupling is essential
for computing HOM damping in ILC cavities.
Figure: Omega3P eigensolver components: lossless problems (ISIL with refinement, ESIL), lossy material, periodic structures, implicitly restarted Arnoldi, SOAR, and external coupling handled through a self-consistent loop.
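The underlying eigenproblem can be illustrated at toy scale: the sketch below finds the lowest modes of a 2D box "cavity" from a finite-difference Laplacian using SciPy's sparse shift-invert eigensolver. Omega3P solves far larger complex finite-element eigenproblems with lossy materials and external coupling; the cavity dimensions and mesh here are made up.

```python
# Illustrative sketch only: lowest resonant modes of a 2D box "cavity" from a
# finite-difference Laplacian with Dirichlet walls, via a sparse shift-invert
# eigensolver. Omega3P solves much larger complex finite-element eigenproblems
# (lossy materials, external coupling); the dimensions and mesh are made up.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def modes_2d_box(nx, ny, lx, ly, k=4):
    """Return the k lowest eigenvalues of -Laplacian on an lx-by-ly box."""
    dx, dy = lx / (nx + 1), ly / (ny + 1)
    def lap1d(n, d):
        main = 2.0 * np.ones(n) / d**2
        off = -np.ones(n - 1) / d**2
        return sp.diags([off, main, off], offsets=[-1, 0, 1])
    A = sp.kron(sp.identity(ny), lap1d(nx, dx)) + sp.kron(lap1d(ny, dy), sp.identity(nx))
    vals, _ = spla.eigsh(A.tocsc(), k=k, sigma=0.0, which="LM")  # shift-invert about 0
    return np.sort(vals)

if __name__ == "__main__":
    lx, ly = 1.0, 0.5
    numeric = modes_2d_box(200, 100, lx, ly)
    # Analytic eigenvalues of the box: (m*pi/lx)^2 + (n*pi/ly)^2, m, n >= 1.
    analytic = sorted((m * np.pi / lx) ** 2 + (n * np.pi / ly) ** 2
                      for m in range(1, 4) for n in range(1, 4))[:4]
    print("numeric :", numeric)
    print("analytic:", np.array(analytic))
```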
Shape Optimization aims to replace the manual, iterative process of designing cavities with specific goals subject to constraints.
Figure: shape-optimization loop: geometric model, meshing, Omega3P, sensitivity calculation (re-meshing only for discrete sensitivity), optimization.
Visualization & Parallel Performance
Visualization is
critical to mode
analysis in
complex 3D
cavities, e.g.
mode rotation
effects
Parallel Performance studies are needed to maximize code
efficiency and optimize use of computing resources.
Figures: solve & postprocess time breakdown; communication pattern.
Proposed Projects for SciDAC 2
SLAC will develop the NEXT level of simulation tools for NEXT-generation SC accelerators (ILC, LHC, RIA, SNS) by continuing to advance computational science in collaboration with the ISICs/SAPP component of SciDAC:
• Parallel adaptive h-p-q refinement, where h is mesh size, p is the order of the finite-element basis, and q is the order of the geometry model
• Parallel shape optimization (goals with constraints) and prediction (cavity deformations from HOM measurements)
• Parallel particle simulation on unstructured grids for accurate device modeling (RF guns, klystrons)
• Integrated electromagnetic/thermal/mechanical modeling for complete design and engineering of cavities
• Parallel, interactive visualization cluster for mode analysis and particle simulations
Overview of the 3 focus areas
• Beam Dynamics (BD)
• Electromagnetic Modeling (EM)
• Advanced Accelerators (AA)
Recent advances in modeling advanced accelerators:
plasma-based acceleration and e-clouds
W. B. Mori, C. Huang, W. Lu, M. Zhou, M. Tzoufras, F. S. Tsung, V. K. Decyk (UCLA)
D. Bruhwiler, J. Cary, P. Messmer, D. A. Dimitrov, C. Nieter (Tech-X)
T. Katsouleas, S. Deng, A. Ghalam (USC)
E. Esarey, C. Geddes (LBNL)
J. H. Cooley, T. M. Antonsen (U. Maryland)
Accomplishments and highlights:
Code development
• Four independent high-fidelity particle based codes
— OSIRIS: Fully explicit PIC
— VORPAL: Fully explicit PIC + ponderomotive guiding center
— QuickPIC: quasi-static PIC + ponderomotive guiding center
— UPIC: FFT-based framework for rapid construction of new codes (QuickPIC is built on UPIC)
• Each code or framework is fully parallelized, with dynamic load balancing and particle sorting. Each production code has ionization packages for added realism. Effort was made to make the codes scale to 1000+ processors. (A minimal sketch of the basic PIC cycle follows below.)
• Highly leveraged
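The basic PIC cycle shared by these codes (deposit charge, solve for the fields, gather, push) can be sketched in a few dozen lines. The toy below is a 1D electrostatic, periodic PIC in normalized units that reproduces a cold plasma oscillation; OSIRIS and VORPAL are fully electromagnetic, relativistic, 3D, and parallel, so this shares only the loop structure, and every parameter is illustrative.

```python
# Illustrative sketch only: a minimal 1D electrostatic, periodic PIC loop
# (CIC deposit -> FFT field solve -> gather -> kick/drift push) in normalized
# units with omega_p = 1. OSIRIS and VORPAL are fully electromagnetic,
# relativistic, 3D, and parallel; only the loop structure is shared.
import numpy as np

def pic_cold_plasma(npart=20_000, ng=128, L=2 * np.pi, dt=0.05, nsteps=400):
    dx = L / ng
    q, m = -L / npart, L / npart                     # macro charge/mass (q/m = -1)
    x = (np.arange(npart) + 0.5) * (L / npart)       # quiet start: cold, uniform
    v = 0.01 * np.sin(2 * np.pi * x / L)             # small velocity perturbation
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    field_energy = []
    for _ in range(nsteps):
        # Cloud-in-cell charge deposit, plus a uniform neutralizing ion background.
        j = np.floor(x / dx).astype(int) % ng
        w = x / dx - np.floor(x / dx)
        rho = np.zeros(ng)
        np.add.at(rho, j, q * (1 - w) / dx)
        np.add.at(rho, (j + 1) % ng, q * w / dx)
        rho += 1.0
        # Field solve from Gauss's law: ik E_k = rho_k  =>  E_k = -i rho_k / k.
        rho_k = np.fft.fft(rho)
        E_k = np.zeros_like(rho_k)
        E_k[1:] = -1j * rho_k[1:] / k[1:]            # E_0 = 0 (neutral plasma)
        E = np.real(np.fft.ifft(E_k))
        # Gather the field to the particles, then kick and drift.
        Ep = (1 - w) * E[j] + w * E[(j + 1) % ng]
        v += (q / m) * Ep * dt
        x = (x + v * dt) % L
        field_energy.append(0.5 * np.sum(E**2) * dx)
    return np.array(field_energy)

if __name__ == "__main__":
    W = pic_cold_plasma()
    # Cold-plasma field energy oscillates at 2*omega_p, i.e. with period pi
    # (about 63 steps at dt = 0.05); check the spacing of its peaks.
    peaks = np.flatnonzero((W[1:-1] > W[:-2]) & (W[1:-1] > W[2:])) + 1
    print("field-energy peak spacing (steps):", np.diff(peaks)[:5])
```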
Full PIC: OSIRIS and Vorpal
• Successfully applied to various
LWFA and PWFA problems
• Scale well to 1,000's of processors
Figures: colliding laser pulses; self-ionized particle-beam wake; particle beams; a 3D LWFA simulation; and speedup s(N) vs. number of processors.
Quasi-static PIC:
QuickPIC
Code features:
• Based on the UPIC parallel object-oriented plasma simulation framework.
Model features:
• Highly efficient quasi-static model for beam drivers
• Ponderomotive guiding center + envelope model for
laser drivers.
• Can be 100+ times faster than conventional PIC with
no loss in accuracy.
• ADK model for field ionization.
Applications:
• Simulations for PWFA experiments,
E157/162/164/164X/167
• Study of electron cloud effect in LHC.
• Plasma afterburner design
Figures: afterburner, hosing, and E164X simulations.
Recent highlights: LWFA simulations
using full PIC
• Phys. Rev. Lett. by Tsung et al. (September 2004), in which a peak energy of 0.8 GeV and a mono-energetic beam with a central energy of 280 MeV were reported in full-scale 3D PIC simulations.
• Three Nature papers (September 2004) in which mono-energetic electron beams with energy near 100 MeV were measured, with supporting PIC simulations presented. SciDAC members were collaborators on two of these Nature publications, and the SciDAC codes OSIRIS and VORPAL were used.
Vorpal result on cover
Modeling self-ionized PWFA experiment with QuickPIC
Figure: the E164X experiment, located in the FFTB (not to scale). A 30 GeV electron beam (N = 1-2 x 10^10, σz = 0.1 mm) traverses a lithium plasma (ne ~ 6 x 10^15 cm^-3, L ~ 30 cm) pre-ionized by a 193 nm laser pulse; diagnostics along the ~25 m beamline include optical transition radiators, a streak camera (1 ps resolution), an X-ray diagnostic, a Cerenkov radiator, a spectrometer, and the dump.
Figure: QuickPIC simulation of beam energy E (GeV) and beam current (kA) along the bunch. Emax ~ 4 GeV (initial energy chirp considered); Emax ~ 5 - 0.5 (betatron radiation) ≈ 4.5 GeV.
Afterburner simulation:
0.5 TeV → 1 TeV in 28 meters
Figure: simulation snapshots at s = 0 m and s = 28.19 m.
Simulation done with QuickPIC in 5,000 node-hours; a full PIC run would have taken 5,000,000 node-hours!
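A quick order-of-magnitude check of the plasma scales behind these results (not a SciDAC code): for the E164X-class density of 6 x 10^15 cm^-3 quoted above, the script below evaluates the plasma frequency, the plasma wavelength, and the cold non-relativistic wave-breaking field E0 = m_e c ω_p / e, which sets the multi-GV/m scale of the accelerating fields.

```python
# Back-of-the-envelope check (not a SciDAC code): plasma frequency, plasma
# wavelength, and the cold non-relativistic wave-breaking field
# E0 = m_e * c * omega_p / e for the quoted density of 6e15 cm^-3.
import math

def plasma_scales(n_cm3):
    e, m_e = 1.602176634e-19, 9.1093837015e-31
    eps0, c = 8.8541878128e-12, 2.99792458e8
    n = n_cm3 * 1.0e6                               # cm^-3 -> m^-3
    omega_p = math.sqrt(n * e**2 / (eps0 * m_e))    # rad/s
    lambda_p = 2.0 * math.pi * c / omega_p          # m
    E0 = m_e * c * omega_p / e                      # V/m
    return omega_p, lambda_p, E0

if __name__ == "__main__":
    wp, lp, E0 = plasma_scales(6.0e15)
    print(f"omega_p = {wp:.2e} rad/s, lambda_p = {lp * 1e6:.0f} um, E0 = {E0 / 1e9:.1f} GV/m")
```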
Vision for the future: SciDAC 2
High-fidelity modeling of 0.1 to 1 TeV plasma accelerator stages
• Physics goals:
— A) Model 1 to 10 GeV plasma accelerator stages: predict and design near-term experiments.
— B) Extend plasma accelerator stages to the 250 GeV - 1 TeV range: understand physics & scaling laws.
— C) Use plasma codes to definitively model e-cloud physics:
• 30 minutes of beam circulation time in the LHC
• ILC damping ring
• Software goals:
— A) Add pipelining to QuickPIC: allow QuickPIC to scale to 1000's of processors.
— B) Add self-trapped particles to QuickPIC and the ponderomotive guiding center VORPAL packages.
— C) Improve numerical dispersion* in OSIRIS and VORPAL.
— D) Scale OSIRIS, VORPAL, and QuickPIC to 10,000+ processors.
— E) Merge reduced models and full models.
— F) Add circular and elliptical pipes* to QuickPIC and UPIC for e-cloud.
— G) Add mesh refinement to* QuickPIC, OSIRIS, VORPAL, and UPIC.
— H) Develop better data analysis and visualization tools for complicated phase-space data.**
*Working with the APDEC ISIC
**Working with the visualization center
In Conclusion…
• Q: What is the scope of our research in regard to HEP
short/medium/long-range applications?
• A: It is mainly short/medium.
—Capabilities have been developed, codes applied to:
• Short: PEP-II, Tevatron, FNAL Booster, LHC
• Medium: ILC
• Long: Exploration of advanced accelerator concepts
– these activities are highly leveraged and represent 10% of the SciDAC AST budget
Final remarks
• Future HEP facilities will cost ~$0.5B to ~$10B
—High end modeling is crucial to
• Optimize designs
• Reduce cost
• Reduce risk
—Given the magnitude of the investment in the facility, the
$1.8M investment in SciDAC is tiny, but the tools are
essential
• Laser/plasma systems are extraordinarily complex
—High fidelity modeling, used in concert with theory &
experiment, is essential to understand the physics and
help realize the promise of advanced accelerator
concepts
Acronyms used
• SciDAC: Scientific Discovery through Advanced Computing
• AST: SciDAC Accelerator Science & Technology project
• ASCR: Office of Advanced Scientific Computing Research
• BD: Beam Dynamics activities of SciDAC AST
• EM: Electromagnetics activities of SciDAC AST
• AA: Advanced Accelerator activities of SciDAC AST
• ISIC: SciDAC Integrated Software Infrastructure Center
— TOPS: Terascale Optimal PDE Solvers center
— TSTT: Terascale Simulation Tools and Technologies center
— APDEC: Applied Partial Differential Equations center
— PERC: Performance Evaluation Research Center
• SAPP: Scientific Application Partnership Program (ASCR-supported researchers affiliated w/ specific SciDAC projects)