Advanced Computations Department
DOE HEP Physics Program Review
June 14-16, 2005 @SLAC
Kwok Ko
* Work supported by U.S. DOE ASCR & HEP Divisions under contract DE-AC02-76SF00515
ACD Mission
Formed in 2000 to focus on high performance computing
with the mission to:
• Develop new simulation capability to support accelerator R&D at SLAC & accelerator facilities across SC,
• Advance computational science to enable ultra-scale computing on SC’s flagship computers (NERSC, ORNL),
• Share resources with the community and educate/train future computational scientists.
Support: Base program, SciDAC, Accelerator projects, SBIR + others
Personnel: 15 people / 13 FTE (5 computational physicists,
7 computer scientists,
2 graduate students,
1 admin/technical assistant)
Output: 3 PhD theses, 5 papers, 3 reports, 30 talks/posters (2003-05)
ACD R&D Overview & SciDAC
High Performance Computing (NERSC, ORNL)
[Diagram: ACD's core areas (Accelerator Modeling, Computational Mathematics, Computing Technologies) drive Parallel Code Development and Modeling & Simulation for accelerators at SLAC, FNAL, ANL, JLab, MIT, DESY, KEK, and PSI; SciDAC Computational Science partners: LBNL, LLNL, SNL, Stanford, UCD, RPI, CMU, Columbia, UWisconsin; SBIR partner: STAR Inc.]
Electromagnetic Modeling
Elements of Computational Science
[Diagram: the modeling workflow for an NLC cell design — CAD/Meshing, Partitioning, Solvers, Refinement, Analysis, Optimization, Performance, Visualization]
Large-scale electromagnetic modeling is enabled by advancing all elements through SciDAC collaborations
SciDAC ESS Team
“Electromagnetic Systems Simulation”
SLAC/ACD
  Accelerator Modeling: K. Ko, V. Ivanov, A. Kabel, Z. Li, C. Ng, L. Xiao, A. Candel (PSI)
  Computational Mathematics: L. Lee, L. Ge, E. Prudencio, S. Chen (Stanford)
  Computing Technologies: N. Folwell, G. Schussman, R. Uplenchwar, A. Guetz (Stanford)
ISICs (TSTT, TOPS, PERC) and SAPP
  LBNL: E. Ng, W. Gao, X. Li, C. Yang, P. Husbands, A. Pinar, D. Bailey, D. Gunter
  LLNL: L. Diachin, D. Brown, D. Quinlan, R. Vuduc
  CMU: O. Ghattas, V. Akcelik
  Columbia: D. Keyes
  Stanford: G. Golub
  RPI: M. Shephard, A. Brewer, E. Seol
  SNL: P. Knupp, K. Devine, L. Fisk
  UWisconsin: T. Tautges, H. Kim, J. Kraftcheck
  UCD: K. Ma, H. Yu, Z. Bai
Parallel Code Development
Electromagnetics (SciDAC funded):
• Tau3P/T3P – time-domain simulation with excitations (Tau3P: generalized Yee grid; T3P: finite-element discretization)
• Omega3P – frequency-domain mode calculation (finite-element)
• S3P – scattering matrix evaluation
Beam Dynamics (SLAC supported):
• Weak-strong beam-beam
• Strong-strong beam-beam
• TraFiC4 – CSR
Track3P – particle tracking with surface physics
V3D – visualization/animation of meshes, particles & fields
“Unstructured Grid and Parallel Computing”
Achievements in Accelerator Science
(Electromagnetics & Beam Dynamics)
NLC DDS Wakefields
Omega3P/Tau3P computed the long-range wakefields in the 55-cell Damped Detuned Structure to verify the NLC design's wakefield suppression by damping and detuning.
NLC 55-cell DDS
Omega3P: sum over eigenmodes
Tau3P: direct beam excitation
[Figures: Omega3P and Tau3P wakefield results]
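For reference, the eigenmode (Omega3P) route assembles the long-range transverse wake as a damped sum over the computed dipole modes; a standard form (notation assumed here, not given on the slide):

    W_\perp(s) = \sum_n 2 k_n \sin\!\left(\frac{\omega_n s}{c}\right) e^{-\omega_n s / (2 Q_n c)}, \quad s > 0,

with k_n the kick factor, \omega_n the frequency, and Q_n the damped quality factor of mode n.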
NLC Dark Current
Dark current pulses were simulated for the first time in a 30-cell X-band structure with Track3P and compared with data. The simulation shows an increase in dark current during the pulse risetime due to field enhancement from dispersive effects.
Track3P: dark current simulation
(red – primary particles, green – secondary particles)
[Plots: Track3P vs data — dark current at 3 pulse risetimes: 10, 15, 20 nsec]
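The slide does not specify Track3P's emission model; a standard Fowler-Nordheim form for the field-emitted (dark) current density, commonly assumed in such simulations:

    J \approx \frac{A (\beta E)^2}{\phi} \exp\!\left(-\frac{B \phi^{3/2}}{\beta E}\right)

with \phi the work function, \beta the local field-enhancement factor, A \approx 1.54\times 10^{-6} A·eV/V^2, and B \approx 6.83\times 10^9 eV^{-3/2}·V/m; transient enhancement of \beta E during the risetime raises J exponentially.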
ILC Cavity Design
An international collaboration (DESY, KEK, SLAC, FNAL, JLab) is working on a Low-Loss (LL) cavity (23% lower cryogenic loss) as a viable option for the ILC linac. SLAC is calculating the HOM damping & multipacting for the DESY and KEK designs.
ILC LL 9-cell Cavity Design
ILC Cavity HOM Damping
Partitioned Mesh of LL Cavity
Complex Omega3P is being used to calculate the Qext of
dipole modes in the DESY and KEK LL cavity designs.
[Plot: Qext of dipole modes in the ICHIRO-2 cavity, DESY vs KEK designs — Qext from 10^1 to 10^6 over 1.45–1.95 GHz]
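In the complex eigenproblem solved by Complex Omega3P, Qext follows directly from the complex eigenfrequency \omega of each damped mode via the standard relation:

    Q_{ext} = \frac{\mathrm{Re}\,\omega}{2\,|\mathrm{Im}\,\omega|}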
PEP-II Vertex Bellows Damping
Omega3P was used to study the
effectiveness of ceramic tiles
mounted on the bellows convolution
to damp localized modes that
contribute to HOM heating of the
bellows. Bellows modes can be
damped to very low Qs (~20-50).
[Figures: PEP-II vertex bellows, ceramic tile absorber, bellows mode pattern, dielectric loss distribution]
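The damping mechanism is dielectric loss in the ceramic; the time-averaged power density deposited in a tile follows the standard loss-tangent relation (notation assumed):

    p = \frac{1}{2}\, \omega \varepsilon_0 \varepsilon_r \tan\delta\, |\mathbf{E}|^2

so tiles placed where the bellows-mode field is largest give the strongest damping.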
LCLS RF Gun Cavity Design
ACD provided the dimensions for the LCLS RF Gun cavity that meet two important requirements:
• minimized dipole and quadrupole fields via a racetrack dual-feed coupler design,
• reduced pulse heating by rounding of the z coupling iris.
A new parallel Particle-In-Cell (PIC)
capability is being developed in T3P
for self-consistent modeling of RF
guns needed for the LCLS upgrade,
future light sources and FELs.
Quad (βr)/mm
0.003
cylindrical cavity
racetrack with offset=0.05 "
0.002
0.001
0.000
-0.001
-0.002
-0.003
-200
Quad
-100
0
rf phase (degree)
100
200
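For illustration only, the PIC cycle being added to T3P follows the usual deposit → field-solve → gather → push loop; a minimal 1D electrostatic sketch (T3P itself is a parallel 3D finite-element electromagnetic code, and none of the names below come from it):

```python
import numpy as np

# Minimal 1D electrostatic PIC cycle: deposit -> field solve -> gather -> push.
ng, L = 64, 1.0                        # grid cells, domain length
npart, dt, steps = 10000, 0.05, 200
dx = L / ng
q, m = -1.0, 1.0                       # normalized particle charge and mass
rng = np.random.default_rng(0)
x = rng.random(npart) * L              # particle positions
v = 0.01 * rng.standard_normal(npart)  # particle velocities

for _ in range(steps):
    idx = (x / dx).astype(int) % ng
    # 1) Deposit charge on the grid (nearest-grid-point weighting)
    rho = q * np.bincount(idx, minlength=ng) / dx
    rho -= rho.mean()                  # uniform neutralizing background
    # 2) Field solve: Poisson equation via FFT, then E = -d(phi)/dx
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                         # avoid division by zero for the k=0 mode
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = np.fft.ifft(-1j * k * phi_k).real
    # 3) Gather the field at particle positions and push (leapfrog)
    v += (q / m) * E[idx] * dt
    x = (x + v * dt) % L
```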
LCLS CSR Effects
LCLS Bunch Compressor (with P. Emma): Predict FEL
performance in the self-consistent Coherent Synchrotron
Radiation (CSR) regime for different compressor settings
[Plots: slice saturation power and slice gain length for different compressor settings]
Coherent Edge Radiation: a field-viewer module for TraFiC4 allows study of the spatial & temporal behavior of the detector signal
Tevatron Beam-Beam Simulation
Tevatron (with Y. Cai and T. Sen): Calculate actual lifetimes
and lifetime signatures for the machine at injection and
collision for different machine parameters
New version of parallel beam-beam framework PLIBB:
• Allows billions of particle-turns
• Resolves ~100h lifetime (collision case!)
• Handles chromaticity exactly
• Strong-strong being integrated
[PLIBB results: low particle loss rates at collision; lifetime enhancement with lowered chromaticity]
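For context, the weak-strong kick from a round Gaussian opposing bunch has the standard form (PLIBB's actual beam-beam maps are more general; notation assumed):

    \Delta r' = -\frac{2 N r_0}{\gamma} \cdot \frac{1 - e^{-r^2/(2\sigma^2)}}{r}

with N the bunch population, r_0 the classical particle radius, \gamma the Lorentz factor, and \sigma the transverse beam size; resolving ~100 h lifetimes against such weak nonlinear kicks is what drives the billions of particle-turns.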
PSI Cyclotron HOM Analysis
First-ever eigenmode analysis of an entire ring cyclotron, carried out as PhD research (L. Stingelin) to investigate the beam-cavity interactions in the existing machine and a future upgrade.
[Figures: cavity, vacuum chamber, mixed modes (new)]
Advances in Computational Science
(SciDAC)
Parallel Meshing (SNL, UWisconsin)
[Figure: cavity mesh partitioned across processors 1–4]
To model multiple ILC cavities, a parallel meshing capability has been developed in collaboration with SNL and UWisconsin (PhD thesis) to generate VERY LARGE meshes directly on the supercomputer, overcoming the memory limitation of desktops.
Eigensolvers (LBL, UCDavis, Stanford)
With LBL, UCD and Stanford, a comprehensive capability
has been under development for solving large, complex RF
cavities to accuracies previously not possible. The parallel
eigensolver Omega3P has been successfully applied to
numerous accelerator cavities and beamline components.
[Diagram: Omega3P eigensolver suite — problem classes (lossless, lossy material, periodic structure, external coupling, self-consistent loop) served by eigensolvers (ESIL, ISIL with refinement, Implicit Restarted Arnoldi, SOAR), built on Krylov subspace methods, domain-specific preconditioners, and sparse direct solvers (WSMP, MUMPS, SuperLU)]
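A minimal serial sketch of the shift-invert eigensolve at the heart of such a suite, using SciPy stand-ins (the toy K and M below merely take the place of the curl-curl stiffness and mass matrices Omega3P assembles; the production solver is parallel and uses WSMP/MUMPS/SuperLU for the factorization):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generalized eigenproblem K x = lambda M x with lambda = (omega/c)^2,
# solved by shift-invert Arnoldi/Lanczos near a chosen frequency shift.
n = 2000
K = sp.diags(np.linspace(1.0, 10.0, n), 0, format="csc")  # toy "stiffness"
M = sp.identity(n, format="csc")                          # toy "mass"

sigma = 2.5  # shift placed in the frequency band of interest
# which="LM" in shift-invert mode returns the eigenvalues nearest sigma
vals, vecs = spla.eigsh(K, k=6, M=M, sigma=sigma, which="LM")
print(np.sqrt(vals))  # omega/c for the six modes closest to the shift
```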
Mesh Refinement (RPI)
In modeling RIA’s RFQs, Adaptive Mesh Refinement (AMR) with Omega3P provided accuracy gains of 10 and 2 in frequency and wall-loss calculations over standard codes, while using a fraction of the CPU time of the case without AMR.
Wall Loss on AMR Mesh
[Plots: RFQ frequency convergence (54.3–55.2 MHz) and Q0 convergence (5750–6100) vs number of unknowns (0–4,000,000); AMR speeds up convergence, thereby minimizing computing resources]
More accurate f and Q predictions reduce the number of tuners and the tuning range, and allow for a better cooling design
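The AMR cycle is the usual solve → estimate → mark → refine loop; a toy 1D sketch of the mark-and-refine step (illustrative only — Omega3P refines tetrahedral meshes against an eigenmode error estimate, not the stand-in interpolation error used here):

```python
import numpy as np

# Adaptive refinement loop: estimate local error per element, then bisect
# the worst elements, concentrating unknowns where the solution varies fast.
f = lambda x: np.exp(-200.0 * (x - 0.3) ** 2)  # stand-in field with a sharp peak
nodes = list(np.linspace(0.0, 1.0, 5))

for _ in range(10):
    nodes.sort()
    errs, mids = [], []
    for a, b in zip(nodes[:-1], nodes[1:]):
        m = 0.5 * (a + b)
        mids.append(m)
        # Error indicator: midpoint deviation from linear interpolation
        errs.append(abs(f(m) - 0.5 * (f(a) + f(b))))
    # Mark and refine: bisect the worst ~30% of elements
    thresh = np.quantile(errs, 0.7)
    nodes += [m for e, m in zip(errs, mids) if e >= thresh]

print(len(nodes), "nodes after adaptive refinement")
```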
Shape Optimization (CMU, SNL, LBNL)
An ongoing SciDAC project is developing a parallel shape optimization tool to replace the existing manual process of optimizing a cavity design with direct computation. The capability requires expertise from SciDAC’s ISICs.
[Diagram: optimization loop — geometric model → meshing → Omega3P → sensitivity → optimization → updated geometric model; an extra meshing + Omega3P pass is required only for discrete sensitivity]
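In outline, the loop amounts to wrapping the meshing + eigensolve step in an optimizer; a toy sketch (solve_frequency below is a hypothetical surrogate for the meshing/Omega3P stage, and the finite-difference gradients stand in for the discrete sensitivities):

```python
import numpy as np
from scipy.optimize import minimize

F_TARGET = 1.3e9  # target mode frequency (Hz), illustrative value

def solve_frequency(p):
    """Hypothetical surrogate for the meshing + Omega3P eigensolve step:
    returns the mode frequency for geometry parameters p."""
    return 1.3e9 * (1.0 + 0.1 * p[0] - 0.05 * p[1] ** 2)

def objective(p):
    # Relative frequency mismatch; a real tool would add wall-loss,
    # field-flatness, etc. terms to the objective.
    return ((solve_frequency(p) - F_TARGET) / F_TARGET) ** 2

# BFGS with finite-difference gradients stands in for the sensitivity loop
result = minimize(objective, x0=np.array([0.2, 0.1]), method="BFGS")
print(result.x, solve_frequency(result.x))
```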
Visualization (UCDavis)
Graphics tools for rendering LARGE, multi-stream, 3D unstructured data have been developed, and a dedicated visualization cluster will soon be installed, both to support accelerator analysis such as cavity design studies.
[Animation: mode rotation (in space and time) exhibited by the two polarizations of a damped dipole mode in the ILC cavity]
Dissemination
• HEP/SBIR: STAR Inc. and ACD are developing GUIs to interface SLAC’s parallel codes, which are in use at, e.g., FNAL and KEK. These codes can potentially replace commercial software (MAFIA, HFSS) at DOE sites, saving ~$1M+ per year in leases.
• USPAS: SciDAC codes and capabilities are shared regularly with the community via the course “Computational Methods in Electromagnetism”
USPAS sponsored by Cornell University, held in Ithaca, NY, June 20 - July 1, 2005
http://uspas.fnal.gov/
Education/Training
PhDs completed in ACD:
Yong Sun, SCCM, Stanford University, March 2003
“The Filter Algorithm for Solving Large-Scale Eigenvalue Problems from
Accelerator Simulations”
Greg Schussman, Computer Science, UCDavis, December 2003
“Interactive and Perceptively Enhanced Visualization of Large, Complex
Line-based Datasets”
Lukas Stingelin, Physics, École Polytechnique Fédérale de Lausanne (EPFL), December 2004
“Beam-cavity Interactions in High Power Cyclotrons”
PhDs in progress:
Adam Guetz, ICME, Stanford University
Sheng Chen, ICME, Stanford University
Summer interns – Grad/Undergrad
ACD Goals
• Continue to support Accelerator Science across SC
• Continue SciDAC collaborations in Computational Science
• Become involved in Astroparticle Physics & Photon Science
[Images: ILC LL cavity & cryomodule, cavity for JLab 12 GeV upgrade, BPM & wakefields in LCLS undulator, XFEL SC RF gun, MIT PBG]