Title: Beam Simulations at SLAC/ACD: Current Work, Future Plans
Slide 1: Beam Simulations at SLAC/ACD: Current Work, Future Plans
- Andreas Kabel, Stanford Linear Accelerator Center
Slide 2: Parallel Beam Dynamics Simulations
- SLAC/ACD focuses on the development of parallel beam dynamics codes for optimizing current accelerators and designing future machines
- TraFIC4 (w/ DESY) to simulate Coherent Synchrotron Radiation (CSR) in LCLS, X-FEL, ...
- NIMZOVICH to model strong-strong beam-beam effects in PEP-II, Tevatron, LHC, KEK-B, ...
- DUMBBB to model weak-strong beam-beam effects in the Tevatron
Slide 3: TraFIC4 CSR Code
- Parallel, 3-D, self-consistent; weighted macroparticles, particle-to-particle interactions via retarded potentials
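The particle-to-particle sum is what makes the method scale as O(N^2) and what motivates the parallelization work described below. A minimal sketch of that cost structure, using a toy 1-D inverse-square kick in place of TraFIC4's actual retarded-potential kernel (which acts along full 3-D trajectories):

```python
def pairwise_kicks(positions, charge=1.0):
    # Toy 1-D particle-to-particle kick sum; TraFIC4's real kernel
    # evaluates retarded potentials along 3-D trajectories, but the
    # O(N^2) double loop over macroparticles has the same shape.
    n = len(positions)
    kicks = [0.0] * n
    for i in range(n):            # O(N^2): every particle feels every other
        for j in range(n):
            if i == j:
                continue
            dz = positions[i] - positions[j]
            kicks[i] += charge * (1.0 if dz > 0 else -1.0) / dz**2
    return kicks
```

By symmetry of the pairwise kicks, the total momentum transfer sums to zero, which is a handy sanity check on any rewrite of the loop.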
Slide 4: LCLS BC Optimization
TraFIC4 is used to compute CSR effects in the bunch compressors.
Slide 5: LCLS Bunch Compressor
- Formation of current cusps and low energy spread requires high resolution
Slide 6: LCLS Bunch Compressor
- Slice the output data, calculate FEL figures of merit
- Result: FEL performance continues to improve at bunch lengths well below the design value
Slide 7: Work in Progress and Plans
- New parallelization scheme (w/ R. Uplenchvar)
  - Old way: store all trajectories on all processors, calculate the total kick on a select subset, and broadcast the kicks to all processes to update trajectories
  - New scheme: store only a few trajectories per process and request calculations involving other trajectories on demand
  - Memory savings by a factor of Nproc!
- R&D on improvements to the O(N^2) scheme, targeting very high-resolution 3-D simulations to resolve the microbunching instability
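The memory argument behind the new scheme can be sketched in a few lines. This is an illustrative model only (the function name and "records per process" unit are ours, not TraFIC4's): replicating all N trajectories on every process costs N records each, while distributing ownership costs only about N/Nproc:

```python
def records_per_process(n_traj, n_proc, scheme):
    # Illustrative memory model for the two parallelization schemes:
    # "replicated" = old way, every process stores all trajectories;
    # "distributed" = new way, each process owns ~N/Nproc trajectories
    # and fetches remote ones on demand.
    if scheme == "replicated":
        return n_traj
    if scheme == "distributed":
        return -(-n_traj // n_proc)   # ceiling division
    raise ValueError(scheme)
```

For a million trajectories on 64 processes the footprint drops from 1,000,000 to 15,625 records per process, i.e. exactly the factor of Nproc claimed on the slide.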
Slide 8: NIMZOVICH Beam-beam Code
- Strong-strong beam-beam
- Multi-bunch, multi-IP
- Includes parasitic crossings and crossing angles
- Highly parallel design (previous codes: ~32 processors); highly configurable
- Benchmarked against PEP-II data and against Cai's and Ohmi's codes
- 300,000 particle-push operations/s on a PC
NIMZOVICH used to be called b2b3, which is the opening move of the Nimzovich-Larsen attack. Also, it runs well on 64 processors.
Slide 9: NIMZOVICH - Bunch Parallelization
Slide 10: NIMZOVICH - Slice Parallelization
Slide 11: NIMZOVICH - Wraparound Algorithm
Particles past their last interaction slice can be moved into the next interaction after applying the transport map. As the particles' longitudinal coordinates may have changed, a hiatus period is introduced to wait for trailing particles to catch up. The wait time is adjusted dynamically at run time.
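A minimal sketch of the two ingredients above, with a hypothetical linear transport map (the `r56` coupling and both function names are illustrative, not NIMZOVICH's actual interface):

```python
import math

def transport(z, delta, r56=0.05):
    # Hypothetical linear one-turn map: the path length (longitudinal
    # coordinate z) shifts with the energy deviation delta, which is
    # why particles can fall behind after the wraparound.
    return z + r56 * delta, delta

def hiatus_steps(zs, slice_length=1.0):
    # Choose the hiatus (in slice steps) so the hindmost particle,
    # i.e. the most negative longitudinal coordinate, catches up
    # before the next interaction begins.
    lag = max(0.0, -min(zs))
    return math.ceil(lag / slice_length)
```

In a real run the hiatus would be re-evaluated each turn from the current coordinate spread, which is what "dynamically adjusted at run time" amounts to.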
Slide 12: NIMZOVICH - Field Particle Calculations
The electric field is calculated by FFT convolution with a Green's function. To transform the periodic convolution into a linear one, the grid is padded with zeroes in both directions. To reduce field discontinuities, the force on a particle is linearly interpolated between the forces present when the head and the tail of the opposing slice pass the present slice's center of gravity.
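The zero-padding trick can be shown in one short function. This is a 1-D sketch for brevity (the solver itself works on a 2-D transverse grid): padding both arrays to twice the grid size makes the FFT's circular convolution reproduce the open-boundary linear one on the original grid points.

```python
import numpy as np

def fft_linear_convolution(rho, green):
    # Zero-pad to twice the grid size so the FFT's periodic (circular)
    # convolution has no wraparound on the first n samples, turning it
    # into the linear convolution needed for open-boundary fields.
    n = len(rho)
    spectrum = np.fft.fft(rho, 2 * n) * np.fft.fft(green, 2 * n)
    return np.real(np.fft.ifft(spectrum))[:n]
```

Agreement with a direct (non-FFT) convolution on the same points is the standard correctness check for this kind of solver.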
Slide 13: NIMZOVICH - Adaptive Grids
The transverse size of the interaction region depends on the point in time during the interaction (hourglass effect). The expected size of the region is calculated from the unperturbed Twiss functions, and the interaction grid is sized accordingly; a separate Green's function is precalculated for each grid. We are currently testing dynamic grid rescaling at runtime.
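A sketch of how the expected region size follows from the unperturbed Twiss functions near a low-beta waist (the standard hourglass relation beta(s) = beta* (1 + (s/beta*)^2); the `n_sigma` grid margin is an assumed, illustrative parameter):

```python
import math

def beam_size(s, emittance, beta_star):
    # Hourglass effect: the beta function grows quadratically away from
    # the waist, so the rms transverse size sigma = sqrt(emittance * beta)
    # depends on where along the interaction the slices collide.
    beta = beta_star * (1.0 + (s / beta_star) ** 2)
    return math.sqrt(emittance * beta)

def grid_halfwidth(s, emittance, beta_star, n_sigma=8.0):
    # Size the interaction grid to cover +/- n_sigma of the expected
    # beam at this collision point; one Green's function per grid size.
    return n_sigma * beam_size(s, emittance, beta_star)
```

At s = beta* the beta function has doubled, so the grid must already be sqrt(2) times wider than at the waist, which is why a single fixed grid wastes either resolution or coverage.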
Slide 14: Work in Progress and Plans
- Speeding up the solver (w/ R. Uplenchvar); a factor of 2 can be expected
- Simulations for PEP-II
- Strong-strong simulations for the Tevatron at injection/collision
Slide 15: DUMBBB Beam-beam Code
- Embarrassingly parallel brute-force code for weak-strong beam-beam interactions
- Fast numeric calculation of the complex error function
- Optimized algorithms for 5 flavors of crossings; virtualized wrappers around template functions
- Routinely runs on 1024 processors
- Compute engine only; setup/analysis done elsewhere
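The complex error function enters weak-strong codes through the field of a Gaussian beam (the Bassetti-Erskine formula). As a hedged illustration of what "numeric calculation of complex erf" means, here is a plain Maclaurin-series version; it is adequate for moderate |z| only, and a production code like DUMBBB would use a much faster rational or continued-fraction approximation:

```python
import math

def cerf(z, terms=48):
    # Maclaurin series erf(z) = (2/sqrt(pi)) * sum_n (-1)^n z^(2n+1) / (n! (2n+1)).
    # Converges quickly for moderate |z|; illustrative only, not the
    # optimized algorithm used in production beam-beam codes.
    total = 0j
    for n in range(terms):
        total += (-1) ** n * z ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2.0 / math.sqrt(math.pi) * total
```

On the real axis this reduces to the ordinary error function, and it inherits the odd symmetry erf(-z) = -erf(z), both of which make easy unit tests.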
Slide 16: Tevatron Lifetimes
- The Tevatron at injection (150 GeV) has 72 parasitic crossings
- The lifetime is in the region of hours
- Back-of-the-envelope estimate: ~10 GParticleTurns needed to model this machine behavior
- DUMBBB is used to simulate the Tevatron lifetime
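How a figure on the 10-GParticleTurns scale arises can be reconstructed with illustrative numbers (the macroparticle and turn counts below are our assumptions, not values from the talk; only the ~47.7 kHz Tevatron revolution frequency is a known machine parameter):

```python
# Back-of-the-envelope: a lifetime of hours at the Tevatron's revolution
# frequency means a huge number of turns, and tracking enough particles
# for statistics multiplies this up to the GParticleTurns scale.
f_rev = 47.7e3                 # Tevatron revolution frequency, Hz (approximate)
turns_per_hour = f_rev * 3600.0          # ~1.7e8 turns in one hour of beam time
n_particles = 10_000           # hypothetical macroparticle count
n_turns = 1_000_000            # hypothetical number of tracked turns
particle_turns = n_particles * n_turns   # = 1e10, i.e. 10 GParticleTurns
```

Even this truncated tracking span covers well under an hour of real machine time, which is why the problem demands an embarrassingly parallel code on ~1000 processors.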
Slide 17: Tevatron Lifetimes from DUMBBB
Slide 18: Work in Progress and Plans
- More machine features: sextupoles, head-on collision
- Integrated poor man's DA (fixed points, chromaticity)
- Integrate more setup/analysis
- Understand diffusion processes
- Integrate with the NIMZOVICH code
Slide 19: Summary
- SLAC/ACD has a vigorous, growing program in parallel beam simulation
- It covers three important areas: CSR, strong-strong, and weak-strong beam-beam effects
- Though mostly supported by SLAC's base program, the ACD effort impacts accelerator projects beyond SLAC (e.g., the Tevatron, ...)
- Progress and scope have been limited by lack of resources, so the funding from SciDAC, though small, is helping to address these issues