1
Very Large Scale Computing In Accelerator Physics
  • Robert D. Ryne
  • Los Alamos National Laboratory

2
with contributions from members of
  • Grand Challenge in Computational Accelerator
    Physics
  • Advanced Computing for 21st Century Accelerator
    Science and Technology project

3
Outline
  • Importance of Accelerators
  • Future of Accelerators
  • Importance of Accelerator Simulation
  • Past Accomplishments
  • Grand Challenge in Computational Accelerator
    Physics
  • electromagnetics
  • beam dynamics
  • applications beyond accelerator physics
  • Future Plans
  • Advanced Computing for 21st Century Accelerator
    S&T

4
Accelerators have enabled some of the greatest
discoveries of the 20th century
  • Extraordinary tools for extraordinary science
  • high energy physics
  • nuclear physics
  • materials science
  • biological science

5
Accelerator Technology Benefits Science,
Technology, and Society
  • electron microscopy
  • beam lithography
  • ion implantation
  • accelerator mass spectrometry
  • medical isotope production
  • medical irradiation therapy

6
Accelerators have been proposed to address issues
of international importance
  • Accelerator transmutation of waste
  • Accelerator production of tritium
  • Accelerators for proton radiography
  • Accelerator-driven energy production

Accelerators are key tools for solving problems
related to energy, national security, and quality
of the environment
7
Future of Accelerators: Two Questions
  • What will be the next major machine beyond LHC?
  • linear collider
  • neutrino factory / muon collider
  • rare isotope accelerator
  • 4th generation light source
  • Can we develop a new path to the high-energy
    frontier?
  • Plasma/Laser systems may hold the key

8
Example: Comparison of the Stanford Linear Collider
and the Next Linear Collider
9
Possible Layout of a Neutrino Factory
10
Importance of Accelerator Simulation
  • Next generation of accelerators will involve
  • higher intensity, higher energy
  • greater complexity
  • increased collective effects
  • Large-scale simulations essential for
  • design decisions, feasibility studies
  • evaluate/reduce risk, reduce cost, optimize
    performance
  • accelerator science and technology advancement

11
Cost Impacts
  • Without large-scale simulation: cost escalation
  • SSC: a 1 cm increase in aperture, due to lack of
    confidence in the design, resulted in a $1B cost
    increase
  • With large-scale simulation: cost savings
  • NLC: large-scale electromagnetic simulations
    have led to a $100M cost reduction

12
DOE Grand Challenge In Computational Accelerator
Physics (1997-2000)
Goal - to develop a new generation of
accelerator modeling tools on High Performance
Computing (HPC) platforms and to apply them to
present and future accelerator applications of
national importance.
Beam Dynamics: LANL (S. Habib, J. Qiang, R. Ryne), UCLA (V. Decyk)
Electromagnetics: SLAC (N. Folwell, Z. Li, V. Ivanov, K. Ko, J. Malone,
B. McCandless, C.-K. Ng, R. Richardson, G. Schussman, M. Wolf),
Stanford/SCCM (T. Afzal, B. Chan, G. Golub, W. Mi, Y. Sun, R. Yu)
Computer Science
Computing Resources: NERSC, ACL
13
New parallel applications codes have been applied
to several major accelerator projects
  • Main deliverables: 4 parallel applications codes
  • Electromagnetics
  • 3D parallel eigenmode code, Omega3P
  • 3D parallel time-domain EM code, Tau3P
  • Beam Dynamics
  • 3D parallel Poisson/Vlasov code, IMPACT
  • 3D parallel Fokker/Planck code, LANGEVIN3D
  • Applied to SNS, NLC, PEP-II, APT, ALS, CERN/SPL

New capability has enabled simulations 3-4 orders
of magnitude greater than previously possible
14
Parallel Electromagnetic Field Solvers: Features
  • C implementation w/ MPI
  • Reuse of existing parallel libraries (ParMetis,
    AZTEC)
  • Unstructured grids for conformal meshes
  • New solvers for fast convergence and scalability
  • Adaptive refinement to improve accuracy and
    performance
  • Omega3P: 3D finite element w/ linear/quadratic
    basis functions (see the sketch below)
  • Tau3P: unstructured Yee grid
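In a finite-element eigenmode code such as Omega3P, the cavity field problem
reduces to a large sparse generalized eigenvalue problem K x = lambda M x, with
K the stiffness matrix and M the mass matrix. A minimal stand-in sketch of that
kind of solve (toy matrices rather than a real cavity mesh; not Omega3P itself):

    # Toy sketch: shift-invert solve of K x = lambda M x for modes near a
    # guess value.  K and M are stand-in sparse matrices, not a cavity model.
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    M = sp.identity(n, format="csc")

    sigma = 0.01                  # shift near the eigenvalues of interest
    vals, vecs = spla.eigsh(K, k=5, M=M, sigma=sigma, which="LM")
    print("modes nearest the shift:", vals)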

15
Why is Large-Scale Modeling Needed? Example: NLC
Rounded Damped Detuned Structure (RDDS) Design
  • highly three-dimensional structure
  • detuning/damping manifold for wakefield
    suppression
  • requires 0.01% accuracy in the accelerating
    frequency (roughly 1 MHz at the 11.4 GHz X-band
    operating frequency) to maintain efficiency
  • simulation mesh size close to fabrication
    tolerance (order of microns)
  • available 3D codes on desktop computers cannot
    deliver the required accuracy and resolution

16
NLC - RDDS Cell Design (Omega3P)
Frequency accuracy to 1 part in 10,000 is achieved
(Figure: accelerating mode, 1 MHz frequency scale)
17
NLC - RDDS 6 Cell Section (Omega3P)
18
NLC - RDDS Output End (Tau3P)
19
PEP-II, SNS, and APT Cavity Design (Omega3P)
20
Omega3P - Mesh Refinement
Peak wall loss in the PEP-II waveguide-damped RF cavity:

Refined mesh size     5 mm           2.5 mm         1.5 mm
Elements              23,390         43,555         106,699
Degrees of freedom    142,914        262,162        642,759
Peak power density    1.2811 MW/m2   1.3909 MW/m2   1.3959 MW/m2
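A quick reading of the table (a back-of-the-envelope check, not part of the
original analysis): the relative change in peak power density between successive
refinements shrinks from roughly 8% to roughly 0.4%, i.e. the result is
essentially converged at the 1.5 mm mesh.

    # Relative change in peak power density between successive mesh
    # refinements, using only the values from the table above.
    densities = {5.0: 1.2811, 2.5: 1.3909, 1.5: 1.3959}   # mesh size (mm) -> MW/m^2
    sizes = sorted(densities, reverse=True)               # [5.0, 2.5, 1.5]
    for coarse, fine in zip(sizes, sizes[1:]):
        change = abs(densities[fine] - densities[coarse]) / densities[fine]
        print(f"{coarse} mm -> {fine} mm: {100*change:.2f}% change")   # ~7.9%, ~0.36%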
21
Parallel Beam Dynamics Codes: Features
  • split-operator-based 3D parallel particle-in-cell
  • canonical variables
  • variety of implementations (F90/MPI, C, POOMA,
    HPF)
  • particle manager, field manager, dynamic load
    balancing
  • 6 types of boundary conditions for the field
    solvers: open/circular/rectangular transverse;
    open/periodic longitudinal
  • reference trajectory and transfer maps computed
    on the fly
  • philosophy
  • do not take tiny steps to push particles
  • do take tiny steps to compute maps, then push
    particles w/ the maps
  • LANGEVIN3D: self-consistent damping/diffusion
    coefficients

22
Why is Large-Scale Modeling Needed? Example:
Modeling Beam Halo in High-Intensity Linacs
  • Future high-intensity machines will have to
    operate with ultra-low losses
  • A major source of loss: low-density,
    large-amplitude halo
  • Large-scale simulations (100M particles) are
    needed to predict halo

Maximum beam size does not converge in
small-scale PC simulation (up to 1M particles)
23
Mismatch-Induced Beam Halo
Matched beam. x-y cross-section
Mismatched beam. x-y cross-section
24
Vlasov Code or PIC code?
  • Direct Vlasov
  • bad: very large memory (see the estimate after
    this list)
  • bad: subgrid-scale effects
  • good: no sampling noise
  • good: no collisionality
  • Particle-based (PIC)
  • good: low memory
  • good: subgrid resolution OK
  • bad: statistical fluctuations
  • bad: numerical collisionality
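The memory trade-off in the first bullet can be made concrete with a rough,
illustrative estimate (the grid and particle counts below are assumptions chosen
to match the scales quoted later in this talk, not figures from the original
slide):

    # Rough memory comparison: direct Vlasov stores f(x,y,z,px,py,pz) on a 6D
    # grid, while a PIC code stores 6 phase-space coordinates per macroparticle.
    bytes_per_double = 8

    n_grid_1d = 64                        # assumed grid points per dimension
    vlasov_bytes = n_grid_1d**6 * bytes_per_double
    print(f"direct Vlasov, 64^6 grid : {vlasov_bytes/1e9:.0f} GB")   # ~550 GB

    n_particles = 100_000_000             # 100M macroparticles (see later slides)
    pic_bytes = n_particles * 6 * bytes_per_double
    print(f"PIC, 100M particles x 6D : {pic_bytes/1e9:.1f} GB")      # ~4.8 GB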

25
How to turn any magnetic optics code into a
tracking code with space charge
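The recipe is the split-operator idea from the earlier beam dynamics slide: keep
the optics code's own transfer maps for the external fields, and interleave them
with space-charge kicks computed from the particle distribution. A minimal
sketch, where the coordinate layout and the space_charge_kick placeholder are
illustrative assumptions rather than the project's actual API:

    import numpy as np

    def track_with_space_charge(coords, half_maps, ds):
        # coords: (N, 6) phase-space array ordered (x, px, y, py, z, pz);
        # half_maps: the optics code's linear maps for half an element/step.
        for M_half in half_maps:
            coords = coords @ M_half.T             # external fields: first half step
            kick = space_charge_kick(coords, ds)   # self-field kick at this point
            coords[:, 1::2] += kick                # apply kicks to (px, py, pz)
            coords = coords @ M_half.T             # external fields: second half step
        return coords

    def space_charge_kick(coords, ds):
        # Placeholder: deposit charge on a grid, solve the Poisson equation,
        # interpolate the resulting fields back onto the particles (omitted).
        return np.zeros((coords.shape[0], 3))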
26
Development of IMPACT has Enabled the Largest,
Most Detailed Linac Simulations ever Performed
  • Model of the SNS linac used 400 accelerating
    structures
  • Simulations run w/ up to 800M particles on a
    512^3 grid
  • Approaching the real-world number of particles
    (900M for SNS)
  • 100M-particle runs now routine (5-10 hrs on 256
    PEs)
  • An analogous 1M-particle simulation using a
    legacy 2D code on a PC requires a weekend
  • A 3-order-of-magnitude increase in simulation
    capability

100x larger simulations performed in 1/10 the time
27
Comparison: Old vs. New Capability
  • 1980s: 10K-particle, 2D serial simulations
    typical
  • Early 1990s: 10K-100K particle, 2D serial
    simulations typical
  • 2000: 100M-particle runs routine (5-10 hrs on 256
    PEs), with more realistic treatment of beamline
    elements

SNS linac: 500M particles
LEDA halo experiment: 100M particles
28
Intense Beams in Circular Accelerators
  • Previous work emphasized high intensity linear
    accelerators
  • New work treats intense beams in bending magnets
  • Issue: the vast majority of accelerator codes
    use arc length (z or s) as the independent
    variable
  • Simulation of intense beams requires solving the
    Poisson equation, ∇²φ = -ρ/ε₀, at fixed time (see
    the sketch at the end of this slide)

x-z plot based on x-φ data from an s-code, plotted
at 8 different times
The split-operator approach, now treated in both
linear and circular systems, will soon make it
possible to "flip a switch" and turn space charge
on/off in the major accelerator codes
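A fixed-time space-charge solve amounts to solving the Poisson equation for the
bunch's self-field on a grid. A minimal, illustrative sketch (periodic boundary
conditions and an FFT-based solve are chosen for brevity; open-boundary bunches
would use a Green-function variant, and this is not the project's actual solver):

    import numpy as np

    def solve_poisson_periodic(rho, dx, eps0=8.854e-12):
        # Solve grad^2 phi = -rho/eps0 on a periodic cubic grid via FFTs.
        n = rho.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                      # avoid division by zero (mean mode)
        phi_hat = np.fft.fftn(rho) / (eps0 * k2)
        phi_hat[0, 0, 0] = 0.0                 # fix the arbitrary mean of phi
        return np.real(np.fft.ifftn(phi_hat))

    # Example: potential of a Gaussian charge density on a 64^3 grid
    n, L = 64, 0.01
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    rho = np.exp(-(X**2 + Y**2 + Z**2) / (2 * (L/10)**2))
    phi = solve_poisson_periodic(rho, dx=L/n)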
29
Collaboration/impact beyond accelerator physics
  • Modeling collisions in plasmas
  • new Fokker/Planck code
  • Modeling astrophysical systems
  • starting w/ IMPACT, developing astrophysical PIC
    code
  • also a testbed for exploring scripting ideas
  • Modeling stochastic dynamical systems
  • new leap-frog integrator for systems w/
    multiplicative noise
  • Simulations requiring solution of large
    eigensystems
  • new eigensolver developed by SLAC/NMG and
    Stanford SCCM
  • Modeling quantum systems
  • spectral and DeRaedt-style codes to solve the
    Schrödinger, density-matrix, and Wigner-function
    equations

30
First-Ever Self-Consistent Fokker/Planck
  • Self-consistent Langevin-Fokker/Planck requires
    the analog of thousands of space charge
    calculations per time step
  • clearly such calculations are impossible.
    NOT!
  • DEMONSTRATED, thanks to modern parallel machines
    and intelligent algorithms

(Figures: diffusion coefficients; friction coefficient / velocity)
31
Schrödinger Solver: Two Approaches
  • Spectral (field-theoretic): FFTs, global
    communication
  • Discrete (DeRaedt-style): nearest-neighbor
    communication
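A minimal sketch of the spectral approach, for the 1D time-dependent Schrödinger
equation with hbar = m = 1 (the harmonic potential and step sizes below are
illustrative choices, not taken from the original work). The kinetic step is done
in k-space via FFTs, which is where the global communication comes from; a
DeRaedt-style discrete scheme would instead use nearest-neighbor updates.

    import numpy as np

    n, L, dt = 512, 40.0, 0.005
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L/n)

    V = 0.5 * x**2                                    # example: harmonic potential
    psi = np.exp(-0.5 * (x - 2.0)**2).astype(complex) # displaced Gaussian packet
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/n))    # normalize

    expV = np.exp(-0.5j * dt * V)                     # half-step potential factor
    expK = np.exp(-0.5j * dt * k**2)                  # full kinetic step in k-space

    for _ in range(1000):                             # Strang (split-step) stepping
        psi = expV * psi
        psi = np.fft.ifft(expK * np.fft.fft(psi))
        psi = expV * psi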
32
Conclusion: Advanced Computing for 21st Century
Accelerator Sci. & Tech.
  • Builds on foundation laid by Accelerator Grand
    Challenge
  • Larger collaboration
  • presently LANL, SLAC, FNAL, LBNL, BNL, JLab,
    Stanford, UCLA
  • Project Goal: develop a comprehensive, coherent
    accelerator simulation environment
  • Focus Areas
  • Beam Systems Simulation, Electromagnetic Systems
    Simulation, Beam/Electromagnetic Systems
    Integration
  • View toward near-term impact on
  • NLC, neutrino factory (driver, muon cooling),
    laser/plasma accelerators

33
Acknowledgement
  • Work supported by the DOE Office of Science
  • Office of Advanced Scientific Computing Research,
    Division of Mathematical, Information, and
    Computational Sciences
  • Office of High Energy and Nuclear Physics
  • Division of High Energy Physics, Los Alamos
    Accelerator Code Group