1
Using of Grid Prototype Infrastructure for QCD
Background Study to the H → γγ Process on
Alliance Resources
  • Vladimir Litvin, Harvey Newman, Sergey
    Schevchenko
  • Caltech CMS
  • Scott Koranda, Bruce Loftis, John Towns
  • NCSA
  • Miron Livny, Peter Couvares, Todd Tannenbaum,
    Jamie Frey
  • Wisconsin Condor

2
CMS Physics
  • The CMS detector at the LHC will probe
    fundamental forces in our Universe and search for
    the yet undetected Higgs Boson
  • Detector expected to come online in 2007

3
CMS Physics
4
Leveraging Alliance Grid Resources
  • The Caltech CMS group is using Alliance Grid
    resources today for detector simulation and data
    processing prototyping
  • Even during this simulation and prototyping phase
    the computational and data challenges are
    substantial

5
Goal: simulate the QCD background
  • The QCD jet-jet background cross section is huge
    (~10^10 pb). Previous studies estimated the rate
    Rjet at which a jet is misidentified as a photon
    and, because of limited CPU power, simply squared
    it (Rjet^2) to estimate the QCD jet-jet background
    rate. Hence, correlations within an event were
    not taken into account in previous studies
  • Previous simulations also used a simplified
    geometry, and non-Gaussian tails in the
    resolution were not adequately simulated
  • Our goal is to perform a full simulation of a
    relatively large QCD sample, measure the rate of
    diphoton misidentification and compare it with
    other types of background

6
Generation of QCD background
  • The QCD jet cross section depends strongly on the
    pT of the parton in the hard interaction
  • The QCD jet cross section is huge. We need a
    reasonable preselection at the generator level
    before passing events through the full detector
    simulation
  • An optimal choice of the pT threshold is needed
  • Our choice is pT > 35 GeV
  • pT > 35 GeV is a safe cut: we do not lose a
    significant fraction of events that could fake
    the Higgs signal at the preselection level
7
Generator level cuts
  • QCD background
  • Standard CMS cuts: Et1 > 40 GeV, Et2 > 25 GeV,
    |η1,2| < 2.5
  • at least one pair of any two neutral particles
    (π0, η, e, γ, ...) with
  • Et1 > 37.5 GeV
  • Et2 > 22.5 GeV
  • |η1,2| < 2.5
  • minv in 80-160 GeV
  • Rejection factor at generator level ~3000
  • Photon bremsstrahlung background
  • Standard CMS cuts: Et1 > 40 GeV, Et2 > 25 GeV,
    |η1,2| < 2.5
  • at least one neutral particle (π0, η, e, γ, ...)
    with
  • Et > 37.5 GeV
  • |η| < 2.5
  • Rejection factor at generator level ~6

8
Challenges of a CMS Run
  • CMS run naturally divided into two phases
  • Monte Carlo detector response simulation
  • 100s of jobs per run
  • each generating 1 GB
  • all data passed to next phase and archived
  • reconstruct physics from simulated data
  • 100s of jobs per run
  • jobs coupled via Objectivity database access
  • 100 GB data archived
  • Specific challenges
  • each run generates 100 GB of data to be moved
    and archived
  • many, many runs necessary
  • simulation and reconstruction jobs run at
    different sites
  • large human effort starting and monitoring jobs,
    moving data

9
Tools
  • Generation level - PYTHIA 6.152 (CTEQ 4L
    structure functions)
    http://www.thep.lu.se/~torbjorn/Pythia.html
  • Full Detector Simulation - CMSIM 121 (includes
    full silicon version of the tracker)
    http://cmsdoc.cern.ch/cmsim/cmsim.html
  • Reconstruction - ORCA 5.2.0 with pileup at
    L = 2x10^33 cm^-2 s^-1 (30 pileup events per
    signal event) - http://cmsdoc.cern.ch/orca

10
Analysis Chain
  • Full analysis chain

11
Meeting Challenge With Globus and Condor
  • Globus
  • middleware deployed across entire Alliance Grid
  • remote access to computational resources
  • dependable, robust, automated data transfer
  • Condor
  • strong fault tolerance including checkpointing
    and migration
  • job scheduling across multiple resources
  • layered over Globus as personal batch system
    for the Grid

12
CMS Run on the Alliance Grid
  • Caltech CMS staff prepares input files on local
    workstation
  • Pushes one button to launch master Condor job
  • Input files transferred by master Condor job to
    Wisconsin Condor pool (700 CPUs) using Globus
    GASS file transfer

[Diagram: Caltech workstation → input files via Globus GASS → WI Condor pool]
13
CMS Run on the Alliance Grid
  • The Master Condor job at Caltech launches a
    secondary Condor job on the Wisconsin pool
  • The secondary Condor job launches 100 Monte Carlo
    jobs on the Wisconsin pool (a sketch of such a
    submit description follows this list)
  • each runs 12-24 hours
  • each generates 1 GB of data
  • Condor handles checkpointing and migration
  • no staff intervention
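A minimal sketch of what the submit description for these 100 jobs might look like; the executable and file names are hypothetical (the deck does not show this file), and Condor's standard universe is assumed because it is what provides the checkpointing and migration mentioned above.

# hypothetical submit description for the 100 Monte Carlo jobs
# standard universe enables Condor checkpointing and migration
universe     = standard
executable   = CMS/cmsim_run
arguments    = -run $(Process)
output       = mc/hg_90_$(Process).out
error        = mc/hg_90_$(Process).err
log          = mc/hg_90.log
notification = never
# queue 100 independent jobs, numbered 0-99 via $(Process)
queue 100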

14
CMS Run on the Alliance Grid
  • When each Monte Carlo job completes, its data is
    automatically transferred to UniTree at NCSA
  • each file is 1 GB
  • transferred using the Globus-enabled FTP client
    gsiftp (an example transfer is sketched below)
  • NCSA UniTree runs a Globus-enabled FTP server
  • authentication to the FTP server on the user's
    behalf using a digital certificate

[Diagram: 100 Monte Carlo jobs on the Wisconsin Condor pool → 100 data files (1 GB each) transferred via gsiftp → NCSA UniTree with Globus-enabled FTP server]
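For illustration only, one such transfer could look like the following, using globus-url-copy as a stand-in for the gsiftp client named above; the proxy step reflects the certificate-based authentication on the user's behalf, while the host name and paths are hypothetical.

# create a proxy credential from the user's digital certificate
grid-proxy-init

# copy one 1 GB output file to the NCSA UniTree FTP server
# (host name and paths are hypothetical)
globus-url-copy file:///scratch/cms/hg_90_sim_632.fz \
    gsiftp://unitree.ncsa.uiuc.edu/cms/Prod2001/hg_90_sim_632.fz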
15
CMS Run on the Alliance Grid
  • When all Monte Carlo jobs complete, the secondary
    Condor job reports to the Master Condor job at
    Caltech
  • Master Condor at Caltech launches a job to stage
    the data from NCSA UniTree to the NCSA Linux
    cluster
  • job launched via the Globus jobmanager on the
    cluster (a sketch follows the diagram note below)
  • data transferred using Globus-enabled FTP
  • authentication on the user's behalf using a
    digital certificate

[Diagram: Master Condor starts a job via the Globus jobmanager on the cluster to stage the data]
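A minimal Condor-G sketch of such a staging job; the jobmanager contact string and the staging script are hypothetical, since the slides do not give them.

# hypothetical globus-universe job that stages data from UniTree
# to the NCSA Linux cluster via the cluster's Globus jobmanager
universe        = globus
# the contact string below is a placeholder for the real NCSA cluster
globusscheduler = cluster.ncsa.uiuc.edu/jobmanager
executable      = CMS/stage_from_unitree.csh
arguments       = Prod2001 hg_90
output          = CMS/stage_hg_90.out
error           = CMS/stage_hg_90.err
log             = CMS/condor.log
notification    = always
queue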
16
CMS Run on the Alliance Grid
  • Master Condor at Caltech launches the physics
    reconstruction jobs on the NCSA Linux cluster
  • jobs launched via the Globus jobmanager on the
    cluster
  • Master Condor continually monitors the jobs and
    logs progress locally at Caltech (see the
    monitoring sketch below)
  • no user intervention required
  • authentication on the user's behalf using a
    digital certificate

[Diagram: Master Condor starts the reconstruction jobs via the Globus jobmanager on the cluster]
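From the Caltech side this monitoring needs only standard Condor tools; a sketch, where CMS/condor.log is the user log named in the submit description on the "Condor Details for Experts" slide:

# inspect the local queue holding the master and Condor-G jobs
condor_q

# follow the user log that Condor writes locally at Caltech
tail -f CMS/condor.log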
17
CMS Run on the Alliance Grid
  • When the reconstruction jobs complete, the data
    is automatically archived to NCSA UniTree
  • data transferred using Globus-enabled FTP
  • After the data is transferred the run is
    complete, and Master Condor at Caltech emails a
    notification to the staff

[Diagram: data files transferred via gsiftp to UniTree for archiving]
18
Condor Details for Experts
  • Use Condor-G
  • Condor + Globus
  • allows Condor to submit jobs to a remote host via
    a Globus jobmanager
  • any Globus-enabled host is reachable (with
    authorization)
  • Condor jobs run in the globus universe
  • use familiar Condor ClassAds for submitting jobs

universe        = globus
globusscheduler = beak.cs.wisc.edu/jobmanager-condor-INTEL-LINUX
environment     = CONDOR_UNIVERSE=scheduler
executable      = CMS/condor_dagman_run
arguments       = -f -t -l . -Lockfile cms.lock -Condorlog cms.log -Dag cms.dag -Rescue cms.rescue
input           = CMS/hg_90.tar.gz
remote_initialdir = Prod2001
output          = CMS/hg_90.out
error           = CMS/hg_90.err
log             = CMS/condor.log
notification    = always
queue
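From the Caltech workstation this description would be handed to the local Condor pool with condor_submit; the submit-file name below is hypothetical, since the slide does not give one.

# submit the master Condor-G job from the Caltech workstation
condor_submit hg_90.sub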
19
Condor Details for Experts
  • Exploit Condor DAGMan
  • DAG = directed acyclic graph
  • submission of Condor jobs based on dependencies
  • job B runs only after job A completes, job D runs
    only after job C completes, job E only after A,
    B, C and D complete
  • includes both pre- and post-job script execution
    for data staging, cleanup, or the like
Job jobA_632 Prod2000/hg_90_gen_632.cdr
Job jobB_632 Prod2000/hg_90_sim_632.cdr
Script pre jobA_632 Prod2000/pre_632.csh
Script post jobB_632 Prod2000/post_632.csh
PARENT jobA_632 CHILD jobB_632
Job jobA_633 Prod2000/hg_90_gen_633.cdr
Job jobB_633 Prod2000/hg_90_sim_633.cdr
Script pre jobA_633 Prod2000/pre_633.csh
Script post jobB_633 Prod2000/post_633.csh
PARENT jobA_633 CHILD jobB_633
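For reference, a DAG file like this could also be run by submitting it directly to a local DAGMan instance with condor_submit_dag; in the run described here, DAGMan itself is instead shipped to the Wisconsin pool as the Condor-G job shown on the previous slide.

# stand-alone alternative: run the DAG with a local DAGMan instance
condor_submit_dag cms.dag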

20
Monte Carlo Samples Simulated and Reconstructed
21
CPU timing
22
  • All cuts except isolation are applied
  • Distributions are normalized to Lint = 40 pb^-1

23
Isolation
  • Tracker isolation
  • Isolation cut: the number of tracks with pT > 1.5
    GeV in a ΔR = 0.30 cone around the photon
    candidate is zero (ΔR = sqrt(Δη^2 + Δφ^2))
  • Still optimizing the pT threshold and cone sizes
  • ECAL isolation
  • Sum of the Et energies of ECAL clusters in a cone
    around the photon candidate
  • Isolation cut: the sum of Et energy in a ΔR =
    0.30 cone around the photon candidate is less
    than 0.8 GeV

24
Background Cross Section
25
Conclusions
  • The goal of this study is to increase the
    efficiency of computer resource use and to reduce
    and minimize human intervention during simulation
    and reconstruction
  • proof of concept: it is possible to build a
    distributed system based on Globus and Condor
    (MOP is operational now)
  • A lot of work lies ahead to make this system as
    automatic as possible
  • Important results were obtained for the Higgs
    boson search in the two-photon decay mode
  • the main background is one prompt photon plus a
    bremsstrahlung photon or an isolated π0, which is
    ~50% of the total background; the QCD background
    is reduced to ~15% of the total background
  • More precise studies need much more CPU time