Multicore SALSA Parallel Computing and Web 2.0 for Cheminformatics and GIS Analysis

1
Multicore SALSA
Parallel Computing and Web 2.0 for Cheminformatics and GIS Analysis
  • 2007 Microsoft eScience Workshop at RENCI
  • The Friday Center for Continuing Education, UNC Chapel Hill, October 22, 2007
  • Geoffrey Fox, Seung-Hee Bae, Neil Devadasan, Rajarshi Guha, Marlon Pierce, Xiaohong Qiu, David Wild, Huapeng Yuan
  • Community Grids Laboratory, Research Computing UITS, School of Informatics and POLIS Center, Indiana University
  • George Chrysanthakopoulos, Henrik Frystyk Nielsen
  • Microsoft Research, Redmond WA
  • http://www.infomall.org/multicore
  • gcf@indiana.edu, http://www.infomall.org

2
Too much Computing?
  • Historically one has tried to increase computing
    capabilities by
  • Optimizing performance of codes
  • Exploiting all possible CPUs such as Graphics
    co-processors and idle cycles
  • Making central computers available such as
    NSF/DoE/DoD supercomputer networks
  • The next crisis in the technology area will be the opposite problem: commodity chips will be 32-128 way parallel in 5 years' time, and we currently have no idea how to use them, especially on clients
  • Only 2 releases of standard software (e.g.
    Office) in this time span
  • Gaming and Generalized decision support (data
    mining) are two obvious ways of using these
    cycles
  • Intel RMS analysis
  • Note even cell phones will be multicore

3
Intel's Projection
4
Too much Data to the Rescue?
  • Multicore servers have clear universal
    parallelism as many users can access and use
    machines simultaneously
  • Maybe we also need application parallelism, as on client machines
  • Over the next years, we will of course be submerged in the data deluge
  • Scientific observations for e-Science
  • Local (video, environmental) sensors
  • Data fetched from the Internet defining users' interests
  • Maybe data-mining of this "too much data" will use up the "too much computing", both for science and commodity PCs
  • The PC will use this data(-mining) to be an intelligent user assistant?
  • Must have highly parallel algorithms

5
Intel's Application Stack
6
CICC Chemical Informatics and Cyberinfrastructure Collaboratory Web Service Infrastructure
  • Portal Services: RSS Feeds, User Profiles, Collaboration as in Sakai
  • Core Grid Services: Service Registry, Job Submission and Management
  • Local Clusters: IU Big Red, TeraGrid, Open Science Grid
7
Deterministic Annealing for Data Mining
  • We are looking at deterministic annealing
    algorithms because although heuristic
  • They have clear scalable parallelism (e.g. use
    parallel BLAS)
  • They avoid (some) local minima and regularize ill
    defined problems in an intuitively clear fashion
  • They are fast (no Monte Carlo)
  • I understand them and Google Scholar likes them
  • Developed first by Durbin as Elastic Net for TSP
  • Extended by Rose (my student then, now at UCSB) and Gurewitz (visitor to C3P) at Caltech for signal processing, and applied later to many optimization and supervised and unsupervised learning methods.
  • See K. Rose, "Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems," Proceedings of the IEEE, vol. 86, pp. 2210-2239, November 1998

8
High Level Theory
  • Deterministic Annealing can be looked at from a
    Physics, Statistics and/or Information theoretic
    point of view
  • Consider a function (e.g. a likelihood) L(y)
    that we want to operate on (e.g. maximize)
  • Set L̄(y₀, T) = ∫ L(y) exp(-(y₀ - y)² / T) dy
  • Incorporating entropy term ensuring that one looks for most likely states at temperature T
  • If y is a distance, replacing L by L̄ corresponds to smearing or smoothing it over resolution √T
  • Minimize the Free Energy F = -ln L̄(y₀, T) rather than the energy E = -ln L(y)
  • Use mean field approximation to avoid Monte Carlo (simulated annealing); a small numerical sketch of the smoothing follows below

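A minimal numerical sketch of this smoothing, assuming an arbitrary two-peak likelihood that is not from the slides: it forms L̄(y₀, T) by crude quadrature and prints the free energy F = -ln L̄ at a few points and temperatures, showing how a single smeared minimum at high T splits back into the minima of -ln L as T falls.

```csharp
using System;

// Hedged numerical sketch (not the authors' code) of the smoothing above.  L(y) is an
// arbitrary two-peak likelihood; LBar forms the Gaussian-smeared L(y0, T) by crude
// quadrature (normalised so that LBar -> L as T -> 0) and F = -ln LBar is the free
// energy.  At high T, F has one smeared minimum near y = 0; as T decreases the two
// minima of -ln L(y) near y = +1 and y = -1 reappear.
class AnnealingSmoothingSketch
{
    static double L(double y) =>                     // toy likelihood with peaks at +1 and -1
        Math.Exp(-Math.Pow(y - 1.0, 2)) + 0.8 * Math.Exp(-Math.Pow(y + 1.0, 2));

    static double LBar(double y0, double T)          // ~ (1/sqrt(pi T)) Int L(y) exp(-(y0-y)^2/T) dy
    {
        double sum = 0.0, dy = 0.01;
        for (double y = -10.0; y <= 10.0; y += dy)
            sum += L(y) * Math.Exp(-(y0 - y) * (y0 - y) / T) * dy;
        return sum / Math.Sqrt(Math.PI * T);
    }

    static void Main()
    {
        foreach (double T in new[] { 10.0, 1.0, 0.1 })   // decreasing temperature
            Console.WriteLine($"T = {T,4}:  F(-1) = {-Math.Log(LBar(-1, T)):F2}" +
                              $"  F(0) = {-Math.Log(LBar(0, T)):F2}" +
                              $"  F(+1) = {-Math.Log(LBar(1, T)):F2}");
    }
}
```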
9
Deterministic Annealing for Clustering I
  • Illustrating similarity between clustering and
    Gaussian mixtures
  • Deterministic annealing for mixtures replaces [equation] by [equation] (the equations appeared as images in the original slide) and anneals down to the mixture size

10
Deterministic Annealing for Clustering II
  • This is an extended K-means algorithm
  • Start with a single cluster, whose solution y1 is the centroid of the data
  • For some annealing schedule for T, iterate the above algorithm, testing the correlation matrix in xi about each cluster center to see if it is elongated
  • Split the cluster if the elongation is large enough; splitting is a phase transition in the physics view (a 1-D sketch of this procedure follows below)
  • You do not need to assume the number of clusters but rather a final resolution √T or equivalent
  • At T = 0, the uninteresting solution is N clusters, one at each point xi

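A hedged 1-D sketch of the procedure just described, assuming toy data, an illustrative annealing schedule, a crude weighted-variance split test and a simple proximity guard in place of the full elongation test; it is not the SALSA code, but it shows clusters appearing one phase transition at a time as T decreases.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hedged 1-D sketch (not the SALSA production code) of annealed soft K-means that
// starts from one cluster and splits a cluster when T falls below its critical value
// 2 * (weighted variance), i.e. when the cluster is "elongated" relative to the
// current resolution.  Data, schedule and the 0.1 proximity guard are illustrative.
class DAClusteringSketch
{
    static void Main()
    {
        double[] x = { 0.1, 0.3, 0.2, 5.0, 5.2, 4.9, 9.8, 10.1 };   // toy data, 3 natural clusters
        var y = new List<double> { x.Average() };                    // start with a single cluster

        for (double T = 50.0; T > 0.05; T *= 0.9)                    // annealing schedule
        {
            for (int iter = 0; iter < 100; iter++)                   // soft K-means at this T
            {
                double[] num = new double[y.Count], den = new double[y.Count];
                foreach (double xi in x)
                {
                    double[] p = y.Select(yk => Math.Exp(-(xi - yk) * (xi - yk) / T)).ToArray();
                    double z = p.Sum();                              // soft assignments p(k|x) = p[k]/z
                    for (int k = 0; k < y.Count; k++) { num[k] += p[k] * xi / z; den[k] += p[k] / z; }
                }
                for (int k = 0; k < y.Count; k++) y[k] = num[k] / den[k];
            }

            // Split test: split the most unstable cluster (2 * weighted variance > T),
            // unless it still overlaps another centre or the centre count is too high.
            int split = -1; double worst = 0.0;
            for (int k = 0; k < y.Count; k++)
            {
                double w = 0.0, v = 0.0;
                foreach (double xi in x)
                {
                    double[] p = y.Select(yk => Math.Exp(-(xi - yk) * (xi - yk) / T)).ToArray();
                    double pk = p[k] / p.Sum();
                    w += pk; v += pk * (xi - y[k]) * (xi - y[k]);
                }
                double variance = v / w;
                bool overlapping = y.Where((c, j) => j != k).Any(c => Math.Abs(c - y[k]) < 0.1);
                if (2 * variance - T > worst && !overlapping && y.Count < 8)
                {
                    worst = 2 * variance - T; split = k;
                }
            }
            if (split >= 0) { y.Add(y[split] + 0.005); y[split] -= 0.005; }
        }

        // Coincident centres represent the same effective cluster; print distinct positions.
        var distinct = y.OrderBy(c => c).Select(c => c.ToString("F1")).Distinct();
        Console.WriteLine(string.Join(", ", distinct));   // approximately 0.2, 5.0 and 10.0
    }
}
```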
11
Deterministic Annealing
[Figure: free energy F(y, T) plotted against configuration y]
Solve linear equations for each temperature; nonlinearity is removed by approximating with the solution at the previous, higher temperature
  • Minimum evolving as
    temperature decreases
  • Movement at fixed temperature
    going to local minima if not initialized
    correctly

12
Clustering Data
  • Cheminformatics was tested successfully with
    small datasets and compared to commercial tools
  • Cluster on properties of chemicals from high
    throughput screening results to chemical
    properties (structure, molecular weight etc.)
  • Applying to PubChem (and commercial databases)
    that have 6-20 million compounds
  • Comparing traditional fingerprint (binary
    properties) with real-valued properties
  • GIS uses publicly available Census data in
    particular the 2000 Census aggregated in 200,000
    Census Blocks covering Indiana
  • 100MB of data
  • Initial clustering done on simple attributes
    given in this data
  • Total population and number of Asian, Hispanic
    and Renters
  • Working with the POLIS Center at Indianapolis on clustering of SAVI (Social Assets and Vulnerabilities Indicators) attributes (at http://www.savi.org) for community and decision makers
  • Economy, Loans, Crime, Religion etc.

13
Where are we?
  • We have deterministically annealed clustering running well on 8-core (2-processor quad core) Intel systems using C# and Microsoft Robotics Studio CCR/DSS
  • Could also run on multicore-based parallel machines but didn't do this (is there a large Windows quad core cluster on TeraGrid?)
  • This would also be efficient on large problems
  • Applied to Geographical Information Systems (GIS)
    and census data
  • Could be an interesting application on future
    broadly deployed PCs
  • Visualize nicely on Google Maps (and presumably
    Microsoft Virtual Earth)
  • Applied to several Cheminformatics problems and have good parallel efficiency, but visualization is harder as data are in 150-1024 (or more) dimensions
  • Will develop a family of such parallel annealing
    data-mining tools where basic approach known for
  • Clustering
  • Gaussian Mixtures (Expectation Maximization)
  • and possibly Hidden Markov Methods

14
The clustering algorithm anneals by decreasing the distance scale and gradually finds more clusters as the resolution is improved. Here we see 10 clusters increasing to 30 as the algorithm progresses.
15
Renters
16
In detail, different groups have different
cluster centers
17
Multicore SALSA at CGL
  • Service Aggregated Linked Sequential Activities
  • http://www.infomall.org/multicore
  • Aims to link parallel and distributed (Grid)
    computing by developing parallel applications as
    services and not as programs or libraries
  • Improve traditionally poor parallel programming
    development environments
  • Can use messaging to link parallel and Grid services, but the performance/functionality tradeoffs are different
  • Parallelism needs a few µs latency for message passing and thread spawning
  • Network overheads in Grids are 10-100s of µs
  • This presentation describes the first of a set of services (a library) of multicore parallel data mining algorithms

18
Parallel Programming Model
  • If multicore technology is to succeed, mere
    mortals must be able to build effective parallel
    programs
  • There are interesting new developments, especially the DARPA HPCS languages X10, Chapel and Fortress
  • However if mortals are to program the 64-256 core chips expected in 5-7 years, then we must use today's technology and we must make it easy
  • This rules out radical new approaches such as new
    languages
  • The important applications are not scientific
    computing but most of the algorithms needed are
    similar to those explored in scientific parallel
    computing
  • Intel RMS analysis
  • We can divide problem into two parts
  • High Performance scalable (in number of cores)
    parallel kernels or libraries
  • Composition of kernels into complete applications
  • We currently assume that the kernels of the scalable parallel algorithms/applications/libraries will be built by experts, with a broader group of programmers (mere mortals) composing library members into complete applications.

19
Scalable Parallel Components
  • There are no agreed high-level programming
    environments for building library members that
    are broadly applicable.
  • However lower level approaches where experts
    define parallelism explicitly are available and
    have clear performance models.
  • These include MPI for messaging or just locks
    within a single shared memory.
  • There are several patterns to support here
    including the collective synchronization of MPI,
    dynamic irregular thread parallelism needed in
    search algorithms, and more specialized cases
    like discrete event simulation.
  • We use Microsoft CCR (http://msdn.microsoft.com/robotics/) as it supports both MPI and dynamic threading styles of parallelism

20
Composition of Parallel Components
  • The composition step has many excellent solutions
    as this does not have the same drastic
    synchronization and correctness constraints as
    for scalable kernels
  • Unlike the kernel step, which has no very good solutions
  • Task parallelism in languages such as C++, C#, Java and Fortran90
  • General scripting languages like PHP, Perl, Python
  • Domain specific environments like Matlab and Mathematica
  • Functional languages like MapReduce, F#
  • HeNCE, AVS and Khoros from the past and CCA from
    DoE
  • Web Service/Grid Workflow like Taverna, Kepler,
    InforSense KDE, Pipeline Pilot (from SciTegic)
    and the LEAD environment built at Indiana
    University.
  • Web solutions like Mash-ups and DSS
  • Many scientific applications use MPI for the coarse grain composition as well as fine grain parallelism, but this doesn't seem elegant
  • The new languages from DARPA's HPCS program support task parallelism (composition of parallel components); decoupling composition and scalable parallelism will remain popular and must be supported.

21
Service Aggregation in SALSA
  • Kernels and Composition must be supported both
    inside chips (the multicore problem) and between
    machines in clusters (the traditional parallel
    computing problem) or Grids.
  • The scalable parallelism (kernel) problem is
    typically only interesting on true parallel
    computers as the algorithms require low
    communication latency.
  • However composition is similar in both parallel
    and distributed scenarios and it seems useful to
    allow the use of Grid and Web 2.0 composition
    tools for the parallel problem.
  • This should allow parallel computing to exploit
    large investment in service programming
    environments
  • Thus in SALSA we express parallel kernels not as traditional libraries but as (some variant of) services so they can be used by non-expert programmers
  • For parallelism expressed in CCR, DSS represents
    the natural service (composition) model.

22
Inside the SALSA Services
  • We generalize the well known CSP (Communicating
    Sequential Processes) of Hoare to describe the
    low level approaches to fine grain parallelism as
    Linked Sequential Activities in SALSA.
  • We use the term "activities" in SALSA to allow one to build services from either threads, processes (the usual MPI choice) or even just other services.
  • We choose the term "linkage" in SALSA to denote the different ways of synchronizing the parallel activities, which may involve shared memory rather than some form of messaging or communication.
  • There are several engineering and research issues
    for SALSA
  • There is the critical communication optimization
    problem area for communication inside chips,
    clusters and Grids.
  • We need to discuss what we mean by services

23
Microsoft CCR
  • Supports exchange of messages between threads
    using named ports
  • FromHandler: spawn threads without reading ports
  • Receive: each handler reads one item from a single port
  • MultipleItemReceive: each handler reads a prescribed number of items of a given type from a given port. Note items in a port can be general structures but all must have the same type.
  • MultiplePortReceive: each handler reads one item of a given type from multiple ports.
  • JoinedReceive: each handler reads one item from each of two ports. The items can be of different types.
  • Choice: execute a choice of two or more port-handler pairings
  • Interleave: consists of a set of arbiters (port-handler pairs) of 3 types that are Concurrent, Exclusive or Teardown (called at the end for clean up). Concurrent arbiters are run concurrently but Exclusive handlers are serialized (a minimal usage sketch follows below).
  • http://msdn.microsoft.com/robotics/

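A hedged minimal example of the Receive pattern above, written from memory of the Robotics Studio CCR samples, so the class and method signatures used here (Dispatcher, DispatcherQueue, Port, Arbiter.Receive, Arbiter.Activate) may differ slightly between CCR releases.

```csharp
using System;
using System.Threading;
using Microsoft.Ccr.Core;   // CCR ships with Microsoft Robotics Studio

// Hedged minimal sketch of the Receive pattern: a persistent handler drains an int
// port using a small CCR thread pool.  The MPI-style patterns listed above are
// composed from such port/handler pairs.
class CcrReceiveSketch
{
    static void Main()
    {
        using (var dispatcher = new Dispatcher(4, "worker pool"))        // 4 CCR threads
        using (var queue = new DispatcherQueue("main", dispatcher))
        {
            var port = new Port<int>();
            // Receive: each handler invocation reads one item from a single port
            Arbiter.Activate(queue, Arbiter.Receive(true /* persist */, port, (int item) =>
                Console.WriteLine($"thread {Thread.CurrentThread.ManagedThreadId} got {item}")));
            for (int i = 0; i < 8; i++) port.Post(i);                    // post messages
            Thread.Sleep(500);                                           // crude wait for the demo
        }
    }
}
```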
24
(No Transcript)
25
Preliminary Results
  • Parallel Deterministic Annealing Clustering in C# with speed-up of 7.8 (Chemistry) and 7 (GIS) on Intel 2 quad core systems
  • Analysis of performance of Java, C, C# in MPI and dynamic threading with XP, Vista, Windows Server, Fedora, Red Hat on Intel/AMD systems
  • Study of cache effects coming with MPI
    thread-based parallelism
  • Study of execution time fluctuations in Windows (limiting speed-up to < 8)

26
DSS as Service Model
  • We view system as a collection of services in
    this case
  • One to supply data
  • One to run parallel clustering
  • One to visualize results, in this case by spawning a Google Maps browser
  • Note we are clustering Indiana census data
  • DSS is convenient as it is built on CCR
  • Messaging overhead around 30-40 µs

27
(No Transcript)
28
Parallel Multicore GIS Deterministic Annealing Clustering
Parallel Overhead on 8 Threads (Intel 8b); Speedup = 8/(1 + Overhead)
Overhead = Constant1 + Constant2/n, where n is the grain size (points per core); Constant1 = 0.02 to 0.1 (client Windows) due to thread runtime fluctuations
[Plot: curves for 10 Clusters and 20 Clusters; x-axis is 10000/(grain size n)]
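As a worked illustration of this overhead model (the constants are assumed for the example, not measured values): with Constant1 = 0.05 and Constant2 = 100, a grain size of n = 10,000 points per core gives Overhead = 0.05 + 100/10,000 = 0.06 and hence Speedup = 8/(1 + 0.06) ≈ 7.55, while halving the grain size to n = 5,000 raises the Overhead to 0.07 and lowers the Speedup to about 7.48.
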
29
Parallel Multicore Deterministic Annealing Clustering
Parallel Overhead for large (2M points) Indiana Census clustering on 8 Threads (Intel 8b). This fluctuating overhead is due to 5-10% runtime fluctuations between threads.
[Plot: the overhead approaches Constant1 at large grain size]
Increasing the number of clusters decreases communication/memory bandwidth overheads
30
Parallel Multicore Deterministic Annealing Clustering
Parallel Overhead for a subset of PubChem clustering on 8 Threads (Intel 8b). The fluctuating overhead is reduced to 2% (under investigation!). 40,000 points with 1052 binary properties (the Census data have 2 real-valued properties).
[Plot: the overhead approaches Constant1 at large grain size]
Increasing the number of clusters decreases communication/memory bandwidth overheads
31
MPI Parallel Divkmeans clustering of PubChem
AVIDD Linux cluster, 5,273,852 structures
(Pubchem compound collection, Nov 2005)
32
Scaled Speed up Tests
  • The full clustering algorithm involves different
    values of the number of clusters NC as
    computation progresses
  • The amount of computation per data point is
    proportional to NC and so overhead due to memory
    bandwidth (cache misses) declines as NC increases
  • We did a set of tests on the clustering kernel
    with fixed NC
  • Further we adopted the scaled speed-up approach
    looking at the performance as a function of
    number of parallel threads with constant number
    of data points assigned to each thread
  • This contrasts with fixed problem size scenario
    where the number of data points per thread is
    inversely proportional to number of threads
  • We plot the run time for the same workload per thread, divided by (the number of data points multiplied by the number of clusters multiplied by the time at the smallest data set, 10,000 data points per thread); see the small helper sketch below
  • Expect this normalized run time to be independent of the number of threads if not for parallel and memory bandwidth overheads
  • It will decrease as NC increases, as the number of computations per point fetched from memory increases proportionally to NC

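A small hypothetical helper (names and numbers invented for illustration, not taken from the measurements) for the normalization described above: a kernel with no parallel or memory-bandwidth overhead would plot at 1.0 for every thread count.

```csharp
using System;

// Hypothetical helper: divide the measured run time by (data points per thread *
// number of clusters), then by the same quantity measured at the smallest data set,
// so a perfectly scaling kernel is normalised to 1.0 independent of thread count.
class ScaledSpeedupNormalisation
{
    static double Normalised(double runTime, int pointsPerThread, int clusters, double baselinePerUnit)
        => runTime / (pointsPerThread * (double)clusters) / baselinePerUnit;

    static void Main()
    {
        // Illustrative numbers only: 2.00 s measured at 10,000 points per thread, 10 clusters.
        double baselinePerUnit = 2.00 / (10_000 * 10.0);
        Console.WriteLine(Normalised(2.00, 10_000, 10, baselinePerUnit)); // 1 thread:  1.000 by construction
        Console.WriteLine(Normalised(2.15, 10_000, 10, baselinePerUnit)); // 8 threads: 1.075, i.e. ~7.5% overhead
    }
}
```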
33
Intel 8-core C# with 80 Clusters: Vista Run Time Fluctuations for the Clustering Kernel
  • 2 quad-core processors
  • This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points

34
Intel 8-core with 80 Clusters: Red Hat Run Time Fluctuations for the Clustering Kernel
  • This is the average of the standard deviation of the run time of the 8 threads between messaging synchronization points
[Plot axes: Standard Deviation/Run Time versus Number of Threads]
35
Basic Performance of CCR
36
CCR Overhead for a computation of 23.76 µs between messaging (rendezvous)
37
Overhead (latency) of AMD4 PC with 4 execution
threads on MPI style Rendezvous Messaging for
Shift and Exchange implemented either as two
shifts or as custom CCR pattern
38
Overhead (latency) of Intel8b PC with 8 execution
threads on MPI style Rendezvous Messaging for
Shift and Exchange implemented either as two
shifts or as custom CCR pattern
39
Cache Line Interference
40
Cache Line Interference
  • Early implementations of our clustering algorithm
    showed large fluctuations due to the cache line
    interference effect discussed here and on next
    slide in a simple case
  • We have one thread on each core, each calculating a sum of the same complexity and storing the result in a common array A, with different cores using different array locations
  • Thread i stores its sum in A(i): this is separation 1; there is no variable access interference but there is cache line interference
  • Thread i stores its sum in A(X*i): this is separation X (a micro-benchmark sketch follows below)
  • Serious degradation if X < 8 (64 bytes) with Windows
  • Note A holds doubles (8 bytes each)
  • Less interference effect with Linux, especially Red Hat

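A hedged reconstruction of this micro-benchmark, not the original code: each of 8 threads accumulates into A[i * X], and timing the run for several separations X shows the false-sharing penalty when X is less than 8 doubles (64 bytes).

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Hedged reconstruction of the experiment above: with X = 1 all eight 8-byte doubles
// share one or two 64-byte cache lines (false sharing); with X >= 8 each thread's
// double sits on its own line.  Volatile reads/writes keep the accumulation in memory
// so the effect is visible.  Note the CLR does not guarantee 64-byte alignment of the
// array, which is the point made on the next slide.
class CacheLineInterferenceSketch
{
    const int Threads = 8;
    const int Iterations = 20_000_000;

    static double Run(int separationX)
    {
        var a = new double[Threads * separationX];
        var workers = new Thread[Threads];
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Threads; i++)
        {
            int slot = i * separationX;                       // each thread gets its own slot
            workers[i] = new Thread(() =>
            {
                for (int n = 0; n < Iterations; n++)
                    Volatile.Write(ref a[slot], Volatile.Read(ref a[slot]) + 1.0);
            });
            workers[i].Start();
        }
        foreach (var t in workers) t.Join();
        return sw.Elapsed.TotalMilliseconds;
    }

    static void Main()
    {
        foreach (int x in new[] { 1, 4, 8, 1024 })            // separations to compare
            Console.WriteLine($"separation X = {x,4}: {Run(x):F0} ms");
        // Expect X = 1 and X = 4 to run much slower than X = 8 and X = 1024.
    }
}
```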
41
Cache Line Interference
  • Note measurements at a separation X of 8 (and
    values between 8 and 1024 not shown) are
    essentially identical
  • Measurements at 7 (not shown) are higher than those at 8 (except for Red Hat, which shows essentially no enhancement at X < 8)
  • If the effects are due to co-location of thread variables in a 64-byte cache line, the array must be aligned with cache boundaries
  • In early implementations we found poor X = 8 performance, as expected if words of A are split across cache lines

42
Inter-Service Communication
  • Note that we are not assuming a uniform implementation of service composition even if the user sees the same interface for multicore and a Grid
  • Good service composition inside a multicore chip
    can require highly optimized communication
    mechanisms between the services that minimize
    memory bandwidth use.
  • Between systems interoperability could motivate
    very different mechanisms to integrate services.
  • Need both MPI/CCR level and Service/DSS level
    communication optimization
  • Note bandwidth and latency requirements reduce as
    one increases the grain size of services
  • Suggests the smaller services inside closely
    coupled cores and machines will have stringent
    communication requirements.

43
Mashups v Workflow?
  • Mashup tools are reviewed at http://blogs.zdnet.com/Hinchcliffe/?p=63
  • Workflow tools are reviewed by Gannon and Fox: http://grids.ucs.indiana.edu/ptliupages/publications/Workflow-overview.pdf
  • Both include scripting in PHP, Python, sh etc. as
    both implement distributed programming at level
    of services
  • Mashups use all types of service interfaces and perhaps do not have the potential robustness (security) of the Grid service approach
  • Mashups are typically pure HTTP (REST)
