1
Model-driven Performance Analysis Methodology for
Scalable Performance Analysis of Distributed
Systems
Aniruddha Gokhale (a.gokhale@vanderbilt.edu), Asst.
Professor of EECS, Vanderbilt University,
Nashville, TN
Jeff Gray (gray@cis.uab.edu), Asst. Professor of
CIS, Univ. of Alabama at Birmingham, Birmingham, AL
Swapna Gokhale (ssg@engr.uconn.edu), Asst. Professor
of CSE, University of Connecticut, Storrs, CT
Presented at the NSF NGS/CSR PI Meeting, Rhodes,
Greece, April 25-26, 2006. CSR awards CNS-0406376,
CNS-0509271, CNS-0509296, CNS-0509342
2
Distributed Performance Sensitive Software
Systems
  • Military/civilian distributed performance-sensitive
    software systems
  • Network-centric, large-scale systems of
    systems
  • Stringent, simultaneous QoS demands
  • e.g., dependability, security, scalability,
    throughput
  • Dynamic context

3
Trends in DPSS Development
  • Historically developed using low-level APIs
  • Increasing use of middleware technologies
  • Standards-based COTS middleware helps to
  • Control end-to-end resources and QoS
  • Leverage hardware and software technology advances
  • Evolve to new environments and requirements
  • Middleware helps capture and codify commonalities
    across applications in different domains by
    providing reusable, configurable, patterns-based
    building blocks

Examples: CORBA, .NET, J2EE, ICE, MQSeries
Developers must decide at design-time which
blocks to use to obtain desired functionality and
performance
4
The What If Performance Analysis Process
(Process diagram, including generalization of the model and model
decomposition.)
5
Guiding Example The Reactor Pattern
The Reactor architectural pattern allows
event-driven applications to demultiplex and
dispatch service requests that are delivered to
an application from one or more clients.
  • Many networked applications are developed as
    event-driven programs
  • Common sources of events in these applications
    include activity on an IPC stream for I/O
    operations, POSIX signals, Windows handle
    signaling, and timer expirations
  • The Reactor pattern decouples the detection,
    demultiplexing, and dispatching of events from the
    handling of events
  • Participants include the Reactor, event handles,
    the event demultiplexer, and abstract and concrete
    event handlers

6
Reactor Dynamics
  • Registration Phase
  • Event handlers register themselves with the
    Reactor for an event type (e.g., input, output,
    timeout, exception event types)
  • Reactor returns a handle it maintains, which it
    uses to associate an event type with the
    registered handler
  • Snapshot Phase
  • Main program delegates thread of control to
    Reactor, which in turn takes a snapshot of the
    system to determine which events are enabled in
    that snapshot
  • For each enabled event, the corresponding event
    handler is invoked, which services the event
  • When all events in a snapshot are handled, the
    Reactor proceeds to the next snapshot (a minimal
    code sketch of both phases follows below)
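To make the registration and snapshot phases concrete, here is a minimal,
illustrative single-threaded Reactor sketch in Python (an illustration only,
not part of the project's artifacts; the Reactor class and register_handler
method are hypothetical names, and Python's selectors module stands in for a
select()-based event demultiplexer):

import selectors

class Reactor:
    def __init__(self):
        # Event demultiplexer (select()-based on most platforms)
        self.sel = selectors.DefaultSelector()

    def register_handler(self, fileobj, handler):
        # Registration phase: associate an event handler with an event type
        # (readability on fileobj); the selector key acts as the handle.
        self.sel.register(fileobj, selectors.EVENT_READ, handler)

    def handle_events(self):
        # Snapshot phase: repeatedly take a snapshot of enabled events and
        # invoke the handler registered for each one.
        while True:
            snapshot = self.sel.select(timeout=1.0)
            for key, _mask in snapshot:
                key.data(key.fileobj)   # dispatch to the registered handler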

7
Characteristics of Base Reactor
  • Single-threaded, select-based Reactor
    implementation
  • Reactor accepts two types of input events, with
    one event handler registered for each event type
    with the Reactor
  • Each event type has a separate queue to hold the
    incoming events. Buffer capacity for events of
    type one is N1 and of type two is N2.
  • Event arrivals are Poisson for type one and type
    two events with rates λ1 and λ2.
  • Event service times are exponential for type one
    and type two events with rates μ1 and μ2.
  • In a snapshot, events are serviced
    non-deterministically (in no particular order).
  • -- Base model of the prioritized reactor
    presented in NGS 2005.
  • -- Non-deterministic handling makes it
    interesting/complicated.

8
Performance Metrics
  • Throughput
  • -Number of events that can be processed
  • -Applications such as telecommunications call
    processing.
  • Queue length
  • -Queuing for the event handler queues.
  • -Appropriate scheduling policies for
    applications with real-time requirements.
  • Total number of events
  • -Total number of events in the system.
  • -Scheduling decisions.
  • -Resource provisioning required to sustain
    system demands.
  • Probability of event loss
  • -Events discarded due to lack of buffer
    space.
  • -Safety-critical systems.
  • -Levels of resource provisioning.
  • Response time

9
SRN Model of the Base Reactor
  • Stochastic Reward Nets (SRNs): an extension of
    PNs/GSPNs.
  • Part A: models arrivals, queuing, and service of
    events.
  • Transitions A1 and A2: event arrivals.
  • Places B1 and B2: buffers/queues.
  • Places S1 and S2: service of the events.
  • Transitions Sr1 and Sr2: service completions.
  • Inhibitor arcs: place B1 to transition A1 with
    multiplicity N1 (similarly B2, A2, N2).
  • - Prevents firing of transition A1 when
    there are N1 tokens in place B1 (see the sketch
    below).
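As a tiny illustration of the inhibitor-arc semantics (a sketch under
assumptions, not any SRN tool's API; the function name is hypothetical), the
enabling condition of arrival transition A1 can be written as a predicate over
the current marking:

# Illustrative only: A1 may fire only while buffer B1 is not full; the
# inhibitor arc with multiplicity N1 disables A1 once B1 holds N1 tokens.
def a1_enabled(tokens_in_B1, N1):
    return tokens_in_B1 < N1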

10
SRN Model of the Base Reactor
  • Part B
  • Models the process of taking successive snapshots.
  • Non-deterministic service of events.
  • T_StSnp(i) enabled: token in StSnpSht, tokens
    in B(i), and no token in S(i).
  • T_EnSnp(i) enabled: no token in S(i).
  • T_ProcSnp(i) enabled: token in place S(i) and no
    token in the other S(j)s.

11
SRN Model of the Base Reactor
  • Reward rates (see the sketch below)
  • Loss probability: Pr(#B(i) = N(i))
  • Total number of events: #B(i) + #S(i)
  • Throughput: rate(Sr(i))
  • Queue length: #B(i)
  • Optimistic and pessimistic bounds on the response
    times using the tagged-customer approach.
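The slide gives only the reward definitions; as a schematic illustration (an
assumption, not the authors' solver interface), each measure is an expected
reward over the SRN's steady-state distribution of markings:

# Illustrative only: computing measures from reward rates, given a
# hypothetical steady-state distribution over markings. Each marking is a
# dict of token counts per place; probs holds its steady-state probability.
def expected_reward(markings, probs, reward):
    return sum(p * reward(m) for m, p in zip(markings, probs))

N1 = 5  # assumed buffer size for event type 1
loss_prob_1    = lambda m: 1.0 if m["B1"] == N1 else 0.0   # Pr(#B1 = N1)
queue_length_1 = lambda m: m["B1"]                          # E[#B1]
total_events_1 = lambda m: m["B1"] + m["S1"]                # E[#B1 + #S1]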

12
SRN Model of the Base Reactor
  • Validate the performance measures
  • Simulation implemented in CSIM
  • Arrival rates: λ1 = λ2 = 0.4/sec
  • Service rates: μ1 = μ2 = 2.0/sec

Measure  N1=N2=1 (SRN)  N1=N2=1 (CSIM)     N1=N2=5 (SRN)  N1=N2=5 (CSIM)
T1       0.37/sec.      0.365/sec.         0.399/sec.     0.395/sec.
Q1       0.064          0.0596             0.12           0.115
L1       0.064          -                  0.0024         -
R1,l     0.63 sec.      0.676 sec. (avg.)  0.79 sec.      0.830 sec. (avg.)
R1,u     0.86 sec.      -                  1.08 sec.      -
Average response time estimate obtained from
simulation lies between pessimistic and
optimistic response time estimates obtained from
SRN.
13
SRN Model of the Base Reactor
Sensitivity analysis
  • Vary the arrival rate of type 1 events.
  • -- Remaining parameters same as before.
  • Response times of type 1, 2 events
  • -- Approach the pessimistic response time as the
    arrival rate becomes higher.
  • -- Actual response time of type 1 events is
    higher than that of type 2 events.
  • Loss probabilities of type 1, 2 events
  • -- Increase with arrival rate.

14
SRN Model of the Generalized Reactor
m event types handled by the Reactor (arrival transitions A1 ... Am).
15
SRN Model of the Generalized Reactor
State-space explosion
16
Model Decomposition
Tagged customer approach
(Diagram: m event queues, labeled 1, 2, ..., i, ..., m; the tagged event of
type i arrives behind bi queued events.)
  • Tagged event of type i finds bi events in the queue
    of type i.
  • Other queues may have more or fewer events.
  • The current snapshot may be in progress.
  • The incoming event will be served after bi+1
    snapshots.
  • In each of these bi+1 snapshots, other events may or
    may not be handled.
  • Pessimistic case: each of the bi+1 snapshots handles
    every event type.
  • Service time of each of the bi+1 snapshots
17
Model Decomposition
  • In the (bi+2)nd snapshot, the tagged event is serviced.
  • Non-deterministic order of servicing enabled
    event handles (bounds sketched below)
  • -- Optimistic case: the tagged type i event is
    the first to be serviced.
  • -- Pessimistic case: the tagged type i event is
    the last to be serviced.
  • For the pessimistic case, the service time of the
    (bi+2)nd snapshot is the same
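To make the optimistic/pessimistic reasoning concrete, here is a
back-of-the-envelope sketch (an assumption for illustration, not the paper's
closed-form bounds; b_i, m, and the mean per-event service time s are
hypothetical parameters):

# Rough bounds on the work ahead of a tagged type-i event that finds b_i
# events queued; m = number of event types, s = mean service time per event.
def response_time_bounds(b_i, m, s):
    # The tagged event is served in the (b_i + 2)-nd snapshot.
    # Optimistic: earlier snapshots serve only the queued type-i events,
    # and the tagged event goes first in its own snapshot.
    optimistic = (b_i + 1) * s + s
    # Pessimistic: each of the b_i + 1 earlier snapshots serves all m event
    # types, and the tagged event goes last in its own snapshot.
    pessimistic = (b_i + 1) * m * s + m * s
    return optimistic, pessimistic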

Kleinrock: "We are willing to accept the raw
facts of life, which state that our models are
not perfect pictures of the systems we wish to
analyze, so we should be willing to accept
approximations and bounds in our problem
solution." (vol. 2, page 319)
18
Model Decomposition
Event type i is modeled as an M/G/1 queue.
Closed-form solutions for worst-case bounds on
the performance estimates (see the sketch below).
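The closed-form expressions themselves are not reproduced on the slide; as a
hedged illustration of the kind of computation an M/G/1 model enables, here is
the standard Pollaczek-Khinchine mean response-time formula (the example
service-time moments are assumptions, not values from the paper):

# Standard M/G/1 mean response time via the Pollaczek-Khinchine formula.
# lam: arrival rate; es: mean service time; es2: second moment of service time.
def mg1_mean_response_time(lam, es, es2):
    rho = lam * es                               # utilization, must be < 1
    assert rho < 1.0, "queue must be stable"
    waiting = lam * es2 / (2.0 * (1.0 - rho))    # mean waiting time in queue
    return waiting + es                          # waiting plus own service

# Assumed example: worst case, the service seen by a tagged event is m
# exponential stages of rate mu, i.e., an Erlang(m, mu) "service time".
lam, mu, m = 0.4, 2.0, 2
es, es2 = m / mu, m * (m + 1) / mu**2
print(mg1_mean_response_time(lam, es, es2))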
19
Model Decomposition
  • Verification of performance estimates obtained
    from model decomposition
  • Parameters: λi = 0.4/sec, μi = 2.0/sec, Ni = 15

Metric  m=2 (SRN)  m=2 (Dec.)  m=3 (SRN)  m=3 (Dec.)  m=4 (SRN)  m=4 (Dec.)
ρi      -          0.40        -          0.60        -          0.80
Ti      0.40       0.40        0.40       0.40        0.40       0.40
Ei      0.34       0.47        0.50       0.70        0.82       0.98
Ri      0.85s      1.18s       1.25s      1.74s       2.05s      2.45s
  • Performance estimates obtained from the SRN are
    lower than those from model decomposition.
  • Exact solution for m = 4 took over 12 hours.

20
Addressing Middleware Variability Challenges
Manual design-time performance modeling and
analysis is complex and tedious, since middleware
building blocks and their compositions incur
variabilities that impact performance
  • Compositional Variability
  • Incurred due to variations in the compositions of
    these building blocks
  • Need to address compatibility in the compositions
    and individual configurations
  • Dictated by the needs of the domain
  • E.g., Leader-Follower makes no sense in a
    single-threaded Reactor
  • Per-Block Configuration Variability
  • Incurred due to variations in implementations and
    configurations for a patterns-based building
    block
  • E.g., a single-threaded versus thread-pool-based
    reactor implementation, a dimension that crosscuts
    the event demultiplexing strategy (e.g., select,
    poll, WaitForMultipleObjects); a sketch of this
    configuration space follows below
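As a small, hedged illustration of the configuration space this variability
creates (the dimensions and values are the ones named on this slide; the
enumeration itself is illustrative, not a tool from the project):

# Enumerate candidate Reactor configurations for "what if" analysis; the
# compatibility rule encodes the slide's Leader-Follower example.
from itertools import product

threading   = ["single-threaded", "thread-pool"]
demux       = ["select", "poll", "WaitForMultipleObjects"]
concurrency = [None, "Leader-Follower"]

candidates = []
for t, d, c in product(threading, demux, concurrency):
    if c == "Leader-Follower" and t == "single-threaded":
        continue  # Leader-Follower makes no sense in a single-threaded Reactor
    candidates.append((t, d, c))

print(len(candidates), "candidate configurations to evaluate at design time")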

21
Automated DPSS Design-time Analysis
Automated design-time performance analysis
techniques to estimate the impact of variability
in middleware-based DPSS systems
  • Build and validate performance models for
    invariant parts of middleware building blocks
  • Weaving of variability concerns manifested in a
    building block into the performance models
  • Compose and validate performance models of
    building blocks mirroring the anticipated
    software design of DPSS systems
  • Estimate end-to-end performance of composed
    system
  • Iterate until design meets performance
    requirements

22
Automation Goals for What if Analysis
Automating design-time performance analysis
techniques to estimate the impact of variability
in middleware-based DPSS systems
  • Build and validate performance models for
    invariant parts of middleware building blocks
  • Weaving of variability concerns manifested in a
    building block into the performance models
  • Compose and validate performance models of
    building blocks mirroring the anticipated
    software design of DRE systems
  • Estimate end-to-end performance of composed
    system
  • Iterate until design meets performance
    requirements

(Diagram: variability concerns are woven into the invariant model of each
pattern to produce refined pattern models, which are composed into the model
of the overall system.)
23
Technology Enabler: Generic Modeling Environment (GME)
Write Code That Writes Code That Writes Code!
(GME architecture diagram: the GME Editor, Constraint Manager, Browser,
Translators, and Add-Ons interact via COM with the GModel/GMeta core;
metamodels are expressed in UML/OCL, and paradigm definitions and models are
stored via XML and ODBC.)
Goal: correct-by-construction DPSS systems
www.isis.vanderbilt.edu/Projects/gme/default.htm
24
Leveraging Our Existing Solution: CoSMIC
CoSMIC can be downloaded at www.dre.vanderbilt.edu/cosmic
  • CoSMIC tools (e.g., PICML) are used to model
    application components
  • Captures the data model of the OMG D&C
    specification
  • Synthesis of static deployment plans for DPSS
    applications
  • New capabilities being added for QoS provisioning
    (real-time, fault tolerance)
25
POSAML: Modeling the DPSS Composition Process
  • POSAML: a GME-based modeling language for
    middleware composition
  • Provides a structural composition model
  • Captures variability in building blocks
  • Generative programming capabilities to synthesize
    different artifacts, e.g., benchmarking,
    configuration, performance modeling.

26
SRNML: Modeling the What if Process
  • SRNML: a GME-based modeling language for specifying
    performance models of building blocks
  • Provides behavioral models for building blocks
  • Reactor and Proactor models developed to date
  • Need to apply to other patterns
  • Demonstrate model composition and model solving

Need to address scalability challenges for models
27
Leveraging Our Solution: the C-SAW Model
Transformation and Replication Engine
(C-SAW architecture diagram: the ECL Parser and ECL Interpreter operate on
the metamodel and models through the GME modeling APIs.)
Implemented as a GME plug-in to assist in the
rapid adaptation and evolution of models by
weaving crosscutting changes into models.
28
Scaling a Base SRN Model
strategy computeTEnSnpGuard(min_old, min_new, max_new : integer; TEnSnpGuardStr : string)
{
  // Recursively builds the guard string for the TEnSnp transitions:
  // it evaluates to 1 only when no S<i> place holds a token.
  if (min_old < max_new) then
    computeTEnSnpGuard(min_old + 1, min_new, max_new,
      TEnSnpGuardStr + "(S" + intToString(min_old) + "==0) && ");
  else
    addEventswithGuard(min_new, max_new,
      TEnSnpGuardStr + "(S" + intToString(min_old) + "==0))?1:0");
  endif;
}
// ... several strategies not shown here (e.g., addEventswithGuard)

strategy addEvents(min_new, max_new : integer; TEnSnpGuardStr : string)
{
  if (min_new < max_new) then
    addNewEvent(min_new, TEnSnpGuardStr);
    addEvents(min_new + 1, max_new, TEnSnpGuardStr);
  endif;
}

strategy addNewEvent(event_num : integer; TEnSnpGuardStr : string)
{
  // Adds the start-snapshot transition, snapshot-in-progress place, and
  // end-snapshot transition for one new event type, and wires them up.
  declare start, stTran, inProg, endTran : atom;
  declare TStSnp_guard : string;
  start := findAtom("StSnpSht");
  stTran := addAtom("ImmTransition", "TStSnp" + intToString(event_num));
  TStSnp_guard := "(S" + intToString(event_num) + "==1)?1:0";
  stTran.setAttribute("Guard", TStSnp_guard);
  inProg := addAtom("Place", "SnpInProg" + intToString(event_num));
  endTran := addAtom("ImmTransition", "TEnSnp" + intToString(event_num));
  endTran.setAttribute("Guard", TEnSnpGuardStr);
  addConnection("InpImmedArc", start, stTran);
  addConnection("OutImmedArc", stTran, inProg);
  addConnection("InpImmedArc", inProg, endTran);
  addConnection("OutImmedArc", endTran, start);
}

strategy connectNewEvents(min_new, max_new : integer)
{
  if (min_new < max_new) then
    connectOneNewEventToOtherNewEvents(min_new, max_new);
    connectNewEvents(min_new + 1, max_new);
  endif;
}

strategy connectOneNewEventToOtherNewEvents(event_num, max_new : integer)
{
  if (event_num < max_new) then
    connectTwoEvents(event_num, max_new);
    connectNewEvents(event_num, max_new - 1);
  endif;
}

strategy connectTwoEvents(first_num, second_num : integer)
{
  // Adds the TProcSnp transitions that move the snapshot between the
  // in-progress places of two event types.
  declare firstinProg, secondinProg : atom;
  declare secondTProc1, secondTProc2 : atom;
  declare first_numStr, second_numStr, TProcSnp_guard1, TProcSnp_guard2 : string;
  first_numStr := intToString(first_num);
  second_numStr := intToString(second_num);
  TProcSnp_guard1 := "((S" + first_numStr + "==0) && (S" + second_numStr + "==1))?1:0";
  TProcSnp_guard2 := "((S" + second_numStr + "==0) && (S" + first_numStr + "==1))?1:0";
  firstinProg := findAtom("SnpInProg" + first_numStr);
  secondinProg := findAtom("SnpInProg" + second_numStr);
  secondTProc1 := addAtom("ImmTransition", "TProcSnp" + first_numStr + "," + second_numStr);
  secondTProc1.setAttribute("Guard", TProcSnp_guard1);
  secondTProc2 := addAtom("ImmTransition", "TProcSnp" + second_numStr + "," + first_numStr);
  secondTProc2.setAttribute("Guard", TProcSnp_guard2);
  addConnection("InpImmedArc", firstinProg, secondTProc1);
  addConnection("OutImmedArc", secondTProc1, secondinProg);
  addConnection("InpImmedArc", secondinProg, secondTProc2);
  addConnection("OutImmedArc", secondTProc2, firstinProg);
}
29
Project Status and Work in Progress
  • Ongoing: integration of SRNML (behavioral) and
    POSAML (structural)
  • Incorporate SRNML and POSAML in CoSMIC and release
    the software as open source
  • Integrate with the C-SAW scalability engine
  • Performance analysis of different building blocks
    (patterns)
  • Non-deterministic reactor (all steps).
  • Prioritized reactor, Active Object, Proactor
    (Steps 1, 2: basic model, model validation)
  • Compose DPSS systems and perform performance
    analysis (analytical and simulation) of the
    composed systems
  • Validate via automated empirical benchmarking
  • Demonstrate on real systems

30
Selected Publications
  1. U. Praphamontripong, S. Gokhale, A. Gokhale, and
    J. Gray, "Performance Analysis of an Asynchronous
    Web Server," Proc. of the 30th COMPSAC, to appear.
  2. J. Gray, Y. Lin, and J. Zhang, "Automating Change
    Evolution in Model-Driven Engineering," IEEE
    Computer (Special Issue on Model-Driven
    Engineering), vol. 39, no. 2, February 2006, pp.
    51-58
  3. Invited (Under Review) J. Gray, Y. Lin, J.
    Zhang, S. Nordstrom, A. Gokhale, S. Neema, and S.
    Gokhale, "Replicators Transformations to Address
    Model Scalability," voted one of the Best Papers
    of MoDELS 2005 and invited for an extended
    submission to the Journal of Software and Systems
    Modeling.
  4. S. Gokhale, A. Gokhale, and J. Gray, "Response
    Time Analysis of an Event Demultiplexing Pattern
    in Middleware for Network Services," IEEE
    GlobeCom, St. Louis, MO, December 2005.
  5. Best Paper Award J. Gray, Y. Lin, J. Zhang, S.
    Nordstrom, A. Gokhale, S. Neema, and S. Gokhale,
    "Replicators Transformations to Address Model
    Scalability," Model Driven Engineering Languages
    and Systems (MoDELS) (formerly the UML series of
    conferences), Springer-Verlag LNCS 3713, Montego
    Bay, Jamaica, October 2005, pp. 295-308. - Voted
    one of the Best Papers of MoDELS 2005 and invited
    for an extended submission to the Journal of
    Software and Systems Modeling.
  6. P. Vandal, U. Praphamontripong, S. Gokhale, A.
    Gokhale, and J. Gray, "Performance Analysis of
    the Reactor Pattern in Network Services," 5th
    International Workshop on Performance Modeling,
    Evaluation, and Optimization of Parallel and
    Distributed Systems (PMEO-PDS), held at IPDPS,
    Rhodes Island, Greece, April 2006.
  7. A. Kogekar, D. Kaul, A. Gokhale, P. Vandal, U.
    Praphamontripong, S. Gokhale, J. Zhang, Y. Lin,
    J. Gray, "Model-driven Generative Techniques for
    Scalable Performability Analysis of Distributed
    Systems," Next Generation Software Workshop, held
    at IPDPS, Rhodes Island, Greece, April 2006.
  8. S. Gokhale, A. Gokhale, and J. Gray, "A
    Model-Driven Performance Analysis Framework for
    Distributed Performance-Sensitive Software
    Systems," Next Generation Software Workshop, held
    at IPDPS, Denver, CO, April 2005.

31
Concluding Remarks
Methodology recap: model building (basic
characteristics), model validation
(simulation/measurements), generalization of the
model, model decomposition, automatic generation,
model replication.
  • DPSS development incurs significant challenges
  • Model-driven design-time performance analysis is
    a promising approach
  • Performance models of basic building blocks built
  • Scalability demonstrated
  • Generative programming is key for automation

Many hard R&D problems in model-driven
engineering remain unresolved!
  • GME is available from
    www.isis.vanderbilt.edu/Projects/gme/default.htm
  • CoSMIC is available from www.dre.vanderbilt.edu/cosmic
  • C-SAW is available from www.gray-area.org/Research/C-SAW