1
IE7201 Production & Service Systems Engineering
Fall 2009: Closure
  • Instructor: Spyros Reveliotis

2
Course Objectives
  • Provide an understanding and appreciation of the
    different resource allocation and coordination
    problems that underlie the operation of
    production and service systems.
  • Enhance the students' ability to formally
    characterize and study these problems by
    referring them to pertinent analytical
    abstractions and modeling frameworks.
  • Develop an appreciation of the inherent
    complexity of these problems and the resulting
    need for simplifying approximations.
  • Systematize the notion and role of simulation in
    the considered problem contexts.
  • Define a research frontier in the addressed
    areas.

3
Course Outline
  • 1. Introduction: Course Objectives, Context, and
    Outline
  • Contemporary organizations and the role of
    Operations Management (OM)
  • Corporate strategy and its connection to
    operations
  • The organization as a resource allocation system
    (RAS)
  • Classification of Production Systems on the basis
    of their workflow structure and control policies
  • The underlying RAS management problems and the
    need for understanding the impact of the
    underlying stochasticity
  • The basic course structure
  • 2. Modeling and Analysis of Production and
    Service Systems as Continuous-Time Markov Chains
  • (A brief overview of the key results of the
    theory of Markov Chains and Continuous-Time
    Markov Chains (CT-MC))
  • Birth-Death Processes and the M/M/1 Queue (see
    the steady-state sketch after this outline)
  • Transient Analysis
  • Steady State Analysis
  • Modeling more complex behavior through CT-MCs
  • Single station systems with multi-stage
    processing, finite resources and/or blocking
    effects
  • Open (Jackson) and Closed (Gordon-Newell)
    Queueing networks
  • Gershwin's Models for Transfer Line Analysis
  • Bucket Brigades
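A minimal numerical sketch (in Python, not part of the original
slides) of the steady-state measures of the M/M/1 queue referenced
above; the function name and the example rates are illustrative.

  def mm1_steady_state(lam, mu):
      """Steady-state performance measures of a stable M/M/1 queue.

      lam : Poisson arrival rate
      mu  : exponential service rate (stability requires lam < mu)
      """
      if lam >= mu:
          raise ValueError("unstable queue: requires lam < mu")
      rho = lam / mu                 # server utilization
      L = rho / (1 - rho)            # expected number in system
      Lq = rho ** 2 / (1 - rho)      # expected number waiting in queue
      W = 1 / (mu - lam)             # expected time in system (Little's law: L = lam * W)
      Wq = rho / (mu - lam)          # expected waiting time in queue
      return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

  # Example: lam = 4, mu = 5 gives rho = 0.8, L = 4 jobs, W = 1 time unit.
  print(mm1_steady_state(4.0, 5.0))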

4
Course Outline (cont.)
  • 3. Accommodating non-Markovian behavior
  • Phase-type distributions and their role as
    approximating distributions
  • The M/G/1 queue
  • The G/M/1 queue
  • The G/G/1 queue
  • The essence of Factory Physics (see the G/G/1
    waiting-time sketch after this outline)
  • (BCMP networks)
  • 4. Performance Control of Production and Service
    Systems
  • Controlling the event rates of the underlying
    CT-MC model (an informal introduction of the dual
    Linear Programming formulation in standard MDP
    theory)
  • A brief introduction of the theory of Markov
    Decision Processes (MDPs) and of Dynamic
    Programming (DP)
  • An introduction to Approximate DP
  • An introduction to dispatching rules and
    classical scheduling theory
  • Buffer-based priority scheduling policies, Meyn
    and Kumar's performance bounds, and stability
    theory
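A small Python sketch (assumed, not from the slides) of Kingman's VUT
approximation of the mean queue wait at a G/G/1 station, which is the
core formula behind the Factory Physics treatment listed above:

  def gg1_wait_kingman(lam, mu, ca2, cs2):
      """Kingman's (VUT) approximation of the mean wait in queue at a G/G/1 station.

      lam : arrival rate, mu : service rate
      ca2 : squared coefficient of variation of interarrival times
      cs2 : squared coefficient of variation of service times
      """
      rho = lam / mu
      if rho >= 1:
          raise ValueError("unstable station: requires rho < 1")
      V = (ca2 + cs2) / 2.0          # variability term
      U = rho / (1 - rho)            # utilization term
      T = 1.0 / mu                   # mean service time
      return V * U * T               # Wq ~= V * U * T

  # With ca2 = cs2 = 1 the formula reduces to the exact M/M/1 result rho / (mu - lam).
  print(gg1_wait_kingman(4.0, 5.0, ca2=1.0, cs2=1.0))   # 0.8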

5
Course Outline (cont.)
  • 5. Behavioral Control of Production and Service
    Systems
  • Behavioral modeling and analysis of Production
    and Service Systems
  • Resource allocation deadlock and the need for
    liveness-enforcing supervision (LES)
  • Petri nets as a modeling and analysis tool
  • A brief introduction to the behavioral control of
    Production and Service Systems

6
What (I hope) we eventually achieved
  • A solid exposition of
  • DTMC theory
  • Properties of exponential distributions and
    Poisson processes
  • CTMC and Semi-Markov theory
  • Classical queueing theory
  • (An introduction to reversibility theory and its
    implications)
  • Demonstration of the application of the
    aforementioned theory in the modeling and
    performance evaluation of production and service
    systems
  • Bucket Brigades
  • Factory Physics type of models
  • The method of stages for modeling non-Markovian
    behavior through Markovian models (see the
    Erlang-fitting sketch after this list)
  • Other examples presented in class and in reading
    assignments (e.g., excerpt from Gershwin)
  • Homework problems
  • Establishing some proficiency for working with
    the aforementioned stochastic models
  • Identify / recognize cases that admit exact
    modeling
  • Fit more general situations / behaviors to the
    presented queueing models for an approximate
    analysis
  • Fall back to (semi-)Markov models via the method
    of stages, possibly combined with approximation
    techniques on the resulting Markov model, to deal
    with the complexity problems arising from the
    curse of dimensionality
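A brief Python sketch (an illustration, not course code) of the
simplest method-of-stages fit mentioned above: matching a
low-variability service time with an Erlang-k distribution, i.e.,
k exponential stages in series:

  def erlang_stage_fit(mean, scv):
      """Fit an Erlang-k distribution (method of stages) to a given mean and
      squared coefficient of variation (this simple fit assumes 0 < scv <= 1).

      An Erlang-k variable is the sum of k i.i.d. exponential stages, each
      with rate k / mean; its squared coefficient of variation is 1 / k.
      """
      if not (0 < scv <= 1):
          raise ValueError("this fit assumes 0 < scv <= 1")
      k = max(1, round(1.0 / scv))   # number of exponential stages
      stage_rate = k / mean          # rate of each exponential stage
      return k, stage_rate

  # Example: a service time with mean 2.0 and scv 0.25 maps to 4 stages of rate 2.0.
  print(erlang_stage_fit(mean=2.0, scv=0.25))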

7
Towards performance control
  • In case of a small number of alternatives, use an
    appropriate (descriptive) model to evaluate the
    performance of each alternative.
  • For cases where the system performance can be
    explicitly expressed as a function of the
    parameter of interest, formulate and solve an
    appropriate optimization (mathematical
    programming) problem.
  • Markov Decision Processes (MDP)
  • Transition probabilities out of each state are
    functions of decisions / actions taken at these
    states.
  • The selection of the decisions to be followed at
    each state (and visit) constitutes a policy. A
    policy can be either deterministic or randomized.
  • A functional defined over the sequence of visited
    states and performed actions at each state
    characterizes the performance of the policy
    instantiation.
  • The expected value of this functional over all
    the possible sample paths generated by the
    considered policy, when starting from a certain
    initial state, defines the policy value at this
    state.
  • In an MDP setting we want to identify a policy
    that optimizes the value of each state.
  • The function providing the state values under an
    optimal policy is known as the value function of
    the MDP problem.
  • Dynamic Programming (DP) is a set of methods that
    tries to compute the value function of a given
    MDP problem by building upon the fundamental
    remark that the value of a (state, action) pair
    at a certain time / period can be decomposed as
  • an immediate value for taking the considered
    action at the given state in the current period,
    plus
  • the expected value of the resulting state.
  • This decomposition is the essential content of
    the so-called Bellman's equation (see the value
    iteration sketch after this slide).
  • Availability of the value function reduces the
    computation of an optimal decision at any given
    state to a local optimization problem over the
    set of actions available at that state.
  • Classical DP methods are limited by the very
    large size of the underlying state spaces, which
    renders intractable even the enumeration of the
    entire value function. These complexity problems
    are collectively known as the curse of
    dimensionality.
  • Approximate DP (ADP) seeks to deal with the curse
    of dimensionality problems by working with an
    approximate compact representation of the value
    function. This approach necessitates the
    development of
  • a representational space able to support
    effective compact approximations of the problem
    value function
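A compact Python sketch (illustrative, not the course's formulation) of
value iteration, the textbook DP method that applies Bellman's equation
repeatedly until the value function converges; the array shapes and the
greedy-policy extraction are assumptions of this example:

  import numpy as np

  def value_iteration(P, R, gamma=0.95, tol=1e-8, max_iter=10000):
      """Value iteration for a finite, discounted MDP.

      P : array of shape (A, S, S); P[a, s, s'] = transition probability
      R : array of shape (A, S);    R[a, s]     = immediate reward of action a in state s
      Bellman's equation: V(s) = max_a [ R[a, s] + gamma * sum_{s'} P[a, s, s'] * V(s') ]
      """
      A, S, _ = P.shape
      V = np.zeros(S)
      for _ in range(max_iter):
          # Q[a, s] = immediate value plus expected discounted value of the next state
          Q = R + gamma * (P @ V)
          V_new = Q.max(axis=0)
          if np.max(np.abs(V_new - V)) < tol:
              V = V_new
              break
          V = V_new
      policy = (R + gamma * (P @ V)).argmax(axis=0)   # greedy action per state
      return V, policy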

8
Petri nets and behavioral control of RAS
(Figure: a generalized stochastic Petri net modeling
the CQN of Problem 8.15 in Homework 3)
  • Petri nets offer an unambiguous and compact
    representation of DES structure and behavior.
  • Reachability / Coverability analysis is the
    systematic construction of the underlying state
    space through an efficient enumeration of all the
    possible behaviors generated by the system (see
    the reachability sketch after this slide).
  • The availability of the reachable state space
    enables an assessment of various qualitative
    properties of the system behavior (like
    boundedness, liveness, deadlock-freedom,
    fairness, etc.)
  • For many PN sub-classes, the assessment of many
    such qualitative properties can be performed
    through much more efficient analytical /
    algebraic techniques that exploit the information
    on the system structure and its invariants
    provided by the PN model itself.
  • In the context of DES models of the resource
    allocation taking place in production and service
    systems, an important issue is the maintenance of
    live behavior by preventing the formation of
    deadlock.
  • Generalized Stochastic PNs allow the modeling of
    the passage of time by distinguishing between
    instantaneous and timed transitions. Timed
    transitions involve a delay in their firing,
    which is drawn from an exponential distribution.
    Their time-based dynamics corresponds to a
    semi-Markov process defined on the underlying
    state space.
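A minimal Python sketch (an illustration with a hypothetical two-place
net, not the Problem 8.15 model) of the Petri net firing rule and a
breadth-first construction of the reachable state space for a bounded net:

  from collections import deque

  def enabled(marking, pre, t):
      """A transition t is enabled iff every input place holds enough tokens."""
      return all(marking[p] >= w for p, w in pre[t].items())

  def fire(marking, pre, post, t):
      """Fire t: consume tokens from input places, deposit tokens in output places."""
      m = list(marking)
      for p, w in pre[t].items():
          m[p] -= w
      for p, w in post[t].items():
          m[p] += w
      return tuple(m)

  def reachable_markings(m0, pre, post, limit=10000):
      """Breadth-first enumeration of the reachable markings (assumes a bounded net)."""
      seen, frontier = {m0}, deque([m0])
      while frontier and len(seen) < limit:
          m = frontier.popleft()
          for t in pre:
              if enabled(m, pre, t):
                  m2 = fire(m, pre, post, t)
                  if m2 not in seen:
                      seen.add(m2)
                      frontier.append(m2)
      return seen

  # Hypothetical two-place cycle: t0 moves the token p0 -> p1, t1 moves it back.
  pre = {"t0": {0: 1}, "t1": {1: 1}}
  post = {"t0": {1: 1}, "t1": {0: 1}}
  print(reachable_markings((1, 0), pre, post))   # {(1, 0), (0, 1)}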

9
Thanks for being in the class, and have a great
holiday!