Title: Module 3: Basic Periodic Tasking
1. Module 3: Basic Periodic Tasking

2. Some Definitions

- A job is a one-time, finite-length activity performed by a processor.
- The amount of work required to perform that activity, measured in units of time, is called the execution time, denoted ei.
- A task is a (usually conceptually infinite) sequence of jobs.
[Figure: timeline showing task T1 with jobs J1,1 and J1,2, and task T2 with jobs J2,1, J2,2, and J2,3]
3. Notes on Definitions

- A job can be code executed by a processor, or bits transferred by a bus, switch, or network link.
- A job is sometimes called a task instance or task dispatch.
- We will restrict attention here to sequences of non-overlapping jobs, i.e. a job will not begin until its predecessor has completed.
- A task may be denoted by τ (Greek letter tau) in some literature.
- Job execution time is often called compute time in some literature, denoted by C.
- Execution time is the preemption-free, contention-free time required for the processor to perform that job to completion.
- Execution time is not equal for all jobs in a task. The parameter ei is interpreted as the maximum or worst-case execution time (WCET).
4. Events in the Life of a Job

[Figure: timeline of job events: release time, completion time, response time, relative deadline, absolute deadline]
5. Notes on Job Events

- Response time is not equal for all jobs in a task, due to
  - Variations in job execution time
  - Variations due to system scheduling
- Response time of a task is the largest response time for any of its jobs.
- Deadline usually means relative deadline, not absolute deadline.
6. Hard versus Soft

- Hard real-time means failure to meet a deadline is considered a fault.
- There is no general consensus on a precise definition of soft real-time.
- In practice, hard real-time models are useful engineering abstractions, e.g. the hard real-time requirement simplifies the process of validating the overall system. (And simplifies the process of designing the overall system.)
7. Timing V&V

- Timing requirements must be verified and validated.
- Validation means ensuring the numbers in the requirements specification really solve the end user's problem, e.g. the specified maximum screen update time following a button push really does avoid irritating the pilot. (We won't cover this.)
- Verification means ensuring the implementation complies with the requirements. (We'll talk about some methods for this.)
- Many timing requirements are derived requirements, meaning they are specified by other engineers, e.g. control task periods.
- V&V of timing requirements must be mapped into the V&V process somewhere, but this depends on the exact V&V processes used.
8. Periodic Tasks

[Figure: timeline showing the period P and execution time e of a periodic task]

A periodic task is a sequence of jobs that are released at times F, P + F, 2P + F, 3P + F, ... A periodic task Ti is specified using the parameters:
- Pi: period
- ei: maximum execution time
- Fi: phase or offset (almost always zero)
- Di: relative deadline (almost always Pi)
9. Notes on Periodic Tasks

- Di = Pi (absolute deadline equals next release time) is very common. If deadlines are not stated then this is assumed (called the implicit deadline assumption).
- Non-zero phase is sometimes used in distributed system scheduling; we'll ignore it for now.
- Pi can be interpreted as the minimum time between releases, often called the minimum interarrival time. It turns out that many uni-processor scheduling and schedulability analysis results hold under this relaxed model. These are called sporadic rather than periodic tasks. (We will use the term purely periodic when we want to explicitly exclude sporadic tasks.)
- Ti is often used in the literature instead of Pi.
10. Jitter

Suppose rk and rk+1 are consecutive release times. Then |P - (rk+1 - rk)| is the jitter, or the variation between the ideal and actual inter-release time. The release time jitter of a task is the maximum jitter between any pair of consecutive job release times. The completion time jitter is similarly defined. How much jitter is tolerable depends on the application. Jitter is often ignored as a constraint in the scheduling literature. Where jitter is tightly constrained (e.g. multi-media), the system is often designed to constrain it (e.g. outputs are buffered and output with very low jitter at a periodic interrupt).
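As a small sketch of the definition above, release-time jitter can be computed from a list of observed release times (the function name and the sample times are illustrative, not from the slides):

```python
def release_jitter(release_times, period):
    """Maximum deviation between the ideal and actual
    inter-release time over consecutive releases."""
    return max(
        abs(period - (b - a))
        for a, b in zip(release_times, release_times[1:])
    )

# Ideal releases every 10 ms, with small dispatch variations.
times = [0.0, 10.2, 19.9, 30.1, 40.0]
print(release_jitter(times, 10.0))  # about 0.3 (the 10.2 -> 19.9 gap)
```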
11. Processor Utilization

- The utilization of a processor is the asymptotic fraction of time that processor is busy executing jobs.
- The utilization due to a periodic task Ti is
  - Ui = ei / Pi
- For a sporadic task this is the maximum utilization.
- For a single task, it should be obvious that Ui > 1 implies no feasible schedule can exist.
12. Utilization due to a Task Set

- For a set of periodic tasks hosted on a common processor, the utilization is the sum over all tasks:
  - U = U1 + U2 + ... + Un = sum of ei / Pi

U > 1 means the task set cannot possibly be scheduled so that every job of every task always meets its deadline. We say the task set is infeasible, or cannot be feasibly scheduled.
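The task-set utilization sum above can be sketched directly (the task parameters are made-up examples):

```python
def utilization(tasks):
    """Total utilization U = sum of e_i / P_i over all tasks.

    tasks: list of (execution_time, period) pairs.
    """
    return sum(e / p for e, p in tasks)

# Example task set: (e, P) pairs.
tasks = [(1, 4), (1, 5), (2, 10)]
u = utilization(tasks)
print(u)           # 0.25 + 0.2 + 0.2 = 0.65
print(u <= 1.0)    # necessary (but not sufficient) feasibility condition
```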
13. Breakdown Utilization

- U <= 1 does not necessarily mean a task set can be feasibly scheduled.
- The breakdown utilization U* of a given scheduling algorithm is the value such that U <= U* guarantees the task set can be feasibly scheduled by that algorithm.
- Theoretical breakdown utilizations are known for only a few scheduling algorithms.
- In practice, be aware that resources cannot always be perfectly utilized (especially in multi-processor systems).
14. Hyperperiod

- A periodic system schedule can be defined by giving a finite-length schedule that is repeated over and over.
- The length of the smallest such schedule is called the hyperperiod for that task set.
- For a purely periodic task set, the hyperperiod is the least common multiple (LCM) of the task periods.
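The LCM computation above is a one-liner in most languages; a sketch in Python (which assumes integer periods, consistent with the "Representing Time" advice later in the module):

```python
from math import lcm  # variadic lcm requires Python 3.9+

def hyperperiod(periods):
    """Hyperperiod of a purely periodic task set:
    the least common multiple of the (integer) periods."""
    return lcm(*periods)

print(hyperperiod([4, 5, 10]))    # 20
print(hyperperiod([10, 20, 40]))  # 40: harmonic set, equals largest period
```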
15. Harmonic Task Set

- A periodic task set is harmonic (simply periodic) if
  - Every period evenly divides all larger periods
  - Every period is evenly divisible by all smaller periods
- The hyperperiod of a harmonic task set is equal to the largest task period.
- Harmonic task sets are of interest because
  - They occur commonly in practice
  - Scheduling algorithms may be simpler and more efficient (higher breakdown utilizations)
  - Shorter hyperperiods may enable more efficient implementations (e.g. smaller scheduling tables)
16. Drift

Release time drift can occur when the mechanisms used to dispatch tasks (clocks and software) are not explicitly synchronized. This occurs sometimes in multi-processor systems, and in systems architected as compositions of individual controllers (e.g. hierarchical control architecture). This is undesirable among the tasks in a single multi-rate controller.

17. Periodic Tasking
18. Representing Time

Implementations often synchronize/align job dispatches. Exact alignment is necessary to model an infinite schedule as a repeated hyperperiod schedule. Operations like "mod" and "exactly divisible by" appear in many formulas. In implementations (simulations, analysis algorithms, embedded code), avoid using floating point numbers to represent time. Most commonly, integers and a sufficiently small quantization of time are used. (There are a few theoretical models best served by rational numbers.)
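A small sketch of why integer ticks behave better than floats for the "exactly divisible" tests above (the 1 µs quantum is an arbitrary example choice):

```python
# Represent time as integer microsecond ticks rather than float seconds.
US_PER_MS = 1000

period_us = 10 * US_PER_MS   # 10 ms period, represented exactly
t_us = 30 * US_PER_MS        # 30 ms, represented exactly

# The exact-divisibility test is reliable with integers...
print(t_us % period_us == 0)       # True

# ...but equivalent float arithmetic is not exact.
print((0.1 + 0.1 + 0.1) == 0.3)    # False, due to binary rounding
```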
19. Clock-Driven Scheduling

- Static
- Off-line
- Non-preemptive
  - No run-time context swaps
  - Implemented using a single processor thread
  - May use off-line splitting, though
- Implementation idioms
  - Timer-Driven (Table-Driven) Scheduling
  - Frame-Based (Cyclic) Scheduling
20. Basic Idea

- Define a hyperperiod schedule during development, to be executed repeatedly at run-time.
- Release time and latest completion time for each job, relative to the start of the hyperperiod, are fixed at development-time.

[Figure: one hyperperiod of the repeating schedule]
21. Table-Driven Scheduling

- Construct a table that lists the relative release time of every job in the hyperperiod.

22. Run-Time Table-Driven Scheduler

- int := 0
- k := 0
- set clock interrupt handler to Scheduler
- start clock with first interrupt at Release_Time(0)
- procedure Scheduler is
  - job := Job_To_Release(k)
  - int := int + 1
  - k := int mod Table_Size
  - h := floor(int / Table_Size)
  - set next clock interrupt to h * Hyperperiod + Release_Time(k)
  - call job
- end Scheduler
23. Run-Time Table-Driven Scheduler Notes

- This is a little different than in the book, e.g. omits background servicing during idle jobs, omits check for job overrun.
- Note clock and int may overflow. Either assure these are large enough, or modify the idiom so clock and all variables are maintained modulo the hyperperiod (or some multiple thereof).
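The table-driven idiom above can be sketched as a simulation that steps through interrupt times across successive hyperperiods (the table contents and job names are invented for illustration):

```python
# Schedule table: (relative_release_time, job_name), sorted by release time.
TABLE = [(0, "J1"), (4, "J2"), (6, "J1"), (10, "J3")]
HYPERPERIOD = 12

def interrupt_times(n):
    """Absolute times of the first n scheduler interrupts, mirroring
    'next interrupt at h * Hyperperiod + Release_Time(k)' from the
    pseudocode, with k = int mod Table_Size and h = floor(int / Table_Size)."""
    times = []
    for i in range(n):
        k = i % len(TABLE)
        h = i // len(TABLE)
        times.append(h * HYPERPERIOD + TABLE[k][0])
    return times

print(interrupt_times(6))  # [0, 4, 6, 10, 12, 16]: table wraps at t = 12
```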
24. So, How do you Generate a Table?

- The general problem is NP-hard.
- Special cases are easier, e.g. theoretical results for unit execution times.
- Intuitively, big jobs make the puzzle-solving harder.
  - Break big jobs into smaller pieces (this can become a scheduling maintenance nightmare)
  - Adapt X-fit decreasing bin packing techniques
  - Simulate off-line non-preemptive earliest deadline first
- See references on the class web site
25Frame-Based Scheduling
J21
J12
J22
J11
J14
J13
t11
t21
t12
t13
t22
t14
- Evenly divide the hyperperiod into a number of
equal-length frames, and assign jobs to frames. - Jobs assigned to a frame are executed
back-to-back when that frame begins. - The frame structure is essentially a fixed
strictly periodic scheduling table structure,
which admits a simpler implementation idiom.
26. Run-Time Cyclic Scheduler

- cycle := 0
- set periodic clock interrupt handler to Scheduler
- start clock with interrupt period Size_Of_Frame
- procedure Scheduler is
  - case cycle is
    - when 0 => J11; J21
    - when 1 => J12
    - when 2 => J13; J22
    - when 3 => J14
  - end case
  - cycle := (cycle + 1) mod Number_Of_Frames
- end Scheduler
27. Run-Time Cyclic Scheduler Notes

- Again, a little different than in the book, e.g. omits overrun checks and background servicing, and illustrates a slightly different implementation idiom.
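The cyclic idiom above maps naturally onto a frame table; a minimal sketch (the frame assignments are invented for illustration):

```python
# Jobs assigned to each frame; executed back-to-back when the frame begins.
FRAMES = [["J11", "J21"], ["J12"], ["J13", "J22"], ["J14"]]

def run(num_ticks):
    """Record which jobs run at each frame interrupt, mirroring
    cycle := (cycle + 1) mod Number_Of_Frames."""
    log, cycle = [], 0
    for _ in range(num_ticks):
        log.append(list(FRAMES[cycle]))
        cycle = (cycle + 1) % len(FRAMES)
    return log

print(run(5))  # frame 0 repeats after one full cycle of 4 frames
```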
28. So, How do you Generate a Frame-Based Schedule?

1. Pick a frame size
  - The text gives constraints to be satisfied for the general problem (but no constraint satisfaction algorithm).
  - For harmonic workloads, typically use the smallest task period.
2. Assign jobs to frames
  - Treat this as a kind of bin packing problem.
  - See references on the class web site.
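Step 2 above can be sketched as first-fit bin packing of job execution times into frames. This is only the packing half of the problem: a real tool must also respect each job's release time and deadline. The frame size and job set here are made up:

```python
def assign_jobs_to_frames(jobs, frame_size, num_frames):
    """First-fit packing: put each job in the first frame with enough
    remaining capacity. Returns the per-frame job lists, or None if
    some job does not fit anywhere."""
    frames = [[] for _ in range(num_frames)]
    remaining = [frame_size] * num_frames
    for name, e in jobs:
        for f in range(num_frames):
            if remaining[f] >= e:
                frames[f].append(name)
                remaining[f] -= e
                break
        else:
            return None  # no frame can hold this job
    return frames

jobs = [("J11", 3), ("J21", 1), ("J12", 2), ("J13", 2), ("J22", 2)]
print(assign_jobs_to_frames(jobs, frame_size=4, num_frames=4))
```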
29. Static Scheduling Notes 1

- Advantage: fairly easy to include all time-critical processor activity in the time line, e.g. scheduler/interrupt handler, state copy and I/O code.
- Advantage: verification is relatively easy.
- Advantage: overheads can be low, e.g. implementable using a single interrupt-handling thread.
- Advantage: jobs are always mutually exclusive.
30. Static Scheduling Notes 2

- Using a COTS RTOS, hard real-time periodic tasks can be executed by the interrupt handling thread, leaving any other RTOS threads to execute in background. These should be non-time-critical. Background response times may become quite large as the periodic utilization becomes large.
- Disadvantage: Not well-suited to sporadic (or other not-strictly-periodic hard real-time) tasks. Requires such tasks to be scheduled as polling tasks, which adds up to one period worst-case response between actual event and job completion time.
31. Static Scheduling Notes 3

- Disadvantage: Producing an efficient schedule is a challenge.
  - NP-hard problem in general.
  - Scheduling tool may exhibit anomalous behaviors
    - Small change to problem results in big change to schedule
    - Small change to problem results in big change in scheduling tool running time
    - Making the problem simpler (e.g. reducing an execution time) may cause the scheduling tool to fail (anomalous scheduling)
  - Usually can't say much about breakdown utilization
- Off-line manual pseudo-preemption of large jobs can help, but be careful of complicating the scheduling and maintenance problem.
32. Static Scheduling Notes 4

- Lots of literature on all kinds of fancy, high-powered methods applied to this problem, e.g.
  - Various pruned search heuristics
  - Simulated annealing
- Highly personal and totally unsubstantiated opinion: the complicated stuff probably only gets you a little bit more than fairly fast algorithms based on theoretically good (if not optimal) methods for bin-packing, non-preemptive scheduling, etc.
- If finding static schedules becomes a major development challenge (e.g. overnight runs, management nail-biting) then you probably should rethink some of the architectural decisions (e.g. add another processor, switch to another scheduling paradigm). (But sometimes it's too late.)
33. Static Scheduling Notes 5

- Recommended for relatively simple periodic task sets, especially where very low implementation complexity and overhead are desirable.
- Not recommended for complex workloads (e.g. non-harmonics, sporadics, multi-processor), especially where high utilizations are required.
34. Preemptive Priority Scheduling

- A job becomes ready at its release time.
- A job remains ready until it has received enough processor time to complete.
- Among all ready jobs, the processor executes the one having highest priority.
- The scheduling problem reduces to assigning priorities to jobs.
- We initially assume task priorities are distinct: no two tasks have the same priority.
35. Example

- P1 = 2, e1 = 1; P2 = 3, e2 = 1
- T1 has higher priority than T2
- (all jobs of T1 have higher priority than jobs of T2)

36. Notes on Example

- This is called a fixed priority assignment (aka static assignment, off-line assignment). All jobs of a task have the same priority, and priorities are assigned at development-time.
- This schedule is just barely feasible at U = 5/6 (breakdown utilization < 1; it is a non-harmonic task set).
37. Priority Assignment Algorithms

- Preemptive Fixed Priority
  - Rate Monotonic Assignment (RMA): priorities assigned monotonically with periods; shorter periods get higher priority
  - Deadline Monotonic Assignment (DMA): priorities assigned monotonically with deadlines; shorter deadlines get higher priority (generalizes RMA)
- Earliest Deadline First
  - Job with earliest absolute deadline has highest priority (not a fixed/static priority assignment)
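DMA (and RMA, with deadline replaced by period) is essentially just a sort; a sketch, with invented example tasks:

```python
def dma_priorities(tasks):
    """Deadline Monotonic Assignment: sort tasks by relative deadline;
    the shortest deadline gets priority 1 (highest, by the usual
    literature convention).

    tasks: list of (name, period, exec_time, deadline) tuples.
    Returns a {name: priority} mapping.
    """
    ordered = sorted(tasks, key=lambda t: t[3])
    return {t[0]: prio for prio, t in enumerate(ordered, start=1)}

tasks = [("T1", 10, 2, 10), ("T2", 5, 1, 5), ("T3", 20, 4, 15)]
print(dma_priorities(tasks))  # {'T2': 1, 'T1': 2, 'T3': 3}
```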
38. PFP Lemma

- When thinking about the timing and scheduling properties of a job, you can ignore all jobs from all tasks of lower priority.
39. Critical Instant Result

- For periodic/sporadic fixed priority schedules,
- Among all possible relative task phases (whether due to explicit phases or drift),
- The maximum response time of a job will occur when it is dispatched simultaneously with jobs of every higher-priority task.
- So, to determine worst-case response times, we only have to simulate a schedule starting at an instant at which all tasks are dispatched simultaneously, up to the point the job of interest has completed.
40. Intuitive Critical Instant Proof (Case 1)

Suppose this phasing gave the greatest response time for the lower priority job. If we slide the high priority jobs later in time (to the right), we bring in more preemption from the left, and can at most push the same amount of preemption out to the right. Thus, this can only increase and never decrease the amount by which the lower priority job is preempted, hence can only increase and never decrease its response time.
41. Intuitive Critical Instant Proof (Case 2)

Suppose this phasing gave the greatest response time for the lower priority job. If we slide the high priority jobs earlier in time (to the left), we do not decrease any preemption at the start of the interval, and might increase preemption at the end. Thus, this can only increase and never decrease the amount by which the lower priority job is preempted, hence can only increase and never decrease its response time.
42. Intuitive Critical Instant Proof (Induction Step)

- Suppose for a task with k higher-priority tasks, we are given a phasing that has worst-case response time.
- Apply this argument to align the phase of the next lower priority task. The response time is the same as a single task combining the dispatched work of both within the response time interval.
- Recursively apply the argument to the next lower priority task.
43. DMA is an Optimal PFP Assignment

- Among all possible preemptive fixed priority assignments, DMA is optimal in the sense:
  - If any priority assignment produces a feasible schedule, then DM will also.
  - If a DM schedule is not feasible, then there exists no feasible fixed priority assignment.
- There are periodic task sets for which both DMA and non-DMA priority assignments are feasible.
- There are periodic task sets for which PFP DMA is infeasible but dynamic priority assignments are feasible (e.g. EDF).
44. Intuitive DMA Optimality Proof

[Figure: busy interval of the two swapped tasks]

- Suppose we are given a feasible non-DM priority assignment.
- Search from highest to lowest priority until we find a pair of tasks having non-DM priority.
- We can switch their priorities without making the schedule infeasible:
  - This has no effect on the total preemption by higher priority tasks in their combined busy interval.
  - The completion time of the formerly lower priority task decreases.
  - The completion time of the formerly higher priority task becomes that of the formerly lower priority task, but this meets the deadline.
45. Analyzing Worst-Case Response Times

A rate-monotonic priority assignment. By convention in the literature, 1 is the highest scheduling priority. (This convention is not followed by all RTOS products, however.)

[Figure: timeline of three tasks at priority levels 1, 2, and 3]
46. Time Demand Analysis

The maximum number of dispatches of task j that can occur in an interval of length t is ceil(t / Pj). The maximum amount of work that task j can dispatch in an interval of length t is therefore ceil(t / Pj) * ej.

47. Time Demand Analysis (aka Busy Period Analysis)

The maximum amount of work that can be dispatched in an interval of length t by task i together with all tasks having higher priority than task i is

  wi(t) = ei + sum over higher-priority tasks j of ceil(t / Pj) * ej

48. Time Demand Analysis

Find the least t for which wi(t) <= t holds; this is the worst-case response time of task i.

49. Fixed-Point Time Demand Analysis (for a single task Ti)

t := ei
Loop
  t := wi(t)
Until t does not change
50. Analyzing Worst-Case Response Times

[Figure: timeline for priority levels 1, 2, and 3; analyze task 2, ignoring everything of lower priority]

51. Analyzing Worst-Case Response Times

t = e2 initially
t = e2 + e1 (one dispatch of each could have occurred in the above interval)
t = e2 + 2e1 (1 dispatch of task 2 and 2 dispatches of task 1 could have occurred in the above interval)

52. Time Demand Analysis (Another View)

[Figure: demand curve for tasks 1 and 2 (amount of work dispatched) versus supply curve (amount of work the processor could have performed); the point at which work performed catches up with work dispatched gives the response time, measured from the critical instant]
53. Time Demand Criteria

- The processor will remain busy from the time job i is dispatched until that job completes, which occurs only when the processor has performed at least this much work, i.e. wi(t) <= t.
- This is actually the basis for two algorithms:
  - Check every release time for all jobs at priority level i and higher (exact characterization algorithm, in the text)
  - Compute the minimum t as a least fixed point (response time algorithm, next slide)

54. Response Time Analysis

- For each i from highest to lowest priority loop
  - t := ei
  - loop
    - If wi(t) = t then exit (t is the response time of Ti; next i)
    - t := wi(t)
    - If t > Di then infeasible
  - end loop
- end loop
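The fixed-point iteration above can be sketched directly. The sketch assumes tasks are listed highest priority first and implicit deadlines (Di = Pi); the example parameters are the P1 = 2, e1 = 1; P2 = 3, e2 = 1 task set used earlier:

```python
from math import ceil

def response_time(tasks, i):
    """Worst-case response time of tasks[i] under preemptive fixed
    priority scheduling, by least-fixed-point iteration on
    w_i(t) = e_i + sum over higher-priority j of ceil(t / P_j) * e_j.

    tasks: list of (period, exec_time) pairs, highest priority first.
    Returns None if t exceeds the period (infeasible under the
    implicit-deadline assumption D_i = P_i)."""
    p_i, e_i = tasks[i]
    t = e_i
    while True:
        w = e_i + sum(ceil(t / p) * e for p, e in tasks[:i])
        if w == t:
            return t          # least fixed point reached
        if w > p_i:
            return None       # misses its (implicit) deadline
        t = w

print(response_time([(2, 1), (3, 1)], 0))  # 1: T1 suffers no preemption
print(response_time([(2, 1), (3, 1)], 1))  # 2: T2 preempted once by T1
```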
55. Other Schedulability Analyses

- RMA for implicit deadlines and harmonic periods has a theoretical breakdown utilization of 1.
- EDF for implicit deadlines has a theoretical breakdown utilization of 1.
- Extensions exist to compute critical scaling factors.
- Extensions exist to include things like context swap times, real-time semaphore waiting times, etc. (covered in a future class).
56. PFP vs EDF

- EDF
  - is optimal, even for more general models than this
  - (PFP breakdown utilization is theoretically as low as 69%, but generally > 90%, and 100% for harmonics)
- PFP
  - Is stable and predictable under overload
  - Is widely supported by available HW and SW
  - Has a large body of extensions and tricks
  - (EDF proponents have addressed many of these in recent years)
57. COTS RTOS Support

- RTOS maintains a priority ready queue
- Service calls to set thread priority
- Various service calls move threads to/from the ready queue
- At scheduling points, RTOS selects the highest priority ready thread to execute
58. Self-Dispatching Periodic Thread

- Next_Dispatch := 0
- loop
  - <<computation>>
  - Next_Dispatch := Next_Dispatch + Period
  - RTOS.Wait_Until (Next_Dispatch)
- end loop

59. Self-Dispatching Periodic Thread

- RTOS.Setup_Periodic_Signal (Psig, Period)
- loop
  - RTOS.Wait_For (Psig)
  - <<computation>>
- end loop
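A sketch of the first idiom in Python: the key point is that Wait_Until takes an absolute time, so late wake-ups do not accumulate into release-time drift. The RTOS calls are imitated with time.monotonic/time.sleep, and the job body is a placeholder:

```python
import time

def periodic_loop(period_s, job, num_dispatches):
    """Self-dispatching periodic thread idiom: advance an absolute
    next-dispatch time by the period, then sleep until that time.
    Sleeping until an absolute time (rather than for a relative
    delay) prevents drift from accumulating across dispatches."""
    next_dispatch = time.monotonic()
    for _ in range(num_dispatches):
        job()
        next_dispatch += period_s                 # absolute, drift-free
        delay = next_dispatch - time.monotonic()
        if delay > 0:
            time.sleep(delay)                     # Wait_Until(next_dispatch)

releases = []
periodic_loop(0.01, lambda: releases.append(time.monotonic()), 5)
print(len(releases))  # 5
```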
60. Periodic Dispatcher (aka System Executive)

- cycle := 0
- Int_Period := GCD (P1, P2, ...)
- set periodic clock interrupt handler to Scheduler
- start clock with interrupt period Int_Period
- procedure Scheduler is
  - if cycle mod (P1 / Int_Period) = 0 then dispatch T1
  - if cycle mod (P2 / Int_Period) = 0 then dispatch T2
  - ...
  - cycle := (cycle + 1) mod (Hyperperiod / Int_Period)
- end Scheduler
61. Implementation Comments

- A separate dispatcher adds another thread, but
  - Can assure synchronized dispatching
  - Is a place to put comm, I/O, initialization, ...
  - Can help detect and manage overruns
  - Is a centralized place for task data
- Dispatcher appears as the highest rate, highest priority task in the workload model.
62. Assignment for 8 Feb

- Chapter 6
  - 6.8-6.8 (practical factors)
- Chapter 8
  - 8.1-8.6 (real-time semaphore protocols)

63. Select A Real-Time Project (to be completed by 1 March 2008)

- Implement a deadline-monotonic priority assignment algorithm and a preemptive fixed priority response time analysis algorithm. Run on a few example workloads. UI not relevant.
- Implement a non-preemptive cyclic scheduling algorithm. Run on a few example workloads. UI not relevant. You may assume harmonic periods.
- Download and apply an RTOS schedulability analysis tool to an example problem (not one of the included examples).
  - http://mast.unican.es/
  - http://beru.univ-brest.fr/singhoff/cheddar/
- Implement a periodic dispatching idiom to periodically dispatch a small number of non-trivial procedures and capture sequences of execution times. Experiment with changes in input, and with other loads on the processor. Discuss selecting a WCET value.
- Student proposal