Title: CSE 237B Fall 2004 Tasks and Task Scheduling for Real Time
CSE 237B Fall 2004: Tasks and Task Scheduling for Real Time
- Rajesh Gupta
- Computer Science and Engineering
- University of California, San Diego.
Overview
- The goal of task modeling and management is to understand the requirements of embedded software, both for application programming and for the operating system.
- Task management
- Task interaction
- Embedded Software as Tasks
- Static and Dynamic Aspects of Task scheduling
- Memory Management: Stack and Heap
- Real-time kernels
- Commercial and research real-time operating
systems
Tasks
- An embedded system typically has many activities (or tasks) occurring in parallel.
- A task represents an activity in the system.
- Historically, one task represents one sequential thread of execution; however, multithreading allows multiple threads of control in the same task.
- We will assume a single thread of control.
- The principles of concurrency are fundamental regardless of the granularity of the concurrent units (processes, tasks, or threads).
- We will examine concurrency in terms of tasks.
Concurrency
- Concurrent tasking means structuring a system into concurrent tasks.
- Advantages of concurrent tasking
- a natural model for many real-time applications
- separates the concern of what each task does from when it does it, which usually makes the system easier to understand, manage, and construct
- can reduce overall system execution time by overlapping the execution of independent tasks
- allows greater scheduling flexibility, since time-critical tasks with hard deadlines may be given higher priority than less critical tasks
- identifying the concurrent tasks early in the design allows an early performance analysis of the system
- However, concurrent tasking introduces complexity because of task interactions.
Task Interaction
- Often, tasks execute asynchronously, i.e., at different speeds, but may need to interact with each other.
- Three types of interaction are possible
- communication
- synchronization
- mutual exclusion
- Communication is used simply to transfer data between tasks.
- Synchronization is used to coordinate tasks.
- Mutual exclusion is used to control access to shared resources.
Results of Interaction
- Task interactions lead to three types of behavior
- independent
- cooperating
- competing
- Independent tasks have no interactions with each other.
- Cooperating tasks communicate and synchronize to perform some common operation.
- Competing tasks communicate and synchronize to obtain access to shared resources.
Task Implementation
- Two cases: dedicated versus shared resources.
- Implementing on dedicated resources (multiprocessing)
- dedicate one processor to each task
- connect processors using communication links such as a bus
- Different memory arrangements are possible, such as shared memory (one big memory shared by all, possibly with local memories too) or distributed memory (all local memories).
- Implementing on shared resources
- sharing the processor
- sharing the memory
Shared Processor Implementation
- Issues in implementing tasks on a shared processor
- how the processor is to be shared: what mechanisms are required to enable a processor executing one task to change its activity and execute another task
- when the processor is to be shared: at what times, or as a result of what events, should the processor change from executing one task to executing another
- which task the processor should direct its attention to when sharing of the processor is necessary (related to scheduling)
- How and when in serial execution
- commence the next task at its starting point, at the completion of the current task
- How and when in concurrent execution
- commence the next task at the point where it previously left off, when the current task gives up use of the processor
Shared Memory Implementation
- Issues in implementing tasks in a shared memory
- provide enough memory to hold all the tasks, or
- do code sharing and memory sharing
- Code sharing through
- serially re-usable code
- write the code of the shared subroutine (call it S) in such a way that it makes no assumptions about the values of its local variables when it is entered.
- Using a lock and unlock pair, only one task at a time can be made to use S.
- re-entrant code
- In the above scheme, all the temporary areas that S needs reside in S. If these areas were instead part of the task currently using S, then S would consist of executable code only, and it could be executed by more than one task at a time, provided that S did not modify its own code in any way.
- S uses the data areas indirectly, typically via a relocation pointer which is associated with each task and which is passed as a parameter when S is called.
Task Management
- A task can be in one of the states shown in the state diagram: Nonexisting, Created, Initializing, Executing, Waiting child initialization, Waiting dependent termination, Suspended, Terminated.
- task creation
- In general, all tasks should be created before run time and remain dormant until needed.
- This guarantees that the resource demands will be known and that performance can be evaluated with respect to real-time deadlines.
Task Modeling Issues
- Variations in the task models of concurrent programming languages are based on
- structure
- level of parallelism
- granularity
- initialization
- termination
- representation
- Structure
- static: the number of tasks is fixed and known before run time
- dynamic: tasks can be created at any time; the number of extant tasks is determined only at run time (e.g., Ada and C)
- Level of parallelism
- nested: tasks may be defined at any level of the program text; in particular, they are allowed to be defined within other tasks (e.g., Ada and C)
- flat: tasks are defined only at the outermost level of the program text
Task Modeling Issues
- Granularity
- coarse grain: such a program contains relatively few big (long-lived) tasks, e.g., Ada
- fine grain: such a program contains a large number of simple tasks
- Initialization: when a task is created, it may need to be supplied with information pertinent to its execution. Two ways to do that:
- pass information in the form of parameters to the task
- communicate explicitly with the task after it has started its execution
- Termination occurs under the following circumstances
- completion of execution of the task body
- suicide, by execution of a self-terminate statement
- abortion, through the explicit action of another task
- occurrence of an untrapped error condition
- never: tasks are assumed to execute non-terminating loops
- when no longer needed
Expressing Concurrency
- Representation: there are four basic mechanisms for expressing concurrent execution
- coroutines
- fork and join
- cobegin and coend
- explicit task declaration
Expressing Concurrency: Coroutines
- Like subroutines, but allow control to pass explicitly between them in a symmetric rather than strictly hierarchical way.
- Control is passed from one coroutine to another by means of the resume statement, which names the coroutine to be resumed.
- When a coroutine executes a resume, it stops executing but retains local state information, so that if another coroutine subsequently resumes it, it can and will continue its execution.
- No run-time support system is needed, as the coroutines themselves sort out their order of execution.
- In this scheme, tasks can be written by independent parties, and the number of tasks need not be known in advance.
- Certain languages, such as Modula-2, have built-in support for coroutines.
- error-prone due to the use of global variables for communication
Coroutines
[Diagram: three coroutines A, B, and C passing control to one another via "resume B", "resume C", and "resume A" statements.]
Expressing Concurrency: Fork and Join
- Fork and Join
- Fork specifies that a designated routine should start executing concurrently with the invoker of the fork.
- Join allows the invoker to synchronize with the completion of the invoked routine.
- Fork and join allow dynamic task creation and provide a means of passing information to the child task via parameters. Usually only a single value is returned by the child on its termination.
- flexible, but error-prone in use because they do not provide a structured approach to task creation
- available in Unix
Fork and Join

  function F return ...;
  procedure P;
    ...
    C := fork F;
    ...
    J := join C;
  end P;
Expressing Concurrency: Cobegin and Coend
- Cobegin
- a structured way of denoting the concurrent execution of a collection of statements
- Statements between a cobegin and coend pair execute concurrently.
- Can even support nesting of cobegins.
- Occam-2 supports cobegins.

  cobegin
    S1;
    S2;
    S3;
  coend
Explicit Task Declaration
- Explicit task declaration
- Routines themselves state whether they will be executed concurrently.
- Ada supports explicit task declaration with implicit task creation, in that all tasks declared within a block start executing concurrently at the end of the declarative part of that block.
- Ada also supports dynamic task creation using the new operator on a task type.
Task Interaction: Communication
- Communication is based on
- shared memory
- message passing
- Shared memory-based communication
- Each task may access or update pieces of shared information/data.
- Message passing-based communication
- A direct transfer of information occurs from one task to another.
- Communication mechanisms
- channels
- pools
Communication Mechanisms
- Channels
- provide the medium for items of information to be passed between one task and another
- can hold more than one item at any time
- usually pass items through in an ordered manner
- Pools
- make items of information available for reading and/or writing by a number of tasks in the system
- act as a repository of information; information does not flow within a pool
Implementing Communication
- Channel: provides a pipe of information passing from one task to another. For the tasks to run truly asynchronously, there must be some buffering of information; the larger the buffers, the greater the system flexibility.
- queues
- circular queues (or ring buffers, or hoppers)
- event flags
- sockets and pipes
- Pool: usually takes the form of system tables, shared data areas, and shared files. Since a pool is shared by more than one task, it is essential to strictly control access to the information in pools.
- mailboxes (or ports)
- monitors
- In all cases involving a finite-sized structure, the size of the structure should be taken into account during the design phase of the system to prevent overflows.
Implementing Communication
- Queues
- Items are placed on the tail of the queue by the sending task and removed from the head of the queue by the receiving task.
- A common organization is First-In-First-Out (FIFO), in which the first item in is the first item out.
- Items can have priorities and can be placed in the queue based on their priorities.
- For large items such as arrays, it is better to place the address of the item in the queue. In this case, the producer task allocates the memory and the consumer task releases or reuses it.
- Circular queues
- The underlying structure is a queue, but the arrangement is like a ring: items are placed into slots which are considered to be arranged around a ring.
- easier to manage than a plain FIFO queue, e.g., using sentinels
Implementing Communication
- Event flags
- An event flag is associated with a set of related Boolean events. The flag maintains the state of the events and provides users access to read or modify the events. A task can wait for a particular event to change state.
- In essence, they represent simulated interrupts created by the programmer. Raising the event flag transfers control to the operating system, which can then invoke the corresponding handler. An example is the raise and signal facilities in C.
- Liked by designers because they enable Boolean logic to be applied to events; e.g., a task can wait on the conjunction and/or disjunction of discrete events.
- Poor mechanisms because they carry no content, and it is hard to decide who resets a flag's state and what to do if a flag indicates the event is already set (or cleared).
Implementing Communication
- Sockets and pipes
- most often associated with network-based systems; provide a reliable communication path
- should be used if portability is more important than performance
- Mailboxes
- A mailbox is a mutually agreed-upon memory location that multiple tasks can use to communicate.
- Each mailbox has a unique identification, and two tasks can communicate only if they have a shared mailbox.
- uncommon in modern real-time systems
- Monitors
- A monitor is defined over a channel or a pool and hides its internal structure.
- A monitor is used to enforce synchronization (via condition variables) and mutual exclusion under the control of the compiler.
- provide information hiding; Java uses monitors
Task Synchronization
- Synchronization involves the ability of one task to stimulate or inhibit its own action or that of another task.
- In other words, in order to carry out the activities required of it, a task may need the ability to say "stop", "go", or "wait a moment" to itself or to another task.
- Synchronization between two tasks centers around two significant events: wait and signal.
- One task must wait for the expected event to occur, and the other task will signal that the event has occurred.
- Thus, synchronization can be implemented by assuming the existence of the following two procedures: WAIT(event) and SIGNAL(event).
- WAIT and SIGNAL are indivisible operations: once begun, they must be completed, and the processor cannot be switched to another task while they are being executed.
Implementing Synchronization
- WAIT(event)
- causes the task to suspend activity as soon as the WAIT operation is executed; it remains suspended until notification of the occurrence of the event is received.
- Should the event have already occurred, the task resumes immediately.
- A waiting task can be thought of as being in the act of reading event information from a channel or pool. Once this information appears, it can continue.
- SIGNAL(event)
- broadcasts the fact that an event has occurred. Its action is to place event information in a channel or pool, which in turn may enable a waiting task to continue.
- Implementing synchronization via semaphores
- a semaphore is a non-negative integer that can only be manipulated by WAIT and SIGNAL, apart from an initialization routine
- "event" in WAIT and SIGNAL above refers to a semaphore
- semaphores are also used to manage mutual exclusion
Task Interaction: Mutual Exclusion
- Critical region
- a sequence of statements that must appear to be executed indivisibly (or atomically)
- Mutual exclusion
- the synchronization required to protect a critical region
- can be enforced using semaphores
- Potential problems due to improper use of mutual exclusion primitives
- Deadlocks
- Livelocks
- Lockouts or starvation
- Priority inversion
Mutual Exclusion Problems
- Deadlock
- Two or more tasks are waiting indefinitely for an event that can be caused by only one of the waiting tasks.
- Livelock
- Two or more tasks are busy-waiting indefinitely for an event that can be caused by only one of the busy-waiting tasks.
- Lockout or starvation
- A task that wishes to gain access to a resource is never allowed to do so, because there are always other tasks gaining access before it.
- Priority Inversion
- An effective inversion of priority caused by a resource lock among (transitively) dependent tasks.
Task Interaction: Mutual Exclusion
- If a task is free from livelocks, deadlocks, and lockouts, then it is said to possess liveness. This property implies that if a task wishes to perform some action, then it will eventually be allowed to do so.
- In particular, if a task requests access to a critical section, then it will gain access within a finite time.
- Deadlock is the most serious of the three error conditions above. There are three possible approaches to addressing deadlock:
- deadlock prevention
- deadlock avoidance
- deadlock detection and recovery
- For a thorough discussion of these issues, refer to standard operating systems books (e.g., Silberschatz and Galvin), because real-time systems use the same techniques.
Embedded Software as Tasks
- Static and Dynamic Aspects of Implementation of
Embedded Software (Conceptualized as Tasks)
Embedded Software on a Processor
- Typical implementation approaches
- Synchronous
- single program
- Asynchronous
- foreground/background system
- multi-tasking
Consider the Following Example
- A process controller with the following modules
- a clock module that must run on every clock tick, which arrives every 20 ms
- a control module that must run every 40 ms
- three modules with soft constraints
- operator display update
- operator input
- management information logs
Single Program Approach

  while (1) {
    wait for clock; do clock module;
    if (time for control) do control;
    else if (time for display update) do display;
    else if (time for operator input) do operator input;
    else if (time for mgmt. request) do mgmt. output;
  }

- Must have t1 + max(t2, t3, t4, t5) ≤ 20 ms
- may require splitting tasks, which gets complex!
Another Example

  int main(void) {
    Init_All();
    for (;;) {
      IO_Scan();
      IO_ProcessOutputs();
      KBD_Scan();
      PRN_Print();
      LCD_Update();
      RS232_Receive();
      RS232_Send();
      TMR_Process();
    }
    /* should never ever get here */
    /* can put some error handling here, just in case */
    return 0;  /* will keep most compilers happy */
  }

Each change of state of an input or output results in an RS-232 message sent out, a printout, and an LCD update. Received RS-232 messages can result in printouts, LCD updates, and output status updates.
Observations on the Single Program Approach
- Each function called in the infinite loop represents an independent task.
- Each of these tasks must return in a reasonable time, no matter what thread of code is being executed.
- We have no idea at what frequency our main loop runs.
- In fact, the frequency is not constant and can change significantly with the system's status
- (while we are printing a long document or displaying a large bitmap, for example)
- Mix of periodic and event-driven tasks
- Most tasks are event driven
- e.g., IO_ProcessOutputs is an event-driven task
- event-driven tasks have a dedicated input event queue associated with them
- e.g., IO_ProcessOutputs receives events from IO_Scan, RS232_Receive, and KBD_Scan when an output needs to be turned on
- Others are periodic
- no trigger event, but they may have different periods, and may need to change their period over time
Observations on the Single Program Approach (contd.)
- Need some simple means of inter-task communication
- e.g., we may want to stop scanning the inputs after a particular keypad entry and restart the scanning after another entry
- this requires a call from the keypad scanner to stop the I/O scanner task
- e.g., we may also want to slow down the execution of some tasks depending on the circumstances
- say we detect an avalanche of input state changes, and our RS-232 link can no longer cope with sending all these messages
- we would like to slow down the I/O scanner task from the RS-232 sending task
- May need to perform a variety of small but important duties
- e.g., dim the LCD exactly one minute after the last key was pressed; flash a cursor on the LCD at a fixed, exact frequency
- dedicating a separate task to each of these functions may be overkill
Going Beyond Single Program Software
- Asynchronous implementation approaches
- Foreground/background systems
- Multitasking

Foreground (interrupt handler):

  on interrupt {
    do clock module;
    if (time for control) do control;
  }

Background:

  while (1) {
    if (time for display update) do display;
    else if (time for operator input) do operator input;
    else if (time for mgmt. request) do mgmt. output;
  }

- Decoupling relaxes the constraint to t1 + t2 ≤ 20 ms
Multi-tasking Approach
- Single program approach: one task
- Foreground/background: two tasks
- Generalization: multiple tasks
- also called processes, threads, etc.
- each task is carried out in parallel
- no assumption about the number of processors
- tasks simultaneously interact with external elements
- monitor sensors, control actuators via DMA, interrupts, I/O, etc.
- often only an illusion of parallelism
- requires
- scheduling of these tasks
- sharing data between concurrent tasks
Task Characteristics
- Tasks may have
- resource requirements
- importance levels (priorities or criticality)
- precedence relationships
- communication requirements
- And, of course, timing constraints!
- specify the times at which an action is to be performed and is to be completed
- e.g., the period of a periodic task
- or the deadline of an aperiodic task
Preemption
- Non-preemptive
- a task, once started, runs until it ends or has to do some I/O
- Preemptive: a task may be stopped to run another
- incurs overhead and implementation complexity
- but gives better schedulability
- Non-preemptive scheduling imposes quite restrictive constraints
- e.g., with N tasks, where task j becomes ready every Tj and needs Cj time during each interval Tj, we must have Tj ≥ C1 + C2 + ... + CN in the worst case
- because all other tasks may already be ready
- i.e., the period of every task must be at least the sum of all computation times!
So, then how to organize multiple tasks?
- Cyclic executive (static table-driven scheduling)
- static schedulability analysis
- resulting schedule or table used at run time
- TDMA-like scheduling
- Event-driven non-preemptive
- tasks are represented by functions that are handlers for events
- the next event is processed after the function for the previous event finishes
- Static and dynamic priority preemptive scheduling
- static schedulability analysis
- no explicit schedule is constructed; at run time, tasks are executed highest-priority first
- rate monotonic, deadline monotonic, earliest deadline first, least slack
Continued
- Dynamic planning-based scheduling
- schedulability is checked at run time for each dynamically arriving task (admission control)
- the resulting schedule decides when to execute
- Dynamic best-effort scheduling
- no schedulability checking is done
- the system tries its best to meet deadlines
Performance Characteristics to Evaluate Scheduling Algorithms
- Static case: an off-line schedule that meets all deadlines
- secondary metrics
- maximize average earliness
- minimize average tardiness
- Dynamic case: no a priori guarantee that deadlines will be met
- metric
- maximize the percentage of arrivals that meet their deadline
Cyclic Executive, or Static Table-driven Scheduling
- The application consists of a fixed set of processes.
- All processes are periodic, with known periods.
- aperiodic tasks can be converted into periodic ones by using their worst-case inter-arrival time
- Processes are completely independent of each other.
- Zero overhead costs are assumed.
- Processes have deadlines equal to their periods, i.e., each process must complete before it is next released.
- All processes have a fixed WCET (worst-case execution time).
- A table is constructed and tasks are dispatched from it repeatedly.
- a feasible schedule exists iff there is a feasible schedule for the LCM of the periods
- the table is constructed with heuristics such as earliest deadline first or shortest period first
- Predictable, but inflexible
- the table must be completely overhauled when tasks or their characteristics change
Example

  Process  Period T  Computation Time C
  A        25        10
  B        25        8
  C        50        5
  D        50        4
  E        100       2
Cyclic Executive
- Minor cycle: 25 ms
- Major cycle: 100 ms
- A major cycle contains a number of minor cycles.
- During execution, a clock interrupt every 25 ms enables the scheduler to loop through the four minor cycles.
Cyclic Executive
- So no actual processes exist at run time; each minor cycle is just a sequence of procedure calls.
- The procedures share a common address space and can thus pass data between themselves.
- This data does not need to be protected, because concurrent access is not possible.
- It is difficult to incorporate sporadic processes and processes with long periods (which may need to be split).
- However, if it is possible to construct a cyclic executive, then no further schedulability test is needed.
- Constructing the table is a bin-packing problem.
- A typical system might have 40 minor cycles and 400 entries.
Priority-based Scheduling
- Non-preemptive scheduling
- a lower-priority task completes before the next available higher-priority task executes
- Preemptive scheduling
- preempt the executing task based on priority
- Deferred preemption, or cooperative dispatching
- allow a lower-priority task to continue for a bounded time (but not necessarily to completion)
Priority-based Preemptive Scheduling
- Tasks are assigned priorities, statically or dynamically.
- the priority assignment relates to the timing constraints
- static priorities are attractive: no recalculation, cheap
- At any time, the task with the highest priority runs.
- if a low-priority task is running and a higher-priority task arrives, the former is preempted and the processor is given to the new arrival
- Appropriate assignment of priorities allows this scheme to handle certain types of real-time cases.
- Process states
- Runnable
- Suspended waiting for a timing event
- useful for periodic processes
- Suspended waiting for a non-timing event
- useful for sporadic processes
- (assume no IPC for now)
Scheduling and Schedulability Tests
- Scheduling is the determination of the next task to run.
- can be as simple as determining task priorities
- Schedulability test
- a test that determines whether a set of ready tasks can be scheduled such that each task meets its deadline
- Tests can be
- exact
- necessary
- sufficient
- [Diagram: which of the exact, necessary, and sufficient tests is available depends on the complexity of the task set.]
Some Priority-based Preemptive Scheduling Approaches
- Rate Monotonic algorithm, by Liu and Layland
- static priorities based on periods
- higher priorities to shorter periods
- optimal among all static-priority schemes
- Earliest Deadline First
- dynamic priority assignment
- the closer a task's deadline, the higher its priority
- applicable to both periodic and aperiodic tasks
- needs to recalculate priorities when new tasks arrive
- more expensive in terms of run-time overheads
- Key results on schedulability bounds!
Rate Monotonic Priority Assignment
Schedulability Tests
- Utilization based
- elegant, but not exact
Example 1
Example 2
Example 3
- How far can we continue this visual test?
Schedulability Tests
- Utilization-based tests are not exact and cannot be generalized easily.
- Response Time Analysis
- computes the worst-case response time Ri of each process i
- For the highest-priority process, the worst-case response time is equal to its own computation time.
- For other processes, however, it is a function of the interference from higher-priority processes.
- The maximum interference is bounded.
Response Time Analysis
- Therefore, the worst-case response time is defined by a recurrence (equation on slide).
- This can be solved by iterating the recurrence relation to a fixed point.
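The equations on the original slides were images and did not survive conversion. The standard response-time recurrence they refer to, reconstructed to be consistent with the worked example that follows, is:

```latex
R_i = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j
```

solved iteratively as

```latex
w_i^{n+1} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{w_i^{n}}{T_j} \right\rceil C_j,
\qquad w_i^{0} = C_i
```

where hp(i) is the set of processes with priority higher than that of i. The iteration stops when w converges (giving Ri) or exceeds the deadline (unschedulable).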
Example
- T1 has a response time of 3: OK
- For T2
- response time = 3 + ⌈3/7⌉ × 3 = 6
- response time = 3 + ⌈6/7⌉ × 3 = 6: OK (converged)
- For T3
- response time = 5 + ⌈5/7⌉ × 3 + ⌈5/12⌉ × 3 = 11
- response time = 5 + ⌈11/7⌉ × 3 + ⌈11/12⌉ × 3 = 14
- response time = 5 + ⌈14/7⌉ × 3 + ⌈14/12⌉ × 3 = 17
- response time = 5 + ⌈17/7⌉ × 3 + ⌈17/12⌉ × 3 = 20
- response time = 5 + ⌈20/7⌉ × 3 + ⌈20/12⌉ × 3 = 20: OK (converged)
Exercise
- Try response time analysis for the given process set.
Deadline Monotonic Priority Assignment
- For sporadic processes, the period now provides a minimum (or average) bound on their arrival rate.
- T = 20 ms means the process is guaranteed not to arrive more than once in any 20 ms interval.
- Example: an error handling routine, with timing derived from a fault model.
- Assuming the deadline equals the period is no longer reasonable.
- DMPO: the fixed priority of a process is inversely proportional to its deadline.
Analysis
- Response time analysis
- works perfectly for values of D less than T, as long as the stopping criterion is changed to w_i > D_i (instead of equality with the period)
- determines the response time for a given priority ordering
- Deadline Monotonic Priority Ordering (DMPO) is optimal
- that is, if a process set is schedulable by any fixed-priority scheme, it is also schedulable by DMPO
Process Interactions and Blocking
Priority Inversion and Inheritance
- Priority inversion
- a higher-priority process (L4) waits for a lower-priority process (because of resource locks)
- a result of a fixed-priority scheme
- Priority inheritance
- if a process p is suspended waiting for process q, then the priority of q becomes that of p
- L1 will then have the priority of L4
- and will therefore run in preference to L3 and L2
Response Time Calculations with Blocking
- Very similar to the non-blocking case.
- With priority inheritance, a blocking term is added (equation on slide).
- Here, usage is a 0/1 function: 1 if resource k is used by at least one process with priority less than i and at least one process with priority greater than or equal to i.
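The equation here was also an image; a reconstruction of the standard form it refers to, using the usage function defined above (the notation C(k) for the worst-case critical-section time of resource k is my assumption):

```latex
R_i = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j,
\qquad
B_i = \sum_{k=1}^{K} \mathrm{usage}(k, i)\, C(k)
```

The blocking term B_i bounds how long process i can be delayed by lower-priority processes holding resources, and the recurrence is solved by the same fixed-point iteration as before.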
Dynamic Planning-Based Approaches
- Combine the flexibility of dynamic approaches with the predictability of approaches that check for feasibility.
- On task arrival, before execution begins
- an attempt is made to create a schedule that contains the previously admitted tasks and the new arrival
- if the attempt fails, alternative actions are taken
Dynamic Best Effort Approaches
- A task could be preempted at any time during execution.
- We don't know whether a timing constraint is met until the deadline arrives or the task finishes.
Other Scheduling Issues
- Scheduling with fault-tolerance constraints
- Scheduling with resource reclaiming
- Imprecise computations
Scheduling with Fault-tolerance Constraints
- Example: a deadline mechanism guarantees that a primary task will make its deadline if there is no failure, and that an alternative task (of lower precision) will run by its deadline in case of failure.
- if there is no failure, the time set aside for the alternative task is reused
- Another approach: contingency schedules embedded in the primary schedule, triggered when there is a failure.
Scheduling with Resource Reclaiming
- Tasks' execution times vary
- some tasks may finish sooner than expected
- The task dispatcher can reclaim this time and use it for other tasks
- e.g., non-real-time tasks can be run in the idle slots
- even better: use it to improve guarantees for tasks that have timing constraints

Next Lecture: Implementing RTOS