Title: Chapter 19 Real-Time Systems
1 Chapter 19 Real-Time Systems
-
- CS 540 Advanced Operating Systems
- Instructor: Dr. Behzad Perviz
- Fall 2010
-
- Presented By
- Monali Bhavsar
- Amee Joshi
2 Real-Time Systems
- System Characteristics
- Features of Real-Time Systems
- Implementing Real-Time Operating Systems
- Real-Time CPU Scheduling
- An Example: VxWorks 5.x
3 Objectives
- To explain the timing requirements of real-time systems
- To distinguish between hard and soft real-time systems
- To discuss the defining characteristics of real-time systems
- To describe scheduling algorithms for hard real-time systems
4 Overview of Real-Time Systems
- A real-time system requires that results be produced within a specified deadline period.
- Example: a robot.
- Contrast: desktop computer systems and batch-processing systems.
- An embedded system is a computing device that is part of a larger system (e.g., an automobile or airliner).
- May have no explicit timing requirements.
- Embedded in specialized devices.
- The presence of the computing device is not obvious.
- Examples: dishwashers, microwave ovens, cameras, MP3 players.
5 Overview of Real-Time Systems (continued)
- A safety-critical system is a real-time system with catastrophic results in case of failure.
- Examples: weapons systems, antilock brake systems, flight-management systems, health-related systems.
- The system must respond to events by a specific deadline period.
- A hard real-time system guarantees that real-time tasks are completed within their required deadlines.
- Critical real-time tasks must be completed within their deadlines.
- Safety-critical systems are typically hard real-time systems.
- A soft real-time system gives real-time tasks priority over non-real-time tasks.
- That priority is retained until the task completes.
- Linux and many other operating systems provide soft real-time support.
6 System Characteristics
- Single purpose
- Small size
- Inexpensively mass-produced
- Specific timing requirements
7 System Characteristics
- Single purpose
- Unlike general-purpose PCs, a real-time system serves a single purpose; its design reflects that purpose and is kept simple.
- Small size
- The physical environment is constrained, so the CPU power and memory available are less than on standard PCs.
- Many real-time systems run on 8- or 16-bit processors with less than a megabyte of memory.
- Footprint: the amount of memory required to run the OS and its applications.
- Real-time systems must have small footprints.
- Specific timing requirements
- Real-time operating systems meet timing requirements by using scheduling algorithms that give real-time processes the highest scheduling priorities.
- The priority of real-time tasks does not degrade over time.
- Another technique for addressing timing requirements is minimizing the response time to events such as interrupts.
8 System Characteristics
- Inexpensively mass-produced
- Real-time systems are used in home appliances and consumer devices, which are cost-conscious environments, so microprocessors for real-time systems must be inexpensively mass-produced.
- Example: system-on-a-chip (SOC).
9 System-on-a-Chip
- Many real-time systems are designed using a system-on-a-chip (SOC) strategy.
- SOC allows the CPU, memory, memory-management unit, and attached peripheral ports (e.g., USB) to be contained in a single integrated circuit.
- Less expensive than a bus-oriented organization.
10 System-on-a-Chip
11 Features of Real-Time Kernels
- Features provided by many standard operating systems include
- Support for a variety of peripheral devices
- Protection and security mechanisms
- Multiple users
- Supporting these features results in a large kernel (for example, Windows XP).
- Most real-time systems do not provide the features found in a standard desktop system as above.
- Reasons include
- Real-time systems are typically single-purpose.
- Real-time systems often do not require interfacing with a user.
- Features found in a desktop PC require substantial memory and fast processors, both of which are typically unavailable in real-time systems due to space constraints.
- In addition, many real-time systems lack sufficient space for graphical displays or disk drives; instead they support file systems using NVRAM (non-volatile RAM).
- Adding desktop-PC features would increase the cost of a real-time system, making it economically impractical.
12 Virtual Memory in Real-Time Systems
- Providing virtual memory features requires that the system include a memory-management unit (MMU).
- MMUs increase cost and power consumption.
- The time required to translate a logical address to a physical address, especially in the case of a translation look-aside buffer (TLB) miss, may be prohibitive in hard real-time systems.
- Address translation may occur via
- Real-addressing mode, where programs generate actual physical addresses
- Relocation-register mode
- Implementing full virtual memory
13 Address Translation
14 Address Translation
- Real-addressing mode
- The CPU generates a logical address L, which must be mapped to a physical address P.
- Real-addressing mode bypasses logical addressing and generates the physical address directly.
- No virtual memory techniques are employed, so P equals L.
- Problem: no memory protection between processes, and programmers must specify the physical location at which programs are loaded.
- Benefits: fast, with no time spent on address translation.
- Used in embedded systems with hard real-time constraints.
15 Address Translation
- Relocation-register mode
- Similar to dynamic relocation using a relocation register.
- The relocation register R is set to the memory location where a program is loaded.
- The physical address P is generated by adding the contents of the relocation register R to L (see the sketch below).
- Real-time systems can configure the MMU this way because the MMU can easily translate logical addresses to physical addresses using R.
- This mode also does not provide memory protection between processes.
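- A minimal sketch of these two translation modes; the register value and the function names below are illustrative assumptions, not part of any real MMU interface:

  #include <stdint.h>

  /* illustrative relocation register, set when the program is loaded */
  static uint32_t R = 0x40000;

  /* real-addressing mode: no translation, so P equals L */
  uint32_t translate_real(uint32_t L) { return L; }

  /* relocation-register mode: P = R + L */
  uint32_t translate_reloc(uint32_t L) { return R + L; }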
16 Address Translation
- Implementing full virtual memory
- Address translation takes place via a page table and a translation look-aside buffer (TLB).
- This strategy allows a program to be loaded at any memory location and provides memory protection between processes.
- Without attached disk drives it is not possible to provide all virtual memory features, such as demand paging and swapping.
- In contrast, some systems provide these features using NVRAM.
- Examples: LynxOS and OnCore Systems.
17 Implementing Real-Time Systems
- In general, real-time operating systems must provide:
- Preemptive, priority-based scheduling
- Preemptive kernels
- Minimized latency
18 Implementing Real-Time Systems
- Preemptive, priority-based scheduling
- The system must respond immediately to a real-time process, so it must support a priority-based scheduling algorithm with preemption.
- Priority-based scheduling algorithms assign each process a priority based on its importance.
- If the scheduler supports preemption, a process currently running on the CPU will be preempted if a higher-priority process becomes available to run.
- Solaris, Windows XP, and Linux assign the highest scheduling priorities to real-time processes.
- This provides only soft real-time functionality; for hard real-time functionality, additional scheduling features are needed to meet timing requirements (a minimal sketch of the preemption test follows).
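- A minimal sketch of the preemption test itself, assuming a larger number means a higher priority (the process representation is illustrative):

  struct proc { int priority; };   /* larger value = higher priority */

  /* should a newly ready process preempt the one currently on the CPU? */
  int should_preempt(const struct proc *running, const struct proc *ready)
  {
      return ready->priority > running->priority;
  }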
19 Windows XP Priorities
- Windows XP has 32 different priority levels; the highest levels (values 16-31) are reserved for real-time processes.
20 Implementing Real-Time Systems
- Preemptive kernels
- Allow preemption of a task running in kernel mode.
- Designing a preemptive kernel is difficult, so if a quick response is not required it is often not implemented; for example, the Windows XP kernel is non-preemptive.
- In hard real-time systems, preemptive kernels are mandatory.
- There are two strategies for making a kernel preemptible:
- First: insert preemption points in long-duration system calls.
- Preemption points can be placed only at safe locations in the kernel, that is, where kernel data structures are not being modified.
- Second: use synchronization mechanisms.
- Any kernel data being updated are protected from modification by the higher-priority process, so the kernel is always preemptible.
21 Minimizing Latency
- Event latency is the amount of time from when an event occurs to when it is serviced.
22 Interrupt Latency
- Interrupt latency is the period of time from when an interrupt arrives at the CPU to when it is serviced.
23 Interrupt Latency
- An important factor contributing to interrupt latency is the amount of time interrupts may be disabled while kernel data structures are being updated.
- Real-time operating systems require that interrupts be disabled for only very short periods of time.
- In hard real-time systems, interrupt latency must not only be minimized, it must in fact be bounded to guarantee deterministic behavior.
24 Dispatch Latency
- Dispatch latency is the amount of time required for the scheduler to stop one process and start another.
25 Dispatch Latency
- Providing real-time tasks with immediate access to the CPU requires that the operating system minimize dispatch latency; preemptive kernels help keep dispatch latency low.
- The conflict phase of dispatch latency has two components:
- Preemption of any process running in the kernel
- Release by low-priority processes of resources needed by a high-priority process
- Example: Solaris.
- One issue that can affect dispatch latency arises when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process, or by a chain of lower-priority processes.
- This problem is known as priority inversion.
- It is solved by the priority-inheritance protocol; a Pthread-based sketch follows.
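- Since later slides use the Pthread API, one hedged illustration is how POSIX exposes priority inheritance on a mutex (assuming the platform supports the _POSIX_THREAD_PRIO_INHERIT option):

  #include <pthread.h>

  pthread_mutex_t lock;

  void init_priority_inheritance_lock(void)
  {
      pthread_mutexattr_t attr;

      pthread_mutexattr_init(&attr);
      /* a thread holding this lock temporarily inherits the priority of
         any higher-priority thread that blocks on it */
      pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
      pthread_mutex_init(&lock, &attr);
      pthread_mutexattr_destroy(&attr);
  }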
26 Priority Inversion
27 Priority Inversion: Priority Inheritance
28 Real-Time CPU Scheduling
- Scheduling: deciding how to allocate a single resource among multiple clients, in what order and for how long.
- Usually refers to CPU scheduling.
- CPU scheduling decisions may take place when a process
- Switches from the running to the waiting state
- Switches from the running to the ready state
- Switches from the waiting to the ready state
- Terminates
29 Real-Time CPU Scheduling
- Two types of real time:
- Soft real time: deadlines are met most of the time, but this is not mandatory.
- Example: live audio-video systems are usually soft real-time; violating timing constraints results in degraded quality, but the system can continue to operate.
- Hard real time: deadlines must be met, otherwise a fatal error can result.
- Example: a car engine control system is a hard real-time system because a delayed signal may cause engine failure or damage.
30 Real-Time CPU Scheduling
- Periodic processes require the CPU at specified intervals (periods).
- p is the duration of the period
- d is the deadline by which the process must be serviced
- t is the processing time
- In general, 0 <= t <= d <= p.
31 Real-Time CPU Scheduling
- What is unusual about this form of scheduling is that a process may have to announce its deadline requirements to the scheduler.
- Using an admission-control algorithm, the scheduler
- either admits the process, guaranteeing that the process will complete on time,
- or rejects the request as impossible if it cannot guarantee that the task will be serviced by its deadline (a small utilization-based sketch follows).
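- A minimal admission-control sketch based on CPU utilization; the task structure, the running total, and the utilization bound are illustrative assumptions, not the textbook's algorithm:

  /* t = processing time, p = period; a task's utilization is t/p */
  struct task { double t, p; };

  static double current_util = 0.0;

  /* admit the task only if total utilization stays within the given bound */
  int admit(const struct task *tk, double bound)
  {
      double u = tk->t / tk->p;
      if (current_util + u > bound)
          return 0;               /* reject: deadline cannot be guaranteed */
      current_util += u;
      return 1;                   /* admit */
  }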
32 Real-Time CPU Scheduling Algorithms
- Rate Monotonic Scheduling
- Earliest Deadline First Scheduling
- Proportional Share Scheduling
- Pthread Scheduling
33 Rate Monotonic Scheduling
- Schedules periodic tasks using a static priority policy with preemption.
- If a lower-priority process is running and a higher-priority process becomes available to run, it will preempt the lower-priority process.
- Each periodic task is assigned a priority inversely based on its period (as sketched after this slide):
- The shorter the period, the higher the priority.
- The longer the period, the lower the priority.
- Rate monotonic scheduling assumes that the processing time of a periodic process is the same for each CPU burst.
- That is, every time a process acquires the CPU, the duration of its CPU burst is the same.
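- A sketch of the static priority-assignment rule (shorter period, higher priority); the structure and the numeric priority scale are illustrative:

  struct rm_task { double period; int priority; };

  /* give the task with the shortest period the largest priority value */
  void assign_rm_priorities(struct rm_task tasks[], int n)
  {
      for (int i = 0; i < n; i++) {
          tasks[i].priority = 0;
          for (int j = 0; j < n; j++)
              if (tasks[j].period >= tasks[i].period)
                  tasks[i].priority++;   /* rank tasks by period */
      }
  }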
34 Example
- Two processes, P1 and P2.
- The periods are p1 = 50 and p2 = 100.
- The processing times are t1 = 20 for P1 and t2 = 35 for P2.
- The deadline for each process requires that it complete its CPU burst by the start of its next period.
- The CPU utilization of a process Pi is the ratio of its burst to its period, ti/pi: for P1 it is 20/50 = 0.40 and for P2 it is 35/100 = 0.35, for a total CPU utilization of 75 percent.
- First, suppose we assign P2 a higher priority than P1.
35 Example (continued)
- Now suppose we use rate monotonic scheduling, in which we assign P1 a higher priority than P2.
36 Missing Deadlines with Rate Monotonic Scheduling
- Assume that process P1 has a period of p1 = 50 and a CPU burst of t1 = 25.
- For P2, the corresponding values are p2 = 80 and t2 = 35.
- The total CPU utilization of the two processes is (25/50) + (35/80) = 0.94, yet rate monotonic scheduling still misses P2's deadline.
37 Limitation
- CPU utilization is bounded.
- The worst-case CPU utilization for scheduling N processes is N(2^(1/N) - 1); the sketch below simply evaluates this bound.
- With one process in the system, CPU utilization is 100 percent, but it falls to approximately 69 percent as the number of processes approaches infinity.
- With two processes, CPU utilization is bounded at about 83 percent.
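- A small sketch that evaluates the bound N(2^(1/N) - 1) for the first few values of N:

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* worst-case schedulable utilization under rate monotonic scheduling */
      for (int n = 1; n <= 5; n++)
          printf("N = %d: bound = %.3f\n", n, n * (pow(2.0, 1.0 / n) - 1.0));
      /* prints 1.000, 0.828, 0.780, 0.757, 0.743; the limit is ln 2, about 0.69 */
      return 0;
  }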
38 Earliest Deadline First Scheduling
- EDF dynamically assigns priorities according to deadlines:
- The earlier the deadline, the higher the priority.
- The later the deadline, the lower the priority.
- If two tasks have the same absolute deadline, choose one of the two at random (ties can be broken arbitrarily).
39 Example
- Suppose we have two processes, P1 and P2.
- P1 has values p1 = 50 and t1 = 25.
- P2 has values p2 = 80 and t2 = 35.
40 Earliest Deadline First Scheduling (continued)
- Unlike the rate monotonic algorithm, EDF scheduling does not require that processes be periodic, nor must a process require a constant amount of CPU time per burst.
- The only requirement is that a process announce its deadline to the scheduler when it becomes runnable (the dispatch decision is sketched below).
- The appeal of EDF scheduling is that, theoretically, it can schedule processes so that each process meets its deadline requirements while CPU utilization reaches 100 percent.
- In practice this is impossible, owing to the cost of context switching between processes and interrupt handling.
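- A minimal sketch of the EDF dispatch decision: among the runnable tasks, pick the one with the earliest absolute deadline (the task representation is an illustrative assumption):

  struct edf_task {
      long deadline;    /* absolute deadline announced to the scheduler */
      int  runnable;
  };

  /* return the index of the runnable task with the earliest deadline,
     or -1 if no task is runnable */
  int edf_pick(const struct edf_task tasks[], int n)
  {
      int best = -1;
      for (int i = 0; i < n; i++)
          if (tasks[i].runnable &&
              (best < 0 || tasks[i].deadline < tasks[best].deadline))
              best = i;
      return best;
  }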
41 Proportional Share Scheduling
(Diagram: two applications with weights Wt1 and Wt2 receiving 1/3 and 2/3 of the CPU bandwidth, respectively.)
- Associate a weight with each application and allocate CPU bandwidth proportional to that weight.
- T shares are allocated among all processes in the system.
- An application receives N shares, where N < T.
- This ensures that each application receives N/T of the total processor time.
42 Example
- Assume that a total of T = 100 shares is to be divided among three processes, A, B, and C.
- A is assigned 50 shares, B is assigned 15 shares, and C is assigned 20 shares.
- This scheme ensures that A will have 50 percent of the total processor time, B will have 15 percent, and C will have 20 percent (a tiny sketch of this arithmetic follows).
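- The share arithmetic from this example as a tiny sketch (the values are taken from the slide):

  #include <stdio.h>

  int main(void)
  {
      const int T = 100;                      /* total shares */
      const int shares[] = { 50, 15, 20 };    /* A, B, C */
      const char *name[] = { "A", "B", "C" };

      /* each application receives shares[i]/T of the processor time */
      for (int i = 0; i < 3; i++)
          printf("%s: %d/%d = %d%% of the CPU\n",
                 name[i], shares[i], T, 100 * shares[i] / T);
      return 0;
  }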
43 Proportional Share Scheduling
- Proportional share schedulers must work in conjunction with an admission-control policy to guarantee that an application receives its allocated shares of time.
- An admission-control policy will admit a client requesting a particular number of shares only if sufficient shares are available.
44 Pthread Scheduling
- The Pthread (POSIX thread) library is a set of functions that enable C/C++ code to spawn multiple threads of execution to perform multiple tasks simultaneously.
- The Pthread API provides functions for managing real-time threads.
- Pthreads defines two scheduling classes for real-time threads:
- (1) SCHED_FIFO - threads are scheduled using a FCFS strategy with a FIFO queue; there is no time-slicing among threads of equal priority.
- (2) SCHED_RR - similar to SCHED_FIFO except that time-slicing occurs among threads of equal priority.
45 Pthread Scheduling API
- #include <pthread.h>
- #include <stdio.h>
- #define NUM_THREADS 5
- void *runner(void *param);   /* forward declaration of the thread function */
- int main(int argc, char *argv[])
- {
-   int i, policy;
-   pthread_t tid[NUM_THREADS];
-   pthread_attr_t attr;
-   /* get the default attributes */
-   pthread_attr_init(&attr);
-   /* get the current scheduling policy */
-   if (pthread_attr_getschedpolicy(&attr, &policy) != 0)
-     fprintf(stderr, "Unable to get policy.\n");
46 Pthread Scheduling API
-   else {
-     if (policy == SCHED_OTHER)
-       printf("SCHED_OTHER\n");
-     else if (policy == SCHED_RR)
-       printf("SCHED_RR\n");
-     else if (policy == SCHED_FIFO)
-       printf("SCHED_FIFO\n");
-   }
-   /* set the scheduling policy - FIFO, RR, or OTHER */
-   if (pthread_attr_setschedpolicy(&attr, SCHED_OTHER) != 0)
-     fprintf(stderr, "Unable to set policy.\n");
-   /* create the threads */
-   for (i = 0; i < NUM_THREADS; i++)
-     pthread_create(&tid[i], &attr, runner, NULL);
47 Pthread Scheduling API
-   /* now join on each thread */
-   for (i = 0; i < NUM_THREADS; i++)
-     pthread_join(tid[i], NULL);
-   return 0;
- }
- /* Each thread will begin control in this function */
- void *runner(void *param)
- {
-   /* do some work ... */
-   pthread_exit(0);
- }
48 An Example: VxWorks 5.x
- A popular real-time operating system providing hard real-time support.
- Commercially developed by Wind River Systems.
- Widely used in automobiles, consumer and industrial devices, and networking equipment such as switches and routers.
- Also used to control the two rovers, Spirit and Opportunity, that began exploring the planet Mars in 2004.
49 The Organization of VxWorks
50 Wind Microkernel
- The Wind microkernel provides support for the following:
- (1) Processes and threads using the Pthread API
- (2) Preemptive and non-preemptive round-robin scheduling
- (3) Interrupt management, with bounded interrupt and dispatch latency times
- (4) Shared memory and message passing for communication between separate tasks; it also allows tasks to communicate using a technique known as pipes, and provides semaphores and mutex locks with a priority-inheritance protocol to prevent priority inversion
51 Interesting Approach to Memory Management
- VxWorks supports two levels of virtual memory.
- First level
- Quite simple; allows control of the cache on a per-page basis.
- Enables an application to specify certain pages as non-cacheable.
- Second level
- Full virtual memory requires the optional virtual memory component VxVMI along with processor support for a memory-management unit (MMU).
- VxWorks allows pages containing kernel code, along with the interrupt vector, to be declared read-only.
52 References
- Operating System Concepts, 8th edition, by Silberschatz, Galvin, and Gagne.
- www.wikipedia.com
53 Thank You