1
CHAPTER 7 CONCURRENT SOFTWARE
2
Program Organization of a Foreground/Background System
[Diagram: the main program starts, performs initialization, and then
waits for interrupts; each hardware interrupt invokes its own ISR
(ISR for Task 1, Task 2, or Task 3), and each ISR ends with an IRET.]
3
Foreground/Background System
  • Most of the actual work is performed in the
    "foreground" ISRs, with each ISR processing a
    particular hardware event.
  • Main program performs initialization and then
    enters a "background" loop that waits for
    interrupts to occur (sketched in C below).
  • Allows the system to respond to external events
    with a predictable amount of latency.
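A minimal sketch of this structure in C (the interrupt keyword is the
same DOS-compiler extension used in the code example on the next
slide; the ISR body and device handling are placeholders, not from
the slides):

void interrupt Task1_ISR(void)     /* foreground: one ISR per hardware event */
{
    /* read the device, do the time-critical work, send EOI to the PIC */
}

void main(void)
{
    /* initialize the hardware and install the ISR vectors, then ... */
    for (;;)
    {
        /* background loop: simply wait for interrupts to occur */
    }
}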

4
Task State and Serialization
unsigned int byte_counter;

void Send_Request_For_Data(void)
{
    outportb(CMD_PORT, RQST_DATA_CMD);
    byte_counter = 0;
}

void interrupt Process_One_Data_Byte(void)
{
    BYTE8 data = inportb(DATA_PORT);
    switch (++byte_counter)        /* count this reply byte: 1, 2, or 3 */
    {
        case 1:  Process_Temperature(data);  break;
        case 2:  Process_Altitude(data);     break;
        case 3:  Process_Humidity(data);     break;
    }
}
5
ISR with Long Execution Time
[Flowchart: on the Input Ready interrupt the ISR executes STI, inputs
the data, processes it, waits in a loop until the output device is
ready, outputs the data, sends the EOI command to the PIC, and IRETs.]
6
Removing the Waiting Loop from the ISR
[Flowchart, two halves joined by a FIFO queue.
ISR (Input Ready): STI; Input Data; Process Data; Enqueue Data;
Send EOI Command to PIC; IRET.
Background: Initialize; loop waiting until data is enqueued and the
output device is ready; Dequeue Data; Output Data.
A minimal FIFO sketch follows.]
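A minimal FIFO sketch of the kind the diagram implies (a simple ring
buffer; the queue size and function names are assumptions, not the
book's code):

/* Single producer (the ISR) and single consumer (the background). */
typedef unsigned char BYTE8;           /* as used elsewhere in this chapter    */
#define QSIZE 64

static volatile BYTE8    queue[QSIZE];
static volatile unsigned head = 0;     /* next slot to write (ISR side)        */
static volatile unsigned tail = 0;     /* next slot to read (background side)  */

int Enqueue(BYTE8 data)                /* called from the ISR                  */
{
    unsigned next = (head + 1) % QSIZE;
    if (next == tail) return 0;        /* full: caller counts an overrun       */
    queue[head] = data;
    head = next;
    return 1;
}

int Dequeue(BYTE8 *data)               /* called from the background loop      */
{
    if (tail == head) return 0;        /* nothing enqueued                     */
    *data = queue[tail];
    tail = (tail + 1) % QSIZE;
    return 1;
}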
7
Interrupt-Driven Output
[Flowchart, two ISRs joined by a FIFO queue.
Input Ready ISR: STI; Input Data; Process Data; Enqueue Data;
Send EOI Command to PIC; IRET.
Output Ready ISR: STI; if data is enqueued, Dequeue Data and
Output Data; Send EOI Command to PIC; IRET.]
8
Kick Starting Output
[Flowchart.
Input Ready ISR: STI; Input Data; Process Data; Enqueue Data; if the
output device is not busy, CALL SendData (kick start); Send EOI
Command to PIC; IRET.
Output Ready ISR: STI; CALL SendData; Send EOI Command to PIC; IRET.
SendData subroutine: if data is enqueued, Dequeue Data, Output Data,
and Set Busy Flag; otherwise Clear Busy Flag; RET.
A C sketch of this logic follows.]
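A hedged C sketch of the kick-start idea, reusing the Enqueue/Dequeue
routines sketched earlier; the port names (OUT_PORT), the processing
routine, and the EOI step are assumptions for illustration:

static volatile int output_busy = 0;   /* set while the output device is working */

void SendData(void)                    /* called by both ISRs                    */
{
    BYTE8 data;
    if (Dequeue(&data)) {
        outportb(OUT_PORT, data);      /* start the next transfer               */
        output_busy = 1;
    } else {
        output_busy = 0;               /* queue empty: nothing left to send     */
    }
}

void interrupt Input_Ready_ISR(void)
{
    BYTE8 result = ProcessData(inportb(DATA_PORT));
    Enqueue(result);
    if (!output_busy) SendData();      /* kick start the output interrupt chain */
    /* send EOI command to the PIC */
}

void interrupt Output_Ready_ISR(void)
{
    SendData();                        /* send the next byte, or clear busy flag */
    /* send EOI command to the PIC */
}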
9
Preventing Interrupt Overrun
[Flowchart with annotations.
Input Ready ISR: Input Data; Send EOI Command to PIC (removes the
interrupt request that invoked this ISR; once interrupts are
re-enabled by the STI below, interrupts from lower priority devices,
and from this device too, are allowed again).
ISR Busy Flag Set?  Yes: ignore this interrupt (interrupts are
re-enabled by the IRET).  No: Set ISR Busy Flag; STI (allow
interrupts from any device); process the data, write the result to
the output queue, kick start; Clear ISR Busy Flag; IRET.
A C sketch follows.]
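The same busy-flag logic, sketched in C; the PIC port constants and
the processing routine are assumptions, and enable() is the STI
wrapper used in the spin-lock code later in this chapter:

static volatile int isr_busy = 0;      /* set while the long part of the ISR runs */

void interrupt Overrun_Safe_ISR(void)
{
    BYTE8 data = inportb(DATA_PORT);   /* input data                              */
    outportb(PIC_PORT, EOI_CMD);       /* send EOI command to the PIC             */
    if (isr_busy) return;              /* overrun: ignore this interrupt;
                                          interrupts are re-enabled by the IRET   */
    isr_busy = 1;
    enable();                          /* STI: allow interrupts from any device   */
    ProcessAndEnqueue(data);           /* process, write to output queue, kick start */
    isr_busy = 0;
}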
10
Preventing Interrupt Overrun
[Flowchart with annotations.
Input Ready ISR: STI (allow interrupts from higher priority devices);
Input Data; set the mask bit for this device in the 8259 PIC (disable
future interrupts from this device); Send EOI Command to PIC (removes
the interrupt request that invoked this ISR; allow interrupts from
lower priority devices); process the data, write the result to the
output queue, kick start; clear the mask bit for this device in the
8259 PIC (enable future interrupts from this device); IRET.]
11
Moving Work into Background
  • Move non-time-critical work (such as updating a
    display) into a background task.
  • The foreground ISR writes data to a queue, and the
    background task removes and processes it.
  • An alternative to ignoring one or more interrupts
    as the result of input overrun.

12
Limitations
  • Best possible performance requires moving as much
    as possible into the background.
  • Background becomes collection of queues and
    associated routines to process the data.
  • Optimizes latency of the individual ISRs, but
    background begs for a managed allocation of
    processor time.

13
Multi-Threaded Architecture
[Diagram: several ISRs pass data through queues to two background
threads, all running on top of a multi-threaded run-time function
library (the real-time kernel).]
14
Thread Design
  • Threads usually perform some initialization and
    then enter an infinite processing loop.
  • At the top of the loop, the thread relinquishes
    the processor while it waits for data to become
    available, an external event to occur, or a
    condition to become true (see the sketch below).
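For example, a thread body written against the µC/OS-II queue calls
listed at the end of this chapter might look like this; the queue
handle and display routine are assumptions, and the kernel header
that defines OS_EVENT, BYTE8, and OSQPend() is assumed to be included:

extern OS_EVENT *display_queue;        /* created elsewhere with OSQCreate()  */
void UpdateDisplay(void *msg);         /* assumed non-time-critical worker    */

void Display_Thread(void *data)
{
    /* one-time initialization would go here */
    for (;;)                           /* infinite processing loop            */
    {
        BYTE8 err;
        /* relinquish the processor until a message becomes available */
        void *msg = OSQPend(display_queue, 0, &err);
        UpdateDisplay(msg);            /* then process the data               */
    }
}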

15
Concurrent Execution of Independent Threads
  • Each thread runs as if it had its own CPU
    separate from those of the other threads.
  • Threads are designed, programmed, and behave as
    if they are the only thread running.
  • Partitioning the background into a set of
    independent threads simplifies each thread and
    thus reduces total program complexity.

16
Each Thread Maintains Its Own Stack and Register
Contents
17
Concurrency
  • Only one thread runs at a time while others are
    suspended.
  • Processor switches from one thread to another so
    quickly that it appears all threads are running
    simultaneously. Threads run concurrently.
  • The programmer assigns a priority to each thread,
    and the scheduler uses this to determine which
    thread to run next.

18
Real-Time Kernel
  • Threads call a library of run-time routines
    (known as the real-time kernel) that manages
    resources.
  • The kernel provides mechanisms to switch between
    threads and to support coordination,
    synchronization, communication, and priority.

19
Context Switching
  • Each thread has its own stack and a special
    region of memory referred to as its context.
  • A context switch from thread "A" to thread "B"
    first saves all CPU registers in context A, and
    then reloads all CPU registers from context B.
  • Since the CPU registers include SS:ESP and CS:EIP,
    reloading context B reactivates thread B's stack
    and returns to where it left off when it was last
    suspended (see the sketch below).
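One way to picture a per-thread context for the register set
described above is as a hypothetical C structure; real kernels define
this layout themselves, usually in assembly:

typedef unsigned short WORD16;         /* as used elsewhere in this chapter   */
typedef unsigned long  DWORD32;

typedef struct {
    DWORD32 eax, ebx, ecx, edx;        /* general registers                   */
    DWORD32 esi, edi, ebp;
    DWORD32 eip, esp;                  /* where to resume, and on which stack */
    WORD16  cs, ss, ds, es;            /* segment registers                   */
    DWORD32 eflags;
} CONTEXT;

CONTEXT context_A, context_B;          /* one context per thread              */

/* A context switch (written in assembly in practice) saves every register
   into the old thread's CONTEXT and reloads them all from the new one;
   restoring SS:ESP and CS:EIP reactivates the new thread's stack and
   resumes it exactly where it was last suspended. */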

20
Context Switching
[Diagram: timeline of threads A and B. Thread A executes while B is
suspended; the kernel saves context A and restores context B, after
which B executes and A is suspended; later it saves context B and
restores context A, and A resumes execution.]
21
Non-Preemptive Multi-Tasking
  • Threads call a kernel routine to perform the
    context switch.
  • Thread relinquishes control of processor, thus
    allowing another thread to run.
  • The context switch call is often referred to as a
    yield, and this form of multi-tasking is often
    referred to as cooperative multi-tasking.

22
Non-Preemptive Multi-Tasking
  • When external event occurs, processor may be
    executing a thread other than one designed to
    process the event.
  • The first opportunity to execute the needed
    thread will not occur until current thread
    reaches next yield.
  • When yield does occur, other threads may be
    scheduled to run first.
  • In most cases, this makes it impossible or
    extremely difficult to predict the maximum
    response time of non-preemptive multi-tasking
    systems.

23
Non-Preemptive Multi-Tasking
  • Programmer must call the yield routine
    frequently, or else system response time may
    suffer.
  • Yields must be inserted in any loop where a
    thread is waiting for some external condition
    (see the example below).
  • Yield may also be needed inside other loops that
    take a long time to complete (such as reading or
    writing a file), or distributed periodically
    throughout a lengthy computation.
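For example, with the Multi-C yield call listed later in this
chapter, a waiting loop in a cooperative thread might look like this
(the ready flag is an assumption, and the Multi-C header is assumed
to be included):

extern volatile int data_ready;        /* set by an ISR when input arrives    */

void Wait_For_Data(void)
{
    while (!data_ready)                /* waiting for an external condition   */
    {
        MtCYield();                    /* let the other threads run meanwhile */
    }
}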

24
Context Switching in a Non-Preemptive System
[Flowchart: Start; Thread Initialization; then an endless loop in
which the thread either does its data processing or, whenever it must
wait for something, yields to the other threads. On each yield the
scheduler selects the highest priority thread that is ready to run;
if that is not the current thread, the current thread is suspended
and the new thread resumed.]
25
Preemptive Multi-Tasking
  • Hardware interrupts trigger context switch.
  • When external event occurs, a hardware ISR is
    invoked.
  • After servicing the interrupt request, the ISR
    raises the priority of the thread that processes
    the associated data, then performs a context
    switch to the highest priority thread that is
    ready to run and returns to it.
  • Significantly improves system response time.

26
Preemptive Multi-Tasking
  • Eliminates the programmer's obligation to include
    explicit calls to the kernel to perform context
    switches within the various background threads.
  • Programmer no longer needs to worry about how
    frequently the context switch routine is called;
    it's called only when needed - i.e., in response
    to external events.

27
Preemptive Context Switching
[Diagram: Thread A is executing and Thread B is suspended when a
hardware interrupt invokes an ISR. The ISR processes the interrupt
request and then performs a context switch: the scheduler selects the
highest priority thread that is ready to run, and if that is not the
current thread, the current thread is suspended and the new thread
resumed. After the IRET, Thread B is executing and Thread A is
suspended.]
28
Critical Sections
  • Critical section: a code sequence whose proper
    execution is based on the assumption that it has
    exclusive access to the shared resources that it
    is using during the execution of the sequence.
  • Critical sections must be protected against
    preemption, or else the integrity of the
    computation may be compromised.

29
Atomic Operations
  • Atomic operations are those that execute to
    completion without preemption.
  • Critical sections must be made atomic.
  • Disable interrupts for their duration (as
    sketched below), or
  • Acquire exclusive access to the shared resource
    through arbitration before entering the critical
    section and release it on exit.
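A minimal sketch of the first option, using the disable()/enable()
pair that appears in the spin-lock code later in this chapter; the
shared counter is illustrative:

extern void disable(void);             /* CLI: disable interrupts             */
extern void enable(void);              /* STI: re-enable interrupts           */

static volatile unsigned long shared_count;

void Increment_Shared_Count(void)
{
    disable();                         /* make the read-modify-write atomic   */
    shared_count = shared_count + 1;   /* critical section                    */
    enable();
}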

30
Threads, ISRs, and Sharing
  • Between a thread and an ISR
  • Data corruption may occur if the thread's
    critical section is interrupted to execute the
    ISR.
  • Between 2 ISRs
  • Data corruption may occur if the critical section
    of one ISR can be interrupted to execute the
    other ISR.
  • Between 2 threads
  • Data corruption may occur unless execution of
    their critical sections is coordinated.

31
Shared Resources
  • A similar situation applies to other kinds of
    shared resources - not just shared data.
  • Consider two or more threads that want to
    simultaneously send data to the same (shared)
    disk, printer, network card, or serial port. If
    access is not arbitrated so that only one thread
    uses the resource at a time, the data streams
    might get mixed together, producing nonsense at
    the destination.

32
Uncontrolled Access to a Shared Resource (the
Printer)
[Diagram: Thread A sends "HELLO\n" and Thread B sends "goodbye" to
the shared printer at the same time; their characters interleave and
the printer produces "HgoELodLO bye".]
33
Protecting Critical Sections
  • Non-preemptive system: the programmer has explicit
    control over where and when a context switch
    occurs.
  • Except for ISRs!
  • Preemptive system: the programmer has no control
    over the time and place of a context switch.
  • Protection options:
  • Disabling interrupts
  • Spin lock
  • Mutex
  • Semaphore

34
Disabling Interrupts
  • The overhead required to disable (and later
    re-enable) interrupts is negligible.
  • Good for short critical sections.
  • Disabling interrupts during the execution of a
    long critical section can significantly degrade
    system response time.

35
Spin Locks
If the flag is set, another thread is currently using the shared
memory and will clear the flag when done.

Spin-lock in C:

    do {
        disable();
        ok = !flag;
        flag = TRUE;
        enable();
    } while (!ok);
    /* critical section */
    flag = FALSE;

Spin-lock in assembly:

    L1:     MOV   AL,1
            XCHG  _flag,AL
            OR    AL,AL
            JNZ   L1
            ; critical section
            MOV   BYTE _flag,0

[Flowchart: if the flag is set, spin; otherwise set the flag, execute
the critical section, then clear the flag.]
36
Spin Locks vs. Semaphores
  • A non-preemptive system requires a kernel call
    inside the spin lock loop to let other threads run.
  • Context switching during a spin lock can be a
    significant overhead (saving and restoring each
    thread's registers and stack).
  • Semaphores avoid these repeated context switches:
    the waiting thread stays suspended until the flag
    is released.

37
Semaphores
[Flowchart: Semaphore Pend; Critical Section; Semaphore Post.
The kernel suspends this thread if another thread has possession of
the semaphore; this thread does not get to run again until the other
thread releases the semaphore with a post operation.
A usage sketch follows.]
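A usage sketch with the µC/OS-II semaphore calls listed later in this
chapter; the printer semaphore and report routine are assumptions,
and the kernel header defining OS_EVENT and BYTE8 is assumed to be
included:

OS_EVENT *printer_sem;                 /* shared-printer semaphore             */

void Init(void)
{
    printer_sem = OSSemCreate(1);      /* 1 = printer initially available      */
}

void Print_Report(void)
{
    BYTE8 err;
    OSSemPend(printer_sem, 0, &err);   /* pend: wait for exclusive access      */
    /* critical section: send the entire report to the shared printer */
    OSSemPost(printer_sem);            /* post: let the next thread print      */
}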
38
Kernel Services
  • Initialization
  • Threads
  • Scheduling
  • Priorities
  • Interrupt Routines
  • Semaphores
  • Mailboxes
  • Queues
  • Time

39
Initialization Services
  • Multi-C
  • n/a
  • µC/OS-II
  • OSInit()
  • OSStart()

40
Thread Services
  • Multi-C
  • ECODE MtCCoroutine(void (*fn)())
  • ECODE MtCSplit(THREAD *new, MTCBOOL *old)
  • ECODE MtCStop(THREAD *)
  • µC/OS-II
  • BYTE8 OSTaskCreate(void (*fn)(void *), void
    *data, void *stk, BYTE8 prio) (usage example below)
  • BYTE8 OSTaskDel(BYTE8 prio)
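A hedged example of creating a task with the OSTaskCreate call listed
above; the stack size, stack type, and priority value are arbitrary
choices for illustration, and the kernel header is assumed included:

#define STACK_SIZE 512
static DWORD32 display_stack[STACK_SIZE];    /* this task's private stack     */

void Display_Thread(void *data);             /* thread body sketched earlier  */

void Create_Threads(void)
{
    /* pass the top of the stack (stacks grow downward) and a unique priority */
    OSTaskCreate(Display_Thread, (void *)0,
                 &display_stack[STACK_SIZE - 1], 10);
}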

41
Scheduling Services
  • Multi-C
  • ECODE MtCYield(void)
  • µC/OS-II
  • void OSSchedLock(void)
  • void OSSchedUnlock(void)
  • BYTE8 OSTimeTick(BYTE8 old, BYTE8 new)
  • void OSTimeDly(WORD16)

42
Priority Services
  • Multi-C
  • ECODE MtCGetPri(THREAD *, MTCPRI *)
  • ECODE MtCSetPri(THREAD *, MTCPRI)
  • µC/OS-II
  • BYTE8 OSTaskChangePrio(BYTE8 old, BYTE8 new)

43
ISR Services
  • Multi-C
  • n/a
  • µC/OS-II
  • OS_ENTER_CRITICAL()
  • OS_EXIT_CRITICAL()
  • void OSIntEnter(void)
  • void OSIntExit(void)

44
Semaphore Services
  • Multi-C
  • ECODE MtCSemaCreate(SEMA_INFO *)
  • ECODE MtCSemaWait(SEMA_INFO *, MTCBOOL *)
  • ECODE MtCSemaReset(SEMA_INFO *)
  • ECODE MtCSemaSet(SEMA_INFO *)
  • µC/OS-II
  • OS_EVENT *OSSemCreate(WORD16)
  • void OSSemPend(OS_EVENT *, WORD16, BYTE8 *)
  • BYTE8 OSSemPost(OS_EVENT *)

45
Mailbox Services
  • Multi-C
  • n/a
  • µC/OS-II
  • OS_EVENT *OSMboxCreate(void *msg)
  • void *OSMboxPend(OS_EVENT *, WORD16, BYTE8 *)
  • BYTE8 OSMboxPost(OS_EVENT *, void *)

46
Queue Services
  • Multi-C
  • ECODE MtCReceive(void *msgbfr, int msgsize)
  • ECODE MtCSend(THREAD *, void *msg, int size, int
    pri)
  • ECODE MtCASend(THREAD *, void *msg, int size, int
    pri)
  • µC/OS-II
  • OS_EVENT *OSQCreate(void **start, BYTE8 size)
  • void *OSQPend(OS_EVENT *, WORD16, BYTE8 *)
  • BYTE8 OSQPost(OS_EVENT *, void *)

47
Time Services
  • Multi-C
  • n/a
  • µC/OS-II
  • DWORD32 OSTimeGet(void)
  • void OSTimeSet(DWORD32)