Title: End-to-End Real-time Guarantees in TAO
1 End-to-End Real-time Guarantees in TAO
- Irfan Pyarali
- irfan_at_oomworks.com
2 Presentation Outline
- Limitations of CORBA when applied to real-time systems
- RT-CORBA allows specification of end-to-end QoS
- Trace a CORBA invocation end-to-end
- Identifying sources of unbounded priority inversion
- TAO's RT-CORBA architecture
- Empirical evaluation of end-to-end behavior
- Conclusions and future work
3 Historical Limitations of CORBA for Real-time Systems
- Requirements
- Location transparency
- Performance transparency
- Predictability transparency
- Reliability transparency
- Historical Limitations
- Lack of QoS specifications
- Lack of QoS enforcement
- Lack of real-time programming features
- Lack of performance optimizations
4 Experiment 1: Increasing Workload in Classic CORBA
- Experiment
- Measure the disruption caused by increasing workload in Classic CORBA
- Server
- 3 threads
- Client
- 3 rate-based invocation threads
- High → 75 Hertz
- Medium → 50 Hertz
- Low → 25 Hertz
5 Results: Increasing Workload in Classic CORBA
6 Conclusions: Increasing Workload in Classic CORBA
- As workload increases and system capacity decreases, the high priority 75 Hertz client is affected first, followed by the medium priority 50 Hertz client, and finally by the low priority 25 Hertz client
- The above behavior is because all clients are treated equally by the server
- Behavior is unacceptable for a real-time system
7 Experiment 2: Increasing Best-effort Work in Classic CORBA
- Experiment
- Measure the disruption caused by increasing best-effort work in Classic CORBA
- Server
- 4 threads
- Client
- 3 rate-based invocation threads
- High → 75 Hertz
- Medium → 50 Hertz
- Low → 25 Hertz
- Several best-effort threads → continuous invocations
- Notes
- System is running at capacity → any progress made by best-effort threads will cause disruptions
8 Results: Increasing Best-effort Work in Classic CORBA
9 Conclusions: Increasing Best-effort Work in Classic CORBA
- All three priority-based clients suffer as best-effort clients are added to the system
- The above behavior is because all client threads are treated equally by the server
- Behavior is unacceptable for a real-time system
10 Real-time CORBA Overview
- RT-CORBA adds QoS control to regular CORBA to improve application predictability by
- Bounding priority inversions
- Managing resources end-to-end
- Policies and mechanisms for resource configuration/control in RT-CORBA include (a thread pool configuration sketch follows this list)
- Processor Resources
- Thread pools
- Priority models
- Portable priorities
- Communication Resources
- Protocol policies
- Explicit binding
- Memory Resources
- Request buffering
- These capabilities address some (but by no means all) important real-time application development challenges
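As a concrete illustration of the processor-resource mechanisms above, here is a minimal sketch (not production TAO code) of a server creating a thread pool with three priority lanes through the standard RTCORBA::RTORB interface and attaching it to a POA. The helper name create_lane_poa, the POA name "RT_POA", the lane priority values, and the TAO header paths are illustrative assumptions; error handling is omitted.

  // Sketch: an RT-CORBA thread pool with three priority lanes, attached to
  // a POA via a ThreadpoolPolicy.  Priorities and names are illustrative.
  #include "tao/RTCORBA/RTCORBA.h"               // assumed TAO header layout
  #include "tao/PortableServer/PortableServer.h"

  PortableServer::POA_ptr
  create_lane_poa (CORBA::ORB_ptr orb,
                   PortableServer::POA_ptr root_poa,
                   PortableServer::POAManager_ptr poa_manager)
  {
    CORBA::Object_var obj = orb->resolve_initial_references ("RTORB");
    RTCORBA::RTORB_var rt_orb = RTCORBA::RTORB::_narrow (obj.in ());

    // One static thread per lane: low, medium, and high priority.
    RTCORBA::ThreadpoolLanes lanes (3);
    lanes.length (3);
    lanes[0].lane_priority = 5;  lanes[0].static_threads = 1; lanes[0].dynamic_threads = 0;
    lanes[1].lane_priority = 10; lanes[1].static_threads = 1; lanes[1].dynamic_threads = 0;
    lanes[2].lane_priority = 15; lanes[2].static_threads = 1; lanes[2].dynamic_threads = 0;

    RTCORBA::ThreadpoolId pool =
      rt_orb->create_threadpool_with_lanes (0,      // default stack size
                                            lanes,
                                            false,  // no thread borrowing
                                            false,  // no request buffering
                                            0, 0);  // buffering limits unused

    CORBA::PolicyList policies (1);
    policies.length (1);
    policies[0] = rt_orb->create_threadpool_policy (pool);

    return root_poa->create_POA ("RT_POA", poa_manager, policies);
  }

The same ThreadpoolPolicy mechanism is reused later for pools without lanes.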
11 Why is End-to-End Priority Propagation Relevant?
- Preserve priority as activities flow between endsystems
- Respect priorities to resolve resource contention
- Bounded request latencies
12 Priority Propagation in RT-CORBA
- SERVER_DECLARED
- The server handles requests at the priority declared when the object was created
- CLIENT_PROPAGATED
- Requests are executed at the priority requested by the client
- The priority is encoded as part of the client request (selecting either model via POA policies is sketched below)
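Continuing the sketch above (rt_orb, root_poa, and poa_manager as before), the propagation model is selected with a PriorityModelPolicy when the POA is created; the priority value and POA name below are illustrative.

  // Sketch: choose the priority propagation model for a POA.
  RTCORBA::Priority default_priority = 10;   // illustrative value

  CORBA::PolicyList policies (1);
  policies.length (1);

  // CLIENT_PROPAGATED: requests run at the priority carried in each request.
  policies[0] =
    rt_orb->create_priority_model_policy (RTCORBA::CLIENT_PROPAGATED,
                                          default_priority);

  // Alternative, SERVER_DECLARED: requests run at the priority declared for
  // the object when it was created:
  //   rt_orb->create_priority_model_policy (RTCORBA::SERVER_DECLARED,
  //                                         server_priority);

  PortableServer::POA_var poa =
    root_poa->create_POA ("Propagation_POA", poa_manager, policies);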
13 Tracing an Invocation End-to-end
[Diagram: a request traced through the Client ORB (Connection Cache, Memory Pool, Connector, Reactor, Reply Demultiplexer with condition variables CV1/CV2, connections X, Y, C) and the Server ORB (Connection Cache, Memory Pool, Acceptor, Reactor, POA, connections A, B, S); the numbered steps below follow this path]
1. Lookup connection to Server (S)
2. Lookup failed; make a new connection to Server (S)
3. Add new connection to cache
4. Add new connection to Reactor
5. Accept new connection from Client (C)
6. Add new connection to Cache
7. Add new connection to Reactor
8. Wait for incoming events on the Reactor
9. Allocate memory to marshal data
10. Send data to Server, marking connection (S) busy
11. Wait in the Reply Demultiplexer; some other thread is already leader, so become a follower
12. Read request header
13. Allocate buffer for incoming request
14. Read request data
15. Demultiplex request and dispatch upcall
16. Send reply to client
17. Wait for incoming events
18. Leader reads reply from server
19. Leader hands off reply to follower
20. Follower unmarshals reply
14 Identifying Sources of Unbounded Priority Inversion
[Diagram: the same Client ORB / Server ORB invocation path as the previous slide, annotated with the components examined below: Connection Cache, Memory Pool, Reply Demultiplexer, Reactor, and POA]
- Connection cache
- Time required to send a request depends on the availability of network resources and the size of the request
- Priority inheritance will help
- Creating new connections can be expensive and unpredictable
- Memory Pool
- Time required to allocate a new buffer depends on pool fragmentation and the memory management algorithm
- Priority inheritance will help
- Reply Demultiplexer
- If the leader thread is preempted by a thread of higher priority before the reply is handed off (i.e., while reading the reply or signaling the invocation thread), then unbounded priority inversion will occur (a generic sketch of this hand-off follows these notes)
- There is no chance of priority inheritance since signaling is done through condition variables
- Reactor
- No way to distinguish a high priority client request from one of lower priority
- POA
- Time required to demultiplex a request may depend on server organization
- Time required to dispatch a request may depend on contention on the dispatching table
- Priority inheritance will help
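To make the Reply Demultiplexer hand-off concrete, the following is a small generic sketch of the leader/followers reply wait in standard C++ (illustrative only, not TAO's actual implementation; the class and member names are invented). Because the follower blocks on a plain condition variable, the leader that reads the reply and signals it does not inherit the follower's priority, which is exactly where the inversion described above can become unbounded.

  // Generic leader/followers reply hand-off sketch (not TAO source code).
  #include <condition_variable>
  #include <map>
  #include <mutex>
  #include <string>

  struct PendingReply
  {
    std::condition_variable cv;   // per-invocation CV (CV1, CV2, ... above)
    bool ready = false;
    std::string data;             // raw reply; unmarshaled by the follower
  };

  class ReplyDemultiplexer
  {
  public:
    // Invocation thread that is not the leader: block as a follower until
    // the leader hands off the reply for this request.
    std::string wait_as_follower (unsigned request_id)
    {
      std::unique_lock<std::mutex> guard (lock_);
      PendingReply &pending = pending_[request_id];
      pending.cv.wait (guard, [&pending] { return pending.ready; });
      std::string reply = std::move (pending.data);
      pending_.erase (request_id);
      return reply;
    }

    // Leader thread, after reading a reply from the socket: signal the
    // follower's condition variable.  If the leader is preempted before or
    // during this hand-off, the follower keeps waiting regardless of its
    // own priority.
    void hand_off (unsigned request_id, std::string reply)
    {
      std::lock_guard<std::mutex> guard (lock_);
      PendingReply &pending = pending_[request_id];
      pending.data = std::move (reply);
      pending.ready = true;
      pending.cv.notify_one ();
    }

  private:
    std::mutex lock_;
    std::map<unsigned, PendingReply> pending_;
  };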
15 RT-CORBA Architecture
[Diagram: TAO's RT-CORBA architecture for two ORB endsystems (ORB A and ORB B); each ORB has a Connector and a POA plus separate Low Priority and High Priority Lanes, and each lane owns its own Connection Cache, Memory Pool, Acceptor, Leader/Followers set, Reactor, and condition variables (CV1, CV2)]
16 Motivation for Real-time Experiments
- Illustrate RT, deterministic, and predictable ORB behavior
- Demonstrate end-to-end predictability by utilizing the ORB to
- Propagate and preserve priorities
- Exercise strict control over the management of resources
- Avoid unbounded priority inversions
- End-to-end predictability of timeliness in fixed priority CORBA means
- Respecting thread priorities between client and server when resolving resource contention during request processing
- Bounding the duration of end-to-end thread priority inversions
- Bounding the latencies of operation invocations
17 Test Bed Description
void method (in unsigned long work)
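The experiments that follow invoke this operation from rate-based client threads. Below is a minimal sketch of one such thread; the stub type Test_ptr, the pacing scheme, and the parameter names are assumptions based only on the signature above.

  // Sketch: one rate-based invocation thread (e.g., 75, 50, or 25 Hertz).
  #include <chrono>
  #include <thread>

  void
  invocation_loop (Test_ptr server, CORBA::ULong work, double hertz)
  {
    using clock = std::chrono::steady_clock;
    const auto period =
      std::chrono::duration_cast<clock::duration> (
        std::chrono::duration<double> (1.0 / hertz));

    auto next = clock::now ();
    for (;;)
      {
        server->method (work);      // synchronous two-way call; 'work' sets
                                    // how much CPU the server burns
        next += period;             // fixed-rate pacing
        std::this_thread::sleep_until (next);
      }
  }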
18 Description of Experiments
- Increasing workload in RT-CORBA with lanes
- Increasing best-effort work in RT-CORBA with lanes
- Increasing workload in RT-CORBA without lanes
19 Experiment 1: Increasing Workload in RT-CORBA with Lanes
- Experiment
- Measure the disruption caused by increasing workload in RT-CORBA with lanes
- Server
- 3 thread lanes
- High / Medium / Low
- Client
- 3 rate-based invocation threads
- High → 75 Hertz
- Medium → 50 Hertz
- Low → 25 Hertz
20 Results A: Increasing Workload in RT-CORBA with Lanes (Client and Server on the same machine)
21 Results B: Increasing Workload in RT-CORBA with Lanes (Client and Server on remote machines)
22 Conclusions: Increasing Workload in RT-CORBA with Lanes
- As workload increases and system capacity decreases, the low priority 25 Hertz client is affected first, followed by the medium priority 50 Hertz client, and finally by the high priority 75 Hertz client
- The above behavior is because higher priority clients are given preference over lower priority clients by the server
- When client and server are on separate machines, the lower priority client threads are able to sneak in some requests between the time a reply is sent to the high priority thread and the time a new request is received from it
- This behavior is appropriate for a real-time system
23 Experiment 2: Increasing Best-effort Work in RT-CORBA with Lanes
- Experiment
- Measure the disruption caused by increasing best-effort work in RT-CORBA with lanes
- Server
- 4 thread lanes
- High / Medium / Low / Best-effort
- Client
- 3 rate-based invocation threads
- High → 75 Hertz
- Medium → 50 Hertz
- Low → 25 Hertz
- Several best-effort threads → continuous invocations
- Notes
- System is running at two levels
- At capacity → any progress by best-effort threads will cause disruptions
- Just below capacity → best-effort threads should be able to capture any slack in the system
24 Results A: Increasing Best-effort Work in RT-CORBA with Lanes, System Running at Capacity (Work = 30) (Client and Server on the same machine)
25 Results B: Increasing Best-effort Work in RT-CORBA with Lanes, System Running Slightly Below Capacity (Work = 28) (Client and Server on the same machine)
26 Results C: Increasing Best-effort Work in RT-CORBA with Lanes, System Running Slightly Below Capacity (Work = 28) (Client and Server on remote machines)
27 Conclusions: Increasing Best-effort Work in RT-CORBA with Lanes
- Addition of best-effort client threads did not affect any of the three priority-based clients
- Best-effort client threads were limited to picking up slack left in the system
- As the number of best-effort client threads increases, throughput per best-effort client thread decreases, but the collective best-effort client throughput remains constant
- When client and server are on separate machines, there is more slack in the system since all the client-side processing is done on another machine
- This behavior is appropriate for a real-time system
28 Experiment 3: Increasing Workload in RT-CORBA without Lanes
- Experiment
- Measure the disruption caused by increasing workload in RT-CORBA without lanes
- Server
- 3 threads in pool
- Client
- 3 rate-based invocation threads
- High → 25 Hertz
- Medium → 50 Hertz
- Low → 75 Hertz
- Notes
- Server pool priority will be varied (pool creation is sketched after this list)
- Low / Medium / High
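For reference, a thread pool without lanes is created with a single default priority, which is the pool priority this experiment varies. A minimal sketch, reusing rt_orb from the earlier sketches (the pool_priority variable and buffering settings are illustrative):

  // Sketch: a thread pool *without* lanes; its default priority is the
  // "pool priority" (low / medium / high) varied in this experiment.
  RTCORBA::ThreadpoolId pool =
    rt_orb->create_threadpool (0,              // default stack size
                               3,              // three static threads
                               0,              // no dynamic threads
                               pool_priority,  // RTCORBA::Priority under test
                               false,          // no request buffering
                               0, 0);          // buffering limits unused

  CORBA::PolicyList policies (1);
  policies.length (1);
  policies[0] = rt_orb->create_threadpool_policy (pool);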
29 Results A: Increasing Workload in RT-CORBA without Lanes, Server Pool Priority Low
30 Results B: Increasing Workload in RT-CORBA without Lanes, Server Pool Priority Medium
31 Results C: Increasing Workload in RT-CORBA without Lanes, Server Pool Priority High
32 Conclusions: Increasing Workload in RT-CORBA without Lanes
- When pool priority is low, a pool thread cannot preempt an upcall thread when a new request arrives from a client thread of higher priority. Therefore, all three client threads receive similar service from the server
- When pool priority is medium, a pool thread can only preempt an upcall thread running at low priority when a new request arrives from a client thread of medium or high priority. Therefore, the medium and high priority client threads receive similar service from the server
- When pool priority is high, a pool thread can preempt upcall threads running at low and medium priorities when a new request arrives from a client thread of high priority. Therefore, the high priority client thread receives the best service from the server
- The most desirable behavior occurs when pool priority is high
- Thread pools without lanes are more flexible than thread pools with lanes
- However, thread pools without lanes can incur very high or unbounded priority inversions in some cases
33 Concluding Remarks
- End-to-end QoS guarantees require careful engineering of subsystems to ensure
- Predictability, scalability, and performance
- Avoidance of unbounded priority inversion
- Demultiplexing, dispatching, and concurrency are key subsystems in the critical path
- These must be coupled with end-to-end priority propagation
34 Future Work
- Integrate with scheduling services and resource managers
- Including dynamic scheduling (RT-CORBA 2.0)
- Integrate with network QoS
- DiffServ and RSVP
- Integrate with higher level services
- Real-time Event Service
- Real-time Notification Service
- CORBA Component Model
- Integrate with monitoring tools
- TimeWiz, TotalView
- Integrate with RT fault-tolerance