1
SONIC-7: Tuning and Scalability for Sonic
Enterprise Messaging
  • Analyzing, testing and tuning ESB/JMS performance

David Hentchel
Principal Solution Engineer
2
Agenda
Analyzing, testing and tuning ESB/JMS performance
  • Methodology: review the recommended project
    approach and procedures
  • Analysis: understand how to characterize
    performance requirements and platform capacity
  • Testing: learn how to simulate performance
    scenarios using the Sonic Test Harness
  • Tuning: know the top ten tuning techniques for
    the Sonic Enterprise Messaging backbone

3
Setting Performance Expectations
  • System performance is highly dependent on
    machine, application and product version.
    Performance levels described here may or may not
    be achievable in a specific deployment.
  • Tuning techniques often interact with specific
    features, operating environment characteristics
    and load factors. You should test every option
    carefully and make your own decision regarding
    the relative advantages of each approach.

4
Agenda
Analyzing, testing and tuning ESB/JMS performance
  • Methodology: review the recommended project
    approach and procedures
  • Analysis: understand how to characterize
    performance requirements and platform capacity
  • Testing: learn how to simulate performance
    scenarios using the Sonic Test Harness
  • Tuning: know the top ten tuning techniques for
    the Sonic Enterprise Messaging backbone

5
Performance Concepts and Methodology
  • Terms and definitions
  • Performance engineering concepts
  • Managing a performance analysis project
  • Skills needed for the project
  • Performance Tools
  • Project timeline

6
Performance Engineering Terms
[Diagram: performance engineering terms. A Test Harness drives load
against the System Under Test (the platform); variables (V) are client,
app and system params; measured system metrics (R) include load,
sessions and delivery rate, with Latency = ReceiveTime - SendTime.
Test components are distinguished from external components.]
7
Concepts: Partitioning Resource Usage
  • Partitionable resources can be broken down as
    the sum of the contributions of each test
    component on the system
  • Total resource usage is limited by system
    capacity
  • Becomes the bottleneck as utilization nears 100%
  • Goal is linear scalability as additional resource
    is added
  • Vertical versus Horizontal scalability
  • Total latency is the sum across all resource
    latencies, i.e.
  • Latency = CPU_time + Disk_time + Socket_time +
    wait_sleep

8
Concepts: Computer Platform Resources
CPU time
Memory (in use, swap)
Threads
Network I/O (send/receive)
Disk I/O (read/write)
Favorite tools: task mgr, perfmon, top, ping -s,
traceroute
  • Use level of detail appropriate to the question
    being asked
  • Machine resource (such as CPU) artifacts
  • side effects from uncontrolled applications
  • timing of refresh interval
  • correlation with test intervals
  • Scaling across different platforms and resource
    types

9
The Performance Engineering Project
  • For each iteration
  • Test performance vs goal
  • Identify likeliest area for gain

[Diagram: iterative cycle of Test → Analyze → Tune]
  • Startup tasks
  • Define project completion goals
  • Staff benchmarking skills
  • Acquire test harness

The Project is Goal Driven
10
Performance Project Skills
  • Requirements Expert
  • SLA/QoS levels: minimal / optimal
  • Predicted load: current / future
  • Distribution topology
  • Integration Expert
  • Allowable design options
  • Cost to deploy
  • Cost to maintain
  • Testing Expert
  • Load generation tool
  • Bottleneck diagnosis
  • Tuning and optimization

[Diagram: the Requirements Expert (R.E.), Integration Expert (I.E.) and
Testing Expert (T.E.) converge on the SOLUTION through load/distribution,
cost/benefit and design options.]
11
Tools for a Messaging Benchmark
[Diagram: Test Configurator, Test Harness and Test Analyzer arranged
around the System Under Test]
  • Configurator: creates conditions to bring the
    system under test into a known state
  • Harness: the platforms and components whose
    performance response is being measured
  • Analyzer: tools and procedures to make
    meaningful conclusions based on result data

12
Performance Project Timeline
[Timeline diagram, in weeks: the Performance Project (sizing, perf
prototype) runs alongside the Development Project (service dev, process
dev, system test, deployment plan) through launch.]
13
Agenda
Analyzing, testing and tuning ESB/JMS performance
  • Methodology: review the recommended project
    approach and procedures
  • Analysis: understand how to characterize
    performance requirements and platform capacity
  • Testing: learn how to simulate performance
    scenarios using the Sonic Test Harness
  • Tuning: know the top ten tuning techniques for
    the Sonic Enterprise Messaging backbone

14
Performance Analysis
[Chart: utilization (% of total) versus capacity (units/sec)]
  • Performance scenarios requirements and goals
  • Some generic performance scenarios
  • System characterization platforms and
    architecture
  • Test cases specification for benchmark

15
Performance Scenario Specification
  • First, triage performance-sensitive processes
  • substantial messaging load and latency
    requirements
  • impact of resource-intensive services
  • Document only the process messaging services
  • leave out application-specific logic; this is a
    prototype
  • Set specific messaging performance goals
  • Message rate and size
  • Minimum and average latency required
  • Try to quantify actual value and risk
  • Why this use case matters

16
Generic Scenario: Decoupled process
  • Asynchronous, loosely coupled, distributed
    services.
  • Assumptions
  • Services allow concurrent, parallel distribution
  • Messaging is lightweight, pub/sub
  • End-to-end process completes in real time
  • May be part of a Batch To Real Time pattern
  • Factors to analyze
  • Speed and scalability of invoked services
  • Distributed topology
  • Quality of Service
  • Aggregate Latency over time across batched
    invocations

17
Generic Scenario: Real-time data feed
  • High speed distribution of real time events
  • Assumptions
  • Read-only data pushed dynamically to users
  • Messages are small
  • Service mediation is simple and fast
  • Latency is very important, but QoS needs are
    modest
  • Factors to analyze
  • Quality of Service, esp. worst case for outage
    and message loss
  • Message rate and fanout (pub/sub)
  • Scalability of consumers

18
Generic Scenario: Simple request/reply
  • Typical web service call that waits for response
  • Assumptions
  • client is blocked, pending response
  • small request message, response is larger
  • latency is critical
  • Factors to analyze
  • Latency of each component service
  • Load balancing of key services
  • Recovery strategy if loop is interrupted
  • Client network, protocol and security specs

19
Example: Performance scenario specification
  • Overall project scope
  • Project goals and process
  • Deployment environment
  • System architecture
  • For each Scenario
  • Description of business process
  • Operational constraints (QoS, recovery,
    availability)
  • Performance goals, including business benefits

20
Characterizing platforms and architecture
  • Scope current and future hardware options
    available to the project
  • Identify geographical locations, firewalls and
    predefined service hosting restrictions
  • Define predefined Endpoints and Services
  • Define data models and identify required
    conversions and transformations.

21
Platform configuration specification
[Diagram: platform configuration annotations across field and DMZ zones:
network (bandwidth, latency, speed), CPU (number, type, speed), memory
(size, speed), firewall (cryptos, latency), disk (type, speed).]
22
Platform Profile: Real-time messaging
[Chart: system resource limitations, showing per-resource utilization
as a percentage of capacity under real-time messaging load]
23
Platform Profile: Queued requests
[Chart: system resource limitations, showing per-resource utilization
as a percentage of capacity under queued request load]
24
Architecture Spec: Service distribution
[Diagram: message channels linking several ESB nodes and a partner ESB]
  • Identify services performance characteristics
  • Identify high-bandwidth message channels
  • Decide which components can be modified
  • Annotate with message load estimates

25
Architecture Spec: Data Integration
  • Approximate the complexity of data schemas
  • Identify performance critical transformation
    events
  • Estimate message size
  • Identify acceptable potential services

26
DEMO: Test lab setup
  • Test hardware
  • guidelines for lab computers
  • setting up the lab network
  • Test architecture
  • location of test components
  • installation of brokers
  • configuration of service containers
  • Test design assets
  • sample service definitions (WSDLs)
  • sample test documents

27
Specifying Test Cases
  • Factors to include
  • Load, sizes, complexity of messaging
  • Range of scalability to try (e.g. min/max msg
    size)
  • Basic ESB Process model
  • Basic distribution architecture
  • Details to avoid
  • Application code (unless readily available)
  • Detailed transformation maps
  • Define relevant variables
  • Fixed factors
  • Test Variables
  • Dependent measures

28
Typical test variables
  • JMS Client variables
  • Connection / session usage
  • Quality of Service
  • Interaction mode
  • Message size and shape
  • ESB container variables
  • Service provisioning and parameters
  • Endpoint / Connection parameters
  • Process implementation and design
  • Routing branch or key field for lookup

29
Example Test Case Specification
  • For each identified Test Case there is a section
    specifying the following
  • Overview of test
  • How this use case relates to the scenario
  • Key decision points being examined
  • Functional definition
  • How to simulate the performance impact
  • Description of ESB processes and services
  • Sample messages
  • Design alternatives that will be compared
  • Test definition
  • Variables manipulated
  • Variables measured
  • Completion criteria
  • Throughput and latency goals
  • Issues and options that may merit investigation

30
Agenda
Analyzing, testing and tuning ESB/JMS performance
  • Methodology: review the recommended project
    approach and procedures
  • Analysis: understand how to characterize
    performance requirements and platform capacity
  • Testing: learn how to simulate performance
    scenarios using the Sonic Test Harness
  • Tuning: know the top ten tuning techniques for
    the Sonic Enterprise Messaging backbone

31
Testing Performance
  • Staging test services in the test bed
  • Staging brokers and containers
  • Configuring the Sonic Test Harness
  • Running performance tests and gathering data
  • Evaluating results for each test case

32
Staging Test Services: Deploying existing services
  • Appropriate to use actual implementation of a
    service IF
  • robust implementation exists
  • minimal effort to set up in test environment
  • no side effects with other test components
  • Production ready services merit special
    treatment
  • perform unit load tests to get baseline
  • document possible tuning / scaling options

33
Staging Test Services: Prototyping proposed new
services
  • Prototype should include
  • Correct routing logic for Use Case process
  • Approximately correct resource usage
  • Generic data
  • Prototype does not need
  • Detailed business logic
  • Exception handling code
  • Invocation of non-critical library calls
  • It's a prototype. Just keep it simple

34
Staging Test Services: Simulating non-essential
services
  • Use a stub service as a placeholder for service
    steps that are not performance-sensitive
  • Can return generic data
  • Ensures ESB process for target use case will run
    correctly
  • Useful stub services
  • Transform service
  • GetFile service
  • PassThrough service
  • Enrichment service
  • Prototype service (version 7.5.2 or later)

35
Demo: Provisioning test services
[Diagram: the Business Use Case mapped to the Performance Test Case.
Web Portal → Test Harness; WSI Address Svc kept as-is; XForm Build
query → PassThru stub; DBSvr Query kept as-is; Adapter M/F Callout →
PassThru + Sleep stub. Messages exchanged: status request/query,
address info, query result, enriched result, status info.]
36
Provisioning test brokers
  • Test broker must be similar to production
  • Correct Sonic release and JVM
  • Expected deployment heap size and settings
  • CAA configuration
  • Optimize network for replication channels
  • Locate on separate host to avoid bottlenecks
  • If failover testing is part of plan
  • define fault tolerant (JNDI) connections
  • DRA configuration
  • Set up subset of clusters and WAN simulations
  • Measure local broker configs first, then expand

37
Provisioning ESB containers
  • Use ESB Container to manage service groups
  • name according to service group role
  • plan to reshuffle services during tuning phase
  • provision jar files out of sonicfs
  • Use MF Container to control distribution
  • name according to domain/host location
  • configure Java heap appropriately
  • for the IBM JDK, make -Xms = -Xmx
  • for caching services (e.g. Transform, XML
    Server), add extra memory for locally-cached data

38
Demo Setting up containers for test
  • Workbench view of containers
  • Coding and debugging the prototype
  • Runtime view of containers
  • managing the distributed environment
  • reinitializing back to a known state

39
Simulating endpoint producers and consumers
[Diagram: Test Harness endpoints driving the System Under Test]
  • Endpoint protocols and performance
  • Test Driver options for various protocols
  • Simulating process/thread configuration
  • Implementing endpoint interaction modes
  • Configuring client Quality of Service (QoS)
  • Generating message content
  • Demo of client/endpoint simulation

40
Endpoint protocols and performance
  • JMS
  • fastest client protocol
  • strongest range of QoS and Failover
  • HTTP
  • moderate performance and QoS
  • rigid connection model (requires client or router
    reconnect logic)
  • Web Services
  • slowest performance
  • QoS and recovery depend on WS-* extensions
  • File-based
  • flat file pickup / dropoff, FTP
  • limited to disk speeds (i.e. 1 to 5 MB / sec)
  • appropriate for batch processing scenarios
  • JCA
  • appropriate for EJB server scenarios
  • limited to EJB transaction speeds (i.e. 100 to
    1000 msg / sec)

41
Client session configuration
  • Broker performance depends on scalability of
    connections and sessions
  • JMS best-practice is one thread per session
  • JMS sessions can efficiently share a connection
  • Use session pool for clients and app servers
  • For test simulation
  • determine allowable range of client threads
  • test connection/session numbers up to max
    threads
  • distribute client processes / drivers across
    multiple machines, if needed, to avoid
    client-side bottleneck

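A minimal sketch of these guidelines in client code, assuming Sonic's
JMS connection factory (progress.message.jclient.ConnectionFactory) on
the default broker port; the queue name and session count are
illustrative. One Connection is shared, and each Session gets its own
thread:

  import javax.jms.*;

  public class SessionScalingSketch {
      public static void main(String[] args) throws Exception {
          // One connection per client process; sessions are the cheap unit
          ConnectionFactory factory =
              new progress.message.jclient.ConnectionFactory("localhost:2506");
          Connection connection = factory.createConnection();
          connection.start();
          int sessionCount = 8;  // test variable: scale toward max client threads
          for (int i = 0; i < sessionCount; i++) {
              final Session session =
                  connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
              new Thread(new Runnable() {
                  public void run() {  // one thread per session, never shared
                      try {
                          MessageProducer producer = session.createProducer(
                              session.createQueue("perf.test.queue"));
                          producer.send(session.createTextMessage("payload"));
                      } catch (JMSException e) { e.printStackTrace(); }
                  }
              }).start();
          }
      }
  }
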
42
Configuring client Quality of Service (QoS)
  • HTTP and Web Services clients
  • best possible QoS is at least once
  • even with WS-Reliability
  • JMS Client
  • CAA with NonPersistentReplicated → exactly once
  • Many shared subs versus one durable sub
  • NonPersistent Request/Reply → at least once
  • Discardable send to avoid queue backup
  • Flow to disk to prevent blocked senders
  • ESB service
  • Exactly once uses JMS transaction
  • At least once uses client ack
  • Best effort uses dups_ok ack
  • Broker
  • Sync (default) versus Async disk i/o

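A sketch of how the three ESB service QoS levels above map onto
standard JMS session settings; the helper method and QoS labels are
invented for illustration, but the acknowledge modes are standard JMS:

  import javax.jms.*;

  public class QosSketch {
      static Session sessionFor(Connection c, String esbQos) throws JMSException {
          if (esbQos.equals("EXACTLY_ONCE"))   // exactly once: JMS transaction
              return c.createSession(true, Session.SESSION_TRANSACTED);
          if (esbQos.equals("AT_LEAST_ONCE"))  // at least once: client ack
              return c.createSession(false, Session.CLIENT_ACKNOWLEDGE);
          // best effort: lazy dups-ok acknowledgement
          return c.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
      }
  }
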
43
Generating message content
[Diagram: content-generation options feeding the flow: gen random int,
gen sample xml, transform rule, Addr svc lookup]
  • Simulate message size / distribution for accurate
    results
  • Message content may trigger ESB routing rules
  • Some services depend on message content
  • key values must match existing data / rules
  • duplicate key value could cause error
  • services that cache content require accurate key
    distribution
  • Simulating content in the client / driver
  • file-based message pool
  • message template generation
  • Java / object message generator
  • Message properties

Sonic Test Harness supports all these
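A sketch of one of the options above, message template generation with
a randomized key; the XML template and key range are invented, and (as
noted) the key distribution must match the data the services expect:

  import javax.jms.*;
  import java.util.Random;

  public class MessageGenSketch {
      private static final Random RANDOM = new Random();
      private static final String TEMPLATE =
          "<statusQuery><custKey>%d</custKey></statusQuery>";

      static TextMessage next(Session session) throws JMSException {
          int key = RANDOM.nextInt(100000);  // distribution must match service data
          return session.createTextMessage(String.format(TEMPLATE, key));
      }
  }
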
44
Demo: Simulating clients with Test Harness
  • JNDI connection configuration
  • Producer / Consumer parameters
  • Message generation

45
Running performance test iterations
  • Logistics of test orchestration
  • managing multiple Test Harness clients
  • configuring test intervals
  • test warm-up and cool-down
  • Data collection and correlation
  • Ensuring repeatability of results
  • Demo of Test Harness iterations

46
Logistics of test orchestration
  • Managing multiple Test Harness clients
  • Simplest option is multiple command windows
  • use telnet sessions for remote hosts
  • initiate test and warmup
  • hit <enter> key in each window
  • Advanced environments can use distributed driver
  • Grinder, SLAMD, JMeter, LoadRunner, etc.
  • Configuring test intervals
  • long enough to detect trend effects
  • short enough to allow fast iteration across tests
  • Test warm-up and cool-down
  • helps eliminate first-time test artifacts
  • ensures steady-state performance numbers

47
Data collection and correlation
  • Test Harness output
  • Throughput (msg/sec)
  • Latency (msecs per round trip)
  • Message size (bytes)
  • System measures
  • CPU usage (usr, sys)
  • Disk usage (writes/sec)
  • Broker metrics
  • Messaging rate (bytes/second)
  • Peak queue size (bytes)

48
Ensuring repeatability of results
  • Experimental method requirement
  • critical in measuring impact of change
  • validate by rerunning identical test
  • Most common artifacts impacting repeatability
  • messages left in queue
  • duplicate files dropped in file system
  • growing database size / duplicate keys
  • disconnected Durable subscribers
  • cached Service artifacts (ESB default)

49
Demo of Test Harness iterations
  • Baseline test
  • Change test harness properties
  • Rerun test
  • Show spreadsheet across tests

50
Evaluating performance: Measurement against goal
  • Short of goal
  • Perform bottleneck analysis / attempt tuning
  • Review option of scaling up resources
  • Review design change options
  • Give up and re-think goal
  • Meet or exceed goal
  • Continue scaling up and tuning until it breaks
  • Give up and declare success

51
Evaluating performance: Bottleneck analysis
  • Review of resource consumption
  • Determine cpu-bound, disk-bound, net-bound
  • Identify components using the resource
  • Possibility of offloading to other hosts
  • Examine trends in scalability tests
  • Possibility of improving throughput by adding
    more client sessions, ESB listeners, clustered
    brokers, etc.
  • Option of rebalancing (threads, Java heap,
    priority)
  • Go through Top Ten tuning tips and others
  • Last resort recode or redesign to save cycles

52
Evaluating performance: Compare across test runs
  • Carefully planned test runs yield fertile
    comparisons
  • estimate cost/benefit of a feature or option
  • estimate incremental overhead of a tunable
    parameter
  • narrow the field of concerns and alternatives
  • Advice in collating and analyzing test runs
  • collect test summary results in spreadsheet
  • save raw data and logs in a separate place
  • save test config, so you can replicate later
  • schedule ad hoc review after each test sequence

53
Demo: Example test result matrix
Update and Query scenarios

  Test     DB Svc   Msg Size   Thruput (KB/sec)   Latency (ms)
  Test 1   ORX      1 KB       112                331
  Test 2   ORX      10 KB      146                460
  Test 3   ORX      100 KB     152                1197
  Test 4   XSVR     1 KB       121                68
  Test 5   XSVR     10 KB      632                139
  Test 6   XSVR     100 KB     2113               688
54
Agenda
Analyzing, testing and tuning ESB/JMS performance
  • Methodology: review the recommended project
    approach and procedures
  • Analysis: understand how to characterize
    performance requirements and platform capacity
  • Testing: learn how to simulate performance
    scenarios using the Sonic Test Harness
  • Tuning: know the top ten tuning techniques for
    the Sonic Enterprise Messaging backbone

55
Performance Tuning with Sonic ESB
  • Diagnostics
  • Review of ESB architecture
  • Factors influencing message throughput
  • Factors influencing message latency
  • Factors influencing scalability
  • Top Ten Tuning Tips
  • Other tuning issues
  • Broker parameters
  • Java tuning
  • ESB tuning
  • Specialized ESB services

56
Review: ESB Architecture
[Diagram: the ESB in SOA view and system view, with routers/switches
connecting the hosts]
57
ESB System Usage: CPU
  • Sources of CPU cost
  • Application code
  • XML Parsing
  • CPU cost of i/o
  • Network sockets
  • Log/Data disks
  • Web Service protocol
  • Web Service security
  • Security
  • Authorization
  • Message or channel encryption

58
ESB System Usage: Disk
  • Sources of Disk overhead
  • Database services
  • Large File / Batch services
  • Message recovery log
  • Might not be used if other guarantee mechanisms
    work
  • Message data store
  • Disconnected consumer
  • Queue save threshold overflow
  • Flow to disk overflow
  • Message storage overhead depends on disk sync
    behavior
  • Explicit file synchronization ensures data
    retained on crash
  • Tuning options at disk, filesystem, JVM and
    Broker levels

59
ESB System Usage: Network
  • Sources of Network overhead
  • Client application messages and replies
  • Service to service steps within the ESB (except
    intra-container)
  • ESB Web Service callouts
  • CAA broker replication messages
  • Metadata (JMX, cluster, DRA) messages (normally
    < 1%)
  • Computing network bandwidth
  • Network card: 100 Mbit ≈ 12 MB/sec, 1 Gbit ≈
    120 MB/sec
  • Network switches are individually rated
  • Computing network load
  • message rate × message size
  • include response messages and intermediate steps
  • add ack packets (very small) for each message send

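A back-of-envelope sketch of the load computation above; the message
rate, message size and per-ack estimate are invented example numbers:

  public class NetworkLoadSketch {
      public static void main(String[] args) {
          double msgRate = 2000;            // messages per second (example)
          double msgSize = 4 * 1024;        // average message size in bytes (example)
          double requests = msgRate * msgSize;  // message rate x message size
          double replies  = requests;           // include response messages
          double acks     = msgRate * 64;       // small ack packet per send (~64 B assumed)
          double totalMB  = (requests + replies + acks) / (1024 * 1024);
          double nicMB    = 120;            // 1 Gbit NIC ~ 120 MB/sec (from above)
          System.out.printf("load %.1f MB/sec of %.0f MB/sec capacity (%.0f%%)%n",
                            totalMB, nicMB, 100 * totalMB / nicMB);
      }
  }
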
60
Tip #1: Increase sender and listener
threads to make service clients scale
[Diagram: ServiceX instances in Container2, scaled with multiple
listener threads]
  • Increase Listeners for key entry Endpoints
  • Add more Service/Process instances
  • Warning: Intra-Container messaging ignores endpoint settings
  • split scalable service into separate container
  • turn intra-container messaging off
  • note sub-Process is always intra-container

61
Tip #2: Implement optimal QoS to balance speed
versus precision

  ESB QoS         MQ Setting                  Message Loss Events          Duplicate Msg Events
  N/A             Discardable                 Buffer overflow, any crash   Never
  Best Effort     NonPersistent, DupsOK ack   Broker crash, client crash   Never
  At Least Once   Persistent, DupsOK ack      Never                        Never
  Exactly Once    Transacted                  Never                        Never
(Based on CAA brokers and fault-tolerant
connections)
62
Tip #3: Re-use JMS objects to reduce setup
costs
  • Objects with client and broker footprint
  • Connection
  • Session
  • Sender/Receiver/Publisher/Subscriber
  • Temporary destination
  • Tuning strategies
  • Reuse JMS objects in client code
  • Share each Connection across sessions
  • Share Sessions across Producers and Consumers
  • but not across JVM Threads
  • For low-load topics/queues
  • Use Anonymous Producer
  • Use wildcard or multi-topic subscription

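A sketch of the reuse strategies above: one session per thread on a
shared connection, plus a single anonymous producer reused across
destinations; class and queue names are invented:

  import javax.jms.*;

  public class ReuseSketch {
      private final Session session;
      private final MessageProducer anonymousProducer;  // no fixed destination

      ReuseSketch(Connection sharedConnection) throws JMSException {
          // one session per thread, on a connection shared across sessions
          session = sharedConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
          anonymousProducer = session.createProducer(null);  // reused for every send
      }

      void send(String queueName, String body) throws JMSException {
          // reuse session and producer instead of rebuilding them per message
          anonymousProducer.send(session.createQueue(queueName),
                                 session.createTextMessage(body));
      }
  }
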
63
Tip #4: Use intra-container service calls to
avoid broker hops
[Diagram: inter-container messaging routes each step through the
broker; intra-container messaging (v7.5) dispatches in-process,
faster]
64
Tip #5: Use NonPersistentReplicated mode to
reduce disk overhead
  • Normal broker mechanisms require disk sync
  • contributes to latency across the board
  • interferes with batching of packets
  • limits bandwidth
  • Disabling disk sync eliminates this overhead
  • Send mode NonPersistentReplicated (sketched below)
  • Optional broker params to disable entirely
  • WARNING: Log-based recovery will lose recent
    messages
  • BUT: CAA failover will not

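A minimal send sketch, assuming Sonic's extended delivery-mode constant
(progress.message.jclient.DeliveryMode.NON_PERSISTENT_REPLICATED) and
CAA brokers behind fault-tolerant connections; the queue name is
illustrative:

  import javax.jms.*;

  public class NprSendSketch {
      static void send(Session session, String queueName, String body)
              throws JMSException {
          MessageProducer producer =
              session.createProducer(session.createQueue(queueName));
          // Sonic-specific mode: replication, not disk sync, backs the guarantee
          producer.setDeliveryMode(
              progress.message.jclient.DeliveryMode.NON_PERSISTENT_REPLICATED);
          producer.send(session.createTextMessage(body));
      }
  }
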
65
Tip #6: Use XCBR instead of CBR to eliminate
JavaScript overhead
  • CBR rules implemented via JavaScript
  • dynamic change with complex rules
  • very high overhead for runtime engine
  • XCBR rules extract data fields for comparison
  • only simple comparisons supported
  • no script engine overhead
  • use message property data key for best effect

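A sender-side sketch of the last point: carry the routing key as a
message property so XCBR can test it with a simple comparison instead
of parsing the body; the property name and value are invented:

  import javax.jms.*;

  public class XcbrKeySketch {
      static TextMessage tagged(Session session, String xmlBody, String region)
              throws JMSException {
          TextMessage msg = session.createTextMessage(xmlBody);
          msg.setStringProperty("routingKey", region);  // XCBR comparison target
          return msg;
      }
  }
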
66
Tip #7: Use message batching to accelerate
message streams
[Diagram: producer-to-consumer message stream with batched acks]
  • Message transfer overhead is generally fixed
  • Hidden ack messages are amenable to tuning
  • AsyncNonPersistent mode decouples ack latency
  • Transaction Commit allows 1 ack per N messages
  • DupsOK ack mode allows lazy ack from consumer
  • Pre-Fetch Count allows batched transmit to
    consumer
  • ESB Design option send one multi-part message
    instead of N individual messages
  • XML transforms and other services handle
    multi-record data efficiently

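A sketch of the transaction-commit option above, one ack round trip per
N messages; the batch size and queue name are arbitrary examples:

  import javax.jms.*;

  public class BatchSendSketch {
      static void sendBatch(Connection connection, String queueName,
                            String[] bodies) throws JMSException {
          Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
          MessageProducer producer =
              session.createProducer(session.createQueue(queueName));
          int batchSize = 100;  // 1 commit (ack) per 100 sends
          for (int i = 0; i < bodies.length; i++) {
              producer.send(session.createTextMessage(bodies[i]));
              if ((i + 1) % batchSize == 0) session.commit();
          }
          session.commit();  // flush the final partial batch
          session.close();
      }
  }
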
67
Tip #8: Minimize XML/SOAP operations to avoid
parsing overhead
[Diagram: an input message routed through XML Transform and XCBR to a
custom JAXB service]
  • Bypass SOAP and Web Services processing
  • Use HTTP Direct Basic instead of SOAP or WS
  • Risk of invalid XML if source is unreliable
  • Combine multiple XML parsing steps into one
  • Save target XPath results as Message props
  • Also relevant for BPEL correlation IDs

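A sketch of combining parsing steps: evaluate an XPath once and save
the result as a message property for later steps (and BPEL
correlation); the XPath expression and property name are invented:

  import javax.jms.TextMessage;
  import javax.xml.xpath.XPath;
  import javax.xml.xpath.XPathFactory;
  import org.xml.sax.InputSource;
  import java.io.StringReader;

  public class ParseOnceSketch {
      static void stampOrderId(TextMessage msg) throws Exception {
          XPath xpath = XPathFactory.newInstance().newXPath();
          String orderId = xpath.evaluate("/order/@id",
                  new InputSource(new StringReader(msg.getText())));
          msg.setStringProperty("orderId", orderId);  // downstream reads the property
      }
  }
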
68
Tip #9: Use high-speed encryption to reduce
security overhead
  • Default SSL encryption uses old RSA stack
  • At least 2X slower than more modern options
  • Change to any JSSE-compliant stack
  • set the client -DSSL_PROVIDER_CLASS system property
    to progress.message.net.ssl.jsse.jsseSSLImpl
  • change broker SSL provider from RSA to JSSE
  • Use more efficient cipher suites
  • RSA_With_Null_MD5 is the smallest and fastest
  • Reduce broker memory overhead by deleting any
    unused ciphers

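A launch-command sketch for the client-side switch above; the main
class name MyJmsClient is a placeholder:

  java -DSSL_PROVIDER_CLASS=progress.message.net.ssl.jsse.jsseSSLImpl MyJmsClient
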
69
Tip #10: Use stream APIs to improve large
message performance
  • SonicMQ Recoverable File Channels
  • Uses JMS layer to manage large file transfer
  • Queue-based initiation of transfer
  • High-speed JMS pipeline for blocks of data
  • Recovery continues at point interrupted
  • Sonic ESB open-source Large Message Service
  • Provides dynamic provisioning
  • Interacts with ESB processes
  • SonicStream API (version 7.5 or later)
  • Topic-based, pipeline into Java stream API
  • No recovery

70
Broker Tuning Parameters
  • Core Resources
  • JVM heap size
  • JVM thread, stack limits
  • DRA, HTTP and Transaction threads
  • TCP settings
  • Message storage
  • Queue size and save threshold
  • Pub/sub buffers
  • Message log and store
  • Message management
  • Encryption
  • Flow control and flow-to-disk
  • Dead message queue management
  • Connections
  • Mapping to NICs
  • Timeouts
  • Load balancing

71
Java Tuning Options
  • Fastest JVM depends a little on the application
    and a lot on the platform
  • VM heap needs to be large enough to process load,
    but small enough to avoid system swapping
  • Garbage Collection
  • default settings are good for optimal throughput
  • use advanced (JDK 1.4 or later) GC options to
    optimize worst-case latency

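A launch-command sketch in that spirit; the heap sizes and container
class name are placeholders, and the GC flags are HotSpot (JDK 1.4+)
options aimed at worst-case latency, so the right choices vary by JVM
vendor, version and load:

  java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC MyEsbContainer
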
72
ESB Tuning Options
  • Load balancing and scalability of services
  • number of distributed service instances
  • number of service listener threads
  • Container Java VM heap size
  • Intra-Container messaging
  • Endpoint and connection parameters
  • same principles as JMS client

73
Discussion of Service tuning
  • Transformations
  • XML Server
  • BPEL Server
  • Database Service
  • DXSI Service

74
Other fun things you can tune
  • Database: indexing, query optimization
  • SOA patterns: federated query, temporal
    aggregation, split/join, caching
  • XML: DOM, SAX, XStream
  • Disk: device balancing, RAID, mount params
  • Network: Nagle algorithm, timeouts

75
Other Performance Engineering Resources
No Magic Bullet, but plenty of places you can
go for info
  • SonicMQ Performance Tuning Guide
  • Benchmarking Enterprise Messaging Systems white
    paper
  • Sonic Test Harness User Guide
  • Progress Professional Services
  • Developer training courses
  • Sonic Performance Analysis package

76
Questions?
77
Thank you for your time