Title: Bringing IDS Internals to the Surface Forum 2002 Presentation November 9, 2002
1 Bringing IDS Internals to the Surface
Forum 2002 Presentation - November 9, 2002
- Using IDS Internals for Troubleshooting, Maintenance, and Performance Tuning
Mark Scranton - Principal Consultant/Trainer, IBM Denver
mscranto_at_us.ibm.com
mail_at_markscranton.com
www.markscranton.com
2 Who Am I?
- Started with Informix in July of 1995
- I do engines only: IDS, XPS, DB2
- focusing on internals, perf tuning, configuration
- teaching and consulting
- webcasts
3 Contact Info
- mscranto_at_us.ibm.com
- mail_at_markscranton.com
- www.markscranton.com - website full of scripts, tips, tricks, white papers, presentations, and of course, pics of my kids
4 Presentation Overview
- this presentation will identify internals concepts that can be used daily for maintenance, support, and troubleshooting of IDS.
- a strong working knowledge of IDS is assumed.
- these concepts are also great at the IDS Trivial Pursuit Parties held all over the country, I'm sure
5 Choices
- Disk Overhead
- Physical and Logical Pages
- Maximum Number of Extents
- Long Transactions
- TOPIC Checkpoint Realities (incl. plog overflow)
- IDS 9.4 - 2G Chunk Size
- Physical Log Overflow
- TOPIC Partnums (and the Route to Data)
- TOPIC Forward Pointers
- TOPIC Fragmentation and Overhead
- TOPIC Extending the RSVD pages
- 9.3 Features - Update Statistics Changes
- TOPIC Slot Re-use
- Detached IDXs Proof - 7 to 9
6-10 Detached IDXs Proof - 7 to 9 (screenshot slides)
11 Topics
- Disk Overhead
- General Overhead - row and page level
- Fragmentation and Overhead - it all adds up
- Performance Impactors
- Forward pointers
- Maximum number of extents for a table
12 Topics
- Show Stoppers
- Physical Log Overflow
- Long Transactions
- Extending the Root Reserved Pages
- Good To Know
- Slot Re-use on a page
- The Route To Data
- Two 9.3 Features
- update statistics
- dynamic logical logs
13 TOPIC Logical vs. Physical Pages
- Understanding the difference can help tremendously in a number of different scenarios
- interpretation of msg log assertion failures
- interpretation of the extent list of a partition page
- lock level identification with respect to rowid(s)
- oncheck usage
- a bunch of other stuff
14 Logical vs. Physical Pages
- Physical Pages
- always with respect to a chunk
- numbering starts with 0
- format 0xCCCPPPPP: ccc - chunk number, ppppp - page offset into the chunk
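The 0xCCCPPPPP format above can be decoded with plain bit arithmetic. A minimal sketch (illustration only, not engine code), using addresses from the chunk-3 diagram on the next slide:

```python
# Decode a pre-9.40 physical page address (0xCCCPPPPP):
# the top 3 hex digits are the chunk number, the bottom 5
# are the page offset within that chunk.

def decode_phys_addr(addr: int) -> tuple[int, int]:
    """Split a 4-byte physical page address into (chunk, offset)."""
    chunk = addr >> 20        # leftmost 12 bits: chunk number
    offset = addr & 0xFFFFF   # rightmost 20 bits: page offset
    return chunk, offset

print(decode_phys_addr(0x300240))   # → (3, 576): chunk 3, page 576
print(decode_phys_addr(0x30023c))   # → (3, 572)
```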
15 Logical vs. Physical Pages
(diagram: chunk 3 drawn as rows of pages with physical addresses 0x30023c-0x30023f and 0x300240-0x300247; all addresses show physical pages in the 0xCCCPPPPP format - c: chunk, p: page offset)
16 Logical vs. Physical Pages
- Logical Pages
- always with respect to a tablespace
- numbering starts with 0
- format: simply 0 through x
17 Logical Pages
(diagram: create table sparky... - extent 1 for sparky occupies physical pages 0x300240-0x300247, which are logical pages 0 through 7 of the tablespace)
18 Usage - extent list
extent list on a partition page (oncheck -pt skippy:sparky)
19 Usage Example - bad page
20 Usage Examples - lock level
rowid format 0xLLLLLLSS: L - logical page in the table, S - slot/row on the page
onstat -k
oncheck -pp 0x100016 2
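The rowid format above splits the same way. A sketch of the decode (the sample rowid below is made up for illustration):

```python
# Decode an IDS rowid (0xLLLLLLSS): the upper 3 bytes are the
# logical page within the table, the low byte is the slot on
# that page.

def decode_rowid(rowid: int) -> tuple[int, int]:
    page = rowid >> 8     # LLLLLL: logical page in the table
    slot = rowid & 0xFF   # SS: slot/row on the page
    return page, slot

# One byte of slot number is why a data page holds at most 255 rows.
print(decode_rowid(0x201))   # → (2, 1): logical page 2, slot 1
```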
21 TOPIC Disk Level Overhead
- Why the concern?
- table sizes are growing
- extent allocations, disk sizing - areas of concern
- Two types
- page level
- the same regardless of page size (2K or 4K)
- row level
22 Why Do You Care?
- Extents concatenate, right?
- Extents double, right?
- So WHY do I need to pre-allocate extent sizes?
- Here's a proof
23 Extent Allocation(s) Proof
24 Extent Allocation(s) Proof
5,604 x 4 dbspaces = 22,416 PTEXTENDS into the llogs
25 Disk Level Overhead
Page Level Overhead = Page Header (24 bytes) + Page Trailing Timestamp (4 bytes)
- 2K page: 2048 - 28 = 2020 bytes per page
- 4K page: 4096 - 28 = 4068 bytes per page
26 Disk Level Overhead
Row Level Overhead
- each row has 4 bytes overhead for its slot table entry
- should be considered for large table sizing
- most pages have this structure
(diagram: page header, row 1, row 2, ..., slot 2, slot 1)
27 Overhead Calculations
- example: 1,000,000 rows, 208 bytes wide
- simple estimation of disk required
- data pages only
- nrows x row_width = 208,000,000 bytes
- = 101,563 2K pages
28 Overhead Calculations
- Other overhead not considered
- page overhead - 28 bytes per page
- bitmap pages needed - can track 4032 pages per bitmap
- slot table entry per row - 4 bytes
- 255 rows per page max
- no unnecessary splitting of rows
- page size 2K or 4K
29 Fundamentals What is a BITMAP Page?
- A bitmap page keeps track of the fullness and type of other pages in a tablespace.
- Each tablespace has at least one; the first one is always logical page 0.
- Used a great deal when identifying pages and their types for a tablespace.
30 Fundamentals What is a BITMAP Page?
- For 2K ports, a single bitmap can track 4032 pages, so large tables may have many bitmaps.
- Used when inserting rows, allocating new pages, etc.
31 Overhead Calculations
oncheck -pt wisc_db:tenktup1
111,140 2K data pages used
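The oncheck figure above can be reproduced from the deck's own numbers: the naive estimate from slide 27, then the refined estimate using the overhead items on slide 28 (28 bytes page overhead, 4-byte slot entries, 4032 pages per bitmap). A sketch:

```python
# Redo the deck's arithmetic for 1,000,000 rows of 208 bytes on 2K pages.
import math

PAGE_SIZE = 2048
PAGE_OVERHEAD = 28      # 24-byte header + 4-byte trailing timestamp
SLOT_ENTRY = 4          # slot table entry per row
BITMAP_TRACKS = 4032    # pages tracked per bitmap page

nrows, row_width = 1_000_000, 208

# naive: data bytes only, full 2K pages
naive_pages = math.ceil(nrows * row_width / PAGE_SIZE)            # 101,563

# refined: 2020 usable bytes per page, 212 bytes per row incl. slot entry
rows_per_page = (PAGE_SIZE - PAGE_OVERHEAD) // (row_width + SLOT_ENTRY)  # 9
data_pages = math.ceil(nrows / rows_per_page)                     # 111,112
bitmap_pages = math.ceil(data_pages / BITMAP_TRACKS)              # 28

print(naive_pages, data_pages + bitmap_pages)   # → 101563 111140
```

The refined total lands exactly on the 111,140 pages oncheck reports, which is the point of the slide: the overhead items are not negligible.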
32 Overhead Calculations
- let's add some indexes
- 1 index of INT - 4 bytes each
- Fragmented table and indexes
- quick estimate
- 4 bytes x 1,000,000 rows = 4,000,000 bytes
- = 1,954 2K pages needed
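The quick index estimate above, as arithmetic - key bytes only, ignoring the per-entry and per-page overhead the next slide lists:

```python
# Naive index sizing: 1 INT index (4 bytes per key) over 1,000,000 rows.
import math

nrows, key_width = 1_000_000, 4
pages = math.ceil(nrows * key_width / 2048)
print(pages)   # → 1954 2K pages
```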
33 Overhead Calculations
- Other index overhead not considered
- page overhead - 28 bytes per page
- bitmap pages needed - can track 4032 pages per bitmap
- slot table entry per row - 4 bytes
- no unnecessary splitting of rows
- page size 2K or 4K
- other B-tree levels - leaves, branches, root
34 Overhead Calculations
7,248 pages
35 Summary
36 TOPIC IDS 9.4 - 2G Chunk Size
- IDS 9.4 solves the 2G chunk/file size limit.
- Has been an outstanding request for many years now.
- There is a 2-step conversion for moving to large chunk support when going to 9.4.
- after phase 1, you can still revert back.
- after phase 2, you cannot.
37 Old Chunk Size Limit
- Rightmost 20 bits (unsigned) of the pg_addr field determined the number of pages in a chunk
(diagram: 0x 001 00008 - chunk number in the first 3 nibbles, page offset in the last 5; 1 nibble (half byte) = 4 bits; max offset 0xfffff = 1,048,575 pages)
1,048,575 pages x 2,048 bytes = 2,147,481,600 bytes
38 Old Number of Chunks Limit
- Leftmost 12 bits (signed) of the pg_addr field determined the number of chunks in an IDS instance
(diagram: 0x 001 00008 - chunk number in the first 3 nibbles; 1 nibble (half byte) = 4 bits; max chunk number 0x7ff = 2,047 chunks)
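Both pre-9.40 limits fall straight out of the pg_addr layout on these two slides. A sketch of the arithmetic:

```python
# Pre-9.40 pg_addr: 12 signed bits of chunk number,
# 20 unsigned bits of page offset.

PAGE_SIZE = 2048

max_pages_per_chunk = 2**20 - 1   # 0xFFFFF = 1,048,575 pages
max_chunk_size = max_pages_per_chunk * PAGE_SIZE
print(max_chunk_size)             # → 2147481600 bytes (~2GB)

max_chunks = 2**11 - 1            # signed 12 bits → 0x7FF = 2,047
print(max_chunks)                 # → 2047
```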
39 How'd We Do It?
- Modifying the page header structure while still retaining the 24-byte size has a number of advantages
- Minimizes conversion processing time
- over 99% of the contents of the page untouched by system
- allows on-the-fly conversion (discussed in the next module)
- Minimizes risk to data
- enables partial conversion for testing other 9.40 features
- allows easy reversion from above partial conversion
40 Page Address Expanded to 48 Bits
- Pre-9.40 IDS: a 12-bit chunk number and a 20-bit page number in the chunk
- IDS 9.40: a 16-bit chunk number and a 32-bit page offset within the chunk
- the chunk number has a range of 1 to 32,767.
- the offset has a range of 0 to 2,147,483,647 pages.
41 IDS 9.40 Chunk Size Limit
- 32-bit (signed) pg_offset field determines the number of pages in a chunk.
(diagram: 0x 0001 00000008 - page offset in the last 8 nibbles; 1 nibble (half byte) = 4 bits; max offset 0x7fffffff = 2,147,483,647 pages)
2,147,483,647 pages x 2,048 bytes = 4,398,046,509,056 bytes
42 IDS 9.40 Number of Chunks Limit
- A new 16-bit (signed) pg_chunk field determines the number of chunks in a 9.40 IDS instance.
(diagram: 0x 0001 00000008 - chunk number in the first 4 nibbles; 1 nibble (half byte) = 4 bits; max chunk number 0x7fff = 32,767 chunks)
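Same arithmetic for the 9.40 layout described on the last two slides - a 32-bit signed pg_offset and a 16-bit signed pg_chunk:

```python
# IDS 9.40 limits from the new 48-bit page address.

PAGE_SIZE = 2048

max_pages_per_chunk = 2**31 - 1   # 0x7FFFFFFF = 2,147,483,647
max_chunk_size = max_pages_per_chunk * PAGE_SIZE
print(max_chunk_size)             # → 4398046509056 bytes (~4TB)

max_chunks = 2**15 - 1            # 0x7FFF = 32,767 chunks
print(max_chunks)                 # → 32767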
43 Before and After
44 File Sizes of 18 Trillion MB Supported
- All relevant 32-bit Unix platforms support compiler flags to enable large files (18 quintillion bytes)
- All relevant Unix vendors support 64-bit hardware and offer 64-bit versions of their Unix operating systems
- Where the offset of the beginning of a chunk was a 32-bit number, it has been increased to a 64-bit offset. This effectively lets IDS make full use of any device.
45 TOPIC Max Number of Extents
- What's the maximum number of extents for a tablespace?
- Do you really want to know?
- Odds of running out? Slim.
- Will running out happen to you?
- It did at CA Water. Here's a real example.
46 CA Water
47 Errors Reported
48 Disk/Dbspace Full?
49 Out of Extents
50 Wow
onstat -t output (condensed) - the nextns column is the number of extents:

n     address   flgs ucnt tblnum  physaddr npages  nused   npdata  nrows    nextns
454   15caf420  0    3    20013a  231a8b   5184    5184    0       0        167
635   166cd488  0    1    2001fa  2329da   138512  138512  0       0        152
...
3795  16791448  602  3    201569  c2757c   566112  565646  557000  10953950 166
...
8002  1661bbf8  0    1    b000e5  d19c2c   1262565 1262565 1262251 1769171  196
...
117 active, 9433 total
51 Max Number of Extents
- How is it determined?
- Number of rows? - no.
- Chunk space available? - uh, no.
- Page Size? - actually... yes. But not what you think.
- let me explain... but first, a definition.
52 Max Number of Extents
- Definition: a partition page is a page in the tablespace tablespace of a dbspace.
- Each table/fragment in a dbspace typically has 1 partition page.
- It describes the tablespace/fragment.
- This includes...
53 Max Number of Extents
(diagram: partition page layout - header of 56 bytes in 7.x, 92 bytes in 9.2; entries for varchars, blobs)
54 Max Number of Extents
- SO... what determines the max number of extents? Simply this: the bytes left on the partition page for slot 5.
- Each extent takes 8 bytes, plus we take 8 bytes for the cap/on-deck entry.
55 Max Number of Extents
- If your table has many indexes or varchars/blobs, you get fewer extents per table.
- With IBM IDS v9.20, all indexes are detached, so a partition page for a tablespace will always have an empty slot 4, or index slot, on the partition page.
56 Max Number of Extents
- Test Case
- Create 2 tables - this and that
- grab the partnums and number of extents
57 Max Number of Extents
- Project the number of extents available
- first... each new extent takes 8 bytes
- 4 bytes for the logical page address
- 4 bytes for the physical page address
- the engine adds a cap/on-deck entry to cap the extent list. It takes 8 bytes. So...
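The projection above can be sketched as a one-liner. The 1,872 free bytes below are a hypothetical figure, chosen only so the result reproduces the deck's 233-extent answer on the next slide:

```python
# Max extents from the bytes left in slot 5 of the partition page:
# each extent entry is 8 bytes (4-byte logical + 4-byte physical
# page address), and one 8-byte cap/on-deck entry is reserved.

def max_extents(free_bytes: int) -> int:
    return (free_bytes - 8) // 8   # reserve 8 bytes for the cap

print(max_extents(1872))   # → 233 extents max (hypothetical free space)
```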
58 Max Number of Extents
233 extents max
59 Max Number of Extents
- start inserting rows alternately into the tables, this and that. Eventually, you get...
60 Max Number of Extents
- validate against table this with oncheck -pt - unusable
61 Max Number of Extents
- validate extents for table this with oncheck -pt
62 Max Number of Extents
- validate both tables' extents with onstat -T
63 TOPIC Checkpoint Realities
- What is a chkpt request? (anyone?)
- What triggers checkpoints? (anyone?)
- How often should they occur? (Bueller?)
- Is everyone really FROZEN when the chkpt takes place?
64 Checkpoint Steps
- The chkpt flag is raised due to some event.
- The engine's viewpoint:
  IF (thread in critical section) THEN we wait (can't start yet)
  ELSE (the normal chkpt steps occur)
  END IF
- User thread viewpoint:
  IF (already in a critical section) THEN CONTINUE
  ELSE drop into a CHKPT WAIT queue until chkpt complete
  ENDIF
65 Checkpoint Durations
- What's an acceptable duration then?
- 1 second?
- 2 seconds?
- What DOES 0 seconds mean anyway?
- What kind of impact would, say, 7 seconds have on the engine or users?
66 Checkpoint Realities
- Consider the frozen question.
- group 1: in critical section (they just go til done)
- group 2: want critical section (how long do these guys have to wait?)
- group 3: readers (reading is fundamental)
67 Checkpoints
- How OFTEN should we checkpoint then?
- What should drive the chkpt requests?
- Let's look closer at the plog first
68 The Physical Log File
- captures before-images of changed buffer cache pages
- issues a checkpoint request at 75% full
- used primarily in fast recovery
- used in 9.4 for logical recovery
69 Physical Log Overflow (1 of 2)
Physical Log File
- what happens now?
- back to the beginning, and keep writing!
70 Physical Log Overflow (2 of 2)
Physical Log File
- Two potential scenarios
- the chkpt completes, and we give the physical log a new logical beginning. In that case, we don't care about the overwritten pages
- if the engine falls over, you'll most likely get stuck in fast recovery when trying to come back up
71 What to Do?
- For the size of the plog, should you err on the side of larger or smaller?
- if too small, plog overflow could occur
- if too large, then chkpts may not take place at the right time
- question then: what should trigger checkpoints - the plog, OR CKPTINTVL?
72 A Scenario
- imagine you rely on CKPTINTVL to drive checkpoints
- normal workload: dirty pgs 5,000, duration 3 sec
- unexpected workload: dirty pgs 15,000, duration 10 sec
- so what should drive checkpoints??
73 And now, a word from our sponsor...
memories
74 TOPIC Partnums
- Three areas focused on here
- what is a partnum?
- how do we use it to get to stuff?
- what is a tablespace tablespace?
75 Partnums
(diagram: a partnum as 0x followed by 1 1/2 bytes (3 nibbles, or 3 hex digits) and 2 1/2 bytes (5 nibbles, or 5 hex digits))
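Splitting a partnum along the 3-nibble / 5-nibble boundary shown above is the same bit arithmetic as before. Interpreting the high 3 hex digits as the dbspace number and the low 5 as the logical page in that dbspace's tablespace tablespace is my reading of the following slides, so treat the labels as an assumption:

```python
# Split a partnum into its 3-nibble and 5-nibble parts
# (dbspace number / logical page labels are an assumption).

def decode_partnum(partnum: int) -> tuple[int, int]:
    dbspace = partnum >> 20    # first 1 1/2 bytes (3 nibbles)
    page = partnum & 0xFFFFF   # last 2 1/2 bytes (5 nibbles)
    return dbspace, page

print(decode_partnum(0x100016))   # → (1, 22)
```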
76 Partnums
- Definition: tablespace tablespace
- keeps track of tablespaces, or partitions, in a dbspace
- each dbspace has one
- allocation of:
- 250 pages in rootdbs in 9.2
- 50 pages in rootdbs for 7.x, 9.1
- 50 pages for subsequent extents
77 Partnums
- Definition: partition page
- a page in a tablespace tablespace
- typically one page per tablespace
- it describes the tablespace
78 More on Partnums - The Route To Data
- How do we get to your data?
- This section explains
- more about partnums
- the tablespace tablespace
79 create database this... create table that... select * from that
(diagram: database this - page 3c)
80 TOPIC Forward Pointers
- Forward pointer - a 4-byte number that points to the remainder pieces of a row that spans pages.
- used with rows longer than a page
- format is the same as a rowid, although it's NOT the rowid for the row
81 Forward Pointers w/ varchars
- Problem - you can end up with two I/Os for a single row that would fit onto a single page.
- this can happen when you've updated a varchar column and increased its size
82 Forward Pointers
83 What can I do?
- To rid yourself of the chaining effect
- unload/reload the table.
- If the row will fit onto a single page, then there is no need for the forward pointer.
- If it's grown to be larger than a page (actual row length), then you'll have at least one. This can't be avoided - the row is now longer than a page.
- alter index to cluster
- same caveats as above
84 TOPIC Fragmentation and Overhead
- Questions
- when does fragmentation bring overhead?
- how much overhead is there?
85 Fragmentation and Overhead
1 logical table to apps/users
- each table fragment is a tablespace
- has its own unique partnum
- same as fragment id and is 4 bytes in length
86 Fragmentation and Overhead
(diagram: table fragments and index fragments (btrees))
87 Fundamentals ROWIDs
- a ROWID allows direct access to a row on a data page.
88 Fundamentals ROWIDs
- Where does the ROWID physically reside?
- the index (!)
- What if there isn't an index? What then?
- Why you should know
- data rows per page max 255
- interpretation of onstat -k for lock levels
- interrogation of btree pages
89 Fragmentation and Overhead
(diagram: partnum x - frag id, partnum y - frag id; 2 tables/fragments to the engine)
90 Fragmentation and Overhead
- Frag ID + Rowid: this combination makes a unique pointer to the row
- this means an additional 4 bytes per index row
- but not always
91 Fragmentation and Overhead
- First, consider WHERE the ROWID physically resides
- not with the data row, or on the data page, BUT
- in the index row, or on the index page
- Yes, the index.
- the reason you use an index is for direct access to a row or rows.
- avoiding a sequential scan of a data page is done with the rowid.
- therefore, the rowid is a part of the index row.
92 index pages (B-tree)
(diagram: root and branch pages over leaf pages pointing to data pages; a leaf entry holds key value, fragid, rowid, del flag; data pages hold NO rowid)
93 Fragmentation and Overhead
- Scenario 1: Fragmented Indexes, table not fragmented
(diagram: one data/table, several index fragments)
- the fragid in the index row is unnecessary since there is only 1 data fragment.
94 Fragmentation and Overhead
oncheck -pt wisc_db:skippy (grep'd)
95 Fragmentation and Overhead
- examine index page and slot 1
- conclusion: no frag id present in index row
96 Fragmentation and Overhead
- Scenario 2: Fragmented Indexes, Fragmented table
97 Fragmentation and Overhead
98 Fragmentation and Overhead
- note length of 13
99 Fragmentation and Overhead
- del flag
100 Fragmentation and Overhead
- Look at the data page for key value 0xc314 (49,440)
oncheck -pp 0x300002 0x56c
101 Fragmentation and Overhead
- Scenario 3: Fragmented Table, Single Index Fragment
102 Fragmentation and Overhead
103 Fragmentation and Overhead
- table frag2 partnum 0x300003
- index fragment partnum
104 Proof
- examine index page
105 TOPIC Long Transactions
- Long transactions can
- cause rollback of work
- hang the engine
- generate a tech support call
106 Long Transactions
- What is a long transaction?
- still open, AND
- starts in the oldest log
(diagram: logical logs filling with inserts, updates, deletes and CLRs while all other users continue; LTXHWM (50) crossed, then LTXEHWM (60) crossed)
- If txs are still being rolled back
- give them exclusive access to the logical logs
- all normal txs suspended
- look for long transactions
- roll them back
- all normal txs continue
107 Long Transactions
- How to handle them?
- move highwater marks? no.
- write better application code? yes.
- add more, smaller logical logs.
- upgrade to 7.31.UD1, 9.2 or 9.3
108 TOPIC Extending the RSVD pages
- What are the reserved pages?
- IDS - first 12 pages of an engine
- XPS - first 14 pages of an engine
- Their job? To keep track of stuff
109 Extending the RSVD pages
110 Extending the RSVD pages
(diagram: chunk1 and chunk2 of the rootdbs)
111 Extending the RSVD pages
112 Extending the RSVD pages
- Rule: the root reserved pages MUST fit in chunk 1
- Make the initial rootchunk large enough for growth
113 Extending the RSVD pages
2 dbspaces: 1 - rootdbs, 2 - nother_dbspace
114 Extending the RSVD pages
(diagram: chunk1 (rootchunk), chunk 2)
115 Extending the RSVD pages
onstat -d
13 chunks total in nother_dbspace
116 Extending the RSVD pages
- attempting to add a 14th chunk to the engine
117 TOPIC Slot Re-use
- Can IDS reclaim deleted rows/slots, even when there is room on another page?
- Yes.
- For now, a slot equals a row.
118 Slot Re-use
- create a table - row size 665.
- insert 4 rows
119 Slot Re-use
- check bitmap for row distribution
- logical page 0: bitmap page (this page)
- logical page 1: full data page - 3 rows there
- logical page 2: partially full data page - 1 row there
120 Slot Re-use
121 Slot Re-use
- note: plenty of room on page 2
122 Slot Re-use
123 Slot Re-use
insert into skippy values (0, 'new row')
select * from skippy
124 Slot Re-use
- look at page 1 - the first data page - again
125 9.3 Features Update Statistics Changes
- Customers
- larger table sizes - upwards of 1B rows in a table.
- Need
- faster execution of UPDATE STATISTICS
- more parallelism
- Benefit
- more/better stats gathered - better query plan.
- fewer performance issues during execution.
- smaller downtime for UPDATE STATS runs.
126 9.3 Features Update Statistics Changes
- Two v7 and v9 myths
- 1st myth: that the sample size when run in medium is fixed.
- Not true.
- It is determined by the CONFIDENCE and RESOLUTION.
127 9.3 Features Update Statistics Changes
- Two v7 and v9 myths
- 2nd myth: with MEDIUM, all rows are read, but distributions are built just on the sample size.
- Not true.
- Only the sample of rows is read with MEDIUM.
128 9.3 Features Update Statistics Changes
- The default RESOLUTION of 2.5 and CONFIDENCE of .95 yields a sample size of 2,963 rows regardless of the table size.
129 9.3 Features Update Statistics Changes
- Changes in 7.31.UD2 and 9.30 - first/largest improvement: MEDIUM and HIGH allowing one scan of the table to build multiple column distributions
- a table scan distributes each column to an invocation of the psort package.
130 9.3 Features Update Statistics Changes
131 9.3 Features Update Statistics Changes
- second improvement: allow LOW mode to scan fragmented indexes in parallel.
- the percentage of index fragments that will be scanned at once is 10X the PDQPRIORITY, up to 100.
- a PDQPRIORITY of 0 will be done in serial.
- Any PDQPRIORITY of 10 or more will cause all index fragments to be scanned at once.
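The LOW-mode rule above can be sketched as a tiny function (an illustration of the stated rule, not engine code):

```python
# Share of index fragments scanned at once in LOW mode:
# 10x PDQPRIORITY, capped at 100%; PDQPRIORITY 0 means serial.

def fragments_scanned_at_once_pct(pdqpriority: int) -> int:
    if pdqpriority == 0:
        return 0   # serial - one fragment at a time
    return min(100, 10 * pdqpriority)

print(fragments_scanned_at_once_pct(5))    # → 50 (% of fragments)
print(fragments_scanned_at_once_pct(10))   # → 100 - all at once
```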
132 9.3 Features Update Statistics Changes
- Light Scans used for building data distributions
- data is scanned into private buffers in the Virtual Portion of memory.
- does not go through the buffer cache, avoiding LRU overhead.
- typically a data warehousing feature.
133 9.3 Features Update Statistics Changes
- Tuning - Method 1 - Use DBUPSPACE.
- Default sort memory raised from 4M to 15M. Can be changed with DBUPSPACE.
- Previously, DBUPSPACE controlled the amount of disk used by UPDATE STATISTICS.
- Max amount of memory a user can request without using PDQPRIORITY is 50MB.
- Format of the DBUPSPACE env var: DBUPSPACE=max disk:default mem
134 9.3 Features Update Statistics Changes
- Tuning - Method 2 - Use PDQPRIORITY.
- allocates more memory as a function of DS_TOTAL_MEMORY and SHMVIRTSIZE in the Virtual Portion of memory.
- Warning: Don't compile stored procedures with PDQPRIORITY turned on! Use the 'for table' syntax.
135 9.3 Features Update Statistics Changes
- Tuning - Method 3 - tune the PSORT package
- PSORT_NPROCS
- environment variable only
- set between 2 and 10
- match NUMCPUVPS or max processors
- DBSPACETEMP
- configuration parameter or environment variable
- normal rules apply
136 9.3 Features Update Statistics Changes
- New info put into sqexplain.out.
- Test case
- 215,000 row table
- varying column sizes
- results on the next slides
137 9.3 Features Update Statistics Changes
Table: dbs1.cust_tab    Mode: HIGH
Number of Bins: 267     Bin size: 1082
Sort data: 101.4 MB     Sort memory granted: 15.0 MB
Estimated number of table scans: 5
PASS 1: c9,c8,c10,c5,c7
PASS 2: c6,c1
PASS 3: c3
PASS 4: c2
PASS 5: c4
Completed pass 1 in 0 minutes 34 seconds
Completed pass 2 in 0 minutes 19 seconds
Completed pass 3 in 0 minutes 16 seconds
Completed pass 4 in 0 minutes 14 seconds
Completed pass 5 in 0 minutes 15 seconds
- no PDQPRIORITY used.
- total time 98 seconds.
- pre-9.3 total time 146 secs.
- time reduced by 33%.
138 9.3 Features Update Statistics Changes
Table: dbs1.cust_tab    Mode: HIGH
Number of Bins: 267     Bin size: 1082
Sort data: 101.4 MB     PDQ memory granted: 106.5 MB
Estimated number of table scans: 1
PASS 1: c1,c2,c3,c4,c5,c6,c7,c8,c9,c10
Index scans disabled. Light scans enabled.
Completed pass 1 in 0 minutes 29 seconds
- with PDQPRIORITY set.
- light scans used.
- total time 29 seconds.
Many thanks to John Miller for the info on 9.3 update stats changes.
(end Update Statistics Changes)
139 9.3 Features Dynamic Logical Logs
- Need
- allow easier addition of logical logs.
- make adding logical logs automatic by the engine - if configured.
- reduce the possibility of engine hangs.
140 9.3 Features Dynamic Logical Logs
- Prior to 9.3
- logs could only be added in quiescent mode.
- always took the 1st available slot in the log list.
- required a level-0 backup (true or fake) to become active.
141 9.3 Features Dynamic Logical Logs
- With 9.3
- New parameter DYNAMIC_LOGS.
- At the switch to log n, the server checks log n+1 for the oldest open transaction.
- If true, a new log will be added and used between log n and n+1.
142 9.3 Features Dynamic Logical Logs
- The engine will allocate logs for a dbspace in this order
- where the last log was allocated
- critical and mirrored (non-root)
- critical (non-root)
- containing the plog
- rootdbs
- non-critical and mirrored
- non-critical
143 9.3 Features Dynamic Logical Logs
- Sizing
- new log size: the midpoint between the largest and smallest active logs.
- if the requested amount of contiguous space is not available, less will be taken.
- minimum size 200K.
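The sizing rule above as a sketch (my reading of the three bullets; the function and its parameters are illustrative, not the engine's actual code):

```python
# Dynamically added log size: midpoint of the largest and smallest
# active logs, never below 200K, shrunk if contiguous space is short.

MIN_LOG_KB = 200

def new_log_size_kb(largest_kb: int, smallest_kb: int,
                    contiguous_free_kb: int) -> int:
    size = (largest_kb + smallest_kb) // 2   # midpoint
    size = max(size, MIN_LOG_KB)             # 200K floor
    return min(size, contiguous_free_kb)     # take less if needed

print(new_log_size_kb(2000, 1000, 10000))   # → 1500
print(new_log_size_kb(100, 100, 10000))     # → 200 (floor applies)
```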
144 9.3 Features Dynamic Logical Logs
- Long Transaction HW Mark changes
- LTXHWM
- 80 if DYNAMIC_LOGS is on
- 50 otherwise
- LTXEHWM
- 90 if DYNAMIC_LOGS is on
- 60 otherwise
145 9.3 Features Dynamic Logical Logs
- Config parameter changes
- LOGSMAX, LBU_PRESERVE - gone.
- LTXHWM, LTXEHWM, DYNAMIC_LOGS are hidden.
(end Dynamic Logical Logs)
146 Thanks...
- for listening. Hope you learned something.
- see you in class
147 How do you find out more?
- Attend the IBM IDS Internal Architecture class
- 4 days - very intense use of oncheck, and topics covered in this presentation
- send me email: mscranto_at_us.ibm.com
- www.markscranton.com