Title: Moving to OpenEdge 10.1B
1 Moving to OpenEdge 10.1B
- Change is difficult, but moving to OpenEdge isn't!
Tom Harris, Director, OpenEdge RDBMS Technology, OpenEdge Product Management
2 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
3 General Migration Strategy
First detail the plan, review the plan, and test the plan; THEN execute
- Preparation
- Truncate BI, Disable AI, 2PC, and Replication
- Backup
- Install
- Install OpenEdge
- Don't need to uninstall V9
- Upgrade
- Upgrade DB to OpenEdge 10
- Do your backups!
- Recompile/Re-deploy your ABL code
- Run your application - no ABL changes (usually)
4 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
5 Database Server-Side Conversion
- Preparation
- Truncate BI, disable AI, 2PC, Replication (V9)
- Backup database (V9)
- Validate backup
- Store backup away from database
- Install OpenEdge 10 on server machine
- And everywhere else!
- Rebuild and re-deploy application (if need be)
- Remote V9 clients connecting to OpenEdge 10 are disallowed
6 Database Conversion Steps
- Run conversion utility
- proutil <db> -C conv910 -B 512
- Conversion runs in 5 minutes or less
- Basically just a schema upgrade
- Backup your new database
- Re-start application
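The steps on this slide can be sketched as a shell script. This is a dry run only: the `run` helper prints each command instead of executing it, and the database name and backup paths are placeholders, not from the deck.

```shell
#!/bin/sh
# Dry-run sketch of the V9 -> OpenEdge 10 conversion sequence.
# DB name and backup paths are placeholders; remove the echo in run() to execute.
DB=mydb

run() { echo "+ $*"; }  # print each command instead of running it

run proutil $DB -C truncate bi             # truncate BI before conversion
run rfutil $DB -C aimage end               # disable after-imaging
run probkup $DB /backups/${DB}_v9.bak      # full V9 backup (validate it, store off-box)
run proutil $DB -C conv910 -B 512          # the schema conversion itself (~5 minutes)
run probkup $DB /backups/${DB}_oe10.bak    # back up the newly converted database
```

Reviewing the printed list before executing keeps the "backup, validate, then convert" ordering honest.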
7 Done - 5 minutes later
- You are now Good To Go with OpenEdge
- No physical changes to existing user data
- Data remains in Type I storage areas
- Data fragmentation exists as before
- Optimize physical layout when time permits...
8 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
9 Why Do a Physical Database Change?
Get your database in good physical shape
- Performance, Scalability, Maintenance
- Take advantage of new features
- No adverse application effects
- Physical reorg does NOT change the application
- Definitions are abstracted from the language by an internal mapping layer
- Different physical deployments can run with the same compiled r-code
10 Database Physical Change Overview
Time for a change: a database re-org
- Preparation (same as before)
- Truncate BI, disable AI, backup, validate, install
- Before Physical Reorg
- Upgrade database to OpenEdge 10
- conversion utility
- prostrct create (a must if changing block size)
- Physical Updates (no r-code changes required)
- Separate schema from user data
- Create new storage areas
- Specify records per block
- Specify Type II cluster sizes
11 Database Physical Reorg Overview (cont.)
- Physical Reorg
- Spread data out amongst new areas
- Move indexes
- Online options vs offline options
- E.g. Database block size changes are offline
- After Reorg
- Reclaim Unused Space
- Truncate old data area
- Delete old data area
12 Physical Database Update Options
- In Place (same database)
- Transformation done all at once or over time
- Add new storage areas to existing database
- Migrate data from old areas to new
- Reclaim space from old areas
- New database
- Transformation done in one maintenance window
- Dump old data
- Create new database
- Load into new database
- Prodel old database
- Mix of Option 1 and Option 2 (custom)
- Create new database
- Move data from old database to new database
- Reclaim space by deleting old database
13 Upgrade - An Example of the Steps
Option 1
Getting started: in-place changes - separate user data from schema
14 Moving Schema Tables
Separate schema from user data (in place)
Option 1
proutil <db> -C mvsch (offline operation)
- Renames the existing schema area to "Old Default Area"
- Creates a new Schema Area
- Moves the schema tables and indexes into it
- You move data to the new User Areas
- You truncate the Old Default Area
17 Physical Changes - Location, Location, Location
Option 1
- Create .st file with new layout
- Use Type II Storage Areas
- Tables: 64- or 512-block clusters
- Indexes: 8- or 64-block clusters
d "Cust/Bill Indexes",1;8 /d_array2/myDB_7.d1 f 512000
d "Cust/Bill Indexes",1;8 /d_array2/myDB_7.d2
d "Customer Data",16;64 /d_array2/myDB_8.d1 f 1024000
d "Customer Data",16;64 /d_array2/myDB_8.d2
d "Billing Data",32;512 /d_array2/myDB_9.d1 f 1024000
d "Billing Data",32;512 /d_array2/myDB_9.d2
18 Physical Changes
Option 1
- Validate first
- prostrct add <db> new.st -validate
- Then update
- prostrct add <db> new.st
- OR
- prostrct addonline <db> new.st
The Structure file format is valid. (12619)
19 Moving Tables and Indexes
Option 1
3 Options for Data Movement
- Table move and Index move
- Online (somewhat)
- Dump and Load
- With and w/out index rebuild
- Application must be offline (without additional customization)
- Suggestion: mix of options 1 and 2
- Table move small tables
- Dump and load everything else
- First, purge/archive unneeded data
20 Option 1 - Table/Index Move
Option 1
- Advantages
- Online (with no-lock access)
- Dump and load in one step
- Schema is automatically updated
- No object changes to deal with
- Parallelism
- Disadvantages
- Only No-lock accessible during move
- Moves but doesn't rebuild indexes
- Too slow for large tables
- Changes are logged!
- BI growth may be of concern
21 Table Move
Option 1
- proutil <db> -C tablemove owner-name.table-name table-area index-area
- Move/reorg a table by its primary index
- Move a table AND its indexes
- Preferred performance
- Fast for small tables
- No object changes!
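As a sketch of driving tablemove over a handful of small tables: the script below only prints the commands so the list can be reviewed first. The table and area names are examples, not from the deck.

```shell
#!/bin/sh
# Dry-run sketch: move small tables (and their indexes) with tablemove.
# Table and area names are examples only.
DB=mydb
TABLE_AREA="Customer Data"
INDEX_AREA="Cust/Bill Indexes"

CMDS=""
for T in state salesrep item; do
  CMD="proutil $DB -C tablemove PUB.$T \"$TABLE_AREA\" \"$INDEX_AREA\""
  echo "$CMD"               # review, then run each command for real
  CMDS="$CMDS$CMD
"
done
```

Running the moves one table at a time keeps BI growth (a disadvantage noted above) easier to watch.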
22 Index Move
Option 1
- proutil <db> -C idxmove owner-name.table-name.index-name area-name
- Move an index from one area to another
- Does NOT alter/optimize index block layout
- Fast but does not rebuild indexes
- Online but changes to owning table blocked
23 Upgrade - An Example of the Steps
Option 2
Getting started: make a new database - separate user data from schema
24 Option 2 - New Database - Dump and Load
Option 2
3 Dump and Load Flavors
- Textual Data
- Data Dump
- Data Load
- Bulk Load followed by index rebuild
- Binary
- Binary dump
- Binary load
- With index rebuild
- Followed by index rebuild
- Custom (Textual or raw)
- Dump/Load w/triggers
- buffer-copy
- Export/Import
25 Dump and Load General Strategy
Option 2
- Create new database structure
- Add to existing DB
- New database
- Run tabanalys
- Dump table data
- Data definitions
- Dump
- Modify
- Load definitions
- Load table data
- Build indexes (if needed)
- Backup
If you have the disk space, creating a new DB saves time.
If done in place, the existing tables must be deleted prior to the load (but you knew that already).
26 Dumping the Data
Option 2
27 Using Binary Dump (Fastest)
Option 2
- Advantages
- Fastest and Easy
- No 2 GB file size limit
- No endian issues
- Can choose dump order (by index)
- Can dump table data in portions
- Multi threaded (10.1B)
- Can dump multiple tables concurrently (parallel)
- Disadvantages
- Must ensure no table changes between dump and load
- Not character based
28 Binary Dump - Threaded (Fastest)
Option 2
- proutil <db> -C dump <table> <dir> -index <index#>
- -thread 1 -threadnum <n>
- -dumpfile <filelist> -Bp 64
- -index <n>
- Choose index based on read order
- -index 0
- Faster dump, slower read
- Assumes coming from Type II
- -thread indicates threaded dump
- Number of threads is automatic (# of CPUs)
- -threadnum: max of # of CPUs x 2
- Threads only available in multi user mode
- Division of labor according to root block
- -dumpfile used as input for load
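A threaded dump of two large tables might look like the following sketch. The table names, dump directory, and thread count are illustrative; the script prints the commands rather than running them.

```shell
#!/bin/sh
# Dry-run sketch: threaded binary dump (10.1B+), run against a multi-user database.
DB=mydb
DUMPDIR=/dumps
THREADS=4   # slide's rule of thumb: at most # of CPUs x 2

for T in customer billing; do
  echo proutil $DB -C dump $T $DUMPDIR -index 0 \
       -thread 1 -threadnum $THREADS \
       -dumpfile $DUMPDIR/$T.lst -Bp 64
done
```

The `-dumpfile` list each dump writes is what the later `-dumplist` load consumes.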
29 Binary Dump Specified
Option 2
- proutil <db> -C dumpspecified <table.field> <operator> <value> <dir> -preferidx <idx-name>
- Switches
- table.field MUST be the lead participant in the index
- Valid operators: LT, GE, LE, GT, EQ
- -preferidx determines the specific index to use
- -index, -thread are ignored
- Performance
- Same as w/2 threads (until 10.1B03)
- Inherent max of 2 concurrent dumpers per table (till 10.1B03)
- Cautions
- Must use an ascending index
- Output dir must be different!
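For example, splitting one large table into two concurrent dumpers by a date range could be sketched like this. The field, value, index, and directory names are made up; note the two distinct output directories, as the caution above requires.

```shell
#!/bin/sh
# Dry-run sketch: two dumpspecified ranges over the same table, run concurrently.
# Field/index/value names are hypothetical examples.
DB=mydb
CMD1="proutil $DB -C dumpspecified PUB.order.order-date LT 01-01-2006 /dump1 -preferidx order-date"
CMD2="proutil $DB -C dumpspecified PUB.order.order-date GE 01-01-2006 /dump2 -preferidx order-date"
echo "$CMD1"
echo "$CMD2"
```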
30 Dictionary Data Dump
Option 2
- Database Admin tool
- OR run prodict/dump_d.p("<table>", "<dir>", "u1108").
- Advantages
- Fast and easy
- Parallel
- No endian issues
- Disadvantages
- 2 GB file size limit (10.1B)
- Can't choose dump order
- Have to dump the entire table
- Must ensure no one changes the table between dump and load
31 Reorganizing Areas/Objects
Option 2
Data Dump Completed.
32 Dump - Modify Data Definitions
Option 2
- Use Data Administration tool
- OR
- run prodict/dump_df.p("ALL", "<mydb>.df", "").
- If using bulk load (slowest approach of all)
- run prodict/dump_fd.p("ALL", "<mydb>.fd").
33 Dump - Modify Data Definitions
Option 2
- Update .df files
- Optionally delete old table
- Change tables' area information
- Delete/drop tables
- Load data definitions
- Data Administration tool
- OR run prodict/load_df.p("<mytable>.df").
34 Alternative Data Definition Modification
Option 2
If all data in the area was dumped
- Truncate objects for fast move/delete
- proutil <db> -C truncate area "Old Default Area"
- Warns, then deletes data (but NOT schema)
- Rebuild/activate empty indexes (if moving)
- proutil <db> -C idxbuild inactiveindexes
- Move empty tables/indexes to new area
- proutil <db> -C tablemove <table> <area> index-area
- No effect on CRCs
35 Dump Completed - Now We Reload
Option 2
Load the data back in (finally). Remember to load all data files. Be sure to validate the loaded data.
36 Bulkload (The Slowest Alternative)
Option 2
- proutil <db> -C bulkload <fd-file> -B 1000 -i -Mf 10
- Data input from dictionary or custom data dump
- Mentioned here for completeness only
- Drawbacks
- 2 GB file limit (10.1B)
- Loads one table at a time (single user)
- Does not insert index entries
- Requires index rebuild as separate step
- No advantage over other loads
- Slower than all other loads
37 Dictionary Load
Option 2
- Data Administration Tool
- OR
- run prodict/load_d.p("table1", "table1.d").
- Data input from dictionary or custom data dump
- 2 GB file limit (per load) in 10.1B
- Load data in parallel (to separate tables)
- Inserts index entries
- Index tree not perfect
- Performance close to binary load + index rebuild (when loading multiple tables)
38 Binary Load (The Fastest Option)
Option 2
- proutil <db> -C load <table>.bd build indexes
- Load to new or truncated area
- Truncated rather than emptied
- Parallel load to different tables
- Same or different areas w/o scatter!
- Optionally load with build indexes
- Somewhat better performance
- Restart db before running (bug)
- Suggested not to use online yet
39 Binary Load
Option 2
From a threaded dump or dumpspecified:
- proutil <db> -C load <table>.bd -dumplist <filename>
- Dump List File
- /usr1/richb/mytable.bd
- /usr1/richb/mytable2.bd
- /usr1/richb/mytable3.bd
- Must load ALL dumps (.bd, .bd2, .bd3, ...)
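Loading from a threaded dump can then be sketched as follows; the database, table, and list-file names are placeholders.

```shell
#!/bin/sh
# Dry-run sketch: binary load driven by the -dumplist file the threaded dump wrote.
DB=newdb
LIST=/usr1/richb/mytable.lst    # hypothetical list-file name
CMD="proutil $DB -C load mytable.bd -dumplist $LIST"
echo "$CMD"
```

Driving the load from the list file is what guarantees every dump section is loaded.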
40 Tuning the Process
Option 2
Tune for high (non-recoverable) activity
- Dump with
- -RO, high -B and/or -Bp
- Dump on the index w/fewest blocks (if possible)
- Load with
- High -B, -r or -i
- BIW, 1.5 APWs per CPU
- Large BI clusters w/16K BI blocks
- No AI or 2PC
- Spread data, BI, and temp files across disks/controllers
Only use -r / -i when complete data recovery is possible
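Under those assumptions, a load-window startup might be sketched like this. The buffer and writer counts are example values for a small box, and -i disables crash recovery, so use it only when the load can be restarted from the dump files.

```shell
#!/bin/sh
# Dry-run sketch: server and helper processes for a bulk-load window.
# Parameter values are illustrative, not recommendations.
DB=newdb
SERVE="proserve $DB -B 100000 -bibufs 64 -i"   # big buffer pool, no-integrity mode
echo "$SERVE"
echo probiw $DB        # one before-image writer
echo proapw $DB        # ~1.5 APWs per CPU; repeat as needed
echo proapw $DB
```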
41 After the Load
Option 2
Build indexes (where applicable)
- proutil <db> -C idxbuild table <table> area <area>
- Many new idxbuild choices
- Helpful parameters
- -SG 64 (sort groups)
- -SS filename (file containing sort file list)
- -TM 32 (merge buffers)
- -TB 31 (temp block size)
- -B 1000
- Run tabanalys
- validate records
- Backup your database
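Putting the slide's parameters together, an index rebuild invocation could be sketched as below; the table, area, and sort-file names are examples.

```shell
#!/bin/sh
# Dry-run sketch: offline idxbuild using the sort/merge knobs listed above.
DB=newdb
CMD="proutil $DB -C idxbuild table customer area \"Cust/Bill Indexes\" -TB 31 -TM 32 -SG 64 -SS sortfiles.txt -B 1000"
echo "$CMD"
```

The -SS file lets the sort spill across several disks instead of filling one.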
42 Reclaim Space
Option 1 or Option 2
For areas that were emptied
- proutil <db> -C truncate area <area-name>
- Warns, then deletes any remaining data (if the area was not emptied)
- proutil <db> -C truncate area
- Only truncates empty areas (but all of them)
- Area logically truncated (option 2)
- Extents can be deleted
- prostrct remove <db> d <old-area-name>
DB Ready!
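The reclaim stage can be sketched in the same dry-run style; the database and area names are examples.

```shell
#!/bin/sh
# Dry-run sketch: truncate the emptied area, then remove its extents.
DB=mydb
AREA="Old Default Area"
echo proutil $DB -C truncate area "$AREA"
echo prostrct remove $DB d "$AREA"
```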
43 Agenda
- General Migration Strategy
- The Fast and Easy Upgrade
- Physical Upgrade
- Tuning Opportunity
44 Tuning Opportunity
- -Bt (temp tables are Type II storage areas)
- 10.1B changes the default temp-table block size
- From 1K to 4K
- -tmpbsize 1 restores old behavior
- Monitor BI cluster size
- BI notes are bigger in OpenEdge 10
- Grow the BI as needed (proutil -C bigrow)
- Run UPDATE STATISTICS for SQL
45 In Summary
- Conversion is quick
- Physical Upgrade at your leisure
- Lots of physical re-org options
- Rollout is simple
- 10,000 customers on OpenEdge
46 Questions?
47 Thank you for your time