Title: Get a Handle on DDL Changes Using Oracle Streams
1. Get a handle on DDL using Oracle 9iR2 Streams
NoCOUG Summer Conference, Aug. 19, 2004
2. James F. Koopmann
Where to Find Me on the NET
- N years of IT / core RDBMS experience
- Oracle (V6-10g), 8i & 9i OCP
- Writer
- Various publications & portals
- Speaker - here, there, anywhere
- 3 technical articles a month
- Forum expert / moderator
- Database-centric vendor reviews
- Database general-interest issues
3. Content
- The DBA nightmare
- How do you detect DDL changes
- STREAMS overview
- STREAMS environment
- STREAMS setup
- STREAMS sample DDL session
- OEM
- Sweet Dreams for the DBA
- Running STREAMS
4. The DBA Nightmare
Problem Detection & Viable Solutions
- Problems
  - Are problems only problems if they are seen?
  - Is it a problem if no one sees a benefit?
  - Should I be working on something that doesn't solve a problem?
  - Problems are caused by change
- We need to be able to answer the hard questions
  - What has happened?
  - When did it happen?
  - Will it happen again?
- We have to fix problems
  - Reduce finger pointing
  - Research viable sources / Test & validate / Acceptance
  - Be wary of "Your mileage may vary" type statements
5. The DBA Nightmare
Lazy Detection Methodology - Shortcomings
- Your actions are reactive, not proactive, in nature
- The events that trigger an investigation are often very specific to an incident,
  narrow in scope, and the solutions typically do not take the full health of the
  database into consideration
- Most of your time is spent in problem investigation / detection, not problem solving
- Investigation / detection inherently wastes money
- Customers / users drive the work flow of the database administrators
- The database administration group is seen as ineffective
6. The DBA Nightmare
Intelligent Detection Methodology - Benefits
- Has the issue been seen by others?
- Will someone benefit from you working on the issue?
- Are you solving a real problem that is causing pain for your users?
- Before solving the problem, are you able to determine the true root cause?
- Will solving the problem keep it from recurring?
7. The DBA Nightmare
What Can We Detect / Solve
8. The DBA Nightmare
Buffer Cache - Hit Ratio
SQL> SELECT name, block_size,
            100*(1 - (physical_reads /
                      (db_block_gets + consistent_gets))) buffhit
     FROM v$buffer_pool_statistics;
- How we size the buffer cache
- Just issue the SQL to see the hit ratio
- Increase the buffer cache if hit ratio is low
- Done if hit ratio is high
9. The DBA Nightmare
Buffer Cache - Diff'd Hit Ratio
SQL> SELECT b.name, b.block_size,
            100*(1 - ((e.physical_reads - b.physical_reads) /
                      ((e.db_block_gets - b.db_block_gets) +
                       (e.consistent_gets - b.consistent_gets)))) buffhit
     FROM beg_buffer_pool_statistics b, end_buffer_pool_statistics e
     WHERE b.name = e.name
       AND b.block_size = e.block_size;
- How to diff (see the sketch below)
  - Create table beg_buffer_pool_statistics as select * from v$buffer_pool_statistics
  - Run workload through the system
  - Create table end_buffer_pool_statistics as select * from v$buffer_pool_statistics
- Just issue the SQL to see the hit ratio
- Increase the buffer cache if the hit ratio is low
- Done if the hit ratio is high
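A minimal sketch of the snapshot steps described in the bullets above (assumes you can create tables and query v$buffer_pool_statistics; the table names are the ones used on this slide):

  -- Snapshot at the start of the measurement window
  CREATE TABLE beg_buffer_pool_statistics AS
    SELECT * FROM v$buffer_pool_statistics;

  -- ... run a representative workload ...

  -- Snapshot at the end of the window
  CREATE TABLE end_buffer_pool_statistics AS
    SELECT * FROM v$buffer_pool_statistics;

  -- Now run the diff'd hit ratio query shown above, then clean up
  DROP TABLE beg_buffer_pool_statistics;
  DROP TABLE end_buffer_pool_statistics;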
10. The DBA Nightmare
Buffer Cache - Advisory
SQL> SELECT name, block_size, size_for_estimate,
            estd_physical_read_factor, estd_physical_reads
     FROM v$db_cache_advice
     WHERE advice_status = 'ON';
- How to get the statistics (see the sketch below)
  - Set db_cache_advice to READY
  - Set db_cache_advice to ON
  - Run a valid workload through the system
  - Just issue the SQL to see the report
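For example, a minimal sketch of stepping the advisory through its states (db_cache_advice is dynamically settable in 9i; turning it on adds a small CPU and shared pool overhead):

  -- Allocate advisory memory without collecting statistics yet
  ALTER SYSTEM SET db_cache_advice = READY;

  -- Turn collection on, then run a representative workload
  ALTER SYSTEM SET db_cache_advice = ON;

  -- Afterwards, query V$DB_CACHE_ADVICE (SQL above) and, if desired,
  -- turn the advisory back off
  ALTER SYSTEM SET db_cache_advice = OFF;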
11. The DBA Nightmare
Buffer Cache - Advisory Report

NAME     BLOCK SIZE  Cache Size  Estd Phys Read Factor  Estd Phys Reads
-------- ----------  ----------  ---------------------  ---------------
DEFAULT        8192          48                 2.1133        343669940
DEFAULT        8192          96                 1.7266        280783364
DEFAULT        8192         144                 1.4763        240091867
DEFAULT        8192         192                 1.3573        220733606
DEFAULT        8192         240                 1.2801        208181172
DEFAULT        8192         288                 1.2165        197842182
DEFAULT        8192         336                 1.1686        190052917
DEFAULT        8192         384                 1.1202        182180544
DEFAULT        8192         432                 1.0877        176884743
DEFAULT        8192         480                 1.0602        172420984
DEFAULT        8192         528                 1.0196        165812231
DEFAULT        8192         544                 1             162626093
DEFAULT        8192         576                  .9765        158797352
DEFAULT        8192         624                  .9392        152735392
DEFAULT        8192         672                  .9216        149879874
DEFAULT        8192         720                  .9013        146571255
DEFAULT        8192         768                  .885         143928671
DEFAULT        8192         816                  .8726        141908868
DEFAULT        8192         864                  .8607        139972381

- It's all about reads
- Don't reduce the size of your cache if you are going to incur more physical reads
- Don't increase the size of your cache if you are not going to reduce the number of reads
- The CURRENT cache size is the row with a read factor of 1:
  DEFAULT 8192 544 1.00 162626093
12. How Do You Detect DDL Changes
Do I Care About Capturing DDL?
- What kind of shop are you in?
  - You run a tight / secure database shop (does it need to be tighter?)
  - Changes are kept to a minimum (will one slip by?)
  - The time window between DDL changes is not sufficiently large
  - You need an inexpensive investigative approach for DDL extraction
- Do you err on the side of caution or paranoia?
- Do you need to be concerned with each and every DDL statement?
- Are you proactive?
- Do you like systematic approaches to solving problems?
- Do you need to be notified of problems?
13. The DBA Nightmare
Attempts at DDL stability through policy
- Have you tried to set policies?
- I don't know how many shops I have worked in and tried to set policies such as:
  - NO changes after 3:00pm on Friday
  - Can't use production for Test / Stress / QA
  - ALL changes must be approved through the DBA
  - ALL changes must be implemented by the DBA
  - ALL source must go through QA / Test
  - and the list goes on and on and on and on
- Bottom line
  - Startup fallacy
  - Everyone is in a rush
  - Most don't want to do the job the right or proper way the first time
And then reality hits:
- You get a call at 4am on Sunday
- You spend time with data integrity
- This change MUST be in today
- You don't know what anyone is doing
- No one will tell you what they are doing
- You have to pick up the pieces
14. How Do You Detect DDL Changes
Methodologies
- Take a look at change DATEs in DBA views
  - SELECT statements against DBA views
- Compare database objects over time
  - Take snapshot at T1
  - Take snapshot at T2
  - Compare T1 to T2
- Use LogMiner
  - Search and extract DDL from current redo logs & archived logs
- Streams
  - Set of database structures to capture and replicate changes from a source
    database to a destination database
15. How Do You Detect DDL Changes
Take a Look at Dates In DBA Views
The Good - We know when an object is added / changed
The Bad  - No granularity
The Ugly - We still have to validate a change by looking at the total object
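A minimal sketch of this approach using the CREATED and LAST_DDL_TIME columns of DBA_OBJECTS (the one-day window is only an example):

  -- Objects created or changed in the last 24 hours
  SELECT owner, object_name, object_type, created, last_ddl_time
    FROM dba_objects
   WHERE last_ddl_time > SYSDATE - 1
   ORDER BY last_ddl_time;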
16. How Do You Detect DDL Changes
Compare Database Objects Over Time
- T0 - CREATE TABLE time_dba_tables with a DATE-type column
- T1 - INSERT INTO time_dba_tables (SELECT sysdate, * FROM dba_tables)
- T2 - INSERT INTO time_dba_tables (SELECT sysdate, * FROM dba_tables)
- TC - SELECT <changes> FROM time_dba_tables t1, time_dba_tables t2 (see the sketch below)
  - OUTER JOINs - for what was added and deleted
  - DECODEs - for proper column comparison
  - t1.column != t2.column - WHERE clause is OR'd for the columns of interest
The Good - Build it yourself; no reliance on 3rd-party software or the database vendor
The Bad  - Complicated code (you had better be good)
The Ugly - Oracle could change / add tables that are important to object change
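As a rough illustration of the TC comparison step, here is a sketch that compares two snapshots (the snapshot-date column name snap_time and the columns compared are only examples; a real version would OR together every column of interest and add outer joins to catch added and dropped tables):

  SELECT t2.owner, t2.table_name,
         DECODE(t1.tablespace_name, t2.tablespace_name, NULL,
                'TABLESPACE CHANGED') tablespace_change
    FROM time_dba_tables t1, time_dba_tables t2
   WHERE t1.snap_time  = :t1_time        -- first snapshot (hypothetical column name)
     AND t2.snap_time  = :t2_time        -- second snapshot
     AND t1.owner      = t2.owner
     AND t1.table_name = t2.table_name
     AND (   t1.tablespace_name != t2.tablespace_name
          OR t1.pct_free        != t2.pct_free );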
17. How Do You Detect DDL Changes
LogMiner
- Lets you look at the redo log activity in your database
- You can look in the current redo logs or archived logs
- New in 9i is the ability to look at DDL statements
18. How Do You Detect DDL Changes
LogMiner
- Create a dictionary
  EXECUTE DBMS_LOGMNR_D.BUILD(
    dictionary_filename => 'dictionary.log',
    dictionary_location => '/ora/oradata/hcmc/log')
- Specify which log files are to be mined
  EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/ora/oradata/hcmc/arch/arc00075.001',
    Options     => dbms_logmnr.NEW)
- Start LogMiner
  EXECUTE DBMS_LOGMNR.START_LOGMNR(
    DictFileName => '/ora/oradata/hcmc/log/dictionary.log')
- Query the LogMiner views
  SELECT sql_redo FROM V$LOGMNR_CONTENTS
  WHERE operation = 'DDL';
The Good - Easy to set up and use
The Bad  - Pinpointing the time when something might have happened
The Ugly - Manual process of adding logs and searching through them
19. Oracle Streams
Overview
- Oracle Streams
  - A feature within the Oracle database that allows for the replication of database
    structures and information between two separate databases
  - Is a new feature that was introduced in 9iR2
  - Is a more robust replication environment that allows for the sharing of
    information and structures
  - Improved replication performance, because Streams extracts DML and DDL from the
    redo log files through separate background processes
  - Supports hot mining of the current active redo log for change data at the same
    time it is written to the log
  - Supports the mining of archived log files
  - No loss of data
  - Directed networks
20. Oracle Streams
Directed Networks
- A directed network is defined as a networked system of hosts that allows
  information to be passed to a destination database even when the destination host
  is not directly accessible to the source host
- Two methods of information propagation:
  - Queue forwarding: information is received by an intermediate database and
    automatically forwarded to another intermediate or destination database
  - Apply forwarding: information is received by an intermediate database, applied,
    captured, and re-queued ready for propagation to another destination database
21. Streams Environment
[Diagram] A user DDL change on HCMC (source) is picked up by the CAPTURE process
(SET_UP_QUEUE, ADD_GLOBAL_RULES), moved by PROPAGATION
(ADD_GLOBAL_PROPAGATION_RULES), and recorded by the APPLY process on SAIGON
(destination) (SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE
history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY).
22-29. Streams Environment
[Diagram, built up one step at a time across slides 22-29] User DDL Change on
HCMC (source) -> CAPTURE -> PROPAGATION -> APPLY -> SAIGON (destination)
The Good - Able to report on every DDL statement issued, without intervention
The Bad  - The learning curve is a bit high
The Ugly - Intensive, cumbersome setup
30. Streams Environment
[Diagram] Recap of the full picture from slide 21: CAPTURE (SET_UP_QUEUE,
ADD_GLOBAL_RULES) on HCMC (source), PROPAGATION (ADD_GLOBAL_PROPAGATION_RULES),
and APPLY (SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE
history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY) on SAIGON (destination).
31. Streams Environment
Software Requirements
- All databases using Oracle Streams must be upgraded to version 9.2.0.2
  (patch 2632931)
- Run catalog.sql and catproc.sql after you have upgraded to version 9.2.0.2
32. Streams Environment
Archive Log Requirement
- The source database must be running in ARCHIVELOG mode in order to capture
  changes (a quick check is sketched below)
- No data loss
- No database hangs caused by LGWR waiting for the capture process to complete
- Tune your redo log generation before implementing Streams
- Do not use the NOLOGGING option
33. Streams Environment
Parameter Requirements
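(The parameter table on this slide did not survive conversion. As a guide, the initialization parameters usually called out for a 9iR2 Streams configuration are sketched below; confirm exact values against the Streams documentation for your release.)

  COMPATIBLE                      = 9.2.0  -- on both databases
  GLOBAL_NAMES                    = TRUE   -- so database links match global names
  JOB_QUEUE_PROCESSES             >= 2     -- propagation jobs
  AQ_TM_PROCESSES                 >= 1     -- queue monitoring
  LOG_PARALLELISM                 = 1      -- on the capture (source) database
  LOGMNR_MAX_PERSISTENT_SESSIONS  >= number of capture processes
  OPEN_LINKS                      >= 4     -- source-to-destination links
  PARALLEL_MAX_SERVERS            -- high enough for capture / apply parallel servers
  SHARED_POOL_SIZE                -- sized to accommodate LogMiner and capture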
34. Intermission
35. Streams Setup
Create Administrator (run on both hcmc and saigon)
- CONNECT sys/<passwd>@<hcmc or saigon> AS SYSDBA
- Normal database stuff
  CREATE TABLESPACE ddlman
    DATAFILE 'C:\ORACLE\ORADATA\<HCMC or SAIGON>\DDLMAN01.DBF' SIZE 100M
    REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  CREATE USER ddlman IDENTIFIED BY ddlman
    DEFAULT TABLESPACE ddlman
    QUOTA UNLIMITED ON ddlman;
  GRANT CONNECT, RESOURCE TO ddlman;
36. Streams Setup
Grant Privileges to Administrator
- CONNECT sys/<passwd>@<hcmc or saigon> AS SYSDBA
- Advanced Queuing Administration
- GRANT EXECUTE ON DBMS_AQADM TO DDLMAN
- Streams Administration
- GRANT EXECUTE ON DBMS_STREAMS_ADM TO DDLMAN
- GRANT EXECUTE ON DBMS_APPLY_ADM TO DDLMAN
- GRANT EXECUTE ON DBMS_CAPTURE_ADM TO DDLMAN
- GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO DDLMAN
- Instantiation Purposes
- GRANT EXECUTE ON DBMS_FLASHBACK TO DDLMAN
- Easy Monitoring and Usage
- GRANT SELECT_CATALOG_ROLE TO DDLMAN
- GRANT SELECT ANY DICTIONARY TO DDLMAN
- GRANT SELECT ON DBA_APPLY_ERROR TO DDLMAN
37. Streams Setup
Grant Privileges to Administrator to Create Rules
- CONNECT sys/<passwd>@<hcmc or saigon> AS SYSDBA
  BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
      privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
      grantee      => 'DDLMAN',
      grant_option => FALSE);
  END;
  /
  BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
      privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
      grantee      => 'DDLMAN',
      grant_option => FALSE);
  END;
  /
38. Streams Setup
Switch LogMiner out of the SYSTEM Tablespace
- Why
  - By default, LogMiner tables are in the SYSTEM tablespace
  - When you create a capture or apply process, Oracle creates a subset of the data
    dictionary to keep track of changes to structures
  - Streams will actually keep multiple versions of object information
  - The SYSTEM tablespace may not have enough room for these tables
- How
  CONNECT sys/<passwd>@hcmc AS SYSDBA
  CREATE TABLESPACE logminer
    DATAFILE 'C:\ORACLE\ORADATA\HCMC\LOGMINER01.DBF' SIZE 100M
    REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER')
39. Streams Setup
LogMiner / Streams Issues
- What
  - If you move LogMiner after you have captured or applied, you will lose the
    Streams dictionary changes that have been recorded
  - The Streams data dictionary is not kept clean by Oracle, which puts greater
    strain on the Streams dictionary and allows it to grow uncontrollably
- To do
  - Simple: don't forget to move LogMiner before you start capturing and applying
  - If an object is no longer being used, you can clean it out of the Streams
    dictionary by using DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG for that particular
    object (example below)
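A minimal sketch of that cleanup call, assuming a source table SCOTT.OLD_STUFF (a made-up name) that is no longer of interest:

  BEGIN
    DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG(
      source_database    => 'hcmc',
      source_object_name => 'scott.old_stuff',  -- hypothetical object
      source_object_type => 'TABLE');
  END;
  /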
40. Streams Setup
Database Link from Source to Destination
- Why
  - For transporting the captured DDL from the source database to the destination
    database
- How
  CONNECT ddlman/ddlman@hcmc
  CREATE DATABASE LINK saigon
    CONNECT TO ddlman IDENTIFIED BY ddlman
    USING 'saigon';
41. Streams Setup
Capture
CONNECT ddlman/ddlman@hcmc

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'ddl_cap_table',
    queue_name  => 'ddl_cap_q',
    queue_user  => 'ddlman');
END;
/

BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type       => 'capture',
    streams_name       => 'cap_ddl',
    queue_name         => 'ddl_cap_q',
    include_dml        => false,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'hcmc');
END;
/
42. Streams Setup
Propagation Rules
CONNECT ddlman/ddlman@hcmc

BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
    streams_name           => 'prop_ddl',
    source_queue_name      => 'ddl_cap_q',
    destination_queue_name => 'ddlman.ddl_apply_q@saigon',
    include_dml            => false,
    include_ddl            => true,
    include_tagged_lcr     => false,
    source_database        => 'hcmc');
END;
/
43. Streams Setup
Create Queue
CONNECT ddlman/ddlman@saigon

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'ddl_apply_table',
    queue_name  => 'ddl_apply_q',
    queue_user  => 'ddlman');
END;
/
44. Streams Setup
Create Table to Hold DDL
CONNECT ddlman/ddlman@saigon

CREATE TABLE ddlman.ddl_history(
  timestamp            DATE,
  source_database_name VARCHAR2(128),
  command_type         VARCHAR2(30),
  object_owner         VARCHAR2(32),
  object_name          VARCHAR2(32),
  object_type          VARCHAR2(18),
  ddl_text             CLOB,
  logon_user           VARCHAR2(32),
  current_schema       VARCHAR2(32),
  base_table_owner     VARCHAR2(32),
  base_table_name      VARCHAR2(32),
  tag                  RAW(10),
  transaction_id       VARCHAR2(10),
  scn                  NUMBER);
45. Streams Setup
Logical Change Records (LCRs)
When the capture process mines information from the redo log, it reformats this
information into LCRs. These LCRs are specific to the type of information captured
and completely define the change that has occurred.

SYS.ANYDATA
This is an overloaded object type that can hold any scalar (NUMBER, VARCHAR2, CHAR,
DATE) or user-defined data type. It has methods that let us query which true data
type it holds, as well as methods to retrieve the value.
See ORACLE_HOME/rdbms/admin/dbmsany.sql.
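A small stand-alone sketch of SYS.ANYDATA, just to show the type-query and retrieval methods mentioned above (run with SERVEROUTPUT ON; variable names are illustrative):

  DECLARE
    payload SYS.ANYDATA;
    v       VARCHAR2(100);
    rc      PLS_INTEGER;
  BEGIN
    -- Wrap a scalar value inside an ANYDATA instance
    payload := SYS.ANYDATA.CONVERTVARCHAR2('hello streams');

    -- Ask the instance what it really holds ...
    DBMS_OUTPUT.PUT_LINE('Holds: ' || payload.GETTYPENAME());

    -- ... and pull the value back out
    rc := payload.GETVARCHAR2(v);
    DBMS_OUTPUT.PUT_LINE('Value: ' || v);
  END;
  /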
46. Streams Setup
Create Procedure to Handle DDL
CONNECT ddlman/ddlman@saigon

CREATE PROCEDURE history_ddl(in_any IN SYS.ANYDATA) IS
  lcr      SYS.LCR$_DDL_RECORD;
  rc       PLS_INTEGER;
  ddl_text CLOB;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
  lcr.GET_DDL_TEXT(ddl_text);
  INSERT INTO ddlman.ddl_history
  VALUES(SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
         lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(),
         ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(),
         lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(),
         lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
  COMMIT;
  DBMS_LOB.FREETEMPORARY(ddl_text);
END;
/
47. Streams Setup
Create Rules
CONNECT ddlman/ddlman@saigon

BEGIN
  DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    streams_type       => 'apply',
    streams_name       => 'apply_ddl',
    queue_name         => 'ddl_apply_q',
    include_dml        => false,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'hcmc');
END;
/
48. Streams Setup
Hook in the DDL Handler
CONNECT ddlman/ddlman@saigon

BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'apply_ddl',
    ddl_handler => 'ddlman.history_ddl');
END;
/
49. Streams Setup
Instantiate the Streams Environment
- Definition
  - Before we can start capturing, propagating, and applying within our Streams
    environment, we must instantiate the destination database. This is nothing more
    than registering the source SCN with the destination database so it knows the
    point in time from which it can start applying captured information.
- There are three methods for instantiating objects
  - Instantiating and creating database objects by using Export/Import
  - Setting the instantiation SCN for an existing table, schema, or database
    manually, by executing procedures in the DBMS_APPLY_ADM package at the
    destination database
  - Performing a metadata-only export and import for existing tables or schemas
- You can verify the instantiation by querying DBA_APPLY_INSTANTIATED_OBJECTS
50. Streams Setup
Instantiate the Streams Environment
CONNECT ddlman/ddlman@hcmc

EXEC DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN@saigon(
    source_database_name => 'hcmc',
    instantiation_scn    => iscn,
    apply_database_link  => 'saigon');
END;
/
51. Streams Setup
Start the Apply Process
CONNECT ddlman/ddlman@saigon

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_ddl');
END;
/
- Sets the status of the apply process to enabled
- Starts the reader server PX process (pnn)
- Starts the background coordinator process (apNN)
- Starts the apply server PX process (pnn)

Stop the Apply Process
BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => 'apply_ddl');
END;
/
52. Streams Setup
Start the Capture Process
CONNECT ddlman/ddlman@hcmc

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'cap_ddl');
END;
/
- Sets the status of the capture process to enabled
- Starts the background capture process (cpnn)

Stop the Capture Process
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => 'cap_ddl');
END;
/
53. OEM
Streams
54. OEM
Streams - Capture
55. OEM
Streams - Propagate
56. OEM
Streams - Apply
57. Sample DDL Session
Make Some Changes

HCMC-SQL> connect scott/tigger@hcmc
Connected.
HCMC-SQL> CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER);
Table created.
HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500));
Table altered.
HCMC-SQL> TRUNCATE TABLE DDL_CHECK_TABLE;
Table truncated.
HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2;
Table altered.
HCMC-SQL> DROP TABLE DDL_CHECK_TABLE;
Table dropped.
58. Sample DDL Session
View From DDL_HISTORY Table

SAIGON-SQL> SELECT timestamp, substr(source_database_name,1,4) source,
  2         logon_user, command_type, object_owner owner, object_name name,
  3         object_type type
  4  FROM ddl_history;

TIMESTAMP SOURCE LOGON_USER COMMAND_TYPE         OWNER NAME            TYPE
--------- ------ ---------- -------------------- ----- --------------- -----
11-OCT-03 HCMC   SYS        CREATE USER                AA              USER
11-OCT-03 HCMC   SYS        CREATE TABLESPACE
11-OCT-03 HCMC   SYS        CREATE USER                TEMPUSER        USER
11-OCT-03 HCMC   SYS        ALTER USER                 SCOTT           USER
11-OCT-03 HCMC   SCOTT      CREATE TABLE         SCOTT DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      ALTER TABLE          SCOTT DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      TRUNCATE TABLE       SCOTT DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      ALTER TABLE          SCOTT DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      DROP TABLE           SCOTT DDL_CHECK_TABLE TABLE
59. Sample DDL Session
View From DDL_HISTORY Table

SAIGON-SQL> SELECT ddl_text
  2  FROM ddl_history;

DDL_TEXT
---------------------------------------------------------------------------
CREATE user aa identified by VALUES '1468620FBA6271E8'
create temporary tablespace temp01
create user tempuser identified by VALUES '2B4C9C62A2919AEF'
alter user scott identified by VALUES 'A7E7E0150C6D5EF3'
CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER)
ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500))
TRUNCATE TABLE DDL_CHECK_TABLE
ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2
DROP TABLE DDL_CHECK_TABLE
60. The DBA Nightmare
Remember Me
61. Sweet Dreams for The DBA
[Chart] SQL statements executed over time - DROP INDEX and CREATE INDEX events
plotted against wait time (db file sequential reads, buffer busy waits, parse
calls) along the time axis.
62. Running Streams
DDL Types Captured
- CREATE/ALTER/DROP tables (includes table comments)
- CREATE/ALTER/DROP tablespaces (requires global rules to be set)
- CREATE/ALTER/DROP indexes
- CREATE/ALTER/DROP triggers
- CREATE/ALTER/DROP views
- CREATE/ALTER/DROP synonyms
- CREATE/ALTER/DROP sequences
- Creation of PL/SQL packages, procedures and functions
- Changes to users/roles
- GRANT or REVOKE on users/roles
- COMMIT
- ROLLBACK
- AUDIT (can be done on user objects)
63. Running Streams
DDL Types Captured But NOT Applied
- CREATE, ALTER, or DROP MATERIALIZED VIEW LOG
- CREATE, ALTER, or DROP MATERIALIZED VIEW
- CREATE or ALTER TABLE for index-organized tables
- CREATE SCHEMA AUTHORIZATION
- CREATE or DROP DATABASE LINK
- RENAME (use ALTER TABLE instead)
- CREATE TABLE ... AS SELECT for clustered tables
64. Running Streams
DDL Types NOT Captured
- CREATE, ALTER, or DROP MATERIALIZED VIEW LOG
- CREATE, ALTER, or DROP MATERIALIZED VIEW
- CREATE or ALTER TABLE for index-organized tables
- CREATE SCHEMA AUTHORIZATION
- CREATE or DROP DATABASE LINK
- RENAME (use ALTER TABLE instead)
- CREATE TABLE ... AS SELECT for clustered tables
- CREATE or ALTER DATABASE
- ALTER SESSION
- ALTER SYSTEM
- TRUNCATE
- CREATE/ALTER/DROP ROLLBACK SEGMENT
- CREATE/ALTER/DROP TYPE
- CREATE/ALTER/DROP PROFILE
- CREATE/DROP LIBRARY
- CREATE/DROP DIRECTORY
- SET ROLE
- SET TRANSACTION
- SET CONSTRAINT
- CREATE CONTROLFILE
- CREATE SPFILE
- CREATE PFILE
- ANALYZE
- EXPLAIN
- CALL
65. Running Streams
Problems You May Encounter
- Setup / running Streams
  - Status in dba_capture / dba_apply shows ABORTED
  - ORA-01925: maximum of 30 enabled roles exceeded
  - What object is that?
  - Mixed-case global_name causing a mismatch between Streams and LogMiner
- Remedy
  - Stop the capture and apply processes and start them again
  - Increase the current value for MAX_ENABLED_ROLES:
    ALTER SYSTEM SET max_enabled_roles=100 SCOPE=SPFILE (static parameter; takes
    effect after a restart)
  - Avoid using system-generated names
    - Creation will work
    - ALTER / DROP statements will more than likely fail
  - Change db_name / db_domain to uppercase
66. Running Streams
Problems You May Encounter
- Bugs
  - ANALYZE statements are not propagated to the target database when using
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES
  - DDL issued through EXECUTE IMMEDIATE is not applied at the destination
  - Checkpoints are skipped if there is any DDL activity in the redo logs, causing
    the capture process to reprocess logs during startup
  - A dropped Streams environment is difficult to clean up
- Remedy
  - Use DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES instead
  - Do not use dynamic DDL
  - Ensure that no DDL has been issued around scheduled shutdowns
  - Get it right the first time, or choose a versioned naming scheme
67. Running Streams
Monitoring
Just some of the views worth watching (sample queries below):
DBA_QUEUES, DBA_QUEUE_TABLES, DBA_APPLY, DBA_APPLY_PARAMETERS, DBA_CAPTURE,
DBA_CAPTURE_PARAMETERS, DBA_PROPAGATION, DBA_APPLY_ERROR, DBA_RULES,
DBA_RULE_SETS, DBA_RULE_SET_RULES, DBA_JOBS, DBA_QUEUE_SCHEDULES
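For example, a couple of quick health checks against these views (a sketch; column lists trimmed to fit the slide):

  -- Are the pieces still running?
  SELECT capture_name, status FROM dba_capture;
  SELECT apply_name,   status FROM dba_apply;
  SELECT propagation_name, destination_dblink FROM dba_propagation;

  -- Did anything land in the error queue?
  SELECT apply_name, source_database, error_number, error_message
    FROM dba_apply_error;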
68. Get a handle on DDL using Oracle 9iR2 Streams
NoCOUG Summer Conference, Aug. 19, 2004