DBMS_LOGMNR
The LogMiner packages, DBMS_LOGMNR and DBMS_LOGMNR_D, allow for analysis of Oracle's redo log files.
You would make use of this feature for some of the following reasons:
❑ You want to find out when a table was 'accidentally' dropped, and by whom.
❑ You want to perform some auditing on a given table, or set of tables, to see who has been modifying what pieces of it. You can do this auditing 'after the fact'. Normally, you might use the AUDIT command, but this must be enabled ahead of time, and it only tells you someone modified the table – not what they modified. LogMiner is good for post-facto 'who did that' discovery, and to see what data changed exactly.
❑ You would like to 'undo' a given transaction. In order to undo it, we'll need to see what it did, and get the PL/SQL for undoing it.
❑ You would like to get some empirical counts of rows modified in an average transaction.
❑ You would like to perform a historical analysis of how the database has been used over time.
❑ You would like to find out why your database is suddenly generating 10 MB of log every minute. It never used to do this, and now it is all of a sudden. Are there any obvious culprits to be found in a quick review of the logs?
❑ You would like to see what is really happening 'under the cover'. The contents of the redo logs show you what actually happened when you did that INSERT on a table with a trigger that does an UPDATE of another table. All of the effects of your transaction are recorded in the log. LogMiner is an excellent exploration tool.
LogMiner provides you the tools to do all this, and more. What I will provide here is a quick overview of how to use LogMiner, and then explain some of the caveats of its use that are not spelled out in the Supplied PL/SQL Packages Reference guide shipped with Oracle. As with all of the other packages, it is recommended that you read the section in the Supplied PL/SQL Packages Reference on DBMS_LOGMNR and DBMS_LOGMNR_D to get an overview of the functions and procedures they contain, and how they are used. Below, in the Options and Usage section, we will give an overview of these procedures, and their inputs as well.
LogMiner works best on archived redo log files, although it can be used with online redo log files that are not active. Attempting to use an active online redo log file could lead to an error message or just confusion on your part, as the redo log file will contain a mixture of old and new transaction data. An interesting thing to note about LogMiner is that you do not need to analyze a log file in the database that originally created it. It doesn't even have to be the same exact database version (you can analyze version 8.0 archive files in an 8.1 database).
You can move an archived redo log file to another system, and analyze it there instead. This can be quite convenient for auditing and looking at historical usage patterns, without impacting the existing system. In order to do this however, you must use a database that is on the same hardware platform (byte-ordering, word sizes, and so on will be affected by this). Also, you will want to make sure that the database block sizes are the same
(or that the database doing the analysis has a block size at least as big as the database originating the redo log), and have the same character set.
Using LogMiner is a two-step process. Step one involves creating a data dictionary for LogMiner to operate with. This is what allows a redo log file from one database to be analyzed on another – LogMiner does not use the existing data dictionary, it uses the data dictionary that was exported to an external file by the
DBMS_LOGMNR_D package. LogMiner can be used without this data dictionary, but you will find the resulting output virtually unreadable. We'll take a look at what this would look like later.
Step two involves importing the redo log files and starting LogMiner. Once LogMiner is started, you can review the contents of the redo log files using SQL. There are four V$ views associated with LogMiner. The main view is V$LOGMNR_CONTENTS. This is the view you will use to review the contents of the redo log files you have loaded. We will take a look at this view in more detail in the example, and at the end of this section we have a table that defines each column. The other three views are:
❑ V$LOGMNR_DICTIONARY – This view contains information about the dictionary file that has been loaded. This is the dictionary you created in step one. In order to make sense of the contents of a redo log file, we need to have a dictionary file that tells us what object name goes with what object ID, what the columns and data types of each table are, and so on. This view contains at most one row, for the currently loaded dictionary only.

❑ V$LOGMNR_LOGS – This view contains information about the redo log files you have requested LogMiner to load into the system. The contents of these redo log files will be found in V$LOGMNR_CONTENTS. This view tells you about the redo log file itself. Attributes such as the name of the redo log file, the database name of the database it came from, the SCNs (system change numbers) contained in it, and so on, are found here. This view will have an entry per log file you are analyzing.

❑ V$LOGMNR_PARAMETERS – This view shows the parameters that were passed to LogMiner during its startup. This view will have one entry after you call the startup routine for LogMiner. A quick sanity check against these three views is sketched after this list.
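Before diving into V$LOGMNR_CONTENTS, it can be handy to confirm what LogMiner believes it has loaded. The following is only a minimal sanity-check sketch; it assumes V$LOGMNR_LOGS exposes a FILENAME column (the other two queries simply dump everything in the view):

select filename from v$logmnr_logs;
select * from v$logmnr_dictionary;
select * from v$logmnr_parameters;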
An important point to note here is that because LogMiner's memory allocation comes from the PGA, LogMiner cannot be used in an MTS environment. This is because with MTS, you will be assigned to a different shared server (process or thread) each time you make a request into the database. The data you loaded into Process
One (Shared Server One) is simply not available to Process Two (Shared Server Two). You must be using a dedicated server configuration for LogMiner to function. Also, the output is only visible in a single session, and only for the life of that session. If further analysis is needed, you must either reload the information, or make it permanent, perhaps using a CREATE TABLE AS SELECT. If you are analyzing a large amount of data, making the data permanent via a CREATE TABLE AS SELECT or INSERT INTO makes even more sense. You would then be able to index this information, whereas with the V$LOGMNR_CONTENTS table, you will always be performing a full scan of a V$ table, since it has no indexes. This full scanning of a V$ table can be quite resource-intensive.
What we'll do now is present an overview of how to use the LogMiner facility. After that, we'll look at all of the inputs to the two LogMiner supplied packages, and what they mean. Then, we will investigate using LogMiner to find out when some operation took place in the database. After that, we'll take a quick look at how LogMiner affects your session's memory usage, and how it caches the redo log files internally. Lastly, we'll look at some of the limitations of LogMiner that are not mentioned in the documentation.
In order for LogMiner to map internal object IDs and columns to their appropriate tables, it needs a data dictionary. It will not use the data dictionary already present in the database. Rather, it relies on an external file to provide the data dictionary. LogMiner works this way in order to allow redo log files from one database to be analyzed in a different one. Additionally, the data dictionary that is current today in your database may not describe all of the objects that were in the database when the redo log file was generated, hence the need to be able to import a data dictionary.
To see the purpose of this data dictionary file, we'll look at some output from LogMiner without having a data dictionary loaded. We'll do this by loading an archived redo log file and starting LogMiner. Then a quick query in V$LOGMNR_CONTENTS shows what is there:

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.add_logfile
  3  ( 'C:\oracle\oradata\tkyte816\archive\TKYTE816T001S01263.ARC',
  4    sys.dbms_logmnr.NEW );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr;
  3  end;
  4  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> column sql_redo format a30
tkyte@TKYTE816> column sql_undo format a30
tkyte@TKYTE816> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /
       SCN SQL_REDO                       SQL_UNDO
---------- ------------------------------ ------------------------------
6.4430E+12
6.4430E+12 set transaction read write;
6.4430E+12 update UNKNOWN.Objn:30551 set update UNKNOWN.Objn:30551 set
Col[2] = HEXTORAW('787878') wh Col[2] = HEXTORAW('534d495448'
ere ROWID = 'AAAHdXAAGAAAAJKAA ) where ROWID = 'AAAHdXAAGAAAA
A'; JKAAA';
6.4430E+12
6.4430E+12 commit;
tkyte@TKYTE816> select utl_raw.cast_to_varchar2(hextoraw('787878')) from dual;

UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('787878'))
------------------------------------------------------------------
xxx

tkyte@TKYTE816> select utl_raw.cast_to_varchar2(hextoraw('534d495448')) from dual;

UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('534D495448'))
---------------------------------------------------------------------
SMITH
This output is fairly unreadable. We know that object number 30551 was updated and column 2 was modified.
Further, we can turn the HEXTORAW('787878') into a character string. We could go to the data dictionary and find out that object 30551 is:

tkyte@TKYTE816> select object_name
  2    from all_objects
  3   where data_object_id = 30551;

OBJECT_NAME
------------------------------
EMP

but only if we are in the same database in which the redo log file was originally generated, and only if that object still exists. Further, we could DESCRIBE EMP and discover that column 2 is ENAME. Therefore, the SQL_REDO column from LogMiner is really UPDATE EMP SET ENAME = 'XXX' WHERE ROWID = .... Fortunately, we do not need to go through this laborious conversion each and every time we use LogMiner. We'll find that by building and then loading a dictionary, we'll get much better results. The following example shows what output we could expect if we build a dictionary file for LogMiner to work with, and then load it.
We start by creating the dictionary file. Creating this data dictionary file is rather straightforward. The prerequisites for doing this are:
❑ UTL_FILE has been configured in your init.ora file so that there is at least one directory that can be written to. See the section on UTL_FILE for information on setting this up. DBMS_LOGMNR_D, the package that builds the data dictionary file, relies on UTL_FILE to perform I/O.

❑ The schema that will execute the DBMS_LOGMNR_D package has been granted EXECUTE ON SYS.DBMS_LOGMNR_D, or has a role that is able to execute this package. By default, the EXECUTE_CATALOG_ROLE has the privilege to run this package.
Once you have UTL_FILE set up and EXECUTE ON DBMS_LOGMNR_D, creating the data dictionary file is trivial.
There is only one call inside of DBMS_LOGMNR_D, and this is called BUILD. You would simply execute something along the lines of:

tkyte@TKYTE816> set serveroutput on
tkyte@TKYTE816> begin
  2  sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
  3                           'c:\temp' );
  4  end;
  5  /
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
TABLE: OBJ$ recorded in LogMnr Dictionary File
TABLE: TAB$ recorded in LogMnr Dictionary File
TABLE: COL$ recorded in LogMnr Dictionary File
TABLE: SEG$ recorded in LogMnr Dictionary File
TABLE: UNDO$ recorded in LogMnr Dictionary File
TABLE: UGROUP$ recorded in LogMnr Dictionary File
TABLE: TS$ recorded in LogMnr Dictionary File
TABLE: CLU$ recorded in LogMnr Dictionary File
TABLE: IND$ recorded in LogMnr Dictionary File
TABLE: ICOL$ recorded in LogMnr Dictionary File
TABLE: LOB$ recorded in LogMnr Dictionary File
TABLE: USER$ recorded in LogMnr Dictionary File
TABLE: FILE$ recorded in LogMnr Dictionary File
TABLE: PARTOBJ$ recorded in LogMnr Dictionary File
TABLE: PARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABPART$ recorded in LogMnr Dictionary File
TABLE: INDPART$ recorded in LogMnr Dictionary File
TABLE: SUBPARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABSUBPART$ recorded in LogMnr Dictionary File
TABLE: INDSUBPART$ recorded in LogMnr Dictionary File
TABLE: TABCOMPART$ recorded in LogMnr Dictionary File
TABLE: INDCOMPART$ recorded in LogMnr Dictionary File
Procedure executed successfully - LogMnr Dictionary Created
PL/SQL procedure successfully completed.
It is recommended that you issue a SET SERVEROUTPUT ON prior to executing DBMS_LOGMNR_D, as this will allow informational messages from DBMS_LOGMNR_D to be printed. This can be extremely useful when trying to diagnose an error from DBMS_LOGMNR_D. What the above command did was to create a file C:\TEMP\MINER_DICTIONARY.DAT. This is a plain text, ASCII file that you may edit to see what is in there.
This file consists of a lot of SQL-like statements that are parsed and executed by the LogMiner start routine.
Now that we have a dictionary file on hand, we are ready to see what the contents of V$LOGMNR_CONTENTS might look like now:

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.add_logfile
  3  ( 'C:\oracle\oradata\tkyte816\archive\TKYTE816T001S01263.ARC',
  4    sys.dbms_logmnr.NEW );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr
  3  ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> column sql_redo format a30
tkyte@TKYTE816> column sql_undo format a30
tkyte@TKYTE816> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /
       SCN SQL_REDO                       SQL_UNDO
---------- ------------------------------ ------------------------------
6.4430E+12
6.4430E+12 set transaction read write;
6.4430E+12 update TKYTE.EMP set ENAME = ' update TKYTE.EMP set ENAME = '
xxx' where ROWID = 'AAAHdXAAGA SMITH' where ROWID = 'AAAHdXAA
AAAJKAAA'; GAAAAJKAAA';
6.4430E+12
6.4430E+12 commit;
Now that's more like it – we can actually read the SQL that LogMiner generates for us, which would 'replay' (or undo) the transaction we are looking at. Now we are ready to go into Step 2 – Using LogMiner.
Here, we will take the dictionary file we just generated, and use it to review the contents of some archived redo log files. Before we load a redo log file, we will generate one with some known transactions in it. For the first time around, this will make it easier to see what we have. We'll be able to correlate what we find in the
V$LOGMNR_CONTENTS view with what we just did. For this to work, it is important to have a 'test' database, one where you can be sure you are the only one logged in. This allows us to artificially constrain just exactly what gets put into the redo log. Also, it would be necessary to have the ALTER SYSTEM privilege in this database so we can force a log file archive. Lastly, this is easiest to do if the database is in archive log mode with automatic archiving. In this fashion, finding the redo log file is trivial (it will be the one just archived – we'll see below how to find this). If you are using a NOARCHIVELOG mode database, you will need to find the active log, and determine which log file was active just prior to it. So, to generate our sample transaction we could:

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> update emp set ename = lower(ename);

14 rows updated.

tkyte@TKYTE816> update dept set dname = lower(dname);

4 rows updated.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> alter system archive log current;

System altered.
tkyte@TKYTE816> column name format a80
tkyte@TKYTE816> select name
  2    from v$archived_log
  3   where completion_time = ( select max(completion_time)
  4                               from v$archived_log )
  5  /

NAME
--------------------------------------------------------------
C:\ORACLE\ORADATA\TKYTE816\ARCHIVE\TKYTE816T001S01267.ARC
Now, given that we were the only user logged in doing work, the archive redo log we just generated will have our two updates in it and nothing else. This last query against V$ARCHIVED_LOG shows us the name of the archive redo log file we actually want to analyze. We can load this into LogMiner and get started by using the following SQL. It will add the last archive redo log file to the LogMiner list, and then start LogMiner:

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4
  5      select name into l_name
  6        from v$archived_log
  7       where completion_time = ( select max(completion_time)
  8                                   from v$archived_log );
  9
 10      sys.dbms_logmnr.add_logfile( l_name, sys.dbms_logmnr.NEW );
 11  end;
 12  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr
  3  ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.
The first call, to DBMS_LOGMNR.ADD_LOGFILE, loaded an archive redo log file into LogMiner. I passed in the name of the archived redo log file, as well as the option DBMS_LOGMNR.NEW. Since this is the first log file I am adding in, I used DBMS_LOGMNR.NEW. The other options are ADDFILE, to add another log file to an existing list of files, and REMOVEFILE, to remove a file from consideration. After we load the log files we are interested in, we can call DBMS_LOGMNR.START_LOGMNR and tell it the name of the dictionary file we created. We used a minimal call to START_LOGMNR here, passing just the name of the dictionary file. We will look at some of the other options to START_LOGMNR in the next section, Options and Usage.
Now that we've loaded a log file and started LogMiner, we are ready to take our first look at the contents of
V$LOGMNR_CONTENTS. V$LOGMNR_CONTENTS has lots of information in it and, for now, we'll look at a very small slice of the data available. Specifically, we'll investigate the SCN, SQL_REDO, and SQL_UNDO columns. In case you are not familiar with it, the SCN is a simple timing mechanism that Oracle uses to guarantee ordering of transactions, and to enable recovery from failure. SCNs are also used to guarantee read-consistency and checkpointing in the database. Think of the SCN as a ticker – every time someone commits, the SCN is incremented by one. Here is a sample query from our example above, where we lowercased the names in the EMP and DEPT tables:
tkyte@TKYTE816> column sql_redo format a20 word_wrapped
tkyte@TKYTE816> column sql_undo format a20 word_wrapped
tkyte@TKYTE816> select scn, sql_redo, sql_undo from v$logmnr_contents
  2  /
       SCN SQL_REDO             SQL_UNDO
---------- -------------------- --------------------
6.4430E+12 set transaction read
write;
6.4430E+12 update TKYTE.EMP set update TKYTE.EMP set
ENAME = 'smith' ENAME = 'SMITH'
where ROWID = where ROWID =
'AAAHdYAAGAAAAJKAAA' 'AAAHdYAAGAAAAJKAAA'
; ;
6.4430E+12
6.4430E+12 update TKYTE.EMP set update TKYTE.EMP set
ENAME = 'allen' ENAME = 'ALLEN'
where ROWID = where ROWID =
'AAAHdYAAGAAAAJKAAB' 'AAAHdYAAGAAAAJKAAB'
; ;
...(many similar rows snipped out)...
6.4430E+12 update TKYTE.DEPT update TKYTE.DEPT
set DNAME = 'sales' set DNAME = 'SALES'
where ROWID = where ROWID =
'AAAHdZAAGAAAAKKAAC' 'AAAHdZAAGAAAAKKAAC'
; ;
6.4430E+12 update TKYTE.DEPT update TKYTE.DEPT
set DNAME = set DNAME =
'operations' where 'OPERATIONS' where
ROWID = ROWID =
'AAAHdZAAGAAAAKKAAD' 'AAAHdZAAGAAAAKKAAD'
; ;
6.4430E+12 commit;
22 rows selected.
As you can see, our two SQL statements generated many more than two SQL statements from the redo log. The redo log contains the bits and bytes that we changed – not SQL. Therefore our multi-row statement UPDATE
EMP SET ENAME = LOWER(ENAME) is presented by LogMiner as a series of single row updates. LogMiner currently cannot be used to retrieve the actual SQL performed at run-time. It can only reproduce equivalent
SQL, SQL that does the same thing but in many individual statements.
Now, we'll go one step further with this example. The V$LOGMNR_CONTENTS view has 'placeholder' columns. These placeholder columns are useful for finding particular updates for up to five columns in your table. The placeholder columns can tell us the name of the changed column, and show us the 'before' value of the column, and the 'after' column value. Since these columns are broken out from the SQL, it would be very easy to find the transaction in which the ENAME column was updated (the name placeholder column would have ENAME) from KING (the before image placeholder column would have KING in it) to king. We'll do another quick UPDATE example, and set up the necessary column mapping file to demonstrate this. The column mapping file (colmap for short) is used to tell LogMiner which columns are of interest to you by table. We can map up to five columns per table into these placeholder columns. The format of a colmap file is simply:
colmap = TKYTE DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = TKYTE EMP (1, EMPNO, 2, ENAME, 3, JOB, 4, MGR, 5, HIREDATE);

This will map the DEPT DEPTNO column to the first placeholder column when we are looking at a row for the DEPT table. It will map the EMP EMPNO column to this placeholder column when we are looking at an EMP row. The column mapping file in general has lines that consist of the following (items in bold are constants, <sp> represents a single, mandatory space):

colmap<sp>=<sp>OWNER<sp>TABLE_NAME<sp>(1,<sp>CNAME[,<sp>2,<sp>CNAME]...);
The case of everything is important – the OWNER must be uppercase, and the table name must be the 'correct' case (uppercase is usually the correct case, unless you've used quoted identifiers to create objects). The spaces are mandatory as well. In order to make using the column mapping file a little easier, I use a script such as:

set linesize 500
set trimspool on
set feedback off
set heading off
set embedded on
spool logmnr.opt
select 'colmap = ' || user || ' ' || table_name || ' (' ||
       max( decode( column_id, 1, column_id , null ) ) ||
       max( decode( column_id, 1, ', '||column_name, null ) ) ||
       max( decode( column_id, 2, ', '||column_id , null ) ) ||
       max( decode( column_id, 2, ', '||column_name, null ) ) ||
       max( decode( column_id, 3, ', '||column_id , null ) ) ||
       max( decode( column_id, 3, ', '||column_name, null ) ) ||
       max( decode( column_id, 4, ', '||column_id , null ) ) ||
       max( decode( column_id, 4, ', '||column_name, null ) ) ||
       max( decode( column_id, 5, ', '||column_id , null ) ) ||
       max( decode( column_id, 5, ', '||column_name, null ) ) || ');' colmap
  from user_tab_columns
 group by user, table_name
/
spool off

in SQL*Plus to generate the logmnr.opt file for me. For example, if I execute this script in a schema that contains only the EMP and DEPT tables from the SCOTT/TIGER account, I will see:

tkyte@TKYTE816> @colmap
colmap = TKYTE DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = TKYTE EMP (1, EMPNO, 2, ENAME, 3, JOB, 4, MGR, 5, HIREDATE);
It always picks the first five columns of the table. If you desire a different set of five columns, just edit the resulting logmnr.opt file this creates, and change the column names. For example, the EMP table has three more columns that are not shown in the above colmap – SAL, COMM, and DEPTNO. If you wanted to see the SAL column instead of the JOB column, the colmap would simply be:

tkyte@TKYTE816> @colmap
colmap = TKYTE DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = TKYTE EMP (1, EMPNO, 2, ENAME, 3, SAL, 4, MGR, 5, HIREDATE);
Important considerations for the colmap file, beyond its need to have the correct case and whitespace, are:

❑ The file must be named logmnr.opt. No other name may be used.
❑ This file must be in the same directory as your dictionary file.
❑ You must be using a dictionary file in order to use a colmap file.
So, we will now modify all of the columns in the DEPT table. I am using four different UPDATEs, each against a different row and set of columns, so that we'll better see the effect of the placeholder columns:

tkyte@TKYTE816> alter system archive log current;

tkyte@TKYTE816> update dept set deptno = 11
  2   where deptno = 40
  3  /

tkyte@TKYTE816> update dept set dname = initcap(dname)
  2   where deptno = 10
  3  /

tkyte@TKYTE816> update dept set loc = initcap(loc)
  2   where deptno = 20
  3  /

tkyte@TKYTE816> update dept set dname = initcap(dname),
  2         loc = initcap(loc)
  3   where deptno = 30
  4  /

tkyte@TKYTE816> commit;

tkyte@TKYTE816> alter system archive log current;
We can review the column-by-column changes now by loading the newly generated archived redo log file and starting LogMiner with the option USE_COLMAP. Note that I did generate the logmnr.opt file using the script above, and I placed that file in the same directory with my dictionary file:

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4
  5      select name into l_name
  6        from v$archived_log
  7       where completion_time = ( select max(completion_time)
  8                                   from v$archived_log );
  9
 10      sys.dbms_logmnr.add_logfile( l_name, sys.dbms_logmnr.NEW );
 11  end;
 12  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr
  3  ( dictFileName => 'c:\temp\miner_dictionary.dat',
  4    options => sys.dbms_logmnr.USE_COLMAP );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, ph1_name, ph1_undo, ph1_redo,
  2         ph2_name, ph2_undo, ph2_redo,
  3         ph3_name, ph3_undo, ph3_redo
  4    from v$logmnr_contents
  5   where seg_name = 'DEPT'
  6  /
       SCN PH1_NA PH1 PH1 PH2_N PH2_UNDO   PH2_REDO   PH3 PH3_UNDO PH3_REDO
---------- ------ --- --- ----- ---------- ---------- --- -------- --------
6.4430E+12 DEPTNO 40 11
6.4430E+12 DNAME accounting Accounting
6.4430E+12 LOC DALLAS Dallas
6.4430E+12 DNAME sales Sales LOC CHICAGO Chicago
So, this output clearly shows us (from line 1, for example) that DEPTNO had a before image value of 40 (PH1) and became 11. This makes sense, since we did SET DEPTNO = 11 WHERE DEPTNO = 40. Notice that the remaining columns in that first row of output are Null. This is because Oracle logs changed bytes only; there is no before/after image of the DNAME and LOC columns for that row. The second row shows the update of the DNAME column from accounting to Accounting, and no changes for DEPTNO or LOC, as these columns were not affected. The last row shows that when we modified two columns with our UPDATE statement, they both show up in the placeholder columns.
As you can see, using the placeholder columns can be very convenient if you are trying to locate a specific transaction in a large set of redo. If you know that the transaction updated the X table, and changed the Y column from a to b , finding this transaction will be a breeze.
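For instance, with the colmap shown above (which maps ENAME to the second placeholder column of EMP), a query along the following lines could locate the hypothetical update of KING to king. This is a sketch only, assuming LogMiner was started with the USE_COLMAP option:

select scn, session_info, sql_redo
  from v$logmnr_contents
 where seg_name = 'EMP'
   and ph2_name = 'ENAME'
   and ph2_undo = 'KING'
   and ph2_redo = 'king';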
These are the two packages that implement the LogMiner functionality – DBMS_LOGMNR and DBMS_LOGMNR_D. The DBMS_LOGMNR_D (the _D stands for 'dictionary') package has exactly one procedure in it, which is BUILD. This is used to build the data dictionary used by the DBMS_LOGMNR package when loading a redo log file. It will map object IDs to table names, determine data types, map column positions to column names, and so on. Using the DBMS_LOGMNR_D.BUILD routine is very straightforward. It takes two parameters:
❑ DICTIONARY_FILENAME – The name of the dictionary file to be created. In our examples, we have been using the file name miner_dictionary.dat.

❑ DICTIONARY_LOCATION – The path to where this file will be created. This routine uses UTL_FILE to create the file, so this path must be a valid path as set in the utl_file_dir init.ora parameter. See the UTL_FILE section later in this appendix for more details on configuring UTL_FILE. A minimal init.ora sketch is shown after this list.
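As a point of reference, a minimal init.ora setting might look like the following. The directory shown is just an example; it must be a directory the database server is allowed to write to:

utl_file_dir = c:\temp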
That's it for BUILD. Neither parameter is optional, so both must be supplied. If you receive an error from this routine similar to the following:

tkyte@TKYTE816> exec sys.dbms_logmnr_d.build( 'x.dat', 'c:\not_valid\' );
BEGIN sys.dbms_logmnr_d.build( 'x.dat', 'c:\not_valid\' ); END;

*
ERROR at line 1:
ORA-01309: specified dictionary file cannot be opened
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 793
ORA-06512: at line 1

it will mean that the directory you are attempting to use is not set in the utl_file_dir init.ora parameter.
The DBMS_LOGMNR package itself has only three routines:

❑ ADD_LOGFILE – To register the set of log files to be analyzed.

❑ START_LOGMNR – To populate the V$LOGMNR_CONTENTS view.

❑ END_LOGMNR – To release all resources allocated by the LogMiner processing. This should be called before exiting your session to release resources cleanly, or when you are done using LogMiner.

The ADD_LOGFILE routine, as we have seen above, is called before actually starting LogMiner. It simply builds a list of log files that START_LOGMNR will process, and populate into the V$LOGMNR_CONTENTS view for us. The inputs to ADD_LOGFILE are:

❑ LOGFILENAME – The fully qualified filename of the archived redo log file you want to analyze.

❑ OPTIONS – Specifies how to add (or remove) this file. We use the DBMS_LOGMNR constants:

    DBMS_LOGMNR.NEW – Start a new list. If a list already exists, this will empty the list.

    DBMS_LOGMNR.ADDFILE – Add to an already started list or empty list.

    DBMS_LOGMNR.REMOVEFILE – Remove a file from the list.
If we wanted to analyze the last two archive redo log files, we would call ADD_LOGFILE twice. For example:

tkyte@TKYTE816> declare
  2      l_cnt number default 0;
  3  begin
  4      for x in (select name
  5                  from v$archived_log
  6                 order by completion_time desc )
  7      loop
  8          l_cnt := l_cnt+1;
  9          exit when ( l_cnt > 2 );
 10
 11          sys.dbms_logmnr.add_logfile( x.name );
 12      end loop;
 13
 14      sys.dbms_logmnr.start_logmnr
 15      ( dictFileName => 'c:\temp\miner_dictionary.dat',
 16        options => sys.dbms_logmnr.USE_COLMAP );
 17  end;
 18  /

PL/SQL procedure successfully completed.
Within that same session, after we've started LogMiner, we may call ADD_LOGFILE to add more log files, remove uninteresting ones, or, if you use DBMS_LOGMNR.NEW, reset the list of log files to be just that one new file. Calling DBMS_LOGMNR.START_LOGMNR after making changes to the list will effectively flush the V$LOGMNR_CONTENTS view, and repopulate it with the contents of the log files that are on the list now.
Moving on to DBMS_LOGMNR.START_LOGMNR, we see that we have many inputs. In the above examples, we have only been using two out of the six available to us. We've used the dictionary file name and the options (to specify we wanted to use a colmap file). The available inputs in full are:

❑ STARTSCN and ENDSCN – If you know the system change number range you are interested in exactly, this will limit the rows in V$LOGMNR_CONTENTS to be between these values. This is useful after you've loaded a full log file, and determined the upper and lower SCNs you are interested in. You can restart LogMiner with this range to cut down on the size of V$LOGMNR_CONTENTS. These values default to 0, implying they are not used by default.

❑ STARTTIME and ENDTIME – Instead of using the SCN, you can supply a date/time range. Only log entries that fall between the start and end time will be visible in the V$LOGMNR_CONTENTS view. These values are ignored if STARTSCN and ENDSCN are used, and default to 01-Jan-1988 and 01-Jan-2988.

❑ DICTFILENAME – The fully qualified path to the dictionary file created by DBMS_LOGMNR_D.BUILD.

❑ OPTIONS – Currently there is only one option to DBMS_LOGMNR.START_LOGMNR, and this is DBMS_LOGMNR.USE_COLMAP. This directs LogMiner to look for a logmnr.opt file in the same directory in which it finds the DICTFILENAME. It is important to note that the name of the colmap file must be logmnr.opt, and it must reside in the same directory as the dictionary file. A sketch of a restricted START_LOGMNR call follows this list.
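As a minimal sketch of a restricted restart, using the parameter names listed above (the SCN values are purely hypothetical, standing in for a range you determined from a previous pass over the log):

begin
    sys.dbms_logmnr.start_logmnr
    ( startScn     => 6442991097000,   -- hypothetical lower bound
      endScn       => 6442991098000,   -- hypothetical upper bound
      dictFileName => 'c:\temp\miner_dictionary.dat',
      options      => sys.dbms_logmnr.USE_COLMAP );
end;
/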
The last procedure in DBMS_LOGMNR is simply the DBMS_LOGMNR.END_LOGMNR routine. This terminates the LogMiner session, and empties out the V$LOGMNR_CONTENTS view. After you call DBMS_LOGMNR.END_LOGMNR, any attempt to query the view will result in:

tkyte@TKYTE816> exec dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select count(*) from v$logmnr_contents;
select count(*) from v$logmnr_contents
*
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents
This is one of the more common uses of LogMiner I've seen. Someone 'accidentally' dropped a table. You would like to get it back, or just find out who did it. Or maybe someone updated an important table, you don't know who did it, and no one will own up to it. In any case, something happened, you do not have auditing enabled, but you have been running in archive log mode, and have the backups. You would like to restore your backups and do a point-in-time recovery to the point in time immediately prior to the DROP TABLE. In this fashion, you can restore and recover that table, stop recovery (hence not dropping the table again), and then export this table from the restored database and import it into the correct database. This allows you to restore the table with all of the changes intact.
In order to do this, we'll need to know either the exact time or the SCN of the DROP TABLE . Since clocks are out of sync, and people are probably panicked, they may give bad information in this case. What we can do is load up the archived log files from around the time of the DROP TABLE , and find the exact SCN of the point we want to recover.
We'll do another quick example here to isolate the statements you might see in LogMiner when a table is dropped. I'm using locally managed tablespaces, so if you are using dictionary-managed tablespaces, you may see more SQL than I do below. This extra SQL you see with a dictionary-managed tablespace will be the SQL executed to return the extents back to the system, and free up the space allocated to the table. So, here we go to drop the table:

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> drop table dept;

Table dropped.

tkyte@TKYTE816> alter system archive log current;

System altered.
Now, we want to locate the SQL_REDO that represents the DROP TABLE. If you recall, the actual SQL executed at run-time is not reported by LogMiner. Rather, the equivalent SQL is reported. We will not see a DROP TABLE statement – we will see data dictionary modifications. The one we are looking for will be a DELETE against SYS.OBJ$, which is the base table for holding all objects. Part of dropping a table involves deleting a row from SYS.OBJ$. Fortunately, when LogMiner builds the SQL_REDO to process a DELETE, it includes the column values in the SQL along with the row ID. We can use this fact to search for the DELETE of DEPT from OBJ$. Here is how:

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4      select name into l_name
  5        from v$archived_log
  6       where completion_time = ( select max(completion_time)
  7                                   from v$archived_log );
  8
  9      sys.dbms_logmnr.add_logfile( l_name, sys.dbms_logmnr.NEW );
 10  end;
 11  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr
  3  ( dictFileName => 'c:\temp\miner_dictionary.dat',
  4    options => sys.dbms_logmnr.USE_COLMAP );
  5  end;
  6  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, sql_redo
  2    from v$logmnr_contents
  3   where sql_redo like 'delete from SYS.OBJ$ %''DEPT''%'
  4  /
              SCN SQL_REDO
----------------- ----------------------------------------
    6442991097246 delete from SYS.OBJ$ where OBJ# = 30553
                  and DATAOBJ# = 30553 and OWNER# = 337 an
                  d NAME = 'DEPT' and NAMESPACE = 1 and SU
                  BNAME IS NULL and TYPE# = 2 and CTIME =
                  TO_DATE('29-APR-2001 12:32:11', 'DD-MON-
                  YYYY HH24:MI:SS') and MTIME = TO_DATE('2
                  9-APR-2001 12:32:11', 'DD-MON-YYYY HH24:
                  MI:SS') and STIME = TO_DATE('29-APR-2001
                  12:32:11', 'DD-MON-YYYY HH24:MI:SS') an
                  d STATUS = 1 and REMOTEOWNER IS NULL and
                  LINKNAME IS NULL and FLAGS = 0 and OID$
                  IS NULL and SPARE1 = 6 and ROWID = 'AAA
                  AASAABAAAFz3AAZ';
That is all there is to it. Now that we have found that the SCN was 6442991097246, we can do a point-in-time recovery elsewhere to recover this table, and restore it to our system. We can recover it to the very point in time immediately prior to it being dropped.
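On the restored copy of the database, the recovery itself might look something like the following. This is only a sketch; the exact steps (restoring the datafiles, the use of a backup controlfile, opening with RESETLOGS) depend entirely on how your backups were taken:

SQL> startup mount;
SQL> recover database until change 6442991097246;
SQL> alter database open resetlogs;
-- then export the recovered DEPT table and import it into the production database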
LogMiner uses PGA memory in order to perform its task. We have mentioned previously that this implies you cannot use DBMS_LOGMNR with MTS. What we haven't looked at is how much PGA memory LogMiner might actually use.
The log files on my system are 100 MB each. I loaded up two of them for analysis, measuring the before and after PGA memory use:

tkyte@TKYTE816> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and lower(a.name) like '%pga%'
  5  /

NAME                                VALUE
------------------------------ ----------
session pga memory                 454768
session pga memory max             454768

tkyte@TKYTE816> declare
  2      l_name varchar2(255) default
  3          'C:\oracle\ORADATA\tkyte816\archive\TKYTE816T001S012';
  4  begin
  5      for i in 49 .. 50
  6      loop
  7          sys.dbms_logmnr.add_logfile( l_name || i || '.ARC' );
  8      end loop;
  9
 10      sys.dbms_logmnr.start_logmnr
 11      ( dictFileName => 'c:\temp\miner_dictionary.dat',
 12        options => sys.dbms_logmnr.USE_COLMAP );
 13  end;
 14  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and lower(a.name) like '%pga%'
  5  /

NAME                                VALUE
------------------------------ ----------
session pga memory               11748180
session pga memory max           11748180
So, 200 MB of archive redo log is currently taking about 11.5 MB of PGA memory. This means that either the archived redo was mostly 'fluff', or that Oracle doesn't actually cache the entire redo log file in RAM. The answer is that Oracle doesn't actually cache the entire redo log file in RAM. Rather, it reads it from disk as needed. Only some information is cached in RAM.
If we actually query the V$LOGMNR_CONTENTS view right now and measure the amount of PGA memory in use after this operation, we'll see that the memory requirements for this will go up as we access it:

tkyte@TKYTE816> create table tmp_logmnr_contents unrecoverable
  2  as
  3  select * from v$logmnr_contents
  4  /

Table created.

tkyte@TKYTE816> select a.name, b.value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and lower(a.name) like '%pga%'
  5  /

NAME                                VALUE
------------------------------ ----------
session pga memory               19965696
session pga memory max           19965696
So, as you can see, our session now needs almost 20 MB of PGA memory.
LogMiner has some serious limits that you need to be aware of. These limits relate to the use of Oracle object types and chained rows.
Object types are somewhat supported by LogMiner. LogMiner is not able to rebuild the SQL you would typically use to access object types, and it is not able to support all object types. A quick example best demonstrates some of the limitations in this area. We'll start with a small schema with some common object types, such as VARRAYs and nested tables, in it:

tkyte@TKYTE816> create or replace type myScalarType
  2  as object
  3  ( x int, y date, z varchar2(25) );
  4  /

Type created.

tkyte@TKYTE816> create or replace type myArrayType
  2  as varray(25) of myScalarType
  3  /

Type created.

tkyte@TKYTE816> create or replace type myTableType
  2  as table of myScalarType
  3  /

Type created.

tkyte@TKYTE816> drop table t;

Table dropped.

tkyte@TKYTE816> create table t ( a int, b myArrayType, c myTableType )
  2  nested table c store as c_tbl
  3  /

Table created.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
  3                           'c:\temp' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> alter system switch logfile;

System altered.

tkyte@TKYTE816> insert into t values ( 1,
  2    myArrayType( myScalarType( 2, sysdate, 'hello' ) ),
  3    myTableType( myScalarType( 3, sysdate+1, 'GoodBye' ) )
  4  );

1 row created.

tkyte@TKYTE816> alter system switch logfile;

System altered.
So, in the above example, we created some object types, added a table that utilizes these types, re-exported our data dictionary, and then finally did some isolated DML on this object. Now we are ready to see what LogMiner is able to tell us about these operations:

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.add_logfile( 'C:\oracle\rdbms\ARC00028.001',
  3                               dbms_logmnr.NEW );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr
  3  ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, sql_redo, sql_undo
  2    from v$logmnr_contents
  3  /
       SCN SQL_REDO             SQL_UNDO
---------- -------------------- --------------------
    824288
    824288
    824288
    824288 set transaction read
           write;
    824288 insert into          delete from
           TKYTE.C_TBL(NESTED_T TKYTE.C_TBL where
           ABLE_ID,X,Y,Z)       NESTED_TABLE_ID =
           values               HEXTORAW('252cb5fad8
           (HEXTORAW('252cb5fad 784e2ca93eb432c2d35b
           8784e2ca93eb432c2d35 7c') and X = 3 and Y
           b7c'),3,TO_DATE('23- =
           JAN-2001 16:21:44',  TO_DATE('23-JAN-2001
           'DD-MON-YYYY         16:21:44',
           HH24:MI:SS'),'GoodBy 'DD-MON-YYYY
           e');                 HH24:MI:SS') and Z =
                                'GoodBye' and ROWID
                                =
                                'AAAFaqAADAAAAGzAAA'
                                ;
    824288
    824288
    824288
    824288 insert into          delete from TKYTE.T
           TKYTE.T(A,B,SYS_NC00 where A = 1 and B =
           00300004$) values    Unsupported Type and
           (1,Unsupported       SYS_NC0000300004$ =
           Type,HEXTORAW('252cb HEXTORAW('252cb5fad8
           5fad8784e2ca93eb432c 784e2ca93eb432c2d35b
           2d35b7c'));          7c') and ROWID =
                                'AAAFapAADAAAARjAAA'
                                ;
    824288

10 rows selected.
As you can see, our original single INSERT:

tkyte@TKYTE816> insert into t values ( 1,
  2    myArrayType( myScalarType( 2, sysdate, 'hello' ) ),
  3    myTableType( myScalarType( 3, sysdate+1, 'GoodBye' ) )
  4  );

1 row created.

was turned into two INSERTs: one for the child table (the nested table), and one for the parent table T. LogMiner does not reproduce the single INSERT – equivalent SQL was produced. As we look closer however, we will notice that in the INSERT INTO T, we see Unsupported Type as one of the column values. Looking back at the original INSERT, we can see that the unsupported type is in fact our VARRAY. LogMiner is not capable of reproducing this particular construct.

This does not remove all of the usefulness of LogMiner with regards to objects. It does prevent us from using it to undo or redo transactions, since it cannot faithfully reproduce the necessary SQL. However, we can still use it to analyze historical trends, perform auditing, and the like. Perhaps of more interest is that it gives us the ability to see how Oracle physically implements object types under the covers. For example, look at the insert into T:

insert into tkyte.t ( a, b, SYS_NC0000300004$) values ...

It is pretty clear to us what A and B are. They are our INT and MyArrayType (VARRAY) columns. However, where is C and what is this SYS_NC0000300004$ column? Well, C is our nested table, and nested tables are actually physically stored as a parent/child table. C is not stored in T; it is stored in a wholly separate table. The column SYS_NC0000300004$ is actually a surrogate primary key on T, and is used as a foreign key in C_TBL – the nested table. If we look at the INSERT into the nested table:

insert into tkyte.c_tbl( nested_table_id, x, y, z ) values ...

we can see that the NESTED_TABLE_ID was added to our nested table, and this column is, in fact, used to join to the T.SYS_NC0000300004$ column. Further, looking at the value that is put into both of these columns:

HEXTORAW('252cb5fad8784e2ca93eb432c2d35b7c')

we can see that Oracle is, by default, using a system-generated 16 byte RAW value to join C_TBL with T.
Therefore, through LogMiner analysis, we can gain a better understanding of how various features in Oracle are implemented. Here, we have seen how a nested table type is really nothing more than a parent/child table with a surrogate key in the parent table, and a foreign key in the child table.
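If you want to confirm this parent/child storage arrangement from the data dictionary itself, a query such as the following can be used. This is a sketch only; it assumes the USER_NESTED_TABLES dictionary view, which reports the storage table used for each nested table column:

select table_name, table_type_name, parent_table_name, parent_table_column
  from user_nested_tables;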
LogMiner currently does not handle chained or migrated rows. A chained row is a row that spans more than one Oracle block. A migrated row is a row that started on one block when it was inserted but, due to an UPDATE, grew too large to fit on this block with the other rows that were there, and was therefore 'moved' to a new block. A migrated row retains its original row ID – the block it was originally on has a pointer to the new location for the row. A migrated row is a special type of chained row. It is a chained row where no data will be found on the first block, and all of the data will be found on the second block.
In order to see what LogMiner does with chained rows, we will artificially create one. We'll start with a table that has nine CHAR(2000) columns. I am using an 8 KB block size for my database, so if all nine columns have values, they will be 18,000 bytes in size, which is too big to fit on a block. This row will have to be chained onto at least three blocks. The table we'll use to demonstrate this limitation is as follows:

tkyte@TKYTE816> create table t ( x int primary key,
  2    a char(2000),
  3    b char(2000),
  4    c char(2000),
  5    d char(2000),
  6    e char(2000),
  7    f char(2000),
  8    g char(2000),
  9    h char(2000),
 10    i char(2000) );

Table created.
Now, to demonstrate the issue, we'll insert a row into T with a value for only column X and column A. The size of this row will be about 2,000 plus bytes. Since B, C, D, and so on are Null, they will consume no space whatsoever. This row will fit on a block. We will then update this row, and supply values for B, C, D, and E. Since CHARs are always blank padded, this will cause the size of the row to increase from 2,000 plus bytes to 10,000 plus bytes, forcing it to chain onto two blocks. We'll then update every column in the row, growing the row to 18 KB, forcing it to span three blocks. We can then dump the redo with LogMiner, and see what it does with it:

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr_d.build( 'miner_dictionary.dat',
  3                           'c:\temp' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> alter system archive log current;

System altered.

tkyte@TKYTE816> insert into t ( x, a ) values ( 1, 'non-chained' );

1 row created.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> update t set a = 'chained row',
  2         b = 'x', c = 'x',
  3         d = 'x', e = 'x'
  4   where x = 1;

1 row updated.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> update t set a = 'chained row',
  2         b = 'x', c = 'x',
  3         d = 'x', e = 'x',
  4         f = 'x', g = 'x',
  5         h = 'x', i = 'x'
  6   where x = 1;

1 row updated.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> alter system archive log current;

System altered.
Now that we've created the exact case we want to analyze, we can dump it via LogMiner. Don't forget, we need to rebuild our data dictionary file after we create table T, or the output will be unreadable!

tkyte@TKYTE816> declare
  2      l_name v$archived_log.name%type;
  3  begin
  4
  5      select name into l_name
  6        from v$archived_log
  7       where completion_time = ( select max(completion_time)
  8                                   from v$archived_log );
  9
 10      sys.dbms_logmnr.add_logfile( l_name, dbms_logmnr.NEW );
 11  end;
 12  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> begin
  2  sys.dbms_logmnr.start_logmnr
  3  ( dictFileName => 'c:\temp\miner_dictionary.dat' );
  4  end;
  5  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select scn, sql_redo, sql_undo
  2    from v$logmnr_contents
  3   where sql_redo is not null or sql_undo is not null
  4  /
             SCN SQL_REDO                    SQL_UNDO
---------------- --------------------------- ---------------------------
6442991118354 set transaction read write;
6442991118354 insert into TKYTE.T(X,A) va delete from TKYTE.T where X
lues (1,'non-chained = 1 and A = 'non-chained
'); ' and ROWID
= 'AAAHdgAAGAAAACKAAA';
6442991118355 commit;
6442991118356 set transaction read write;
6442991118356 Unsupported (Chained Row) Unsupported (Chained Row)
6442991118356 Unsupported (Chained Row) Unsupported (Chained Row)
6442991118357 commit;
6442991118358 set transaction read write;
6442991118358 Unsupported (Chained Row) Unsupported (Chained Row)
6442991118358 Unsupported (Chained Row) Unsupported (Chained Row)
6442991118358 Unsupported (Chained Row) Unsupported (Chained Row)
6442991118359 commit;
12 rows selected.
As you can see, the original INSERT we did was reported by LogMiner as you would expect. The UPDATE, however, since it caused the row to be chained, is not reported by LogMiner. Instead, it reports Unsupported (Chained Row). It is interesting to note that it reports this twice for our first UPDATE, and three times for the second. LogMiner is reporting changes by database block. If your row is on two blocks, there will be two change entries in V$LOGMNR_CONTENTS. If your row is on three blocks, then there will be three entries. You just need to be aware of the fact that LogMiner cannot faithfully reproduce the SQL to redo or undo actions against chained and migrated rows.
LogMiner has some other limitations similar to the above. In addition, it does not currently support:
❑ Analysis of IOTs.
❑ Analysis of clustered tables and indexes.
The V$LOGMNR_CONTENTS view contains a row for every logical change to the database retrieved from the processed redo log files. We have already used this view many times, but have utilized only a small fraction of the columns within it. The following describes all of the columns available in this view, with a more detailed description of what is available in them than is found in the Oracle documentation:
SCN – System change number associated with the transaction that made this change.

TIMESTAMP – Date when the redo record was generated. Timestamps cannot be used to infer ordering of redo records. Since the SCN is assigned upon COMMIT, only the SCN can be used to infer ordering of redo records. Ordering by timestamp in a multi-user system will result in the wrong order.

THREAD# – Identifies the thread that generated the redo record.

LOG_ID – Identifies the log file within the V$LOGMNR_FILES table that contains the redo record. This is a foreign key to the V$LOGMNR_FILES view.

XIDUSN – Transaction ID (XID) undo segment number (USN). The transaction identifier is constructed from the XIDUSN, XIDSLOT, and XIDSQN, and is used to identify the transaction that generated the change. These three fields taken together uniquely identify the transaction.

XIDSLOT – Transaction ID slot number. Identifies the transaction table entry number.

XIDSQN – Transaction ID sequence number.

RBASQN – Uniquely identifies the log that contained this redo record, among a group of redo logs. An RBA (Redo Block Address) is composed of the RBASQN, RBABLK, and RBABYTE fields.

RBABLK – The block number within the log file.

RBABYTE – The byte offset within the block.

UBAFIL – UBA (Undo Block Address) file number, identifying the file containing the undo block. The UBA is constructed from the UBAFIL, UBABLK, UBASQN, and UBAREC, and is used to identify the undo generated for the change.

UBABLK – UBA block number.

UBAREC – UBA record index.

UBASQN – UBA undo block sequence number.

ABS_FILE# – Data block absolute file number. The ABS_FILE#, together with the REL_FILE#, DATA_BLOCK#, DATA_OBJ#, and DATA_DOBJ#, identifies the block changed by the transaction.

REL_FILE# – Data block relative file number. The file number is relative to the tablespace of the object.

DATA_BLOCK# – Data block number.

DATA_OBJ# – Data block object number.

DATA_DOBJ# – Data block data object number, identifying the object within the tablespace.

SEG_OWNER – Name of the user owning the object.

SEG_NAME – Name of the structure the segment was allocated for (in other words, table name, cluster name, and so on). Partitioned tables will have a segment name constructed of two parts: the table name followed by a comma-separated partition name (for example, (TableName,PartitionName)).

SEG_TYPE – The type of the segment in numeric form.

SEG_TYPE_NAME – The type of the segment in string form (in other words, TABLE, INDEX, and so on). Only the type TABLE will be supported in the initial release. Other segment types will be reported as UNSUPPORTED.

TABLE_SPACE_NAME – Name of the tablespace.

ROW_ID – Row ID.

SESSION# – Identifies the session that generated the redo. A Null value will be reported if the session number is not available from the redo log.

SERIAL# – Serial number of the session that generated the redo. The SESSION# and SERIAL# can be used to uniquely identify the Oracle session. A Null value will be reported if the session number is not available from the redo log.

USERNAME – Name of the user initiating the operation that generated the redo record. The user name will always be Null if the archive auditing option is not enabled. This auditing is enabled via the init.ora parameter TRANSACTION_AUDITING.

SESSION_INFO – String containing the login username, client information, OS username, machine name, OS terminal, OS PID, and OS program name.

ROLLBACK – A value of 1 (True) identifies operations and SQL statements that were generated as a result of a rollback request, 0 (False) otherwise.

OPERATION – Type of SQL operation. Only INSERT, DELETE, UPDATE, COMMIT, and BEGIN_TRANSACTION will be reported. All other operations will be reported as UNSUPPORTED or INTERNAL_OPERATION.

SQL_REDO, SQL_UNDO – These columns contain SQL-compliant statements that represent the logical redo and undo operations decoded from one or more archive log records. A Null value indicates that no valid SQL statement can be generated for this redo record. Some redo records may not be translatable. In this case, the SQL_REDO and SQL_UNDO will be Null, and the STATUS column will contain the string UNSUPPORTED.

RS_ID – Record Set ID. Uniquely identifies the set of records used to generate a SQL statement (a set may be a single record). It can be used to determine when multiple records generate a single SQL statement. The RS_ID will be identical for all records within the set. The SQL statement will appear only in the last row of the record set. The SQL_REDO and SQL_UNDO columns for all other rows within the set will be Null. Note that the RS_ID/SSN pair together provide a unique SQL identifier for every SQL statement generated (see SSN).

SSN – SQL Sequence Number. Can be used to identify multiple rows, with valid SQL_REDO statements, that are generated from a single redo record (in other words, array inserts, direct loads). All such rows will have the same RS_ID, but a unique SSN. The SSN is an incrementing value starting at 1 for each new RS_ID.

CSF – Continuation SQL Flag. When set to 1 (True), it indicates that a LogMiner-generated SQL_REDO or SQL_UNDO statement is larger than the maximum size of the VARCHAR2 (currently 4,000 characters) data type. SQL statements exceeding this limit will span multiple rows. The next row entry will contain the remainder of the SQL statement. The RS_ID, SSN pair will be identical for all continued rows corresponding to the same SQL statement. The last of the continued rows will have CSF set to 0 (False) to indicate the end of the SQL continuation.

STATUS – Indicates the status of the translation. A Null value indicates a successful translation; UNSUPPORTED indicates that this version of LogMiner does not support the SQL translation; READ_FAILURE indicates an internal operating system failure to read from the log file; TRANSLATION_ERROR indicates that LogMiner was unable to complete the translation (this may be due to a corrupted log or an out-of-date dictionary file).

PH1_NAME – Placeholder column name. Placeholder columns are generic columns that can be assigned to specific database table columns, via an optional LogMiner mapping file.

PH1_REDO – Placeholder column redo value.

PH1_UNDO – Placeholder column undo value.

PH2_NAME – Placeholder column name.

PH2_REDO – Placeholder column redo value.

PH2_UNDO – Placeholder column undo value.

PH3_NAME – Placeholder column name.

PH3_REDO – Placeholder column redo value.

PH3_UNDO – Placeholder column undo value.

PH4_NAME – Placeholder column name.

PH4_REDO – Placeholder column redo value.

PH4_UNDO – Placeholder column undo value.

PH5_NAME – Placeholder column name.

PH5_REDO – Placeholder column redo value.

PH5_UNDO – Placeholder column undo value.
LogMiner is not a tool you will use every day – I cannot anticipate any application that would actually use it as part of its processing. It is an easy way to see what the database does, however, and is an excellent exploration tool in this regard. We have seen how LogMiner can be useful in finding out 'who did what and when' after the fact – this is the use of LogMiner I've seen more often than any other. You have an errant program that is doing something it wasn't supposed to, or you have a privileged person doing things they shouldn't be (and not owning up to it). If auditing wasn't turned on, you have no other way to go back in time and see what happened. In a pinch, this tool can be used to undo an errant transaction as well, given that it supplies you with the SQL UNDO and REDO statements. In general, you'll find that LogMiner is a good 'stealth' tool. It won't be on the top ten list of executed procedures, but when you need it, it's good to know it is there.