
Transaction Management with LogMiner and Flashback Data Archive

LogMiner is an often ignored yet very powerful tool in the Oracle Database. It is used to extract DML statements from the redo log files: the original SQL that made the changes and even the SQL that can undo them. In Oracle Database 11g, however, Oracle Enterprise Manager has a graphical interface to extract transactions from the redo logs using LogMiner, which makes it extremely easy to use the tool to examine and roll back transactions. (Note: As in previous versions, you can continue to use the DBMS_LOGMNR package to perform command line-driven log mining if you wish.)
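For reference, here is a minimal command-line sketch of that kind of mining session. The one-hour window is illustrative, DICT_FROM_ONLINE_CATALOG assumes the source database is open, and CONTINUOUS_MINE assumes the database runs in ARCHIVELOG mode:

begin
   dbms_logmnr.start_logmnr(
        starttime => sysdate - 1/24,
        endtime   => sysdate,
        options   => dbms_logmnr.dict_from_online_catalog +
                     dbms_logmnr.continuous_mine
   );
end;
/

-- examine the mined statements
select username, sql_redo, sql_undo
from   v$logmnr_contents
where  username = 'SCOTT';

-- close the session
begin
   dbms_logmnr.end_logmnr;
end;
/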

Let's see an example of how this is done. To enable log mining, you need only minimal supplemental logging enabled for the database, or at least for the table in question. Flashback Transaction additionally requires primary key supplemental logging. To enable both for the entire database, issue the following commands (a table-level alternative is sketched after them):




  
SQL> alter database add supplemental log data;
 
Database altered.
 
SQL> alter database add supplemental log data (primary key) columns;
 
Database altered.
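If you only care about a particular table, the same logging can instead be enabled at the table level; a sketch, using the RES table from the example that follows:

alter table scott.res add supplemental log data (primary key) columns;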
Now, consider the following statements issued by an application against your database:
 
SQL> insert into res values (100002,sysdate,12,1);
 
1 row created.
 
SQL> commit;
 
Commit complete.

SQL> update res set hotel_id = 13 where res_id = 100002;

1 row updated.
 
SQL> commit;
 
Commit complete.

SQL> delete res where res_id = 100002;
 
1 row deleted.
 
SQL> commit;
 
Commit complete.




Note the statements carefully: each one is followed by a commit, so each statement is a separate transaction. Now let's see how you can examine these transactions with LogMiner in Oracle Database 11g Database Control.

In the Enterprise Manager screen, from the Database homepage, go to the tab labeled Availability.

[Screenshot: Availability tab on the Database homepage]

Click View and Manage Transactions, listed under Manage. This brings up the main LogMiner interface, as shown below:

[Screenshot: the main LogMiner search interface]

You can enter specific ranges of time or SCNs to search for transactions. In the figure above, I have entered a time range in the Query Time Range section. In the Query Filter, I have restricted the search to the user SCOTT, because that account was used to perform all the DMLs. In the Advanced Query section, you can enter any additional filter. After all fields are entered, click Continue.

This kicks off the log mining process, which searches through the redo logs (both online and archived, if needed) and finds the transactions issued by the user SCOTT. After the process completes, you will see the results screen.

The top portion of the results screen looks like this:

[Screenshot: LogMiner search results, summary portion]

The results indicate that the search found two transactions by SCOTT, which affected two records.

The bottom portion of the screen shows the details of those transactions. Here is a partial view of the screen. You can see that a transaction shows as "1 ins" (meaning "1 insert statement"). The leftmost column shows the transaction identifier (XID), a number that uniquely identifies a transaction.

[Screenshot: LogMiner search results, transaction details]

If you click on a transaction identifier, you can see the details of that transaction, as shown in the screen below:

[Screenshot: details of a single transaction]

As you can see, you can use Database Control to search for and identify transactions. Click the Previous Transaction and Next Transaction buttons to scroll through all the transactions found by the search.

Use Cases

How can you use this feature? Well, several ways. The most important use may be to find out "who" did "what." If you don't have auditing enabled for performance reasons, or just haven't kept the audit records, all you have to do is search for clues in the LogMiner interface by mining the redo logs, online as well as archived ones. In the search screen, you can enter additional filtering conditions in the Advanced Query field under Query Filter.

Suppose you want to find a transaction where the record with RES_ID = 100002 was inserted, deleted, or updated. You can search for a specific value in the redo stream by using the column_present function in the dbms_logmnr package, as shown below:

[Screenshot: Advanced Query filter using dbms_logmnr.column_present]

This function extracts all the transactions that involved the value 100002 in the RES_ID column of the RES table in the SCOTT schema.
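The filter entered in that screen is simply a predicate built with those functions. The same predicate can also be used directly against V$LOGMNR_CONTENTS in a command-line session; a minimal sketch, assuming a LogMiner session has already been started:

select xid, sql_redo, sql_undo
from   v$logmnr_contents
where  seg_owner = 'SCOTT'
and    table_name = 'RES'
and    dbms_logmnr.column_present(redo_value, 'SCOTT.RES.RES_ID') = 1
and    dbms_logmnr.mine_value(redo_value, 'SCOTT.RES.RES_ID') = '100002';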

You can also use this feature to unearth the DDL commands issued against the database. To do that, select the View DDL Only radio button in the Query Filter section.
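From the command line, the equivalent is to filter V$LOGMNR_CONTENTS on its operation code; a sketch, again assuming a mining session is already running:

select timestamp, username, sql_redo
from   v$logmnr_contents
where  operation = 'DDL';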

Backout of Selected Transactions

When you examine a transaction, what do you want to do with it? One thought, perhaps the reason you are looking into the transaction in the first place, is that the transaction was made in error and you want to undo it. That's fairly simple: if the transaction is an insert, you just have to delete the row; if it is an update, the undo is an update that sets the row back to the older value.
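For the statements shown earlier, the compensating SQL is easy to write by hand; a sketch based on the values used above (HOTEL_ID was 12 before the update):

-- to undo the insert
delete from res where res_id = 100002;

-- to undo the update
update res set hotel_id = 12 where res_id = 100002;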

However, note the transactions used in the example carefully. The first transaction inserts a row. The second one updates the row just inserted, and the third one deletes that very row. The first one (the insert) is the transaction you want to back out. But here is a problem: the row has already been deleted by a subsequent transaction, so what is the undo going to be in this case?

This is where the Dependent Transaction viewing feature in Oracle Database 11g comes in handy. Click Flashback Transaction. After some searching, it will present a screen similar to the one below:

[Screenshot: Flashback Transaction dependency report]

This screen shows you the dependent transactions, the update and the delete, as well. Now when you back out the transaction, you can back out the dependents too. To do so, choose the Cascade radio button from the list below and click OK.

[Screenshot: back-out options with the Cascade radio button]

It will show you the transactions to be backed out; click the Transaction IDs to see what SQL statements Oracle will issue to undo each specific transaction.

[Screenshot: undo SQL for the selected transaction]

For instance, to undo the insert, it has to issue a delete, as shown above. If you click on the next transaction (just below it), you will see the details of what needs to be done to back that one out:

[Screenshot: undo SQL for the dependent transaction]

You get the idea. Click Submit, and all these transactions will be rolled back in one sweep. This is the cleanest way to undo a transaction and its dependents.

Command Line Interface

What if you don't have access to Enterprise Manager, or perhaps you want this done through a script? The package DBMS_FLASHBACK, which is also present in Oracle Database 10g, has a new procedure called TRANSACTION_BACKOUT. This procedure is overloaded, so you should pass values using named parameters, as shown below.

declare
   trans_arr xid_array;
begin
   -- the transaction identifiers found earlier
   trans_arr := xid_array('030003000D040000','F30003000D04010');
   dbms_flashback.transaction_backout (
        numtxns         => 2,            -- number of XIDs in the array
        xids            => trans_arr,
        options         => dbms_flashback.cascade
   );
end;
/
The type XID_ARRAY is also new in Oracle Database 11g. It exists so that you can pass a series of transaction identifiers to the procedure.
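Where do the XIDs come from? Besides the LogMiner screens shown earlier, one convenient source is the FLASHBACK_TRANSACTION_QUERY view, which also shows ready-made undo SQL for each change; a sketch, assuming you are interested in SCOTT's RES table:

select xid, operation, undo_sql
from   flashback_transaction_query
where  table_owner = 'SCOTT'
and    table_name  = 'RES';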

Other LogMiner Improvements

If you have been using XMLType as a data type (and you have more reasons to use it in Oracle Database 11g), you will be happy to see that XML data is mined as well by LogMiner. It shows up in both the SQL_REDO and SQL_UNDO columns.

You can set an option called SKIP_CORRUPTION while starting LogMiner, which skips corrupt blocks in the redo logs. So, you can still salvage valid data from a redo log even if it is partially damaged. Here is how you can use the improved syntax:

begin
   dbms_logmnr.start_logmnr(
        options => dbms_logmnr.skip_corruption
   );
end;
/

Flashback Data Archive

Oracle9i Database Release 2 introduced the proverbial time machine in the form of Flashback Query, which allows you to select the pre-changed version of the data. For example, if you changed a value from 100 to 200 and committed, you could still select the value as it was two minutes earlier. This technology used the pre-change data from the undo segments. In Oracle Database 10g, this facility was enhanced with the introduction of Flashback Versions Query, where you can even track the changes made to a row, as long as the changes are still present in the undo segments.
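As a quick illustration of those two query forms, here is a sketch against the TRANS table used later in this article (the time windows are illustrative):

-- Flashback Query: the value as of a point in the past
select txn_amt
from   trans as of timestamp (systimestamp - interval '2' minute)
where  trans_id = 2;

-- Flashback Versions Query: every version of the row still available in undo
select versions_starttime, versions_operation, txn_amt
from   trans versions between timestamp (systimestamp - interval '15' minute)
                          and systimestamp
where  trans_id = 2;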

However, there was a little problem: When the database is recycled, the undo data is cleaned out and the pre-change values disappear. Even if the database is not recycled, the data may be aged out of the undo segments to make room for new changes.

Since pre-11g flashback operations depend on the undo data, which is available only for a short duration, you can't really use it over an extended period of time or for more permanent recording such as for auditing. As a workaround, we resorted to writing triggers to make more permanent records of the changes to the database.

Well, don't despair. In Oracle Database 11g, Flashback Data Archive combines the best of both worlds: it offers the simplicity and power of flashback queries but does not rely on transient storage like undo. Rather, it records changes in a more permanent location: a Flashback Data Archive.

Let's look at an example. (Note: Automatic Undo Management must be active for Flashback Data Archive to work.) First, create a Flashback Data Archive, as shown below:

SQL> create flashback archive near_term
  2  tablespace far_near_term
  3  retention 1 month
  4  /

Flashback archive created.
For the time being, ignore the meaning of the "retention" clause; we will revisit it later. The archive is the location where the changes will be recorded, and here it is created in the tablespace FAR_NEAR_TERM.

Assume you have to record changes to a table called TRANS. All you need to do is enable Flashback Data Archive on the table to start recording the changes in that archive.

SQL> alter table trans flashback archive near_term;

Table altered.
This puts the table into Flashback Data Archive mode. All changes to the rows of the table will now be tracked permanently. Let's see a demonstration.

First, select a specific row of the table.

SQL> select txn_amt from trans where trans_id = 2;

   TXN_AMT
----------
  19325.67

SQL> update trans set txn_amt = 2000 where trans_id = 2;

1 row updated.

SQL> commit;

Commit complete.
Now, if you select the row, it will always display 2000 in this column. To find out the older value as of a certain time, you can use Flashback Query, as shown below:
select txn_amt
from trans
as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
where trans_id = 2;

   TXN_AMT
----------
  19325.67
  
Now, after some time, when the undo data has been purged out of the undo segments, query the flashback data again:
select txn_amt
from trans
as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
where trans_id = 2;
It comes back with the result: 19325.67. The undo is gone, so where did the data come from?

Let's ask Oracle. You can do that using autotrace and examining the execution plan:


SQL> set autotrace traceonly explain
SQL> select txn_amt
  2  from trans
  3  as of timestamp to_timestamp ('07/18/2007 12:39:00','mm/dd/yyyy hh24:mi:ss')
  4  where trans_id = 2;

Execution Plan
----------------------------------------------------------
Plan hash value: 535458644

----------------------------------------------------------

| Id  | Operation                 | Name               | Rows  | 
------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                    |     2 |    
|   1 |  VIEW                     |                    |     2 |    
|   2 |   UNION-ALL               |                    |       |     
|*  3 |    FILTER                 |                    |       |     
|   4 |     PARTITION RANGE SINGLE|                    |     1 |    
|*  5 |      TABLE ACCESS FULL    | SYS_FBA_HIST_68909 |     1 |    
|*  6 |    FILTER                 |                    |       |   
|*  7 |     HASH JOIN OUTER       |                    |     1 |  
|*  8 |      TABLE ACCESS FULL    | TRANS              |     1 |    
|   9 |      VIEW                 |                    |     2 |  
|* 10 |       TABLE ACCESS FULL   | SYS_FBA_TCRV_68909 |     2 |  
---------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(NULL IS NOT NULL)
   5 - filter("TRANS_ID"=2 AND "ENDSCN">161508784336056 AND "ENDSCN"<=1073451 AND ("STARTSCN" IS NULL
              OR "STARTSCN"<=161508784336056))
   6 - filter("F"."STARTSCN"<=161508784336056 OR "F"."STARTSCN" IS NULL)
   7 - access("T".ROWID=("F"."RID"(+)))
   8 - filter("T"."VERSIONS_STARTSCN" IS NULL AND "T"."TRANS_ID"=2)
  10 - filter(("ENDSCN" IS NULL OR "ENDSCN">1073451) AND ("STARTSCN" IS NULL OR "STARTSCN"<1073451))

Note
-----
   - dynamic sampling used for this statement
This output answers the riddle "Where did the data come from?": it came from the table SYS_FBA_HIST_68909, which is a location in the Flashback Data Archive you defined earlier for that table. You can look at the table, but directly querying it is not supported by Oracle. Anyway, I don't see a reason you would want to do that.

How long is the data inside the archive retained? This is where the retention period comes into play: the data is kept for that period, and after it expires the older data is purged as new data comes in. You can also purge it yourself, e.g.

alter flashback archive near_term purge before scn 1234567;
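A couple of related maintenance commands follow the same pattern; the values here are illustrative:

alter flashback archive near_term purge before timestamp (systimestamp - interval '2' day);

alter flashback archive near_term modify retention 2 year;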

Managing Flashback Archives

You can add more than one tablespace to an archive. Conversely, you can remove a tablespace from one too. If you are planning to use a tablespace that holds other user data as well, you run the risk of crowding it with Flashback Data Archive data and leaving no space for the user data. To reduce that risk, you can establish a quota on how much space the archive can take inside the tablespace. You can set the quota with:

alter flashback archive near_term modify tablespace far_near_term quota 10M;
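Adding and removing tablespaces uses the same ALTER FLASHBACK ARCHIVE syntax; a sketch, assuming a second tablespace named FAR_EXTRA exists:

alter flashback archive near_term add tablespace far_extra quota 100M;

alter flashback archive near_term remove tablespace far_extra;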
You can check which tables have Flashback Data Archive turned on by querying the dictionary view:
SQL> select * from user_flashback_archived_tables;

TABLE_NAME                     OWNER_NAME
------------------------------ ------------------
FLASHBACK_ARCHIVE_NAME
-------------------------------------------------
TRANS                          ARUP
NEAR_TERM
You can find out about the archives by querying the dictionary view:
SQL> select * from flashback_archives;

FLASHBACK_ARCHI FLASHBACK_ARCHIVE# RETENTION_IN_DAYS  PURGE_SCN STATUS
--------------- ------------------ ----------------- ---------- -------
NEAR_TERM                        1                30    1042653
MED_TERM                         2               365    1042744
LONG_TERM                        3              1825    1042838
Using multiple archives lets you handle different situations creatively. For instance, a hotel company's database may need one year of reservation data but three years of payment data. You can define multiple archives with different retention policies and assign them to the appropriate tables. Or, if you have a uniform retention policy, you can define a single archive and make it the default.
alter flashback archive near_term set default;
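A sketch of assigning different archives to different tables (the table names here are illustrative):

alter table reservations flashback archive near_term;
alter table payments flashback archive long_term;

-- with a default archive defined, the archive name can be omitted
alter table guests flashback archive;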
When you don't need an archive for a table, you can turn it off with:
alter table trans no flashback archive;
As you can see, you have just enabled a powerful change-recording system without writing a single line of code.

Differences vs. Regular Auditing

How does Flashback Data Archive differ from regular auditing? First of all, the latter requires the audit_trail parameter to be set to DB or DB_EXTENDED, and the trails are written to the AUD$ table in the SYSTEM tablespace. Flashback Data Archives can be defined on any tablespace (or on more than one, even on tablespaces where user data exists) and therefore can be placed on cheaper storage.

Second, auditing is based on autonomous transactions, which carry some performance overhead. Flashback Data Archives are written by a dedicated background process called FBDA, so there is less impact on performance.

Finally, Flashback Data Archives can be purged at regular intervals automatically. Audit trails must be manually maintained.

Use Cases

Flashback Data Archive is handy for many purposes. Here are some ideas:

  • To audit for recording how data changed
  • To enable an application to undo changes (correct mistakes)
  • To debug how data has been changed
  • To comply with regulations that require that data not be changed for a period of time; Flashback Data Archives are not regular tables, so they can't be changed by typical users
  • To record audit trails on cheaper storage, thereby allowing longer retention at less cost

Conclusion

Mistakes happen, but now you can assert with authority that you will be able to identify the specific changes that caused a mistake, and you have the tools to roll back those mistakes in their entirety using transaction back-out. And you are no longer limited to mining the changes from the online and archived redo logs; the changes can be recorded in perpetuity in Flashback Data Archives. For all practical purposes, you can now audit changes to your tables with just a few commands.
