

Learn Oracle




Oracle 11g Security

Topics covered in this article are:

Learn Oracle : Transparent Tablespace Encryption
Learn Oracle : Encryption of Data Pump Dumpfiles
Learn Oracle : Case-Sensitive Passwords
Learn Oracle : Related Topics




Transparent Tablespace Encryption

Encryption is getting more and more attention these days, thanks to myriad new laws and regulations. You need to encrypt data somehow but the big question is, how?

For those still on Oracle Database 10g Release 1 and previous releases, the DBMS_CRYPTO and DBMS_OBFUSCATION_TOOLKIT toolkits let you build your own encryption framework. In Oracle Database 10g Release 2, this framework is baked in via the great Transparent Data Encryption feature.

Transparent Data Encryption lets you encrypt specific columns, which is adequate for most requirements. However, performance can be an issue with this feature (or any other encryption solution, for that matter): index range scans cannot be applied to encrypted columns, which creates a major drag on performance.
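For context, here is roughly what column-level encryption looks like; a minimal sketch with an illustrative table and column name (it relies on the same wallet setup described below):

create table customer_accounts
(
  acct_no  number,
  ssn      varchar2(11) encrypt using 'AES192'  -- only this column is stored encrypted
);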

This is where Transparent Tablespace Encryption in Oracle Database 11g really shines. When the tablespace is declared encrypted, any data in the tablespace (including transportable tablespaces, backups, and so on), not just tables individually declared as such, is encrypted. During index scans, however, the scanning occurs in memory, where the data is already decrypted, so there is no performance penalty.

Excited yet? Let's see how it is done. The setup procedure is identical to that of Transparent Data Encryption: you need to create a wallet where the master encryption key is stored. If you don't have Transparent Data Encryption set up already, you will need to create the wallet and the key.

First, create the file location for the wallet; the default location is $ORACLE_BASE/admin/<SID>/wallet. The wallet subdirectory does not exist by default; you need to create it. So, in my example, this directory will be /home/oracle/app/admin/PRODB3/wallet.
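For example, for the PRODB3 instance used here:

$ mkdir -p /home/oracle/app/admin/PRODB3/wallet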

Next, create the encryption key in the wallet by issuing the following statement:

alter system set encryption key identified by "abcd1234!";
This statement creates both the wallet and the key. If you check the directory now, you will see the wallet file you just created (ewallet.p12).
$ cd /home/oracle/app/admin/PRODB3/wallet
$ ls
ewallet.p12
The wallet can only be opened with a password, which in this case is abcd1234!. The statement also opens the wallet. Subsequently, you do not need to create the wallet again. After the database comes up, all you have to do is open the wallet by issuing:
alter system set encryption wallet open identified by "abcd1234!";
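To confirm the wallet is actually open, you can query the dynamic view V$ENCRYPTION_WALLET; its STATUS column should show OPEN:

select wrl_type, wrl_parameter, status
from v$encryption_wallet;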

Now create the tablespace:

create tablespace secure1
datafile '/home/oracle/oradata/PRODB3/secure1_01.dbf'
size 1M
encryption using 'AES128' 
default storage (encrypt) 
/
The clause "encryption using ... default storage (encrypt)" marks the tablespace as encrypted. (Note: we have used the AES 128-bit encryption for this tablespace. Other choices are Triple DES 168-bit key, AES 192-bit key, and AES 256-bit key.)

Now that the tablespace is created, you can create tables in it just as you would in a regular tablespace.

create table secure_trans
tablespace secure1
as
select * from trans
where rownum < 201
/

create table secure_res
tablespace secure1
as
select * from res
where rownum < 201
/
The above statements create tables inside the encrypted tablespace SECURE1. For comparison, create another tablespace normally (no encryption) called INSECURE1 and create the tables INSECURE_TRANS and INSECURE_RES there. INSECURE_TRANS and SECURE_TRANS are identical in terms of structure and data but are in different tablespaces. The same is true for SECURE_RES and INSECURE_RES.
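A minimal sketch of that comparison setup (the datafile name simply mirrors the pattern used above):

create tablespace insecure1
datafile '/home/oracle/oradata/PRODB3/insecure1_01.dbf'
size 1M
/

create table insecure_trans
tablespace insecure1
as
select * from trans
where rownum < 201
/

create table insecure_res
tablespace insecure1
as
select * from res
where rownum < 201
/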

Now update a text field in the tables so that you can search for it inside the datafiles:

update secure_trans set comments = 'Transaction Comments';
update insecure_trans set comments = 'Transaction Comments';
commit;
Force the contents to be written to disk by offlining and then onlining the tablespaces:
alter tablespace secure1 offline;
alter tablespace secure1 online;
alter tablespace insecure1 offline;
alter tablespace insecure1 online;
At this time, the data from the cache has been written to the disk. What happens if you search for it?
$ strings insecure1_01.dbf | grep Transaction
Transaction Comments
...
The string is in clear text in the datafile. Now do the same thing for the SECURE1 tablespace, which is encrypted.
$ strings secure1_01.dbf | grep Transaction
$
It returns nothing because the datafile is encrypted and you can't see the value of the columns in cleartext. That's all fine and good, but what about performance? Let's experiment by running the following query.
select hotel_id, sum(amt)
from secure_trans t, secure_res r
where t.res_id = r.res_id
group by hotel_id
While running the query, we will also trace it; a sketch of one way to capture the trace is shown below, followed by an excerpt from the tracefile.
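A minimal sketch using standard SQL tracing and the tkprof formatter (the raw tracefile name is a placeholder you would read from the server's trace directory):

SQL> alter session set tracefile_identifier = 'enc_test';
SQL> alter session set sql_trace = true;
SQL> -- run the query shown above, then turn tracing back off
SQL> alter session set sql_trace = false;

$ tkprof <raw_trace_file>.trc enc_test.prf sys=no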

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       14      0.01       0.01          4          6          0         186
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       16      0.01       0.01          4          6          0         186
 
Rows     Row Source Operation
-------  ---------------------------------------------------
    186  HASH GROUP BY (cr=6 pr=4 pw=4 time=5 us cost=8 size=10400 card=200)
    200   HASH JOIN  (cr=6 pr=4 pw=4 time=45 us cost=7 size=10400 card=200)
    200    TABLE ACCESS FULL SECURE_TRANS (cr=3 pr=2 pw=2 time=8 us cost=3 size=5200 card=200)
    200    TABLE ACCESS FULL SECURE_RES (cr=3 pr=2 pw=2 time=9 us cost=3 size=5200 card=200)
Now, run the same test against INSECURE_TRANS and INSECURE_RES, which are in a normal (unencrypted) tablespace.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       14      0.00       0.01          4          6          0         186
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       16      0.01       0.01          4          6          0         186
 
Rows     Row Source Operation
-------  ---------------------------------------------------
    186  HASH GROUP BY (cr=6 pr=4 pw=4 time=4 us cost=8 size=10400 card=200)
    200   HASH JOIN  (cr=6 pr=4 pw=4 time=46 us cost=7 size=10400 card=200)
    200    TABLE ACCESS FULL INSECURE_TRANS (cr=3 pr=2 pw=2 time=8 us cost=3 size=5200 card=200)
    200    TABLE ACCESS FULL INSECURE_RES (cr=3 pr=2 pw=2 time=9 us cost=3 size=5200 card=200)
Note the execution times in each case; they are similar. The CPU consumption due to decryption is not significant either. Thus, at least in this test, the encrypted tablespace imposes no appreciable performance penalty.

The view DBA_TABLESPACES has a new column, ENCRYPTED, to show whether a tablespace is encrypted. In addition, a new view, V$ENCRYPTED_TABLESPACES, shows what type of encryption is enabled for the tablespace.

SQL> desc v$encrypted_tablespaces
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 TS#                                                NUMBER
 ENCRYPTIONALG                                      VARCHAR2(7)
 ENCRYPTEDTS                                        VARCHAR2(3)
 
SQL> select * from v$encrypted_tablespaces;
 
       TS# ENCRYPT ENC
---------- ------- ---
         5 AES128  YES
This view can be joined with V$TABLESPACE on the TS# column to get the complete picture; a sketch of the join follows the describe output below. Here is how V$TABLESPACE looks:
SQL> desc v$tablespace
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 TS#                                                NUMBER
 NAME                                               VARCHAR2(30)
 INCLUDED_IN_DATABASE_BACKUP                        VARCHAR2(3)
 BIGFILE                                            VARCHAR2(3)
 FLASHBACK_ON                                       VARCHAR2(3)
 ENCRYPT_IN_BACKUP                                  VARCHAR2(3)
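A minimal sketch of that join, listing each encrypted tablespace by name along with its algorithm:

select t.name, e.encryptionalg, e.encryptedts
from v$tablespace t, v$encrypted_tablespaces e
where t.ts# = e.ts#;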
Note that the column ENCRYPT_IN_BACKUP has nothing to do with transparent tablespace encryption. Rather, it reflects RMAN encryption of the tablespace during backup, a feature introduced in Oracle Database 10g Release 2.
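For completeness, that backup-time setting is driven from the RMAN prompt; a minimal sketch, using the unencrypted tablespace from the earlier example:

RMAN> configure encryption for tablespace insecure1 on;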

As you can see, Transparent Tablespace Encryption solves two problems in a rather elegant way: it encrypts data at rest on disk, and it does so without affecting performance, because the data is manipulated unencrypted inside the SGA.

Encryption of Data Pump Dumpfiles

Oracle Database 10g introduced one of the most powerful features for data movement: Data Pump, the successor to the original export/import tool. Apart from being just plain faster, Data Pump provided many benefits such as parallelizing the process and remapping the tablespace. In Oracle Database 11g, it also helps secure dumpfiles via a new parameter called ENCRYPTION.

A dumpfile sits outside the database and outside the realm of database security, yet it may contain potentially sensitive data. In today's security-conscious environments, dumpfiles present a unique set of problems. In some particularly security-conscious environments, DBAs resort to encrypting dumpfiles with third-party utilities after exporting the data, which is not a very convenient approach if you do a lot of exporting.

First, let's see how a typical dumpfile might be vulnerable. Suppose you have a table called TRANS, which contains a column called COMMENTS. The value in the column is "Transaction Comments". If you export this table normally:

$ expdp scott/tiger tables=trans dumpfile=insec.dmp directory=tmp_dir
Examine the dumpfile for the presence of the column value:
$ strings /tmp/insec.dmp | grep Transaction
A lot of matches will come up. The data in the dumpfile is not encrypted; it's in cleartext.

Now perform the export with the new parameter, ENCRYPTION. You also have to specify what type of algorithm to use. We'll use the AES 128-bit algorithm.

$ expdp scott/tiger tables=trans dumpfile=sec.dmp directory=tmp_dir \
     encryption=data_only encryption_algorithm=aes128
 
Export: Release 11.1.0.5.0 - Beta on Sunday, 22 July, 2007 18:17:30
 
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
 
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.5.0 - Beta
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  
'/******** AS SYSDBA' tables=scott.insecure_trans dumpfile=
sec.dmp directory=tmp_dir encryption=data_only encryption_algorithm=aes128 Estimate in progress using BLOCKS method... Processing object type TABLE_EXPORT/TABLE/TABLE_DATA Total estimation using BLOCKS method: 64 KB Processing object type TABLE_EXPORT/TABLE/TABLE . . exported "SCOTT"."TRANS" 16.82 KB 200 rows Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded ****************************************************************************** Dump file set for SYS.SYS_EXPORT_TABLE_01 is: /tmp/sec.dmp Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at 18:17:44
Search this dumpfile for the existence of the text string:
$ strings /tmp/sec.dmp | grep Transaction
$
No matches will come up because the values in the dumpfile are encrypted, not cleartext.

"But wait," you may ask, "Doesn't encryption require a key? Where did we pass the key to be used? Better yet, when decrypting, how will decryption occur without a key?"

The answer is very simple: the key comes from the wallet that was used in the Transparent Tablespace Encryption you saw earlier. You do not need to use the Transparent Tablespace Encryption feature to use Data Pump encryption, but you do need to follow the same steps to create the wallet. Needless to say, the wallet must be open during both the encryption and decryption processes.
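A minimal sketch of the import side, assuming the same tmp_dir directory object; because the dumpfile was encrypted in transparent (wallet-based) mode, no decryption password is passed, the wallet just has to be open:

alter system set encryption wallet open identified by "abcd1234!";

$ impdp scott/tiger tables=trans dumpfile=sec.dmp directory=tmp_dir table_exists_action=replace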

If you are familiar with the Data Pump tool, you might recall that a similar feature was already implemented through the parameter ENCRYPTION_PASSWORD, so you may be wondering what's different now.

Case-Sensitive Passwords

In Oracle Database releases prior to 11g, user passwords were case insensitive. For example:

SQL> conn scott/tiger
Connected.
SQL> conn scott/TIGER
Connected.
This arrangement presents a problem for standards such as the Payment Card Industry (PCI) Data Security Standard, which require passwords to be case sensitive.

Problem solved: in Oracle Database 11g, passwords can be case sensitive as well. While creating the database via DBCA, you will be prompted as to whether you want to upgrade to the "new security standards," one of which is the case-sensitive password. If you accept, passwords will be recorded in the same case in which they were created. Here is the resulting behavior, assuming you have accepted the new standard:

SQL> conn scott/tiger
Connected.
SQL> conn scott/TIGER
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.
Note how "tiger" and "TIGER" are treated differently.

Now, some of your apps may not be passing the password in the proper case right now. A typical example is a user input form: many forms accept passwords with no case conversion. With Oracle Database 11g, such a login may fail unless the user enters the password in the correct case or the developer changes the app to convert to upper or lower case (which may not be possible quickly).

If you wish, however, it is still possible to revert to case insensitivity by altering a system parameter, SEC_CASE_SENSITIVE_LOGON, as shown in the example below.

SQL> conn / as sysdba
Connected.
SQL>  alter system set sec_case_sensitive_logon = false;
 
System altered.
 
SQL> conn scott/TIGER
Connected.	
When you upgrade an existing Oracle 10g database to 11g, you can migrate your passwords to the new standard. You can check the status of a password by querying the DBA_USERS view, particularly the new column PASSWORD_VERSIONS.
select username, password, password_versions
from dba_users;

USERNAME                  PASSWORD                       PASSWORD
------------------------- ------------------------------ --------
SYSTEM                                                   10G 11G
SYS                                                      10G 11G
MGMT_VIEW                                                10G 11G
The first thing you notice is that the password column is NULL, not populated with the hashed value as it is in Oracle Database 10g and prior versions. So what happened to the password? It's still stored in the database (in the table USER$) but it is not visible in the DBA_USERS view. When the user is created as either global or externally authenticated, the status is indicated—GLOBAL or EXTERNAL—but the hash value of the password is not displayed.

Next, note the column PASSWORD_VERSIONS, which is new in Oracle Database 11g. This column signifies the case sensitivity of the password. The value "10G 11G" signifies that the user was either created in 10g and migrated to 11g or created in 11g directly.
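One practical consequence: for an account migrated from 10g, the stored password does not become case sensitive until it is changed. A minimal sketch (the new password is purely illustrative):

alter user scott identified by Tiger123;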

You can also enforce, if you wish, the case sensitivity of the SYSDBA password by specifying a new parameter, ignorecase, while creating the password file, as shown below:

$ orapwd file=orapwPRODB3 password=abc123 entries=10 ignorecase=n
In the above example the SYSDBA password will be abc123, not ABC123 or any other variation in case.

The ability to enforce a case-sensitive password not only makes it more difficult to crack passwords by brute force but also helps you meet many more compliance requirements. Even more important, you can enforce the requirement dynamically, without a database shutdown, which comes in handy during upgrades and when debugging login issues in legacy apps.

Related Topics:

Learn Oracle : Data Replay
Learn Oracle : Partitioning to Perfection
Learn Oracle : Transaction Management with LogMiner
Learn Oracle : Schema Management
Learn Oracle : SQL Plan Management
Learn Oracle : SQL Performance Analyzer
Learn Oracle : SQL Access Advisor









Source: Oracle Documentation
