I am not sure whether it is a bug, but if you try to drop a user with CASCADE while that user owns Advanced Queuing (AQ) queue tables, the drop fails.
It throws the following error:
ORA-00604: error occurred at recursive SQL level 1
ORA-24170: SCOTT.IAI_QUEUE_R is created by AQ, cannot be dropped directly
ORA-06512: at "SYS.DBMS_RULE_ADM", line 158
ORA-06512: at line 1
How to resolve it?
1. Grant execute permission on DBMS_AQADM to the user:
GRANT EXECUTE ON DBMS_AQADM TO SCOTT;
2. Log in as SCOTT and find the queue tables it owns:
select * from user_tables
where table_name like 'AQ%';
(You can also query USER_QUEUE_TABLES to list the queue tables directly.)
3. Drop the QUEUE tables using the following command.
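The drop command itself is not shown in the original post. Queue tables created by AQ cannot be dropped with a plain DROP TABLE (that is exactly what ORA-24170 complains about); they must be dropped through the DBMS_AQADM API. A sketch, using the queue table name from the error above:

```sql
-- Drop the AQ queue table through the supplied package.
-- FORCE => TRUE also stops and drops any queues still defined on it.
BEGIN
  DBMS_AQADM.DROP_QUEUE_TABLE(
    queue_table => 'SCOTT.IAI_QUEUE_R',
    force       => TRUE);
END;
/
```

Once all queue tables are gone, the `DROP USER scott CASCADE;` succeeds.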
A logical read occurs whenever a user requests data from the database, whether the data is already in the buffer cache or must first be read from disk. If the data has to be read from disk, a physical read occurs as well. Oracle keeps track of logical and physical reads in the V$SYSSTAT view.
Use the following SQL statement to retrieve the values required for the hit ratio calculation:
select name, value
from v$sysstat
where name in ('db block gets', 'consistent gets', 'physical reads');

Hit ratio = 1 - (physical reads / (db block gets + consistent gets))
Resizing buffer_cache and shared_pool:
If the cache hit ratio drops below 90%, then:
• For Oracle 8 and earlier: increase the initialisation parameter DB_BLOCK_BUFFERS.
• For Oracle 9 onwards: increase the initialisation parameter DB_CACHE_SIZE.
Calculating the cache hit ratio:

select 100 * (1 - (v3.value / (v1.value + v2.value))) "Cache Hit Ratio [%]"
from v$sysstat v1, v$sysstat v2, v$sysstat v3
where v1.name = 'db block gets'
  and v2.name = 'consistent gets'
  and v3.name = 'physical reads';
SQL> show parameter db_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 721420288

SQL> show parameter shared_pool_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_size                     big integer 134217728
Activity:

1. Resize the buffer cache:
alter system set db_cache_size=4096M scope=spfile;

2. Resize the shared pool:
alter system set shared_pool_size=4096M scope=spfile;

3. Restart the database:
shutdown immediate;
startup;

4. Check the parameter change:
show parameter db_cache_size
show parameter shared_pool

5. Align the pfile with the spfile:
create pfile from spfile;
Setting SELinux to Permissive mode (setenforce 0, as root) allows SELinux to continue running
and logging denial messages, but SELinux will not actually deny any operations.
Once Development has resolved this issue, you can (and should) return SELinux
to the default “Enforcing” mode as follows:
Commands, as root:
======================
setenforce 1
getenforce (returns “Enforcing”)
Oracle Database 10g provides a means to encrypt and decrypt your sensitive data using the built-in API called DBMS_CRYPTO. Using the DBMS_CRYPTO API, one can encrypt sensitive information such as SSNs, credit card numbers, and debit card numbers stored in the database, since these are not supposed to be stored in plain text for security reasons.
Let’s see how DBMS_CRYPTO can be used to encrypt data and also consider its performance implications.
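The listing below begins mid-way through a package body (it ends with `End cryptit;`), so the package specification and the key variable V_KEY are not shown. A possible sketch of those missing pieces; the key value here is purely illustrative (a real key should be generated, e.g. with DBMS_CRYPTO.RANDOMBYTES, and never hard-coded):

```sql
create or replace package cryptit as
  function encrypt_data( p_data in varchar2 ) return raw deterministic;
  function decrypt_data( p_data in raw ) return varchar2 deterministic;
end cryptit;
/

create or replace package body cryptit as
  -- Package-private key, 8 bytes for single DES. Illustrative value only.
  V_KEY raw(8) := utl_raw.cast_to_raw('8bytekey');

  -- ... the encrypt_data and decrypt_data functions shown below ...

end cryptit;
/
```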
Function encrypt_data( p_data IN VARCHAR2 ) Return RAW DETERMINISTIC
IS
  l_data      RAW(2048) := utl_raw.cast_to_raw(p_data);
  l_encrypted RAW(2048);
BEGIN
  l_encrypted := dbms_crypto.encrypt
                 ( src => l_data,
                   typ => DBMS_CRYPTO.DES_CBC_PKCS5,  -- Algorithm
                   key => V_KEY );
  Return l_encrypted;
END encrypt_data;
Function decrypt_data( p_data IN RAW ) Return VARCHAR2 DETERMINISTIC
IS
  l_decrypted RAW(2048);
BEGIN
  l_decrypted := dbms_crypto.decrypt
                 ( src => p_data,
                   typ => DBMS_CRYPTO.DES_CBC_PKCS5,  -- Algorithm
                   key => V_KEY );
  Return utl_raw.cast_to_varchar2(l_decrypted);
END decrypt_data;
End cryptit;
/
Package body created.
The package body consists of functions to encrypt and decrypt data with a key. As these functions always produce the same output for a specific input, we can declare them DETERMINISTIC. (A deterministic function always returns the same result whenever it is called with the same set of input values.)
Let's now make use of these functions to encrypt our sensitive data.
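As a sketch of such usage, assuming an EMP table with the SSN stored as RAW (the table and values here are illustrative):

```sql
-- Illustrative table; the encrypted SSN is stored as RAW.
create table emp ( empno number, ssn raw(2048) );

-- Store the encrypted value, never the plain text.
insert into emp values (1, cryptit.encrypt_data('123456702'));
insert into emp values (2, cryptit.encrypt_data('123456703'));
commit;

-- Read it back through the decrypt function.
select empno, cryptit.decrypt_data(ssn) as ssn from emp;
```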
For an equality predicate, the optimizer selects the index "T_SSN" as cost efficient. But when a range of values is to be scanned, the optimizer has no usable index on the encrypted column and falls back to a full table scan (FTS).
select * from emp where cryptit.decrypt_data(ssn) between '123456702' and '123456704';
Predicate information from the execution plan:
AND "CRYPTIT"."DECRYPT_DATA"(HEXTORAW("SSN"))<='123456704')
Note
-----
- dynamic sampling used for this statement
We can also create function-based indexes on encrypted columns so that the data is accessed faster with a better execution plan. This is the reason both ENCRYPT_DATA and DECRYPT_DATA are marked as DETERMINISTIC functions.
create index f_ssn_idx on emp(cryptit.decrypt_data(ssn));
Index created.
select * from emp where cryptit.decrypt_data(ssn) between '123456702' and '123456704';
The optimizer has rightly picked the function-based index to access the requested data quickly.
DBMS_CRYPTO is a great way to encrypt sensitive data, and we can also create indexes on the encrypted columns to speed up our queries. Lastly, remember to wrap the CRYPTIT package body (e.g. with the PL/SQL wrap utility) so that the key is not exposed.
DBMS_CRYPTO can also be used to encrypt data recursively. Let us try to encrypt the already encrypted SSN from the above example.
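A sketch of what that might look like: since DBMS_CRYPTO.ENCRYPT operates on RAW, the ciphertext can be fed straight back in for a second pass (the key literal here is illustrative, matching nothing in particular):

```sql
declare
  l_once  raw(2048);
  l_twice raw(2048);
begin
  -- First pass through the package function.
  l_once  := cryptit.encrypt_data('123456702');
  -- Second pass directly through DBMS_CRYPTO, since the input is already RAW.
  l_twice := dbms_crypto.encrypt( src => l_once,
                                  typ => dbms_crypto.des_cbc_pkcs5,
                                  key => utl_raw.cast_to_raw('8bytekey') );
  dbms_output.put_line( rawtohex(l_twice) );
end;
/
```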
What happens with a NOLOGGING tablespace? Will all the tables (segments) created in that tablespace be NOLOGGING by default?
By default, tables created in this tablespace inherit the NOLOGGING option. It can be overridden at the table level by specifying the LOGGING option when you create the table.
So you can have Tables with the option LOGGING in a Tablespace with the option NOLOGGING.
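As a sketch (the tablespace name and datafile path are illustrative):

```sql
-- Tablespace whose segments default to NOLOGGING.
create tablespace ts_nolog
  datafile '/u01/oradata/ORCL/ts_nolog01.dbf' size 100m
  nologging;

-- Inherits NOLOGGING from the tablespace:
create table t1 ( x number ) tablespace ts_nolog;

-- Overrides the tablespace default:
create table t2 ( x number ) tablespace ts_nolog logging;

-- Verify: T1 shows NO, T2 shows YES in the LOGGING column.
select table_name, logging from user_tables
where table_name in ('T1', 'T2');
```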
The only reason to create a tablespace with nologging, or to later set the nologging attribute on a tablespace, is to change the default of all objects (segments) created in that tablespace from that point forward. It has no effect at all on segments that already exist in the tablespace.
That's it. It has no effect on how long it will take the tablespace to be created, or recoverability of that tablespace.
At the table/index level, NOLOGGING only has an effect for direct-path operations, such as INSERT /*+ APPEND */, CTAS, or ALTER TABLE ... MOVE on a table, or index creation or rebuild. Besides this, there is also the FORCE LOGGING / NO FORCE LOGGING mode.
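For instance, only the direct-path variants below benefit from NOLOGGING; a conventional insert is always fully logged (table and index names are illustrative):

```sql
-- Conventional insert: fully logged regardless of NOLOGGING.
insert into big_copy select * from big_source;

-- Direct-path insert: minimal redo if BIG_COPY is NOLOGGING.
insert /*+ append */ into big_copy select * from big_source;
commit;

-- Same idea for CTAS and index rebuilds:
create table big_copy2 nologging as select * from big_source;
alter index big_copy_idx rebuild nologging;
```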
FORCE LOGGING / NO FORCE LOGGING is used to control the logging mode at the database or tablespace level. By default, the database and its tablespaces are in NO FORCE LOGGING mode.
FORCE LOGGING mode always overrides the NOLOGGING option at the tablespace or table level.
So if the database is in FORCE LOGGING mode, your tables will generate redo even if they were created with the NOLOGGING option.
This is useful in a Data Guard configuration, to enforce the redo generation needed to maintain the standby databases.
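The corresponding commands, with queries to check the current setting:

```sql
-- Enable FORCE LOGGING for the whole database (typical for Data Guard):
alter database force logging;

-- Or for a single tablespace:
alter tablespace users force logging;

-- Check the current settings:
select force_logging from v$database;
select tablespace_name, force_logging from dba_tablespaces;
```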