— Ivica Arsov's blog

As you may already know, we can't export scheduler jobs owned by the SYS user. This is because dbms_metadata.get_ddl, the package function used to extract the DDL, works similarly to a Data Pump export, and since SYS objects are marked non-exportable we can't export them.

If you try to get the DDL for a scheduler job owned by SYS, you'll hit error ORA-31603, as you can see from the example below.


SQL> show user;
USER is "SYS"
SQL>
SQL>
SQL> begin
  2  dbms_scheduler.create_job(job_name => 'my_job',
  3  job_type => 'plsql_block',
  4  job_action => 'begin null; end;',
  5  start_date => (trunc(sysdate) + 10));
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> select dbms_metadata.get_ddl('PROCOBJ','MY_JOB','SYS') from dual;
ERROR:
ORA-31603: object "MY_JOB" of type PROCOBJ not found in schema "SYS"
ORA-06512: at "SYS.DBMS_METADATA", line 5805
ORA-06512: at "SYS.DBMS_METADATA", line 8344
ORA-06512: at line 1

no rows selected

 

The solution is to copy the scheduler job to another user and then get the DDL from that copy.

 

SQL>
SQL> show user;
USER is "SYS"
SQL>
SQL> exec dbms_scheduler.copy_job('SYS.MY_JOB','IARSOV.MY_JOB');

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> conn iarsov/iarsov
Connected.
SQL>
SQL>
SQL> set long 300
SQL>
SQL> select dbms_metadata.get_ddl('PROCOBJ','MY_JOB','IARSOV') from dual;

DBMS_METADATA.GET_DDL('PROCOBJ','MY_JOB','IARSOV')
--------------------------------------------------------------------------------
BEGIN
dbms_scheduler.create_job('"MY_JOB"',
job_type=>'PLSQL_BLOCK', job_action=>
'begin null; end;'
, number_of_arguments=>0,
start_date=>TO_TIMESTAMP_TZ('15-DEC-2014 12.00.00.000000000 AM +01:00','DD-MON-R
RRR HH.MI.SSXFF AM TZR','NLS_DATE_LANGUAGE=english'), repeat_interval=>
NULL
, end_date

DBMS_METADATA.GET_DDL('PROCOBJ','MY_JOB','IARSOV')
--------------------------------------------------------------------------------
SQL>
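Once the DDL has been captured, the copied job can be removed so it does not linger in the other schema. A minimal cleanup sketch (not from the original post; dbms_scheduler.copy_job creates the copy in a disabled state, so it never runs, but dropping it keeps the schema clean):

```sql
-- Drop the copied job from the IARSOV schema after the DDL is extracted.
begin
  dbms_scheduler.drop_job(job_name => 'IARSOV.MY_JOB');
end;
/
```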

 

Regards,
Ivica


If you use application logic to mask sensitive data, with 12c you now have an option to accomplish the same effect at the database level, more efficiently and securely. This feature is called Oracle Data Redaction. The solution is very simple: it modifies sensitive data based on user-defined policies. An important note to remember is that the defined policies do not modify data blocks at the storage level. Instead, the data is modified on the fly before the results are returned to the user/application. This makes it a very fast and easy-to-implement solution for data masking.
We can mask characters, numbers and dates. Before redaction can take place we need to define policies which determine the type of redaction, what to redact and how to redact.

There are several types from which we can choose: NONE, FULL, PARTIAL, RANDOM and REGEXP.

NONE – masking is not used.
FULL – data is replaced with a fixed value.
PARTIAL – we define which part of the data is masked.
RANDOM – the system generates a random value to replace the original data.
REGEXP – data is masked based on a regular expression pattern.

In this part 1 I am going to show data redaction for characters; in the following part I'll cover numbers and dates.

#Preparation
I have created the following table just for the purpose of this post (I haven't applied any architectural design principles).

SQL> conn iarsov@pdbx
Enter password:
Connected.

SQL> create table mtab(user_id number,
name varchar2(20),
surname varchar2(50),
account varchar2(15),
created date);

Table created.

SQL>
SQL> insert into mtab values(1, 'Steven', 'King', '123-4567-891234', sysdate);

1 row created.

SQL>
SQL> select user_id,name,surname,account,created from mtab;

   USER_ID NAME   SURNAM ACCOUNT         CREATED
---------- ------ ------ --------------- ----------
         1 Steven King   123-4567-891234 24.11.2014

SQL>

Let's define a policy to redact/mask the account column. The package we need is called DBMS_REDACT.

We add a policy with the ADD_POLICY procedure. The required parameters are object_name, policy_name and expression; all other parameters have default values. For object_name we pass the object for which we want to create the policy, which can be a table or a view. For expression we specify an expression that evaluates to a boolean value; if it evaluates to TRUE the policy is applied. In this example I've set the expression so that the policy applies to all users except the "IARSOV" user.

FULL data redaction

For the FULL type the system uses default values. We can determine which default values are used by querying the REDACTION_VALUES_FOR_TYPE_FULL dictionary view. Those values can also be changed with the UPDATE_FULL_REDACTION_VALUES procedure.
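A quick sketch of both operations (not from the original post; the parameter name varchar_val is taken from the DBMS_REDACT documentation and should be checked against your version):

```sql
-- Inspect the defaults used by FULL redaction; column names vary by
-- version, so select everything.
select * from redaction_values_for_type_full;

-- Hypothetical example: change the default FULL-redaction value for
-- VARCHAR2 columns to '*'. Per the documentation, the instance must be
-- restarted before the new default takes effect.
begin
  dbms_redact.update_full_redaction_values(varchar_val => '*');
end;
/
```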

I've created the following policy for the mtab table's account column with the FULL type.

SQL> conn iarsov@pdbx
Enter password:
Connected.

SQL> @redact_full.sql
SQL>
SQL> begin
  2
  3  DBMS_REDACT.ADD_POLICY (
  4     object_schema => USER,
  5     object_name => 'mtab',
  6     policy_name => 'mtab_account_full',
  7     column_name => 'ACCOUNT',
  8     function_type => DBMS_REDACT.FULL,
  9     function_parameters => null,
 10     expression => 'SYS_CONTEXT(''USERENV'',''USER'') <> ''IARSOV''');
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL>

In order to test the previously created policy I've created another user, orax, which has been granted the SELECT privilege on the iarsov.mtab table.

SQL> conn system@pdbx
Enter password:
Connected.

SQL>
SQL>
SQL> create user orax identified by orax;

User created.

SQL> grant create session to orax;

Grant succeeded.

SQL> grant select on iarsov.mtab to orax;

Grant succeeded.

SQL> conn orax@pdbx
Enter password:
Connected.

SQL> select name,surname,account from iarsov.mtab;

NAME   SURNAM ACCOUNT
-----  ------ ---------------
Steven King

SQL>

The value of the account column has been replaced with an empty string, which is the default value for FULL data redaction of character data.
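For completeness, a policy can later be switched off or removed with the same package. A minimal sketch using the schema and policy names from the example above:

```sql
-- Temporarily disable the policy (it can be re-enabled with
-- dbms_redact.enable_policy).
begin
  dbms_redact.disable_policy(
    object_schema => 'IARSOV',
    object_name   => 'mtab',
    policy_name   => 'mtab_account_full');
end;
/

-- Remove the policy entirely.
begin
  dbms_redact.drop_policy(
    object_schema => 'IARSOV',
    object_name   => 'mtab',
    policy_name   => 'mtab_account_full');
end;
/
```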


Oracle with the release of 12c introduced a new feature called invisible columns, which basically allows developers to create new columns (as I would say) not visible to the world :) . It enables end-users to continue accessing the application while developers enhance it.
These kinds of columns are not visible to the application because in order to access a column we have to explicitly specify its name in the select statement. "SELECT * FROM …" will also not show the column. This is because the COLUMN_ID column in the *_TAB_COLUMNS dictionary views, which determines the order of the columns retrieved when you issue "SELECT * FROM …", has a NULL value for invisible columns (although in my opinion you should never use "*" in your queries).

How do you define invisible columns?
It is very straightforward with the INVISIBLE keyword.

SQL> create table mytable(id number, col1 number invisible);

Table created.

SQL>

If we describe the previously created table we won't see the column that is set invisible.

SQL> desc mytable
 Name              Null?    Type
 ----------------- -------- ------------
 ID                         NUMBER

SQL>

Of course there is a solution for that.
We can use the SET COLINVISIBLE command in SQL*Plus; let's try.

SQL> set colinvisible on
SQL>
SQL> desc mytable
 Name                      Null?    Type
 ------------------------- -------- ------------
 ID                                 NUMBER
 COL1 (INVISIBLE)                   NUMBER

SQL>

In order to check which columns are invisible we have to look for a NULL value in COLUMN_ID in the *_TAB_COLUMNS dictionary views.

SQL> col table_name format a10
SQL> col column_name format a10
SQL>
SQL> select table_name,column_name 
from user_tab_columns 
where table_name = 'MYTABLE' and column_id is null;

TABLE_NAME COLUMN_NAME
---------- -----------
MYTABLE    COL1

SQL>

When we make the column visible, the system assigns it a new, highest COLUMN_ID value.

SQL> alter table mytable modify col1 visible;

Table altered.

SQL> select table_name,column_name,column_id 
from user_tab_columns 
where table_name = 'MYTABLE' 
order by column_id;

TABLE_NAME COLUMN_NAME  COLUMN_ID
---------- ----------- ----------
MYTABLE    ID                  1
MYTABLE    COL1                2

SQL>

We can also specify INVISIBLE on virtual columns.

SQL>
SQL> alter table mytable add col2 as (col1 * 10);

Table altered.

SQL>
SQL> alter table mytable modify col2 invisible;

Table altered.

SQL>
SQL> desc mytable;
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 ID                                                 NUMBER
 COL1                                               NUMBER
 COL2 (INVISIBLE)                                   NUMBER

SQL>

Let's insert some sample data and see the effect.

SQL> select id, col1, col2 from mytable;

no rows selected

SQL> insert into mytable values (1,10);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from mytable;

        ID       COL1
---------- ----------
         1         10

As you can see we didn't get the COL2 column because it's set to INVISIBLE.
Let's specify the column explicitly in the SELECT statement.

SQL> select id, col1, col2 from mytable;

        ID       COL1       COL2
---------- ---------- ----------
         1         10        100

SQL>



* The restrictions for this feature are that you can't use it on temporary, external or clustered tables.
You also can't make system-generated invisible columns visible.
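A minimal sketch (not from the original post) illustrating the first restriction: trying to declare an invisible column on a global temporary table is rejected by the database with an ORA- error.

```sql
-- This DDL fails on 12c because invisible columns are not supported
-- on temporary tables.
create global temporary table mytemp(
  id   number,
  col1 number invisible)
on commit delete rows;
```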




Regards,
Ivica


Today I encountered a problem with an insert statement which was executing slowly (the client started to complain).
It was an SQL statement which had to insert about 592,000 rows into a table that already had ~2 million rows.
The process was taking about 90 seconds for a simple insert into … select … from … where ….

I traced the session and used tkprof to format the trace file in order to see which statement was causing the problem.
From the tkprof output (sorted by elapsed time) I got this:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1     74.95      77.38      14353     201088    1910040      642400
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     74.98      77.40      14353     201088    1910040      642400

As you can see, 77.40 seconds were spent executing that insert statement.
When I reproduced this in a new session I queried V$SESS_TIME_MODEL to see the time model statistics:

SQL> @tune

... insert statement ...

       SID STAT_NAME                                               VALUE
---------- -------------------------------------------------- ----------
       357 DB time                                              56.27379
       357 sql execute elapsed time                            56.258497
       357 DB CPU                                                 54.913
       357 sequence load elapsed time                          50.660571
       357 repeated bind elapsed time                            .748207
       357 connection management call elapsed time                .00347
       357 parse time elapsed                                    .001308
       357 PL/SQL execution elapsed time                         .000323

From the output we can see that ~51 seconds were spent on sequence load elapsed time, which is the amount of elapsed time spent getting the next sequence number from the data dictionary. This shows that the problem lies with the sequence used by this particular statement.
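The @tune script itself isn't shown in the post; a hypothetical query in the same spirit (session id 357 is taken from the output above) could look like:

```sql
-- v$sess_time_model reports values in microseconds, hence the division
-- to get seconds; filter out zero statistics and show the largest first.
select sid,
       stat_name,
       value / 1e6 as seconds
from   v$sess_time_model
where  sid = 357
and    value > 0
order  by value desc;
```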

I checked the sequence in USER_SEQUENCES to see whether a cache is used. As I suspected, it was not:

SQL> select sequence_name,cache_size from user_sequences where lower(sequence_name) = 'asc_restriction_lists_s';

SEQUENCE_NAME                  CACHE_SIZE
------------------------------ ----------
ASC_RESTRICTION_LISTS_S                 0

Next, I set a cache for this sequence.
With CACHE we pre-allocate sequence numbers in memory so those cached numbers can be accessed faster. Be careful, because in some circumstances the numbers held in memory can be lost and you can end up with gaps.

SQL> alter sequence ASC_RESTRICTION_LISTS_S cache 100;

Sequence altered.

And when I re-ran the .sql script (insert statement) it finished in ~2 seconds.

SQL> @tune

592007 rows created.

Elapsed: 00:00:02.09

Regards,
Ivica


DBMS_AUDIT_MGMT is a package which you can use to manage audit trail records. It enables database administrators to define different settings related to audit trail records, such as the maximum size, cleanup operations, a job for periodic deletion, etc.
In this post I'll show the steps I used to set up an automatic job that deletes OS audit trail records older than 90 days.
You can find a nice overview of the DBMS_AUDIT_MGMT package here: http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_audit_mgmt.htm#ARPLS241

First we have to initialize the cleanup process by invoking the INIT_CLEANUP procedure from the DBMS_AUDIT_MGMT package.
This procedure needs to be invoked only once. You have to specify for which audit trail type you want to initialize the cleanup and set the default cleanup interval (in my case every 24 hours).

BEGIN
 DBMS_AUDIT_MGMT.INIT_CLEANUP(AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS, DEFAULT_CLEANUP_INTERVAL => 24);
END;
SQL> / 

PL/SQL procedure successfully completed.

Let’s confirm the initialization.
We can access that information in DBA_AUDIT_MGMT_CONFIG_PARAMS dictionary view.

SQL> col parameter_name format a40
SQL> col parameter_value format a40
SQL> set linesize 140
SQL> select * from dba_audit_mgmt_config_params;

PARAMETER_NAME             PARAMETER_VALUE   AUDIT_TRAIL
-------------------------- ----------------- ----------------------------
....
DEFAULT CLEAN UP INTERVAL  24                OS AUDIT TRAIL

11 rows selected.

Because I don't want to delete all files, just those (modified/created) which are older than 90 days, I've set the last archive timestamp parameter.
When the CLEAN_AUDIT_TRAIL procedure is called by the purge job (defined later), if USE_LAST_ARCH_TIMESTAMP is set to TRUE the cleanup will delete all files modified before the last archive timestamp.

BEGIN
 DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,LAST_ARCHIVE_TIME => trunc(systimestamp - 90));
END;
SQL> /

PL/SQL procedure successfully completed.

We can confirm this by looking in DBA_AUDIT_MGMT_LAST_ARCH_TS dictionary view.

SQL> select audit_trail, last_archive_ts from DBA_AUDIT_MGMT_LAST_ARCH_TS;

AUDIT_TRAIL LAST_ARCHIVE_TS
--------------- ------------------------------------
OS AUDIT TRAIL 06-AUG-14 12.00.00.000000 AM +02:00

Next we have to create the purge job which will automatically invoke CLEAN_AUDIT_TRAIL procedure.

SQL> BEGIN
 2 DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
 3 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
 4 AUDIT_TRAIL_PURGE_INTERVAL => 24,
 5 AUDIT_TRAIL_PURGE_NAME => 'Standard_Audit_Trail_OS',
 6 USE_LAST_ARCH_TIMESTAMP => TRUE);
 7* END;
SQL> /

PL/SQL procedure successfully completed.

As the last step I've created a scheduler job which moves the last archive timestamp forward every day.

BEGIN
  DBMS_SCHEDULER.create_job (
  job_name => 'update_audit_last_archive_time',
  job_type => 'PLSQL_BLOCK',
  job_action => 'BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-90);
  END;',
  start_date => SYSTIMESTAMP,
  repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
  end_date => NULL,
  enabled => TRUE,
  auto_drop => FALSE,
  comments => 'Move forward audit last archive time.');
END;
SQL> /

PL/SQL procedure successfully completed.

* Audit records written to syslog are not deleted. Cleanup is performed only on *.aud files in the directory specified by the AUDIT_FILE_DEST initialization parameter.
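To confirm everything is in place, the purge job created above can be checked in the audit management dictionary views. A minimal sketch:

```sql
-- List configured purge jobs; the job created above should appear
-- with AUDIT_TRAIL = 'OS AUDIT TRAIL' and a 24-hour frequency.
select job_name, job_status, audit_trail, job_frequency
from   dba_audit_mgmt_cleanup_jobs;
```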

 

Regards,
Ivica
