— Ivica Arsov's blog

Today I encountered a problem with an insert statement that was executing slowly (the client started to complain).
It was a SQL statement that had to insert about 592,000 rows into a table that already held ~2 million rows.
The process was taking about 90 seconds for a simple insert into … select … from … where ….

I traced the session and formatted the trace file with tkprof to see which statement was causing the problem.
From the tkprof output (sorted by elapsed time) I got this:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1     74.95      77.38      14353     201088    1910040      642400
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     74.98      77.40      14353     201088    1910040      642400
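For reference, a trace file like the one behind this report can be produced along these lines (a sketch, not the author's exact steps; the trace file name and location are system-specific):

```sql
SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE)
-- ... run the insert statement ...
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE

-- then, on the OS, format the trace file sorted by elapsed time:
$ tkprof <tracefile>.trc report.txt sort=prsela,exeela,fchela
```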

As you can see, 77.40 seconds were spent executing that insert statement.
When I reproduced this in a new session, I queried V$SESS_TIME_MODEL to see the time model statistics:

SQL> @tune

... insert statement ...

       SID STAT_NAME                                               VALUE
---------- -------------------------------------------------- ----------
       357 DB time                                              56.27379
       357 sql execute elapsed time                            56.258497
       357 DB CPU                                                 54.913
       357 sequence load elapsed time                          50.660571
       357 repeated bind elapsed time                            .748207
       357 connection management call elapsed time                .00347
       357 parse time elapsed                                    .001308
       357 PL/SQL execution elapsed time                         .000323

From the output we can see that ~51 seconds were spent on sequence load elapsed time, which is the amount of elapsed time spent getting the next sequence number from the data dictionary. This shows that the problem lies with the sequence used by this particular statement.
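The figures above can be pulled with a query along these lines (a sketch; the author's @tune script is not shown, and VALUE in V$SESS_TIME_MODEL is reported in microseconds, hence the division):

```sql
select sid, stat_name, value / 1e6 as value
  from v$sess_time_model
 where sid = sys_context('userenv', 'sid')
 order by value desc;
```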

I checked the sequence in USER_SEQUENCES to see whether a cache was being used. As I suspected, it was not.

SQL> select sequence_name,cache_size from user_sequences where lower(sequence_name) = 'asc_restriction_lists_s';

SEQUENCE_NAME                  CACHE_SIZE
------------------------------ ----------
ASC_RESTRICTION_LISTS_S                 0

The next thing I did was set a cache for this sequence.
With CACHE we pre-allocate sequence numbers in memory so that those cached numbers can be accessed faster. Be careful: in some circumstances the numbers held in memory can be lost (for example, on an instance restart), and you can end up with gaps in the sequence.

SQL> alter sequence ASC_RESTRICTION_LISTS_S cache 100;

Sequence altered.
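The new setting can be confirmed with the same dictionary query as before (a sketch; CACHE_SIZE should now report 100):

```sql
SQL> select sequence_name, cache_size from user_sequences
  2  where sequence_name = 'ASC_RESTRICTION_LISTS_S';
```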

And when I re-ran the .sql script (the insert statement), it finished in ~2 seconds.

SQL> @tune

592007 rows created.

Elapsed: 00:00:02.09

Regards,
Ivica

Read More

DBMS_AUDIT_MGMT is a package you can use to manage audit trail records. It enables database administrators to define various settings related to audit trail records, such as the maximum size, cleanup operations, a job for periodic deletion, etc.
In this post I’ll show the steps I used to set up an automatic job that deletes OS audit trail records older than 90 days.
Here you can find a nice overview of the DBMS_AUDIT_MGMT package: http://docs.oracle.com/cd/E11882_01/appdev.112/e16760/d_audit_mgmt.htm#ARPLS241

First we have to initialize the cleanup process by invoking the INIT_CLEANUP procedure from the DBMS_AUDIT_MGMT package.
This procedure needs to be invoked only once. You have to specify the audit trail type for which you want to initialize the cleanup, and set the default cleanup interval (in my case every 24 hours).

BEGIN
 DBMS_AUDIT_MGMT.INIT_CLEANUP(AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS, DEFAULT_CLEANUP_INTERVAL => 24);
END;
SQL> / 

PL/SQL procedure successfully completed.

Let’s confirm the initialization.
We can access that information in DBA_AUDIT_MGMT_CONFIG_PARAMS dictionary view.

SQL> col parameter_name format a40
SQL> col parameter_value format a40
SQL> set linesize 140
SQL> select * from dba_audit_mgmt_config_params;

PARAMETER_NAME             PARAMETER_VALUE   AUDIT_TRAIL
-------------------------- ----------------- ----------------------------
....
DEFAULT CLEAN UP INTERVAL  24                OS AUDIT TRAIL

11 rows selected.

Because I don’t want to delete all files, just those (modified/created) older than 90 days, I set the last archive timestamp parameter.
When the CLEAN_AUDIT_TRAIL procedure is called by the purge job (defined later) with USE_LAST_ARCH_TIMESTAMP set to TRUE, the cleanup will delete only files modified before the last archive timestamp.

BEGIN
 DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,LAST_ARCHIVE_TIME => trunc(systimestamp - 90));
END;
SQL> /

PL/SQL procedure successfully completed.

We can confirm this by looking in DBA_AUDIT_MGMT_LAST_ARCH_TS dictionary view.

SQL> select audit_trail, last_archive_ts from DBA_AUDIT_MGMT_LAST_ARCH_TS;

AUDIT_TRAIL LAST_ARCHIVE_TS
--------------- ------------------------------------
OS AUDIT TRAIL 06-AUG-14 12.00.00.000000 AM +02:00

Next we have to create the purge job which will automatically invoke CLEAN_AUDIT_TRAIL procedure.

SQL> BEGIN
 2 DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
 3 AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
 4 AUDIT_TRAIL_PURGE_INTERVAL => 24,
 5 AUDIT_TRAIL_PURGE_NAME => 'Standard_Audit_Trail_OS',
 6 USE_LAST_ARCH_TIMESTAMP => TRUE);
 7* END;
SQL> /

PL/SQL procedure successfully completed.
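If needed, the cleanup the purge job performs can also be invoked manually with CLEAN_AUDIT_TRAIL (a sketch, using the same parameters the job uses):

```sql
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    AUDIT_TRAIL_TYPE        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
    USE_LAST_ARCH_TIMESTAMP => TRUE);
END;
/
```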

As the last step I created a scheduler job that moves the last archive timestamp parameter forward.

BEGIN
  DBMS_SCHEDULER.create_job (
  job_name => 'update_audit_last_archive_time',
  job_type => 'PLSQL_BLOCK',
  job_action => 'BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-90);
  END;',
  start_date => SYSTIMESTAMP,
  repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
  end_date => NULL,
  enabled => TRUE,
  auto_drop => FALSE,
  comments => 'Move forward audit last archive time.');
END;
SQL> /

PL/SQL procedure successfully completed.

* Audit records written to syslog are not deleted. Cleanup is performed only on *.aud files in the directory specified by the AUDIT_FILE_DEST initialization parameter.

 

Regards,
Ivica

Read More

Just a quick post in the late hours ( :) ) on how to check the cluster name.

You can find out the cluster name with $ORA_CRS_HOME/bin/cemutlo -n

[grid@oel-ocm-pc2 ~]$ . oraenv
ORACLE_SID = [+ASM2] ? +ASM2
The Oracle base remains unchanged with value /u01/app/grid
[grid@oel-ocm-pc2 ~]$
[grid@oel-ocm-pc2 ~]$ cemutlo -n
ocm-cluster
[grid@oel-ocm-pc2 ~]$

Further, if you want to check the installed Clusterware version,
run $ORA_CRS_HOME/bin/crsctl query crs softwareversion

[grid@oel-ocm-pc2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [oel-ocm-pc2] is [11.2.0.3.0]
[grid@oel-ocm-pc2 ~]$

 

Regards,
Ivica

Read More

As you may already know, the result cache is a nice feature used to store the results of query execution in the result cache area (part of the SGA).

From 11g you can also cache PL/SQL function results in the SGA, and they are available to all sessions that run the application.
It’s easy to use: you just mark the PL/SQL function to indicate that its results should be cached.

You enable result caching for a function by adding the RESULT_CACHE clause to the function definition.
You can also specify the optional RELIES_ON clause, which lists the tables/views on which the function results depend. Since in this example the results depend on the HR.EMPLOYEES table, I’ve added the optional RELIES_ON clause.

SQL> create function getEmpSalary(p_email varchar2)
  2  return number
  3  result_cache relies_on (employees)
  4  is
  5  v_salary employees.salary%type;
  6  begin
  7  select salary into v_salary from employees where lower(email) = lower(p_email);
  8  return v_salary;
  9  end;
 10  /

Function created.

Since this is a test environment, I flushed the cache and confirmed that there is no cached information by querying the V$RESULT_CACHE_OBJECTS dictionary view.
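A minimal sketch of that flush-and-check (assuming DBA privileges for DBMS_RESULT_CACHE and access to the V$ view):

```sql
SQL> exec DBMS_RESULT_CACHE.FLUSH

SQL> select id, type, status, name from v$result_cache_objects;
```

After the flush, the query against V$RESULT_CACHE_OBJECTS should return no rows.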

Read More

Just a little experiment with the DUAL table.
Someone might find it useful.

[oracle@rac1edu ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Jul 21 08:21:59 2014

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> select count(*) from dual;

 COUNT(*)
----------
 1

SQL> desc dual;
 Name Null? Type
 ----------------------------------------- -------- ----------------------------
 DUMMY VARCHAR2(1)

SQL> drop table dual;

Table dropped.

SQL> select * from dual;
select * from dual
 *
ERROR at line 1:
ORA-01775: looping chain of synonyms

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 849530880 bytes
Fixed Size 1348244 bytes
Variable Size 524291436 bytes
Database Buffers 318767104 bytes
Redo Buffers 5124096 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-01775: looping chain of synonyms
Process ID: 27181
Session ID: 129 Serial number: 3

SQL>
Read More