Tag Archives: ORACLE

type='UNKNOWN' level='16' host_id='-i', Oracle Enterprise Manager Express 12c configuration

I was trying to configure Oracle Enterprise Manager Express 12c on Sun Solaris by executing the commands below.

SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500);

SQL> exec DBMS_XDB_CONFIG.SETHTTPPORT(5510);

The listener was not getting configured to listen on these ports and the configuration did not work, even though both commands completed with a success message. When I checked the listener error log I found the messages below.

<msg time='2018-05-02T00:34:11.819-04:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='-i'
 host_addr='UNKNOWN'>
 02-MAY-2018 00:34:11 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=root))(SID=******)) * (ADDRESS=(PROTOCOL=tcp)(HOST=**.**.**.**)(PORT=44853)) * establish * M5O11204 * 12505
 

<msg time='2018-05-02T00:34:11.820-04:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='-i'
 host_addr='UNKNOWN'>
 TNS-12505: TNS:listener does not currently know of SID given in connect descriptor
 

<msg time='2018-05-02T00:34:15.044-04:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='-i'
 host_addr='UNKNOWN'>
 02-MAY-2018 00:34:15 * service_update * M50Ora121 * 0
 

<msg time='2018-05-02T00:34:21.044-04:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='-i'
 host_addr='UNKNOWN'>
 02-MAY-2018 00:34:21 * service_update * M50Ora121 * 0
 


You can see that host_id is "-i", which is wrong. I checked the hostname at the OS level and it was indeed showing "-i".

I corrected the hostname and re-ran the configuration as follows.

As root, set the correct hostname (the actual name is environment specific):

hostname <correct-hostname>

Then, as the oracle user, restart the listener:

lsnrctl stop
lsnrctl start

Reconnect and set the ports again:

sqlplus / as sysdba

SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500);

SQL> exec DBMS_XDB_CONFIG.SETHTTPPORT(5510);

After this, Oracle Enterprise Manager Express was configured successfully.
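
To cross-check that the ports were registered, the DBMS_XDB_CONFIG getter functions can be queried. This is just a quick sanity check using the ports set above; the new HTTP/HTTPS endpoints should also show up in the output of "lsnrctl status".

SQL> select dbms_xdb_config.gethttpsport() as https_port,
            dbms_xdb_config.gethttpport()  as http_port
     from dual;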

Logon trigger to trace oracle session by a user

Below is the script to create a logon trigger to trace sessions of a particular user.

CREATE OR REPLACE TRIGGER USER_TRACE_TRG AFTER LOGON ON DATABASE
declare
  SI int;
  SE int;
  FILE_NAME varchar2(200);
begin
  IF USER = 'CS16ORACLETS'  -- give the user name here
  THEN
    select sid, serial# into SI, SE from v$session where audsid = userenv('SESSIONID');
    FILE_NAME := 'CSPERF_'||SI||'_'||SE;
    execute immediate 'ALTER SESSION SET tracefile_identifier = '||FILE_NAME;
    execute immediate 'alter session set sql_trace = true';
    execute immediate 'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    NULL;
end;
/

This traces only sessions created after the trigger is in place. If a session for the user already exists, kill it and let the user reconnect so that the new session gets traced.
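
To find any existing sessions of that user and kill them (so that new, traced sessions are created), something along these lines can be used; sid and serial# below are placeholders to be filled in from the query output, and the username is the one hard-coded in the trigger:

SQL> select sid, serial#, status from v$session where username = 'CS16ORACLETS';

SQL> alter system kill session '<sid>,<serial#>' immediate;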

ORA-23539 ORA-06512

I restarted the online redefinition of a table after cancelling the previous attempt due to an error, but I got the error below.

BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(
   uname        => user,
   orig_table   => 'parent',
   int_table    => 'parent_interim',
   col_mapping  => NULL,
   options_flag => DBMS_REDEFINITION.CONS_USE_ROWID
   );
END;
/
BEGIN
*
ERROR at line 1:
ORA-23539: table "TEST"."PARENT" currently being redefined
ORA-06512: at “SYS.DBMS_REDEFINITION”, line 52
ORA-06512: at “SYS.DBMS_REDEFINITION”, line 1646
ORA-06512: at line 2

This means the previous redefinition was not actually cleaned up and is still registered against the table.

Execute the command below to abort the pending redefinition:

BEGIN
DBMS_REDEFINITION.abort_redef_table(
uname      => 'TEST',
orig_table => 'parent',
int_table  => 'parent_interim');
END;
/
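
Once the abort completes, the redefinition can be restarted from the beginning. A rough sketch of the full sequence, using the same table names as above (not the exact commands from my environment), would be:

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'TEST',
    orig_table   => 'parent',
    int_table    => 'parent_interim',
    col_mapping  => NULL,
    options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
END;
/

-- copy indexes, triggers, constraints and privileges to the interim table
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'TEST',
    orig_table       => 'parent',
    int_table        => 'parent_interim',
    copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    num_errors       => num_errors);
END;
/

-- swap the interim table in for the original
BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname      => 'TEST',
    orig_table => 'parent',
    int_table  => 'parent_interim');
END;
/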

ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

I was moving data from my main table to a history table using the command below.

alter table parent exchange partition SYS_P27396 with table parent_history UPDATE GLOBAL INDEXES;

But I was getting the below error.

SQL> alter table parent exchange partition SYS_P27396 with table parent_history UPDATE GLOBAL INDEXES;
alter table parent exchange partition SYS_P27396 with table parent_history UPDATE GLOBAL INDEXES
                                                            *
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

When I checked the table descriptions, all the columns looked the same in the main and history tables.

SQL> desc parent
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 ID                                        NOT NULL NUMBER(38)
 NAME                                               VARCHAR2(2000)
 DOJ                                                DATE

SQL> desc parent_history
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 ID                                                 NUMBER(38)
 NAME                                               VARCHAR2(2000)
 DOJ                                                DATE


I then ran the query below and found that there is a hidden unused column in the main table.

SQL> select col#, name, type#
from sys.col$
where obj# in
(select object_id from dba_objects where object_name = 'PARENT')
order by col#;

      COL# NAME
---------- ------------------------------
         0 SYS_C00004_16110810:17:00$
         1 ID
         2 NAME
         3 DOJ


I had previously created the table using DBMS_REDEFINITION, which created this hidden unused column.
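
To see which columns are hidden or already marked unused before dropping them, the data dictionary can be checked first, for example:

SQL> select * from dba_unused_col_tabs where table_name = 'PARENT';

SQL> select column_name, hidden_column
     from user_tab_cols
     where table_name = 'PARENT' and hidden_column = 'YES';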

Now the solution is to drop the unused column and try the exchange again.

1) Drop the hidden columns

SQL> ALTER TABLE PARENT DROP UNUSED COLUMNS;

Table altered.

SQL> alter table parent exchange partition SYS_P27396 with table parent_history UPDATE GLOBAL INDEXES;

Table altered.


Use SID to connect to PDB(Pluggable database) in 12c

Today I came across a situation where a client has an old application that can only use a SID to connect to the database, but the database is 12c and the database it needs to connect to is a PDB.

In 12c we cannot connect to a PDB using a SID; we have to use the service name. For the sake of such old applications, Oracle provides the listener parameter USE_SID_AS_SERVICE_<listener_name> (here USE_SID_AS_SERVICE_LISTENER, since the listener is named LISTENER), which makes the listener interpret the SID in the connect request as a service name.

Add the parameter USE_SID_AS_SERVICE_LISTENER to listener.ora as shown below.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = orcl.oracle.com) (PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

USE_SID_AS_SERVICE_LISTENER=ON

Restart the listener
lsnrctl stop
lsnrctl start
Now the old application can connect to the PDB by passing the PDB's service name as the SID.
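
For illustration, a tnsnames.ora entry of roughly this shape (the alias and PDB name are placeholders; the host matches the listener.ora above) will now resolve even though it passes a SID instead of a service name:

OLDAPP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orcl.oracle.com)(PORT = 1521))
    (CONNECT_DATA =
      (SID = pdb1)
    )
  )

With USE_SID_AS_SERVICE_LISTENER=ON the listener treats the SID value (pdb1 here) as the service name of the PDB.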

IN vs EXISTS in Oracle

This is a classic question. The Oracle optimizer rewrites such queries and its plan choices change from version to version, but I still did some testing of my own.

I am using version 11.2.0.1.

First create the below tables in the database.

CREATE TABLE T1 (
  C1 NUMBER,
  FILLER VARCHAR2(300),
  PRIMARY KEY (C1));

CREATE TABLE T2 (
  C1 NUMBER,
  FILLER VARCHAR2(300),
  PRIMARY KEY (C1));

INSERT INTO
  T1
SELECT
  ROWNUM,
  LPAD('A',300,'A')
FROM
  (SELECT
    ROWNUM NR
  FROM
    DUAL
  CONNECT BY
    LEVEL<=1000) V1,
  (SELECT
    ROWNUM NR
  FROM
    DUAL
  CONNECT BY
    LEVEL<=1000) V2;

INSERT INTO
  T2
SELECT
  ROWNUM*3,
  LPAD('A',300,'A')
FROM
  (SELECT
    ROWNUM NR
  FROM
    DUAL
  CONNECT BY
    LEVEL<=333) V1,
  (SELECT
    ROWNUM NR
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10) V2;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER, TABNAME=>'T1');
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER, TABNAME=>'T2');


Now we have many more rows in the first table, T1, and fewer rows in T2.

Now I am disabling the primary key constraints on both tables, which automatically drops the associated unique indexes (see the check after the two statements below).

alter table T1 disable constraint SYS_C0011053;
alter table T2 disable constraint SYS_C0011054;
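
To confirm that disabling the constraints really dropped the underlying unique indexes, a quick dictionary check can be run (the constraint and index names are the system-generated ones from my test schema); it should return no rows for T1 and T2 while the constraints are disabled:

SQL> select index_name, table_name, uniqueness
     from user_indexes
     where table_name in ('T1','T2');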


explain plan for DELETE FROM T1 WHERE C1 IN (SELECT C1 FROM T2);

PLAN_TABLE_OUTPUT
——————————————————————–
Plan hash value: 3070334650
——————————————————————–
| Id  | Operation             | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT      |      |   333K|  3251K|       | 17278   (1)| 00:03:28 |
|   1 |  DELETE               | T1   |       |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI|      |   333K|  3251K|  5536K| 17278   (1)| 00:03:28 |
|   3 |    TABLE ACCESS FULL  | T2   |   333K|  1625K|       |  4129   (1)| 00:00:50 |
|   4 |    TABLE ACCESS FULL  | T1   |  1000K|  4882K|       | 12058   (1)| 00:02:25 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   2 - access("C1"="C1")

 
SQL> explain plan for DELETE FROM T1 WHERE exists (SELECT 1 FROM T2 where T2.C1=T1.C1);

Explained.

SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
——————————————————————–
Plan hash value: 3070334650
——————————————————————–
| Id  | Operation             | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT      |      |   333K|  3251K|       | 17278   (1)| 00:03:28 |
|   1 |  DELETE               | T1   |       |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI|      |   333K|  3251K|  5536K| 17278   (1)| 00:03:28 |
|   3 |    TABLE ACCESS FULL  | T2   |   333K|  1625K|       |  4129   (1)| 00:00:50 |
|   4 |    TABLE ACCESS FULL  | T1   |  1000K|  4882K|       | 12058   (1)| 00:02:25 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   2 - access("T2"."C1"="T1"."C1")

16 rows selected.

SQL> explain plan for DELETE FROM T2 WHERE exists (SELECT 1 FROM T1 where T2.C1=T1.C1);

Explained.

SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
——————————————————————–
Plan hash value: 3678400186
——————————————————————–
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT    |      |   333K|  3251K|       | 17278   (1)| 00:03:28 |
|   1 |  DELETE             | T2   |       |       |       |            |          |
|*  2 |   HASH JOIN SEMI    |      |   333K|  3251K|  5536K| 17278   (1)| 00:03:28 |
|   3 |    TABLE ACCESS FULL| T2   |   333K|  1625K|       |  4129   (1)| 00:00:50 |
|   4 |    TABLE ACCESS FULL| T1   |  1000K|  4882K|       | 12058   (1)| 00:02:25 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   2 - access("T2"."C1"="T1"."C1")

16 rows selected.

All the queries have the same execution plan, doing full table scans on both tables.

SQL> delete from t1 where c1 in (select c1 from t2);

333000 rows deleted.

Elapsed: 00:00:33.87

SQL> delete from t1 where exists (select 1 from t2 where t1.c1=t2.c1);

333000 rows deleted.

Elapsed: 00:00:33.14

Both execution times are about the same; the IN clause completed slightly faster than EXISTS.

Now I am enabling the constraints on the tables again, which automatically recreates the indexes.


SQL> explain plan for delete from t1 where c1 in (select c1 from t2);

Explained.

Elapsed: 00:00:00.02
SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
——————————————————————–
Plan hash value: 55652975

——————————————————————–
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT    |              |   333K|  3251K|  2238   (7)| 00:00:27 |
|   1 |  DELETE             | T1           |       |       |            |          |
|   2 |   NESTED LOOPS      |              |   333K|  3251K|  2238   (7)| 00:00:27 |
|   3 |    INDEX FULL SCAN  | SYS_C0011053 |  1000K|  4882K|  2104   (1)| 00:00:26 |
|*  4 |    INDEX UNIQUE SCAN| SYS_C0011054 |     1 |     5 |     0   (0)| 00:00:01 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   4 - access("C1"="C1")

16 rows selected.


SQL> delete from t1 where c1 in (select c1 from t2);

333000 rows deleted.

Elapsed: 00:00:22.29

SQL> explain plan for delete from t1 where exists (select 1 from t2 where t1.c1=t2.c1);

Explained.

Elapsed: 00:00:00.05
SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
——————————————————————–

Plan hash value: 337034320

——————————————————————–
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT    |              |   333K|  3251K|  2238   (7)| 00:00:27 |
|   1 |  DELETE             | T1           |       |       |            |          |
|   2 |   NESTED LOOPS SEMI |              |   333K|  3251K|  2238   (7)| 00:00:27 |
|   3 |    INDEX FULL SCAN  | SYS_C0011053 |  1000K|  4882K|  2104   (1)| 00:00:26 |
|*  4 |    INDEX UNIQUE SCAN| SYS_C0011054 |   110K|   541K|     0   (0)| 00:00:01 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   4 - access("T1"."C1"="T2"."C1")

16 rows selected.

SQL> delete from t1 where exists (select 1 from t2 where t1.c1=t2.c1);

333000 rows deleted.

Elapsed: 00:00:49.30


NOTE: when the subquery returns fewer rows than the outer table, IN performed better than EXISTS here.

====================================================================

Now I am testing the delete where the subquery returns more rows than the outer table.

SQL> explain plan for delete from t2 where c1 in (select c1 from t1);

Explained.

Elapsed: 00:00:00.01
SQL> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
——————————————————————–
Plan hash value: 2291474641
——————————————————————–
| Id  | Operation              | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT       |              |   333K|  3251K|   710  (21)| 00:00:09 |
|   1 |  DELETE                | T2           |       |       |            |          |
|   2 |   NESTED LOOPS         |              |   333K|  3251K|   710  (21)| 00:00:09 |
|   3 |    INDEX FAST FULL SCAN| SYS_C0011053 |  1000K|  4882K|   576   (2)| 00:00:07 |
|*  4 |    INDEX UNIQUE SCAN   | SYS_C0011054 |     1 |     5 |     0   (0)| 00:00:01 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   4 - access("C1"="C1")

16 rows selected.


SQL> delete from t2 where c1 in (select c1 from t1);

333000 rows deleted.

Elapsed: 00:00:25.28

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
——————————————————————–

SQL_ID  4nqvaj3zafkv1, child number 0
————————————-
delete from t2 where exists (select 1 from t1 where t1.c1=t2.c1)

Plan hash value: 3082207826

——————————————————————–
| Id  | Operation              | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
——————————————————————–
|   0 | DELETE STATEMENT       |              |       |       |       |  2368 (100)|          |
|   1 |  DELETE                | T2           |       |       |       |            |          |
|*  2 |   HASH JOIN SEMI       |              |   333K|  3251K|  5536K|  2368   (2)| 00:00:29 |
|   3 |    INDEX FULL SCAN     | SYS_C0011054 |   333K|  1625K|       |   701   (1)| 00:00:09 |
|   4 |    INDEX FAST FULL SCAN| SYS_C0011053 |  1000K|  4882K|       |   576   (2)| 00:00:07 |
——————————————————————–

Predicate Information (identified by operation id):
—————————————————

   2 - access("T1"."C1"="T2"."C1")


SQL> delete from t2 where exists (select 1 from t1 where t1.c1=t2.c1);

333000 rows deleted.

Elapsed: 00:00:27.30

Here I observed that IN again performed better than EXISTS.

Next I truncated table t2, inserted far fewer rows than before, and repeated the test; this time EXISTS performed better than IN.

I reduced the row count from roughly 333,000 to 33,300.

SQL> select count(1) from t2;

  COUNT(1)
———-
     33300

Elapsed: 00:00:40.71
SQL>
SQL> exec dbms_stats.gather_table_stats('TEST','T2');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.22
SQL>
SQL> delete from t2 where exists (select 1 from t1 where t1.c1=t2.c1);

33300 rows deleted.

Elapsed: 00:00:00.78
SQL> rollback;

Rollback complete.

Elapsed: 00:00:00.26
SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:02.05
SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.18
SQL> delete from t2 where c1 in  (select c1 from t1);

33300 rows deleted.


Elapsed: 00:00:02.04

Ultimately, this depends entirely on how the optimizer generates the plans.

Finally, the conclusions :-

*) If there is no index on the column used to join, there is no difference in execution time between IN and EXISTS.

If there is an index on the join column:

*) If the subquery returns fewer rows and the outer table has more rows, IN performs better than EXISTS.

*) If the subquery returns more rows and the outer query has fewer rows, EXISTS performs better than IN.

ORA-39083 ORA-01830 import 10g database using impdp

When I was importing a 10g database into 12c, I got the error below:

ORA-39083: Object type PROCOBJ:"SCHED_CLIENT"."SCHED_REFRESH" failed to create with error:
ORA-01830: date format picture ends before converting entire input string

Failing sql is:
BEGIN
dbms_scheduler.create_job('"SCHED_REFRESH"',
job_type=>'STORED_PROCEDURE', job_action=>
'pa_sched.pr_refresh'
, number_of_arguments=>0,
start_date=>'29-FEB-12 10.52.16.089117 AM EUROPE/LONDON', repeat_interval=>
'freq=minutely;interval=15'
, end_date=>NULL,
job_class=>'"DEFAULT_JOB_CLASS"', enabled=>FALSE, auto_drop=>TRUE
The main reason for this error is that in 10g the start_date was accepted as a plain string without any conversion, but 11g and 12c do not accept that; the value has to be converted explicitly with a conversion function.

I extracted the DDL of all the jobs using the command below:

impdp dumpfile=full_dump_31052016.dmp INCLUDE=PROCOBJ sqlfile=PROCOBJ.sql directory=TEST

After that I edited the SQL file manually and wrapped each start_date in TO_TIMESTAMP_TZ, like this:

start_date=>TO_TIMESTAMP_TZ('08-APR-11 11.51.16.341715 AM EUROPE/LONDON','DD-MON-RR HH:MI:SS.FF AM TZR')

After that I executed the sql file manually.
Note: I had only 10 jobs, so I edited them manually. Use a shell script to automate this if you have many jobs. A corrected example of the call is shown below for reference.
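
A corrected call for the failing job above would look roughly like this (the values are taken from the failing SQL shown earlier; this is a sketch, not the exact DDL generated by impdp):

BEGIN
  dbms_scheduler.create_job(
    job_name            => '"SCHED_REFRESH"',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'pa_sched.pr_refresh',
    number_of_arguments => 0,
    start_date          => TO_TIMESTAMP_TZ('29-FEB-12 10.52.16.089117 AM EUROPE/LONDON',
                                           'DD-MON-RR HH:MI:SS.FF AM TZR'),
    repeat_interval     => 'freq=minutely;interval=15',
    end_date            => NULL,
    job_class           => '"DEFAULT_JOB_CLASS"',
    enabled             => FALSE,
    auto_drop           => TRUE);
END;
/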

Check the value of hidden parameter in Oracle 11G

I got this from another blog; it is here just for my reference.

SET linesize 200
col PARAMETER FOR a30
col "Session Value" FOR a10
col "Instance Value" FOR a10
col "Default value" FOR a5
col SESSMOD FOR a7
col SYSMOD FOR a9
col Description FOR a45

SELECT a.ksppinm "Parameter",
b.ksppstvl "Session Value",
c.ksppstvl "Instance Value",
b.ksppstdf "Default value",
decode(bitand(a.ksppiflg/256,3),1, 'True', 'False') SESSMOD,
decode(bitand(a.ksppiflg/65536,3),1,'IMMEDIATE',2,'DEFERRED',3,'IMMEDIATE','FALSE') SYSMOD,
a.ksppdesc "Description"
FROM sys.x$ksppi a,
sys.x$ksppcv b,
sys.x$ksppsv c
WHERE a.indx = b.indx
AND a.indx = c.indx
AND substr(ksppinm,1,100) LIKE '_enable_shared_pool_%'
ORDER BY a.ksppinm;

clone database using direct nfs

Before cloning the database we need to configure Direct NFS; below is the procedure for the DNFS configuration.

Direct NFS

Direct NFS is one of the new features in 11.2 and is used to improve performance. To use the Direct NFS Client, the NFS file systems must first be mounted and available over regular NFS mounts. The mount options used when mounting the file systems are not relevant, as the Direct NFS Client manages the configuration after installation.

Below is the link for the basic NFS configuration:

http://srinivasoguri.blogspot.in/2016/04/how-to-configure-nfs-in-redhat-linux.html

Direct NFS Client can use a new configuration file 'oranfstab' or the mount tab file (/etc/mtab on Linux) to determine the mount point settings for NFS storage devices.

Oracle first looks for the mount settings in $ORACLE_HOME/dbs/oranfstab, which specifies the Direct NFS Client settings for a single database. Next, Oracle looks for settings in /etc/oranfstab, which specifies the NFS mounts available to all Oracle databases on that host. Finally, Oracle reads the mount tab file (/etc/mtab on Linux) to identify available NFS mounts.


Step-by-step procedure to enable DNFS

1) Install database software.

2) Log in as the root user and make the oradism binary setuid root:

cd $ORACLE_HOME/bin

[root@node3 bin]# chown root:root oradism
[root@node3 bin]# ls -l oradism
-rwxr-x— 1 root root 68278 Aug 14  2009 oradism

[root@node3 bin]# chmod 4755 oradism
[root@node3 bin]# ls -l oradism
-rwsr-xr-x 1 root root 68278 Aug 14  2009 oradism

3) As the oracle user, execute the commands below to enable the dNFS ODM library:

cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk dnfs_on

[oracle@node3 lib]$ make -f ins_rdbms.mk dnfs_on
rm -f /oradisk/oracle/app/oracle/product/11.2.0/dbhome_1/lib/libodm11.so; cp /oradisk/oracle/app/oracle/product/11.2.0/dbhome_1/lib/libnfsodm11.so /oradisk/oracle/app/oracle/product/11.2.0/dbhome_1/lib/libodm11.so

4) Make the entry below in the oranfstab file (for example $ORACLE_HOME/dbs/oranfstab):

server: node2.oracle.com
path: 192.9.1.101
export: /u01/BACKUP mount: /BACKUP

Here "server" is the NFS server name, "path" is an IP address of the NFS server, "export" is the exported directory on the NFS server, and "mount" is the mount point we created on the NFS client.

Now the DNFS configuration is complete.

CloneDB using Direct NFS :-

My Configuration :

Oracle version : 11.2.0.3
Source DB : PROD
Destination DB : TEST
Destination SID : TEST

Now that the Direct NFS Client configuration is complete, we can clone the database from a backup of the source database stored on NFS. CloneDB uses a copy-on-write technique: it reads data directly from the backup files, so it needs very little additional storage, and only blocks changed by DML or DDL are written to the clone's own datafiles.

Refer to the link below for copy-on-write :-

https://en.wikipedia.org/wiki/Copy-on-write

Step-by-Step procedure for cloning the database.

1) Take a cold backup of the source database you want to clone, on the NFS filesystem (available to both source and destination). See the note after the RMAN block about shutting the source down first.

configure controlfile autobackup off;

run
{
set nocfau;                        
backup as copy database format '/u01/BACKUP/%U';
}
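
Since this is a cold backup, the source database should be cleanly shut down and mounted before running the RMAN commands above; a minimal sequence from SQL*Plus on the source would be something like:

SQL> shutdown immediate;
SQL> startup mount;

After the backup completes, open the source database again with "alter database open;".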

2) On the destination server, export the variables below:

export ORACLE_SID=TEST                    # SID of the clone database
export MASTER_COPY_DIR=/BKP               # NFS mount point where the backup exists
export CLONE_FILE_CREATE_DEST=/DATA/TEST  # location for the clone's datafiles and logfiles
export CLONEDB_NAME=TEST                  # database name

3) There is a Perl script in MOS Note 1210656.1; execute it to generate the SQL scripts that create the clone database.

perl /tmp/Clone.pl /oradisk/oracle/app/oracle/product/11.2.0.3/dbhome_1/dbs/initTEST.ora crtdb.sql dbren.sql

This creates two SQL files: crtdb.sql and dbren.sql.

4) Open the pfile created in /DATA/TEST and add the parameter below (a quick check is shown after this step).

clonedb=TRUE
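
Once the clone instance has been started with this pfile (step 5 below), the setting can be double-checked from SQL*Plus:

SQL> show parameter clonedb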

5) Execute the commands below.

export ORACLE_SID=TEST
export ORACLE_HOME=/oradisk/oracle/app/oracle/product/11.2.0.3/dbhome_1

[oracle@node3 tmp]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 18 12:08:50 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> @crtdb
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> STARTUP NOMOUNT PFILE=/DATA/TEST/initTEST.ora
ORACLE instance started.

Total System Global Area  684822528 bytes
Fixed Size                  2231432 bytes
Variable Size             629146488 bytes
Database Buffers           50331648 bytes
Redo Buffers                3112960 bytes
SQL> CREATE CONTROLFILE REUSE SET DATABASE TEST RESETLOGS
  2      MAXLOGFILES 32
  3      MAXLOGMEMBERS 2
  4      MAXINSTANCES 1
  5      MAXLOGHISTORY 908
  6  LOGFILE
  7    GROUP 1 '/DATA/TEST/TEST_log1.log' SIZE 100M BLOCKSIZE 512,
  8    GROUP 2 '/DATA/TEST/TEST_log2.log' SIZE 100M BLOCKSIZE 512
  9  DATAFILE
 10  '/BKP/data_D-PROD_I-2820641880_TS-PROD_FNO-5_0mr3bdbg',
 11  '/BKP/data_D-PROD_I-2820641880_TS-SYSAUX_FNO-2_0ir3bd9f',
 12  '/BKP/data_D-PROD_I-2820641880_TS-SYSTEM_FNO-1_0hr3bd81',
 13  '/BKP/data_D-PROD_I-2820641880_TS-UNDOTBS1_FNO-3_0jr3bdai',
 14  '/BKP/data_D-PROD_I-2820641880_TS-USERS_FNO-4_0lr3bdbe'

Control file created.

SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@node3 tmp]$ vi dbren.sql
[oracle@node3 tmp]$
[oracle@node3 tmp]$
[oracle@node3 tmp]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 18 12:09:21 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @dbren

PL/SQL procedure successfully completed.

No errors.

Database altered.

drop tablespace TEMP
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace

create temporary tablespace TEMP
*
ERROR at line 1:
ORA-01543: tablespace 'TEMP' already exists

SQL>
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
——— ——————–
TEST      READ WRITE

To Check the DNFS server :-

SQL> select * from v$dnfs_servers;

        ID SVRNAME              DIRNAME                 MNTPORT    NFSPORT      WTMAX      RTMAX
———- ——————– ——————– ———- ———- ———- ———-
         1 192.9.1.101          /u01/BACKUP                 896       2049      32768      32768
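
Similarly, v$dnfs_files can be queried to confirm which database files are actually being served through dNFS (the output depends on your environment):

SQL> select * from v$dnfs_files;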

Issues I faced :-

1) After executing dbren.sql I got:

ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-17513: dNFS package call failed
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 3

The datafile and logfile destination must also be an NFS file system (/DATA/TEST is an NFS file system).

2) [oracle@node3 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 13 20:03:37 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=/oradisk/DATA/TEST/initTEST.ora
ORA-00845: MEMORY_TARGET not supported on this system
SQL> quit

The shared memory (/dev/shm) was smaller than MEMORY_TARGET.

3) The crtdb.sql script hung, and the message below appeared in the alert log:

Direct NFS: please check that oradism is setuid

Step 2 of the DNFS configuration (making oradism setuid root) was missed.

4) The dbren.sql script failed with the error below:

ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file x - new file 'xxxxxxx' not found
ORA-01110: data file x:xxxxxx
ORA-17515: Creation of clonedb failed using snapshot file xxxxxxxxx
ORA-06512: at "SYS.X$DBMS_DNFS", line 10

clonedb=TRUE was missing from the clone's pfile.

If the clonedb parameter is not set, the clone links directly to the backup source datafiles, which causes the errors above. So do not use DROP DATABASE to clean up, as that would remove the source backup files.

5) While executing dbren.sql I got the error below:

[1349998404222404] kgnfscrechan:2954:  Failed to get root fsinfo on filer

If this message appears, set the "insecure" option for the export on the NFS server.

6) SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1:
'/BKP/data_D-PROD_I-2820641880_TS-SYSTEM_FNO-1_0cr3bb7c'

Sol: Use a cold backup for this, or make the archive logs available for recovery.

References :-

Step by Step – Configure Direct NFS Client (DNFS) on Linux (11g) (Doc ID 762374.1)

ORA-00845: MEMORY_TARGET not supported on this system

I created a pfile with the parameter memory_target=1G and tried to start the database.

[oracle@node3 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 13 20:03:37 2016

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=/oradisk/DATA/TEST/initTEST.ora
ORA-00845: MEMORY_TARGET not supported on this system
SQL> quit

This is because the /dev/shm (tmpfs) file system is smaller than 1G; it must be at least as large as the MEMORY_TARGET parameter.

Solution :-

Before increasing the space

[oracle@node3 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             9.5G  649M  8.4G   8% /
/dev/sda9             2.9G   70M  2.7G   3% /opt
/dev/sda8             4.8G  139M  4.4G   4% /home
/dev/sda7             4.8G  2.7G  1.8G  60% /usr
/dev/sda10            1.9G   36M  1.8G   2% /usr/local
/dev/sda6             4.8G  209M  4.3G   5% /var
/dev/sda3             9.5G  153M  8.9G   2% /tmp
/dev/sda1             2.9G   77M  2.7G   3% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
/dev/sda11             25G  8.2G   16G  36% /oradisk
/dev/hdc              3.4G  3.4G     0 100% /media/RHEL_5.4 x86_64 DVD
192.9.1.101:/u01/BACKUP
                       59G   18G   38G  33% /BKP

[root@node3 ~]# umount /dev/shm

[root@node3 ~]# mount -t tmpfs -o size=5g tmpfs /dev/shm

[root@node3 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             9.5G  649M  8.4G   8% /
/dev/sda9             2.9G   70M  2.7G   3% /opt
/dev/sda8             4.8G  139M  4.4G   4% /home
/dev/sda7             4.8G  2.7G  1.8G  60% /usr
/dev/sda10            1.9G   36M  1.8G   2% /usr/local
/dev/sda6             4.8G  209M  4.3G   5% /var
/dev/sda3             9.5G  153M  8.9G   2% /tmp
/dev/sda1             2.9G   77M  2.7G   3% /boot
/dev/sda11             25G  8.2G   16G  36% /oradisk
/dev/hdc              3.4G  3.4G     0 100% /media/RHEL_5.4 x86_64 DVD
192.9.1.101:/u01/BACKUP
                       59G   18G   38G  33% /BKP
tmpfs                 5.0G     0  5.0G   0% /dev/shm
