Archive for ‘Backup & Recovery’

September 10, 2012

RMAN-06024: No Backup Or Copy Of The Control File Found To Restore

A full database backup, plus archived logs and a control file for standby, had just been completed.

The “alter system switch logfile” command was issued after the backup completed, as suggested in the note

RMAN Duplicate For Standby Fails with Rman-06024: No Backup Or Copy Of The Control File Found To Restore [ID 466321.1]

But we still got the RMAN-06024 error, as demonstrated below:

 RMAN> run {
 2> allocate auxiliary channel c1 type disk format '/BACKUP_STAGE/%u';
 3> duplicate target database for standby dorecover;
 4> }

released channel: ORA_DISK_1
 released channel: ORA_AUX_DISK_1
 allocated channel: c1
 channel c1: sid=648 devtype=DISK

Starting Duplicate Db at 2012-09-10 17:34:19

contents of Memory Script:
 {
 set until scn  8223255;
 restore clone standby controlfile;
 sql clone 'alter database mount standby database';
 }
 executing Memory Script

executing command: SET until clause

Starting restore at 2012-09-10 17:34:32

released channel: c1
 RMAN-00571: ===========================================================
 RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
 RMAN-00571: ===========================================================
 RMAN-03002: failure of Duplicate Db command at 09/10/2012 17:34:32
 RMAN-03015: error occurred in stored script Memory Script
 RMAN-06026: some targets not found - aborting restore
 RMAN-06024: no backup or copy of the control file found to restore

Solution

This corresponds to Rman Duplicate For Standby Failing with Rman-06024 [METALINK DOCID 881674.1].
The solution is to run the RMAN duplicate without the DORECOVER option:


RMAN> run {
 2> allocate auxiliary channel c1 type disk format '/BACKUP_STAGE/%u';
 3> duplicate target database for standby;
 4> }

After the standby database has been successfully duplicated, start managed recovery:


SQL> recover managed standby database using current logfile disconnect;
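
To verify that redo apply has started on the standby, a quick check (a sketch; MRP0 is the managed recovery process):

SQL> select process, status, sequence# from v$managed_standby;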

December 12, 2011

Datapump Export on a logical standby database failed with errors ORA-39068 and ORA-39097

ORA-39006, ORA-39068, ORA-16224, ORA-39097 Errors Raised During Datapump Export With Logical Standby Dataguard Guard_Status Set to ‘ALL’ [ID 1236184.1]

Modified 13-OCT-2010     Type PROBLEM     Status MODERATED

In this Document
Symptoms
Cause
Solution


This document is being delivered to you via Oracle Support’s Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.

Applies to:

Oracle Server – Enterprise Edition – Version: 10.2.0.4 and later   [Release: 10.2 and later ]
Information in this document applies to any platform.

Symptoms

The following errors are seen during execution of Datapump export:
ORA-39006: internal error
ORA-39068: invalid master table data in row with PROCESS_ORDER=-3
ORA-16224: Database Guard is enabled
ORA-39097: Data Pump job encountered unexpected error -16224

The logical standby Dataguard guard_status has a value of ‘ALL.’
This value can be confirmed by the following query:

     SQL> select guard_status from v$database;

Cause

This is expected behavior when Datapump export is executed on a standby database when Dataguard guard_status has a value of ‘ALL.’ Datapump execution must be able to create a Master Table to store metadata for the Datapump task. When the standby database guard_status is set to ‘ALL,’ no updates are allowed including the creation of the Master Table which is required for Datapump to execute.

Solution

Alter the database so that the Dataguard guard_status is set to a value of either ‘STANDBY’ or ‘NONE’, then repeat the Datapump export task.
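
For example, on the logical standby (a sketch; re-enable the full guard once the export completes):

SQL> alter database guard standby;

-- run the Data Pump export here, then restore the original setting:

SQL> alter database guard all;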

September 3, 2011

ORA-15077 and ORA-15001

Problem ORA-15077 and ORA-15001
================================

RMAN> restore controlfile from '/app/oracle/oradata/rac10g/control01.ctl';

Starting restore at 02-SEP-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 09/02/2011 18:05:34
ORA-19504: failed to create file “+DATA”
ORA-17502: ksfdcre:4 Failed to create file +DATA
ORA-15001: diskgroup “DATA” does not exist or is not mounted
ORA-15077: could not locate ASM instance serving a required diskgroup
ORA-19600: input file is control file  (/app/oracle/oradata/rac10g/control01.ctl)
ORA-19601: output file is control file  (+DATA)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Analysis
=========

1) check if CRS is running on the node

[oracle@guang ]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

2) check if ASM instance is running

sys@+asm> select status from v$instance;

STATUS
------------
STARTED

An ASM instance is always in “STARTED” mode; this differs from a normal database instance, which can be in “STARTED”, “MOUNT”, or “OPEN” mode.

3) check diskgroup status

sys@+asm> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           DISMOUNTED
FRA                            DISMOUNTED

Problem is here !!!

sys@+asm> show parameter asm_diskgroups;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups                       string

Cause
======

The diskgroups were dismounted when we tried to restore the control file into them. Because the asm_diskgroups parameter string is empty, the diskgroups are not mounted automatically at instance startup.

Solution
=========

1) manually mount the diskgroups

alter diskgroup data mount;
alter diskgroup fra mount;

2) set the asm_diskgroups string to 'DATA, FRA' so that the ASM instance will mount them automatically at the next startup

alter system set asm_diskgroups='DATA, FRA' sid='*';
shutdown immediate; -- shut down any connected database instance first
startup
select name, state from v$asm_diskgroup;

September 2, 2011

ORA-01207: file is more recent than control file – old control file

Problem

SQL> alter database open;

ORA-00283: recovery session canceled due to errors
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: ‘/app/oracle/oradata/rac10g/system01.dbf’
ORA-01207: file is more recent than control file – old control file

Solution: recreate the control file.

1) dump controlfile to trace

SQL> alter database backup controlfile to trace as '/tmp/create_controlf.sql';

2) startup database nomount

SQL> startup nomount

3) create the control file

Copy the CREATE CONTROLFILE statement from the trace file:

SQL> CREATE CONTROLFILE REUSE DATABASE "RAC10G" NORESETLOGS  NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/app/oracle/oradata/rac10g/redo01.log'  SIZE 50M,
GROUP 2 '/app/oracle/oradata/rac10g/redo02.log'  SIZE 50M,
GROUP 3 '/app/oracle/oradata/rac10g/redo03.log'  SIZE 50M
DATAFILE
'/app/oracle/oradata/rac10g/system01.dbf',
'/app/oracle/oradata/rac10g/undotbs01.dbf',
'/app/oracle/oradata/rac10g/sysaux01.dbf',
'/app/oracle/oradata/rac10g/users01.dbf'
CHARACTER SET WE8ISO8859P1
;

Control file created.

4) recover database using backup controlfile

sys@rac10g> recover database using backup controlfile;

ORA-00279: change 460683 generated at 09/02/2011 19:50:43 needed for thread 1
ORA-00289: suggestion :
/app/oracle/flash_recovery_area/RAC10G/archivelog/2011_09_02/o1_mf_1_33_%u_.arc
ORA-00280: change 460683 for thread 1 is in sequence #33

-- Give the last online redo log file to apply (get the path from v$logfile):

/app/oracle/oradata/rac10g/redo03.log
Log applied.
Media recovery complete.

5) open the database

SQL> alter database open resetlogs;
Database altered.

July 4, 2011

Check database block corruption

Use the ‘dbv’ utility to check a single datafile for block corruption:


dbv file=/app/oracle/oradata/DB11R2/undotbs01.dbf
DBVERIFY - Verification starting : FILE = /app/oracle/oradata/DB11R2/undotbs01.dbf
Page 11899 is marked corrupt
Corrupt block relative dba: 0x00c02e7b (file 3, block 11899)
Bad check value found during dbv:
Data in bad block:
type: 2 format: 2 rdba: 0x00c02e7b
last change scn: 0x0000.0017a337 seq: 0x2 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0xa3370202
check value in block header: 0x92fe
computed block checksum: 0x1a

Page 35563 is marked corrupt
Corrupt block relative dba: 0x00c08aeb (file 3, block 35563)
Bad check value found during dbv:
Data in bad block:
type: 2 format: 2 rdba: 0x00c08aeb
last change scn: 0x0000.00162521 seq: 0x45 flg: 0x04
spare1: 0x0 spare2: 0x0 spare3: 0x0
consistency value in tail: 0x25210245
check value in block header: 0xe439
computed block checksum: 0xf2ab



DBVERIFY - Verification complete

Total Pages Examined         : 108800
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 106906
Total Pages Processed (Seg)  : 10
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 1892
Total Pages Marked Corrupt   : 2
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 1552133 (0.1552133)

We can also use RMAN to check block corruption in a single datafile or in the whole database:


RMAN> backup validate datafile 1;

RMAN> backup validate check logical database archivelog all;

and then query

SQL> select * from v$database_block_corruption;
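
If that view lists corrupt blocks and valid backups exist, RMAN block media recovery can repair them in place. A sketch using the file/block pair reported by dbv above (RECOVER ... BLOCK is 11g syntax; 10g uses BLOCKRECOVER):

RMAN> recover datafile 3 block 11899;

RMAN> recover corruption list;   -- repairs every block listed in v$database_block_corruption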

June 9, 2011

RESTORE and RECOVER ERRORS

———————————————————————————————-

ORA-01248: file <num> was created in the future of incomplete recovery

———————————————————————————————-

========
Problem
========

ORA-01248: file 8 was created in the future of incomplete recovery
ORA-01110: data file 8: ‘/app/oracle/oradata/DB11R2/users03.dbf’

===========
Description
===========

Oracle Error :: ORA-01248

file string was created in the future of incomplete recovery

Cause

Attempting to do a RESETLOGS open with a file entry in the control file that was originally created after the UNTIL time of the incomplete recovery. Allowing such an entry may hide the version of the file that is needed at this time. The file number may be in use for a different file which would be lost if the RESETLOGS was allowed.
Action

If more recovery is desired then apply redo until the creation time of the file is reached. If the file is not wanted and the same file number is not in use at the stop time of the recovery, then the file can be taken offline with the FOR DROP option. Otherwise a different control file is needed to allow the RESETLOGS. Another backup can be restored and recovered, or a control file can be created via CREATE CONTROLFILE.

===========
Example
===========

RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 06/09/2011 09:59:09
ORA-01248: file 8 was created in the future of incomplete recovery
ORA-01110: data file 8: ‘/app/oracle/oradata/DB11R2/users03.dbf’

sys@db11r2> alter database datafile 8 offline for drop;

Database altered.

RMAN> alter database open resetlogs;

sys@rman11r2> select status from v$instance;

STATUS
————
OPEN

Done.

———————————————————————-

ORA-00275: media recovery has already been started

———————————————————————-

===========
Description
===========

Oracle Error :: ORA-00275

media recovery has already been started

Cause

An attempt was made to start a second media recovery operation in the same session.

Action

Complete or cancel the first media recovery session or start another session to perform media recovery.

===========
Example
===========


SQL> recover database cancel;
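
A sketch of how the error typically arises and is cleared within a single session (illustrative only):

SQL> alter database recover database until cancel;
-- fails with ORA-00279 asking for an archived log; the recovery session stays pending
SQL> alter database recover database;
ORA-00275: media recovery has already been started
-- clear it with CONTINUE (supplying logs) or CANCEL:
SQL> alter database recover cancel;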

———————————

ORA-00289: suggestion
———————————

ORA-00289:    suggestion < string >

Cause:   

This message reports the next redo log filename that is needed, according to the initialization parameters LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT. This message assumes that LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT are the same now as when the required redo log file was archived.

Action:   

Consider using this filename for the next log needed for recovery.

Example

SQL> alter database recover datafile 24;
alter database recover datafile 24
*
ERROR at line 1:
ORA-00279: change 818332336 generated at 04/05/2007 05:15:16 needed for thread
1
ORA-00289: suggestion : J:\ARCHIVE_DEST2\ARCRECON_S358351594984777.ARC
ORA-00280: change 818332336 for thread 1 is in sequence #35835

After it suggests an archived log, either:
1/ accept it with: alter database recover continue;
2/ change the name with: alter database recover logfile '<name>';
3/ stop with: alter database recover cancel;

To have recovery continue automatically without prompting you for each log file, use:

4/ alter database recover continue default;

Or start with:

5/ alter database recover automatic datafile 24;

———————————————————————————————————————————————-
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence <num>
———————————————————————————————————————————————-

===========
Problem
===========

RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 29

Example

This error typically appears when recovery requests an archived log that was never generated, for example when recovering a backup taken in NOARCHIVELOG mode. Tell RMAN to skip redo application, then open the database:

RMAN> recover database noredo;
SQL> alter database open resetlogs;

May 10, 2011

RMAN Backup and Recovery in NOARCHIVELOG Mode

It’s highly recommended to run production databases in ARCHIVELOG mode.

A database in NOARCHIVELOG mode can NOT be backed up hot.

It can be backed up using OS commands while the database is closed, or using RMAN while the database is in MOUNT mode.
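
A cold (closed) OS backup is just a copy of all database files taken while the instance is down; a minimal sketch, with hypothetical paths:

SQL> shutdown immediate;
$ cp /app/oracle/oradata/rac10g/*.dbf /app/oracle/oradata/rac10g/*.ctl /app/oracle/oradata/rac10g/*.log /BACKUP_STAGE/
SQL> startup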

Rman backup in Noarchivelog mode


-- turn on control file autobackup

RMAN> configure controlfile autobackup on;

SQL> shutdown immediate;

SQL> startup mount;

$ rman target / catalog rcat/rcat@rcat

RMAN> backup database including current controlfile;

Rman recovery in Noarchivelog mode

if control file is available but datafiles are corrupted


shutdown immediate; (or shutdown abort)

startup mount;

rman target / catalog rcat/rcat@rcat

RMAN> restore database;

RMAN> recover database noredo;

RMAN> alter database open resetlogs;

if control file and data files are all lost


startup nomount

RMAN> set dbid <dbid>

RMAN> restore controlfile from autobackup;

RMAN> alter database mount;

RMAN> restore database;

RMAN> recover database noredo;

RMAN> alter database open resetlogs;

March 21, 2011

Import DataPump: How to Import Table Data into a Table that has a Different Name?

Import DataPump: How to Import Table Data into a Table that has a Different Name? [ID 342314.1]

Modified 20-OCT-2010     Type HOWTO     Status PUBLISHED

In this Document
Goal
Solution


Applies to:

Enterprise Manager for RDBMS – Version: 10.1.0.2 to 11.2.0.2 – Release: 10.1 to 11.2
Oracle Server – Enterprise Edition – Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Oracle Server – Personal Edition – Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Oracle Server – Standard Edition – Version: 10.1.0.2 to 11.2.0.2   [Release: 10.1 to 11.2]
Information in this document applies to any platform.

Goal

Checked for relevance on 10-20-2010

How to import table data into a target table that has the same structure as the original source table, but has a different name?

Solution

For Import DataPump using Oracle 10g, you can use the REMAP_SCHEMA parameter to remap the schema name during the import (similar to the FROMUSER and TOUSER parameters in the original import utility). However, there is no parameter to remap table names. This means that Import DataPump can only import data into a table with the same name as the original table.

The examples below are based on:

A. The demo schema SCOTT that is created with script: $ORACLE_HOME/rdbms/admin/scott.sql

B. The directory object my_dir that refers to an existing directory on the server where the Oracle RDBMS is installed. Example:

Windows:
CONNECT system/manager
CREATE OR REPLACE DIRECTORY my_dir AS 'D:\export';
GRANT read,write ON DIRECTORY my_dir TO public;
Unix:
CONNECT system/manager
CREATE OR REPLACE DIRECTORY my_dir AS '/home/users/export';
GRANT read,write ON DIRECTORY my_dir TO public;

C. Two tables with identical structure, but different names:

CONNECT scott/tiger
CREATE TABLE emp2 AS SELECT * FROM emp WHERE 1=2;
SELECT COUNT(*) FROM emp;
  COUNT(*)
----------
        14
SELECT COUNT(*) FROM emp2;
  COUNT(*)
----------
         0

D. You have an export dumpfile of table SCOTT.EMP that was created with:

expdp system/manager directory=my_dir dumpfile=exp_emp.dmp logfile=exp_emp.log tables=scott.emp

OBJECTIVE: import the data from the export dumpfile into table SCOTT.EMP2
1. If the original source table can be temporarily renamed:

-- temporarily rename the two tables:
CONNECT scott/tiger
RENAME emp TO emp_temp;
RENAME emp2 TO emp;

-- import data into table EMP:
impdp system/manager directory=my_dir dumpfile=exp_emp.dmp logfile=imp_emp.log full=y
table_exists_action=append content=data_only

-- rename tables back to their original names:
CONNECT scott/tiger
RENAME emp TO emp2;
RENAME emp_temp TO emp;

2. If the original source table cannot be renamed:

-- create a temporary user:
CONNECT system/manager
CREATE USER tempuser IDENTIFIED BY tempuser DEFAULT TABLESPACE users;
ALTER USER tempuser QUOTA UNLIMITED ON users;
GRANT create session, create table TO tempuser;

-- import data into user TEMPUSER:
impdp system/manager directory=my_dir dumpfile=exp_emp.dmp logfile=imp_emp.log full=y
remap_schema=scott:tempuser

-- rename the table in the TEMPUSER schema:
CONNECT tempuser/tempuser
RENAME emp TO emp2;

-- export data again, now from (renamed) table TEMPUSER.EMP2:
expdp system/manager directory=my_dir dumpfile=exp_emp2.dmp logfile=exp_emp2.log
tables=tempuser.emp2 content=data_only

-- import data into SCOTT.EMP2:
impdp system/manager directory=my_dir dumpfile=exp_emp2.dmp logfile=imp_emp2.log full=y
remap_schema=tempuser:scott table_exists_action=append

-- check results, and cleanup:
CONNECT system/manager
SELECT COUNT(*) FROM scott.emp;
  COUNT(*)
----------
        14
SELECT COUNT(*) FROM scott.emp2;
  COUNT(*)
----------
        14
DROP USER tempuser CASCADE;

Remarks:

(1) Note that within the same database, this could also be achieved with a single INSERT statement:

SQL> insert into scott.emp2 (select * from scott.emp);

(2) Note that with the original (non-DataPump) export and import utilities there was another solution if the tablename in the export dumpfile did not exist in the schema in the target database. E.g. based on previous example:

-- export from source database:
exp system/manager file=exp_emp.dmp log=exp_emp.log tables=scott.emp

-- on target database: ensure that table EMP doesn't exist:
CONNECT scott/tiger
CREATE TABLE emp2 AS SELECT * FROM emp WHERE 1=2;
DROP TABLE emp;

-- create synonym for table EMP2:
CREATE SYNONYM emp FOR emp2;

-- import data from EMP into table EMP2:
imp system/manager file=exp_emp.dmp log=imp_emp.log full=y ignore=y indexes=n constraints=n

(3) Starting with Oracle 11g, Data Pump provides a new parameter, REMAP_TABLE, to import table data into a new table name on the target database. See the 11g Oracle Database Utilities guide for more on this feature.

-- import data from EMPLOYEES into table EMPS:
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees REMAP_TABLE=hr.employees:emps

February 18, 2011

Datapump Export and Import Usage

I found this article on Datapump useful, so I’m recording it here. Reference: http://www.datadisk.co.uk/html_docs/oracle/data_pump.htm

 

Data Pump Export and Import

Data Pump takes the old export and import utilities one step further: you have total control over the running job (stop it, pause it, check it, restart it). Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; Data Pump jobs can be 15-45% faster than the older import/export utilities. Advantages of using Data Pump are:

  • ability to estimate job times
  • ability to restart failed jobs
  • perform fine-grained object selection
  • monitor running jobs
  • directly load a database from a remote instance via the network
  • remapping capabilities
  • improved performance using parallel executions

A couple of notes: you cannot export to a tape device, only to disk, and the import will only work with Oracle version 10.1 or greater. Also remember that expdp and impdp are command-line tools and run from the operating system.

Data Pump Uses

You can use data pump for the following

  • migrating databases
  • copying databases
  • transferring oracle databases between different operating systems
  • backing up important tables before you change them
  • moving database objects from one tablespace to another
  • transporting tablespaces between databases
  • reorganizing fragmented table data
  • extracting the DDL for tables and other objects such as stored procedures and packages

Data Pump components

Data pump technology consists of three major components

  • dbms_datapump – the main engine for driving data dictionary metadata loading and unloading
  • dbms_metadata – used to extract the appropriate metadata
  • command-line – expdp and impdp are the command-line export/import clients

Data Access methods

Data Pump has two methods for loading data: direct path and external table path. As a DBA you have no control over which method Data Pump uses; normally, simple structures such as heap tables without triggers will use direct path, while more complex tables will use the external table path. Oracle will always try to use the direct-path method first.

  • Direct Path – bypasses the database buffer cache and writes beyond the high water mark, adjusting the high water mark when finished. No undo is generated, and redo can be switched off as well, so there is minimal impact on users since the SGA is not used. Triggers must be disabled on the tables before use.
  • External Path – uses the database buffer cache and acts as a SELECT statement into a dump file; during import it reconstructs the rows into INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used just as a DML statement would.

Oracle will use the external table path if any of the following are in use:

  • clustered tables
  • active triggers in the table
  • a single partition in a table with a global index
  • referential integrity constraints
  • domain indexes on LOB columns
  • tables with fine-grained access control enabled in the insert mode
  • tables with BFILE or opaque type columns

Data Pump files

You will use three types of files when using Data Pump; all files are created on the server.

  • dump files – hold the data and metadata
  • log files – the resulting output from the data pump command
  • sql files – contain the DDL statements describing the objects included in the job, but no data
  • master data pump table – when using Data Pump it creates a table within the schema of the user running the job; this is used for controlling the Data Pump job, and the table is removed when the job finishes

Data Pump privileges

In order to use the advanced features of Data Pump, you need the exp_full_database and imp_full_database roles.
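
For example, granting these roles to a hypothetical user scott (a sketch):

SQL> grant exp_full_database, imp_full_database to scott;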

How Data Pump works

The Master Control Process (MCP) has the process name DMnn; one master process runs per job and controls the whole Data Pump job. It performs the following:

  • creates jobs and controls them
  • creates and manages the worker processes
  • monitors the jobs and logs the progress
  • maintains the job state and restart information in the master table (created in the schema of the user running the job)
  • manages the necessary files including the dump file set

The master process creates a master table which contains job details (state, restart info); this table is created in the schema of the user running the Data Pump job. Once the job has finished, it dumps the table contents into the dump file and deletes the table. When you import the dump file, it re-creates the table and reads it to verify the correct sequence in which it should import the various database objects.

The worker processes are named DWnn and are the processes that actually perform the work; you can have a number of worker processes running on the same job (parallelism). The worker processes update the master table with the job status.

The shadow process is created when the client logs in to the Oracle server; it services Data Pump API requests and creates the job, which consists of the master table and the master process.

The client processes are the expdp and impdp commands.
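
Because the job state lives in the master table, you can detach from a running job and re-attach later. A sketch (the job name comes from dba_datapump_jobs; the name shown is only an example of the default pattern):

SQL> select owner_name, job_name, state from dba_datapump_jobs;

$ expdp vallep/password attach=SYS_EXPORT_SCHEMA_01
-- then use STOP_JOB, START_JOB or KILL_JOB at the Export> prompt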

Running Data Pump

You can run Data Pump either from the command line, specifying options directly, or with a parameter file. There are many options, so it is best to check the Oracle documentation; I have given a few examples below.

Exporting

-- full database
expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log

Note: increase the parallel option based on the number of CPUs you have

-- schema
expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log

-- tables
expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only

-- tablespace
expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log

Importing

-- full database
impdp system/password full=y dumpfile=data.dmp nologfile=y

-- schema remap
impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only

impdp system/passwd remap_schema='TEST:TEST3' tables=test logfile=… dumpfile=… directory=…

Other Options

  • directory – specifies an Oracle directory object
  • filesize – split the dump file into specific sizes (could be used if the filesystem has a 2GB limit)
  • parfile – specify the parameter file
  • content – can be ALL, METADATA_ONLY or DATA_ONLY
  • compression – compression is used by default, but you can turn it off
  • exclude/include – metadata filtering
  • query – selectively export table data using a SQL statement
  • estimate – calculate job estimates, where the valid keywords are blocks and statistics
  • estimate_only – calculate job estimates without performing the export
  • network_link – perform an export across a network
  • encryption – encrypt data within the dump file
  • parallel – increase worker processes to improve throughput, based on the number of CPUs
  • remap_schema – move objects from one schema to another
  • remap_datafile – change the name of the datafile when moving across different systems
  • remap_tablespace – move from one tablespace to another

Useful Views

  • DBA_DATAPUMP_JOBS – summary information on all currently running data pump jobs
  • DBA_DATAPUMP_SESSIONS – displays the users currently running data pump jobs
  • V$SESSION_LONGOPS – displays information such as totalwork, sofar, units and opname

Privileges

  • IMP_FULL_DATABASE – required if using advanced features
  • EXP_FULL_DATABASE – required if using advanced features

DBMS_DATAPUMP package

The package dbms_datapump can be used for the following

  • starting/stopping/restarting a job
  • monitoring a job
  • detaching from a job
Exporting:

declare
  d1 number;
begin
  d1 := dbms_datapump.open('EXPORT','SCHEMA',null,'test1','LATEST');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_filter(d1, 'SCHEMA_EXPR', 'IN (''OE'')');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/

Importing:

declare
  d1 number;
begin
  d1 := dbms_datapump.open('IMPORT','FULL',null,'test1');
  dbms_datapump.add_file(d1, 'test1.dmp', 'dmpdir');
  dbms_datapump.metadata_remap(d1, 'REMAP_SCHEMA', 'OE', 'HR');
  dbms_datapump.start_job(d1);
  dbms_datapump.detach(d1);
end;
/

February 14, 2011

Refreshing development environment using expdp/impdp

Introduction

This note describes the procedure of refreshing a schema in development environment with latest production data.

Environment

Production Database

  • machine: 192.168.1.46
  • OS user: glporacle
  • DB name: GOLDPROD
  • user to perform expdp: expuser/expuser
  • export directory: /gold/GLP/archive/expdp

Development Database

  • Test machine: 192.168.1.42
  • OS user: gltoracle
  • DB name: GOLDTEST
  • user to perform impdp: expuser/expuser
  • import directory: /gold/GLT/archive/impdp

Export schema CENPROD from production using expdp

ssh 192.168.1.46
su - glporacle
cd /gold/GLP/archive/expdp

-- Verify there is enough space to hold the dump files

The schema CENPROD is currently estimated to occupy 10G; ensure there is enough space in the current directory to hold the export dump files.


df -k /gold/GLP/archive

Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
/dev/goldglparclv    73400320  46348548 37%      961     1% /gold/GLP/archive
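
The 10G estimate itself can be refreshed before each run with Data Pump’s own estimator; a sketch (estimate_only=y writes no dump file):

expdp expuser/expuser schemas=CENPROD directory=EXPDP_DIR estimate_only=y estimate=blocks logfile=estimate_CENPROD.log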

-- Verify that no old dump files (*.dmp) remain

If any exist, delete them before starting the export:

rm *.dmp

-- Run the export as database user ‘expuser’

nohup expdp parfile=expdp_user_cenprod.par &

where parfile has the following contents:

userid="expuser/expuser"
schemas=CENPROD
directory=EXPDP_DIR
parallel=4
dumpfile=user_CENPROD_%U.dmp
logfile=expdp_CENPROD.log

If any errors or warnings show up, refer to the log file “expdp_CENPROD.log” in the current directory for details.

Move dump files to testing environment

Transfer all dump files to development machine.

Import into CENTEST on development database

ssh 192.168.1.42
su - gltoracle
cd /gold/GLT/archive/impdp

$ ls -l

total 19990928
-rw-r--r--    1 gltoracl gloinst         307 Feb 11 15:13 02_grant_java_permission_to_centest.sql
-rw-r--r--    1 gltoracl gloinst        1728 Feb 04 17:02 generate_recreate_script.sql
-rw-r--r--    1 gltoracl gloinst         207 Feb 04 17:31 impdp_user_CENTEST.par
-rw-r--r--    1 gltoracl gloinst         424 Feb 14 10:32 build_replace_synonyms_script.sql
-rw-r-----    1 gltoracl gloinst    23924736 Feb 04 16:56 user_CENPROD_01.dmp
-rw-r-----    1 gltoracl gloinst  3872714752 Feb 04 16:57 user_CENPROD_02.dmp
-rw-r-----    1 gltoracl gloinst  2738532352 Feb 04 16:57 user_CENPROD_03.dmp
-rw-r-----    1 gltoracl gloinst  3600035840 Feb 04 16:58 user_CENPROD_04.dmp

Generate recreation script of schema CENTEST

The file “build_recreate_centest_script.sql” generates a recreation script:

set echo off
define username='CENTEST'
set lines 132
set head off
set feedback off
set verify off
set termout on
set long 10000
REM pause "Please give the user that you're going to recreate"
ACCEPT user_name CHAR default &username prompt 'Insert user name:[&username]'

spool 01_recreate_user_&&username\.sql

prompt spool 01_recreate_user_&&username\.log

select 'create user '||USERNAME||' identified by values '''||PASSWORD||''''||chr(10)|| -
'temporary tablespace '||TEMPORARY_TABLESPACE||chr(10)|| -
'default tablespace '||DEFAULT_TABLESPACE||chr(10)|| -
'profile '||PROFILE||';' -
from dba_users where username='&&user_name';

select 'alter user '||username||' quota '||decode(MAX_BYTES,-1,'unlimited',MAX_BYTES)||' on '||tablespace_name||';'
from dba_ts_quotas where username='&&user_name';

select 'grant '||PRIVILEGE||' to '||GRANTEE||decode(ADMIN_OPTION,'YES',' with admin option',null)||';'
from dba_sys_privs where grantee='&&user_name';

select 'grant '||GRANTED_ROLE||' to '||GRANTEE||decode(ADMIN_OPTION,'YES',' with admin option',null)||';'
from dba_role_privs where grantee='&&user_name';

select 'grant '||PRIVILEGE||' on '||OWNER||'.'||TABLE_NAME||' to '||GRANTEE|| -
decode(GRANTABLE,'YES',' with grant option',null)||decode(HIERARCHY,'YES',' with hierarchy option',null)||';' -
from dba_tab_privs where grantee='&&user_name' -
 and table_name not in (select DIRECTORY_NAME from dba_directories);

select 'grant ' || PRIVILEGE || ' on directory ' || DIRECTORY_NAME || ' to CENTEST;' -
  from dba_tab_privs a,  -
       dba_directories b  -
 where a.grantee='CENTEST' -
   and a.TABLE_NAME=b.DIRECTORY_NAME;

prompt spool off
prompt quit
spool off

$sqlplus expuser/expuser @build_recreate_centest_script.sql

A script named 01_recreate_user_CENTEST.sql is generated in the current directory.

Drop user CENTEST

sqlplus expuser/expuser
SQL> select sid, serial#, username, machine, program, status from v$session where username= 'CENTEST';

If it returns any sessions of ‘CENTEST’, kill them

SQL> alter system kill session 'sid, serial#';

Once the query above returns no more CENTEST sessions, drop the user:

SQL> drop user centest cascade;

Recreate CENTEST user structure

SQL> @01_recreate_user_CENTEST.sql

Check the log file “01_recreate_user_CENTEST.log” in the current directory and correct any errors before proceeding.

Import data into CENTEST

The parameter file “impdp_user_CENTEST.par” has the following contents:

userid="expuser/expuser"
REMAP_SCHEMA=CENPROD:CENTEST
REMAP_TABLESPACE=CENPROD_DATA:CENTEST_DATA,CENPROD_IDX:CENTEST_IDX
DIRECTORY=IMPDP_DIR
EXCLUDE=GRANT
PARALLEL=4
DUMPFILE=user_CENPROD_%U.dmp
LOGFILE=impdp_CENPROD_to_CENTEST.log

Launch the Data Pump import:

nohup impdp expuser/expuser parfile=impdp_user_CENTEST.par &

Master table "EXPUSER"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "EXPUSER"."SYS_IMPORT_FULL_01":  expuser/******** parfile=impdp_user_CENTEST.par
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"CENTEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "CENTEST"."STOCOUCH"                        262.3 MB 1813316 rows
......

Check the log file impdp_CENPROD_to_CENTEST.log in the current directory and correct any errors before proceeding.

Replace synonym references

Run the following script as user ‘expuser’ to build a synonym replacement script.

spool 03_replace_synonyms.sql
set lines 200 pages 0 echo off feedback off
prompt spool 03_replace_synonyms\.log
select 'CREATE OR REPLACE SYNONYM ' || owner ||'.' || SYNONYM_NAME || ' FOR ' ||
       decode(TABLE_OWNER,
              '&fromuser','&touser' || '.',
              null, '',
              TABLE_OWNER || '.') || TABLE_NAME || decode(TABLE_OWNER, null, '@'|| db_link || ';', ';')
from dba_synonyms where OWNER='CENTEST';
prompt spool off
prompt quit
spool off
quit

sqlplus expuser/expuser @build_replace_synonyms_script.sql

A script named “03_replace_synonyms.sql” is created in the current directory. Run it as user ‘expuser’:

sqlplus expuser/expuser @03_replace_synonyms.sql

Check log file “03_replace_synonyms.log” for execution results.

Recompile invalid objects of centest

sqlplus expuser/expuser

SQL> exec dbms_utility.COMPILE_SCHEMA('CENTEST');

After compiling schema CENTEST, verify whether any invalid objects remain:

select object_name, object_type, status from dba_objects where owner='CENTEST' and status ='INVALID';