Archive for ‘exp/imp’

January 24, 2011

Export / Import Performance Tips

EXPORT:

  • Set the BUFFER parameter to a high value (e.g. 2 MB, entered as the integer “2000000”)
  • Set the RECORDLENGTH parameter to a high value (e.g. 64 KB, entered as the integer “64000”)
  • Use DIRECT=yes (direct-path export); see the sample command after this list
  • Stop unnecessary applications to free up resources for your job.
  • If you run multiple export sessions, ensure they write to different physical disks.
  • DO NOT export to an NFS mounted filesystem. It will take forever.
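
A minimal sketch combining these settings (user/pwd, OWNER and the /u02/dump paths are placeholders; BUFFER applies only to conventional-path exports, so with DIRECT=yes the relevant tunable is RECORDLENGTH):

% exp user/pwd owner=OWNER file=/u02/dump/owner.dmp log=/u02/dump/owner.log direct=yes recordlength=64000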

IMPORT:

  • Create an indexfile so that you can create indexes AFTER you have imported the data. Do this by setting INDEXFILE to a file name and running the import: no data will be imported, but a file containing the index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements (see the sketch after this list).
  • Place the file to be imported on a separate physical disk from the Oracle data files
  • Stop redo log archiving if possible
  • Use COMMIT=N in the import parameter file if you can afford it (the import then commits once per table rather than once per buffer, so it needs sufficient rollback/undo space)
  • Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics
  • Remember to run the indexfile created earlier once the data import has finished
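
A minimal sketch of this two-pass import (user/pwd, OWNER and the file names are placeholders): first generate only the index definitions, then load the data without indexes, and finally, after editing the passwords into owner_indexes.sql, run it to build the indexes.

% imp user/pwd file=owner.dmp fromuser=OWNER touser=OWNER indexfile=owner_indexes.sql
% imp user/pwd file=owner.dmp fromuser=OWNER touser=OWNER indexes=n commit=n statistics=none log=owner_imp.log
% sqlplus /nolog @owner_indexes.sql
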
January 24, 2011

Estimate export dump size

1) If the database is 10g or higher, we can use “expdp ESTIMATE_ONLY=Y ESTIMATE={BLOCKS | STATISTICS}”

expdp schemas=OWNER estimate_only=Y estimate=BLOCKS

where
BLOCKS – The estimate is calculated by multiplying the number of database blocks used by the source objects by the appropriate block size.
STATISTICS – The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
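
As a sketch of the STATISTICS variant (OWNER and user/pwd are placeholders), the schema statistics can be refreshed first so the estimate is meaningful:

SQL> exec dbms_stats.gather_schema_stats('OWNER')
% expdp user/pwd schemas=OWNER estimate_only=Y estimate=STATISTICS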

2) If the database version is 9i or 10g, we can estimate the export dump file size by querying “dba_segments”

e.g. to estimate size of exporting a schema

select sum(bytes)
  from dba_segments
 where owner = 'OWNER'
   and segment_type not in (
        'CACHE',
        'ROLLBACK',
        'TYPE2 UNDO'
   );
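
A variation of the same query, summing per owner and reporting gigabytes, can help when several schemas are to be exported:

select owner, round(sum(bytes)/1024/1024/1024, 2) as estimated_gb
  from dba_segments
 where segment_type not in ('CACHE', 'ROLLBACK', 'TYPE2 UNDO')
 group by owner
 order by estimated_gb desc;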

Caveat: the actual size of the dump file can differ considerably from this estimate if the tables are heavily fragmented.

3) If the database version is 9i or 10g, we can get a PRECISE dump size without actually generating the dump file. To achieve this, we create a pipe, use ‘dd’ to read from it, redirect exp output to the pipe, and at the end dd reports the actual number of blocks that passed through the pipe.

3.1) Create a pipe called exp.pipe in /tmp directory
(syntax may differ depending on platform)

% cd /tmp
% mknod exp.pipe p

3.2) Start reading from exp.pipe with dd,
dump output to bit bucket (/dev/null),
set blocksize to 1k and execute this process in background

% dd if=/tmp/exp.pipe of=/dev/null bs=1024 &

3.3) Start the export, setting file=/tmp/exp.pipe

% exp user/pwd owner=OWNER file=/tmp/exp.pipe &

3.4) At the end of exp, look for the number of records written

Export terminated successfully without warnings.
1131864+0 records in.
1131864+0 records out.
[1] +  Done                    dd if=/tmp/exp.pipe of=/dev/null bs=1024 &

– ‘1131864+0 records out’ shows that 1131864 records of 1024 bytes passed through the pipe, i.e. the size the exp dumpfile would have.
– Step 3.2 specifies a record size (bs) of 1024.
– The size of the actual dumpfile would be 1024*1131864 = 1159028736 bytes.
– The format of ‘records out’ is f+p, where f = full blocks and p = partial block.
– For example, if step 3.4 returns ‘1131864+1 records out’,
your actual dumpfile size will be between 1159028736 bytes (1024*1131864) and 1159029760 bytes (1024*(1131864+1)).
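
For quick reference, the same arithmetic can be done in the shell with ‘expr’ (the counts are taken from dd’s ‘records out’ line):

% expr 1131864 \* 1024
1159028736
% expr 1131865 \* 1024
1159029760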

Caveat: we have to go through a whole export run just to get a precise dump size.
