1) If the database is 10g or later, we can use expdp with ESTIMATE_ONLY=Y ESTIMATE={BLOCKS | STATISTICS}:
expdp schemas=OWNER estimate_only=y estimate=BLOCKS
where
BLOCKS – The estimate is calculated by multiplying the number of database blocks used by the source objects, times the appropriate block sizes.
STATISTICS – The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
2) If the database version is 9i or 10g, we can estimate the export dump file size by querying dba_segments.
e.g., to estimate the size of a schema export:
select sum(bytes)
from dba_segments
where owner = 'OWNER'
and segment_type not in (
'CACHE',
'ROLLBACK',
'TYPE2 UNDO'
);
Caveat: the actual size of the dump file can differ significantly from this estimate if tables are heavily fragmented.
3) If the database version is 9i or 10g, we can get a PRECISE dump size without actually writing the dump file. To achieve this, we create a named pipe, start 'dd' reading from it, point exp at the pipe, and at the end dd reports the actual number of blocks that passed through the pipe.
3.1) Create a pipe called exp.pipe in /tmp directory
(syntax may differ depending on platform)
% cd /tmp
% mknod exp.pipe p
3.2) Start reading from exp.pipe with dd,
dump output to bit bucket (/dev/null),
set blocksize to 1k and execute this process in background
% dd if=/tmp/exp.pipe of=/dev/null bs=1024 &
3.3) Start the export, setting file=/tmp/exp.pipe
% exp user/pwd owner=OWNER file=/tmp/exp.pipe &
3.4) At the end of the export, look for the number of records written:
Export terminated successfully without warnings.
1131864+0 records in
1131864+0 records out
[1] + Done dd if=/tmp/exp.pipe of=/dev/null bs=1024 &
– ‘1131864+0 records out’ shows that 1131864 records of 1024 bytes each passed through the pipe.
– Step 3.2 set the record size (bs) to 1024.
– The actual dump file size would therefore be 1024*1131864 = 1159028736 bytes.
– The format of ‘records out’ is f+p, where f = full blocks and p = partial blocks.
– For example, if step 3.4 returns ‘1131864+1 records out’,
the actual dump file size would be between 1159028736 bytes (1024*1131864) and 1159029760 bytes (1024*(1131864+1)).
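The bound computation above is plain shell arithmetic; the sketch below reuses the ‘1131864+1’ example (the variable names are arbitrary):

```shell
# Convert dd's 'f+p records out' figure into dump-size bounds,
# using the '1131864+1' example with bs=1024.
bs=1024
full=1131864
partial=1
lower=$(( bs * full ))                        # all full records
upper=$(( bs * full + (partial > 0 ? bs : 0) ))  # plus at most one more block
echo "dump size is between $lower and $upper bytes"
```

This prints the same 1159028736 and 1159029760 byte bounds worked out above.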
Caveat: we have to go through the whole export process to get a precise dump size.
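The plumbing in steps 3.1–3.4 can be rehearsed without a database. In the sketch below, head stands in for exp and pushes a known 2048 bytes through the pipe; the file names /tmp/demo.pipe and /tmp/dd.count are arbitrary, and iflag=fullblock assumes GNU dd (it prevents short pipe reads from skewing the block counts):

```shell
# 3.1) create the named pipe (mkfifo is the portable equivalent of 'mknod p')
mkfifo /tmp/demo.pipe
# 3.2) start dd reading from the pipe in the background, counts go to a file
dd if=/tmp/demo.pipe of=/dev/null bs=1024 iflag=fullblock 2>/tmp/dd.count &
# 3.3) 'head' plays the part of exp, writing exactly 2048 bytes to the pipe
head -c 2048 /dev/zero > /tmp/demo.pipe
wait
# 3.4) read off the block counts; 2048 bytes = two full 1024-byte records
cat /tmp/dd.count
rm -f /tmp/demo.pipe
```

Since 2048 bytes is exactly two full 1024-byte records, dd reports ‘2+0 records out’, which is the figure step 3.4 multiplies by bs to get the dump size.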