Export (exp) and Import (imp) are logical backup and recovery utilities. When you export, database objects are dumped to a binary file which can then be imported into another Oracle database.
The Export and Import utilities provide a simple way for you to transfer data objects between Oracle databases, even if they reside on platforms with different hardware and software configurations.
When you run Export against an Oracle database, objects (such as tables) are extracted, followed by their related objects (such as indexes, comments, and grants), if any. The extracted data is written to an export dump file. The Import utility reads the object definitions and table data from the dump file. First, let us see export in detail.
Difference between exp/imp and expdp/impdp
exp/imp is the traditional export/import utility, while expdp/impdp is the Data Pump export/import utility.
Data Pump accesses files on the server (using Oracle directory objects). Traditional export can access files on both the client and the server (it does not use Oracle directories).
exp/imp (traditional) uses the conventional path, while expdp/impdp (Data Pump) uses the direct path.
exp (traditional) works in byte mode, while Data Pump works in block mode.
Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.
With the Data Pump utility, jobs can be stopped and restarted.
Features of the Data Pump utility
Job estimation can be done in Data Pump.
Data remapping can be done using the REMAP_DATA parameter.
The EXCLUDE and INCLUDE parameters allow fine-grained object selection (see the example after this list).
Failed export/import jobs can be restarted.
Export and import can be taken over the network using database links, without even generating a dump file, via the NETWORK_LINK parameter.
The CONTENT parameter gives you the freedom to choose what to export, with the options METADATA_ONLY, DATA_ONLY, and ALL.
You don't need to specify the BUFFER size in Data Pump.
The estimated completion time of a job can be monitored from the v$session_longops view.
The dump file can be compressed with the COMPRESSION parameter. In conventional exp/imp you have to compress the dumps using OS utilities.
Data encryption can be done in Data Pump.
Data Pump has interactive commands such as ADD_FILE, START_JOB, KILL_JOB, and STOP_JOB.
The REUSE_DUMPFILES parameter lets you overwrite an existing dump file.
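For illustration only, here is a rough sketch of how a few of these options can be combined in a single schema-level export, and how a running job can be re-attached and stopped or resumed from the interactive prompt. The schema name, dump file names, password, and job name below are placeholders, not values from a real environment:
$ expdp system/password schemas=scott directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp logfile=scott_exp.log exclude=statistics compression=all content=all
To stop the running job and resume it later, attach to it from another session:
$ expdp system/password attach=SYS_EXPORT_SCHEMA_01
Export> STOP_JOB=IMMEDIATE
$ expdp system/password attach=SYS_EXPORT_SCHEMA_01
Export> START_JOB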
Thank you for giving your valuable time to read the above information.
In this blog, we will see how to upgrade Oracle Database 12c to 19c using the AutoUpgrade tool.
What is AutoUpgrade?
The Oracle Database AutoUpgrade utility is a small command-line tool that allows you to upgrade your databases very easily, with very little interaction.
The new AutoUpgrade utility in Oracle 19c performs almost 99% of the task by itself; we just have to provide inputs during the initial phase.
It performs prechecks against multiple databases and can upgrade multiple databases in one go.
It also handles the post-upgrade steps, object recompilation, and time zone upgrade.
The only thing you need to provide is a config file in text format.
Which database releases are supported?
As a source, the minimum supported release is Oracle Database 11.2.0.4.
Download the latest auto-upgrade jar file
The AutoUpgrade utility (autoupgrade.jar) exists by default under the $ORACLE_HOME/rdbms/admin directory of the Oracle 19c home.
Oracle strongly recommends downloading the latest AutoUpgrade version before doing the upgrade.
You can download the most recent version from MOS Note 2485487.1 (AutoUpgrade Tool).
Once you have downloaded the jar file, transfer it to the server, create a new directory, and place the file in that directory, for example as shown below.
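For example (the staging directory simply mirrors the log location used later in this post, and the -version check only confirms which build of the jar you have; adjust the paths to your environment):
$ mkdir -p /home/oracle/auto_upgrade_19c
$ cp autoupgrade.jar /home/oracle/auto_upgrade_19c/
$ java -jar /home/oracle/auto_upgrade_19c/autoupgrade.jar -version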
The prddb_config.cfg file should contain entries like the following, which specify the source and target database home locations, the database name, the log locations, and other information:
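A minimal example of such a config file is sketched below. The database name prddb and the Oracle home paths are illustrative assumptions and must be replaced with your own values; the log directory matches the one referenced later in this post:
global.autoupg_log_dir=/home/oracle/auto_upgrade_19c/upg_logs
upg1.dbname=prddb
upg1.sid=prddb
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.log_dir=/home/oracle/auto_upgrade_19c/upg_logs
upg1.start_time=NOW
upg1.restoration=yes
With restoration=yes, AutoUpgrade creates a guaranteed restore point before the upgrade, which is the restore point dropped later in the post-upgrade section.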
AutoUpgrade's analyze mode checks your database to see whether it is ready for the upgrade. It only reads data from the database and does not perform any updates.
Execute AutoUpgrade in analyze mode with the syntax below:
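Assuming autoupgrade.jar and prddb_config.cfg are both under /home/oracle/auto_upgrade_19c (paths are assumptions), the analyze run and the subsequent deploy run, which performs the actual upgrade, would look roughly like this:
$ java -jar /home/oracle/auto_upgrade_19c/autoupgrade.jar -config /home/oracle/auto_upgrade_19c/prddb_config.cfg -mode analyze
$ java -jar /home/oracle/auto_upgrade_19c/autoupgrade.jar -config /home/oracle/auto_upgrade_19c/prddb_config.cfg -mode deploy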
Once the upgrade process is started, consider monitoring the AutoUpgrade logs and the database alert log to track the progress of the upgrade. AutoUpgrade logs are available under
/home/oracle/auto_upgrade_19c/upg_logs/
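While the deploy job is running, the AutoUpgrade console also lets you check progress interactively; for example (the job number will differ in your run):
upg> lsj
upg> status -job 100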
Once the upgrade finishes, cross-check the following:
SELECT VERSION FROM V$TIMEZONE_FILE;
select name, open_mode, version, status from v$database, v$instance;
Post-upgrade tasks
Once the upgrade is successful and all testing is done, drop the restore point.
Drop the Guaranteed restore point
select name from v$restore_point;
drop restore point restorepoint_name;
Change the compatible parameter
After the upgrade, the database has to be tested properly before updating the compatible parameter. Once the parameter is updated database cannot be downgraded.
show parameter compatible;
alter system set compatible='19.0.0' scope=spfile;
shutdown immediate;
startup;
show parameter compatible;
Hope this blog was useful…
Step 1: Check the database size in the source.
Step 2: Check which tablespace holds the schema objects.
Step 3: Compile invalid objects in the source.
Step 4: Check the count of invalid objects in the source.
Step 5: Create a directory for the export, both at the OS level and the database level.
Step 6: Estimate the size of the dump file, so that we know roughly when the export will complete.
Step 7: Export the database (TANSTAL).
Step 8: Create a fresh database for the import.
Step 9: Create a directory, at both the OS level and the database level, for the import in the newly created database.
Step 10: Import the database (ZHIGOMA).
Step 11: Post-upgrade steps in the target database.
11.1 Compile invalid objects in the target database.
11.2 Check whether any invalid objects are present.
11.3 Run the query to check the currently installed database components.
Step 12: Verify the time zone of the upgraded database.
Step 13: Check the CONSTRAINTS count in both source and target; it displays the constraints that are defined in the database.
Step 1: Check the database size in the source.
SQL> col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p;
Database Size Used space Free space
-------------------- -------------------- --------------------
2 GB 2 GB 0 GB
SQL>
Step 2: Execute the following query to check which tablespace holds the schema objects.
set pagesize 130
break on Tablespace on Owner
column Objects format A20
select Tablespace_Name, Owner, COUNT(*)||' tables' Objects
from DBA_TABLES
group by Tablespace_Name,Owner
union
select Tablespace_Name, Owner, COUNT(*)||' indexes' Objects
from DBA_INDEXES
group by Tablespace_Name, Owner;
Step 3: Compile invalid objects in the source database using utlrp.sql, so that only genuinely invalid objects remain before the export.
SQL> @?/rdbms/admin/utlrp.sql
COMP_TIMESTAMP UTLRP_BGN 2021-01-31 20:19:36
DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.
DOC>
DOC> Use the following queries to track recompilation progress:
DOC>
DOC> 1. Query returning the number of invalid objects remaining. This
DOC> number should decrease with time.
DOC> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC> 2. Query returning the number of objects compiled so far. This number
DOC> should increase with time.
DOC> SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC> This script automatically chooses serial or parallel recompilation
DOC> based on the number of CPUs available (parameter cpu_count) multiplied
DOC> by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC> On RAC, this number is added across all RAC nodes.
DOC>
DOC> UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC> recompilation. Jobs are created without instance affinity so that they
DOC> can migrate across RAC nodes. Use the following queries to verify
DOC> whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC> 1. Query showing jobs created by UTL_RECOMP
DOC> SELECT job_name FROM dba_scheduler_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC> 2. Query showing UTL_RECOMP jobs that are running
DOC> SELECT job_name FROM dba_scheduler_running_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
PL/SQL procedure successfully completed.
COMP_TIMESTAMP UTLRP_END 2021-01-31 20:19:39
DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#
0
DOC> The following query reports the number of exceptions caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC> logged into this table: they go into DBA_ERRORS instead.
DOC>#
0
Function created.
PL/SQL procedure successfully completed.
Function dropped.
PL/SQL procedure successfully completed.
Step 4: Check the count of invalid objects in dba_objects in the source.
SQL> select count(*) from dba_objects where status='INVALID';
COUNT(*)
----------
0
Step 5: Create a directory for the export, both at the OS level and the database level.
mkdir /u01/export
create directory export as '/u01/export';
Directory created
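If the export will be run by a non-DBA account, that account typically also needs read/write privileges on the directory object; the grantee below is just an example:
SQL> grant read, write on directory export to rahul;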
[oracle@orcldbs ~]$ mkdir export
[oracle@orcldbs ~]$ ls
12c.env Documents export oraprod.env Templates
19c.env Downloads Music Pictures utlrp.out
Desktop em13400_linux64-3.zip oradiag_oracle Public Videos
Create a table employee under the user rahul, for example as sketched below.
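For instance, a small test table can be created like this (the password, privileges, and columns are arbitrary choices for illustration; skip the user creation if rahul already exists):
SQL> create user rahul identified by rahul quota unlimited on users;
SQL> grant create session, create table to rahul;
SQL> create table rahul.employee (emp_id number primary key, emp_name varchar2(50));
SQL> insert into rahul.employee values (1, 'Test User');
SQL> commit;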
Step 6: Estimate the size of the dump file so that we know roughly how long the export will take.
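One convenient way to get the estimate is the ESTIMATE_ONLY parameter; the full export itself (Step 7) and the import into the new database (Step 10) then use similar syntax. The dump file names and the system password below are placeholders:
$ expdp system/password directory=export full=y estimate_only=yes logfile=estimate.log
$ expdp system/password directory=export dumpfile=tanstal_full_%U.dmp logfile=tanstal_exp.log full=y
$ impdp system/password directory=export dumpfile=tanstal_full_%U.dmp logfile=zhigoma_imp.log full=y
Run the impdp command against the new database only after the dump file has been placed in the import directory created in Step 9.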
Once the import is over, you can also review the log file.
Step 11: Post-upgrade steps in the target database.
11.1 Compile invalid objects in the target database.
SQL> @?/rdbms/admin/utlrp.sql
11.2 Check whether any invalid objects are present.
SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0
Step 12: Verify the time zone of the upgraded database.
SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
        32
11.3 Run the query to check the currently installed database components.
SQL> col COMP_ID for a10
SQL> col COMP_NAME for a40
SQL> col VERSION for a15
SQL> set lines 180
SQL> set pages 999
SQL> select COMP_ID, COMP_NAME, VERSION, STATUS from dba_registry;
Step 13: Check the CONSTRAINTS count in both source and target; it displays the constraints that are defined in the database.
SQL> SELECT constraint_type, count(*) AS num_constraints
FROM dba_constraints
GROUP BY constraint_type;
C NUM_CONSTRAINTS
C 5784
F 12
O 181
R 327
P 864
V 11
U 250
7 rows selected.
SQL>
Now we can log in to the ZHIGOMA database and verify that our user (rahul) and the employee table owned by that user have been imported, for example with the checks below.
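For example (assuming the test objects created earlier), the following checks confirm the schema and the table made it across:
SQL> select username from dba_users where username = 'RAHUL';
SQL> select owner, table_name from dba_tables where owner = 'RAHUL' and table_name = 'EMPLOYEE';
SQL> select count(*) from rahul.employee;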
We have successfully upgraded our database from 12c to 19c using Data Pump!