Oracle Wait Events And Their Solutions

Buffer Busy Wait:

This wait event occurs when a session tries to access a block in the buffer cache but cannot, because the buffer is busy: another session is modifying the block and its contents are in flux.

To diagnose buffer busy waits, first identify the file and block being waited on, then map them to the owning segment:

SQL> SELECT s.sql_hash_value, sw.p1 file#, sw.p2 block#, sw.p3 reason
FROM v$session_wait sw, v$session s
WHERE sw.event = 'buffer busy waits'
AND sw.sid = s.sid; 

no rows selected
SQL> SELECT  owner , segment_name , segment_type
FROM  dba_extents
WHERE  file_id = &FileNumber
AND  &BlockNumber BETWEEN block_id AND block_id + blocks -1;
Enter value for filenumber: 1
old   3: WHERE  file_id = &FileNumber
new   3: WHERE  file_id = 1
Enter value for blocknumber: 2
old   4: AND  &BlockNumber BETWEEN block_id AND block_id + blocks -1
new   4: AND  2 BETWEEN block_id AND block_id + blocks -1

no rows selected

Another query that can be very useful is finding the objects in the entire Oracle database that are suffering from "buffer busy waits". The following query gives the top 10 segments:
SQL> SELECT * FROM (
   SELECT owner, object_name, subobject_name, object_type,
          tablespace_name, value
   FROM v$segment_statistics
   WHERE statistic_name='buffer busy waits' and owner not like '%SYS%'
   ORDER BY value DESC)
WHERE ROWNUM <=10;

OWNER                OBJECT_NAME                    SUBOBJECT_NAME                 OBJECT_TYPE        TABLESPACE_NAME                  VALUE
-------------------- ------------------------------ ------------------------------ ------------------ ------------------------------ ----------
GSMADMIN_INTERNAL    DDLID$                                                        TABLE              SYSAUX                               0
XDB                  XDB$ROOT_INFO                                                 TABLE              SYSAUX                               0
XDB                  XDB$SCHEMA_URL                                                INDEX              SYSAUX                               0

DB File Sequential Read

The db file sequential read wait event has three parameters:
file#, first block#, and block count.
In Oracle Database 11g, this wait event falls under the User I/O wait class. 


The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.

The two important numbers to look for are the TIME_WAITED and AVERAGE_WAIT by individual sessions.

Significant db file sequential read wait time is most likely an application issue.


This event occurs when a session performs a physical I/O to read a single block from disk into the buffer cache. It typically happens when table data is accessed through an index rather than a full table scan, resulting in single-block reads.
Possible causes are use of the wrong index, index fragmentation, or excessive I/O traffic on specific disks. To address it, make sure queries use the right index and rebuild fragmented indexes.
This wait event appears very frequently in AWR and ADDM reports, and its presence does not always indicate a problem. However, if it is accompanied by significant time spent on enqueue and latch free waits, the database should be investigated more closely.
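As a starting point, a minimal sketch against v$session_event shows which sessions are accumulating the most time on this event, using the TIME_WAITED and AVERAGE_WAIT columns mentioned above:

SQL> SELECT se.sid, s.username, se.total_waits, se.time_waited, se.average_wait
FROM v$session_event se, v$session s
WHERE se.sid = s.sid
AND se.event = 'db file sequential read'
ORDER BY se.time_waited DESC;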

 DB File Scattered Read

This wait event occurs when multiple physical blocks are read into the buffer cache in a single I/O and scattered into buffers that are not physically adjacent to each other, typically during a full table scan or an index fast full scan. In other words, db file scattered read is the multiblock-read counterpart of db file sequential read.

A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan. The db file scattered read wait event identifies that a full scan is occurring. When performing a full scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other.

Multiblock (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) reads due to full scans into the buffer cache show up as waits for 'db file scattered read'.
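A quick, hedged check of the current multiblock read setting and of which sessions are doing the most multiblock reads might look like this:

SQL> show parameter db_file_multiblock_read_count

SQL> SELECT se.sid, s.sql_id, se.total_waits, se.time_waited
FROM v$session_event se, v$session s
WHERE se.sid = s.sid
AND se.event = 'db file scattered read'
ORDER BY se.time_waited DESC;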

Direct path Read

This event occurs when the Oracle instance reads data from the datafiles asynchronously directly into the PGA, bypassing the buffer cache in the SGA.
It usually occurs when the temporary (TEMP) tablespace is used for sorting, during reads of LOB segments, and when multiple sessions run parallel full table scans.
To address it, increase the relevant memory (PGA), avoid parallel operations unless they are required, and pay attention to LOB segment reads.
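As an illustrative sketch, the following query against v$tempseg_usage shows which sessions and SQL statements are driving the temp segment (sort) activity that often accompanies direct path reads:

SQL> SELECT username, sql_id, tablespace, segtype, blocks
FROM v$tempseg_usage
ORDER BY blocks DESC;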

 DB CPU

This statistic represents the total time user sessions spend on the CPU. Time used by Oracle background processes (SMON, PMON, etc.) is not included.
If this value is high, the instance is spending most of its time on the CPU. To reduce it, tune the statements listed in the "SQL ordered by CPU" section of the AWR report.
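Outside of AWR, a rough sketch of the same idea is to pull the top CPU consumers straight from the shared pool (v$sql reports CPU_TIME in microseconds, so the example converts it to seconds):

SQL> SELECT * FROM (
SELECT sql_id, executions, cpu_time/1000000 AS cpu_seconds
FROM v$sql
ORDER BY cpu_time DESC)
WHERE ROWNUM <= 10;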

Logfile sync

This event is the time user sessions spend waiting for the LGWR process to flush the redo buffer to disk when they issue a transaction commit or rollback.
If this wait appears continuously, either the I/O performance of the LGWR process is poor or the application is committing too frequently. The remedies are to avoid unnecessarily frequent commits, to examine the I/O performance of the disks holding the redo log files, and to move them to high-performance storage such as SSD if necessary.
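A hedged way to gauge this at the instance level is to compare 'log file sync' (what user sessions see) with 'log file parallel write' (what LGWR sees at the disk):

SQL> SELECT event, total_waits, time_waited, average_wait
FROM v$system_event
WHERE event IN ('log file sync', 'log file parallel write');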

Enq: TX – row lock contention

This event occurs when a session tries to update or delete a row that is locked by another session, which usually points to an application design problem. Normally, when a transaction finishes, a commit or rollback releases the locked rows.
The remedy is to have the session holding the lock commit or roll back its transaction; if that session is no longer active, roll it back or kill it to release the lock.
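To see who is blocked and by whom, a minimal sketch against v$session is usually enough (the KILL SESSION syntax is shown with placeholder values):

SQL> SELECT sid, serial#, username, blocking_session, event
FROM v$session
WHERE event = 'enq: TX - row lock contention';

SQL> ALTER SYSTEM KILL SESSION '<sid>,<serial#>';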

ARCH wait on SENDREQ

This wait event is the total time the archiver processes spend shipping archived redo to the standby in a Data Guard configuration and writing those archives to local disk.
The usual reason this value is high is that archives sent to the standby side arrive late because of the network. To address it, optimize the network and set the DEFAULT_SDU_SIZE parameter in the sqlnet.ora file to a larger value (for example 32767).
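As a sketch, the SDU setting goes into sqlnet.ora on both the primary and standby sides (32767 is the value recommended above):

DEFAULT_SDU_SIZE=32767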

 Gc current block busy

This wait event occurs between the nodes of a cluster database (Real Application Clusters). When a transaction requests a block, the request is sent to the block's master instance and is normally satisfied by Cache Fusion.
In some cases, however, the block transfer is delayed because the block is still held busy by the other instance or because the related redo cannot be written to the redo logs immediately; in those cases this wait event is raised.
It can be addressed by tuning the Log Writer process or by resolving network problems between the cluster nodes.

 Gc cr block busy-wait

The difference is that while the event above occurs in current mode, this wait event occurs in CR (consistent read) mode. It can also be addressed by tuning the Log Writer process.

Read by Other Session

When a session waits on the "read by other session" event, it indicates a wait for another session to read the data from disk into the Oracle buffer cache. If this happens too often the performance of the query or the entire database can suffer. Typically this is caused by contention for "hot" blocks or objects so it is imperative to find out which data is being contended for. Once that is known, there are several alternative methods for solving the issue.
When information is requested from the database, Oracle will first read the data from disk into the database buffer cache. If two or more sessions request the same information, the first session will read the data into the buffer cache while other sessions wait. In previous versions this wait was classified under the "buffer busy waits" event. However, in Oracle 10.1 and higher this wait time is now broken out into the "read by other session" wait event. Excessive waits for this event are typically due to several processes repeatedly reading the same blocks, e.g. many sessions scanning the same index or performing full table scans on the same table. Tuning this issue is a matter of finding and eliminating this contention.

Finding the contention:
When a session is waiting on the "read by other session" event, an entry will be seen in the v$session_wait view, which gives more information on the blocks being waited for:

SELECT p1 "file#", p2 "block#", p3 "class#"
FROM v$session_wait
WHERE event = 'read by other session';

If the information collected from the above query repeatedly shows the same block (or range of blocks) experiencing waits, this indicates a "hot" block or object. The following query gives the name and type of the object:

SELECT relative_fno, owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = &file
AND &block BETWEEN block_id AND block_id + blocks - 1;

Eliminating contention:
Depending on the Oracle database environment and the specific performance situation, the following methods can be used to eliminate contention:

Tune inefficient queries - This is one of those events you need to "catch in the act" through the v$session_wait view as described above. Then, since this is a disk operating system issue, take the associated system process identifier (c.spid) and see what information you can obtain from the operating system.

Redistribute data from the hot blocks - Deleting and reinserting the hot rows will often move them to a new data block. This helps decrease contention for the hot block and increase performance. More information about the data residing within the hot blocks can be retrieved with queries similar to the following:

SELECT data_object_id FROM dba_objects
WHERE owner='&owner' AND object_name='&object';

--rowid for the first row in the block
SELECT dbms_rowid.rowid_create(1,<data_object_id>,<relative_fno>,<block>,0) start_rowid FROM dual;

--rowid for the 500th row in the block
SELECT dbms_rowid.rowid_create(1,<data_object_id>,<relative_fno>,<block>,500) end_rowid FROM dual;

SELECT <column_list> FROM <owner>.<segment_name>
WHERE rowid BETWEEN <start_rowid> AND <end_rowid>;

Thank you for giving your valuable time to read the above information.

If you want to be updated with all our articles send us the Invitation or Follow us:

Ramkumar’s LinkedIn: https://www.linkedin.com/in/ramkumardba/
LinkedIn Group: https://www.linkedin.com/in/ramkumar-m-0061a0204/
Facebook Page: https://www.facebook.com/Oracleagent-344577549964301
Ramkumar’s Twitter: https://twitter.com/ramkuma02877110
Ramkumar’s Telegram: https://t.me/oracleageant
Ramkumar’s Facebook: https://www.facebook.com/ramkumarram8

Data Guard Manual Reinstate Using Flashback

Description:
          In this blog, we are going to see step by step manual data guard reinstate process using the flashback database method.

Data Guard Reinstate:
          After a failover, the standby database becomes the new primary. The old primary database can then be reinstated as a standby database. This conversion is possible only if the old primary database was running with flashback enabled.

Overall Steps:

  1. Check that flashback is enabled.
  2. Get the SCN from the new primary database.
  3. Bounce the old primary database and open it in the mount stage.
  4. Flashback the database to the SCN captured from the new primary database.
  5. Convert the database to a physical standby.
  6. Bounce the database and check the database role.

Step 1:- Check that flashback is enabled after the failover.
SELECT FLASHBACK_ON FROM V$DATABASE;
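Note that this method only works if flashback was already enabled before the failover; it cannot be turned on afterwards for this purpose. On a running database it is enabled ahead of time with a single statement:

SQL> ALTER DATABASE FLASHBACK ON;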

Step 2:- Get SCN number from the new primary database.
SQL> select to_char(standby_became_primary_scn) from v$database;

Step 3:- Bounce the Old primary database and open as mount stage.

SQL> shut immediate
SQL> startup nomount;
SQL> alter database mount;
Database altered.

Step 4:- Flashback the database to the SCN of the new primary database.

flashback database to scn 2370985;

flashback complete.

Step 5:- Convert the database to physical standby.

SQL> Alter Database Convert To Physical Standby;

Database altered.

Step 6:- Bounce the database and check the database role.

SQL> shut immediate

SQL> startup
ORACLE instance started.
Database mounted.
Database opened.
SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
SYSTEM    READ ONLY            PHYSICAL STANDBY

*******************************************************************************

 


RUNNING  GATHERSTATS

Description:
         In this blog, we are going to see what gather stats is and the various levels at which statistics can be gathered.

What is GATHERSTATS:
          To gather stats in Oracle we use the DBMS_STATS package. It can collect statistics in parallel and also collects global statistics for partitioned objects. The DBMS_STATS package is intended specifically for optimizer statistics. As explained above, DBMS_STATS is vital for good SQL performance, and statistics should be gathered before adjusting any optimizer parameters.

         The lower the query cost, the faster the query executes. We must gather statistics on a regular basis for database objects so that the Oracle optimizer has the best information to run queries in the best possible time. Using the ANALYZE statement is the traditional way of gathering statistics, but nowadays the DBMS_STATS package should be used instead.
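For illustration only (the SCOTT.EMP table is a hypothetical example), the traditional statement and its DBMS_STATS replacement look like this:

SQL> ANALYZE TABLE scott.emp COMPUTE STATISTICS;

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');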

Gather STATS:

CASCADE => TRUE: Gather statistics on the indexes as well. If not specified,
Oracle determines whether to collect index statistics or not.

DEGREE => 4: Degree of parallelism.

ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE: (DEFAULT) Lets Oracle choose
the sample size automatically, which is accurate and usually faster than
setting a manual sample size.

METHOD_OPT => : Controls histogram collection:

FOR COLUMNS <column> SIZE AUTO: Collect a histogram for one named column
instead of all columns.

FOR ALL COLUMNS SIZE REPEAT: Prevent deletion of histograms and collect
them only for columns that already have histograms.

FOR ALL COLUMNS: Collect histograms on all columns.

FOR ALL COLUMNS SIZE SKEWONLY: Collect histograms for columns whose values
are skewed (skewness should be verified first).

FOR ALL INDEXED COLUMNS: Collect histograms only for columns that
have indexes. (A combined example follows this list.)
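As a sketch of how these options combine into a single call (the &OWNER and &TABLE_NAME substitution variables are placeholders):

exec dbms_stats.gather_table_stats(ownname => '&OWNER', tabname => '&TABLE_NAME', -
     cascade => TRUE, degree => 4, -
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
     method_opt => 'FOR ALL COLUMNS SIZE AUTO');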

DATABASE Level:

      Gathering statistics for all objects in the database; with CASCADE
the indexes are included as well.


SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

PL/SQL procedure successfully completed.

SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from
DBA_TAB_STATISTICS where STALE_STATS='YES';


SQL> exec dbms_stats.gather_database_stats(cascade=>TRUE,method_opt =>'FOR
ALL COLUMNS SIZE AUTO');


PL/SQL procedure successfully completed.

SCHEMA level:

      Gathering statistics for all objects in a schema; with CASCADE
the indexes are included as well. If CASCADE is not specified, Oracle
determines whether to collect index statistics or not.


SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

PL/SQL procedure successfully completed.

SQL> select OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from DBA_TAB_STATISTICS
where STALE_STATS='YES' and OWNER='&OWNER';

SQL> set timing on

SQL> exec dbms_stats.gather_schema_stats(ownname=>'&schema_name',
CASCADE=>TRUE,ESTIMATE_PERCENT=>dbms_stats.auto_sample_size,degree =>4);


Enter value for schema_name: vbt

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.19

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS ('&schema_name');

Enter value for schema_name: vbt

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09


TABLE Level:

          The CASCADE parameter determines whether or not statistics are
gathered for the indexes on a table.
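A hedged sketch of the table-level call (schema and table names are substitution-variable placeholders) looks like this:

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => '&schema_name', tabname => '&table_name', cascade => TRUE);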



SQL> SELECT OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from
DBA_TAB_STATISTICS WHERE TABLE_NAME='&TNAME';


Enter value for tname: agent

old   1: SELECT OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from
DBA_TAB_STATISTICS WHERE TABLE_NAME='&TNAME'


new   1: SELECT OWNER,TABLE_NAME,LAST_ANALYZED,STALE_STATS from
DBA_TAB_STATISTICS WHERE TABLE_NAME='agent'


no rows selected

Index Statistics:

SQL> CREATE TABLE sam AS SELECT * FROM dba_tables ORDER BY table_name;

Table created.

SQL> CREATE INDEX idsam ON sam(table_name, num_rows);

Index created.


SQL> EXEC dbms_stats.gather_table_stats(ownname=>'SYS', tabname=>'sam');

PL/SQL procedure successfully completed.

SQL> SELECT t.table_name, i.index_name, t.blocks, t.num_rows,
i.clustering_factor
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name AND i.index_name='idsam';
TABLE_NAME   INDEX_NAME   BLOCKS   NUM_ROWS   CLUSTERING_FACTOR
----------   ----------   ------   --------   -----------------
sam          idsam            46       1705                  46

exec DBMS_STATS.GATHER_INDEX_STATS(ownname => '&OWNER',
indname =>'&INDEX_NAME',estimate_percent =>DBMS_STATS.AUTO_SAMPLE_SIZE);

 


DATABASE INCARNATION  USING RMAN

ORACLE DATABASE INCARNATION USING RMAN

Description:-
          In this blog, we are going to see the Oracle database incarnation using RMAN backup with the demo.

INCARNATION:-
          When a database is opened with RESETLOGS, the current online redo logs are archived, the log sequence number is reset to 1, a new database incarnation is created, and the online redo logs are given a new timestamp and SCN. The RMAN command to list incarnations is shown after the definitions below.

Database incarnation falls into the following category:-
Current, Parent, Ancestor, and Sibling:
i) Current Incarnation: The database incarnation in which the database is currently generating redo.

ii) Parent Incarnation: The database incarnation from which the current incarnation branched following an OPEN RESETLOGS operation.
iii) Ancestor Incarnation: The parent of the parent incarnation is an ancestor incarnation. Any parent of an ancestor incarnation is also an ancestor incarnation.
iv) Sibling Incarnation: Two incarnations that share a common ancestor are sibling incarnations if neither one is an ancestor of the other.
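RMAN can list the incarnations of the target database at any time:

RMAN> list incarnation;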

Overall steps:-

  1. Back up the full database using RMAN.
  2. After the backup, create a table in the database and note the SCN.
  3. Delete the rows, drop the table, and note the SCNs.
  4. Do a log switch and check the incarnation status.
  5. Bounce the database and start it in the mount stage.
  6. Connect to RMAN and recover the database to the SCN at which
     the table records were deleted.
  7. After the recovery is completed, perform RESETLOGS to open the database.
  8. Connect to the database, check the incarnation, and check whether
     the table can be retrieved.
  9. Shut the database down and start it in the mount stage again.
  10. Reset the database to incarnation 2 and recover to the SCN at which
     the table was created, to retrieve the table records.
  11. Check that the records can be retrieved.

Step 1: Backup full database:-

[oracle@test ~]$ . livedb.env
[oracle@test ~]$ rman target /
connected to target database: LIVEDB (DBID=3038906043)
RMAN> backup database;
Starting backup at 15-SEP-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=82 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/LIVEDB/datafile/
o1_mf_system_jmz2qqys_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/LIVEDB/datafile/
o1_mf_undotbs1_jmz2snrh_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/LIVEDB/datafile/
o1_mf_sysaux_jmz2rvhs_.dbf
input datafile file number=00007 name=/u01/app/oracle/oradata/LIVEDB/datafile/
o1_mf_users_jmz2sovr_.dbf
channel ORA_DISK_1: starting piece 1 at 15-SEP-21
channel ORA_DISK_1: finished piece 1 at 15-SEP-21
piece handle=/u01/app/oracle/fast_recovery_area/LIVEDB/backupset/2021_09_15/
o1_mf_nnndf_TAG20210915T035943_jn28lqm2_.bkp tag=TAG20210915T035943 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:06
Finished backup at 15-SEP-21

Starting Control File and SPFILE Autobackup at 15-SEP-21
piece handle=/u01/app/oracle/fast_recovery_area/LIVEDB/autobackup/2021_09_15/
o1_mf_s_1083297650_jn28ntoh_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 15-SEP-21
RMAN> exit

Step 2: After backup create a table in the database note SCN number :-

 [oracle@test ~]$ . livedb.env
[oracle@vtest ~]$ sqlplus / as sysdba
SQL> create table sample as select * from all_objects;
Table created.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    2146558

Step 3: Delete the rows, drop the table, and note the SCNs:-
Delete the rows in the sample table:
SQL> delete from sample;
71297 rows deleted.

Get the scn number also,
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    2146578

Drop the table structure:
SQL> drop table sample;
Table dropped.
SQL> commit;
Commit complete.
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    2146601

Step 4: Do log switching and check incarnation status:-

SQL> alter system switch logfile;
System altered.
Check the incarnation using v$database_incarnation:
SQL> select incarnation#, resetlogs_change# from v$database_incarnation;
INCARNATION# RESETLOGS_CHANGE#
------------ -----------------
           1                 1
           2           1920977

Step 5: Bounce the database start in mount stage:-

SQL>  shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area 1694495520 bytes
Fixed Size                       8897312 bytes
Variable Size                 402653184 bytes
Database Buffers        1275068416 bytes
Redo Buffers                  7876608 bytes
Database mounted.

Step 6: Connect to RMAN and recover the database to the SCN at which the table records were deleted:-

[oracle@test ~]$ rman target /
connected to target database: LIVEDB (DBID=3038906043, not open)
RMAN> run{
set until scn=2146578;
restore database;
recover database;
}
executing command: SET until clause
Starting restore at 15-SEP-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=45 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_system_jmz2qqys_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_sysaux_jmz2rvhs_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_undotbs1_jmz2snrh_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_users_jmz2sovr_.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/
LIVEDB/backupset/2021_09_15/o1_mf_nnndf_TAG20210915T035943_jn28lqm2_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/LIVEDB/
backupset/2021_09_15/o1_mf_nnndf_TAG20210915T035943_jn28lqm2_.bkp 
tag=TAG20210915T035943

channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:56
Finished restore at 15-SEP-21

Starting recover at 15-SEP-21
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 15-SEP-21

Step 7: After the recovery is completed perform resetlogs to open the database:-

RMAN> alter database open resetlogs;
Statement processed
RMAN> exit                                                                

Step 8 : Connect database to check incarnation also check the table can be retrieved:-

[oracle@test ~]$ sqlplus / as sysdba
SQL>  select incarnation#, resetlogs_change# from v$database_incarnation;
INCARNATION# RESETLOGS_CHANGE#
------------ -----------------
           1                 1
           2           1920977
           3           2146579
The table structure is recovered, but it contains no rows, because the database was recovered to the SCN captured after the delete.
SQL> select * from sample;
no rows selected

Step 9: Shut the database and start the mount stage again:-

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area 1694495520 bytes
Fixed Size                       8897312 bytes
Variable Size                 402653184 bytes
Database Buffers        1275068416 bytes
Redo Buffers                  7876608 bytes
Database mounted.
SQL> exit

Step 10: Reset the database to incarnation 2 and recover to the SCN at which the table was created, to retrieve the table records:-

RMAN> reset database to incarnation 2;
using target database control file instead of recovery catalog
database reset to incarnation 2

RMAN> run{
set until scn=2146558;
restore database;
recover database;
}
executing command: SET until clause
Starting restore at 15-SEP-21
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_system_jmz2qqys_.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_sysaux_jmz2rvhs_.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_undotbs1_jmz2snrh_.dbf
channel ORA_DISK_1: restoring datafile 00007 to /u01/app/oracle/oradata/
LIVEDB/datafile/o1_mf_users_jmz2sovr_.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/
LIVEDB/backupset/2021_09_15/o1_mf_nnndf_TAG20210915T035943_jn28lqm2_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/
LIVEDB/backupset/2021_09_15/o1_mf_nnndf_TAG20210915T035943_jn28lqm2_.bkp 
tag=TAG20210915T035943
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
Finished restore at 15-SEP-21

Starting recover at 15-SEP-21
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 18 is already on disk as file 
/u01/app/oracle/fast_recovery_area/LIVEDB/archivelog/2021_09_15/
o1_mf_1_18_jn28r8l8_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/LIVEDB/archivelog/
2021_09_15/o1_mf_1_18_jn28r8l8_.arc thread=1 sequence=18
media recovery complete, elapsed time: 00:00:02
Finished recover at 15-SEP-21

Perform reset logs method:

RMAN> alter database open resetlogs;
Statement processed
RMAN> exit

Step 11: Check the record can be retrieved from the table:-

[oracle@test ~]$ sqlplus / as sysdba

SQL> select count(1) from sample;

  COUNT(1)
    ———-
     71297
SQL>


Data Guard Configuration Parameters

DATA GUARD CONFIGURATION PARAMETERS

 

Description :
       In this blog, we are going to see the Oracle Data Guard configuration parameters and how to view and set them.

Types of parameter configuration.


Independent parameters.
Primary database parameters.
Standby database parameters.

Independent parameters:
DB_UNIQUE_NAME:
            DB_UNIQUE_NAME specifies a globally unique name for the database. Every database must have a unique DB_UNIQUE_NAME. It can be up to 30 characters and is case insensitive. The following characters are valid in a database name: alphanumeric characters, underscore (_), the number sign (#), and the dollar sign ($).
db_unique_name='unique_name'

 SQL> show parameter db_unique_name
NAME                                 TYPE        VALUE
—————————–       ———– —————
db_unique_name              string      orcl

LOG_ARCHIVE_CONFIG: 
          LOG_ARCHIVE_CONFIG enables or disables the sending of redo logs to remote destinations and the receipt of remote redo logs, and specifies the unique database names (DB_UNIQUE_NAME) for each database in the Data Guard configuration.
          Valid keywords for this parameter include SEND, NOSEND, RECEIVE, NORECEIVE, DG_CONFIG, and NODG_CONFIG.
log_archive_config='dg_config=(Primary,Standby)'
SQL> show parameter LOG_ARCHIVE_CONFIG
NAME                                 TYPE        VALUE
——————————— ———– ——————-
log_archive_config            string      dg_config=(orcl,stand)
change value to log_archive_config:
SQL> ALTER SYSTEM SET log_archive_config='dg_config=(orcl,stand)' SCOPE=both;
System altered.

LOG_ARCHIVE_MAX_PROCESSES:
          LOG_ARCHIVE_MAX_PROCESSES specifies the number of archiver background processes (ARC0 through ARC9) Oracle initially invokes.
log_archive_max_processes='4'

DB_CREATE_FILE_DEST:
          DB_CREATE_FILE_DEST specifies the default location for Oracle-managed datafiles.
db_create_file_dest='+DATA'
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u02/oradata';

Primary database parameters:
LOG_ARCHIVE_DEST_n:
The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten destinations, each of which must specify either the LOCATION or the SERVICE attribute to indicate where to archive the redo data; one of the two is mandatory for each destination. An example setting follows the attribute list below.
SERVICE – it can be a tnsnames entry to identify the database.
SYNC – synchronous redo transmission.
ASYNC – Asynchronous redo transmission.
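As a sketch, a typical remote destination using the orcl/stand names from the examples in this post could be set like this:

SQL> ALTER SYSTEM SET log_archive_dest_2='service=stand ASYNC
valid_for=(online_logfiles,primary_role) db_unique_name=stand' SCOPE=both;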

LOG_ARCHIVE_DEST_STATE_n:
          Specifies the availability state of the corresponding destination (examples follow the list of states).
enabled:
Specifies that a valid log archive destination can be used for a subsequent archiving operation
defer:
Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
alternate:
Specifies that a log archive destination is not enabled but will become enabled if communications to another destination fail.
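For example (a minimal sketch), shipping to the second destination can be deferred and later re-enabled with:

SQL> ALTER SYSTEM SET log_archive_dest_state_2='DEFER' SCOPE=both;
SQL> ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=both;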

Standby Database Parameters:
DB_FILE_NAME_CONVERT:
            It converts the filename of a new data file on the primary database to a filename on the standby database.
SQL> show parameter DB_FILE_NAME_CONVERT
NAME                                            TYPE         VALUE
———————————              ———–   ——————————
db_file_name_convert                 string      /u01/app/oracle/oradata/ORCL/
                                                                         , /u01/app/oracle/oradata/STAND/

LOG_FILE_NAME_CONVERT:
          LOG_FILE_NAME_CONVERT converts the filename of a new log file on the primary database to the filename of a log file on the standby database.
SQL> show parameter LOG_FILE_NAME_CONVERT
NAME                                                     TYPE         VALUE
———————————                        ——–        ————
log_file_name_convert                       string          /u01/app/oracle/oradata/ORCL/
                                                                         , /u01/app/oracle/oradata/STAND
alter system set db_file_name_convert='/u01/app/oracle/oradata/ORCL/','/u01/app/oracle/oradata/STAND/' scope=spfile;
alter system set log_file_name_convert='orcl','stand' scope=spfile;

FAL_SERVER:
      FAL stands for Fetch Archive Log.
FAL_SERVER specifies where a standby database fetches missing archived logs from.
Its value is an Oracle Net service name, typically pointing to the primary database (or another standby).
ALTER SYSTEM SET fal_server='stand' SCOPE=both;
ALTER SYSTEM SET fal_client='orcl' SCOPE=both;

FAL_CLIENT:
FAL_CLIENT specifies the FAL(fetch archive log) client name that is used by the FAL service.
It identifies the requesting (client) database to the FAL server.
The value of FAL_CLIENT is also an Oracle Net service name.
ALTER SYSTEM SET fal_server='orcl' SCOPE=both;
ALTER SYSTEM SET fal_client='stand' SCOPE=both;

STANDBY_FILE_MANAGEMENT:
          STANDBY_FILE_MANAGEMENT enables or disables automatic standby file management. When automatic standby file management is enabled, operating system file additions and deletions on the primary database are replicated on the standby database.
MANUAL: Disables automatic standby file management.
AUTO: Enables automatic standby file management.
SQL> ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=both;



 

Data Guard Architecture

Data Guard Architecture

Description:
       In this blog, we are going to see the Oracle Data Guard architecture and a clear description of the overall Data Guard process.

What is Oracle Data Guard:

  • Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data.
  • Data Guard maintains these standby databases as transactionally consistent copies of the production database.
  • If the production database becomes unavailable, Data Guard can switch any standby database to the production role.

Data Guard configuration pre-system checks:
Hardware :
       The hardware (CPU, disk capacity, and so on) can differ between the primary and standby databases.
Operating system :
       The operating system should be the same on both the primary and standby databases; if the primary runs Linux, the standby should run Linux as well. OS versions can differ, but the platform must be the same.
Software :
         The same Oracle Database release must be used on the primary and standby sides.
         Archivelog mode and force logging must be enabled.
         SYSDBA privilege is required.

ORACLE DATA GUARD ARCHITECTURE:
Primary Database Process:

LGWR(log writer)
            The log writer collects transaction redo information and writes it to the online redo logs.
      LGWR modes:
            SYNC – in synchronous mode, it sends redo information directly to the RFS (remote file server) process on the standby database and waits for confirmation before proceeding.
            ASYNC – in asynchronous mode, it also sends redo information directly but does not wait for confirmation before proceeding.
             In ASYNC mode LGWR submits the network I/O request to the network server (LNSn) process for the destination.

Archiver process(ARCn):
            ARCn (or a SQL session performing an archival operation) creates a local copy of the online redo logs for primary database recovery.
            ARCn can also simultaneously send archived logs to the RFS (remote file server) process on the standby.
            It is also responsible for resolving archive log gaps on the standby database.

FAL (Fetch Archive Logs):
            FAL provides a client/server mechanism for resolving gaps detected in the range of archived logs generated on the primary database and received by the standby database.

Standby Database Process:

RFS (remote file server process):
            RFS receives redo information from the primary database and writes it to the standby redo logs or archived logs.
            Each primary LNSn or ARCn process has its own RFS process.

ARC archiver process;
            ARCn process archives all standby redo logs.

Managed Recovery Process(MRP):
            For the physical standby database, only MRP applies archived log information to the physical standby database.
            MRP process commands:
                        Alter database recover managed standby database disconnect nodelay;
                        Alter database recover managed standby database disconnect from session;
                        Alter database recover managed standby database cancel;
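To confirm that MRP is running and applying redo on the standby, a quick check (a minimal sketch; v$managed_standby is still available in 19c) is:

SQL> select process, status, thread#, sequence#, block# from v$managed_standby;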

Logical standby database:
            The logical standby process (LSP) controls the application of archived redo information to the logical standby database.

Standby redo log files:
            Standby redo log files are used only when the database is in the standby role to store redo data received from the primary database.
            Standby redo logs form a separate pool of log file groups.

********************************************************************************


Dataguard Broker Configuration

DATA GUARD BROKER CONFIGURATION

Data Guard Broker:
         An Oracle Data Guard Broker configuration is a logical definition of an Oracle Data Guard configuration allowing for centralized management and configuration of the physical resources involved in the configuration. 

Step 1: Connect to both Databases (primary and standby) and issue the following command:-

SQL>alter system set dg_broker_start=true;
System altered.

Step 2: On the primary server, issue the following command to Register the Primary Server with the Broker.

[oratest@oracle dbs]$ dgmgrl sys/oracle@orcl
DGMGRL for Linux: Release 19.0.0.0.0 – Production on Mon Nov 8 17:14:30 2021
Version 19.9.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "orcl"
Connected as SYSDBA.

DGMGRL> create configuration 'dg_my_config' as primary database is 'orcl' connect
identifier is orcl;

Configuration "dg_my_config" created with primary database "orcl"

Step 3: Now add the standby database:-

DGMGRL> add database stand as connect identifier is stand maintained as physical;

Database "stand" added

Step 4: Now we enable the new configuration;-

DGMGRL> enable configuration;

Enabled.
DGMGRL>

Step 5: The following commands show how to check the configuration and the status of the databases from the broker:-

DGMGRL> show configuration;

Configuration - my_dg_config
Protection Mode: MaxPerformance
Members:
orcl     - Primary database
stand- Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS   (status updated 26 seconds ago)
DGMGRL> show database orcl;

Database - orcl

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    orcl

  Database Status:
  SUCCESS

DGMGRL> show database stand;

Database - stand

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 5.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    orcl

  Database Status:
  SUCCESS

Step 6: Stop/Start Managed Recovery:-

Stop managed recovery.

SQL> alter database recover managed standby database cancel;

Start managed recovery.

SQL> alter database recover managed standby database disconnect;

 


 

ORACLE RELEASE PATCHING IN ORACLE 19C -Opatch update

Description

In this article, we are going to see the steps used to apply the latest Oracle 19c Database Release Update 19.9.0.0.201020 (Patch 31771877). The environment is a single-instance database.

Review the readme file for Patch 31771877 – Database Release Update 19.9.0.0.201020.

Download patch p31771877_190000_Linux-x86-64.zip.

Make sure the OPatch version is at least 12.2.0.1.19.


Step:-1 Download the Patch from oracle support

p31771877_190000_Linux-x86-64.zip

p6880880_200000_Linux-x86-64.zip

Step:-2 Copied the patch to DB Server

[oratest@oracle ~]$ mkdir patch

[oratest@oracle ~]$ ls

[oratest@oracle ~]$ cd patch/

[oratest@oracle ~]$ ls -lrth
-rwxrwxr-x. 1 oratest oratest 116M Oct 21 10:55 p31771877_190000_Linux-x86-64.zip
-rwxrwxr-x. 1 oratest oratest 1.6G Oct 21 11:23 p6880880_210000_Linux-x86-64.zip


Step:-3 Upgrade the OPatch tool from 12.2.0.1.17 to 12.2.0.1.27

[oratest@oracle ~]$ cd $ORACLE_HOME/OPatch
[oratest@oracle ~]$ ls
[oratest@oracle ~]$ ./opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded

[oratest@oracle ~]$ cd /home/oratest/patch 

[oratest@oracle ~]$  ls

[oratest@oracle ~]$cp p6880880_200000_Linux-x86-64.zip
/u01/app/oracle/product/19.0.0/dbhome_1

[oratest@oracle ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_1

[oratest@oracle ~]$pwd
/u01/app/oracle/product/19.0.0/dbhome_1
[oratest@oracle ~]$ mv OPatch/ OPatch_BKP

[oratest@oracle ~]$ unzip p6880880_200000_Linux-x86-64.zip
Archive: p6880880_200000_Linux-x86-64.zip
creating: OPatch/
inflating: OPatch/emdpatch.pl
creating: OPatch/oracle_common/
creating: OPatch/oracle_common/modules/
inflating: OPatch/oracle_common/modules/com.oracle.glcm.common-logging_1.6.5.0.jar
inflating: OPatch/oracle_common/modules/common-logging-config.jar
creating: OPatch/oplan/

[oratest@oracle]$ cd OPatch
[oratest@oracle OPatch]$ ./opatch version
OPatch Version: 12.2.0.1.27
OPatch succeeded.

Step:-4 Check Patches status before applying using the below query

SET LINESIZE 500
SET PAGESIZE 1000
SET SERVEROUT ON
SET LONG 2000000
COLUMN action_time FORMAT A12
COLUMN action FORMAT A10
COLUMN comments FORMAT A30
COLUMN description FORMAT A60
COLUMN namespace FORMAT A20
COLUMN status FORMAT A10

SELECT TO_CHAR(action_time, 'YYYY-MM-DD') AS action_time,action,status,
description,patch_id FROM sys.dba_registry_sqlpatch ORDER by action_time;

Identifying Invalid Objects before patching

SQL> select owner,object_type,object_name,status from dba_objects
where status='INVALID';

no rows selected

Step:-5  Apply RU patch on ORACLE_HOME 19c

[oracle@oraem01 soft]$ unzip p31771877_190000_Linux-x86-64.zip
[oracle@oraem01 soft]$ cd 31771877
[oratest@oracle 31771877]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

[oratest@oracle 31771877]$ 
export PATH=/u01/app/oracle/product/19.0.0/dbhome_1/OPatch:$PATH

Step – 6:- Apply opatch

[oratest@oracle 31771877]$  opatch apply
Oracle Interim Patch Installer version 12.2.0.1.27
Copyright (c) 2021, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/19.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.27
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2021-10-21_12-48-42PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 31771877

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19.0.0/dbhome_1')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '31771877' to OH '/u01/app/oracle/product/19.0.0/dbhome_1'

ApplySession: Optional component(s) [ oracle.network.gsm, 19.0.0.0.0 ] ,
[ oracle.rdbms.ic, 19.0.0.0.0 ] , [ oracle.rdbms.tg4db2, 19.0.0.0.0 ] ,
[ oracle.tfa, 19.0.0.0.0 ] , [ oracle.options.olap.awm, 19.0.0.0.0 ] ,
[ oracle.net.cman, 19.0.0.0.0 ] , [ oracle.xdk.companion, 19.0.0.0.0 ] ,
[ oracle.oraolap.mgmt, 19.0.0.0.0 ] , [ oracle.network.cman, 19.0.0.0.0 ] ,
[ oracle.options.olap, 19.0.0.0.0 ] , [ oracle.assistants.usm, 19.0.0.0.0 ] ,
[ oracle.assistants.asm, 19.0.0.0.0 ] , [ oracle.sqlj, 19.0.0.0.0 ] ,
[ oracle.jdk, 1.8.0.191.0 ] not present in the Oracle Home or a higher version is found.

Patching component oracle.rdbms.rsf, 19.0.0.0.0...
Patching component oracle.rdbms, 19.0.0.0.0...
Patching component oracle.rdbms.util, 19.0.0.0.0...
Patching component oracle.assistants.acf, 19.0.0.0.0...
Patching component oracle.assistants.deconfig, 19.0.0.0.0...
Patching component oracle.assistants.server, 19.0.0.0.0...
Patching component oracle.buildtools.rsf, 19.0.0.0.0...
Patching component oracle.ctx, 19.0.0.0.0...
Patching component oracle.dbjava.ic, 19.0.0.0.0...
Patching component oracle.dbjava.jdbc, 19.0.0.0.0...
Patching component oracle.dbjava.ucp, 19.0.0.0.0...
Patching component oracle.dbtoolslistener, 19.0.0.0.0...
Patching component oracle.ldap.rsf, 19.0.0.0.0...
Patching component oracle.network.rsf, 19.0.0.0.0...
Patching component oracle.oracore.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...
Patching component oracle.rdbms.deconfig, 19.0.0.0.0...
Patching component oracle.sdo, 19.0.0.0.0...
Patching component oracle.sdo.locator.jrf, 19.0.0.0.0...
Patching component oracle.sqlplus, 19.0.0.0.0...
Patching component oracle.usm.deconfig, 19.0.0.0.0...
Patching component oracle.rdbms.crs, 19.0.0.0.0...
Patching component oracle.oraolap, 19.0.0.0.0...
Patching component oracle.precomp.rsf, 19.0.0.0.0...
Patching component oracle.javavm.client, 19.0.0.0.0...
Patching component oracle.precomp.common.core, 19.0.0.0.0...
Patching component oracle.network.client, 19.0.0.0.0...
Patching component oracle.rdbms.scheduler, 19.0.0.0.0...
Patching component oracle.ctx.atg, 19.0.0.0.0...
Patching component oracle.marvel, 19.0.0.0.0...
Patching component oracle.oraolap.dbscripts, 19.0.0.0.0...
Patching component oracle.ovm, 19.0.0.0.0...
Patching component oracle.bali.ice, 11.1.1.7.0...
Patching component oracle.ctx.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.lbac, 19.0.0.0.0...
Patching component oracle.rdbms.oci, 19.0.0.0.0...
Patching component oracle.odbc, 19.0.0.0.0...
Patching component oracle.rdbms.rman, 19.0.0.0.0...
Patching component oracle.nlsrtl.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.install.plugins, 19.0.0.0.0...
Patching component oracle.rdbms.drdaas, 19.0.0.0.0...
Patching component oracle.ldap.rsf.ic, 19.0.0.0.0...
Patching component oracle.ldap.security.osdt, 19.0.0.0.0...
Patching component oracle.dbdev, 19.0.0.0.0...
Patching component oracle.rdbms.install.common, 19.0.0.0.0...
Patching component oracle.sdo.locator, 19.0.0.0.0...
Patching component oracle.duma, 19.0.0.0.0...
Patching component oracle.sqlplus.ic, 19.0.0.0.0...
Patching component oracle.xdk.rsf, 19.0.0.0.0...
Patching component oracle.xdk.parser.java, 19.0.0.0.0...
Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...
Patching component oracle.precomp.common, 19.0.0.0.0...
Patching component oracle.precomp.lang, 19.0.0.0.0...
Patching component oracle.jdk, 1.8.0.201.0...

Patch 31771877 successfully applied.
Sub-set patch [29517242] has become inactive due to the application of a super-set
patch [31771877]. Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2021-10-21_12-48-42PM_1.log

OPatch succeeded.

Step – 7 – Run the Below Commands

SQL> select owner from all_objects where status='INVALID';
no rows selected
SQL> @?/rdbms/admin/utlrp.sql
Session altered.
TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN              2021-10-21 13:20:37
DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
PL/SQL procedure successfully completed.
TIMESTAMP
--------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END              2021-10-21 13:20:41
DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#
OBJECTS WITH ERRORS
-------------------
                  0
DOC> The following query reports the number of exceptions caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC>       logged into this table: they go into DBA_ERRORS instead.
DOC>#
ERRORS DURING RECOMPILATION
---------------------------
                          0
Function created.
PL/SQL procedure successfully completed.
Function dropped.
PL/SQL procedure successfully completed.


SQL> select owner from all_objects where status='INVALID';
no rows selected
                         


DATA GUARD CONFIGURATION

Step by Step Data Guard Configuration oracle 19c

Primary Server-side Configurations

Step – 1 – Enable Archive log mode and Flashback on

SQL>shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1543500144 bytes
Fixed Size                     8896880 bytes
Variable Size                1006632960 bytes
Database Buffers         520093696 bytes
Redo Buffers                7876608 bytes
Database mounted.

SQL> alter database archivelog;
Database altered.

SQL> alter database flashback on;
Database altered.

SQL> alter database open;
Database altered.

Step 2 – Add standby redo log files (in the primary database)

sql>alter database add standby logfile group 4
     '/u01/app/oracle/oradata/ORCL/redo04.log' size 50m;

sql>alter database add standby logfile group 5
     '/u01/app/oracle/oradata/ORCL/redo05.log' size 50m;

sql>alter database add standby logfile group 6
      '/u01/app/oracle/oradata/ORCL/redo06.log' size 50m;

sql>alter database add standby logfile group 7
     '/u01/app/oracle/oradata/ORCL/redo07.log' size 50m;
SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM
V$STANDBY_LOG;
GROUP#     THREAD#     SEQUENCE# ARC  STATUS
---------- ---------- ---------- --- ----------
4             0         0    YES      UNASSIGNED
5             0         0    YES      UNASSIGNED
6             0         0    YES      UNASSIGNED
7             0         0    YES      UNASSIGNED

Step 3 – Verify the listener and tnsnames configuration on both servers with tnsping; the output should look like below:

[oratest@oracle admin]$ tnsping orcl
TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 24-OCT-2021 14:53:31
Copyright (c) 1997, 2020, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/19.0.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
192.168.1.24)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED)
(SERVICE_NAME = orcl.localdomain)))
OK (0 msec)

[oratest@oracle admin]$ tnsping stand
TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 24-OCT-2021 14:53:36
Copyright (c) 1997, 2020, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/19.0.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
192.168.1.20)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED)
(SERVICE_NAME = oracle)))
OK (10 msec)

Step 4 – On the primary database server (192.168.1.24), set the Data Guard parameters:

SQL>ALTER SYSTEM SET log_archive_config=’dg_config=(orcl,stand)’ SCOPE=both;
System altered.

SQL>ALTER SYSTEM SET log_archive_dest_1='location=use_db_recovery_file_dest
valid_for=(all_logfiles,all_roles) db_unique_name=orcl' SCOPE=both;
System altered.

SQL>alter system set log_archive_dest_2='service=stand async
valid_for=(online_logfiles,primary_role) db_unique_name=stand' scope=both;
System altered.

SQL>ALTER SYSTEM SET fal_server='stand' SCOPE=both;
System altered.

SQL>ALTER SYSTEM SET fal_client='orcl' SCOPE=both;
System altered.

SQL>ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=both;
System altered.
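
The new values can be reviewed from the same session, for example:

SQL> show parameter log_archive_dest_2
SQL> show parameter fal_server
SQL> show parameter standby_file_management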

Step 5 – Standby server-side configuration

1. Copy the password file from the primary database

On the primary server:

[oratest@oracle dbs]$ scp orapworcl oracle@192.168.1.20:/u01/app/oracle/product/19.0.0/dbhome_1/dbs

oracle@192.168.1.20's password:
orapworcl                100% 2048     2.0KB/s   00:00

After the scp completes, rename the password file on the standby server:
[oracle@oracletest dbs]$ mv orapworcl orapwstand

2. Create the initialization parameter file for the standby database

In the $ORACLE_HOME/dbs directory of the standby system, create an initialization
parameter file named initstand.ora containing a single parameter: DB_NAME=orcl.

[oracle@oracletest dbs]$ cat initstand.ora
db_name=orcl
[oracle@oracletest dbs]$

3. Create the directory structure on the standby server

[oracle@oracletest dbs]$ cd $ORACLE_BASE/admin/
[oracle@oracletest admin]$ mkdir stand
[oracle@oracletest admin]$ cd stand/
[oracle@oracletest stand]$ mkdir adump
[oracle@oracletest stand]$ mkdir -p /u01/app/oracle/oradata/stand

4. Start the standby instance in NOMOUNT using the pfile

[oracle@oracletest TEST]$ export ORACLE_SID=stand
[oracle@oracletest TEST]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Oct 25 10:05:00 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup pfile='$ORACLE_HOME/dbs/initstand.ora' nomount;

ORACLE instance started.
Total System Global Area 243268216 bytes
Fixed Size 8895096 bytes
Variable Size 180355072 bytes
Database Buffers 50331648 bytes
Redo Buffers 3686400 bytes
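
One practical note before the RMAN step: while the standby instance is only in NOMOUNT it registers with its listener in a BLOCKED state, so the sys/oracle@stand (auxiliary) connection below can fail with ORA-12528 unless the standby listener has a static registration for the stand SID (or (UR=A) is added to the connect descriptor). An illustrative listener.ora entry on the standby host, assuming the default listener name and the ORACLE_HOME used in this setup:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = stand)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = stand)
    )
  )

Reload the listener with lsnrctl reload after editing the file.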

5. From the standby server, connect RMAN to the primary (target) and the standby (auxiliary) databases

[oracle@oracletest stand]$ export ORACLE_SID=orcl
[oracle@oracletest stand]$ rman target sys/oracle@orcl auxiliary sys/oracle@stand
Recovery Manager: Release 19.0.0.0.0 - Production on Mon Nov 8 16:29:37 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1613336523)
connected to auxiliary database: ORCL (not mounted)

6. Run the following RMAN duplicate command:

RMAN> run
{
allocate channel p1 type disk;
allocate channel p2 type disk;
allocate channel p3 type disk;
allocate channel p4 type disk;
allocate auxiliary channel s1 type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'orcl','stand'
set db_name='orcl'
set db_unique_name='stand'
set db_file_name_convert='/u01/app/oracle/oradata/ORCL/','/u01/app/oracle/oradata/stand/'
set log_file_name_convert='/u01/app/oracle/oradata/ORCL/','/u01/app/oracle/oradata/stand/'
set control_files='/u01/app/oracle/oradata/stand/standby1.ctl'
set log_archive_max_processes='5'
set fal_client='stand'
set fal_server='orcl'
set standby_file_management='auto'
set log_archive_config='dg_config=(orcl,stand)'
set compatible='19.0.0'
set memory_target='1200m'
nofilenamecheck;
}

The output will look like the following:

using target database control file instead of recovery catalog
allocated channel: p1
channel p1: SID=62 device type=DISK
allocated channel: p2
channel p2: SID=24 device type=DISK
allocated channel: p3
channel p3: SID=78 device type=DISK
allocated channel: p4
channel p4: SID=61 device type=DISK
allocated channel: s1
channel s1: SID=37 device type=DISK
Starting Duplicate Db at 08-NOV-21
contents of Memory Script:
{
backup as copy reuse
passwordfile auxiliary format '/u01/app/oracle/product/19.0.0/dbhome_1/dbs/orapwstand'
targetfile
'/u01/app/oracle/product/19.0.0/dbhome_1/dbs/spfileorcl.ora' auxiliary format
'/u01/app/oracle/product/19.0.0/dbhome_1/dbs/spfilestand.ora' ;
sql clone "alter system set spfile= ''/u01/app/oracle/product/19.0.0/dbhome_1/dbs/spfilestand.ora''";
}
executing Memory Script
Starting backup at 08-NOV-21
Finished backup at 08-NOV-21
sql statement: alter system set spfile=
''/u01/app/oracle/product/19.0.0/dbhome_1/dbs/spfilestand.ora''
contents of Memory Script:
{
sql clone "alter system set audit_file_dest =
''/u01/app/oracle/admin/stand/adump'' comment=
'''' scope=spfile";
sql clone "alter system set dispatchers =
''(PROTOCOL=TCP) (SERVICE=standXDB)'' comment=
'''' scope=spfile";
sql clone "alter system set db_name =
''orcl'' comment=
'''' scope=spfile";
sql clone "alter system set db_unique_name =
''stand'' comment=
'''' scope=spfile";
sql clone "alter system set db_file_name_convert =
''/u01/app/oracle/oradata/ORCL/'', ''/u01/app/oracle/oradata/stand/'' comment=
'''' scope=spfile";
sql clone "alter system set log_file_name_convert =
''/u01/app/oracle/oradata/ORCL/'', ''/u01/app/oracle/oradata/stand/'' comment=
'''' scope=spfile";
sql clone "alter system set control_files =
''/u01/app/oracle/oradata/stand/standby1.ctl'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_max_processes =
5 comment=
'''' scope=spfile";
sql clone "alter system set fal_client =
''stand'' comment=
'''' scope=spfile";
sql clone "alter system set fal_server =
''orcl'' comment=
'''' scope=spfile";
sql clone "alter system set standby_file_management =
''auto'' comment=
'''' scope=spfile";
sql clone "alter system set log_archive_config =
''dg_config=(orcl,stand)'' comment=
'''' scope=spfile";
sql clone "alter system set compatible =
''19.0.0'' comment=
'''' scope=spfile";
sql clone "alter system set memory_target =
1200m comment=
'''' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
sql statement: alter system set audit_file_dest = ''/u01/app/oracle/admin/stand/adump'' comment= ''''
scope=spfile
sql statement: alter system set dispatchers = ''(PROTOCOL=TCP) (SERVICE=standXDB)'' comment= ''''
scope=spfile
sql statement: alter system set db_name = ''orcl'' comment= '''' scope=spfile
sql statement: alter system set db_unique_name = ''stand'' comment= '''' scope=spfile
sql statement: alter system set db_file_name_convert = ''/u01/app/oracle/oradata/ORCL/'',
''/u01/app/oracle/oradata/stand/'' comment= '''' scope=spfile
sql statement: alter system set log_file_name_convert = ''/u01/app/oracle/oradata/ORCL/'',
''/u01/app/oracle/oradata/stand/'' comment= '''' scope=spfile
sql statement: alter system set control_files = ''/u01/app/oracle/oradata/stand/standby1.ctl''
comment= '''' scope=spfile
sql statement: alter system set log_archive_max_processes = 5 comment= '''' scope=spfile
sql statement: alter system set fal_client = ''stand'' comment= '''' scope=spfile
sql statement: alter system set fal_server = ''orcl'' comment= '''' scope=spfile
sql statement: alter system set standby_file_management = ''auto'' comment= '''' scope=spfile
sql statement: alter system set log_archive_config = ''dg_config=(orcl,stand)'' comment= '''' scope=spfile
sql statement: alter system set compatible = ''19.0.0'' comment= '''' scope=spfile
sql statement: alter system set memory_target = 1200m comment= '''' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1258290752 bytes
Fixed Size 8896064 bytes
Variable Size 738197504 bytes
Database Buffers 503316480 bytes
Redo Buffers 7880704 bytes
allocated channel: s1
channel s1: SID=35 device type=DISK
contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format
'/u01/app/oracle/oradata/stand/standby1.ctl';
}
executing Memory Script
Starting backup at 08-NOV-21
channel p1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/snapcf_orcl.f
tag=TAG20211108T163937
channel p1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 08-NOV-21
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
contents of Memory Script:
{
set newname for tempfile 1 to
"/u01/app/oracle/oradata/stand/temp01.dbf";
set newname for tempfile 2 to
"/u01/app/oracle/oradata/stand/temp02.dbf";
switch clone tempfile all;
set newname for datafile 1 to
"/u01/app/oracle/oradata/stand/system01.dbf";
set newname for datafile 3 to
"/u01/app/oracle/oradata/stand/sysaux01.dbf";
set newname for datafile 4 to
"/u01/app/oracle/oradata/stand/undotbs01.dbf";
set newname for datafile 7 to
"/u01/app/oracle/oradata/stand/users01.dbf";
backup as copy reuse
datafile 1 auxiliary format
"/u01/app/oracle/oradata/stand/system01.dbf" datafile
3 auxiliary format
"/u01/app/oracle/oradata/stand/sysaux01.dbf" datafile
4 auxiliary format
"/u01/app/oracle/oradata/stand/undotbs01.dbf" datafile
7 auxiliary format
"/u01/app/oracle/oradata/stand/users01.dbf" ;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /u01/app/oracle/oradata/stand/temp01.dbf in control file
renamed tempfile 2 to /u01/app/oracle/oradata/stand/temp02.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting backup at 08-NOV-21
channel p1: starting datafile copy
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/sysaux01.dbf
channel p2: starting datafile copy
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/system01.dbf
channel p3: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/undotbs01.dbf
channel p4: starting datafile copy
input datafile file number=00007 name=/u01/app/oracle/oradata/ORCL/users01.dbf
output file name=/u01/app/oracle/oradata/stand/users01.dbf tag=TAG20211108T163951
channel p4: datafile copy complete, elapsed time: 00:00:56
output file name=/u01/app/oracle/oradata/stand/undotbs01.dbf tag=TAG20211108T163951
channel p3: datafile copy complete, elapsed time: 00:06:33
output file name=/u01/app/oracle/oradata/stand/system01.dbf tag=TAG20211108T163951
channel p2: datafile copy complete, elapsed time: 00:13:49
output file name=/u01/app/oracle/oradata/stand/sysaux01.dbf tag=TAG20211108T163951
channel p1: datafile copy complete, elapsed time: 00:14:10
Finished backup at 08-NOV-21
sql statement: alter system archive log current
contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=1088096063 file
name=/u01/app/oracle/oradata/stand/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=1088096063 file
name=/u01/app/oracle/oradata/stand/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=1088096063 file
name=/u01/app/oracle/oradata/stand/undotbs01.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=1088096063 file
name=/u01/app/oracle/oradata/stand/users01.dbf
Finished Duplicate Db at 08-NOV-21
released channel: p1
released channel: p2
released channel: p3
released channel: p4
released channel: s1
RMAN>

Step 6 – Connect to the standby database and start managed recovery

[oracle@oracletest stand]$ export ORACLE_SID=stand
[oracle@oracletest stand]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Nov 8 17:03:57 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> alter database recover managed standby database disconnect nodelay;
Database altered.
SQL> select NAME,OPEN_MODE,DATABASE_ROLE from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
ORCL      MOUNTED              PHYSICAL STANDBY

SQL> select DB_UNIQUE_NAME from v$database;

DB_UNIQUE_NAME
------------------------------
stand
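
Redo transport and apply can now be monitored from the standby; two minimal checks are:

SQL> select process, status, thread#, sequence# from v$managed_standby;
SQL> select max(sequence#) from v$archived_log where applied='YES';

Comparing the second result with select max(sequence#) from v$archived_log on the primary gives a rough idea of how far behind the standby is.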


Data Guard Protection Modes

Oracle Data Guard Protection Modes

Description: 
       In this blog, we are going to look at the Oracle Data Guard protection modes and how to change from one mode to another.

What is a protection mode:
       A protection mode is chosen based on the RTO (recovery time objective) and RPO (recovery point objective) values, so that the most appropriate mode is selected for the Data Guard configuration.
       The primary database continuously ships redo to the standby database; if a natural disaster or hardware failure makes the primary unavailable to end users, the standby can be activated as the new primary.

Why it is required:
       Some applications require maximum database performance at all times, while others cannot tolerate any data loss, so the standby database is configured with the protection mode that matches those requirements.

Types of protection modes:

MAXIMUM PERFORMANCE
MAXIMUM PROTECTION
MAXIMUM AVAILABILITY

MAXIMUM AVAILABILITY:
       Maximum Availability provides the highest level of protection that is possible without compromising the availability of the primary database.
       Redo is shipped synchronously: a transaction on the primary does not commit until its redo has been received and acknowledged by at least one synchronized standby. If the standby stops acknowledging (for example, because of a network or standby outage), the primary does not stall; it temporarily behaves like Maximum Performance mode and resynchronizes the standby once it becomes reachable again.
       This mode ensures zero data loss except in the case of certain double faults, such as losing the primary while the standby is still out of sync.

MAXIMUM PERFORMANCE:
       Maximum Performance is the default protection mode.
       It provides the highest level of protection that is possible without affecting the performance of the primary database.
       The primary database ships redo to the standby asynchronously and does not wait for an acknowledgment before committing.
       A small amount of data loss is therefore possible if the primary fails.

MAXIMUM PROTECTION:
       Maximum Protection ensures that no data loss occurs if the primary database fails.
       Redo is shipped synchronously: the RFS process on the standby acknowledges the LNS process on the primary, and a transaction does not commit until its redo has been written to at least one synchronized standby. If the primary cannot write its redo stream to at least one standby, it shuts itself down rather than allow an unprotected transaction.

SUMMARY OF PROTECTION MODES:
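
Mode                   Redo transport   Data loss risk                           If the standby is unreachable
---------------------  ---------------  ---------------------------------------  -------------------------------------
MAXIMUM PERFORMANCE    ASYNC            Small loss possible (default mode)       Primary unaffected
MAXIMUM AVAILABILITY   SYNC             Zero, except certain double faults       Primary falls back to async behavior
MAXIMUM PROTECTION     SYNC             Zero                                     Primary shuts down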

How to view and change protection mode:
View protection mode:
SQL> select protection_mode from v$database;
Redo transport SYNC configuration (the configuration earlier in this document used ASYNC; SYNC is required for Maximum Availability and Maximum Protection):
SQL> ALTER SYSTEM SET log_archive_dest_2='service=stand SYNC valid_for=(online_logfiles,primary_role) db_unique_name=stand' SCOPE=both;
System altered.
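
Before raising the protection mode, it is worth confirming that the SYNC destination is valid; a minimal check on the primary:

SQL> select dest_id, status, error from v$archive_dest_status where dest_id = 2;

The status should show VALID with no error.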

How to change protection mode:

SQL> alter database set standby database to maximize PERFORMANCE;
SQL> alter database set standby database to maximize PROTECTION;
SQL> alter database set standby database to maximize AVAILABILITY;
SQL> select name,protection_mode from v$database;

Thank you for giving your valuable time to read the above information.

If you want to be updated with all our articles send us the Invitation or Follow us:

Ramkumar’s LinkedIn: https://www.linkedin.com/in/ramkumardba/
LinkedIn Group: https://www.linkedin.com/in/ramkumar-m-0061a0204/
Facebook Page: https://www.facebook.com/Oracleagent-344577549964301
Ramkumar’s Twitter: https://twitter.com/ramkuma02877110
Ramkumar’s Telegram: https://t.me/oracleageant
Ramkumar’s Facebook: https://www.facebook.com/ramkumarram8