Description: In this blog, we are going to see the Oracle database restore point and its types, with a demo.
Normal Restore Point:
A normal restore point enables you to flash the database back to a restore point within the time determined by the DB_FLASHBACK_RETENTION_TARGET initialization parameter.
A normal restore point can be dropped explicitly.
The control file stores the name of the restore point and its SCN.
Guaranteed restore point:
A guaranteed restore point enables you to flash the database back to the restore point regardless of the DB_FLASHBACK_RETENTION_TARGET initialization parameter setting.
A guaranteed restore point must be dropped explicitly by the user with the DROP RESTORE POINT command.
A guaranteed restore point never ages out.
Normal Restore Point Demo:
The database must be in archivelog mode.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Check available restore points:
SQL> select name from v$restore_point;
no rows selected
Create a normal restore point:
SQL> create restore point res_test;
Restore point created.
View the created restore point:
SQL> select name from v$restore_point;
NAME
--------------------
RES_TEST
After creating the restore point, create a table:
SQL> create table test as select * from dba_users;
Table created.
Check the RVWR process status. With a normal restore point, no background process is started.
[oracle@primary ~]$ ps -ef |grep -i rvwr
oracle 16948 14963 0 09:40 pts/2 00:00:00 grep --color=auto -i rvwr
Flash the database back to the restore point:
SQL> flashback database to restore point res_test;
Flashback complete.
Guaranteed Restore Point Demo:
Create a guaranteed restore point:
SQL> create restore point res_test1 guarantee flashback database;
Restore point created.
The RVWR background process is started automatically.
[oracle@primary ~]$ ps -ef |grep -i rvwr
oracle 17014 1 0 09:41 ? 00:00:00 ora_rvwr_orcl
oracle 17060 14963 0 09:42 pts/2 00:00:00 grep --color=auto -i rvwr
[oracle@primary ~]$
Check the flashback status:
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
RESTORE POINT ONLY
Flashback logs are created automatically:
[oracle@primary ORCL]$ cd flashback/
[oracle@primary flashback]$ ls
o1_mf_jtotn2c3_.flb o1_mf_jtotn8d4_.flb
When the restore point is dropped, the logs are removed.
SQL> drop restore point res_test1;
Restore point dropped.
[oracle@primary ~]$ cd /u01/app/oracle/fast_recovery_area/ORCL/flashback
[oracle@primary flashback]$ ls
[oracle@primary flashback]$
Bounce the database and flash it back to a restore point:
SQL> flashback database to restore point res_test2;
Flashback complete.
Reset the logs:
SQL> alter database open resetlogs;
Database altered.
SQL> select count(*) from test1;
COUNT(*)
----------
45
Buffer Busy Wait:
This wait event happens when a session tries to access a block in the buffer cache but cannot, because the buffer is busy: another session is modifying the block and its contents are in flux.
SQL> SELECT s.sql_hash_value, sw.p1 file#, sw.p2 block#, sw.p3 reason
FROM v$session_wait sw, v$session s
WHERE sw.event = 'buffer busy waits'
AND sw.sid = s.sid;
no rows selected
SQL> SELECT owner , segment_name , segment_type
FROM dba_extents
WHERE file_id = &FileNumber
AND &BlockNumber BETWEEN block_id AND block_id + blocks -1;
Enter value for filenumber: 1
old 3: WHERE file_id = &FileNumber
new 3: WHERE file_id = 1
Enter value for blocknumber: 2
old 4: AND &BlockNumber BETWEEN block_id AND block_id + blocks -1
new 4: AND 2 BETWEEN block_id AND block_id + blocks -1
no rows selected
Another query that can be very useful is finding the objects in the entire Oracle database that are suffering from "buffer busy waits". The following query gives the top 10 segments:
SQL> SELECT * FROM (
SELECT owner, object_name, subobject_name, object_type,
tablespace_name, value
FROM v$segment_statistics
WHERE statistic_name='buffer busy waits' and owner not like '%SYS%'
ORDER BY value DESC)
WHERE ROWNUM <=10;
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE TABLESPACE_NAME VALUE
-------------------- ------------------------------ ------------------------------ ------------------ ------------------------------ ----------
GSMADMIN_INTERNAL DDLID$ TABLE SYSAUX 0
XDB XDB$ROOT_INFO TABLE SYSAUX 0
XDB XDB$SCHEMA_URL INDEX SYSAUX 0
DB File Sequential Read
The db file sequential read wait event has three parameters:
file#, first block#, and block count.
In Oracle Database 11g, this wait event falls under the User I/O wait class.
The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk.
The two important numbers to look for are the TIME_WAITED and AVERAGE_WAIT by individual sessions.
Significant db file sequential read wait time is most likely an application issue.
This event occurs when a session performs single-block physical reads, which usually happens when table data is accessed through an index rather than a full table scan.
If this event is prominent, possible causes are wrong index usage, index fragmentation, or excessive I/O traffic on specific disks. To address it, make sure queries use the right indexes, and defragment fragmented indexes with an index rebuild operation.
This wait event appears very frequently in AWR and ADDM reports, and its presence alone does not always indicate a problem. However, if it occurs alongside significant 'enqueue' and 'latch free' wait time, the database should be monitored more closely.
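As noted above, TIME_WAITED and AVERAGE_WAIT per session are the key numbers. A minimal sketch of how to pull them from v$session_event for this event (the view and columns are standard; the ordering is up to you):
SQL> SELECT s.sid, s.username, se.total_waits, se.time_waited, se.average_wait
FROM v$session_event se, v$session s
WHERE se.sid = s.sid
AND se.event = 'db file sequential read'
ORDER BY se.time_waited DESC;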
DB File Scattered Read
This wait event occurs when multiple physical blocks that are not physically close to each other (neighbors) are read into the buffer cache in a scattered fashion, typically during a full scan. So db file scattered read is multiblock read I/O, as seen during full table scans and fast full scans.
A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan. The db file scattered read wait event identifies that a full scan is occurring. When performing a full scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other.
Multiblock (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) reads due to full scans into the buffer cache show up as waits for 'db file scattered read'.
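To see which sessions are currently doing these multiblock reads, the same v$session_wait pattern shown earlier for buffer busy waits applies; a sketch (for this event P1 is the file#, P2 the starting block#, and P3 the block count):
SQL> SELECT p1 "file#", p2 "block#", p3 "blocks"
FROM v$session_wait
WHERE event = 'db file scattered read';
The file#/block# pair can then be mapped to a segment through dba_extents, exactly as in the buffer busy waits query above.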
Direct Path Read
This event occurs when the Oracle instance reads data from the datafiles asynchronously and puts it into the PGA instead of the buffer cache in the SGA.
This type of event usually occurs when the temporary (TEMP) tablespace is used for sorting operations, during the creation of LOB segments, and when multiple sessions run full table scans in parallel.
To address this, increase memory, avoid parallel operations unless they are required, and pay attention to LOB segment reads.
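Following the top-10 pattern used earlier for buffer busy waits, a sketch for spotting the segments with the heaviest direct path reads (the statistic name is standard in v$segment_statistics; the row limit is arbitrary):
SQL> SELECT * FROM (
SELECT owner, object_name, object_type, value
FROM v$segment_statistics
WHERE statistic_name = 'physical reads direct'
ORDER BY value DESC)
WHERE ROWNUM <= 10;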
DB CPU
This event represents the total time users' queries spend on the CPU. Oracle's background processes (SMON, PMON, etc.) are not included in this total.
If this value is high, the Oracle instance spends most of its time on the CPU. To reduce it, the SQL statements listed in the "SQL ordered by CPU" section of the AWR report must be tuned.
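In the same spirit as the "Top 10" scripts later in this post, a sketch for listing the top CPU consumers straight from v$sqlarea (cpu_time is reported in microseconds; the row limit is arbitrary):
SQL> SELECT * FROM (
SELECT sql_id, substr(sql_text,1,40) sql, cpu_time/1000000 cpu_secs, executions
FROM v$sqlarea
ORDER BY cpu_time DESC)
WHERE ROWNUM <= 10;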
Log File Sync
This event is the time lost while users wait for the LGWR process after initiating a transaction commit or rollback.
If this wait event occurs continuously, the I/O performance of the LGWR process is probably poor, or the application is committing too often. The solution is to avoid unnecessarily frequent commits, examine the I/O performance of the disk holding the redo log files, and move them to a high-performance disk such as an SSD if necessary.
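A quick way to gauge this system-wide is v$system_event; a minimal sketch (time_waited and average_wait are in centiseconds):
SQL> SELECT event, total_waits, time_waited, average_wait
FROM v$system_event
WHERE event = 'log file sync';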
Enq: TX - row lock contention
Row lock contention: this type of event occurs when a session tries to update or delete a row held by another session, which is usually an application design problem. Normally, when a transaction finishes, a commit or rollback releases the rows it holds.
The solution is: if the session holding the row is active, have it commit; if it is not active, kill the session or roll back its transaction.
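To find out who is holding the row, the blocker/waiter pairs can be read straight from v$session; a minimal sketch:
SQL> SELECT sid, serial#, blocking_session, event
FROM v$session
WHERE blocking_session IS NOT NULL;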
ARCH wait on SENDREQ
This wait event is the total time taken by the archiver processes to send redo to the standby in a Data Guard configuration and to write these archives to the local disks.
The main reason this value gets high is that archives sent to the standby side arrive late due to the network. To solve this, tune the network and set the DEFAULT_SDU_SIZE parameter in the sqlnet.ora file to an optimized value (32767).
Gc current block busy
This wait event occurs between the nodes of a cluster database (Real Application Clusters). When a transaction requests a block, the request is sent to the master instance. Normally, this request is served by cache fusion.
However, in some cases the block transfer is delayed, either because the block is held by another instance or because the corresponding transaction records cannot be written to the redo logs immediately; in such cases this wait event is triggered.
This can be solved by tuning the Log Writer process or by resolving network problems between the cluster nodes.
Gc cr block busy
The difference from the previous event is that while gc current block busy occurs in current mode, this wait event occurs in consistent-read (CR) mode. It, too, can be addressed by tuning the Log Writer process.
Read by Other Session
When a session waits on the "read by other session" event, it indicates a wait for another session to read the data from disk into the Oracle buffer cache. If this happens too often the performance of the query or the entire database can suffer. Typically this is caused by contention for "hot" blocks or objects so it is imperative to find out which data is being contended for. Once that is known, there are several alternative methods for solving the issue.
When information is requested from the database, Oracle will first read the data from disk into the database buffer cache. If two or more sessions request the same information, the first session will read the data into the buffer cache while other sessions wait. In previous versions this wait was classified under the "buffer busy waits" event. However, in Oracle 10.1 and higher this wait time is now broken out into the "read by other session" wait event. Excessive waits for this event are typically due to several processes repeatedly reading the same blocks, e.g. many sessions scanning the same index or performing full table scans on the same table. Tuning this issue is a matter of finding and eliminating this contention.
Finding the contention:
When a session is waiting on the "read by other session" event, an entry will be seen in the v$session_wait system view, which will give more information on the blocks being waited for:
SELECT p1 "file#", p2 "block#", p3 "class#"
FROM v$session_wait
WHERE event = 'read by other session';
If information collected from the above query repeatedly shows that the same block (or range of blocks) is experiencing waits, this indicates a "hot" block or object. The following query will give the name and type of the object:
SELECT relative_fno, owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = &file
AND &block BETWEEN block_id AND block_id + blocks - 1;
Eliminating contention:
Depending on the Oracle database environment and specific performance situation the following variety of methods can be used to eliminate contention:
Tune inefficient queries - This is one of those events you need to "catch in the act" through the v$session_wait view as prescribed above. Then, since this is a disk operating system issue, take the associated system process identifier (c.spid) and see what information you can obtain from the operating system.
Redistribute data from the hot blocks - Deleting and reinserting the hot rows will often move them to a new data block. This will help decrease contention for the hot block and increase performance. More information about the data residing within the hot blocks can be retrieved with queries similar to the following:
SELECT data_object_id
FROM dba_objects
WHERE owner='&owner' AND object_name='&object';
SELECT dbms_rowid.rowid_create(1,<data_object_id>,<relative_fno>,<block>,0) start_rowid
FROM dual;
--rowid for the first row in the block
SELECT dbms_rowid.rowid_create(1,<data_object_id>,<relative_fno>,<block>,500) end_rowid FROM dual;
--rowid for the 500th row in the block
SELECT <column_list>
FROM <owner>.<segment_name>
WHERE rowid BETWEEN <start_rowid> AND <end_rowid>;
Standby Out of Sync
Description: In this blog, we are going to see how to check whether the standby logs are out of sync and how to re-sync the primary and standby logs.
Standby Log Sync: The primary database logs are shipped to and applied on the standby database. If any issue or delay happens in log sync, we restart the MRP process on the standby and defer/enable the log_archive_dest_2 location on the primary database.
Overall Steps:-
1. Bring down the standby database.
2. Create a user and do multiple log switches in the primary database.
3. Start the standby database, and check the log difference.
4. Defer and enable dest 2 on the primary database.
5. Restart the MRP process in the standby database.
6. Check the log sync status.
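Step 1:- Bring down the standby database. The commands for this step are not shown in the original post; a minimal sketch, run on the standby:
SQL> shutdown immediate;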
Step 2:- Create a user and do multiple log switches in the primary database.
Check current archive log status:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 9
Next log sequence to archive 11
Current log sequence 11
Create a user and a table.
SQL> create user agent identified by agent;
SQL> alter user agent default tablespace users quota unlimited on users;
SQL> grant connect, resource to agent;
SQL> conn agent/agent
Connected.
SQL> show user
USER is "AGENT"
SQL> create table agent1 as select * from user_tables;
SQL> conn / as sysdba
Connected.
Do multiple log switches.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
Check the archive log status.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 12
Next log sequence to archive 14
Current log sequence 14
SQL>
Step 3:- Start the standby database, and check the log difference.
SQL> startup
Total System Global Area 1258290752 bytes
Fixed Size 8896064 bytes
Variable Size 738197504 bytes
Database Buffers 503316480 bytes
Redo Buffers 7880704 bytes
Database mounted.
Database opened.
Check the log sync difference.
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received",
     APPL.SEQUENCE# "Last Sequence Applied",
     (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN
            (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN
            (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;
Step 4:- Defer and enable dest 2 on the primary database.
SQL> alter system set log_archive_dest_state_2=defer;
System altered.
SQL> alter system set log_archive_dest_state_2=enable;
System altered.
Step 5:- Restart the MRP process in the standby database.
SQL> alter database recover managed standby database disconnect nodelay;
Database altered.
Step 6:- Check the log sync status.
SQL> SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received",
     APPL.SEQUENCE# "Last Sequence Applied",
     (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN
            (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN
            (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;
Description: In this blog, we are going to see the step-by-step manual Data Guard reinstate process using the flashback database method.
Data Guard Reinstate: After a failover, the standby becomes the new primary. The old primary database can then be reinstated as a standby database; this conversion is possible only if the database has flashback enabled.
Overall Steps:
Check the flashback enabled.
Get the SCN number from the new primary database.
Bounce the old primary database open in the mount stage.
Flashback the database in SCN number of the new primary database.
Convert the database to physical standby.
Bounce the database and check the database role.
Step 1:- Check that flashback is enabled after failover.
SQL> SELECT FLASHBACK_ON FROM V$DATABASE;
Step 2:- Get the SCN number from the new primary database.
SQL> select to_char(standby_became_primary_scn) from v$database;
Step 3:- Bounce the old primary database and open it in the mount stage.
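The original post breaks off at this point; a minimal sketch of steps 3 through 6, run on the old primary, where <scn> stands for the value obtained in step 2:
SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to scn <scn>;
SQL> alter database convert to physical standby;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database recover managed standby database disconnect from session;
SQL> select database_role from v$database;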
Top 10 by Buffer Gets
=====================
set linesize 1000
set pagesize 1000
col sql for a70
set long 5000
col hash_value for 9999999999999999999999
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
buffer_gets, executions, buffer_gets/executions "Gets/Exec",
hash_value,address
FROM GV$SQLAREA
WHERE buffer_gets > 10000
ORDER BY buffer_gets DESC)
WHERE rownum <=10
;
==================================================================================================
Top 10 by Physical Reads
========================
set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
disk_reads, executions, disk_reads/executions "Reads/Exec",
hash_value,address
FROM V$SQLAREA
WHERE disk_reads > 1000
ORDER BY disk_reads DESC)
WHERE rownum <=10;
==================================================================================================
Top 10 by Executions
====================
set linesize 1000
set pagesize 1000
col sql for a70
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
executions, rows_processed, rows_processed/executions "Rows/Exec",
hash_value,address
FROM GV$SQLAREA
WHERE executions > 100
ORDER BY executions DESC)
WHERE rownum <=10;
==================================================================================================
Top 10 by Parse Calls:
========================
set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
parse_calls, executions, hash_value,address
FROM V$SQLAREA
WHERE parse_calls > 1000
ORDER BY parse_calls DESC)
WHERE rownum <=10
;
=================================================================================================
Top 10 by Sharable Memory:
========================
set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
sharable_mem, executions, hash_value,address
FROM V$SQLAREA
WHERE sharable_mem > 1048576
ORDER BY sharable_mem DESC)
WHERE rownum <=10
;
==================================================================================================
Top 10 by Version Count:
========================
set linesize 100
set pagesize 100
SELECT * FROM
(SELECT substr(sql_text,1,40) sql,
version_count, executions, hash_value,address
FROM V$SQLAREA
WHERE version_count > 20
ORDER BY version_count DESC)
WHERE rownum <=10
;
Description: In this blog, we are going to see what gather stats is and the various levels of gathering stats.
What is GATHER STATS: To gather statistics in Oracle we use the DBMS_STATS package. It can collect statistics in parallel, including global statistics for partitioned objects. The DBMS_STATS package is used specifically for optimizer statistics. As explained above, DBMS_STATS is vital for good SQL performance, and we should gather stats before adjusting or setting any optimizer parameters in Oracle.
The lower the query cost, the faster the query executes. We have to gather statistics on a regular basis for database objects, to give the Oracle optimizer the best information to run queries in the best possible time. Using the ANALYZE statement is the traditional way of checking the cost of a query, but nowadays we use the DBMS_STATS package to gather stats in Oracle.
Gather STATS:
CASCADE => TRUE: Gather statistics on the indexes as well. If not specified, Oracle determines whether to collect them.
DEGREE => 4: Degree of parallelism.
ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE: (DEFAULT) Auto-sets the sample size % for skew (distinct) values (more accurate and faster than setting a manual sample size).
METHOD_OPT => : For gathering histograms:
FOR COLUMNS SIZE AUTO: You can specify a single column in quotes instead of all columns.
FOR ALL COLUMNS SIZE REPEAT: Prevents deletion of histograms and collects them only for columns that already have histograms.
FOR ALL COLUMNS: Collect histograms on all columns.
FOR ALL COLUMNS SIZE SKEWONLY: Collect histograms for columns that have skewed values; test skewness first.
FOR ALL INDEXED COLUMNS: Collect histograms only for columns that have indexes.
A combined example is shown below.
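A sketch showing these parameters together in a table-level gather; the SCOTT.EMP schema and table names are only illustrative:
SQL> BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(
         ownname          => 'SCOTT',
         tabname          => 'EMP',
         cascade          => TRUE,
         degree           => 4,
         estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
         method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
     END;
     /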
DATABASE Level:
Gather statistics for all objects in the database; with cascade, indexes are included.
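The command itself is not shown in the original post; a minimal sketch using the documented DBMS_STATS.GATHER_DATABASE_STATS procedure:
SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(cascade => TRUE, degree => 4);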
Description: In this blog, we are going to see the Oracle Data Guard configuration parameters, and how to view and set them.
Types of parameter configuration:
Independent parameters.
Primary database parameters.
Standby database parameters.
Independent parameters:
DB_UNIQUE_NAME: DB_UNIQUE_NAME specifies a globally unique name for the database. Every database must have a unique DB_UNIQUE_NAME. It can be up to 30 characters and is case insensitive. The following characters are valid in a database name: alphanumeric characters, underscore (_), number sign (#), and dollar sign ($).
db_unique_name='unique_name'
SQL> show parameter db_unique_name
NAME TYPE VALUE
----------------------------- ----------- ---------------
db_unique_name string orcl
LOG_ARCHIVE_CONFIG: LOG_ARCHIVE_CONFIG enables or disables the sending of redo logs to remote destinations and the receipt of remote redo logs, and specifies the unique database names (DB_UNIQUE_NAME) for each database in the Data Guard configuration. We can add SEND, NOSEND, RECEIVE, NORECEIVE, DG_CONFIG, and NODG_CONFIG to this parameter.
log_archive_config='dg_config=(Primary,Standby)'
SQL> show parameter LOG_ARCHIVE_CONFIG
NAME TYPE VALUE
--------------------------------- ----------- -------------------
log_archive_config string dg_config=(orcl,stand)
Change the value of log_archive_config:
SQL> ALTER SYSTEM SET log_archive_config='dg_config=(orcl,stand)' SCOPE=both;
System altered.
LOG_ARCHIVE_MAX_PROCESSES: LOG_ARCHIVE_MAX_PROCESSES specifies the number of archiver background processes (ARC0 through ARC9) Oracle initially invokes.
log_archive_max_processes=4
DB_CREATE_FILE_DEST: DB_CREATE_FILE_DEST specifies the default location for Oracle-managed datafiles.
db_create_file_dest='+DATA'
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='/u02/oradata';
Primary database parameters:
LOG_ARCHIVE_DEST_n: The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten destinations, each of which must specify either the LOCATION or the SERVICE attribute to say where to archive the redo data; one of the two is mandatory for each destination. An example setting is shown after the attribute list.
SERVICE - a tnsnames entry that identifies the remote database.
SYNC - synchronous redo transmission.
ASYNC - asynchronous redo transmission.
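A sketch of a typical primary-side setting, reusing the orcl/stand names from this post (the attribute values are illustrative):
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stand ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stand' SCOPE=both;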
LOG_ARCHIVE_DEST_STATE_n: Specifies the availability state of the corresponding destination.
ENABLE: Specifies that a valid log archive destination can be used for a subsequent archiving operation.
DEFER: Specifies that valid destination information and attributes are preserved, but the destination is excluded from archiving operations until re-enabled.
ALTERNATE: Specifies that a log archive destination is not enabled but will become enabled if communication to another destination fails.
Standby Database Parameters:
DB_FILE_NAME_CONVERT: Converts the filename of a new datafile on the primary database to a filename on the standby database.
SQL> show parameter DB_FILE_NAME_CONVERT
NAME TYPE VALUE
--------------------------------- ----------- ------------------------------
db_file_name_convert string /u01/app/oracle/oradata/ORCL/, /u01/app/oracle/oradata/STAND/
LOG_FILE_NAME_CONVERT: Converts the filename of a new log file on the primary database to the filename of a log file on the standby database.
SQL> show parameter LOG_FILE_NAME_CONVERT
NAME TYPE VALUE
--------------------------------- -------- ------------
log_file_name_convert string /u01/app/oracle/oradata/ORCL/, /u01/app/oracle/oradata/STAND
SQL> alter system set db_file_name_convert='/u02/oradata/orcl/datafile' scope=spfile;
SQL> alter system set log_file_name_convert='orcl','stand' scope=spfile;
FAL_SERVER: FAL stands for fetch archive log. FAL_SERVER specifies the database from which a standby fetches missing archived logs. The value of the FAL_SERVER parameter acts as an Oracle Net service name pointing to that database.
SQL> ALTER SYSTEM SET fal_server='stand' SCOPE=both;
SQL> ALTER SYSTEM SET fal_client='orcl' SCOPE=both;
FAL_CLIENT: FAL_CLIENT specifies the FAL (fetch archive log) client name that is used by the FAL service; its value is also an Oracle Net service name.
SQL> ALTER SYSTEM SET fal_server='orcl' SCOPE=both;
SQL> ALTER SYSTEM SET fal_client='stand' SCOPE=both;
STANDBY_FILE_MANAGEMENT: STANDBY_FILE_MANAGEMENT enables or disables automatic standby file management. When automatic standby file management is enabled, operating system file additions and deletions on the primary database are replicated on the standby database.
MANUAL: Disables automatic standby file management.
AUTO: Enables automatic standby file management.
SQL> ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=both;
Description: In this blog, we are going to see the Oracle Data Guard architecture and a clear description of the overall Data Guard process.
What is Oracle Data Guard:
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data.
Data Guard maintains these standby databases as transactionally consistent copies of the production database.
If the production database becomes unavailable, Data Guard can switch any standby database to the production role.
Data Guard configuration pre-system checks:
Hardware: Hardware can be different for the standby and primary databases, such as CPU and hard disk capacity.
Operating system: The operating system should be the same on both the primary and standby databases; if you use Linux on the primary side, the standby runs the same OS. OS versions can be different, but the platform should be the same.
Software: The same release of the Oracle database on the primary and standby sides. The standby database needs archivelog mode and force logging enabled. SYSDBA privilege is required.
ORACLE DATA GUARD ARCHITECTURE:
Primary Database Processes:
LGWR (log writer): The log writer collects transaction log information and writes it to the online redo logs. LGWR modes:
SYNC - in synchronous mode, LGWR sends redo information directly to the RFS (remote file server) process on the standby database and waits for confirmation before proceeding.
ASYNC - in asynchronous mode, it also sends redo information directly but does not wait before proceeding. In ASYNC mode, LGWR submits the network I/O request to the network server (LNSn) process for the destination.
Archiver process (ARCn): ARCn, or a SQL session performing an archival operation, creates a local copy of the online redo logs for primary database recovery. ARCn can also send the online logs to the RFS (remote file server) process simultaneously, and it is responsible for gap resolution on the standby database.
FAL (Fetch Archive Log): FAL provides a client/server mechanism for resolving gaps detected in the range of archived logs generated on the primary database and received by the standby database.
Standby Database Processes:
RFS (remote file server) process: RFS receives redo information from the primary database and writes it to the standby redo logs or archived logs. Each primary LNSn or ARCn process has its own RFS process.
ARCn (archiver process): The ARCn process archives the standby redo logs.
Managed Recovery Process (MRP): For the physical standby database, only MRP applies archived log information to the physical standby database. MRP process commands:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT NODELAY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Logical standby database: The logical standby process (LSP) controls the application of archived redo log information to the logical standby database.
Standby redo log files: Standby redo log files are used only when the database is in the standby role, to store redo data received from the primary database. Standby redo logs form a separate pool of log file groups.
Data Guard Broker: An Oracle Data Guard Broker configuration is a logical definition of an Oracle Data Guard configuration allowing for centralized management and configuration of the physical resources involved in the configuration.
Step 1: Connect to both Databases (primary and standby) and issue the following command:-
SQL> alter system set dg_broker_start=true;
System altered.
Step 2: On the primary server, issue the following command to Register the Primary Server with the Broker.
[oratest@oracle dbs]$ dgmgrl sys/oracle@orcl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Nov 8 17:14:30 2021
Version 19.9.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "orcl"
Connected as SYSDBA.
DGMGRL> create configuration 'dg_my_config' as primary database is 'orcl' connect
identifier is orcl;
Configuration "dg_my_config" created with primary database "orcl"
Step 3: Now add the standby database:-
DGMGRL> add database stand as connect identifier is stand maintained as physical;
Database "stand" added
Step 4: Now we enable the new configuration;-
DGMGRL> enable configuration;
Enabled.
DGMGRL>
Step 5: The following commands show how to check the configuration and status of the databases from the broker:-
DGMGRL> show database stand;
Database - stand
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
Average Apply Rate: 5.00 KByte/s
Real Time Query: OFF
Instance(s):
orcl
Database Status: SUCCESS
Step 6: Stop/Start Managed Recovery:-
Stop managed recovery.
SQL> alter database recover managed standby database cancel;
Start managed recovery.
SQL> alter database recover managed standby database disconnect;
In this article, we are going to see the steps used to apply the latest Oracle 19c Database Release Update 19.9.0.0.201020 (Patch 31771877). The environment is a single-instance database.
Review the readme file for Patch 31771877 - Database Release Update 19.9.0.0.201020.
Download patch p31771877_190000_Linux-x86-64.zip.
Make sure the OPatch version is at minimum 12.2.0.1.19.
Step:-1 Download the Patch from oracle support
p31771877_190000_Linux-x86-64.zip
p6880880_200000_Linux-x86-64.zip
Step:-2 Copy the patch to the DB server
[oratest@oracle ~]$ mkdir patch
[oratest@oracle ~]$ ls
[oratest@oracle ~]$ cd patch/
[oratest@oracle ~]$ ls -lrth
-rwxrwxr-x. 1 oratest oratest 116M Oct 21 10:55 p31771877_190000_Linux-x86-64.zip
-rwxrwxr-x. 1 oratest oratest 1.6G Oct 21 11:23 p6880880_210000_Linux-x86-64.zip
Step:-3 Upgrade the OPatch tool from 12.2.0.1.17 to 12.2.0.1.27
[oratest@oracle ~]$ cd $ORACLE_HOME/OPatch
[oratest@oracle ~]$ ls
[oratest@oracle ~]$ ./opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded
[oratest@oracle ~]$ cd /home/oratest/patch
[oratest@oracle ~]$ ls
[oratest@oracle ~]$ cp p6880880_200000_Linux-x86-64.zip /u01/app/oracle/product/19.0.0/dbhome_1
[oratest@oracle ~]$ cd /u01/app/oracle/product/19.0.0/dbhome_1
[oratest@oracle ~]$ pwd
/u01/app/oracle/product/19.0.0/dbhome_1
Step:-4 Check patch status before applying, using the below query
SET LINESIZE 500
SET PAGESIZE 1000
SET SERVEROUT ON
SET LONG 2000000
COLUMN action_time FORMAT A12
COLUMN action FORMAT A10
COLUMN comments FORMAT A30
COLUMN description FORMAT A60
COLUMN namespace FORMAT A20
COLUMN status FORMAT A10
SELECT TO_CHAR(action_time, 'YYYY-MM-DD') AS action_time, action, status,
description,patch_id FROM sys.dba_registry_sqlpatch ORDER by action_time;
Identifying Invalid Objects before patching
SQL> select owner,object_type,object_name,status from dba_objects where status='INVALID';
no rows selected
[oratest@oracle 31771877]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.27
Copyright (c) 2021, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/19.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/19.0.0/dbhome_1/oraInst.loc
OPatch version : 12.2.0.1.27
OUI version : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2021-10-21_12-48-42PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 31771877
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19.0.0/dbhome_1')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '31771877' to OH '/u01/app/oracle/product/19.0.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.network.gsm, 19.0.0.0.0 ] , [ oracle.rdbms.ic, 19.0.0.0.0 ] , [ oracle.rdbms.tg4db2, 19.0.0.0.0 ] , [ oracle.tfa, 19.0.0.0.0 ] , [ oracle.options.olap.awm, 19.0.0.0.0 ] , [ oracle.net.cman, 19.0.0.0.0 ] , [ oracle.xdk.companion, 19.0.0.0.0 ] , [ oracle.oraolap.mgmt, 19.0.0.0.0 ] , [ oracle.network.cman, 19.0.0.0.0 ] , [ oracle.options.olap, 19.0.0.0.0 ] , [ oracle.assistants.usm, 19.0.0.0.0 ] , [ oracle.assistants.asm, 19.0.0.0.0 ] , [ oracle.sqlj, 19.0.0.0.0 ] , [oracle.jdk, 1.8.0.191.0] not present in the Oracle Home or a higher version is found.
Patching component oracle.rdbms.rsf, 19.0.0.0.0...
Patching component oracle.rdbms, 19.0.0.0.0...
Patching component oracle.rdbms.util, 19.0.0.0.0...
Patching component oracle.assistants.acf, 19.0.0.0.0...
Patching component oracle.assistants.deconfig, 19.0.0.0.0...
Patching component oracle.assistants.server, 19.0.0.0.0...
Patching component oracle.buildtools.rsf, 19.0.0.0.0...
Patching component oracle.ctx, 19.0.0.0.0...
Patching component oracle.dbjava.ic, 19.0.0.0.0...
Patching component oracle.dbjava.jdbc, 19.0.0.0.0...
Patching component oracle.dbjava.ucp, 19.0.0.0.0...
Patching component oracle.dbtoolslistener, 19.0.0.0.0...
Patching component oracle.ldap.rsf, 19.0.0.0.0...
Patching component oracle.network.rsf, 19.0.0.0.0...
Patching component oracle.oracore.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...
Patching component oracle.rdbms.deconfig, 19.0.0.0.0...
Patching component oracle.sdo, 19.0.0.0.0...
Patching component oracle.sdo.locator.jrf, 19.0.0.0.0...
Patching component oracle.sqlplus, 19.0.0.0.0...
Patching component oracle.usm.deconfig, 19.0.0.0.0...
Patching component oracle.rdbms.crs, 19.0.0.0.0...
Patching component oracle.oraolap, 19.0.0.0.0...
Patching component oracle.precomp.rsf, 19.0.0.0.0...
Patching component oracle.javavm.client, 19.0.0.0.0...
Patching component oracle.precomp.common.core, 19.0.0.0.0...
Patching component oracle.network.client, 19.0.0.0.0...
Patching component oracle.rdbms.scheduler, 19.0.0.0.0...
Patching component oracle.ctx.atg, 19.0.0.0.0...
Patching component oracle.marvel, 19.0.0.0.0...
Patching component oracle.oraolap.dbscripts, 19.0.0.0.0...
Patching component oracle.ovm, 19.0.0.0.0...
Patching component oracle.bali.ice, 11.1.1.7.0...
Patching component oracle.ctx.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.lbac, 19.0.0.0.0...
Patching component oracle.rdbms.oci, 19.0.0.0.0...
Patching component oracle.odbc, 19.0.0.0.0...
Patching component oracle.rdbms.rman, 19.0.0.0.0...
Patching component oracle.nlsrtl.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.install.plugins, 19.0.0.0.0...
Patching component oracle.rdbms.drdaas, 19.0.0.0.0...
Patching component oracle.ldap.rsf.ic, 19.0.0.0.0...
Patching component oracle.ldap.security.osdt, 19.0.0.0.0...
Patching component oracle.dbdev, 19.0.0.0.0...
Patching component oracle.rdbms.install.common, 19.0.0.0.0...
Patching component oracle.sdo.locator, 19.0.0.0.0...
Patching component oracle.duma, 19.0.0.0.0...
Patching component oracle.sqlplus.ic, 19.0.0.0.0...
Patching component oracle.xdk.rsf, 19.0.0.0.0...
Patching component oracle.xdk.parser.java, 19.0.0.0.0...
Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...
Patching component oracle.precomp.common, 19.0.0.0.0...
Patching component oracle.precomp.lang, 19.0.0.0.0...
Patching component oracle.jdk, 1.8.0.201.0...
Patch 31771877 successfully applied.
Sub-set patch [29517242] has become inactive due to the application of a super-set patch [31771877].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/oracle/product/19.0.0/dbhome_1/cfgtoollogs/opatch/opatch2021-10-21_12-48-42PM_1.log
OPatch succeeded.
Step:-7 Run the below commands
SQL> select owner from all_objects where status='INVALID';
no rows selected
SQL> @?/rdbms/admin/utlrp.sql
Session altered.
TIMESTAMP
--------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2021-10-21 13:20:37
DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.
DOC>
DOC> Use the following queries to track recompilation progress:
DOC>
DOC> 1. Query returning the number of invalid objects remaining. This
DOC> number should decrease with time.
DOC> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC> 2. Query returning the number of objects compiled so far. This number
DOC> should increase with time.
DOC> SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC> This script automatically chooses serial or parallel recompilation
DOC> based on the number of CPUs available (parameter cpu_count) multiplied
DOC> by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC> On RAC, this number is added across all RAC nodes.
DOC>
DOC> UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC> recompilation. Jobs are created without instance affinity so that they
DOC> can migrate across RAC nodes. Use the following queries to verify
DOC> whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC> 1. Query showing jobs created by UTL_RECOMP
DOC> SELECT job_name FROM dba_scheduler_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC> 2. Query showing UTL_RECOMP jobs that are running
DOC> SELECT job_name FROM dba_scheduler_running_jobs
DOC> WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#
PL/SQL procedure successfully completed.
TIMESTAMP
--------------------------------------------
COMP_TIMESTAMP UTLRP_END 2021-10-21 13:20:41
DOC> The following query reports the number of invalid objects.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#
OBJECTS WITH ERRORS
-------------------
0
DOC> The following query reports the number of exceptions caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC> Note: Typical compilation errors (due to coding errors) are not
DOC> logged into this table: they go into DBA_ERRORS instead.
DOC>#
ERRORS DURING RECOMPILATION
---------------------------
0
Function created.
PL/SQL procedure successfully completed.
Function dropped.
PL/SQL procedure successfully completed.
SQL> select owner from all_objects where status='INVALID';
no rows selected
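The patch readme also calls for running datapatch after restarting the database; once that completes, the patch status can be re-checked with the same query used in Step:-4 (assuming the same column-formatting session settings):
SQL> SELECT TO_CHAR(action_time, 'YYYY-MM-DD') AS action_time, action, status,
     description, patch_id FROM sys.dba_registry_sqlpatch ORDER by action_time;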