
Friday, June 28, 2019

Oracle Home inventory is corrupted LsInventorySession failed: OracleHomeInventory


[oratest@slctest01 OPatch]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.3
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/oracle/testdb/11.2.0
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/testdb/11.2.0/oraInst.loc
OPatch version    : 11.2.0.3.3
OUI version       : 11.2.0.3.0
Log file location : /u01/oracle/testdb/11.2.0/cfgtoollogs/opatch/opatch2013-05-15_10-27-12AM_1.log

List of Homes on this system:

Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73


Solution

I was able to resolve this issue by reattaching the Oracle Home to the central inventory. Follow the steps below:

 1. Login to the server.
 2. cd $ORACLE_HOME/oui/bin
 $ ./attachHome.sh
 Starting Oracle Universal Installer...

 Checking swap space: must be greater than 500 MB. Actual 196608 MB Passed
 Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-06-27_03-07-48AM. Please wait ...
 3. Re-run the same command; it should now list the inventory correctly.

 [oratest@slctest01 ~]$ opatch lsinventory
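
The fix above boils down to re-registering the Oracle Home in the central inventory. The sequence can be sketched as follows (the ORACLE_HOME value is taken from the lsinventory output shown above; adjust the paths for your environment):

```shell
# Re-attach the Oracle Home to the central inventory, then verify
export ORACLE_HOME=/u01/oracle/testdb/11.2.0
cd $ORACLE_HOME/oui/bin
./attachHome.sh                          # registers this home in the central inventory
$ORACLE_HOME/OPatch/opatch lsinventory   # should now succeed without error code 73
```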

Thursday, June 13, 2019

Creating ACL for UTL_SMTP

SELECT *
FROM dba_network_acls;

-- Create ACL and privilege
begin
  dbms_network_acl_admin.create_acl (
    acl         => 'utl_mail.xml',
    description => 'Allow mail to be sent',
    principal   => 'APPS',
    is_grant    => TRUE,
    privilege   => 'connect'
    );
    commit;
end;
/

--Add Privilege

begin
  dbms_network_acl_admin.add_privilege (
  acl       => 'utl_mail.xml',
  principal => 'APPS',
  is_grant  => TRUE,
  privilege => 'resolve'
  );
  commit;
end;
/


-- Assign the ACL to the mail host

begin
  dbms_network_acl_admin.assign_acl(
  acl  => 'utl_mail.xml',
  host => 'mail.company.com.sa'
  );
  commit;
end;
/

-- Test: send an e-mail via UTL_MAIL
begin
  utl_mail.send(
  sender     => 'fromuser@company.com',
  recipients => 'myname@company.com,user2@company.com, user3@company.com',
  message    => 'Test E-mail from ERP Database'
  );
  commit;
end;
/
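
After creating and assigning the ACL, it can be verified with a quick check (a sketch, assuming the same ACL name and principal as above):

```sql
-- Verify the ACL assignment and the privileges granted to APPS
SELECT host, lower_port, upper_port, acl
FROM   dba_network_acls;

SELECT acl, principal, privilege, is_grant
FROM   dba_network_acl_privileges
WHERE  principal = 'APPS';
```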

Concurrent Manager Cleanup Script


====================================================================

REM
REM FILENAME
REM   cmclean.sql
REM DESCRIPTION
REM   Clean out the concurrent manager tables
REM NOTES
REM   Usage: sqlplus <apps_user/apps_passwd> @cmclean
REM
REM
REM   $Id: cmclean.sql,v 1.4 2001/04/07 15:55:07 pferguso Exp $
REM
REM
REM +======================================================================+


set verify off;
set head off;
set timing off
set pagesize 1000

column manager format a20 heading 'Manager short name'
column pid heading 'Process id'
column pscode format a12 heading 'Status code'
column ccode format a12 heading 'Control code'
column request heading 'Request ID'
column pcode format a6 heading 'Phase'
column scode format a6 heading 'Status'


WHENEVER SQLERROR EXIT ROLLBACK;

DOCUMENT

   WARNING : Do not run this script without explicit instructions
             from Oracle Support


   *** Make sure that the managers are shut down     ***
   *** before running this script                    ***

   *** If the concurrent managers are NOT shut down, ***
   *** exit this script now !!                       ***

#

accept answer prompt 'If you wish to continue type the word ''dual'': '

set feed off
select null from &answer;
set feed on


REM     Update process status codes to TERMINATED

prompt
prompt  ------------------------------------------------------------------------

prompt  -- Updating invalid process status codes in FND_CONCURRENT_PROCESSES
set feedback off
set head on
break on manager

SELECT  concurrent_queue_name manager,
        concurrent_process_id pid,
        process_status_code pscode
FROM    fnd_concurrent_queues fcq, fnd_concurrent_processes fcp
WHERE   process_status_code not in ('K', 'S')
AND     fcq.concurrent_queue_id = fcp.concurrent_queue_id
AND     fcq.application_id = fcp.queue_application_id;

set head off
set feedback on
UPDATE  fnd_concurrent_processes
SET     process_status_code = 'K'
WHERE   process_status_code not in ('K', 'S');



REM     Set all managers to 0 processes

prompt
prompt  ------------------------------------------------------------------------

prompt  -- Updating running processes in FND_CONCURRENT_QUEUES
prompt  -- Setting running_processes = 0 and max_processes = 0 for all managers

UPDATE  fnd_concurrent_queues
SET     running_processes = 0, max_processes = 0;




REM     Reset control codes

prompt
prompt  ------------------------------------------------------------------------

prompt  -- Updating invalid control_codes in FND_CONCURRENT_QUEUES
set feedback off
set head on
SELECT  concurrent_queue_name manager,
        control_code ccode
FROM    fnd_concurrent_queues
WHERE   control_code not in ('E', 'R', 'X')
AND     control_code IS NOT NULL;

set feedback on
set head off
UPDATE  fnd_concurrent_queues
SET     control_code = NULL
WHERE   control_code not in ('E', 'R', 'X')
AND     control_code IS NOT NULL;

REM     Also null out target_node for all managers
UPDATE  fnd_concurrent_queues
SET     target_node = null;


REM     Set all 'Terminating' requests to Completed/Error
REM     Also set Running requests to completed, since the managers are down

prompt
prompt  ------------------------------------------------------------------------

prompt  -- Updating any Running or Terminating requests to Completed/Error
set feedback off
set head on
SELECT  request_id request,
        phase_code pcode,
        status_code scode
FROM    fnd_concurrent_requests
WHERE   status_code = 'T' OR phase_code = 'R'
ORDER BY request_id;

set feedback on
set head off
UPDATE  fnd_concurrent_requests
SET     phase_code = 'C', status_code = 'E'
WHERE   status_code ='T' OR phase_code = 'R';





REM     Set all Runalone flags to 'N'
REM     This has to be done differently for Release 10

prompt
prompt  ------------------------------------------------------------------------

prompt  -- Updating any Runalone flags to 'N'
prompt
set serveroutput on
set feedback off
declare
        c         pls_integer := dbms_sql.open_cursor;
        upd_rows  pls_integer;
        vers      varchar2(50);
        tbl       varchar2(50);
        col       varchar2(50);
        statement varchar2(255);
begin

        select substr(release_name, 1, 2)
        into   vers
        from fnd_product_groups;

        if vers >= 11 then
           tbl := 'fnd_conflicts_domain';
           col := 'runalone_flag';
        else
           tbl := 'fnd_concurrent_conflict_sets';
           col := 'run_alone_flag';
        end if;


        statement := 'update ' || tbl || ' set ' || col || '=''N'' where ' || col || ' = ''Y''';
        dbms_sql.parse(c, statement, dbms_sql.native);
        upd_rows := dbms_sql.execute(c);
        dbms_sql.close_cursor(c);
        dbms_output.put_line('Updated ' || upd_rows || ' rows of ' || col || ' in ' || tbl || ' to ''N''');
end;
/



prompt

prompt  ------------------------------------------------------------------------

prompt  Updates complete.
prompt  Type commit now to commit these updates, or rollback to cancel.
prompt  ------------------------------------------------------------------------

prompt

set feedback on

REM  <= Last REM statement -----------------------------------------------------



Check Database Locks

--To check database locks----[Kill Inactive Session] ----------------------------------------------------
SELECT * FROM dba_locks WHERE blocking_others='Blocking';

select LOCK_TYPE,SESSION_ID,BLOCKING_OTHERS from dba_locks where BLOCKING_OTHERS !='Not Blocking';

select process,sid, blocking_session from v$session where blocking_session is not null;

--Locked tables
SELECT l.session_id sid, o.object_name, o.object_type
FROM v$locked_object l, all_objects o
WHERE l.object_id = o.object_id;

select SID,SERIAL#,MODULE,ACTION,CLIENT_IDENTIFIER,BLOCKING_SESSION_STATUS,STATE from v$session
where status='INACTIVE'
--and CLIENT_IDENTIFIER like '%MONIR%'
AND MODULE like '%BOM%'
AND STATE='WAITING'
AND CLIENT_IDENTIFIER !='SYSADMIN';


--get SERIAL# where SID is from the above script
select * from v$session where SID=6563;


--To kill the session with session id------------------------------------------------------------------
ALTER SYSTEM KILL SESSION '1351,993';

select * from v$session where process='144';



SELECT B.Owner, B.Object_Name, A.Oracle_Username, A.OS_User_Name 
FROM V$Locked_Object A, All_Objects B
WHERE A.Object_ID = B.Object_ID;


select a.session_id,a.oracle_username, a.os_user_name, b.owner "OBJECT OWNER", b.object_name,b.object_type,a.locked_mode from
(select object_id, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, LOCKED_MODE from v$locked_object) a,
(select object_id, owner, object_name,object_type from dba_objects) b
where a.object_id=b.object_id;

SELECT a.object_id, a.session_id, substr(b.object_name, 1, 40)
FROM v$locked_object a, dba_objects b
WHERE a.object_id = b.object_id
AND b.object_name like 'BOM%'
ORDER BY b.object_name;




SELECT l.*, o.owner object_owner, o.object_name
FROM SYS.all_objects o, v$lock l
WHERE l.TYPE = 'TM'
AND o.object_id = l.id1
AND o.object_name in ('AP_INVOICES_ALL', 'AP_INVOICE_LINES_ALL', 'AP_INVOICE_DISTRIBUTIONS_ALL');




SELECT SID, SERIAL#
FROM v$session
WHERE SID = 960;



--R12: APXPAWKB Cannot Select this Payment Document Because it is in use By Another Single Payment (Doc ID 1322570.1)
--In the instance where user can reproduce the issue run the following queries:

1)

select * from dba_objects where object_name like 'CE_PAYMENT_DOCUMENTS'
and owner = 'CE';

2)

select * from v$locked_object where object_id in (select object_id from
dba_objects where object_name like 'CE_PAYMENT_DOCUMENTS' and owner = 'CE');

3)

select * from v$session where sid in (select session_id from
v$locked_object where object_id in (select object_id from dba_objects where
object_name like 'CE_PAYMENT_DOCUMENTS' and owner = 'CE'));

4)

select * from dba_locks where session_id in (select session_id from
v$locked_object where object_id in (select object_id from dba_objects where
object_name like 'CE_PAYMENT_DOCUMENTS' and owner = 'CE'));

The result of query 3 identifies the session that is holding the lock on the payment document.

Kill the session that is locking the 'CE_PAYMENT_DOCUMENTS' table:

alter system kill session 'sid,serial#';
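
To avoid looking up the SID and SERIAL# by hand, a helper query can generate the KILL statements for the blocking sessions (a sketch; review the generated list carefully before running any of the commands):

```sql
-- Build ALTER SYSTEM KILL SESSION commands for sessions that block others
SELECT 'alter system kill session ''' || s.sid || ',' || s.serial# || ''';' AS kill_cmd
FROM   v$session s
WHERE  s.sid IN (SELECT blocking_session
                 FROM   v$session
                 WHERE  blocking_session IS NOT NULL);
```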

Archive Log Statistics


select * from V$LOG_HISTORY;

select * from V$ARCHIVE_DEST;


select trunc(COMPLETION_TIME) TIME, SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB from V$ARCHIVED_LOG group by trunc (COMPLETION_TIME) order by 1;



select trunc(COMPLETION_TIME,'HH24') TIME, SUM(BLOCKS * BLOCK_SIZE)/1024/1024 SIZE_MB from V$ARCHIVED_LOG group by trunc (COMPLETION_TIME,'HH24') order by 1;

SELECT TRUNC(COMPLETION_TIME) ARCHIVED_DATE,
           SUM(BLOCKS * BLOCK_SIZE) / 1024 / 1024 SIZE_IN_MB
      FROM V$ARCHIVED_LOG
     GROUP BY TRUNC(COMPLETION_TIME)
     ORDER BY 1;



select sum((blocks*block_size)/(1024*1024)) SIZEinMB, sequence#, name
from v$archived_log
group by sequence#, name
order by sequence#;


SELECT trunc(first_time) DAY,
     count(*) NB_SWITCHS,
     trunc(count(*)*log_size/1024) TOTAL_SIZE_KB,
     to_char(count(*)/24,'9999.9') AVG_SWITCHS_PER_HOUR
FROM v$loghist,(select avg(bytes) log_size from v$log)
GROUP BY trunc(first_time),log_size
/


-- Size of the archive log files each hour
alter session set nls_date_format = 'YYYY-MM-DD HH24';

SELECT TRUNC(COMPLETION_TIME, 'HH') ARCHIVED_DATE_HOUR,
            SUM(BLOCKS * BLOCK_SIZE) / 1024 / 1024 SIZE_IN_MB
       FROM V$ARCHIVED_LOG
      GROUP BY TRUNC(COMPLETION_TIME, 'HH')
      ORDER BY 1;


SELECT
            TRUNC(COMPLETION_TIME) ARCHIVED_DATE,
            THREAD#,
            SUM(BLOCKS * BLOCK_SIZE) / 1024 / 1024 SIZE_IN_MB
       FROM V$ARCHIVED_LOG
      GROUP BY TRUNC(COMPLETION_TIME), THREAD#
      ORDER BY 1, 2;
     
     
     

-- per day the volume in MBytes of archived logs generated
SELECT SUM_ARCH.DAY,
         SUM_ARCH.GENERATED_MB,
         SUM_ARCH_DEL.DELETED_MB,
         SUM_ARCH.GENERATED_MB - SUM_ARCH_DEL.DELETED_MB "REMAINING_MB"
    FROM (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      GENERATED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH,
         (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      DELETED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES' AND DELETED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH_DEL
   WHERE SUM_ARCH.DAY = SUM_ARCH_DEL.DAY(+)
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY');


-- display the number of archived logs generated per hour per day:

---number of archived logs generated per hour per day

SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
            "00-01",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
            "01-02",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
            "02-03",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
            "03-04",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
            "04-05",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
            "05-06",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
            "06-07",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
            "07-08",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
            "08-09",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
            "09-10",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
            "10-11",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
            "11-12",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
            "12-13",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
            "13-14",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
            "14-15",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
            "15-16",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
            "16-17",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
            "17-18",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
            "18-19",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
            "19-20",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
            "20-21",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
            "21-22",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
            "22-23",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
            "23-00",
         COUNT (*) TOTAL
    FROM V$ARCHIVED_LOG
WHERE ARCHIVED='YES'
GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY');



-- Combination of these scripts is:

SELECT LOG_HISTORY.*,
         SUM_ARCH.GENERATED_MB,
         SUM_ARCH_DEL.DELETED_MB,
         SUM_ARCH.GENERATED_MB - SUM_ARCH_DEL.DELETED_MB "REMAINING_MB"
    FROM (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
                      "00-01",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
                      "01-02",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
                      "02-03",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
                      "03-04",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
                      "04-05",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
                      "05-06",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
                      "06-07",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
                      "07-08",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
                      "08-09",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
                      "09-10",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
                      "10-11",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
                      "11-12",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
                      "12-13",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
                      "13-14",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
                      "14-15",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
                      "15-16",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
                      "16-17",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
                      "17-18",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
                      "18-19",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
                      "19-20",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
                      "20-21",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
                      "21-22",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
                      "22-23",
                   SUM (
                      DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
                      "23-00",
                   COUNT (*) TOTAL
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) LOG_HISTORY,
         (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      GENERATED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH,
         (  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
                   SUM (ROUND ( (blocks * block_size) / (1024 * 1024), 2))
                      DELETED_MB
              FROM V$ARCHIVED_LOG
             WHERE ARCHIVED = 'YES' AND DELETED = 'YES'
          GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')) SUM_ARCH_DEL
   WHERE LOG_HISTORY.DAY = SUM_ARCH.DAY AND SUM_ARCH.DAY = SUM_ARCH_DEL.DAY(+)
ORDER BY TO_DATE (LOG_HISTORY.DAY, 'DD/MM/YYYY');

Wednesday, March 13, 2019

Backup and Recovery Scenarios

a) Consistent backups

A consistent backup means that all data files and control files are consistent to a point in time, i.e. they have the same SCN. This is the only valid method of backup when the database is in NOARCHIVELOG mode.

b) Inconsistent backups
An inconsistent backup is possible only when the database is in ARCHIVELOG mode. We must apply redo logs to the data files in order to restore the database to a consistent state. Inconsistent backups can be taken using RMAN while the database is open.
Inconsistent backups can also be taken using other OS tools, provided the tablespaces (or the whole database) are put into backup mode first, e.g.:
SQL> alter tablespace data begin backup;
    SQL> alter database begin backup; (version 10 and above only)


c) Database Archive mode
The database can run in either ARCHIVELOG or NOARCHIVELOG mode. When we first create the database, we specify whether it is to be in ARCHIVELOG mode. Then in the init.ora file we set the parameter log_archive_start=true so that archiving starts automatically on startup (this parameter is deprecated from 10g onwards, where archiving starts automatically in ARCHIVELOG mode).
If the database has not been created with ARCHIVELOG mode enabled, we can issue the command while the database is mounted, not open:
SQL> alter database archivelog;
SQL> alter system archive log start;
SQL> alter database open;
SQL> archive log list
The last command shows the log mode and whether automatic archival is set.
 

d) Backup Methods
Essentially, there are two backup methods, hot and cold, also known as online and offline, respectively. A cold backup is one taken when the database is shut down; the database must be shut down cleanly. A hot backup is one taken while the database is running. Commands for a hot backup:
For non RMAN backups:
1. Have the database in archivelog mode (see above)
2. SQL> archive log list
--This will show what the oldest online log sequence is. As a precaution, always keep all archived log files starting from the oldest online log sequence.
3. SQL> Alter tablespace tablespace_name BEGIN BACKUP;
or SQL> alter database begin backup (for v10 and above).
4. --Using an OS command, backup the datafile(s) of this tablespace.
5. SQL> Alter tablespace tablespace_name END BACKUP
--- repeat step 3, 4, 5 for each tablespace.
or SQL> alter database end backup; for version 10 and above
6. SQL> archive log list
---do this again to obtain the current log sequence. Make sure that we have a copy of this redo log file.
7. To force an archived log, issue
SQL> ALTER SYSTEM SWITCH LOGFILE
A better way to force this would be:
SQL> alter system archive log current;
8. SQL> archive log list
This is done again to check if the log file had been archived and to find the latest archived sequence number.
9. Backup all archived log files determined from steps 2 and 8.
10. Back up the control file:
SQL> Alter database backup controlfile to 'filename'
For RMAN backups:
see Note.<>  RMAN - Sample Backup Scripts 10g
or the appropriate RMAN documentation.
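
The manual hot-backup steps above can be sketched as a single SQL*Plus sequence (a sketch for 10g and above; the controlfile backup path is an example, and the OS-level datafile copies happen outside SQL*Plus):

```sql
-- Hot backup sketch (database must be in ARCHIVELOG mode)
alter database begin backup;            -- 10g+: puts all tablespaces in backup mode
-- ... copy the datafiles at OS level here ...
alter database end backup;
alter system archive log current;       -- force-archive the current redo log
alter database backup controlfile to '/backup/control.bkp';   -- path is an example
```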


e) Incremental backups
These are backups taken of only the blocks that have been modified since the last backup. They are useful because they take less space and time. There are two kinds of incremental backups: cumulative and noncumulative.
Cumulative incremental backups include all blocks that were changed since the last backup at a lower level. This reduces the work during restoration, as only one incremental backup contains all the changed blocks.
Noncumulative backups include only blocks that were changed since the previous backup at the same or lower level.
Using RMAN, we issue the command "backup incremental level n".
Oracle v9 and below: RMAN backs up empty blocks; from v10.2, RMAN does not back up empty blocks.
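
For illustration, a typical RMAN incremental cycle might look like this (a sketch; the tags and schedule are assumptions, not from the original post):

```
RMAN> backup incremental level 0 database tag 'WEEKLY_L0';   -- full baseline
RMAN> backup incremental level 1 database tag 'DAILY_DIFF';  -- changes since last level 0 or 1
RMAN> backup incremental level 1 cumulative database;        -- all changes since last level 0
```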


f) Support scenarios
When the database crashes, we now have a backup. We restore the backup and
then recover the database. Also, don't forget to take a backup of the control
file whenever there is a schema change.

RECOVERY SCENARIOS

Note: all online datafiles must be at the same point in time when completing recovery.
There are several kinds of recovery we can perform, depending on the type of failure and the kind of backup we have. Essentially, if we are not running in ARCHIVELOG mode, we can only restore the cold backup of the database, and we will lose any new data and changes made since that backup was taken. If, however, the database is in ARCHIVELOG mode, we will be able to recover the database up to the time of failure. There are three basic types of recovery:


1. Online Block Recovery.
This is performed automatically by Oracle (PMON). It occurs when a process dies while changing a buffer. Oracle reconstructs the buffer using the online redo logs and writes it to disk.


2. Thread Recovery.
This is also performed automatically by Oracle. It occurs when an instance crashes while the database is open. Oracle applies all the redo changes in the thread that occurred since the last time the thread was checkpointed.


3. Media Recovery.
This is required when a data file is restored from backup; the checkpoint count in the data file is not equal to the checkpoint count in the control file.
Now let's explain a little about redo vs. undo.
Redo information is recorded so that all changes that took place can be repeated during recovery. Undo information is recorded so that we can undo changes made by the current transaction that were not committed. The redo logs are used to roll forward the changes made, both committed and uncommitted; then, from the undo segments, the undo information is used to roll back the uncommitted changes.
Media Failure and Recovery in Noarchivelog Mode
In this case, our only option is to restore a backup of the Oracle files. The files we need are all datafiles and control files. We only need to restore the password file or parameter files if they are lost or corrupted.
Media Failure and Recovery in Archivelog Mode
In this case, there are several kinds of recovery we can perform, depending on what has been lost.


The three basic kinds of recovery are:
1. Recover database - here we use the recover database command and the database must be closed and mounted. Oracle will recover all datafiles that are online.


2. Recover tablespace - use the recover tablespace command. The database can be open but the tablespace must be offline.


3. Recover datafile - use the recover datafile command. The database can be  open but the specified datafile must be offline.
Note: We must have all archived logs since the backup we restored from,  or else we will not have a complete recovery.


a) Point in Time recovery:
A typical scenario is that we dropped a table at, say, noon, and want to recover it. We will have to restore the appropriate datafiles and do a point-in-time recovery to a time just before noon.
Note: we will lose any transactions that occurred after noon. After we have recovered until noon, we must open the database with RESETLOGS. This is necessary to reset the log sequence numbers, which protects the database from having redo logs that weren't used applied to it.
The four incomplete recovery scenarios all work the same:
Recover database until time '1999-12-01:12:00:00';
Recover database until cancel; (we type in cancel to stop)
Recover database until change n;
Recover database until cancel using backup controlfile;
Note: When performing an incomplete recovery, the datafiles must be online. Do a select * from v$recover_file to find out if there are any files  which are offline. If we were to perform a recovery on a database which has  tablespaces offline, and they had not been taken offline in a normal state, we  will lose them when we issue the open resetlogs command. This is because the data file needs recovery from a point before the resetlogs option was used.
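
Put together, an until-time recovery might look like this (a sketch; the timestamp is an example, and the datafile restore happens at OS level before the recovery):

```sql
-- Incomplete (point-in-time) recovery sketch
shutdown immediate;
-- restore all datafiles from the backup at OS level, then:
startup mount;
recover database until time '2019-06-28:11:55:00';   -- example timestamp
alter database open resetlogs;                       -- mandatory after incomplete recovery
```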


b) Recovery without control file
If we have lost the current control file, or the current control file is inconsistent with the files we need to recover, we must either recover using a backup control file or create a new control file. We can also recreate the control file based on the current one using the 'SQL> alter database backup controlfile to trace' command, which creates a script we can run to build a new one. The 'recover database using backup controlfile' command must be used whenever the control file in use is not the current one. The database must then be opened with the
resetlogs option.


c) Recovery of missing datafile with rollback segments
The tricky part here is if we are performing online recovery; otherwise we can just use the recover datafile command. If we are performing an online recovery, we will need to create a new undo tablespace to be used. Once the old tablespace has been recovered, it can be dropped after any uncommitted transactions have rolled back.


d) Recovery of missing datafile without undo segments
There are three ways to recover in this scenario, as mentioned above.
1. recover database;
2. recover datafile 'c:\orant\database\usr1orcl.ora';
3. recover tablespace user_data;


e) Recovery with missing online redo logs
Missing online redo logs means that somehow we have lost our redo logs before they had a chance to be archived. This means that crash recovery cannot be performed, so media recovery is required instead. All datafiles will need to be restored and rolled forward until the last available archived log file is applied. This is thus an incomplete recovery, and as such the recover
database command is necessary.
As always, when an incomplete recovery is performed, we must open the database with resetlogs.
Note: the best way to avoid this kind of a loss, is to mirror online log files.


f) Recovery with missing archived redo logs
If archived logs are missing, the only way to recover the database is to restore from the latest backup. We will have lost any committed
transactions that were recorded only in the missing archived redo logs. Again, this is why Oracle strongly suggests mirroring online redo logs and keeping duplicate copies of the archives.


g) Recovery with resetlogs option
The resetlogs option should be the last resort; however, as we have seen above, it may be required due to incomplete recoveries (recovery using a backup control file, or a point-in-time recovery). It is imperative that we back up the database immediately after we have opened it with resetlogs. It is possible to recover through a resetlogs (made easier in Oracle 10g), but it is easier
to restore from the backup taken after the resetlogs.


h) Recovery with corrupted undo segments.
If an undo segment is corrupted and contains uncommitted system data, we may not be able to open the database.
The best alternative in this situation is to recover the corrupt block using the RMAN blockrecover command; next best would be to restore the datafile from backup and do a complete recovery.
If a backup does not exist and the database is able to open (non-system object), the first step is to find out which object is causing the rollback to appear corrupted. If we can determine that, we can drop that object.
So, how do we find out if it's actually a bad object?
1. Make sure that all tablespaces are online and all datafiles are online. This can be checked via the v$recover_file view.


2. Put the following in the init.ora:
event = "10015 trace name context forever, level 10"
This event generates a trace file that reveals information about the transaction Oracle is trying to roll back and, most importantly, the object Oracle is trying to apply the undo to.
Note: In Oracle 9i and above, this information can be found in the alert log.
Stop and start the database.


3. Check the directory specified by the user_dump_dest parameter (in the init.ora, or via the show parameter command) for a trace file generated at startup time.


4. In the trace file, there should be a message similar to: error recovery tx(#,#) object #.
TX(#,#) refers to transaction information.
The object # is the same as the object_id in sys.dba_objects.


5. Use the following query to find out what object Oracle is trying to perform recovery on.
select owner, object_name, object_type, status
from dba_objects where object_id = <object #>;


6. Drop the offending object so the undo can be released. An export, or restoring from a backup, may be necessary to recover the object after the corrupted undo segment is released.
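A hedged sketch of this step; the object and rollback segment names are assumed for illustration, and with automatic undo management (9i and later) the equivalent fix is to switch to a freshly created undo tablespace rather than dropping rollback segments:

```sql
-- Object name taken from the dba_objects query in step 5 (assumed here).
DROP TABLE scott.bad_table;

-- Once the undo is released, the corrupted rollback segment can be removed
-- (dictionary-managed rollback segments only; names are illustrative).
ALTER ROLLBACK SEGMENT rbs01 OFFLINE;
DROP ROLLBACK SEGMENT rbs01;
```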
 

i) Recovery with System Clock change.
We can end up with duplicate timestamps in the datafiles when the system clock changes. This usually occurs when daylight saving time comes into or goes out of effect. In this case, rather than a point-in-time recovery, recover to a specific log sequence or SCN.
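A sketch of such a recovery; the SCN value is assumed for illustration:

```sql
-- Recover to a known SCN rather than a timestamp, which may be
-- ambiguous after a clock change.
RUN {
  SET UNTIL SCN 1744802;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}
```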
 

j) Recovery with missing System tablespace.
The only option is to restore from a backup.
 

k) Media Recovery of offline tablespace
When a tablespace is offline, we cannot recover its datafiles using the recover database command, because recover database only recovers online datafiles. Since the tablespace is offline, its datafiles are considered offline as well, so even if we recover the database and roll forward, the datafiles in this tablespace will not be touched. Instead, we need to perform a recover tablespace command. Alternatively, we could restore the datafiles from a cold backup, mount the database, and select from the v$datafile view to see if any of the datafiles are offline. If they are, bring them online, and then we can perform a recover database command.
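Both alternatives can be sketched as follows; the tablespace name and datafile path are assumptions for illustration:

```sql
-- Option 1: recover the offline tablespace directly.
RECOVER TABLESPACE users;

-- Option 2: after restoring from a cold backup and mounting the database,
-- check for offline datafiles, bring them online, then recover.
SELECT file#, status, name FROM v$datafile WHERE status = 'OFFLINE';
ALTER DATABASE DATAFILE '/u02/oradata/users01.dbf' ONLINE;
RECOVER DATABASE;
```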
 

l) Recovery of Read-Only tablespaces
If we have a current control file, then recovery of read-only tablespaces is no different from recovering read-write files. The issues with read-only tablespaces arise if we have to use a backup control file. If the tablespace is in read-only mode and hasn't changed to read-write since the last backup, then we can perform media recovery using a backup control file by taking the tablespace offline. The reason is that when using a backup control file, we must open the database with resetlogs, and Oracle normally won't let us read files from before the resetlogs. However, there is an exception for read-only tablespaces: we can take those datafiles online after we have opened the database.
When we have tablespaces that switch modes and we don't have a current control file, we should use a backup control file that recognizes the tablespace in read-write mode. If we don't have a backup control file, we can create a new one using the create controlfile command. Basically, the point is that we should take a backup of the control file every time we switch a tablespace's mode.
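A sketch of that habit; the tablespace name and backup path are illustrative:

```sql
ALTER TABLESPACE example READ ONLY;

-- Immediately capture the control file in both binary and trace form.
ALTER DATABASE BACKUP CONTROLFILE TO '/u02/backup/control_after_switch.ctl' REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```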


RAC Backup, Restore and Recovery using RMAN



The following example is for a two-node Oracle RAC cluster.
The logs are being archived to their respective nodes.
We allocate channels to each node to enable RMAN's autolocate feature in a RAC environment.

1. Verify the databases are in archivelog mode and archive destination.
a. NODE 1: thread 1
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/app/oracle/product/11.2.0/dbs/arch
Oldest online log sequence     20
Next log sequence to archive   21
Current log sequence           21
 b. NODE 2: thread 2
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/app/oracle/product/11.2.0/dbs/arch
Oldest online log sequence     8
Next log sequence to archive   9
Current log sequence           9

2. Verify connectivity to the target nodes and catalog if used.
$ setenv TNS_ADMIN $ORACLE_HOME/network/admin
$ sqlplus /nolog
SQL> connect sys/pwd@node1 as sysdba
SQL> connect sys/pwd@node2 as sysdba
SQL> connect rman/rman@rcat

3. Set your testing areas.
Testing HOME for logs:  /u02/home/usupport/rman
Backups HOME Location:  /rman/V112

4. Connect using RMAN to verify and set the controlfile persistent configuration. 
The controlfiles are shared between the instances so configuring the controlfile on node 1 also sets it for all nodes in the RAC cluster.

* Always note the target DBID
    connected to target database: V112 (DBID=228033884)
*  Default Configuration
RMAN> SHOW ALL;
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u02/app/oracle/product/11.2.0/dbs/snapcf_V11201.f'; # default
*Configuring Channels to Use a Specific Node
To configure one RMAN channel for each policy-managed Oracle RAC database instance, use the following syntax:
CONFIGURE CHANNEL DEVICE TYPE disk CONNECT 'SYS/RAC@NODE1';
CONFIGURE CHANNEL DEVICE TYPE disk CONNECT 'SYS/RAC@NODE2';

 5. Make a backup using the new persistent configuration parameters.
* Back up the database with a differential incremental level 0, then the archived logs using the DELETE INPUT option.
BACKUP INCREMENTAL LEVEL 0 FORMAT '/rman/V112/%d_LVL0_%T_%u_s%s_p%p' DATABASE;
BACKUP ARCHIVELOG ALL FORMAT '/rman/V112/%d_AL_%T_%u_s%s_p%p' DELETE INPUT;
* Backup again using differential incremental level 1
     BACKUP INCREMENTAL LEVEL 1 FORMAT '/rman/V112/%d_LVL1_%T_%u_s%s_p%p' DATABASE;

     BACKUP ARCHIVELOG ALL FORMAT '/rman/V112/%d_AL_%T_%u_s%s_p%p' DELETE INPUT;
* To simplify this you can also use PLUS ARCHIVELOG 
BACKUP INCREMENTAL LEVEL 0 FORMAT '/rman/V112/%d_LVL0_%T_%u_s%s_p%p'
DATABASE PLUS ARCHIVELOG FORMAT '/rman/V112/%d_AL_%T_%u_s%s_p%p' DELETE INPUT;
This uses a different algorithm than running backup database and backup archivelog as separate commands. The algorithm for PLUS ARCHIVELOG is:
     1. Archive log current
     2. Backup archived logs
     3. Backup database level 0
     4. Archive log current
     5. Backup any remaining archived log created during backup

6. Backupset maintenance using the configured retention policy
RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP BY DATAFILE;
RMAN> LIST BACKUP OF DATABASE;
RMAN> LIST BACKUP OF ARCHIVELOG ALL;
RMAN> LIST BACKUP OF CONTROLFILE;
The commands above can be enhanced with the UNTIL TIME clause, and the archivelog backups can use NOT BACKED UP n TIMES to avoid accumulating many copies of the same log across backup sets.
Then, continuing with Server-Managed Recovery (SMR), use CHANGE ARCHIVELOG FROM ... UNTIL ... DELETE to remove old logs no longer needed on disk.
To check/delete obsolete backups or archivelogs we use:
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;
         or
RMAN> DELETE NOPROMPT OBSOLETE;
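The archivelog housekeeping mentioned above can also be sketched with the later-release equivalents of CHANGE ARCHIVELOG ... DELETE (the retention window and format string are illustrative):

```sql
-- Back up logs that do not yet have two backup copies.
BACKUP ARCHIVELOG ALL NOT BACKED UP 2 TIMES
  FORMAT '/rman/V112/%d_AL_%T_%u_s%s_p%p';

-- Remove logs older than a week that are already backed up twice to disk.
DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-7'
  BACKED UP 2 TIMES TO DEVICE TYPE disk;
```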
To check the database files:
RMAN> REPORT SCHEMA; 

7. Restore and Recover
Complete Recovery
With the database mounted on node 1 and started NOMOUNT on node 2, connect to the target and catalog using RMAN.
      rman target / catalog rman/rman@rcat
    This script will restore and recover the database completely and open the database in read/write mode.  
run {
         RESTORE DATABASE;
         RECOVER DATABASE;
         ALTER DATABASE OPEN;
}
Incomplete Recovery
If you are using instance registration, the database must be mounted for the instance to register with the listener. This means you must use the current controlfile for restore and recovery, or set up a dedicated listener if one is not already in place. RMAN requires a dedicated server connection and cannot rely on instance registration before the controlfile is mounted. Using the controlfile autobackup feature requires the DBID of the TARGET database; it must be set when the database is not mounted, and only the controlfile and spfile (from 9.2 onward) can be restored this way.
    1. Shut down node 1 and node 2.
    2. Start up node 2 and node 1 in NOMOUNT.
    3. Start RMAN and restore the controlfile from autobackup:
     rman trace reco1.log
RMAN> CONNECT CATALOG rman/rman@rcat
RMAN> SET DBID=228033884;
RMAN> CONNECT TARGET
RMAN>  restore controlfile;
   4. If no catalog is used, you can restore the controlfile from autobackup
     % rman trace recocf.log
RMAN> SET DBID=228033884;
RMAN> CONNECT TARGET /
RMAN> RUN
{
 SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE disk TO '/rman/V112/%F';
       ALLOCATE CHANNEL d1 DEVICE TYPE disk;
       RESTORE CONTROLFILE FROM AUTOBACKUP
       MAXSEQ 5           # start at sequence 5 and count down (optional)
       MAXDAYS 5;         # start at UNTIL TIME and search back 5 days (optional)
       ALTER DATABASE MOUNT;
}
  5. Verify what is available for incomplete recovery.
    We will recover with the highest available redo information. In a RAC database, both threads must be considered to determine the highest available redo. The options are "until time", "until scn", or "until sequence". We will use the log sequence in this case.
   a.  First we need to find the highest sequence of each thread:
SQL> select max(sequence#) from v$archived_log L, v$database D
            where L.resetlogs_change# = D.resetlogs_change# and 
            thread#=1;
       MAX(SEQUENCE#)
       --------------
               25
SQL> select max(sequence#) from v$archived_log L, v$database D
            where L.resetlogs_change# = D.resetlogs_change# and 
            thread#=2;
       MAX(SEQUENCE#)
       --------------
             13
  b.  Next is to find the thread with lowest NEXT_CHANGE# scn.
SQL> select sequence#, thread#, first_change#, next_change#
             from v$archived_log L, v$database D
             where L.resetlogs_change# = D.resetlogs_change# and
             sequence# in (13,25);
      SEQUENCE#    THREAD# FIRST_CHANGE# NEXT_CHANGE#
      ---------- ---------- ------------- ------------
              25          1       1744432      1744802
              13          2       1744429      1744805
SQL> select sequence#, thread#, first_change#, next_change#
           from v$backup_redolog
           where sequence# in (13,25);
      SEQUENCE#    THREAD# FIRST_CHANGE# NEXT_CHANGE#
      ---------- ---------- ------------- ------------
              25          1       1744432      1744802
              13          2       1744429      1744805
In this case, the NEXT_CHANGE# SCN for thread 1 sequence 25 is lower than that for thread 2 sequence 13. In a RAC environment, we use the lower value to ensure we have the required redo from BOTH threads. In other words, we use the lower one (thread 1) to ensure that all SCNs in thread 1 exist within the available sequences for thread 2.
So we will set sequence 26 for thread 1 in the RMAN 'until sequence' recovery, because RMAN stops recovery before applying the indicated sequence. The sequence specified for recovery always needs to be the last sequence to apply plus one, so recovery ends after applying the prior sequence. I.e.:
       SET UNTIL SEQUENCE 26 THREAD 1;
6.  Get the command to add TEMPFILES after opening DB.
Locally managed temporary tablespaces are not restored by the RESTORE command; we need to re-create their tempfiles manually after recovery is complete.
If using a locally managed temporary tablespace, the controlfile trace will contain the syntax to add the tempfile after recovery is complete. The following command generates the create controlfile statement:
               SQL> alter database backup controlfile to trace;
                Example:
      # Commands to add tempfiles to temporary tablespaces.
      # Online tempfiles have complete space information.
      # Other tempfiles may require adjustment.
        ALTER TABLESPACE TEMP ADD TEMPFILE '/dev/db/temp_01.dbf'  SIZE 41943040  REUSE AUTOEXTEND OFF;
      # End of tempfile additions.
      #
NOTE:  In newer versions, the tempfiles are added automatically. 

7. Run the rman script
Since the next_change# of log sequence 13 thread 2 is three changes ahead of thread 1 sequence 25, we use sequence 26 (25+1) to stop recovery. This will restore the datafiles and recover them, applying all of sequence 25 of thread 1 and stopping at sequence 26.
run {
       SET UNTIL SEQUENCE 26 THREAD 1;
       RESTORE DATABASE;
       RECOVER DATABASE;
       ALTER DATABASE OPEN RESETLOGS;
      }
 8. Review and understand the impact of resetlogs on the catalog.
After resetlogs a new incarnation for the database is recorded in the RMAN catalog and database controlfile.  Only one incarnation can be current and any need to restore from a previous incarnation requires you to "reset database to incarnation...". 
For example:
RMAN> LIST INCARNATION OF DATABASE V112;
List of Database Incarnations
DB Key  Inc Key  DB Name  DB ID      CUR  Reset SCN  Reset Time
------- -------- -------- ---------- ---- ---------- ----------
2656    2657     V112     228033884  NO   1          29-MAY-13
2656    3132     V112     228033884  YES  1744806    13-JUN-13

We see that an "open resetlogs" was executed against this database on 13-JUN-2013. 
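If a restore from the earlier incarnation were ever needed, the reset would look like this (the incarnation key 2657 comes from the listing above):

```sql
RMAN> RESET DATABASE TO INCARNATION 2657;
```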
9. RMAN Sample Commands
* With a dedicated listener (not using instance registration), restoring the controlfile:
 run {
 ALLOCATE CHANNEL D1 TYPE DISK CONNECT 'SYS/RAC@NODE1';
 ALLOCATE CHANNEL D2 TYPE DISK CONNECT 'SYS/RAC@NODE2';
  SET UNTIL SEQUENCE 14 THREAD 2;
   RESTORE CONTROLFILE;
  ALTER DATABASE MOUNT;
 RELEASE CHANNEL D1;
 RELEASE CHANNEL D2;
 }
 * Backup Archivelog
 BACKUP ARCHIVELOG ALL NOT BACKED UP 3 TIMES;
 BACKUP ARCHIVELOG UNTIL TIME 'SYSDATE-2' NOT BACKED UP 2 TIMES;
