DBA daily activities


Oracle Database health check scripts

Hi All,

DBAs are often asked to check the health of the database. The health of the database can be checked in various ways. It includes:

SL No  Monitoring Scope                                    Current Status

OS level
 1     Physical memory / load                              Load normal; load averages: 0.35, 0.37, 0.36
 2     OS space threshold (archive, ora_dump, etc.)        Sufficient space available
 3     Top 10 processes consuming memory                   No process using exceptionally high memory
 4     Free volumes available                              Sufficient disk space is available on the mount points
 5     Filesystem space                                    Under normal threshold

Database level
 6     Check extents / proactive space addition            Space is being regularly added
 7     Check alert log for ORA- and warning messages
 8     Major wait events (latch/enqueue/lib cache pin)     No major wait events
 9     Max sessions
10     Long running jobs                                   6 inactive sessions running for more than 8 hrs
11     Invalid objects                                     185
12     Analyze jobs (once a week)                          Done on 20-JAN-2008 at 06:00:06
13     Temp usage / rollback segment usage                 Normal
14     Nologging indexes
15     Hot backup / cold backup                            Gone fine
16     Redo generation                                     Normal
17     PQ processes                                        Normal
18     I/O generation                                      Under normal threshold
19     2PC pending transactions                            0

DR / backup
 1     Sync arch                                           Normal
 2     Purge arch                                          Normal
 3     Recovery status                                     Normal

20) DATABASE HEALTH CHECK SCRIPT: showing locks and archive generation details

In detail DATABASE health check:

OPERATING SYSTEM:

1) Physical memory / load:

1) free: the free command displays the amount of total, free and used physical memory (RAM) in the system, as well as information on shared memory, buffers, cached memory and swap space used by the Linux kernel.
Usage:
$ free -m

2) vmstat: vmstat reports virtual memory statistics, which include information about processes, swap, free, buffer and cache memory, paging space, disk I/O activity, traps, interrupts, context switches and CPU activity.
Usage:
$ vmstat 5

3) top: the top command displays a dynamic, real-time view of the running tasks managed by the kernel in a Linux system. The memory usage stats from top include real-time total, used and free physical memory and swap memory, with their buffer and cached memory sizes respectively.
Usage:
$ top

4) ps: the ps command reports a snapshot of the currently active processes. ps shows the percentage of memory used by each process or task running in the system, so the top memory-hogging processes can be identified.
Usage:
$ ps aux

2) OS space threshold (archive, ora_dump etc.): Check that OS space is available in all filesystems, especially the locations holding the archive logs and the Oracle database files. We can use the below OS commands:

$ df -h
$ du -csh *

3) Top 10 processes consuming memory: We can display the top 10 memory-consuming processes as follows:

$ ps aux | head -1; ps aux | sort -nrk 4 | head -10

We can also use the top command and press M, which orders the process list by memory usage.

4) Free volumes available:

We have to make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.

$ df -h

5) Filesystem space:

Check on the OS side that sufficient space is available at all mount points and that filesystem usage stays under the normal threshold.
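As a minimal sketch of such a check (the 90% threshold is an assumption; adjust it to your own standard), the following one-liner flags any mount point above the threshold:

$ df -hP | awk 'NR>1 && $5+0 > 90 {print $6" is "$5" full"}'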

DATABASE:

6) Check extents / proactive space addition: Check each of the data, index and temporary tablespaces for extent and block allocation details.

SET LINES 1000
SELECT SEGMENT_NAME, TABLESPACE_NAME, EXTENTS, BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME, TABLESPACE_NAME, EXTENTS, BLOCKS
FROM DBA_SEGMENTS
WHERE TABLESPACE_NAME = 'STAR01D';

7) Check alert log for ORA- and warning messages:

Checking the alert log file regularly is a vital task. In the alert log file we have to look for the following things:

1) Look for any Oracle-related errors. Open the alert log file with the less or more command and search for "ORA-". This will give you the error details and the time of occurrence.

2) Look for database-level or tablespace-level changes. Monitor the alert log file and search it for each day's activities in the database, such as a bounce of the database, an increase in the size of the tablespaces, or a change in database parameters. In an 11g database we can also look for TNS errors in the alert log file.
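As a quick sketch of the search described above (the alert log path below is hypothetical; substitute the path under your own diagnostic_dest / background_dump_dest):

$ ALERT_LOG=/u01/app/oracle/diag/rdbms/orcl/ORCL/trace/alert_ORCL.log   # hypothetical path
$ grep -n 'ORA-' $ALERT_LOG | tail -20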

8) Major wait events (latch/enqueue/Lib cache pin):

We can check the wait event details with the help of the below queries:

SELECT s.saddr, s.sid, s.serial#, s.audsid, s.paddr, s.user#, s.username,
       s.command, s.ownerid, s.taddr, s.lockwait, s.status, s.server,
       s.schema#, s.schemaname, s.osuser, s.process, s.machine, s.terminal,
       UPPER(s.program) program, s.type, s.sql_address, s.sql_hash_value,
       s.sql_id, s.sql_child_number, s.sql_exec_start, s.sql_exec_id,
       s.prev_sql_addr, s.prev_hash_value, s.prev_sql_id,
       s.prev_child_number, s.prev_exec_start, s.prev_exec_id,
       s.plsql_entry_object_id, s.plsql_entry_subprogram_id,
       s.plsql_object_id, s.plsql_subprogram_id, s.module, s.module_hash,
       s.action, s.action_hash, s.client_info, s.fixed_table_sequence,
       s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#,
       s.row_wait_row#, s.logon_time, s.last_call_et, s.pdml_enabled,
       s.failover_type, s.failover_method, s.failed_over,
       s.resource_consumer_group, s.pdml_status, s.pddl_status, s.pq_status,
       s.current_queue_duration, s.client_identifier,
       s.blocking_session_status, s.blocking_instance, s.blocking_session,
       s.seq#, s.event#, s.event, s.p1text, s.p1, s.p1raw, s.p2text, s.p2,
       s.p2raw, s.p3text, s.p3, s.p3raw, s.wait_class_id, s.wait_class#,
       s.wait_class, s.wait_time, s.seconds_in_wait, s.state,
       s.wait_time_micro, s.time_remaining_micro,
       s.time_since_last_wait_micro, s.service_name, s.sql_trace,
       s.sql_trace_waits, s.sql_trace_binds, s.sql_trace_plan_stats,
       s.session_edition_id, s.creator_addr, s.creator_serial#
FROM v$session s
WHERE s.username IS NOT NULL
AND NVL(s.osuser, 'x') <> 'SYSTEM'
AND s.type <> 'BACKGROUND'
AND s.status = 'ACTIVE'
ORDER BY "PROGRAM";

The following query provides clues about whether Oracle has been waiting for library cache activities:

SELECT sid, event, p1raw, seconds_in_wait, wait_time
FROM v$session_wait
WHERE event = 'library cache pin'
AND state = 'WAITING';

The below query gives details of user sessions' wait time and state:

SELECT NVL(s.username, '(oracle)') AS username, s.sid, s.serial#, sw.event,
       sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid
ORDER BY sw.seconds_in_wait DESC;

9) Max sessions: There should not be more than 6 inactive sessions running for more than 8 hours in a database, in order to minimize the consumption of CPU and I/O resources.
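A minimal sketch for spotting such sessions (using last_call_et as the idle-time measure and an 8-hour cutoff is an assumption about how this check is implemented):

SELECT sid, serial#, username, status, machine,
       ROUND(last_call_et/3600, 1) hours_inactive
FROM v$session
WHERE status = 'INACTIVE'
AND type = 'USER'
AND last_call_et > 8*3600
ORDER BY last_call_et DESC;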

a) Users' and sessions' CPU consumption can be obtained by the below query:

Set lines 1000
select ss.username, se.SID, VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID
and ss.status = 'ACTIVE'
and ss.username is not null
order by VALUE desc;

b) Users' and sessions' CPU and I/O consumption can be obtained by the below query:

-- Shows day-wise, user-wise, server-process-id-wise CPU and I/O consumption
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,
       to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0),1,bg.description, s.program) program,
       ss.value/100 CPU,
       physical_reads disk_io,
       (trunc(sysdate,'J')-trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,
             (trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p, V$SESSION s, V$SESSTAT ss, V$SESS_IO si, V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

10) Long running Jobs:

We can find out long running jobs with the help of the below query:

col username for a20
col message for a50
col remaining for 9999
select username, to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
       time_remaining remaining, message
from v$session_longops
where time_remaining != 0
order by time_remaining desc;

11) Invalid objects:

We can check the invalid objects with the help of the below query:

select owner||' '||object_name||' '||created||' '||status from dba_objects where status='INVALID';
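If invalid objects are found, a common follow-up (shown here only as a sketch, not as part of the original checklist) is to recompile them, either per schema with DBMS_UTILITY or database-wide with the utlrp.sql script shipped with Oracle:

-- recompile invalid objects in one schema (the schema name is an example)
EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'SCOTT', compile_all => FALSE);

-- or, as SYSDBA, recompile all invalid objects in the database
SQL> @?/rdbms/admin/utlrp.sql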

12) Analyze jobs (once a week):

As a golden rule, we need to analyze the jobs that are scheduled to run once a week. The below steps can be considered for analyzing jobs.

Analyzing a running job

The status of a job or a task changes several times during its life cycle. A job can have the following as its status:

Scheduled: The job is created and will run at the specified time.
Running: The job is being executed and is in progress.
Initialization Error: The job or step could not be run successfully. If a step in a job fails initialization, the job status is Initialization Error.
Failed: The job was executed but failed.
Succeeded: The job was executed completely.
Stopped: The user canceled the job.
Stop Pending: The user has stopped the job. The already running steps are completing execution.
Suspended: This indicates that the execution of the job is deferred.
Inactive: This status indicates that the target has been deleted.
Reassigned: The owner of the job has changed.
Skipped: The job was not executed at the specified time and has been omitted.

The running jobs can be found with the help of the below query:

select sid, job,instance from dba_jobs_running;

We can find out the failed and broken jobs' details with the help of the below query:

select job||' '||schema_user||' '||Broken||' '||failures||' '||what||' '||last_date||' '||last_sec from dba_jobs;
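To cross-check that the weekly analyze actually ran, a sketch like the following (the 7-day threshold and the schema filter are assumptions) lists tables whose optimizer statistics are missing or older than a week:

SELECT owner, table_name, last_analyzed
FROM dba_tables
WHERE owner NOT IN ('SYS','SYSTEM')
AND (last_analyzed IS NULL OR last_analyzed < SYSDATE - 7)
ORDER BY last_analyzed;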

13) Temp usage / Rollback segment/PGA usage:

We can get information on temporary tablespace usage with the help of the below query:

Set lines 1000
SELECT b.tablespace,
       ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
       a.sid||','||a.serial# SID_SERIAL,
       a.username,
       a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

We can get information on undo tablespace usage with the help of the below query:

set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
       NVL(s.username, 'None') orauser,
       s.program,
       r.name undoseg,
       t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';

We can get the PGA usage details with the help of the below query:

select st.sid "SID", sn.name "TYPE", ceil(st.value / 1024 / 1024 / 1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and sid in (select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.sid, st.value desc;

Enter value for user: STARTXNAPP

14) Validating the backup:

We have to verify that the hot backup / cold backup (or any physical or logical backup) of all the production and non-production databases went fine. Make sure you have valid backups of all the databases. Check the backup locations to make sure the backup completed on time with the required backup data.

14) Hot backup / cold backup: Validate the backup of the database. It should complete on time with the data required for restore and recovery, if that becomes necessary.
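A minimal RMAN sketch for confirming the latest backups and that they are restorable (RESTORE ... VALIDATE only reads the backup pieces; it does not overwrite the datafiles):

RMAN> LIST BACKUP SUMMARY;
RMAN> CROSSCHECK BACKUP;
RMAN> RESTORE DATABASE VALIDATE;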

15) Redo generation / archive log generation details: We should make sure there are not frequent log switches happening in a database. If there are frequent log switches, more archive logs are generated, which may decrease the performance of the database; however, in a production database log switches could vary between 5 and 20 depending on the server configuration.

We can check the log switch details with the help of the below query:

Redo log switches, date-wise and hour-wise:
-------------------------------------------
set lines 120;
set pages 999;
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/

Archive log generation is directly proportional to the number of log switches happening in a database. If there are frequent log switches, more archive logs are generated, which can affect the performance of the database.

We can use the below queries for archive logs generation details:

a) Archive logs by date:

set lines 1000
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/

b) Archive log generation details, day-wise:

select to_char(COMPLETION_TIME,'DD-MON-YYYY'), count(*)
from v$archived_log
group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

c) Archive log count of the day:

select count(*)
from v$archived_log
where trunc(completion_time) = trunc(sysdate);

Count of archive logs generated today on an hourly basis:
---------------------------------------------------------
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR')='16-AUG-10'
group by to_char(first_time,'DD-MON-RR')
order by 1
/

16) I/O generation: We can find out CPU and I/O generation details for all the users in the database with the help of the below query:

-- Show I/O per session and CPU in seconds (sessionIOS)
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,
       to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0),1,bg.description, s.program) program,
       ss.value/100 CPU,
       physical_reads disk_io,
       (trunc(sysdate,'J')-trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,
             (trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p, V$SESSION s, V$SESSTAT ss, V$SESS_IO si, V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

To know what the session is doing and what kind of SQL it is using:

-- what kind of SQL a session is using
set lines 9999
set pages 9999

select s.sid, q.sql_text
from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid
order by piece;

eg: sid=1853

17) Sync arch: In a Data Guard environment we have to check that the primary is in sync with the standby database. This we can check as follows.

The V$MANAGED_STANDBY view on the standby database site shows you the activities performed by both the redo transport and Redo Apply processes in a Data Guard environment:

SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;

In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available. The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.

On a physical standby database:

To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:

SQL> SELECT * FROM V$ARCHIVE_GAP;

If it displays no rows, then the primary database is in sync with the standby database. If it displays any rows, then we have to apply the archive logs manually.

After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1). Eg:

SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;

Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';

After you register these log files on the physical standby database, you can restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
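Restarting Redo Apply as mentioned above is typically done with the managed recovery command, run on the physical standby (a sketch):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;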

On a logical standby database:

To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:

SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;

   THREAD#  SEQUENCE#  FILE_NAME
----------  ---------  -----------------------------------------------
         1          6  /disk1/oracle/dbs/log-1292880008_6.arc
         1         10  /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';

After you register these log files on the logical standby database, you can restart SQL Apply.
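Restarting SQL Apply can be done on the logical standby as sketched below:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;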

The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

Monitoring log file archival information:

Step 1: Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:

SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
  2> WHERE STATUS='CURRENT';

Step 2: Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:

SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;

Step 3: Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:

SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
  2> FROM V$ARCHIVE_DEST_STATUS
  3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION         STATUS  ARCHIVED_THREAD#  ARCHIVED_SEQ#
------------------  ------  ----------------  -------------
/private1/prmy/lad  VALID                  1            947
standby1            VALID                  1            947

The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.

Step 4: Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.

Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
  2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
  3> LOCAL WHERE
  4> LOCAL.SEQUENCE# NOT IN
  5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
  6> THREAD# = LOCAL.THREAD#);

 THREAD#  SEQUENCE#
--------  ---------
       1         12
       1         13
       1         14

18) Purge arch: We have to make sure the archive log files are purged safely or moved to a tape drive or any other location, in order to make space for new archive log files in the archive log destination.
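As a sketch (the 7-day retention is an assumption; align it with your backup policy and make sure the logs are already backed up), archive logs can be purged through RMAN so that the control file and recovery catalog stay consistent:

RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';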

19) Recovery status: In order to do a recovery, make sure you have the latest archive logs, so that you can restore and recover if required.
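Two simple checks that support this (standard views; the exact queries are a suggestion rather than part of the original checklist): V$RECOVER_FILE lists any datafiles currently needing recovery, and V$ARCHIVED_LOG shows the latest archived sequence per thread:

SELECT * FROM V$RECOVER_FILE;

SELECT thread#, MAX(sequence#) last_archived_seq
FROM v$archived_log
GROUP BY thread#;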

20) MY DATABASE HEALTH CHECK SCRIPT:

/* SCRIPT FOR MONITORING AND CHECKING HEALTH OF DATABASE - USEFUL FOR PRODUCTION DATABASES */

-- SHOWS RUNNING JOBS
select 'RUNNING JOBS', sid, job, instance from dba_jobs_running;

set lines 1000

-- SHOWS ARCHIVE LOG GENERATION DETAILS, HOURLY AND DATE-WISE
select 'ARCHIVE LOG REPORT', to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/

-- WHICH SESSIONS ARE GETTING BLOCKED
select 'SESSIONS BLOCKED', process, sid, blocking_session
from v$session
where blocking_session is not null;

-- WHICH SESSION IS BLOCKING WHICH SESSION
set lines 9999
set pages 9999
select s1.username || '@' || s1.machine || ' ( SID=' || s1.sid || ' ) is blocking '
       || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;

-- SHOWS BLOCK CHANGES DETAILS AND PHYSICAL READS DETAILS
select a.sid, b.username, block_gets, consistent_gets, physical_reads, block_changes
from V$SESS_IO a, V$SESSION b
where a.sid=b.sid
and block_changes > 10000
order by block_changes desc;

-- SHOW I/O PER SESSION / CPU IN SECONDS (sessionIOS.sql)
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,
       to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,
       s.username,
       decode(nvl(p.background,0),1,bg.description, s.program) program,
       ss.value/100 CPU,
       physical_reads disk_io,
       (trunc(sysdate,'J')-trunc(logon_time,'J')) days,
       round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,
             (trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p, V$SESSION s, V$SESSTAT ss, V$SESS_IO si, V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

-- SCRIPT TO IDENTIFY LONG RUNNING STATEMENTS
rem LONGOPS.SQL
rem Long Running Statements
rem Helmut Pfau, Oracle Deutschland GmbH
set linesize 120
col opname format a20
col target format a15
col units format a10
col time_remaining format 99990 heading Remaining[s]
col bps format 9990.99 heading [Units/s]
col fertig format 90.99 heading "complete[%]"
select sid,
       opname,
       target,
       sofar,
       totalwork,
       units,
       (totalwork-sofar)/time_remaining bps,
       time_remaining,
       sofar/totalwork*100 fertig
from v$session_longops
where time_remaining > 0
/

-- ACTIVE SESSIONS IN DATABASE
select 'ACTIVE SESSION', sid, serial#, machine, osuser, username, status
from v$session
where username != 'NULL' and status='ACTIVE';

-- WHAT SQL A SESSION IS USING
set lines 9999
set pages 9999
select s.sid, q.sql_text
from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid
order by piece;

eg: SID=1844

I would like to add one more script which gives details of the database size (used, occupied and available), tablespace usage details, and the hit ratios of various SGA components, which can be very helpful to monitor the performance of the databases.

Database_monitor.sql:

ttitle "1. :============== Tablespace Usage Information ==================:" skip 2set linesize 140col Total format 99999.99 heading "Total space(MB)"col Used format 99999.99 heading "Used space(MB)"col Free format 99999.99 heading "Free space(MB)"break on reportcompute sum of Total space(MB) on reportcompute sum of Used space(MB) on reportcompute sum of Free space(MB) on reportselect a.tablespace_name, round(a.bytes/1024/1024,2) Total,round( nvl( b.bytes,0)/1024/1024,2) Used,round(nvl(c.bytes, 0)/1024/1024,2) Free ,round(nvl(b.bytes,0)*100/nvl(a.bytes,0),2) "% Used"from sys.sm$ts_avail a, sys.sm$ts_used b, sys.sm$ts_free cwhere a.tablespace_name=b.tablespace_name(+)and b.tablespace_name=c.tablespace_name(+);

ttitle "2. :============== Hit Ratio Information ==================:" skip 2set linesize 80clear columnsclear breaksset pagesize 60 heading off termout off echo off verify offREMcol val1 new_val lib noprintselect 100*(1-(SUM(Reloads)/SUM(Pins))) val1from V$LIBRARYCACHE;

ttitle off
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2
from V$ROWCACHE;

ttitle off
col val3 new_val phys_reads noprint
select Value val3
from V$SYSSTAT
where Name = 'physical reads';

ttitle off
col val4 new_val log1_reads noprint
select Value val4
from V$SYSSTAT
where Name = 'db block gets';

ttitle off
col val5 new_val log2_reads noprint
select Value val5
from V$SYSSTAT
where Name = 'consistent gets';

ttitle off
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6
from DUAL;

ttitle off
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
       SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
ttitle off
set termout on
set heading off
ttitle -
center 'SGA Cache Hit Ratios' skip 2

select 'Data Block Buffer Hit Ratio : '||&chr db_hit_ratio,
       ' Shared SQL Pool ',
       ' Dictionary Hit Ratio : '||&dict dict_hit,
       ' Shared SQL Buffers (Library Cache) ',
       ' Cache Hit Ratio : '||&lib lib_hit,
       ' Avg. Users/Stmt : '||&avg_users_cursor||' ',
       ' Avg. Executes/Stmt : '||&avg_stmts_exe||' '
from DUAL;

ttitle "3. :============== Sort Information ==================:" skip 2

select A.Value Disk_Sorts,
       B.Value Memory_Sorts,
       ROUND(100*A.Value/DECODE((A.Value+B.Value),0,1,(A.Value+B.Value)),2) Pct_Disk_Sorts
from V$SYSSTAT A, V$SYSSTAT B
where A.Name = 'sorts (disk)'
and B.Name = 'sorts (memory)';

ttitle "4. :============== Database Size Information ==================:" skip 2

select sum(bytes/1024/1024/1024) Avail from sm$ts_avail
union all
select sum(bytes/1024/1024/1024) Used from sm$ts_used
union all
select sum(bytes/1024/1024/1024) Free from sm$ts_free;