29 October 2015

Tablespace info 2

Here is another script for the same task: reporting data about the tablespaces.

REM ***************************************************************************
REM Category: Admin
REM Purpose : To report space usage information for all the tablespaces in the database.
REM Versions: 9i, 10g, 11g
REM
REM Additional info:
REM  Explanation of Columns in Result Set:
REM  tablespace_name:    The name of the tablespace
REM  total:              Sum of data file bytes.
REM  used:               Space currently used.
REM  free:               Free space available.
REM  pct_used:           Percentage of space used.
REM  warning             Highlights a warning when usage is above 90%
REM ***************************************************************************

SET linesize 300 pagesize 1000 feedback off echo off TRIMSPOOL ON
SET tab off

column tablespace_name format a30 justify c heading 'TABLESPACE_NAME'
column total format 9G999G999 justify c heading 'TOTAL(MB)'
column used format 9G999G999 justify c heading 'USED(MB)'
column free format 9G999G999 justify c heading 'FREE(MB)'
column pct_used format 990.99 justify c heading 'PCT_USED'
column warning format a10 justify c heading 'WARNING'
compute sum of total on report
compute sum of used on report
compute sum of free on report
break on report

SELECT rpad(a.tablespace_name,30,'.') tablespace_name,
       a.total/1024/1024 total,
       (a.total - nvl(b.free,0))/1024/1024 used,
       nvl(b.free, 0)/1024/1024 free,
       ((a.total - nvl(b.free,0)) / a.total) * 100 pct_used,
       decode(sign(((1-nvl(b.free,0)/a.total)*100)-90),0,' ',1,'<*******',' ') warning
FROM
      (SELECT tablespace_name
             ,SUM(bytes) total
       FROM dba_data_files
       GROUP BY tablespace_name) a
     ,(SELECT tablespace_name
             ,SUM(bytes) free
       FROM dba_free_space
       GROUP BY tablespace_name) b
WHERE a.tablespace_name = b.tablespace_name(+)
ORDER BY a.tablespace_name;



Tablespace info

Hello friends,

Came across this script to calculate tablespace usage and display it in a neat format.

REM ***************************************************************************
REM Script  : tablespaceinfo.sql
REM Usage   : @tablespaceinfo (no parameters)
REM Purpose : To report space usage information for all the tablespaces in the database.
REM Versions: 11g
REM
REM Additional info:
REM  Explanation of Columns in Result Set:
REM  Tablespace:         The name of the tablespace
REM  Megs Max Size:      Maximum size the tablespace can occupy on disk.
REM  Megs Allocated:     Space currently allocated on disk.
REM  Megs Used:          Space currently being used within the Megs Allocated.
REM  Megs Free Till Max: Free space until the tablespace extends
REM                        + free space in what has not yet been allocated.
REM  Megs Free Till Ext: Free space until the tablespace extends.
REM  %Used:              Percentage of used space based on the Megs Max Size value.
REM  %Free:              Percentage of free space based on the Megs Max Size value.
REM ***************************************************************************

REM set parameters
SET linesize 300 pagesize 1000 feedback off echo off TRIMSPOOL ON
SET tab off
SET underline =

REM ttitle left -
REM "Tablespace Usage Report" skip 2

COLUMN tablespace_name     heading "Tablespace Name"      format a30 truncate justify c
COLUMN total_maxspace_mb   heading "Megs|Max Size"        format 9G999G999 justify c
COLUMN total_allocspace_mb heading "Megs|Allocated"       format 9G999G999 justify c
COLUMN used_space_mb       heading "Megs|Used"            format 9G999G999 justify c
COLUMN free_space_ext_mb   heading "Megs Free|Current"    format 9G999G999 justify c
COLUMN free_space_mb       heading "Megs Free|Till Max"   format 9G999G999 justify c
COLUMN pct_used            heading "% Used|Till MAX"      format 999 justify c
COLUMN pct_free            heading "%|Free"               format 999 justify c
COLUMN warning             heading "WARNING"              format a12 justify c

break on report
COMPUTE SUM label "Total Size:" of total_maxspace_mb total_allocspace_mb used_space_mb -
    free_space_ext_mb free_space_mb on report

select /*+ALL_ROWS */
       rpad(alloc.tablespace_name,30,'.') tablespace_name,
       alloc.total_maxspace_mb,
       alloc.total_allocspace_mb,
       (alloc.total_allocspace_mb - free.free_space_mb) used_space_mb,
       free.free_space_mb free_space_ext_mb,
       free.free_space_mb+(alloc.total_maxspace_mb-alloc.total_allocspace_mb) free_space_mb,
       ((alloc.total_allocspace_mb - free.free_space_mb)/alloc.total_maxspace_mb)*100 pct_used,
     -- ((free.free_space_mb+(alloc.total_maxspace_mb-alloc.total_allocspace_mb))/alloc.total_maxspace_mb)*100 pct_free
     CASE WHEN ((alloc.total_allocspace_mb - free.free_space_mb)/alloc.total_maxspace_mb)*100 > 90
          THEN '<*******'
          WHEN free.free_space_mb > 8192
                AND ((alloc.total_allocspace_mb - free.free_space_mb)/alloc.total_maxspace_mb)*100 < 50
          THEN 'SHRINK THIS'
          ELSE NULL END warning
 FROM (SELECT tablespace_name,
              ROUND(SUM(CASE WHEN maxbytes = 0 THEN bytes ELSE maxbytes END)/1048576) total_maxspace_mb,
              ROUND(SUM(bytes)/1048576) total_allocspace_mb
         FROM dba_data_files
    --        WHERE file_id NOT IN (SELECT FILE# FROM v$recover_file)
        GROUP BY tablespace_name) alloc,
      (SELECT tablespace_name,
              SUM(bytes)/1048576 free_space_mb
         FROM dba_free_space
    --       WHERE file_id NOT IN (SELECT FILE# FROM v$recover_file)
        GROUP BY tablespace_name) free
WHERE alloc.tablespace_name = free.tablespace_name (+)
ORDER BY pct_used DESC;
PROMPT

CLEAR COLUMNS
CLEAR BREAKS
CLEAR COMPUTES
TTITLE OFF




Typical RMAN backup

Hello friends,

Configured an RMAN backup on one of the servers today. We already had a catalog in place in this project; if I get an opportunity to create/configure a catalog, I will post about it here.

There was nothing complicated on this server: we had a backup mount provided for the database by the storage team. The team responsible for moving the backups to TAPE had to be informed, or we could raise a schedule ticket for doing so.

I had a shell script and a parameter file to configure the backup.

The shell script is below; save it as rman_bkp.sh (or any name of your convenience):

ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_HOME
ORACLE_SID=savprdpr
export ORACLE_SID
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib
export LD_LIBRARY_PATH
/u01/app/oracle/product/11.2.0/dbhome_1/bin/rman target system/********* catalog RMANCAT/**********@grid12cp @/dbprdbkp/scripts/rman_incr_daily_bkp_savprdpr.rcv > /dbprdbkp/rmanbkp/rman_incr_daily_bkp_savprdpr.log
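After the script runs, it helps to scan the RMAN log for errors before assuming the backup is good. A minimal sketch; the throwaway demo file under /tmp is made up for illustration, the real path would be the log file from the script above:

```shell
# count ORA-/RMAN- error lines in an RMAN log
check_rman_log() {
  grep -cE 'ORA-[0-9]+|RMAN-[0-9]+' "$1"
}

# demo against a throwaway log; the real path would be
# /dbprdbkp/rmanbkp/rman_incr_daily_bkp_savprdpr.log
printf 'Starting backup\nRMAN-06059: expected archived log not found\n' > /tmp/demo_rman.log
check_rman_log /tmp/demo_rman.log   # prints 1
```

A count of 0 means no ORA- or RMAN- lines were found; anything else warrants reading the log.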


The parameter file (*.rcv) for the full backup:

run
{
allocate channel c1 device type disk format '/dbprdbkp/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
allocate channel c2 device type disk format '/dbprdbkp/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
allocate channel c3 device type disk format '/dbprdbkp01/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
allocate channel c4 device type disk format '/dbprdbkp01/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
crosscheck backup;
crosscheck archivelog all;
report obsolete;
delete noprompt obsolete;
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 tag='FULL_BACKUP_WEEKLY' DATABASE INCLUDE CURRENT CONTROLFILE FILESPERSET 1;
BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL;
delete noprompt archivelog all backed up 1 times to disk;
copy current controlfile to '/dbprdbkp/rmanbkp/control_%d_%T_%U.ctl';
backup current controlfile for standby format '/dbprdbkp/rmanbkp/standbycontrol_%d_%T_%U.ctl';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}



The parameter file (*.rcv) for the daily incremental backup:

run
{
allocate channel c1 device type disk format '/dbprdbkp/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
allocate channel c2 device type disk format '/dbprdbkp/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
allocate channel c3 device type disk format '/dbprdbkp01/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
allocate channel c4 device type disk format '/dbprdbkp01/rmanbkp/Backup_%d_DB_%u_%s_%p_%T';
crosscheck backup;
crosscheck archivelog all;
report obsolete;
delete noprompt obsolete;
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 1 tag='INCR_BACKUP_DAILY' DATABASE INCLUDE CURRENT CONTROLFILE FILESPERSET 1;
BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL;
delete noprompt archivelog all backed up 1 times to disk;
copy current controlfile to '/dbprdbkp/rmanbkp/control_%d_%T_%U.ctl';
backup current controlfile for standby format '/dbprdbkp/rmanbkp/standbycontrol_%d_%T_%U.ctl';
release channel c1;
release channel c2;
release channel c3;
release channel c4;
}


The full backup runs on Sundays, and the incremental backup runs daily.
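If cron is used for the scheduling, the entries could look like the sketch below; the 02:00 start times and the rman_full_bkp.sh wrapper name are illustrative assumptions, with rman_bkp.sh being the script above:

```
# illustrative crontab entries:
# level 0 (full) backup on Sunday, level 1 (incremental) Monday-Saturday
0 2 * * 0   /dbprdbkp/scripts/rman_full_bkp.sh
0 2 * * 1-6 /dbprdbkp/scripts/rman_bkp.sh
```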

28 October 2015

Database Capacity Planning

Hello friends,
One of the tasks a DBA has to do, though not always, is database capacity planning: the application team might come around and ask for a capacity plan for the storage to be added Year on Year (YoY). I found this script on some site (I don't remember which, but credit goes to the author) and used it recently to provide capacity planning figures to the application/storage team.

select
  to_char(CREATION_TIME,'RRRR') year, to_char(CREATION_TIME,'MM') month,
  sum(bytes/1024/1024/1024) GigaBytes
from v$datafile
group by
  to_char(CREATION_TIME,'RRRR'), to_char(CREATION_TIME,'MM')
order by 1, 2;

Output below:

YEAR  MO   GIGABYTES
----  --  ----------
2010  09  21.0048828
2013  12   82.515625
2014  03          42
2014  04          12

Please note, the data files created in the year 2010 belong to the system, sysaux, undo and users tablespaces, but the database creation date is as below.

Now comes the real number crunching.

-- Compute how many months have passed since the database was created using the below query:

SQL> select created from v$database;

CREATED
---------
28-DEC-13


SQL> select ceil(months_between(sysdate, created)) tot_mon from v$database;

   TOT_MON
----------
        22
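The 22 months reported above can be sanity-checked by hand: December 2013 to October 2015 is (2015 - 2013) * 12 + (10 - 12) months. A quick check of the arithmetic:

```shell
# ceil(months_between(sysdate, 28-DEC-13)) as of October 2015:
# (2015 - 2013) * 12 + (10 - 12) = 22
months=$(awk 'BEGIN { print (2015 - 2013) * 12 + (10 - 12) }')
echo "$months"   # prints 22
```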


-- Compute the maximum number of times a file created in 'autoextend' mode has grown using the below query:

select max(round((d.bytes - d.create_bytes) / f.inc / d.block_size)) maxinc
from sys.file$ f, v$datafile d where f.inc > 0 and f.file# = d.file# and d.bytes > d.create_bytes;

    MAXINC
----------
      1229


Run the below query and pass the input from the above queries:
Enter value for max_inc: 1229
Enter value for range: 22


col GB format 9999990.00
col volume format A80
with extended_files as
            (select file#,
                    nvl(lag(file_size, 1) over (partition by file#
                                                order by file_size), 0) prior_size,
                    file_size,
                    block_size
             from (select f.file#,
                          f.create_blocks + x.rn * f.inc file_size,
                          f.block_size     
                   from (select f.file#,
                                d.create_bytes / d.block_size create_blocks,
                                f.inc,
                                d.bytes / d.block_size blocks,
                                d.block_size
                         from sys.file$ f,
                              v$datafile d    
                         where f.inc > 0
                           and f.file# = d.file#
                           and d.bytes > d.create_bytes
                           and rownum > 0) f,
                        (select rownum - 1 rn
                         from dual
                         connect by level <= &max_inc + 1) x
                   where (f.create_blocks + x.rn * f.inc) <= f.blocks))
select "MONTH",
       round(cumul/1024, 2) GB,
       -- Draw a histogram
       rpad('=', round(60 * cumul / current_M), '=') volume
from 
(select to_char(cal.mon, 'MON-YYYY') "MONTH",
             sum(nvl(evt.M, 0)) over (order by cal.mon range unbounded preceding) cumul,
             tot.curr_M current_M,
             cal.mon
      from -- current database size (data size)
           (select round(sum(bytes)/1024/1024) curr_M
            from v$datafile) tot,
           -- all the months since the database was created
           (select add_months(trunc(sysdate, 'MONTH'), -rn) mon
            from (select rownum - 1 rn
                  from dual
                  connect by level <= &range)) cal,
           -- all the months when the size of the database changed
           (select size_date,
                   round(sum(bytes)/1024/1024) M
            from (-- files in autoextend mode
                  select file#, max(bytes) bytes, size_date
                  from (select file#, bytes, trunc(min(ctime), 'MONTH') size_date
                        -- Get the oldest creation date of tables or indexes
                        -- that are located in extensions.
                        -- Other segment types are ignored.
                        from (select s.file#,
                                     f.file_size * f.block_size bytes,
                                     o.ctime
                              from sys.seg$ s,
                                   extended_files f,
                                   sys.tab$ t,
                                   sys.obj$ o
                              where s.file# = f.file#
                                and s.type# = 5
                                and s.block# between f.prior_size and f.file_size
                                and s.file# = t.file#
                                and s.block# = t.block#
                                and t.obj# = o.obj#
                              union all
                              select s.file#,
                                     f.file_size * f.block_size bytes,
                                     o.ctime
                              from sys.seg$ s,
                                   extended_files f,
                                   sys.ind$ i,
                                   sys.obj$ o
                              where s.file# = f.file#
                                and s.type# = 6
                                and s.block# between f.prior_size and f.file_size
                                and s.file# = i.file#
                                and s.block# = i.block#
                                and i.obj# = o.obj#)
                        group by file#, bytes)
                  group by file#, size_date
                  union all
                  -- files that are not in autoextend mode
                  select d.file#,
                         d.create_bytes bytes,
                         trunc(d.creation_time, 'MONTH') size_date
                  from v$datafile d,
                       sys.file$ f
                  where nvl(f.inc, 0) = 0
                    and f.file# = d.file#)
            group by size_date) evt
      where evt.size_date (+) = cal.mon)
order by mon
/



You get a neat output: one row per month with the cumulative size in GB and a histogram bar drawn with '=' characters.


Note that this is a cumulative database growth Month on Month till date from the inception of the database.

If you want to make a YoY growth, calculate the Yearly growth and apply the YoY formula in a excel sheet and provide it to the application/storage team for analysis.

Below is the Excel data I made and provided to an application team last week:



Growth YoY
2012: 2012 size - 2011 size = 3.75
2013: 2013 size - 2012 size = 1
2014: 2014 size - 2013 size = 8.44
2015: 2015 size - 2014 size = 13.36

%YoY
2012: (YoY growth of 2012 / 2011 database size) * 100 = (3.75/14.64)*100 = 25.61%
2013: (YoY growth of 2013 / 2012 database size) * 100 = (1/18.39)*100 = 5.44%
2014: (YoY growth of 2014 / 2013 database size) * 100 = (8.44/19.39)*100 = 43.53%
2015: (YoY growth of 2015 / 2014 database size) * 100 = (13.36/27.83)*100 = 48.01%
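The same %YoY arithmetic as a small helper, a sketch for when the yearly sizes sit in a file instead of a spreadsheet; the figures fed in below are the ones from the table above:

```shell
# %YoY = (growth of year N / database size at end of year N-1) * 100
yoy_pct() {
  awk -v g="$1" -v prev="$2" 'BEGIN { printf "%.2f", g / prev * 100 }'
}

yoy_pct 3.75 14.64   # prints 25.61 (the 2012 figure)
```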

From the above growth analysis, we inferred that there would be an average growth of 50% in the database size.

Get AWR, ADDM, and ASH reports using SQL scripts

One way to identify the causes of poor performance in the database is to review the AWR, ADDM, and
ASH reports. They should be for a period of 60 or more minutes, covering the poor performance
period. The report gets generated in the same directory where you have initiated the sqlplus session.

To generate the AWR report, execute the following script and select two snapshots:

SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql

>> Choose the TXT format or the HTML format.

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type:


To generate the ADDM report, execute:

SQL> @$ORACLE_HOME/rdbms/admin/addmrpt

>> Choose the TXT format or the HTML format.

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 46086
Begin Snapshot Id specified: 46086

Enter value for end_snap: 46087
End   Snapshot Id specified: 46087



Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is addmrpt_1_46086_46087.txt.  To use this name,
press <return> to continue, otherwise enter an alternative.


To generate the ASH report, execute:

SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql

>> Choose the TXT format or the HTML format.

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type:

Defaults to current database

Using database id: 457724301

Defaults to current instance

Using instance number: 1


ASH Samples in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Oldest ASH sample available:  20-Oct-15 12:50:18   [  10787 mins in the past]
Latest ASH sample available:  28-Oct-15 00:36:05   [      1 mins in the past]


Specify the timeframe to generate the ASH report
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter begin time for report:

--    Valid input formats:
--      To specify absolute begin time:
--        [MM/DD[/YY]] HH24:MI[:SS]
--        Examples: 02/23/03 14:30:15
--                  02/23 14:30:15
--                  14:30:15
--                  14:30
--      To specify relative begin time: (start with '-' sign)
--        -[HH24:]MI
--        Examples: -1:15  (SYSDATE - 1 Hr 15 Mins)
--                  -25    (SYSDATE - 25 Mins)

Defaults to -15 mins
Enter value for begin_time:


Just keep pressing ENTER to accept the default values.

Archivelog generation

Hello friends, today again comes a post related to my favorite topic, "Oracle Data Guard".
This may or may not be useful in your context, but I keep this script handy to check the amount of archives generated in the database, just to have a rough estimate of archive generation over a period of time. Hope you find it useful.

Archivelog generation on a daily basis:

alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

set pages 1000
select trunc(COMPLETION_TIME,'DD') Day, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,
       count(*) Archives_Generated
from v$archived_log
group by trunc(COMPLETION_TIME,'DD'), thread#
order by 1;
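The MB column is just BLOCKS * BLOCK_SIZE converted to megabytes. A quick check of that conversion; the 512-byte redo block size and the block count are example values, not taken from a real log:

```shell
# v$archived_log sizes are BLOCKS * BLOCK_SIZE bytes; 1048576 bytes = 1 MB
archive_mb() {
  awk -v blocks="$1" -v bs="$2" 'BEGIN { printf "%.0f", blocks * bs / 1048576 }'
}

archive_mb 204800 512   # prints 100
```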


Archive log generation on an hourly basis:

set pages 1000
select trunc(COMPLETION_TIME,'HH') Hour, thread#,
       round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,
       count(*) Archives
from v$archived_log
group by trunc(COMPLETION_TIME,'HH'), thread#
order by 1;

27 October 2015

Switchover and Failover

An Oracle database operates in one of two roles: primary or standby. Data Guard helps you change the role of a database using either a switchover or a failover:

A switchover is a role reversal between the primary database and one of its standby databases. A switchover guarantees no data loss and is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role, and the standby database transitions to the primary role.



Switchover
You can switch a database from the primary role to the standby role, as well as from standby to primary. This is known as a database switchover, because the standby database that you specify becomes the primary database, and the original primary database becomes a standby database, with no loss of data.

Whenever possible, you should switch over to a physical standby database.

If the switchover transitions a physical standby database to the primary role, then:
  • The original primary database will be switched to a physical standby role.
  • The online redo log files will be continuously archived from the new primary database to all standby databases in the configuration.


The original primary database will be restarted as a part of the switchover operation.

Standby databases not involved in the switchover will continue operating in the state they were in before the switchover occurred and will automatically begin applying redo data received from the new primary database.


If the switchover transitions a logical standby database to the primary role, then:
  • The original primary database will be switched to a logical standby role.


Neither the primary database nor the logical standby database needs to be restarted after the switchover completes.

Other logical standby databases in the broker configuration that were not involved in the switchover will remain viable after the switchover. There is no need to restart any databases. All physical and snapshot standby databases will be disabled after a switchover to a logical standby database.

Switchover to a logical standby database is disallowed when the configuration is operating in maximum protection mode.


Failover
A failover is done when the primary database (all instances of an Oracle RAC primary database) fails or has become unreachable and one of the standby databases is transitioned to take over the primary role. Failover should be performed when the primary database cannot be recovered in a timely manner. Failover may or may not result in data loss depending on the protection mode in effect at the time of the failover.



References:
http://docs.oracle.com/cd/B28359_01/server.111/b28295/sofo.htm

ORA-01000: maximum open cursors exceeded

Users started complaining about the below error since today evening:

ORA-01000: maximum open cursors exceeded

Cause: Application attempted to open too many cursors. The initialization parameter OPEN_CURSORS determines the maximum number of cursors per user.

Action: Modify the application to use fewer cursors. If this error occurs often, shut down Oracle, increase the value of OPEN_CURSORS, and then restart Oracle.

Increase the value of the open_cursors parameter and keep track of the cursors being opened by the application, and of whether they are being closed or left open.

alter system set open_cursors = 1000 scope=both;

Try to close whatever cursors are no longer in use, and restart the database.

Will keep posted on further improvements or modifications to this issue.

Update:

Also came across the parameter SESSION_CACHED_CURSORS, which sets the number of cached closed cursors each session can have. You can set SESSION_CACHED_CURSORS higher than OPEN_CURSORS, lower than OPEN_CURSORS, or anywhere in between. This parameter has no effect on ORA-01000 errors or on the number of cursors a session can have open. Conversely, OPEN_CURSORS has no effect on the number of cursors cached. There is no relationship between the two parameters.

Let's see if this is going to work; I will keep this parameter at 100.

26 October 2015

Sync standby database using incremental backups

Hello friends, this article is about syncing a standby database with the primary database using the incremental backup method. There was an upgrade activity in the project where one of the DR locations was blacked out for a switch upgrade. This created a huge gap in archive shipping to the DR location; in addition, the archive logs were deleted at the primary location when the RMAN backups ran during the outage. The only easy option left was to recover using an incremental backup from the primary database.

Cancel the MRP process at the standby database and get the minimum SCN using the below commands:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

select to_char(current_scn) from v$database;

select min(fhscn) from x$kcvfh;

select min(f.fhscn) from x$kcvfh f, v$datafile d where f.hxfil =d.file# and d.enabled != 'READ ONLY';


TO_CHAR(CURRENT_SCN)
----------------------------------------
33750546754

MIN(FHSCN)
----------------
33750545679

MIN(F.FHSCN)
----------------
33750545679


Go to the primary database and take an incremental backup using the minimum SCN obtained from the above queries; in this case the SCN used was "33750545679":

run
{
allocate channel d0 type disk FORMAT '/dbsabbkp/rmanbkp/%d_%s_%t.bak';
allocate channel d1 type disk FORMAT '/dbsabbkp/rmanbkp/%d_%s_%t.bak';
allocate channel d2 type disk FORMAT '/dbsabbkp/rmanbkp/%d_%s_%t.bak';
allocate channel d3 type disk FORMAT '/dbsabbkp/rmanbkp/%d_%s_%t.bak';
backup as compressed backupset incremental from scn 33750545679 database;
copy current controlfile to '/dbsabbkp/rmanbkp/control01full_%d_%t_%U.bak';
backup current controlfile for standby format '/dbsabbkp/rmanbkp/standbycontrol_%d_%T_%U.ctl';
RELEASE CHANNEL d0;
RELEASE CHANNEL d1;
RELEASE CHANNEL d2;
RELEASE CHANNEL d3;
}


scp the backup files to the standby database server:

scp oracle@10.1.1.7:/dbstpbkp/rmanbkp/* /dbsabbkp/rmanbkp/

At the standby database server

rman target /
startup nomount;

restore standby controlfile from '/dbsabbkp/rmanbkp/standbycontrolXXXXXX.ctl';

sql 'alter database mount standby database';

catalog start with '/dbsabbkp/rmanbkp';

RECOVER DATABASE NOREDO;

alter database recover managed standby database disconnect from session;


On Primary:-
==========

select thread#, max(sequence#) "Last Primary Seq Generated"
from v$archived_log
where first_time between (sysdate-1) and (sysdate+1)
group by thread# order by 1;

On Standby:-
==========

select thread#, max(sequence#) "Last Standby Seq Received"
from v$archived_log
group by thread# order by 1;

select thread#, max(sequence#) "Last Standby Seq Applied"
from v$archived_log where applied='YES'
group by thread# order by 1;

SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE# ASC;

select process, thread#, sequence#, status from v$managed_standby where process='MRP0';

Data file resizing

Hello friends, this is a vast topic; I will try to keep it short and understandable: how much space can you gain by resizing a data file and reclaiming the unused space? Environments differ in how much free space they keep for different tablespaces; many go with 70% usage, while many go with 85%. Be sure that extents can still be allocated in each tablespace: there may be enough free space, but it may be too fragmented to be used. The query that can be used is:

column file_name format a50 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings  format 999,990 heading "Poss.|Savings"
break on report
compute sum of savings on report
column value new_val blksize
select value from v$parameter where name = 'db_block_size';
select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
       ceil( blocks*&&blksize/1024/1024) currsize,
       ceil( blocks*&&blksize/1024/1024) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+) order by savings desc
/
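Each row of that report maps to an ALTER DATABASE DATAFILE ... RESIZE command. Below is a sketch of the same ceil() arithmetic that also emits the resize statement; the file name and block figures are made-up examples:

```shell
# smallest possible size = ceil(hwm_blocks * block_size / 1 MB)
resize_target() {  # args: file_name hwm_blocks block_size
  awk -v f="$1" -v hwm="$2" -v bs="$3" 'BEGIN {
    mb = hwm * bs / 1048576
    if (mb != int(mb)) mb = int(mb) + 1          # ceil, as in the query
    printf "alter database datafile '\''%s'\'' resize %dm;\n", f, mb
  }'
}

resize_target /u01/oradata/users01.dbf 131072 8192
# prints: alter database datafile '/u01/oradata/users01.dbf' resize 1024m;
```

Never resize below the high-water mark; the database will refuse with an error if data sits beyond the new size.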
*********************************************************************************
The above query was derived from the work-around below. We concentrate on the data files here, rather than the tablespaces, because a tablespace may have one or multiple data files. Also, only space after the last data block can be de-allocated, so the next step is to find the last data block. To check this, we use the command:

select tablespace_name,file_id,max(block_id) max_data_block_id
from dba_extents
group by tablespace_name,file_id;


Now that the command to find the last data block has been figured out, it is time to find the free space in each file above that last data block:

Select a.tablespace_name,a.file_id,b.bytes bytes_free
From (select tablespace_name,file_id,max(block_id) max_data_block_id
from dba_extents
group by tablespace_name,file_id) a
,dba_free_space b
where a.tablespace_name = b.tablespace_name
and a.file_id = b.file_id
and b.block_id > a.max_data_block_id;

Basic checks

Hello friends,

Below are a few commands and scripts for doing basic checks when a user complains about slowness, no connectivity, or being unable to connect to the database. Please note, these are just a few of the aspects you can look into; there can be many more to check when resolving such issues.
*********************************************************************************
Check the DB uptime, to see whether there was a database bounce recently without your knowledge:

col host_name for a20
select host_name,
       instance_name,
       to_char(startup_time, 'dd-mm-yyyy hh24:mi:ss') startup_time,
       floor(sysdate-startup_time) days
from   sys.v_$instance;

*********************************************************************************
Check the listener status

lsnrctl status LISTENER_NAME

*********************************************************************************
Check if the mount points are full

df -h

*********************************************************************************
Check the alert log for errors:

SQL> show parameter background_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /oracle/admin/cpowerp2/bdump

open the log and search for errors

*********************************************************************************
To kill a session:

alter system kill session '88,20708' immediate;
*********************************************************************************
The Linux top command is an interactive utility that displays a character-based screen of all processes running on the current system. The screen updates itself every few seconds, and you can sort the screen contents by characteristics like CPU Usage or Memory Use.

top

It shows summary information for tasks, memory, CPU and swap. Press 'q' (or Ctrl+C) to quit.

Press Shift+O to choose a sort field by its letter; for example, press 'a' to sort processes by PID (process ID). Press 'q' (or Ctrl+C) to exit.

Running top with the '-u' option displays the processes of a specific user:

top -u oracle

Press 'z' inside a running top session to display processes in color, which can make them easier to identify.

Press 'c' to display the absolute path of each running process.

The default screen refresh interval is 3.0 seconds; press 'd' to change it to the desired value.

After finding a process's PID, press 'k' to kill it without leaving the top window.

Press Shift+P to sort processes by CPU utilization.

Press 'r' to renice (change the priority of) a process.

Press Shift+W to save the current top configuration (to /root/.toprc when running as root).

Press 'h' for top's built-in help.
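For scripting, top can also run non-interactively; a minimal sketch (batch mode, plus a ps equivalent of the Shift+P CPU sort; the line counts are just examples):

```shell
# Batch mode: one snapshot, no interactive screen (useful in scripts/cron)
top -b -n 1 | head -n 15

# A ps equivalent of sorting by CPU (like Shift+P inside top)
ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 11
```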

*********************************************************************************

To check which mode, role, the database is at present.

select name,open_mode, database_role from v$database;

*********************************************************************************

Check Tablespace details

select a.tablespace_name, a.allocated, nvl(b.freespace,0) freespace,
       round(nvl(b.freespace,0)/a.allocated*100) "% free"
from (select tablespace_name, sum(bytes)/1024/1024/1024 allocated from dba_data_files group by tablespace_name) a,
     (select tablespace_name, sum(bytes)/1024/1024/1024 freespace from dba_free_space group by tablespace_name) b
where a.tablespace_name = b.tablespace_name(+)
order by 4;

(The outer join and nvl keep tablespaces that have no free space left from dropping out of the report.)

*********************************************************************************
Check the archive location to see whether the archive destination is full (note: the check is different when the FRA is the archive destination):

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /archive/oradata/cpowerp2/archive
Oldest online log sequence     346812
Next log sequence to archive   346814
Current log sequence           346814

Exit SQL*Plus and check at the OS level with the command below:

[oracle@GCPLINCIN1CPW01 ~]$ df -h /archive/oradata/cpowerp2/archive
Filesystem                  Size  Used  Avail  Use%  Mounted on
/dev/mapper/emcvg4-archive   92G   42G    46G   48%  /archive

If your archive destination is full, the database will not allow connections to the database and the users will get the "ORA-00257: archiver error. Connect internal only, until freed" message.

Clear the archive destination by taking archive log backups and then switching the logfile.

The "ALTER SYSTEM SWITCH LOGFILE" command does not wait for the archiver process (ARCH) to finish copying the online redo log to the archive destination:

It triggers a database checkpoint.
It immediately starts writing to the next redo log.
In the background, the ARCH process copies the "old" redo log file to the archive destination.
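A minimal sketch of clearing the destination (assuming RMAN is configured with a valid backup device; adjust to your environment):

```sql
RMAN> backup archivelog all delete input;

-- or, if the logs are already backed up at least once, remove the
-- on-disk copies:
RMAN> delete archivelog all backed up 1 times to device type disk;

-- then, from SQL*Plus, force a log switch so archiving resumes:
SQL> alter system switch logfile;
```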

*********************************************************************************

Check the datafile size and details.

col file_name for a40
set lines 200

select substr(df.name, 1, 40) file_name,
       df.bytes/1024/1024 "allocated size(mb)",
       (df.bytes/1024/1024) - nvl(sum(dfs.bytes)/1024/1024, 0) "used size(mb)",
       nvl(sum(dfs.bytes)/1024/1024, 0) "free size(mb)"
from   v$datafile df, dba_free_space dfs
where  df.file# = dfs.file_id(+)
group by df.name, df.file#, df.bytes
order by file_name;

Check the datafiles of a particular tablespace

select sum(bytes)/1024/1024/1024,file_name from dba_data_files where tablespace_name='SYSAUX' group by file_name;

*********************************************************************************
To check session details for a particular SID:

select sid,username,machine,program,logon_time from v$session where sid=496;

--------------------------------
To find the query being run by that SID:
--------------------------------

select a.sql_text
from v$sqltext a,
v$session b
where a.address = b.sql_address
and a.hash_value = b.sql_hash_value
and b.sid = &1  -- SID from the query above
order by a.piece;
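On 10g and later, the same lookup can be written as a single join on sql_id; a sketch (&1 is the SID found above):

```sql
select s.sid, q.sql_fulltext
from   v$session s, v$sql q
where  s.sql_id = q.sql_id
and    s.sid    = &1;
```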

*********************************************************************************
To check the session counts and the username connected to the database:

select username,status,count(*) from v$session group by username,status;

*********************************************************************************
Long running queries:

select * from v$session_longops where time_remaining>0;

select sid, serial#, opname, target, sofar, totalwork, units,
to_char(start_time,'dd/mon/yyyy hh24:mi:ss') start_time,
to_char(last_update_time,'dd/mon/yyyy hh24:mi:ss') last_update_time,
time_remaining, elapsed_seconds, message, username
from v$session_longops
where time_remaining != 0;

select s.username, s.type, s.sid, s.serial#, s.last_call_et seconds_running, q.sql_text
from   v$session s
join   v$sql q on s.sql_id = q.sql_id
where  s.status = 'ACTIVE'
and    s.type <> 'BACKGROUND'
and    s.last_call_et > 60
order by s.sid, s.serial#;

*********************************************************************************
Check for blocking locks:

select * from v$lock where block > 0;

select * from dba_2pc_pending;

select * from dba_blockers;

select * from v$resource_limit;

*********************************************************************************
select event,count(*) from v$session_event group by event;
*********************************************************************************
select local_tran_id,state from dba_2pc_pending where state='prepared';
*********************************************************************************
alter system flush buffer_cache;

alter system flush shared_pool;
*********************************************************************************

Fragmentation, Index rebuild, Gather stats

Some of the scripts used by dba for fragmentation, index rebuild, gather stats.

select owner,index_name,table_owner,table_name,status from dba_indexes where status='UNUSABLE';

Fragmentation

select 'alter table '||owner||'.'||segment_name||' move tablespace '||tablespace_name||';' from dba_segments
where owner='schema_name' and segment_name in('table_name');

Rebuild indexes (an "alter table move" marks the table's indexes UNUSABLE, so rebuild them afterwards)

select 'alter index '||index_owner||'.'||index_name||' rebuild online;' from dba_ind_columns
where table_owner='schema_name' and table_name in('table_name');

Gather stats

exec dbms_stats.gather_schema_stats (ownname => 'schema_name', cascade => true, estimate_percent => dbms_stats.auto_sample_size, method_opt => 'for all columns size auto', degree => 14);

Update:

With standard Oracle tables, we can reclaim space with the "alter table ... shrink space" command (shrink requires row movement to be enabled first; "table_name" is a placeholder):

SQL> alter table table_name enable row movement;

SQL> alter table table_name shrink space;

SQL> alter table table_name shrink space cascade;

Cleanup trace files

Oracle trace files can contain a great deal of useful diagnostic information, but at the same time they fill up the mount point quickly if tracing is active.

# Cleanup trace files more than 7 days old

root> find $DBA/$ORACLE_SID/bdump -name "*.trc" -mtime +7 -exec rm {} \;
root> find $DBA/$ORACLE_SID/udump -name "*.trc" -mtime +7 -exec rm {} \;
root> find $DBA/$ORACLE_SID/cdump -name "*.trc" -mtime +7 -exec rm {} \;


Note that the first part of this script (before the -exec) displays all trace files that are more than 7 days old:
root> find $DBA/$ORACLE_SID/bdump -name "*.trc" -mtime +7

/u01/app/oracle/admin/janet1/bdump/janet1_arc0_25005.trc
/u01/app/oracle/admin/janet1/bdump/janet1_arc0_25173.trc
/u01/app/oracle/admin/janet1/bdump/janet1_arc0_9312.trc
/u01/app/oracle/admin/janet1/bdump/janet1_arc0_9425.trc
/u01/app/oracle/admin/janet1/bdump/janet1_arc1_14300.trc
/u01/app/oracle/admin/janet1/bdump/janet1_arc2_14380.trc
/u01/app/oracle/admin/janet1/bdump/janet1_pmon_25159.trc
/u01/app/oracle/admin/janet1/bdump/janet1_snp0_25171.trc
/u01/app/oracle/admin/janet1/bdump/janet1_snp0_8188.trc

The second part then executes the rm command to remove the files found by the first part.

You can also remove files based on date; for example, the command below deletes files from the current directory that were generated on Sep 27.

rm -rf `ls -lt | grep 'Sep 27' | awk '{print $9}'`
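Parsing ls output is fragile (it breaks on unusual file names and depends on the locale's date format); a safer sketch using GNU find's -newermt test, with the directory and dates as example values:

```shell
# Delete *.trc files last modified on 27 Sep 2015, in TRACE_DIR only
# (the directory and the dates are example values; adjust as needed).
TRACE_DIR=.
find "$TRACE_DIR" -maxdepth 1 -type f -name "*.trc" \
     -newermt "2015-09-27" ! -newermt "2015-09-28" -exec rm {} \;
```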

Miscellaneous

Hello friends,

Planning to post few miscellaneous scripts and commands used for day-to-day DBA activities soon.

09 October 2015

Upgrade Oracle 9.2.0.8 to Oracle 11.2.0.3

Hello aspiring and fellow DBAs, posting a brief overview of a recent upgrade in my project. And sadly, this is a Windows environment!


==================================== EPDM upgrade from 9.2.0.8 to 11.2.0.3


1. Run the pre-upgrade check script (utlu112i.sql in the 9i database).

Take a backup of the 9i pfile and copy it to the 11g home location.

2. Update the pfile with the parameters recommended by the pre-upgrade check.
Gather optimizer statistics for the DBSNMP schema.
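A minimal sketch of the DBSNMP statistics step (the exact options are an assumption; adjust to your environment):

```sql
SQL> exec dbms_stats.gather_schema_stats(ownname => 'DBSNMP');
```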

3. Shut down the 9i database.

4. Create the Windows service with ORADIM:

oradim -new -sid EPSTG -syspwd system123 -pfile H:\oracle\product\11.2.0\dbhome_1\database\INITepstg.ora

Refresh the services list in services.msc.

5. Set the environment to the new home:

set ORACLE_HOME=<11g home>
set ORACLE_SID=EPDM

startup upgrade

6. Create the SYSAUX tablespace with segment space management auto:

create tablespace SYSAUX datafile 'H:\EP_DM\oradata\sysaux01.dbf' size 500M reuse
    extent management local
    segment space management auto
    online;

7. Now run catupgrd.sql.

8. Shut down and start up the database, then run utlrp.sql to recompile any invalid objects.

9. Check the status of the components in dba_registry:

select comp_id,version,status from dba_registry;


POST-UPGRADE SCRIPTS:

--------------------------------
SQL> @?/rdbms/admin/utlu112s.sql

Recompile any remaining stored PL/SQL:
-------------------------------------
SQL> @?/rdbms/admin/catuppst.sql
SQL> @?/rdbms/admin/utlrp.sql


LISTENER.ORA / TNSNAMES.ORA

--> Right-click on MY COMPUTER --> Properties --> Advanced --> Environment Settings --> update TNS_ADMIN & ORACLE_HOME
with the new Oracle home.

Then set the new ORACLE_HOME in the cmd prompt and start netca to
configure listener.ora and update tnsnames.ora.
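A minimal sketch of the command-prompt steps (the home path follows the one used for oradim above; adjust to your install):

```
C:\> set ORACLE_HOME=H:\oracle\product\11.2.0\dbhome_1
C:\> set PATH=%ORACLE_HOME%\bin;%PATH%
C:\> netca
```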