Oracle RAC:
A RAC database allows multiple instances, usually residing on different servers in a cluster, to access a common database that resides on shared storage.
All the servers communicate with the shared storage (where the data files, control files and redo log files are stored) over a storage network.
The servers in the cluster are bound together using cluster management software called Oracle Clusterware; this makes them appear as a single server to the applications.
Clusters require special software, often referred to as clusterware; for RAC this is Oracle Clusterware, provided by Cluster Ready Services (CRS) in the 10g release.
Oracle Clusterware stores its configuration information in the Oracle Cluster Registry (OCR).
Practical stuff related to: Oracle Database 9i, 10g, 11g; Oracle RAC database 10g, 11g installations, cloning and upgrades; Oracle Applications 11i, R12 issues; Oracle Business Intelligence (OBIEE); Fusion Middleware 11g; SOA 11g; SQL for beginners; PL/SQL for beginners.
19 October 2009
15 October 2009
Database Issues
1. SQLPLUS NOT FOUND:
[oracle@eprax01 ~]$ sqlplus '/ as sysdba'
-bash: sqlplus: command not found
Soln: ORACLE_HOME and PATH are not set.
Do this:
[oracle@eprax01 ~]$ export PATH=/u01/app/oracle/product/11.2.0/bin:$PATH
[oracle@eprax01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0
[oracle@eprax01 ~]$ export ORACLE_SID=prd
[oracle@eprax01 ~]$ sqlplus '/ as sysdba'
SQL*Plus: Release 11.2.0.1.0 Production on Tue Jul 5 16:40:51 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
SQL>
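To avoid exporting these variables in every new session, they can be appended to the oracle user's ~/.bash_profile; a sketch using the same paths as above (adjust them for your installation):
[oracle@eprax01 ~]$ cat >> ~/.bash_profile <<'EOF'
export ORACLE_HOME=/u01/app/oracle/product/11.2.0
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=prd
EOF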
2. ORA-00845: MEMORY_TARGET not supported on this system
Soln:
Starting with Oracle 11g, automatic memory management is controlled by the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
This error is raised when the size of /dev/shm is smaller than the MEMORY_TARGET value.
Make sure /dev/shm is properly mounted and large enough. You can check it with:
df -h
and remount it with a sufficient size, for example:
# mount -t tmpfs shmfs -o size=13g /dev/shm
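To make the size persistent across reboots, an /etc/fstab entry along these lines can be used (a sketch; 13g is only an example and should be at least as large as MEMORY_TARGET):
tmpfs   /dev/shm   tmpfs   size=13g   0 0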
3. ORA-12528: TNS:listener: all appropriate instances are blocking new connections
This happens when the database is started in NOMOUNT or MOUNT state: the listener marks the instance's service as BLOCKED and refuses new connections until the database is fully open.
Symptoms:
[oracle@eprax01 admin]$ sqlplus sys/XXXX@dev
SQL*Plus: Release 11.2.0.1.0 Production on Tue Jul 5 23:14:54 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
ERROR:
ORA-12528: TNS:listener: all appropriate instances are blocking new connections
Soln:
Add (UR = A) to the CONNECT_DATA section of the alias in your TNSNAMES.ORA:
Ex:
DEV =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = eprax01)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = dev)
(UR = A)
)
)
Then try to connect again:
[oracle@eprax01 admin]$ sqlplus sys/XXXX@dev as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Tue Jul 5 23:24:35 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
SQL>
5. RMAN-05001: auxiliary file name +DATA/prd/datafile/example.269.755015953 conflicts with a file used by the target database
This is a problem with the init.ora parameters db_file_name_convert and log_file_name_convert; make sure they have proper values. For example:
*.db_file_name_convert=('+DATA/prd','+DATA/dev')
*.log_file_name_convert=('+DATA/prd','+DATA/dev')
6. ORA-15041: diskgroup "DATA" space exhausted
Add disks to the disk group.
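Assuming ASM, disks can be added with ALTER DISKGROUP from the ASM instance; a sketch (the disk path is only an example and depends on how your ASM disks are presented):
SQL> alter diskgroup DATA add disk '/dev/oracleasm/disks/DATA05';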
7. RMAN-04006: error from auxiliary database: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
You may encounter this issue while creating a physical standby or cloning your database using RMAN.
The reason: when you clone a database with RMAN, the auxiliary instance is started in NOMOUNT state, so its service is not yet registered with the listener (the database must be at least mounted for the service name to be picked up). Because the auxiliary is only in NOMOUNT state, the above error pops up when you run: DUPLICATE TARGET DATABASE TO <>
Soln:
To avoid this, connect to the auxiliary database locally and connect to the target database remotely using a service name.
Set ORACLE_SID to the auxiliary database and then log on to RMAN as shown below:
[oracle@eprax01 ~]$ rman target sys/XXXXX@prd auxiliary /
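For example, if the auxiliary instance were called dev (an assumed SID, purely for illustration), the full sequence would be:
[oracle@eprax01 ~]$ export ORACLE_SID=dev
[oracle@eprax01 ~]$ rman target sys/XXXXX@prd auxiliary /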
8. ORA-04021: timeout occurred while waiting to lock object
Flushing the shared pool is not advisable every time, but this is typically a shared pool issue where the new code cannot get registered:
SQL> alter system flush shared_pool;
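Before flushing, it can help to identify which sessions are touching the object; a sketch using v$access (MY_PACKAGE is just a placeholder object name):
SQL> select sid, owner, object, type
     from v$access
     where object = 'MY_PACKAGE';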
9. Message 1053 not found; No message file for product=network, facility=TNS
Set ORACLE_HOME properly to point to the installation directory.
10. TNS-12543: TNS:destination host unreachable
The network was down; the issue was with the IP address. Set the IP address and the network parameters properly.
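Once the network settings are corrected, basic reachability can be verified with ping and Oracle's tnsping utility (host name and alias taken from the TNS entry used in these examples):
$ ping dba.asteroid.com
$ tnsping DR01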
11.RMAN-06217: not connected to auxiliary database with a net service name
Add the service entry in TNSNAMES.ORA:
DR01 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dba.asteroid.com)(PORT = 1523))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DR01)
)
)
12. RMAN-05538: WARNING: implicitly using DB_FILE_NAME_CONVERT
released channel: prm01
released channel: prm02
released channel: prm03
released channel: prm04
released channel: prm05
released channel: stb01
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 08/22/2011 05:53:58
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary file name /u01/app/oracle/oradata/prim/example01.dbf conflicts with a file used by the target database
RMAN-05001: auxiliary file name /u01/app/oracle/oradata/prim/users01.dbf conflicts with a file used by the target database
RMAN-05001: auxiliary file name /u01/app/oracle/oradata/prim/undotbs01.dbf conflicts with a file used by the target database
RMAN-05001: auxiliary file name /u01/app/oracle/oradata/prim/sysaux01.dbf conflicts with a file used by the target database
RMAN-05001: auxiliary file name /u01/app/oracle/oradata/prim/system01.dbf conflicts with a file used by the target database
Soln: Add db_file_name_convert=('/u01/app/oracle/oradata/prim','/u01/app/oracle/oradata/DR01') to the DUPLICATE command, and then add log_file_name_convert in the init.ora of the STANDBY database.
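As a sketch, the DUPLICATE command for the standby case could carry the convert clause directly (paths as in the example above):
RMAN> duplicate target database for standby
      db_file_name_convert ('/u01/app/oracle/oradata/prim','/u01/app/oracle/oradata/DR01');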
14 October 2009
Dynamic performance views
SGA:
1. v$sgastat: shows detailed memory statistics for the SGA, broken down by pool.
SQL> select * from v$sgastat where pool = 'large pool';
POOL NAME BYTES
----------- -------------------------- ----------
large pool free memory 33554432
2. v$sga: shows the overall size of the instance's SGA: fixed size, variable size (shared pool, large pool, etc.), database buffers and redo buffers.
SQL> select * from v$sga;
NAME VALUE
-------------------- ----------
Fixed Size 741952
Variable Size 419430400
Database Buffers 805306368
Redo Buffers 5517312
3. v$instance: shows the status of the instance.
4. v$process and v$bgprocess: show the server and background processes created for the instance.
SQL> desc v$bgprocess;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PADDR                                              RAW(8)
 NAME                                               VARCHAR2(5)
 DESCRIPTION                                        VARCHAR2(64)
 ERROR                                              NUMBER

SQL> desc v$process;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ADDR                                               RAW(8)
 PID                                                NUMBER
 SPID                                               VARCHAR2(12)
 USERNAME                                           VARCHAR2(15)
 SERIAL#                                            NUMBER
 TERMINAL                                           VARCHAR2(30)
 PROGRAM                                            VARCHAR2(48)
 TRACEID                                            VARCHAR2(255)
 BACKGROUND                                         VARCHAR2(1)
 LATCHWAIT                                          VARCHAR2(16)
 LATCHSPIN                                          VARCHAR2(16)
 PGA_USED_MEM                                       NUMBER
 PGA_ALLOC_MEM                                      NUMBER
 PGA_FREEABLE_MEM                                   NUMBER
 PGA_MAX_MEM                                        NUMBER
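Using the PADDR and ADDR columns shown above, the two views can be joined to list the running background processes with their OS process ids; a small sketch:
SQL> select b.name, b.description, p.spid
     from v$bgprocess b, v$process p
     where b.paddr = p.addr;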
Archive log:
1. v$archive_dest: for the current instance, displays all the archive log destinations along with their bindings, targets and statuses.
SYS@DB01> select destination, binding, target, status from v$archive_dest;

DESTINATION                                  BINDING   TARGET  STATUS
-------------------------------------------- --------- ------- ---------
/u01/app/oracle/product/9.2.0.8/dbs/arch     MANDATORY PRIMARY VALID
                                             OPTIONAL  PRIMARY INACTIVE
                                             OPTIONAL  PRIMARY INACTIVE
                                             OPTIONAL  PRIMARY INACTIVE
                                             OPTIONAL  PRIMARY INACTIVE
                                             OPTIONAL  PRIMARY INACTIVE
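To see only the destinations that are actually in use, the INACTIVE ones can be filtered out; a sketch:
SYS@DB01> select dest_id, destination, binding, status from v$archive_dest where status != 'INACTIVE';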
v$database: Current status of Archiving:
SYS@DB01> select NAME, LOG_MODE from v$database;
NAME LOG_MODE
--------- ------------
DB01 ARCHIVELOG
SYS@DB01>
v$archived_log: displays archived log information from the control file.
SYS@DB01> select * from v$archived_log;
(wide output trimmed; the single archived log recorded here is RECID 1, STAMP 700221743,
NAME /u01/app/oracle/product/9.2.0.8/dbs/arch1_18.dbf)
v$archive_processes: displays the state of all the ARCn processes.
SYS@DB01> select * from v$archive_processes;
PROCESS STATUS LOG_SEQUENCE STAT
---------- ---------- ------------ ----
0 ACTIVE 0 IDLE
1 ACTIVE 0 IDLE
2 STOPPED 0 IDLE
3 STOPPED 0 IDLE
4 STOPPED 0 IDLE
5 STOPPED 0 IDLE
6 STOPPED 0 IDLE
7 STOPPED 0 IDLE
8 STOPPED 0 IDLE
9 STOPPED 0 IDLE
Basics Part 1 - Create Datafiles, Redo Logs, Tablespaces.
Redologs:
1. Add Online redolog file group:
SQL> alter database DB01 add logfile group 4 ('/u01/app/oracle/oradata/DB01/DB01/redo4a.rdo', '/u01/app/oracle/oradata/DB01/DB01/redo4b.rdo') size 10M;
2. Add online redo log members to a group:
SQL>alter database DB01 add logfile member '/u01/app/oracle/oradata/DB01/DB01/redo4c.rdo' to GROUP 4;
3. Drop Redo log group:
SQL>alter database drop logfile group 4;
Database altered.
4. Drop Redo log member:
SQL>alter database drop logfile member
'/u01/app/oracle/oradata/DB01/DB01/redo01c.log';
5. Renaming and relocating a redo log file:
SYS@DB01> shutdown immediate;
SYS@DB01> startup mount;
ORACLE instance started.
SYS@DB01> alter database rename file '/u01/app/oracle/oradata/DB01/DB01/redo01c.log' to '/u01/app/oracle/oradata/redo01cc.log';
Database altered.
SYS@DB01> alter database open;
6. Clearing Logfile group in case of corruption:
SQL> alter database clear logfile group 2;
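If the group has not been archived yet, the UNARCHIVED keyword is required (be aware this can invalidate existing backups for recovery through that log); a sketch:
SQL> alter database clear unarchived logfile group 2;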
Tablespaces:
1. Create tablespace options:
SQL>create tablespace ts1 datafile '/u01/app/oracle/DB01/data/df1.dbf' size 100M;
Options:
DATAFILE: Name and location of datafiles.
MINIMUM EXTENT: ensures that every used extent size in the tablespace is a multiple of the specified integer.
LOGGING: by default, changes to all the tables and indexes in the tablespace are recorded in the redo logs.
NOLOGGING: changes are not recorded in the redo logs.
EXTENT MANAGEMENT: LOCAL/DICTIONARY
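Putting a few of these options together, a sketch of a locally managed tablespace (the path and size are examples only):
SQL> create tablespace ts2 datafile '/u01/app/oracle/DB01/data/df2.dbf' size 100M
     logging
     extent management local;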
Archivelog mode:
Filled online redo log files are backed up by the archiver (automatically or manually).
Put the database in archivelog mode:
1. SYS@DB01> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u01/app/oracle/product/9.2.0.8/dbs/arch
Oldest online log sequence 16
Current log sequence 18
2. SYS@DB01> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
3. SYS@DB01> startup mount
ORACLE instance started.
Total System Global Area  235999648 bytes
Fixed Size                    450976 bytes
Variable Size              201326592 bytes
Database Buffers            33554432 bytes
Redo Buffers                  667648 bytes
Database mounted.
4.
SYS@DB01> alter database archivelog;
Database altered.
SYS@DB01> alter database open;
Database altered.
SYS@DB01>
5. Take a fresh backup of the database; the previous backup is no longer valid, because it was taken while the database was in NOARCHIVELOG mode.
6. Enable the archiver.
At this point we can decide whether the filled redo log files are to be copied manually or automatically.
Automatic: the ARCn background processes are enabled and they copy the filled online redo log files.
Manual: done manually through OEM or SQL*Plus.
Enabling ARCHIVER at instance startup:
LOG_ARCHIVE_START=TRUE;
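If an spfile is in use, the same parameter can be set from SQL*Plus so that it takes effect at the next startup (it is static, so SCOPE=SPFILE is required); a sketch for 9i:
SQL> alter system set log_archive_start=TRUE scope=spfile;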
Enabling ARCHIVER after the instance is started:
SYS@DB01> archive log list;
Database log mode Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/product/9.2.0.8/dbs/arch
Oldest online log sequence 16
Next log sequence to archive 18
Current log sequence 18
SYS@DB01> alter system archive log start to '/u01/app/oracle/product/9.2.0.8/dbs/arch';
System altered.
SYS@DB01> archive log list;
Database log mode Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/product/9.2.0.8/dbs/arch
Oldest online log sequence 16
Next log sequence to archive 18
Current log sequence 18
SYS@DB01>
Manual archiving:
SQL> alter system archive log current;
13 October 2009
Oracle Background Processes
Database Writer(DBWR):
DBWR writes the dirty buffers from the database buffer cache to the datafiles on disk.
It ensures that a sufficient number of free buffers are available in the database buffer cache.
DBWR writes to the datafiles when one of the following events occurs:
1. CHECK POINT HAPPENS.
2. NO. OF DIRTY BUFFERS REACHES A THRESHOLD VALUE.
3. NO FREE BLOCKS AVAILABLE.
4. PING REQUEST IN A RAC ENVIRONMENT.
5. TABLESPACE TAKEN OFFLINE, MADE READ-ONLY, OR PLACED IN BEGIN BACKUP MODE.
6. DROP/TRUNCATE OF A TABLE.
Log Writer
LGWR performs sequential writes from the redo log buffer to the online redo log files.
LGWR writes in the following situations:
1. WHEN A TRANSACTION COMMITS.
2. REDO LOG BUFFER IS 1/3 RD FULL.
3. MORE THAN 1MB OF CHANGES RECORDED IN REDO LOG.
4. BEFORE DBWR WRITES DIRTY BUFFERS.
5. EVERY 3 SECONDS.
SMON-System Monitor:
1. SMON PERFORMS INSTANCE RECOVERY.
2. SPACE MANAGEMENT OF DATAFILES: COALESCING FREE SPACE, DE-ALLOCATING TEMPORARY SEGMENTS.
PMON-Process Monitor
CLEANS UP FAILED PROCESSES:
1. ROLLING BACK THE USER'S TRANSACTION.
2. RELEASING CURRENTLY HELD TABLE OR ROW LOCKS.
3. RESTARTS DEAD DISPATCHERS.
CKPT-Checkpoint
EVERY 3 SEC, CKPT STORES DATA IN CONTROLFILE TO INDICATE AT WHAT POINT THE RECOVERY SHOULD BEGIN IN CASE OF A FAILURE-- CALLED CHECKPOINT.
THE PURPOSE OF THIS IS TO ENSURE ALL THE DIRTY BUFFERS THAT WERE MODIFIED AT A PRIOR POINT IN TIME HAVE BEEN WRITTEN TO THE DATAFILES.
CKPT REDUCES THE TIME NEEDED FOR RECOVERY: ONLY THE ONLINE REDO LOG ENTRIES FOLLOWING THE LAST CHECKPOINT ARE NEEDED FOR RECOVERY.
ENSURES ALL THE COMMITTED DATA IS WRITTEN TO THE DATA FILES.
Archiver:(Optional)
1. THE ARCHIVER PROCESS BACKS UP (ARCHIVES) THE FILLED ONLINE REDO LOG GROUP AT EVERY LOG SWITCH.
2. AUTOMATICALLY ARCHIVES A FILLED REDO LOG GROUP BEFORE IT CAN BE REUSED.
11 October 2009
Init parameters of Oracle(9i/10g/11g) -- Updated Daily
Oracle 9i:
SGA and its memory structures:
SGA_MAX_SIZE: used to size the SGA.
DB_BLOCK_SIZE: the standard block size, used for the SYSTEM tablespace.
DB_CACHE_SIZE: size of the cache of standard-sized blocks.
DB_KEEP_CACHE_SIZE: size of the keep buffer cache, used to keep in memory blocks that are likely to be reused.
DB_RECYCLE_CACHE_SIZE: size of the recycle cache, used for blocks that have little chance of being reused.
LOG_BUFFER: number of bytes allocated for the redo log buffer.
SHARED_POOL_SIZE: memory allocation for the shared pool.
DB_CACHE_ADVICE: enables the buffer cache advisory, which provides statistics that can be used to size the database buffer cache.
BACKUP_TAPE_IO_SLAVES: RMAN uses the large pool to store backup buffers when BACKUP_TAPE_IO_SLAVES is set to TRUE.
OPEN_CURSORS: the number of private SQL areas that a user process can allocate.
PGA_AGGREGATE_TARGET: specifies the target aggregate amount of PGA memory available to the instance.
DBWR_IO_SLAVES: specifies the number of I/O slaves used by DBWR. The default is 0.
If set to a nonzero value, the number of I/O slaves used by ARCn, LGWR and RMAN is 4.
Note: I/O slaves are used to simulate asynchronous I/O on platforms that cannot do this efficiently.
SQL> sho parameter DBWR_IO_SLAVES
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
dbwr_io_slaves integer 0
BACKUP Related:
1. LOG_ARCHIVE_MAX_PROCESSES: controls the maximum number of archiver processes started at instance startup. When the log_archive_start parameter is set to TRUE, the instance starts as many ARCn processes as this parameter specifies.
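LOG_ARCHIVE_MAX_PROCESSES is dynamic and can be changed while the instance is running; a sketch:
SQL> alter system set log_archive_max_processes=4;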
07 October 2009
How to take Hot Backups without RMAN?
Source --> Production database
Target --> Test Database
Follow the Steps to perform Hot Backup:
1. On Production database or Source
Login to SQLPLUS:
sqlplus '/ as sysdba'
SQL>set echo on termout on
SQL> spool /home/oracle/hotbackup.log -- good practice, to trace any errors or issues encountered during the hot backup -- Optional
SQL>select 'Hot Backup to disk started at '||to_char(sysdate, 'DD-MON-YY HH24:MI:SS') StartTime from dual; ---- Backup start indication time -- Optional
SQL> select tablespace_name from dba_data_files; -- To list all the tablespaces in the database.
SQL> Alter tablespace SYSTEM begin backup;
SQL>!cp /home/oracle/.../system.dbf /home/backup/.../
SQL> alter tablespace SYSTEM end backup;
Repeat this step for each tablespace in your database.
Please note that this is a manual and very basic method of doing a hot backup of the database. The process can be automated; a simple generation script is sketched at the end of this step.
SQL> select 'Hot Backup to disk finished at '||to_char(sysdate, 'DD-MON-YY HH24:MI:SS') FinishTime from dual; -- End of Hot backup
SQL> EXIT;
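The per-tablespace loop can also be generated rather than typed by hand; a sketch that builds the begin backup / copy / end backup commands from dba_data_files (the /home/backup/ path is an example, the spooled file may need a little cleanup before running it, and a tablespace with several datafiles is simply toggled once per file):
SQL> set pages 0 lines 200 feedback off
SQL> spool /home/oracle/gen_hotbackup.sql
SQL> select 'alter tablespace '||tablespace_name||' begin backup;'||chr(10)||
            'host cp '||file_name||' /home/backup/'||chr(10)||
            'alter tablespace '||tablespace_name||' end backup;'
     from dba_data_files order by tablespace_name;
SQL> spool off
SQL> @/home/oracle/gen_hotbackup.sql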
2. Copy all the Archive logs:
SQL> alter system switch logfile; -- ensure a log switch so the current redo is archived before copying the archive files for backup.
Copy over all the archived log files from log_archive_dest location and place them in the backup directory.
3. Backup and Copy over controlfile:
SQL> alter database backup controlfile to trace as '/home/oracle/bkctl.sql';
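A binary copy of the control file can be taken in addition to the trace script (the destination path is only an example):
SQL> alter database backup controlfile to '/home/backup/control_bk.ctl';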
4. Backup Init parameter from $ORACLE_HOME/dbs directory
If you use pfile then directly copy over to backup directory
If you use spfile then use:
SQL> create pfile from spfile='/home/oracle/../sptest.ora';
Then copy the resulting init file to the backup area.
End of Hot Backup(Simple)
Note:
1. Make sure you have appropriate disk space for storing backups. If possible, clear backups that are very old or move them to tape.
2. If the database is very large, it is suggested to use RMAN, wherein we can perform incremental backups.
Coming up.. To take backup using RMAN and create RMAN repository...
Keep checking....
30 September 2009
Oracle database conventional HOT Refresh
Hot Refresh Process (ACTUAL):
In this note, Source refers to the production server and Target refers to the test server.
1. Take Hot Backup of the datafiles on the Production(Source)database -- To know how to get Hot Backups refer my post: How to take Hot Backups without RMAN?
2. Copy over the datafiles using SCP or SFTP.
3. On the production database, prepare the control file script:
SQL> alter database backup controlfile to trace as '.sql'
Copy the control file script to the destination server.
4. After the applications are shutdown, bring down the database on Test(Target).
sqlplus '/ as sysdba'
shutdown immediate;
Make a backup of the existing datafiles if needed (based on disk space).
Please be sure of the space on the server.
5. There is no need to create a new init.ora file; the old one will do. Delete the old control files (get their location from initSID.ora).
7. Edit the control file script that was copied from production. Make the necessary changes to the locations of the datafiles and redo logs; change REUSE to SET, set the database name to TEST, change NORESETLOGS to RESETLOGS, and change it to NOARCHIVELOG for the time being.
After you edit the script it should look something as below:
CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS NOARCHIVELOG
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 9983
LOGFILE
GROUP 11 '/home/oradata/TEST/redo/redo11.rdo' SIZE 1000M,
GROUP 12 '/home/oradata/TEST/redo/redo12.rdo' SIZE 1000M,
GROUP 13 '/home/oradata/TEST/redo/redo13.rdo' SIZE 1000M,
GROUP 14 '/home/oradata/TEST/redo/redo14.rdo' SIZE 1000M
DATAFILE
'/home/oradata/TEST/data/system01.dbf',
'/home/oradata/TEST/data/undotbs01.dbf',
'/home/oradata/TEST/data/cwmlite01.dbf',
'/home/oradata/TEST/data/drsys01.dbf',
'/home/oradata/TEST/data/example01.dbf',
'/home/oradata/TEST/data/indx01.dbf',
'/home/oradata/TEST/data/odm01.dbf',
'/home/oradata/TEST/data/tools01.dbf',
'/home/oradata/TEST/data/data_ts07.dbf',
'/home/oradata/TEST/data/xdb01.dbf',
'/home/oradata/TEST/data/data_ts01.dbf',
'/home/oradata/TEST/data/index_ts01.dbf',
'/home/oradata/TEST/data/index_ts02.dbf',
'/home/oradata/TEST/data/xdb02.dbf',
'/home/oradata/TEST/data/system02.dbf',
'/home/oradata/TEST/data/data_ts02.dbf',
'/home/oradata/TEST/data/sbill_data01.dbf',
'/home/oradata/TEST/data/data_ts03.dbf'
CHARACTER SET WE8ISO8859P1
;
8. Start the database in NOMOUNT state and run the control file script you just created.
9. Control file gets created.
10. The datafiles will not be consistent at this point, so copy over the archive logs from the production server and place them accordingly on the test server. The location to put the files on the test server is given by the log_archive_dest parameter in initTEST.ora.
11. Based on the time to which you need to recover the database, choose the archive logs to copy over and place them in the appropriate archiving directory on the test server.
You can get the timing (archive log sequence numbers) from the alert log of the production database.
12. There are various ways to recover the database; it is the DBA's choice and depends on the data needed. I have chosen this approach:
Once the archive logs are copied, run
SQL> recover database using backup controlfile until time '2009-04-21:22:59:04';
Make sure you have copied the archive logs up to the specified time.
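To avoid being prompted for each suggested archived log during the recovery, SQL*Plus can apply them automatically; a sketch (this assumes the copied logs are sitting in the log_archive_dest location):
SQL> set autorecovery on
SQL> recover database using backup controlfile until time '2009-04-21:22:59:04';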
13. Once media recovery completes, you will see the message 'Media recovery complete' at the SQL prompt.
Then run: alter database open resetlogs;
14. After the database is open, it is up to the DBA whether to put the database back in ARCHIVELOG mode or leave it in NOARCHIVELOG mode.
15. Add temp files to the TEMP tablespace (if you already have a temporary tablespace):
alter tablespace "TEMP_TBS" add tempfile '/home/oradata/TEST/data/temp_02.dbf' size 4000M reuse autoextend off;
Or you can create a new tablespace based on your requirements.
Move on to POST REFRESH Tasks if there are any.