19 December 2010

RAC Install on Linux, Part 5

Post-Clusterware installation checks (run on each node):


[root@node1 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 1980
Available space (kbytes) : 260164
ID : 853875004
Device/File Name : /u03/oracrs/ocr
Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

[root@node1 bin]# olsnodes
bash: olsnodes: command not found
[root@node1 bin]# ./olsnodes
node1
node2
[root@node1 bin]#
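These health checks are easy to script. A minimal sketch, with the sample `crsctl check crs` output embedded for illustration (on a live node you would pipe the real `./crsctl check crs` output instead):

```shell
# Parse crsctl output and confirm all three daemons report healthy.
crs_output=$(cat <<'EOF'
CSS appears healthy
CRS appears healthy
EVM appears healthy
EOF
)
# Count the lines that report a healthy daemon; there should be exactly three.
healthy=$(echo "$crs_output" | grep -c 'appears healthy')
if [ "$healthy" -eq 3 ]; then
  echo "clusterware healthy"
else
  echo "clusterware degraded" >&2
fi
```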



[root@node2 ~]# cd /home/oracle/oracle/product/10.2.0/crs/bin/
[root@node2 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node2 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 1980
Available space (kbytes) : 260164
ID : 853875004
Device/File Name : /u03/oracrs/ocr
Device/File integrity check succeeded

Device/File not configured

Cluster registry integrity check succeeded

[root@node2 bin]# ./olsnodes
node1
node2
[root@node2 bin]#
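Both nodes report identical `ocrcheck` figures, as they should, since the OCR lives on shared storage. A quick arithmetic sanity check on those figures (values copied from the output above):

```shell
# ocrcheck space figures from the runs above; used + available must equal total.
total_kb=262144
used_kb=1980
avail_kb=260164
if [ $((used_kb + avail_kb)) -eq "$total_kb" ]; then
  echo "OCR space figures consistent: ${avail_kb} KB free of ${total_kb} KB"
fi
```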


[oracle@node1 ~]$ cd /home/oracle/oracle/product/10.2.0/crs/cfgtoollogs
[oracle@node1 cfgtoollogs]$ ls
cfgfw configToolAllCommands configToolFailedCommands oui vipca
[oracle@node1 cfgtoollogs]$ ./configToolAllCommands
WARNING: node1:6200 already configured.
WARNING: node2:6200 already configured.
PRIF-50: duplicate interface is given in the input
PRIF-50: duplicate interface is given in the input

These warnings are expected when the configuration assistants are re-run: the ONS port (6200) and the cluster interfaces were already registered during the initial installation, so those steps are simply skipped.

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "node1".


Checking user equivalence...
User equivalence check passed for user "oracle".

Checking Cluster manager integrity...


Checking CSS daemon...
Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...
CRS health check passed.

CRS integrity check passed.

Checking node application existence...


Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check passed.


Post-check for cluster services setup was successful.
[oracle@node1 cfgtoollogs]$
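This post-check can be repeated at any time with `cluvfy stage -post crsinst -n node1,node2` from the CRS home. To scan a saved run for problems, a small sketch (sample lines taken from the output above; on a real system point the here-document at your spooled log instead):

```shell
# Scan cluvfy-style post-check output for any check that did not pass.
log=$(cat <<'EOF'
Node reachability check passed from node "node1".
User equivalence check passed for user "oracle".
Cluster manager integrity check passed.
OCR integrity check passed.
CRS integrity check passed.
Post-check for cluster services setup was successful.
EOF
)
# A clean run contains no line mentioning "failed" (case-insensitive).
failures=$(echo "$log" | grep -ci 'failed')
echo "failed checks: $failures"
```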


ASM installation

After running root.sh, the ASM home installation ends with this summary:
The following J2EE Applications have been deployed and are accessible at the URLs listed below.

iSQL*Plus URL:
http://node1:5560/isqlplus

iSQL*Plus DBA URL:
http://node1:5560/isqlplus/dba

Enterprise Manager 10g Database Control URL:



Database installation

The database installation ends with a similar summary, this time with the Database Control URL populated:
The following J2EE Applications have been deployed and are accessible at the URLs listed below.

iSQL*Plus URL:
http://node1:5560/isqlplus

iSQL*Plus DBA URL:
http://node1:5560/isqlplus/dba

Enterprise Manager 10g Database Control URL:
http://node1:1158/em


Final checks

[root@node1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
[root@node1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
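If a volume were missing here, `scandisks` would normally bring it in. A sketch to confirm that every ASMLib volume created earlier is visible (the sample `listdisks` output is embedded; on a node you would pipe the live `/etc/init.d/oracleasm listdisks` output instead):

```shell
# Volumes created during the ASMLib setup in the earlier parts of this series.
expected="VOL1 VOL2"
found=$(cat <<'EOF'
VOL1
VOL2
EOF
)
missing=0
for vol in $expected; do
  # -x matches the whole line, so VOL1 does not also match VOL11.
  echo "$found" | grep -qx "$vol" || { echo "$vol missing"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all ASMLib volumes visible"
```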
[root@node1 ~]# cd /home/oracle/oracle/product/10.2.0/crs/bin/
[root@node1 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 bin]# su - oracle
[oracle@node1 ~]$ ps -ef | grep pmon
oracle 12294 11729 0 17:26 pts/1 00:00:00 grep pmon
[oracle@node1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
[oracle@node1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jan 13 17:26:20 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 92274688 bytes
Fixed Size 1217884 bytes
Variable Size 65890980 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
[oracle@node1 ~]$ asmcmd
ASMCMD> lsdg
State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED NORMAL N N 512 4096 1048576 9208 6688 0 3344 0 DATA/
ASMCMD> exit
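The Usable_file_MB column in `lsdg` is derived, not measured: in a NORMAL-redundancy group every file is two-way mirrored, so usable space is half the free space left after reserving Req_mir_free_MB. With the figures above:

```shell
# Figures from the lsdg output for the DATA disk group (NORMAL redundancy).
free_mb=6688
req_mir_free_mb=0
# Two-way mirroring halves the space available for file data.
usable_mb=$(( (free_mb - req_mir_free_mb) / 2 ))
echo "usable: ${usable_mb} MB"
```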


[oracle@node1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/asm
[oracle@node1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? rac1
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/10.2.0/db_1
[oracle@node1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jan 13 17:28:19 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 104859216 bytes
Database Buffers 176160768 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.
SQL> select * from v$active_instances;

INST_NUMBER INST_NAME
----------- ------------------------------------------------------------
1 node1:rac1
2 node2:rac2

SQL>
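For a scripted confirmation that both instances joined the cluster, the rows from v$active_instances can simply be counted (sample rows embedded below; on a live cluster you would spool the query from sqlplus, or use `srvctl status database -d <db_name>` from the database home):

```shell
# Rows as returned by: select * from v$active_instances;
instances=$(cat <<'EOF'
1 node1:rac1
2 node2:rac2
EOF
)
# Each active instance produces one row; a healthy two-node RAC shows two.
count=$(echo "$instances" | wc -l)
echo "active instances: $count"
```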
