17 June 2011

Install 11gR2 RAC on RHEL -- PART4

Clone RAC1 to RAC2
Change the display name to rac2 in the VMX file.
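When VMware first boots the clone and asks whether the machine was moved or copied, answer "I copied it" so that new MAC addresses are generated. A minimal sketch of the display-name edit, assuming the clone's configuration file is rac2.vmx (the path and file name depend on your VMware layout):

displayName = "rac2"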

Make sure the IP addresses are changed on rac2 and that both nodes can communicate with each other.
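On RHEL the addresses live in the interface files under /etc/sysconfig/network-scripts and the hostname in /etc/sysconfig/network. A sketch for rac2, using the addresses from the /etc/hosts file shown later in this post (device names may differ on your system, and any cloned HWADDR lines should be removed or updated to match the new MACs):

# /etc/sysconfig/network-scripts/ifcfg-eth0 (public)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.200
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 (private)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.200
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network
HOSTNAME=rac2.asteroid.com

[root@rac2 ~]# service network restart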

[root@rac2 ~]# ping rac1
PING rac1.asteroid.com (192.168.1.150) 56(84) bytes of data.
64 bytes from rac1.asteroid.com (192.168.1.150): icmp_seq=1 ttl=64 time=15.0 ms
64 bytes from rac1.asteroid.com (192.168.1.150): icmp_seq=2 ttl=64 time=0.573 ms
64 bytes from rac1.asteroid.com (192.168.1.150): icmp_seq=3 ttl=64 time=1.80 ms

--- rac1.asteroid.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.573/5.798/15.018/6.538 ms
[root@rac2 ~]# ping rac1-priv
PING rac1-priv.asteroid.com (192.168.2.150) 56(84) bytes of data.
64 bytes from rac1-priv.asteroid.com (192.168.2.150): icmp_seq=1 ttl=64 time=0.383 ms
64 bytes from rac1-priv.asteroid.com (192.168.2.150): icmp_seq=2 ttl=64 time=1.55 ms
64 bytes from rac1-priv.asteroid.com (192.168.2.150): icmp_seq=3 ttl=64 time=0.860 ms

--- rac1-priv.asteroid.com ping statistics ---



Set Up User Equivalency
Perform this on both nodes:

[oracle@rac1 .ssh]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
23:88:ba:0c:37:42:82:21:d6:57:24:ab:41:d4:c4:ce oracle@rac1
[oracle@rac1 .ssh]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
2c:85:7b:ea:f3:29:d8:54:28:d2:b2:88:b5:35:67:b1 oracle@rac1
[oracle@rac1 .ssh]$

Move node1.pub to rac2
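node1.pub is assumed here to be rac1's two public keys concatenated into a single file; node2.pub is built the same way on rac2. A sketch:

[oracle@rac1 .ssh]$ cat id_rsa.pub id_dsa.pub > node1.pub
[oracle@rac1 .ssh]$ scp node1.pub rac2:/home/oracle/.ssh/
[oracle@rac2 .ssh]$ cat id_rsa.pub id_dsa.pub > node2.pub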

Perform the steps below on rac2:

[oracle@rac2 .ssh]$ ls
id_dsa id_dsa.pub id_rsa id_rsa.pub node1.pub node2.pub
[oracle@rac2 .ssh]$ cat node1.pub node2.pub > authorized_keys
[oracle@rac2 .ssh]$ chmod 644 authorized_keys
[oracle@rac2 .ssh]$ scp authorized_keys rac1:/home/oracle/.ssh
The authenticity of host 'rac1 (192.168.1.150)' can't be established.
RSA key fingerprint is f0:d1:e2:09:0b:b4:45:d3:94:10:2d:85:42:0d:82:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.1.150' (RSA) to the list of known hosts.
oracle@rac1's password:
authorized_keys 100% 1988 1.9KB/s 00:00
[oracle@rac2 .ssh]$ ssh rac1 date
Wed Jun 8 23:27:23 IST 2011
[oracle@rac2 .ssh]$ ssh rac1-priv date
The authenticity of host 'rac1-priv (192.168.2.150)' can't be established.
RSA key fingerprint is f0:d1:e2:09:0b:b4:45:d3:94:10:2d:85:42:0d:82:b1.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,192.168.2.150' (RSA) to the list of known hosts.
Wed Jun 8 23:28:00 IST 2011
[oracle@rac2 .ssh]$
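The installer's equivalence check needs passwordless ssh with no prompts between every pair of host names, each node to itself included, so repeat the ssh ... date test for the remaining combinations on both nodes until none of them asks for a password or a host-key confirmation:

[oracle@rac1 .ssh]$ ssh rac1 date; ssh rac1-priv date; ssh rac2 date; ssh rac2-priv date
[oracle@rac2 .ssh]$ ssh rac1 date; ssh rac1-priv date; ssh rac2 date; ssh rac2-priv date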


Download and Install Clusterware:

Install the cvuqdisk package located in the clusterware media's rpm directory.
This is needed for the Cluster Verification Utility to detect shared disks.


[root@rac1 rpm]# rpm -i cvuqdisk-1.0.7-1.rpm
Using default group oinstall to install package
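The rpm must be installed on every node, so repeat this on rac2. The owning group defaults to oinstall; to use a different group, set CVUQDISK_GRP before installing:

[root@rac2 rpm]# export CVUQDISK_GRP=oinstall   # optional; oinstall is the default
[root@rac2 rpm]# rpm -i cvuqdisk-1.0.7-1.rpm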
Run the pre-installation CVU check:
[oracle@rac1 cluster]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2


Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac1"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Node connectivity passed for subnet "192.168.1.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.1.0"

Node connectivity passed for subnet "192.168.2.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.2.0"


Interfaces found on subnet "192.168.1.0" that are likely candidates for VIP are:
rac2 eth0:192.168.1.200
rac1 eth0:192.168.1.150

Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.2.200
rac1 eth1:192.168.2.150

Node connectivity check passed

Total memory check failed
Check failed on nodes:
rac2,rac1
Available memory check passed
Swap space check failed
Free disk space check failed for "rac2:/tmp"
Check failed on nodes:
rac2
Free disk space check failed for "rac1:/tmp"
Check failed on nodes:
rac1
User existence check passed for "oracle"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "oracle" in group "oinstall" [as Primary] passed
Membership check for user "oracle" in group "dba" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check failed for "glibc-2.5-24"
Check failed on nodes:
rac2,rac1
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"
Package existence check passed for "libstdc++-4.1.2"
Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check failed for "sysstat-7.0.2"
Check failed on nodes:
rac2,rac1
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.

User "oracle" is not part of "root" group. Check passed
Default user file creation mask check passed
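The memory, swap, and /tmp free-space failures above come from the deliberately small VMs and are acceptable for a lab install. The two package failures are worth fixing before continuing; assuming a yum repository (or the RHEL media) is available:

[root@rac1 ~]# rpm -q glibc sysstat        # see what is currently installed
[root@rac1 ~]# yum install -y sysstat      # repeat on rac2
[root@rac1 ~]# yum update -y glibc         # bring glibc up to the required 2.5-24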


Start the installation: mount the CD or change to the shared /mnt/hgfs/crs directory.
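runInstaller is a GUI, so the oracle user needs access to an X display first. One common lab-only way, assuming you are logged into the console as root:

[root@rac1 ~]# xhost +          # opens X access to everyone; too permissive outside a lab
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ export DISPLAY=:0.0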
[oracle@rac1 cluster]$ ./runInstaller



Click Next
Select English and click Next.





Enter the SCAN name in /etc/hosts as shown below:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public
192.168.1.150 rac1.asteroid.com rac1
192.168.1.200 rac2.asteroid.com rac2

# Virtual
192.168.1.100 rac1-vip.asteroid.com rac1-vip
192.168.1.101 rac2-vip.asteroid.com rac2-vip

# Private
192.168.2.150 rac1-priv.asteroid.com rac1-priv
192.168.2.200 rac2-priv.asteroid.com rac2-priv

# SCAN
192.168.1.111 rac1-scan.asteroid.com rac1-scan
::1 localhost6.localdomain6 localhost6
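Because the SCAN is resolved from /etc/hosts here rather than DNS, it maps to a single address (for production, Oracle recommends three addresses served round-robin by DNS). Resolution can be verified on each node with getent, which consults /etc/hosts as well as DNS:

[oracle@rac1 ~]$ getent hosts rac1-scan
192.168.1.111   rac1-scan.asteroid.com rac1-scan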


What is a SCAN name?

Oracle introduced this concept in the 11gR2 release: SCAN stands for Single Client Access Name. Its purpose is to eliminate the need to change client TNS entries when nodes are added to or removed from the cluster. What happens when you have several clients and you decide to add or remove a node? Without a SCAN, every client's connect descriptor has to be updated; with one, clients keep connecting to the same name, so you only configure the SCAN once.
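With a SCAN, a client's tnsnames.ora entry points at the one SCAN name instead of listing every node, so it never has to change as nodes come and go. A sketch of such an entry (the RACDB alias and the service name are assumptions for illustration):

RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-scan.asteroid.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb.asteroid.com)
    )
  )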




Enter the SCAN name (here rac1-scan, the entry added to /etc/hosts above; it must be resolvable from every node of the cluster). Leave Configure GNS unselected, since the SCAN is resolved through /etc/hosts rather than GNS.



Click Next




Click the Add button and add the second node.




Click Next


Click Next






Click Next


Select ASM



Enter the disk group name and select the candidate disks; adjust the discovery path if needed so they are found.
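If no candidate disks appear, the discovery path has to match the way the shared disks were presented in the earlier parts. For example, if they were labelled with ASMLib, the discovery string would be ORCL:* and the labels can be listed with:

[root@rac1 ~]# oracleasm listdisks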






Click Next



Set appropriate passwords for the SYSASM user.





Click Next. DO NOT choose the Failure Isolation (IPMI) option for this install.




Click Next




Click Next; you may get a warning message depending on the complexity of the passwords chosen.
Continue past the warning message.






Continued in Part 5.
