(Agent‑based Linux PNs) Enroll protected nodes
onQ supports both agent‑based and agent‑less PNs. Nodes that are not hosted by VMware require that an agent (onQ Service) be installed. Agent‑based enrollment enables you to use all of the operation and monitoring features that onQ has to offer.
Linux PN preparation is very important for RN builds to succeed. In some cases, different RHEL release distributions require unique tasks:
• Install the xen-aware kernel package. Unlike RHEL 5.x, RHEL 6.x and RHEL 7 have a xen-aware kernel built in; therefore, there is no need to install a xen-aware kernel package on these versions.
• Install the agent software. This task is required for all supported versions.
• Enforce the grub boot menu. This task is required for all supported versions in order to boot the RN successfully, though the specific steps vary by version.
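The version check that drives these per-release tasks can be sketched as follows (a minimal sketch; it assumes the standard /etc/redhat-release string, and the variable names are illustrative):

```shell
# Sketch: decide whether a xen-aware kernel package must be installed,
# based on the major RHEL release parsed from /etc/redhat-release.
release="Red Hat Enterprise Linux Server release 7.0 (Maipo)"  # normally: $(cat /etc/redhat-release)
major=$(echo "$release" | sed 's/.*release \([0-9]*\).*/\1/')
if [ "$major" -ge 6 ]; then
    echo "xen-aware kernel is built in; no package needed"
else
    echo "install the xen-aware kernel package"
fi
```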
(RHEL 7.0) To enroll an agent‑based Linux PN:
Use this procedure to enroll an agent‑based Linux PN running RHEL 7.0.
If you remove orphan data for a given PN from the HA but not from the DR, and later re-enroll that same PN on the same HA, the DR fails to add future snapshots for this PN, thereby compromising disaster recovery.
2. Log on to the HA’s onQ Portal as varadmin.
3. SSH to the server that you want to enroll as a PN, then log on as root.
4. Install the agent software:
a. Launch the installer:
Note: By default, wget preserves the original files; a newly retrieved install.py is saved with extensions .1, .2, etc. Use the -r option (see the wget man page) to overwrite the existing file.
b. From within a folder (/tmp) where you want to save the install script, run the following command:
# wget -r http://<onQ-IP-address>/install.py
c. Start the installer:
# cd <onQ-IP-address>
# python ./install.py
d. Type the credentials for either the onQ varadmin or admin user.
e. (Optional) In the node parameter fields provided, specify valid values and modify defaults as needed; file system type and capacity are not editable. (The install utility lists all available volumes/partitions/mount points, and the onQ Portal enforces any file system requirements as outlined in Linux Filesystem Format Requirements.) You can change these node settings at any point after enrollment via the Protection Config tab in the onQ Portal.
f. Select Save to update/install the client node package on the Linux node.
g. Wait; do not press Enter. Evaluate the screen output against the messages in the table below to determine whether the installation succeeded.
• If yes, exit the script.
• If no, press Enter to launch the installer UI again. Correct the problem, then repeat Step e through Step g.
Table 2: (Agent-based Linux PNs) Problems Before Enrollment
Completed Successfully. The iptables firewall is enabled on this system.... | Message appears in the shell after the GUI cursor exits. In addition, you’ll be instructed to open ports.
Incorrect/invalid values entered | The install utility stops if you type incorrect/invalid values. Correct the problem and Save again, or Cancel.
not authorized | You either typed the credentials incorrectly or the user account does not have root privileges. |
5. Install the Netcat (nc) utility, if not already installed. On your yum‑enabled PN, run the following command, then verify that the package was installed. This utility reads and writes data across network connections using TCP or UDP. onQ depends on this utility for FLR restores.
# yum install nc
# which nc
nc is /usr/bin/nc
# rpm -qf /usr/bin/nc
nmap-ncat-6.40-4.el7.x86_64
6. Create the grub boot menu. In order to boot the RN successfully, the PN needs to prepare the init ram disk image with the required drivers and legacy grub boot menu.
a. Verify OS version:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
b. Make sure ext2/ext3/ext4 file systems utilities are installed.
# rpm -qa | grep e2fsprogs
e2fsprogs-libs-1.42.9-4.el7.x86_64
e2fsprogs-1.42.9-4.el7.x86_64
If they are not installed, install them now.
c. Generate the init ram disk with xen drivers and ext4 file system modules.
Print the kernel release:
# uname -r
3.10.0-123.13.2.el7.x86_64
Here 3.10.0-123.13.2.el7.x86_64 is the example kernel release; change it to match the PN's kernel release:
# cd /boot
# mkdir -p /boot/grub
# dracut --force --filesystems "ext4 ext3" \
  --add-drivers "xen:vbd xen:vif" \
  initramfs-3.10.0-123.13.2.el7xen.x86_64.img
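If you prefer not to type the release by hand, the xen initramfs image name can be derived from the running kernel. A sketch, assuming the x86_64 naming convention shown above (the variable names are illustrative):

```shell
# Sketch: derive the xen initramfs image name from the PN's kernel release.
kver="3.10.0-123.13.2.el7.x86_64"            # normally: $(uname -r)
img="initramfs-${kver%.x86_64}xen.x86_64.img" # strip arch suffix, insert "xen"
echo "$img"                                   # initramfs-3.10.0-123.13.2.el7xen.x86_64.img
```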
d. Verify the legacy grub boot loader:
# vi /boot/grub/grub.conf.xvf5
Again, 3.10.0-123.13.2.el7.x86_64 is the example kernel release; change the vmlinuz and initramfs entries to match the PN's kernel release. The kernel parameters must be on a single line; simply copy and paste from the following screen.
In root=UUID=855cd484-3852-4984-b568-ee0408c6b590, the UUID is a temporary placeholder that is replaced by the real "/" UUID during the RN build. Do not make any changes to this parameter.
For example, the contents of /boot/grub/grub.conf.xvf5:
default=0
timeout=5
title onQ Red Hat Enterprise Linux (3.10.0-123.13.2.el7.x86_64)
        root (hd0,0)
        kernel /vmlinuz-3.10.0-123.13.2.el7.x86_64 ro root=UUID=855cd484-3852-4984-b568-ee0408c6b590 plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
        initrd /initramfs-3.10.0-123.13.2.el7xen.x86_64.img
Validate that vmlinuz-3.10.0-123.13.2.el7.x86_64 and initramfs-3.10.0-123.13.2.el7xen.x86_64.img exist in the /boot folder, as shown in the example below:
# ls /boot/vmlinuz-3.10.0-123.13.2.el7.x86_64
/boot/vmlinuz-3.10.0-123.13.2.el7.x86_64
# ls /boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img
/boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img
7.
8. (Recommended) Disable the Network Manager service, if installed, so that self-tests work correctly. This service is not useful in a server environment. Note that, due to an RHEL 7.0 bug, turning off Network Manager can prevent NICs from showing up; if this occurs, re-enable the Network Manager service.
# systemctl disable NetworkManager.service
# systemctl stop NetworkManager.service
# systemctl status NetworkManager.service
9. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
• Kernel parameters are not on one single line. Some file editors wrap long parameters.
• You have a typo in the grub.conf or grub.conf.xvf5 file name.
• You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
• There is a mismatch, on the boot menu, between the kernel version and the initramfs version. If the kernel's version does not match the contents of initramfs, the RN won't boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
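The mismatch check above can be sketched as a comparison of the version strings embedded in the two file names (the file names here are the RHEL 7.0 examples; the helper variables are illustrative):

```shell
# Sketch: compare the version embedded in the kernel and initramfs names.
kernel="vmlinuz-3.10.0-123.13.2.el7.x86_64"
initrd="initramfs-3.10.0-123.13.2.el7xen.x86_64.img"
kv="${kernel#vmlinuz-}"; kv="${kv%.x86_64}"           # 3.10.0-123.13.2.el7
iv="${initrd#initramfs-}"; iv="${iv%xen.x86_64.img}"  # 3.10.0-123.13.2.el7
if [ "$kv" = "$iv" ]; then
    echo "versions match"
else
    echo "mismatch: RN will not boot"
fi
```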
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: Locate the initramfs and kernel name from the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root  1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root  2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root    52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30  2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root   165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5 | cpio -t | grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
10. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
• UDP port 5990
• TCP ports 5000 and 5990
Firewalld:
RHEL 7.0 introduces a new firewall service, a dynamic firewall daemon known as firewalld, in place of the iptables service; however, the traditional iptables service is still supported if installed. For details, see the Red Hat Linux 7 Security Guide. If you choose to disable firewalld, there is no need to configure firewalld firewall rules: simply skip this procedure.
Both the firewalld daemon and the iptables service use iptables commands to configure netfilter in the kernel to separate and filter network traffic. firewalld stores its configuration in various XML files in /usr/lib/firewalld/ and /etc/firewalld/.
firewalld uses network zones to separate networks into different zones of trust. Based on the level of trust, you can assign devices and traffic to a particular network zone. Each mutable network zone can have a different combination of firewall rules.
a. Verify that firewalld is in a running state.
b. Check the service status:
[root@RHEL70x64-17-167 services]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
   Active: inactive (dead)
c. Enable the service, if not already enabled:
[root@RHEL70x64-17-167 services]# systemctl enable firewalld
ln -s '/usr/lib/systemd/system/firewalld.service' '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
ln -s '/usr/lib/systemd/system/firewalld.service' '/etc/systemd/system/basic.target.wants/firewalld.service'
d. Start the service, if not already running:
[root@RHEL70x64-17-167 services]# systemctl start firewalld |
e. On the PN, find the network interface that is used to communicate with onQ. In this example, that NIC is ens32.
[root@RHEL70x64-17-167 services]# ifconfig
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.20.17.167  netmask 255.255.248.0  broadcast 10.20.23.255
        inet6 fe80::250:56ff:fe9d:2121  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:9d:21:21  txqueuelen 1000  (Ethernet)
        RX packets 7115713  bytes 476287831 (454.2 MiB)
        RX errors 0  dropped 149791  overruns 0  frame 0
        TX packets 924966  bytes 1305413839 (1.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 10  bytes 980 (980.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 980 (980.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
f. For the network interface that you identified above, find the network interface’s network zone. In this example, the network zone is work.
[root@RHEL70x64-17-167 services]# firewall-cmd --get-zone-of-interface=ens32
work
Determine your default zone. In the following example, the default zone is public.
[root@RHEL70x64-17-167 services]# firewall-cmd --get-default-zone
public
g. Associate the zone(s) with the following firewall rules. The same rules can be applied to as many zones as needed. In the following example, the dcrm-node service is associated with the work zone for ens32. The dcrm-node.xml file is located in /usr/lib/firewalld/services.
[root@RHEL70x64-17-167 services]# firewall-cmd --add-service=dcrm-node --permanent --zone=work
success
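For reference, a firewalld service definition that opens the onQ ports listed in this procedure would follow the standard firewalld service XML shape. This is a hypothetical sketch, not the actual dcrm-node.xml shipped with the agent:

```xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <!-- Hypothetical sketch of a dcrm-node-style service definition -->
  <short>dcrm-node</short>
  <description>onQ PN/RN communication (sketch)</description>
  <port protocol="tcp" port="5000"/>
  <port protocol="tcp" port="5990"/>
  <port protocol="udp" port="5990"/>
</service>
```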
h. Activate the latest firewall rules:
[root@RHEL70x64-17-167 services]# firewall-cmd --reload
success
Now the PN can communicate with onQ.
i. Set up the rule for the RN on the PN side.
The RN will be equipped with an eth0 interface, so apply the rules to the eth0 interface's zone if it differs from the PN's zone. The PN might not have an eth0 interface; in that case, the RN's eth0 will be in the default zone.
Find eth0 network interface's network zone. In this example, it is not set:
[root@RHEL70x64-17-167 services]# firewall-cmd --get-zone-of-interface=eth0
no zone
Determine your default zone. In this example, the default zone is public. Since eth0 has no zone, dcrm-node is associated with the public zone:
[root@RHEL70x64-17-167 services]# firewall-cmd --get-default-zone
public
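The fallback just described (use the interface's zone if set, otherwise the default zone) can be sketched as follows; firewall-cmd is stubbed here with hypothetical helper functions so only the decision logic is shown:

```shell
# Sketch of the zone-selection fallback for the RN's eth0 interface.
# These stubs stand in for the real firewall-cmd queries.
get_zone_of_eth0() { echo "no zone"; }  # stands in for: firewall-cmd --get-zone-of-interface=eth0
get_default_zone() { echo "public"; }   # stands in for: firewall-cmd --get-default-zone

zone=$(get_zone_of_eth0)
if [ "$zone" = "no zone" ]; then
    zone=$(get_default_zone)            # eth0 has no zone: fall back to the default
fi
echo "associate dcrm-node with zone: $zone"
```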
j. Associate the zone(s) with the following firewall rules. The same rules can be applied to as many zones as needed:
[root@RHEL70x64-17-167 services]# firewall-cmd --add-service=dcrm-node --permanent --zone=public
success
k. Activate the latest firewall rules:
[root@RHEL70x64-17-167 services]# firewall-cmd --reload
success
Now the RN can communicate with onQ, mainly for self-tests.
l. Confirm the firewall rules. In this case, the public and work zones have TCP ports 5000/5990 and UDP port 5990 open.
[root@RHEL70x64-17-167 services]# iptables -L -n
Chain IN_public (2 references)
target           prot opt source     destination
IN_public_log    all  --  0.0.0.0/0  0.0.0.0/0
IN_public_deny   all  --  0.0.0.0/0  0.0.0.0/0
IN_public_allow  all  --  0.0.0.0/0  0.0.0.0/0
Chain IN_public_allow (1 references)
target  prot opt source     destination
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:5990 ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:5000 ctstate NEW
ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0   udp dpt:5990 ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:22 ctstate NEW
Chain IN_work (0 references)
target         prot opt source     destination
IN_work_log    all  --  0.0.0.0/0  0.0.0.0/0
IN_work_deny   all  --  0.0.0.0/0  0.0.0.0/0
IN_work_allow  all  --  0.0.0.0/0  0.0.0.0/0
Chain IN_work_allow (1 references)
target  prot opt source     destination
ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0   udp dpt:631 ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:5990 ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:5000 ctstate NEW
ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0   udp dpt:5990 ctstate NEW
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:22 ctstate NEW
iptables:
a. Verify that UDP port 5990 and TCP ports 5000 and 5990 are open and listed above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which is reported in the line that appears after INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target               prot opt source    destination
1    RH-Firewall-1-INPUT  all  --  anywhere  anywhere
...
10   REJECT               all  --  anywhere  anywhere  reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (line 10 and line 1, respectively, in the step above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables:           [ OK ]
# service iptables restart
Flushing firewall rules:                                    [ OK ]
Setting chains to policy ACCEPT: filter                     [ OK ]
Unloading iptables modules:                                 [ OK ]
Applying iptables firewall rules:                           [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
11. (Oracle database) Install RMAN scripts.
(RHEL 6.x) To enroll an agent‑based Linux PN:
Use this procedure to enroll an agent‑based Linux PN running RHEL 6.x.
If you remove orphan data for a given PN from the HA but not from the DR, and later re-enroll that same PN on the same HA, the DR fails to add future snapshots for this PN, thereby compromising disaster recovery.
2. Log on to the HA’s onQ Portal as varadmin.
3. SSH to the server that you want to enroll as a PN, then log on as root.
4. Install the agent software:
a. Launch the installer:
Note: By default, wget preserves the original files; a newly retrieved install.py is saved with extensions .1, .2, etc. Use the -r option (see the wget man page) to overwrite the existing file.
b. From within a folder (/tmp) where you want to save the install script, run the following command:
# wget -r http://<onQ-IP-address>/install.py
c. Start the installer:
# cd <onQ-IP-address>
# python ./install.py
d. Type the credentials for either the onQ varadmin or admin user.
e. (Optional) In the node parameter fields provided, specify valid values and modify defaults as needed; file system type and capacity are not editable. (The install utility lists all available volumes/partitions/mount points, and the onQ Portal enforces any file system requirements as outlined in Linux Filesystem Format Requirements.) You can change these node settings at any point after enrollment via the Protection Config tab in the onQ Portal.
f. Select Save to update/install the client node package on the Linux node.
g. Wait; do not press Enter. Evaluate the screen output against the messages in the table below to determine whether the installation succeeded.
• If yes, exit the script.
• If no, press Enter to launch the installer UI again. Correct the problem, then repeat Step e through Step g.
Table 3: (Agent-based Linux PNs) Problems Before Enrollment
Completed Successfully. The iptables firewall is enabled on this system.... | Message appears in the shell after the GUI cursor exits. In addition, you’ll be instructed to open ports.
Incorrect/invalid values entered | The install utility stops if you type incorrect/invalid values. Correct the problem and Save again, or Cancel.
not authorized | You either typed the credentials incorrectly or the user account does not have root privileges. |
5. Install the Netcat (nc) utility, if not already installed. On your yum‑enabled PN, run the following command, then verify that the package was installed. This utility reads and writes data across network connections using TCP or UDP. onQ depends on this utility for FLR restores.
# yum install nc
# which nc
nc is /usr/bin/nc
# rpm -qf /usr/bin/nc
nc-1.84-22.el6.x86_64
6. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
• Kernel parameters are not on one single line. Some file editors wrap long parameters.
• You have a typo in the grub.conf or grub.conf.xvf5 file name.
• You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
• There is a mismatch, on the boot menu, between the kernel version and the initramfs version. If the kernel's version does not match the contents of initramfs, the RN won't boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: Locate the initramfs and kernel name from the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root  1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root  2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root    52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30  2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root   165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5 | cpio -t | grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
7.
8. (Recommended) Disable the Network Manager service, if installed, so that self-tests work correctly. This service is not useful in a server environment.
# service NetworkManager stop
# service NetworkManager status
# chkconfig NetworkManager off
# chkconfig --list NetworkManager
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
9. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
• UDP port 5990
• TCP ports 5000 and 5990
10.
iptables:
a. Verify that UDP port 5990 and TCP ports 5000 and 5990 are open and listed above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which is reported in the line that appears after INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target               prot opt source    destination
1    RH-Firewall-1-INPUT  all  --  anywhere  anywhere
...
10   REJECT               all  --  anywhere  anywhere  reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (line 10 and line 1, respectively, in the step above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables:           [ OK ]
# service iptables restart
Flushing firewall rules:                                    [ OK ]
Setting chains to policy ACCEPT: filter                     [ OK ]
Unloading iptables modules:                                 [ OK ]
Applying iptables firewall rules:                           [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
11. (Oracle database) Install RMAN scripts.
12.
(RHEL 5.x) To enroll an agent‑based Linux PN:
Use this procedure to enroll an agent‑based Linux PN running RHEL 5.x.
If you remove orphan data for a given PN from the HA but not from the DR, and later re-enroll that same PN on the same HA, the DR fails to add future snapshots for this PN, thereby compromising disaster recovery.
2. Log on to the HA’s onQ Portal as varadmin.
3. SSH to the server that you want to enroll as a PN, then log on as root.
4. Install the agent software:
a. Launch the installer:
Note: By default, wget preserves the original files; a newly retrieved install.py is saved with extensions .1, .2, etc. Use the -r option (see the wget man page) to overwrite the existing file.
b. From within a folder (/tmp) where you want to save the install script, run the following command:
# wget -r http://<onQ-IP-address>/install.py
c. Start the installer:
# cd <onQ-IP-address>
# python ./install.py
d. Type the credentials for either the onQ varadmin or admin user.
e. (Optional) In the node parameter fields provided, specify valid values and modify defaults as needed; file system type and capacity are not editable. (The install utility lists all available volumes/partitions/mount points, and the onQ Portal enforces any file system requirements as outlined in Linux Filesystem Format Requirements.) You can change these node settings at any point after enrollment via the Protection Config tab in the onQ Portal.
f. Select Save to update/install the client node package on the Linux node.
g. Wait; do not press Enter. Evaluate the screen output against the messages in the table below to determine whether the installation succeeded.
• If yes, exit the script.
• If no, press Enter to launch the installer UI again. Correct the problem, then repeat Step e through Step g.
Table 4: (Agent-based Linux PNs) Problems Before Enrollment
Completed Successfully. The iptables firewall is enabled on this system.... | Message appears in the shell after the GUI cursor exits. In addition, you’ll be instructed to open ports.
Incorrect/invalid values entered | The install utility stops if you type incorrect/invalid values. Correct the problem and Save again, or Cancel.
not authorized | You either typed the credentials incorrectly or the user account does not have root privileges. |
6. Copy and modify /boot/grub/menu.lst:
default=0
timeout=5
hiddenmenu
title Red Hat Enterprise Linux Server by Quorum onQ (2.6.18-371.el5xen)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
        initrd /initrd-2.6.18-371.el5xen.img.5
7. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
• Kernel parameters are not on one single line. Some file editors wrap long parameters.
• You have a typo in the grub.conf or grub.conf.xvf5 file name.
• You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
• There is a mismatch, on the boot menu, between the kernel version and the initramfs version. If the kernel's version does not match the contents of initramfs, the RN won't boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: Locate the initramfs and kernel name from the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root  1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root  2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root    52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30  2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root   165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5 | cpio -t | grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
8.
9. (Recommended) Disable the Network Manager and kudzu services, if installed, so that self-tests work correctly. These services are not useful in a server environment.
# service NetworkManager stop
# service NetworkManager status
# chkconfig NetworkManager off
# chkconfig --list NetworkManager
NetworkManager 0:off 1:off 2:off 3:off 4:off 5:off 6:off
# service kudzu stop
# service kudzu status
# chkconfig kudzu off
# chkconfig --list kudzu
kudzu 0:off 1:off 2:off 3:off 4:off 5:off 6:off
10. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
• UDP port 5990
• TCP ports 5000 and 5990
11.
iptables:
a. Verify that UDP port 5990 and TCP ports 5000 and 5990 are open and listed above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which is reported in the line that appears after INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target               prot opt source    destination
1    RH-Firewall-1-INPUT  all  --  anywhere  anywhere
...
10   REJECT               all  --  anywhere  anywhere  reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (line 10 and line 1, respectively, in the step above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables:           [ OK ]
# service iptables restart
Flushing firewall rules:                                    [ OK ]
Setting chains to policy ACCEPT: filter                     [ OK ]
Unloading iptables modules:                                 [ OK ]
Applying iptables firewall rules:                           [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
12. (Oracle database) Install RMAN scripts.
13.
(Optional) To enforce a network setting:
Unlike with Windows RNs, Linux RNs cannot be assigned to networks as outlined in Assign RNs to Networks. However, you can still enforce network settings for RN self-tests.
The Linux RN build should have a working NIC. If you need to set the RN's NIC to a different IP address, you can place a file, /opt/quorum/bin/xvf.dat, to generate /etc/sysconfig/network-scripts/ifcfg-eth0.
For example:
# cat /opt/quorum/bin/xvf.dat
@@XV_PN_IP 10.20.16.74
@@XV_PN_MASK 255.255.248.0
@@XV_PN_GW 10.20.16.1
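With the values above, the generated ifcfg-eth0 would contain settings along these lines (a sketch only; the exact keys the RN build writes may differ):

```shell
# Sketch of a generated /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.20.16.74
NETMASK=255.255.248.0
GATEWAY=10.20.16.1
ONBOOT=yes
```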