Enrollment in non‑cluster Environment: (Agent‑less Linux/Windows PNs) Enroll protected nodes
onQ supports both agent‑based and agent‑less PNs. onQ’s VMware Real Agentless protection provides the following benefits:
No PN CPU utilization.
Easier deployment in a VMware environment.
No need to test or revalidate servers due to the introduction of new software.
No need to worry about voiding vendor contracts or support due to installing agent‑based software.
However, before you deploy agent‑less PNs, consider the trade-offs outlined in Agent‑less PN Enrollment Limitations.
To protect agent‑less PNs, you must indirectly enroll PNs by enrolling one of the following hosts:
vCenter Server. Choose this option if your virtual machines are being hosted by multiple ESXi hosts and managed by vCenter; in this case, a vCenter enrollment provides the quickest deployment possible. This option is also advantageous if you want the ability to migrate those virtual machines to a different ESXi host using vSphere vMotion live migration; after a vMotion migration, onQ continues to back up those PNs, provided that you take into account the following requirements:
Proxy installation. Every ESXi host in a vMotion cluster must have a proxy installed from the onQ where the PN is enrolled. If you vMotion a PN on to an ESXi host that doesn't have a proxy installed, onQ cannot back up that PN and cannot notify you of this failure because onQ is unaware of that host.
Shared datastore. During a vMotion migration, virtual machines aren’t running. As a best practice, virtual machines should use a shared datastore, not a local datastore, so that they are available as soon as possible (and, therefore, available to be backed up by onQ), especially virtual machines with multiple terabytes of data. Moreover, if you don’t use shared storage, QuorumDisk.vmdk, which onQ creates so that agent‑less PNs have persistent backup records (the scan and blocksums folders), doesn’t get migrated automatically by vMotion; onQ must therefore perform a full scan, not a delta, after the migration. (QuorumDisk.vmdk is created in the VM folder, but isn’t part of the VM itself.) If you prefer local storage, manually migrate QuorumDisk.vmdk to the destination datastore before you perform the vMotion migration; one possible way to do this from the ESXi shell is sketched after this list.
ESX/ESXi Server. Choose this option if your virtual machines are being hosted by a single ESXi host.
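One possible way to migrate QuorumDisk.vmdk manually from the ESXi shell is to clone it to the destination datastore with vmkfstools and then remove the original. The datastore names (local-ds, shared-ds) and VM folder (MyPN) below are placeholders for illustration only; substitute your own values, and verify the clone before deleting anything:
# vmkfstools -i /vmfs/volumes/local-ds/MyPN/QuorumDisk.vmdk /vmfs/volumes/shared-ds/MyPN/QuorumDisk.vmdk
# vmkfstools -U /vmfs/volumes/local-ds/MyPN/QuorumDisk.vmdk
You can also move the file with the vSphere Client’s datastore browser if you prefer a UI.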
VMware‑hosted nodes, whether Windows or Linux, do not require that an agent (onQ Service) be installed on the PN. Instead, on each ESX/ESXi server (aka Proxy Host) that you enroll, the installer deploys 1 to 3 proxies per operating system type (Linux or Windows), per ESX/ESXi server, and per onQ. If all the PNs that you enroll have the same operating system, onQ doesn’t deploy any PN proxies for the extraneous operating system. All PNs on a given ESX/ESXi server and having the same operating system (Linux or Windows) share the same PN proxies.
Using these proxies, the onQ’s backup utility works with VMware Snapshot to back up the PNs to the HA.
 
Note:  If you need to rename the onQ host name, you will need to perform the steps outlined in Configure Appliance’s network settings.
Note the following:
Whether you enroll using vCenter Server or ESX/ESXi Server, you can configure 1 to 3 proxies per operating system type per onQ, though you must configure at least one proxy per ESXi server and per onQ if the ESXi server is part of a vSphere cluster. The onQ enrollment process simply deploys the requested proxy type and number of proxies that you configure. Quorum recommends that you configure the maximum to realize onQ’s ability to perform three concurrent backups and to facilitate timely backups (see Stop in‑progress backups).
Whether you enroll using vCenter Server or ESX/ESXi Server, the proxies deployed from a given onQ are removed only when the last PN is deleted from that onQ; the removal is reflected in that onQ’s configuration after the deletion occurs.
For information about how onQ upgrades these proxies, go to Update Appliance software.
(Windows/vCenter) To enroll an agent‑less Windows PN:
Use this procedure to enroll an agent‑less Windows PN that’s being managed by vCenter.
 
Warning:  If your Windows 2012 PN is running on the VMware host and uses the VMware E1000E network interface adapter, squirt-server might receive a corrupted source.info from squirtcopy due to a problem with VMware’s NIC, as outlined in VMware KB 2058692. To prevent this problem, on the VMware‑hosted PNs, switch the NIC type from E1000E to E1000.
1. Install the PN proxies:
a. Enable ssh on each ESXi/ESX server in the vMotion cluster.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the vCenter radio button.
f. Provide the vCenter credentials for any user with root privileges and either the hostname or the IP address for vCenter, then GET LIST. The Add Protected Nodes via vCenter dialog appears.
The onQ Portal queries the virtual machines managed by vCenter, then displays the inventory.
g. Provide ESXi host credentials for any user with root privileges, specify proxy information for each proxy type, and select the check boxes for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP address(es) that you reserved above. If more than one IP per proxy type, provide a comma‑separated list.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, all PNs on all ESXi hosts in the vMotion cluster are being enrolled and using the maximum number of proxies allowed for each proxy type:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
h. From the ESXi/ESX host, power on the PN proxies.
i. Activate (aka verify) the PNs so that onQ can protect them.
Go to PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the PN is a Linux PN, onQ cannot enroll these XFS mount points automatically, so you must do so now. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
 
 
2. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
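For example, on a Windows Server PN you can open these ports from an elevated command prompt using Windows Firewall’s netsh interface. The rule names below are only suggestions, and your environment may manage firewall policy differently (for example, through Group Policy):
netsh advfirewall firewall add rule name="onQ TCP 5000,5990" dir=in action=allow protocol=TCP localport=5000,5990
netsh advfirewall firewall add rule name="onQ UDP 5990" dir=in action=allow protocol=UDP localport=5990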
 
3. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 10g+ database on Windows.
(Windows/ESXi) To enroll an agent‑less Windows PN:
Use this procedure to enroll an agent‑less Windows PN that’s being hosted by an ESXi server.
 
Warning:  If your Windows 2012 PN is running on the VMware host and uses the VMware E1000E network interface adapter, squirt-server might receive a corrupted source.info from squirtcopy due to a problem with VMware’s NIC, as outlined in VMware KB 2058692. To prevent this problem, on the VMware‑hosted PNs, switch the NIC type from E1000E to E1000.
1. Install the PN proxies:
a. Enable ssh on the ESXi/ESX server.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the ESX Host radio button.
f. In the User Name and Password fields, provide the ESXi/ESX host credentials for any user with root privileges.
g. In the Server field, provide either the hostname or the IP address for the ESXi/ESX host, then GET LIST.
The onQ Portal queries the virtual machines hosted by the ESXi/ESX server, then displays the inventory.
h. In the virtual machine list, select the check boxes (or ALL button) for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP addresses that you reserved above.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, three Windows PNs are being enrolled, so only one static IP address (for the Windows PN proxy WPY_<onQhostname>_<ESXhostname>) is needed:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
i. From the ESXi/ESX host, power on the PN proxies.
j. Activate (aka verify) the PNs so that onQ can protect them.
Go to PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the PN is a Linux PN, onQ cannot enroll these XFS mount points automatically, so you must do so now. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
 
 
2. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
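For example, on a Windows Server PN you can open these ports from an elevated command prompt using Windows Firewall’s netsh interface. The rule names below are only suggestions, and your environment may manage firewall policy differently (for example, through Group Policy):
netsh advfirewall firewall add rule name="onQ TCP 5000,5990" dir=in action=allow protocol=TCP localport=5000,5990
netsh advfirewall firewall add rule name="onQ UDP 5990" dir=in action=allow protocol=UDP localport=5990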
 
3. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 10g+ database on Windows.
(RHEL 7.0/vCenter) To enroll an agent‑less Linux PN:
Use this procedure to enroll an agent‑less Linux PN running RHEL 7.0 and that’s being managed by vCenter.
1. Install the PN proxies:
a. Enable ssh on each ESXi/ESX server in the vMotion cluster.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the vCenter radio button.
f. Provide the vCenter credentials for any user with root privileges and either the hostname or the IP address for vCenter, then GET LIST. The Add Protected Nodes via vCenter dialog appears.
The onQ Portal queries the virtual machines managed by vCenter, then displays the inventory.
g. Provide ESXi host credentials for any user with root privileges, specify proxy information for each proxy type, and select the check boxes for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP address(es) that you reserved above. If more than one IP per proxy type, provide a comma‑separated list.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, all PNs on all ESXi hosts in the vMotion cluster are being enrolled and using the maximum number of proxies allowed for each proxy type:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
h. From the ESXi/ESX host, power on the PN proxies.
i. Activate (aka verify) the PNs so that onQ can protect them.
Go to PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the PN is a Linux PN with XFS mount points, onQ cannot enroll those mount points automatically, so you must do so now; a quick way to identify them is sketched at the end of this step. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
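As a quick way to identify XFS mount points on a Linux PN (referenced in the activation step above), you can run the following standard, read-only commands on the PN; any file systems they report must be added to the PN’s configuration manually, per Linux Filesystem Format Requirements:
# mount -t xfs
# grep xfs /etc/fstab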
 
 
2. Create the grub boot menu. In order to boot the RN successfully, the PN needs to prepare the init ram disk image with the required drivers and legacy grub boot menu.
a. Verify OS version:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
b. Make sure ext2/ext3/ext4 file systems utilities are installed.
# rpm -qa | grep e2fsprogs
e2fsprogs-libs-1.42.9-4.el7.x86_64
e2fsprogs-1.42.9-4.el7.x86_64
If not installed, do so now:
# yum install e2fsprogs
c. Generate the init ram disk with xen drivers and ext4 file system modules.
Print the kernel release:
# uname -r
3.10.0-123.13.2.el7.x86_64
Here 3.10.0-123.13.2.el7.x86_64 is the default kernel release; change it to match the PN’s kernel release:
# cd /boot
# mkdir -p /boot/grub
# dracut --force --filesystems "ext4 ext3" \
--add-drivers "xen:vbd xen:vif" \
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
d. Verify the legacy grub boot loader:
# vi /boot/grub/grub.conf.xvf5
Here 3.10.0-123.13.2.el7.x86_64 is the default kernel release; change the vmlinuz and initramfs names to match the PN’s kernel release. The kernel parameters must be on a single line; simply copy and paste from the following example.
In root=UUID=855cd484-3852-4984-b568-ee0408c6b590, the UUID (855cd...) is a temporary placeholder that is replaced by the real UUID of “/” during the RN build. Do not make any changes to this parameter.
For example: The contents of /boot/grub/grub.conf.xvf5:
default=0
timeout=5
title onQ Red Hat Enterprise Linux (3.10.0-123.13.2.el7.x86_64)
root (hd0,0)
kernel /vmlinuz-3.10.0-123.13.2.el7.x86_64 ro root=UUID=855cd484-3852-4984-b568-ee0408c6b590 plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
initrd /initramfs-3.10.0-123.13.2.el7xen.x86_64.img
Validate that vmlinuz-3.10.0-123.13.2.el7.x86_64 and initramfs-3.10.0-123.13.2.el7xen.x86_64.img exist in the /boot folder, as shown in the example below:
# ls /boot/vmlinuz-3.10.0-123.13.2.el7.x86_64
/boot/vmlinuz-3.10.0-123.13.2.el7.x86_64
# ls /boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img
/boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img
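Beyond confirming that the files exist, you can also inspect the new initramfs to verify that the xen and ext modules were packed into it. This is only a sanity check, assuming the image name used above; if the output is empty, rerun the dracut command:
# lsinitrd /boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img | grep -E "xen|ext"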
 
3. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
Kernel parameters are not on one single line. Some file editors wrap long parameters.
You have a typo in the grub.conf or grub.conf.xvf5 file name.
You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
There is a mismatch, in the boot menu, between the kernel version and the initramfs version. If the kernel’s version does not match the contents of initramfs, the RN won’t boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: locate the initramfs and kernel names in the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root 1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root 2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root 52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30 2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root 165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5 | cpio -t | grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
 
4. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
 
Firewalld:
RHEL 7.0 introduces a new default firewall service, a dynamic firewall daemon known as firewalld, in place of the iptables service; however, the traditional iptables service is still supported if installed. For details, see the Red Hat Enterprise Linux 7 Security Guide. If you choose to disable firewalld, there is no need to configure firewalld firewall rules: simply skip this procedure.
Both the firewalld service and the iptables service use iptables commands to configure netfilter in the kernel to separate and filter network traffic. firewalld stores its configuration in various XML files in /usr/lib/firewalld/ and /etc/firewalld/.
firewalld uses network zones to separate networks into different zones. Based on the level of trust, you decide which network zone to place devices and traffic in. Each mutable network zone can have a different combination of firewall rules.
a. Verify that firewalld is in a running state.
b. Check the service status:
[root@RHEL70x64-17-167 services]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: inactive (dead)
c. Enable the service, if not already enabled:
[root@RHEL70x64-17-167 services]# systemctl enable firewalld
ln -s '/usr/lib/systemd/system/firewalld.service' '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
ln -s '/usr/lib/systemd/system/firewalld.service' '/etc/systemd/system/basic.target.wants/firewalld.service'
d. Start the service, if not already running:
[root@RHEL70x64-17-167 services]# systemctl start firewalld
e. On the PN, find the network interface that is used to communicate with onQ. In this example, that NIC is ens32.
[root@RHEL70x64-17-167 services]# ifconfig
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.20.17.167 netmask 255.255.248.0 broadcast 10.20.23.255
inet6 fe80::250:56ff:fe9d:2121 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:9d:21:21 txqueuelen 1000 (Ethernet)
RX packets 7115713 bytes 476287831 (454.2 MiB)
RX errors 0 dropped 149791 overruns 0 frame 0
TX packets 924966 bytes 1305413839 (1.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 10 bytes 980 (980.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10 bytes 980 (980.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
f. For the network interface that you identified above, find the network interface’s network zone. In this example, the network zone is work.
[root@RHEL70x64-17-167 services]# firewall-cmd --get-zone-of-interface=ens32
work
Determine your default zone. In the following example, the default zone is Public.
[root@RHEL70x64-17-167 services]# firewall-cmd --get-default-zone
public
g. Associate the zone(s) with the following firewall rules. The same rules can be applied to as many zones as needed. In the following example, the dcrm-node service is associated with the work zone for ens32. The dcrm-node.xml service definition is located in /usr/lib/firewalld/services.
[root@RHEL70x64-17-167 services]# firewall-cmd --add-service=dcrm-node --permanent --zone=work
success
h. Activate the latest firewall rules:
[root@RHEL70x64-17-167 services]# firewall-cmd --reload
success
Now the PN can communicate with onQ.
i. Set up the rules for the RN on the PN side.
The RN will be equipped with an eth0 interface, so apply the rules to eth0’s zone if it differs from the PN’s zone. The PN might not have an eth0 interface; in that case, the RN’s eth0 will be in the default zone.
Find eth0 network interface's network zone. In this example, it is not set:
[root@RHEL70x64-17-167 services]# firewall-cmd --get-zone-of-interface=eth0
no zone
Determine your default zone. In this example, the default zone is public. Because eth0 has no zone, dcrm-node is associated with the public zone:
[root@RHEL70x64-17-167 services]# firewall-cmd --get-default-zone
public
j. Associate the zone(s) with the following firewall rules. The same rules can be applied to many zones as needed:
[root@RHEL70x64-17-167 services]# firewall-cmd --add-service=dcrm-node --permanent --zone=public
success
k. Activate the latest firewall rules:
[root@RHEL70x64-17-167 services]# firewall-cmd --reload
success
Now the RN can communicate with onQ, mainly for self-tests.
l. Confirm the firewall rules. The public zone and work zone have TCP ports (5000/5990) and UDP port 5990 opened in this case.
[root@RHEL70x64-17-167 services]# iptables -L -n
Chain IN_public (2 references)
target prot opt source destination
IN_public_log all -- 0.0.0.0/0 0.0.0.0/0
IN_public_deny all -- 0.0.0.0/0 0.0.0.0/0
IN_public_allow all -- 0.0.0.0/0 0.0.0.0/0
Chain IN_public_allow (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000 ctstate NEW
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
Chain IN_work (0 references)
target prot opt source destination
IN_work_log all -- 0.0.0.0/0 0.0.0.0/0
IN_work_deny all -- 0.0.0.0/0 0.0.0.0/0
IN_work_allow all -- 0.0.0.0/0 0.0.0.0/0
Chain IN_work_allow (1 references)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000 ctstate NEW
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
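If you prefer a zone-oriented view over the raw iptables output, you can also list the active configuration per zone with firewall-cmd (the zone names below are the ones used in this example); the dcrm-node service should appear in the services line of each zone you associated it with:
[root@RHEL70x64-17-167 services]# firewall-cmd --zone=work --list-all
[root@RHEL70x64-17-167 services]# firewall-cmd --zone=public --list-all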
 
 
iptables:
a. Verify that udp 5990 and tcp 5000 and 5990 ports are open and above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which is reported in the line that appears after INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all -- anywhere anywhere
...
10 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (line 10 and line 1, respectively, in the step above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
# service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
 
 
 
 
5. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 11g database on Linux.
(RHEL 7.0/ESXi) To enroll an agent‑less Linux PN:
Use this procedure to enroll an agent‑less Linux PN running RHEL 7.0 and that’s being hosted by an ESXi server.
1. Install the PN proxies:
a. Enable ssh on the ESXi/ESX server.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the ESX Host radio button.
f. In the User Name and Password fields, provide the ESXi/ESX host credentials for any user with root privileges.
g. In the Server field, provide either the hostname or the IP address for the ESXi/ESX host, then GET LIST.
The onQ Portal queries the virtual machines hosted by the ESXi/ESX server, then displays the inventory.
h. In the virtual machine list, select the check boxes (or ALL button) for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP addresses that you reserved above.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, three Windows PNs are being enrolled, so only one static IP address (for the Windows PN proxy WPY_<onQhostname>_<ESXhostname>) is needed:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
i. From the ESXi/ESX host, power on the PN proxies.
j. Activate (aka verify) the PNs so that onQ can protect them.
Go to PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the PN is a Linux PN with XFS mount points, onQ cannot enroll those mount points automatically, so you must do so now; a quick way to identify them is sketched at the end of this step. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
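As a quick way to identify XFS mount points on a Linux PN (referenced in the activation step above), you can run the following standard, read-only commands on the PN; any file systems they report must be added to the PN’s configuration manually, per Linux Filesystem Format Requirements:
# mount -t xfs
# grep xfs /etc/fstab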
 
 
2. Create the grub boot menu. In order to boot the RN successfully, the PN needs to prepare the init ram disk image with the required drivers and legacy grub boot menu.
a. Verify OS version:
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
b. Make sure ext2/ext3/ext4 file systems utilities are installed.
# rpm -qa | grep e2fsprogs
e2fsprogs-libs-1.42.9-4.el7.x86_64
e2fsprogs-1.42.9-4.el7.x86_64
If not installed, do so now:
# yum install e2fsprogs
c. Generate the init ram disk with xen drivers and ext4 file system modules.
Print the kernel release:
# uname -r
3.10.0-123.13.2.el7.x86_64
Here 3.10.0-123.13.2.el7.x86_64 is the default kernel release; change it to match the PN’s kernel release:
# cd /boot
# mkdir -p /boot/grub
# dracut --force --filesystems "ext4 ext3" \
--add-drivers "xen:vbd xen:vif" \
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
d. Verify the legacy grub boot loader:
# vi /boot/grub/grub.conf.xvf5
Here 3.10.0-123.13.2.el7.x86_64 is the default kernel release; change the vmlinuz and initramfs names to match the PN’s kernel release. The kernel parameters must be on a single line; simply copy and paste from the following example.
In root=UUID=855cd484-3852-4984-b568-ee0408c6b590, the UUID (855cd...) is a temporary placeholder that is replaced by the real UUID of “/” during the RN build. Do not make any changes to this parameter.
For example: The contents of /boot/grub/grub.conf.xvf5:
default=0
timeout=5
title onQ Red Hat Enterprise Linux (3.10.0-123.13.2.el7.x86_64)
root (hd0,0)
kernel /vmlinuz-3.10.0-123.13.2.el7.x86_64 ro root=UUID=855cd484-3852-4984-b568-ee0408c6b590 plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
initrd /initramfs-3.10.0-123.13.2.el7xen.x86_64.img
Validate that vmlinuz-3.10.0-123.13.2.el7.x86_64 and initramfs-3.10.0-123.13.2.el7xen.x86_64.img exist in the /boot folder, as shown in the example below:
# ls /boot/vmlinuz-3.10.0-123.13.2.el7.x86_64
/boot/vmlinuz-3.10.0-123.13.2.el7.x86_64
# ls /boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img
/boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img
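Beyond confirming that the files exist, you can also inspect the new initramfs to verify that the xen and ext modules were packed into it. This is only a sanity check, assuming the image name used above; if the output is empty, rerun the dracut command:
# lsinitrd /boot/initramfs-3.10.0-123.13.2.el7xen.x86_64.img | grep -E "xen|ext"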
 
3. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
Kernel parameters are not on one single line. Some file editors wrap long parameters.
You have a typo in the grub.conf or grub.conf.xvf5 file name.
You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
There is a mismatch, in the boot menu, between the kernel version and the initramfs version. If the kernel’s version does not match the contents of initramfs, the RN won’t boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: locate the initramfs and kernel names in the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root 1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root 2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root 52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30 2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root 165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5 | cpio -t | grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
 
4. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
 
Firewalld:
RHEL 7.0 introduces a new default firewall service, a dynamic firewall daemon known as firewalld, in place of the iptables service; however, the traditional iptables service is still supported if installed. For details, see the Red Hat Enterprise Linux 7 Security Guide. If you choose to disable firewalld, there is no need to configure firewalld firewall rules: simply skip this procedure.
Both the firewalld service and the iptables service use iptables commands to configure netfilter in the kernel to separate and filter network traffic. firewalld stores its configuration in various XML files in /usr/lib/firewalld/ and /etc/firewalld/.
firewalld uses network zones to separate networks into different zones. Based on the level of trust, you decide which network zone to place devices and traffic in. Each mutable network zone can have a different combination of firewall rules.
a. Verify that firewalld is in a running state.
b. Check the service status:
[root@RHEL70x64-17-167 services]# systemctl status firewalld.service
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
Active: inactive (dead)
c. Enable the service, if not already enabled:
[root@RHEL70x64-17-167 services]# systemctl enable firewalld
ln -s '/usr/lib/systemd/system/firewalld.service' '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
ln -s '/usr/lib/systemd/system/firewalld.service' '/etc/systemd/system/basic.target.wants/firewalld.service'
d. Start the service, if not already running:
[root@RHEL70x64-17-167 services]# systemctl start firewalld
e. On the PN, find the network interface that is used to communicate with onQ. In this example, that NIC is ens32.
[root@RHEL70x64-17-167 services]# ifconfig
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.20.17.167 netmask 255.255.248.0 broadcast 10.20.23.255
inet6 fe80::250:56ff:fe9d:2121 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:9d:21:21 txqueuelen 1000 (Ethernet)
RX packets 7115713 bytes 476287831 (454.2 MiB)
RX errors 0 dropped 149791 overruns 0 frame 0
TX packets 924966 bytes 1305413839 (1.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 10 bytes 980 (980.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10 bytes 980 (980.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
f. For the network interface that you identified above, find the network interface’s network zone. In this example, the network zone is work.
[root@RHEL70x64-17-167 services]# firewall-cmd --get-zone-of-interface=ens32
work
Determine your default zone. In the following example, the default zone is Public.
[root@RHEL70x64-17-167 services]# firewall-cmd --get-default-zone
public
g. Associate the zone(s) with the following firewall rules. The same rules can be applied to as many zones as needed. In the following example, the dcrm-node service is associated with the work zone for ens32. The dcrm-node.xml service definition is located in /usr/lib/firewalld/services.
[root@RHEL70x64-17-167 services]# firewall-cmd --add-service=dcrm-node --permanent --zone=work
success
h. Activate the latest firewall rules:
[root@RHEL70x64-17-167 services]# firewall-cmd --reload
success
Now the PN can communicate with onQ.
i. Set up the rules for the RN on the PN side.
The RN will be equipped with an eth0 interface, so apply the rules to eth0’s zone if it differs from the PN’s zone. The PN might not have an eth0 interface; in that case, the RN’s eth0 will be in the default zone.
Find eth0 network interface's network zone. In this example, it is not set:
[root@RHEL70x64-17-167 services]# firewall-cmd --get-zone-of-interface=eth0
no zone
Determine your default zone. In this example, the default zone is public. Because eth0 has no zone, dcrm-node is associated with the public zone:
[root@RHEL70x64-17-167 services]# firewall-cmd --get-default-zone
public
j. Associate the zone(s) with the following firewall rules. The same rules can be applied to many zones as needed:
[root@RHEL70x64-17-167 services]# firewall-cmd --add-service=dcrm-node --permanent --zone=public
success
k. Activate the latest firewall rules:
[root@RHEL70x64-17-167 services]# firewall-cmd --reload
success
Now the RN can communicate with onQ, mainly for self-tests.
l. Confirm the firewall rules. The public zone and work zone have TCP ports (5000/5990) and UDP port 5990 opened in this case.
[root@RHEL70x64-17-167 services]# iptables -L -n
Chain IN_public (2 references)
target prot opt source destination
IN_public_log all -- 0.0.0.0/0 0.0.0.0/0
IN_public_deny all -- 0.0.0.0/0 0.0.0.0/0
IN_public_allow all -- 0.0.0.0/0 0.0.0.0/0
Chain IN_public_allow (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000 ctstate NEW
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
Chain IN_work (0 references)
target prot opt source destination
IN_work_log all -- 0.0.0.0/0 0.0.0.0/0
IN_work_deny all -- 0.0.0.0/0 0.0.0.0/0
IN_work_allow all -- 0.0.0.0/0 0.0.0.0/0
Chain IN_work_allow (1 references)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000 ctstate NEW
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990 ctstate NEW
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
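If you prefer a zone-oriented view over the raw iptables output, you can also list the active configuration per zone with firewall-cmd (the zone names below are the ones used in this example); the dcrm-node service should appear in the services line of each zone you associated it with:
[root@RHEL70x64-17-167 services]# firewall-cmd --zone=work --list-all
[root@RHEL70x64-17-167 services]# firewall-cmd --zone=public --list-all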
 
 
iptables:
a. Verify that udp 5990 and tcp 5000 and 5990 ports are open and above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which is reported in the line that appears after INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all -- anywhere anywhere
...
10 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (line 10 and line 1, respectively, in the step above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
# service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
 
 
 
 
 
5. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 11g database on Linux.
(RHEL 6.x/vCenter) To enroll an agent‑less Linux PN:
Use this procedure to enroll an agent‑less Linux PN running RHEL 6.x and that’s being managed by vCenter.
1. Install the PN proxies:
a. Enable ssh on each ESXi/ESX server in the vMotion cluster.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the vCenter radio button.
f. Provide the vCenter credentials for any user with root privileges and either the hostname or the IP address for vCenter, then GET LIST. The Add Protected Nodes via vCenter dialog appears.
The onQ Portal queries the virtual machines managed by vCenter, then displays the inventory.
g. Provide ESXi host credentials for any user with root privileges, specify proxy information for each proxy type, and select the check boxes for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP address(es) that you reserved above. If more than one IP per proxy type, provide a comma‑separated list.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, all PNs on all ESXi hosts in the vMotion cluster are being enrolled and using the maximum number of proxies allowed for each proxy type:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
h. From the ESXi/ESX host, power on the PN proxies.
i. Activate (aka verify) the PNs so that onQ can protect them.
Go to PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the PN is a Linux PN, onQ cannot enroll these XFS mount points automatically, so you must do so now. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
 
 
2. Copy and modify /boot/grub/menu.lst:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-279.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-279.el6.x86_64.img
 
Note:  If you want a custom boot menu, create a /boot/grub/grub.conf.xvf5 file. If .xvf1 through .xvf4 exist, delete them because those files have a higher priority than .xvf5.
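Whichever boot menu file you use, a quick sanity check is to confirm that the kernel and initramfs it names actually exist under /boot (the file names below match the example above; adjust them to your kernel release). A typo in either name is one of the most common causes of an RN that fails to boot (see the troubleshooting list below):
# ls /boot/vmlinuz-2.6.32-279.el6.x86_64
# ls /boot/initramfs-2.6.32-279.el6.x86_64.img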
 
3. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
Kernel parameters are not on one single line. Some file editors wrap long parameters.
You have a typo in the grub.conf or grub.conf.xvf5 file name.
You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
There is a mismatch, in the boot menu, between the kernel version and the initramfs version. If the kernel’s version does not match the contents of initramfs, the RN won’t boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: locate the initramfs and kernel names in the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root 1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root 2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root 52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30 2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root 165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5 | cpio -t | grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
 
4. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
 
iptables:
a. Verify that udp 5990 and tcp 5000 and 5990 ports are open and above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which is reported in the line that appears after INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all -- anywhere anywhere
...
10 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (line 10 and line 1, respectively, in the step above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
# service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
 
 
 
 
5. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 11g database on Linux.
(RHEL 6.x/ESXi) To enroll an agent‑less Linux PN:
Use this procedure to enroll an agent‑less Linux PN running RHEL 6.x and that’s being hosted by an ESXi server.
1. Install the PN proxies:
a. Enable ssh on the ESXi/ESX server.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the ESX Host radio button.
f. In the User Name and Password fields, provide the ESXi/ESX host credentials for any user with root privileges.
g. In the Server field, provide either the hostname or the IP address for the ESXi/ESX host, then GET LIST.
The onQ Portal queries the virtual machines hosted by the ESXi/ESX server, then displays the inventory.
h. In the virtual machine list, select the check boxes (or ALL button) for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP addresses that you reserved above.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, three Windows PNs are being enrolled, so only one static IP address (for the Windows PN proxy WPY_<onQhostname>_<ESXhostname>) is needed:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
i. From the ESXi/ESX host, power on the PN proxies.
j. Activate (aka verify) the PNs so that onQ can protect them.
Go to PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the PN is a Linux PN, onQ cannot enroll these XFS mount points automatically, so you must do so now. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
 
 
2. Copy and modify /boot/grub/menu.lst:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-279.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
initrd /initramfs-2.6.32-279.el6.x86_64.img
 
Note:  If you want a custom boot menu, create a /boot/grub/grub.conf.xvf5 file. If .xvf1 through .xvf4 exist, delete them because those files have a higher priority than .xvf5.
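Whichever boot menu file you use, a quick sanity check is to confirm that the kernel and initramfs it names actually exist under /boot (the file names below match the example above; adjust them to your kernel release). A typo in either name is one of the most common causes of an RN that fails to boot (see the troubleshooting list below):
# ls /boot/vmlinuz-2.6.32-279.el6.x86_64
# ls /boot/initramfs-2.6.32-279.el6.x86_64.img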
 
3. Wait for the RN to build, then perform a self‑test.
Troubleshooting RN Build or Self‑test Problems
Mistakes with the grub boot menu enforcement can prevent the RN from booting. The following list represents the most common errors.
Kernel parameters are not on one single line. Some file editors wrap long parameters.
You have a typo in the grub.conf or grub.conf.xvf5 file name.
You have a typo in the kernel file name or the initramfs file name, or these files don’t exist.
There is a mismatch, in the boot menu, between the kernel version and the initramfs version. If the kernel’s version does not match the contents of initramfs, the RN won’t boot. The system could have more than one kernel installed:
7.0:
vmlinuz-3.10.0-123.13.2.el7.x86_64
should match
initramfs-3.10.0-123.13.2.el7xen.x86_64.img
6.x:
vmlinuz-2.6.32-279.el6.x86_64
should match
initramfs-2.6.32-279.el6.x86_64.img
5.x:
vmlinuz-2.6.18-371.el5xen
should match
initrd-2.6.18-371.el5xen.img.5
To find the driver versions packed inside the init ram file system (initramfs) of the boot menu: locate the initramfs and kernel names in the boot menu prepared for the RN (you’ll find it under /boot), then use the following commands to inspect the contents of initramfs. For example:
RHEL 6.x or 7.0:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-3.10.0-123.el7.x86_64 ro root=UUID=9002ec24-fb30-4d16-8a78-b352a807e82b plymouth.enable=0 console=hvc0 loglvl=all cgroup_disable=memory sync_console console_to_ring earlyprintk=xen nomodeset net.ifnames=0 biosdevname=0 LANG=en_US.UTF-8
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initramfs-3.10.0-123.el7xen.x86_64.img
# lsinitrd /boot/initramfs-3.10.0-123.el7xen.x86_64.img | grep modules
-rw-r--r-- 1 root root 1446 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep
-rw-r--r-- 1 root root 2450 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.dep.bin
-rw-r--r-- 1 root root 52 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.devname
-rw-r--r-- 1 root root 82512 Jun 30 2014 usr/lib/modules/3.10.0-123.el7.x86_64/modules.order
-rw-r--r-- 1 root root 165 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.softdep
-rw-r--r-- 1 root root 28132 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols
-rw-r--r-- 1 root root 34833 Jan 14 07:06 usr/lib/modules/3.10.0-123.el7.x86_64/modules.symbols.bin
RHEL 5.x:
# grep kernel /boot/grub/grub.conf.xvf5
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
# grep initrd /boot/grub/grub.conf.xvf5
initrd /initrd-2.6.18-371.el5xen.img.5
# zcat /tmp/initrd-2.6.18-371.el5xen.img.5|cpio -t|grep -E "xen|ext"
16524 blocks
lib/ext3.ko
lib/xennet.ko
lib/xenblk.ko
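If the xen drivers (xennet, xenblk) turn out to be missing from a RHEL 5.x initrd, one way to rebuild an initrd that includes them is mkinitrd's --with option. This is a sketch only: the file name simply mirrors the boot menu above, and you should keep a copy of the original image before overwriting it:
# cp /boot/initrd-2.6.18-371.el5xen.img.5 /boot/initrd-2.6.18-371.el5xen.img.5.bak
# mkinitrd -f --with=xennet --with=xenblk /boot/initrd-2.6.18-371.el5xen.img.5 2.6.18-371.el5xen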
 
4. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
 
5. Configure iptables:
a. Verify that UDP port 5990 and TCP ports 5000 and 5990 are open and appear above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which appears in the first rule listed under Chain INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all -- anywhere anywhere
...
10 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (10 and RH-Firewall-1-INPUT, respectively, in the output above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
# service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
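The iptables commands above apply to RHEL 5.x/6.x. If the PN runs RHEL 7 with firewalld instead of the iptables service, the equivalent, sketched here under the assumption that the default zone is in use, would be:
# firewall-cmd --permanent --add-port=5990/udp
# firewall-cmd --permanent --add-port=5990/tcp
# firewall-cmd --permanent --add-port=5000/tcp
# firewall-cmd --reload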
 
 
 
 
6. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 11g database on Linux.
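The RMAN scripts referenced above are covered in Back up and restore Oracle 11g database on Linux. As background only, not the onQ scripts themselves, a hot (online) RMAN backup generally boils down to the following; it assumes the database runs in ARCHIVELOG mode and that you connect with appropriate credentials:
# rman target / <<EOF
BACKUP DATABASE PLUS ARCHIVELOG;
EOF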
(RHEL 5.x/vCenter) To enroll an agent‑less Linux PN:
Use this procedure to enroll an agent‑less Linux PN running RHEL 5.x and that’s being managed by vCenter.
1. Install the PN proxies:
a. Enable SSH on each ESXi/ESX server in the vMotion cluster (see the command-line sketch at the end of this step).
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the vCenter radio button.
f. Provide the vCenter credentials for any user with root privileges and either the hostname or the IP address for vCenter, then GET LIST. The Add Protected Nodes via vCenter dialog appears.
The onQ Portal queries the virtual machines managed by vCenter, then displays the inventory.
g. Provide ESXi host credentials for any user with root privileges, specify proxy information for each proxy type, and select the check boxes for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP address(es) that you reserved above. If you reserved more than one IP address per proxy type, provide a comma‑separated list.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, all PNs on all ESXi hosts in the vMotion cluster are being enrolled and using the maximum number of proxies allowed for each proxy type:
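As an illustration only (hypothetical addresses, not taken from the actual screen), such a maximum configuration, with three proxies per type on a /24 network, might be entered as:
Windows proxy address: 10.20.30.41,10.20.30.42,10.20.30.43
Linux proxy address: 10.20.30.51,10.20.30.52,10.20.30.53
Proxy subnet mask: 255.255.255.0
Proxy gateway: 10.20.30.1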
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
h. From the ESXi/ESX host, power on the PN proxies.
i. Activate (aka verify) the PNs so that onQ can protect them.
Go to the PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the Linux PN has XFS mount points, onQ cannot enroll those mount points automatically, so you must do so now. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
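Regarding sub-step (a): SSH is typically enabled from the vSphere Client (the host's Security Profile > Services settings) or, if you have access to the ESXi Shell or DCUI, from the command line. A sketch using vim-cmd, which is available on ESXi hosts:
# vim-cmd hostsvc/enable_ssh     # enable the SSH service
# vim-cmd hostsvc/start_ssh      # start the SSH service now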
 
 
2. Copy and modify /boot/grub/menu.lst:
default=0
timeout=5
hiddenmenu
title Red Hat Enterprise Linux Server by Quorum onQ (2.6.18-371.el5xen)
root (hd0,0)
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
initrd /initrd-2.6.18-371.el5xen.img.5
 
3. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
 
4. Configure iptables:
a. Verify that UDP port 5990 and TCP ports 5000 and 5990 are open and appear above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which appears in the first rule listed under Chain INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all -- anywhere anywhere
...
10 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (10 and RH-Firewall-1-INPUT, respectively, in the output above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
# service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
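After the firewall is configured, you can optionally spot-check from another host on the PN's network that the TCP ports are reachable. This sketch assumes the nc (netcat) utility is available and uses a placeholder address; UDP 5990 cannot be reliably verified this way:
# nc -vz <PN-IP-address> 5000
# nc -vz <PN-IP-address> 5990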
 
 
 
 
5. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 11g database on Linux.
(RHEL 5.x/ESXi) To enroll an agent‑less Linux PN:
Use this procedure to enroll an agent‑less Linux PN running RHEL 5.x and that’s being hosted by an ESXi server.
1. Install the PN proxies:
a. Enable SSH on the ESXi/ESX server.
b. Reserve 1 to 3 unique static IP addresses for each PN proxy type (Linux and Windows). The more proxies you configure, the more concurrent backups onQ can perform.
c. Log on to the HA’s onQ Portal.
d. Click the PROTECTION CONFIG tab > double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
e. In the Server Type field, select the ESX Host radio button.
f. In the User Name and Password fields, provide the ESXi/ESX host credentials for any user with root privileges.
g. In the Server field, provide either the hostname or the IP address for the ESXi/ESX host, then GET LIST.
The onQ Portal queries the virtual machines hosted by the ESXi/ESX server, then displays the inventory.
h. In the virtual machine list, select the check boxes (or ALL button) for the PNs that you want to enroll.
If an expected virtual machine does not appear in the list, it’s likely that the virtual machine does not have VMware Tools installed.
Provide the network properties for each proxy. Use the unique IP addresses that you reserved above.
Windows proxy address. The static IP address that you reserved for the Windows PN proxy.
Linux proxy address. The static IP address that you reserved for the Linux PN proxy.
Proxy subnet mask. The PN proxy’s subnet mask.
Proxy gateway. The PN proxy’s gateway.
Proxy VM Data store. The datastore in which the proxy VM’s files are stored.
Proxy VM Network. Your network configuration can include a vCenter-level Distributed Virtual Switch (DVS) or an ESXi host-level virtual switch. When you enroll an ESXi host, all available networks that are visible to that host are listed and available for selection.
In the following example, three Windows PNs are being enrolled, so only one static IP address (for the Windows PN proxy WPY_<onQhostname>_<ESXhostname>) is needed:
Click the ENROLL button, then OKAY.
The PNs that you selected appear in the Protected Nodes list; however, they are unverified as evidenced by the status in the PROTECTED NODES page > Protection Disabled column.
i. From the ESXi/ESX host, power on the PN proxies.
j. Activate (aka verify) the PNs so that onQ can protect them.
Go to the PROTECTION CONFIG tab, select the PN, then click the MODIFY button.
Specify values for the node parameters, then SAVE. The act of saving a PN’s configuration instructs onQ to activate that PN’s configuration. If the Linux PN has XFS mount points, onQ cannot enroll those mount points automatically, so you must do so now. See Linux Filesystem Format Requirements.
The onQ Portal groups the PNs by ESXi server. If you do not want these PNs in such a group, clear the default Group Name field to remove the PNs from this group and place them in the shared pool.
 
 
2. Copy and modify /boot/grub/menu.lst:
default=0
timeout=5
hiddenmenu
title Red Hat Enterprise Linux Server by Quorum onQ (2.6.18-371.el5xen)
root (hd0,0)
kernel /vmlinuz-2.6.18-371.el5xen ro root=/dev/xvda1 rd_NO_LUKS rd_NO_MD rhgb crashkernel=auto rd_NO_LVM
initrd /initrd-2.6.18-371.el5xen.img.5
 
3. Log on to the PN and open the following ports on the firewall in order for onQ to communicate with the PN.
UDP port 5990
TCP ports 5000 and 5990
 
4. Configure iptables:
a. Verify that UDP port 5990 and TCP ports 5000 and 5990 are open and appear above the REJECT line:
# iptables -L -n | grep -E "5990|5000"
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:5990
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5990
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
b. If these ports are not open, open them.
Find the input chain name, which appears in the first rule listed under Chain INPUT:
# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 RH-Firewall-1-INPUT all -- anywhere anywhere
...
10 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Using the line number of the REJECT line and the chain name (10 and RH-Firewall-1-INPUT, respectively, in the output above), insert the onQ ports:
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5990 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -m state --state NEW -p tcp --dport 5000 -j ACCEPT
# iptables -I RH-Firewall-1-INPUT 10 -p udp --dport 5990 -j ACCEPT
c. Save and restart iptables.
Afterward, verify that the ports are open and above the REJECT line as outlined earlier.
# service iptables save
Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
# service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n [ OK ]
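Note that service iptables save writes the rules to /etc/sysconfig/iptables, but they are re-applied at boot only if the iptables service is set to start at boot. If you are unsure whether it is, the following sketch enables and confirms it:
# chkconfig iptables on
# chkconfig --list iptables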
 
 
 
 
5. (Oracle database) Install RMAN scripts.
If the PN has an Oracle database, install the RMAN scripts so that onQ can execute a hot backup of your database as outlined in Back up and restore Oracle 11g database on Linux.
To re‑enroll a proxy PN:
Use this procedure to re-enroll (aka reinstall) the Linux PN proxy, the Windows PN proxy, or both. Perform this procedure on each vCenter server or ESX/ESXi host enrolled with the onQ.
1. Log on to the HA’s onQ Portal.
2. Click the PROTECTION CONFIG tab.
3. Click the double‑plus button (++).
The Add Protected Nodes via Host dialog appears.
4. Do one of the following, depending on the enrollment type:
In the Server Type field, select the ESX Host radio button.
In the Server Type field, select the vCenter radio button.
5. Provide the vCenter/ESXi root user ID and password and either the vCenter/ESXi hostname or IP address, then GET LIST.
(vCenter enrollment) The Add Protected Nodes via vCenter dialog appears.
(ESXi Host enrollment) The Add Protected Nodes via vCenter dialog appears. The PN check boxes for enrolled PNs are greyed out.
6. In the Proxy VM Network field, reattach the virtual network adapter.
7. Select Re-install Windows proxy or Re-install Linux proxy, depending on the operating system of the proxy you’re trying to reinstall, then ENROLL.
8. Observe the messages that appear in the pop-up window to verify that the proxy reinstalled successfully.