oVirt 4.2 Self-Hosted Engine Install Notes
I installed oVirt for the first time a little over a year ago and everything went as planned, except I didn’t RTFM closely enough to realize the self-hosted engine requires its own storage domain. With that bit of info I wanted to give it another go, which resulted in multiple installation problems for me: inconsistent deployments, NFS permission battles, and typing hosted-engine --vm-shutdown entirely too much. Here are some notes from my install, which is currently working at a base level with iSCSI MPIO and allowing self-hosted engine HA failover across two physical hosts.
Lab Details
Physical Hosts
ovhost01 – Dell R310, 32GB flash drive for OS, 16GB RAM, 2 embedded 1Gbps NICs, 2x1Gbps PCIe adapter
ovhost02 – Dell R310, 32GB flash drive for OS, 16GB RAM, 2 embedded 1Gbps NICs, 2x1Gbps PCIe adapter
*oVirt Node will crash during install if the local drive is smaller than ~60GB, so CentOS 7 was used for the base OS
Storage
NFS – Synology RS815: one shared folder for self-hosted oVirt engine, one shared folder for oVirt ISO domain
iSCSI – FreeNAS: two target LUNs in separate subnets
Networks
em1 = oVirt Management vlan5 (trunk port)
em2 = server_2, server_3, client_9 (trunk port)
p2p1 = iscsi_10 (access port)
p2p2 = iscsi_11 (access port)
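The tagged management interface above (em1.5) was built with nmtui on each host; for reference, a roughly equivalent nmcli form looks like the sketch below, where the address, gateway, and DNS server are placeholders for your own management subnet:

    nmcli con add type vlan con-name em1.5 dev em1 id 5 \
        ipv4.method manual ipv4.addresses 10.0.5.11/24 \
        ipv4.gateway 10.0.5.1 ipv4.dns 10.0.5.10
    nmcli con up em1.5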
DNS
ovhost01.ad.infiniteloop.io, ovhost02.ad.infiniteloop.io, and ovengine01.ad.infiniteloop.io DNS records created
ovhost01 Setup
- Install CentOS 7
- Configured management network as tagged VLAN traffic (em1.5) using nmtui
- Ping tests from em1.5 to 8.8.8.8
- yum update -y
- yum install screen tmux -y
- yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm -y
- yum install -y ovirt-hosted-engine-setup
- yum install -y ovirt-engine-appliance
- screen -L
- setenforce 0
- I could not get a successful hosted engine deployment until I temporarily disabled SELinux
- sudo hosted-engine --deploy --noansible
- I had better luck using --noansible; during several fresh deployments the storage section was skipped over completely when the option was not included
- Point to rs815.ad.infiniteloop.io:/volume2/os-engine01 as storage location for engine
- if a test mount through mount.nfs shows the user and group ID as nobody on your hosts, you need to revisit the Synology export config, particularly verifying that all_squash is set with anonuid/anongid of 36 (see the test-mount sketch after this list)
- https://ovengine01.ad.infiniteloop.io/ovirt-engine
- Download CA certificate and import to workstation
- Install virt-viewer to open VM console windows
- Configure networks and add at least one data domain to cluster
- Compute > Clusters > yourCluster > Logical Networks > Manage Networks > uncheck required next to any networks you have created
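Before pointing the deploy script at the Synology export, a quick test mount saves a lot of grief. This sketch uses the export path from this lab and an arbitrary mount point, so adjust both for your environment; the vdsm user exists once ovirt-hosted-engine-setup is installed:

    mkdir -p /mnt/nfscheck
    mount -t nfs rs815.ad.infiniteloop.io:/volume2/os-engine01 /mnt/nfscheck
    ls -l /mnt/nfscheck                                   # should list vdsm kvm (36:36), not nobody
    sudo -u vdsm touch /mnt/nfscheck/write-test && rm /mnt/nfscheck/write-test
    umount /mnt/nfscheck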
ovhost02 Setup
- Install CentOS 7
- Configured management (em1.5)
- Ping tests from em1.5 to 8.8.8.8
- Ping test from p2p1 and p2p2 to FreeNAS NICs in same VLAN
- yum update -y
- yum install screen tmux -y
- yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm -y
- yum install -y ovirt-hosted-engine-setup
- Add ovhost02 to cluster
- Under Hosted Engine menu select deploy
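With both hosts deployed, HA state can be checked from a shell on either one using the hosted-engine tool installed above. Both hosts should be listed with a positive HA score, and global maintenance is handy when patching the engine VM:

    hosted-engine --vm-status                      # both hosts should appear with an HA score
    hosted-engine --set-maintenance --mode=global  # pause HA agents before engine maintenance
    hosted-engine --set-maintenance --mode=none    # resume normal HA behavior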
Miscellaneous
Synology RS815 NFS Configuration
- Create shared folders in the Synology web console [ov-iso01, ov-engine01]
- Set squash = no mapping and add your subnet as an allowed NFS client
- ssh into RS815
- sudo chown 36:36 -R /yourvolume/ov-iso01
- sudo chown 36:36 -R /yourvolume/ov-engine01
- sudo cp /etc/exports /etc/exports.bak
- sudo vi /etc/exports
- For the newly created folders change the following (an example of the resulting export line follows this list)
- anonuid to 36
- anongid to 36
- no_root_squash to all_squash
- Synology Web Console > Control Panel > Info Center > Service > disable NFS Service > Save > enable NFS service > Save
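For reference, the edited line for one of the shares ends up looking roughly like the sketch below; the client subnet and the rest of the option list are whatever DSM generated for your share, and only all_squash, anonuid, and anongid are the hand edits:

    /volume2/ov-engine01    10.0.5.0/24(rw,async,no_wdelay,all_squash,insecure_locks,sec=sys,anonuid=36,anongid=36)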
oVirt iSCSI Round Robin with FreeNAS targets
- Set host to maintenance mode
- vi /etc/multipath.conf
- add # VDSM PRIVATE to second line of file
- under the devices section add
      device {
          vendor                 "FreeNAS"
          product                "iSCSI Disk"
          path_grouping_policy   multibus
          path_selector          "round-robin 0"
          rr_min_io_rq           100
          rr_weight              "uniform"
      }
- Reboot host
- Activate host after reboot
- multipath -ll will not return results while the host is in maintenance mode after reboot
- Verify status with multipath -ll
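Once the host is active again, a quick grep against multipath -ll confirms the new policy actually applies to the FreeNAS LUNs (each LUN's path group should report round-robin 0 rather than the default selector):

    multipath -ll | grep -i round-robin    # expect policy='round-robin 0' on each FreeNAS LUN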
oVirt Host Missing Network Adapters
- Power off oVirt host missing network adapters
- Each time I saw this problem, the affected host was looping through cycles of being set to non-responsive
- Set host to maintenance mode
- Remove host from cluster
- Remove any required networks from cluster
- Re-add host back to cluster
oVirt Power Management with iDRAC6
- Compute > Hosts > targetHost > set to maintenance mode
- Select enable power management
- Host Power Management menu
- Address: iDRAC IP
- Username: yourAdmin
- Password: yourPW
- Type: drac5
- Slot: <empty>
- Options: cmd_prompt=admin1->,login_timeout=10
- Secure: enabled
- Test connection
- Activate host
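The same drac5 fence agent the engine calls can be run by hand from a host shell, which makes prompt and timeout problems easier to debug than the UI test button. This is a sketch with a placeholder iDRAC address, and it assumes the fence-agents-drac5 package is present on the host (yum install fence-agents-drac5 if it is not); the Secure option in the UI corresponds to --ssh here:

    fence_drac5 --ip=192.168.5.21 --username=yourAdmin --password=yourPW \
        --ssh --command-prompt="admin1->" --login-timeout=10 --action=status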
oVirt Nested Virtualization (run all commands per host)
- Compute > Hosts > targetHost > set to maintenance mode
- ssh into host
- echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
- yum install vdsm-hook-nestedvt vdsm-hook-macspoof -y
- Reboot targetHost
- Refresh targetHost capabilities
- Shut down any VM that should be able to run as a virtualization host and power it back up
- lscpu | grep vmx inside that VM to confirm the vmx flag is exposed
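The host side can be verified on its own right after the reboot; the kvm_intel module should report nesting enabled before any guest is touched:

    cat /sys/module/kvm_intel/parameters/nested    # should print Y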