oVirt 4.2 Self-Hosted Engine Install Notes

I installed oVirt for the first time a little over a year ago and everything went as planned, except I didn't RTFM closely enough to realize the self-hosted engine requires its own storage domain.  So with that bit of info I wanted to give it another go, which resulted in multiple installation problems for me: inconsistent deployments, NFS permission battles, and typing hosted-engine --vm-shutdown entirely too much.  Here are some notes from my install, which is currently working at a base level with iSCSI MPIO and allows self-hosted engine HA failover across two physical hosts.

Lab Details

Physical Hosts

ovhost01 – Dell R310, 32GB flash drive for OS, 16GB RAM, 2 embedded 1Gbps NICs, 2x1Gbps PCIe adapter
ovhost02 – Dell R310, 32GB flash drive for OS, 16GB RAM, 2 embedded 1Gbps NICs, 2x1Gbps PCIe adapter
*oVirt Node will crash during install if the local drive is smaller than ~60GB, so CentOS 7 was used for the base OS

Storage

NFS – Synology RS815:  one shared folder for self-hosted oVirt engine, one shared folder for oVirt ISO domain
iSCSI – FreeNAS:  two target LUNs in separate subnets

Networks

em1 = oVirt Management vlan5 (trunk port)
em2 = server_2, server_3, client_9 (trunk port)
p2p1 = iscsi_10 (access port)
p2p2 = iscsi_11 (access port)
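
The management network rides a tagged VLAN on em1 (em1.5 in the host setup below), which I configured with nmtui. For reference, a rough nmcli equivalent is sketched here; the 192.0.2.x address, gateway, and DNS values are placeholders, so substitute your own management subnet:

        # create the tagged VLAN interface on top of em1 (VLAN ID 5)
        nmcli con add type vlan con-name em1.5 ifname em1.5 dev em1 id 5
        # static addressing -- the address, gateway, and DNS values are placeholders
        nmcli con mod em1.5 ipv4.method manual ipv4.addresses 192.0.2.11/24 \
            ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.1
        nmcli con up em1.5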

DNS

ovhost01.ad.infiniteloop.io, ovhost02.ad.infiniteloop.io, and ovengine01.ad.infiniteloop.io DNS records created
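
Before starting the deploy it is worth confirming all three records resolve from the hosts; a quick loop using host (from bind-utils, if it is not already installed):

        for h in ovhost01 ovhost02 ovengine01; do host $h.ad.infiniteloop.io; done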

ovhost01 Setup

  1. Install CentOS 7
  2. Configure the management network as a tagged VLAN interface (em1.5) using nmtui
  3. Ping tests from em1.5 to 8.8.8.8
  4. yum update -y
  5. yum install screen tmux -y
  6. yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm -y
  7. yum install -y ovirt-hosted-engine-setup
  8. yum install -y ovirt-engine-appliance
  9. screen -L
  10. setenforce 0
    1. I could not get a successful hosted engine deployment until I temporarily disabled SELinux (re-enabled after the deploy; see the sketch after this list)
  11. sudo hosted-engine --deploy --noansible
    1. I had better luck using --noansible; during several fresh deployments the storage section was skipped over completely when the option was not included
  12. Point to rs815.ad.infiniteloop.io:/volume2/ov-engine01 as the storage location for the engine
    1. If a test mount through mount.nfs shows the user and group IDs as nobody on your hosts, revisit the Synology export config, particularly verifying that all_squash with anonuid/anongid 36 is set (see the Synology RS815 NFS Configuration section below)
  13. https://ovengine01.ad.infiniteloop.io/ovirt-engine
  14. Download CA certificate and import to workstation
  15. Install virt-viewer to open VM console windows
  16. Configure networks and add at least one data domain to cluster
  17. Compute > Clusters > yourCluster > Logical Networks > Manage Networks > uncheck required next to any networks you have created
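
Once the engine portal is reachable, a quick sanity check from ovhost01 before moving on; hosted-engine --vm-status ships with ovirt-hosted-engine-setup, and setenforce 1 simply re-enables SELinux enforcing mode after the step 10 workaround:

        hosted-engine --vm-status   # engine VM should be reported up with good health
        setenforce 1                # put SELinux back in enforcing mode post-deploy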


ovhost02 Setup

  1. Install CentOS 7
  2. Configure the management network as a tagged VLAN interface (em1.5) using nmtui
  3. Ping tests from em1.5 to 8.8.8.8
  4. Ping test from p2p1 and p2p2 to FreeNAS NICs in same VLAN
  5. yum update -y
  6. yum install screen tmux -y
  7. yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm -y
  8. yum install -y ovirt-hosted-engine-setup
  9. Add ovhost02 to cluster
    1. Under the Hosted Engine menu select Deploy (see the failover check after this list)
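
With both hosts in the cluster, hosted-engine --vm-status run on either host should now list both of them with a score. For an optional failover test (my own check, not part of the official procedure), local maintenance on the host currently running the engine pushes the engine VM to the other host:

        hosted-engine --vm-status                    # both hosts listed, each with a score
        hosted-engine --set-maintenance --mode=local # run on the host hosting the engine
        hosted-engine --set-maintenance --mode=none  # clear maintenance once it has moved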


Miscellaneous


Synology RS815 NFS Configuration

  1. Create shared folders in the Synology web console [ov-iso01, ov-engine01]
  2. Set Squash to No mapping and add your subnet as an allowed NFS client
  3. ssh into RS815
  4. sudo chown 36:36 -R /yourvolume/ov-iso01
  5. sudo chown 36:36 -R /yourvolume/ov-engine01
  6. sudo cp /etc/exports /etc/exports.bak
  7. sudo vi /etc/exports
  8. For the newly created exports change
    1. anonuid to 36
    2. anongid to 36
    3. no_root_squash to all_squash
  9. Synology Web Console > Control Panel > Info Center > Service > disable NFS Service > Save > enable NFS service > Save
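
To confirm the export behaves before pointing oVirt at it, I test mount it from a host and check that new files land as 36:36 (vdsm:kvm) instead of nobody; /mnt/nfstest is just a scratch directory for the test:

        mkdir -p /mnt/nfstest
        mount -t nfs rs815.ad.infiniteloop.io:/volume2/ov-engine01 /mnt/nfstest
        touch /mnt/nfstest/permtest
        ls -ln /mnt/nfstest         # owner and group should show 36 36, not nobody
        rm /mnt/nfstest/permtest
        umount /mnt/nfstest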


oVirt iSCSI Round Robin with FreeNAS targets

  1. Set host to maintenance mode
  2. vi /etc/multipath.conf
  3. Add # VDSM PRIVATE as the second line of the file so VDSM does not regenerate it
  4. Under the devices section add (full-file layout sketched after this list)
        device {
            vendor "FreeNAS"
            product "iSCSI Disk"
            path_grouping_policy multibus
            path_selector "round-robin 0"
            rr_min_io_rq 100
            rr_weight "uniform"
        }
    
  5. Reboot host
  6. Activate host after reboot
    1. multipath -ll will not return results while the host is still in maintenance mode after the reboot
  7. Verify status with multipath -ll
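
For context, the finished file ends up looking roughly like the sketch below. VDSM generates /etc/multipath.conf and will rewrite it unless the # VDSM PRIVATE marker is present, and the device entry from step 4 has to live inside a devices { } section:

        # VDSM REVISION x.y   (leave whatever revision line VDSM wrote in place)
        # VDSM PRIVATE

        defaults {
            # ... VDSM-generated defaults, left untouched ...
        }

        devices {
            # FreeNAS device { ... } block from step 4 goes here
        }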


oVirt Host Missing Network Adapters

  1. Power off oVirt host missing network adapters
    1. Each time I saw this problem the affected host was cycling in and out of a non-responsive state
  2. Set host to maintenance mode
  3. Remove host from cluster
  4. Remove any required networks from cluster
  5. Re-add host back to cluster


oVirt Power Management with iDRAC6

  1. Compute > Hosts > targetHost > set to maintenance mode
  2. Select enable power management
  3. Host Power Management menu
    1. Address:  iDRAC IP
    2. Username:  yourAdmin
    3. Password:  yourPW
    4. Type:  drac5
    5. Slot:  <empty>
    6. Options:  cmd_prompt=admin1->,login_timeout=10
    7. Secure:  enabled
  4. Test connection
  5. Activate host


oVirt Nested Virtualization (run all commands per host)

  1. Compute > Hosts > targetHost > set to maintenance mode
  2. ssh into host
  3. echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
  4. yum install vdsm-hook-nestedvt vdsm-hook-macspoof -y
  5. Reboot targetHost
  6. Refresh targetHost capabilities
  7. Shut down any VM that should be able to run as a virtualization host and power it back up
    1. Verify inside the VM with lscpu | grep vmx (host-side check in the sketch after this list)
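
After the reboot the host-side module parameter should already report Y; the guest-side lscpu check from step 7 only works once the VM has been power cycled. Both commands below are standard CentOS 7, assuming the Intel KVM module is loaded:

        cat /sys/module/kvm_intel/parameters/nested   # on the host: expect Y
        lscpu | grep vmx                               # inside the guest after its power cycle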