Proxmox 4.4 cluster with iSCSI MPIO, trunk ports, and USB boot drive
First things first – Rufus-formatted USB sticks will not work properly with the Proxmox ISO. I confirmed it myself after a boot failure and then reading the manual.
I started this project because I wanted to poke around with a few hypervisors I wouldn’t normally deal with and landed on oVirt and Proxmox. I ran into challenges with oVirt detecting an incorrect number of iSCSI paths, not seeing certain iSCSI paths at all, and moving the oVirt Hosted Engine disk proved impossible with the documentation I found. I believe I could have gotten around some of the iSCSI issues with an external oVirt Engine instead of the Hosted Engine model, but I moved on to Proxmox. The Proxmox install was pretty quick to set up, but I did have to cobble together a few articles for iSCSI MPIO, which I detail below. Source 1, Source 2
A few notes – the initial Proxmox 4.4 network config creates a bridge, which I replace with custom interface settings, using trunk port settings on the switch for each NIC. The setup assumes you have two PVE nodes, trunk ports configured, and iSCSI storage with NICs on two separate subnets.
eth0 – initial management NIC, cluster sync, VM migrations, and disk migration
eth1 – used for VM network traffic, could be multiple VLANs
eth2 – used for iSCSI traffic, path 1
eth3 – used for iSCSI traffic, path 2
Proxmox Node 1 (pve1)
1) After installing to a USB drive, the hypervisor can’t boot properly on its own and stops at a recovery prompt because the LVM volumes aren’t active yet
a) type vgchange -ay
b) type exit
c) Proxmox should continue booting
2) Login as root
3) vi /boot/grub/grub.cfg
a) Search for Proxmox
b) Under the menu entry, look for /boot/vmlinuz-4.4.35-1-pve root=/dev/mapper/pve-root ro quiet
c) Add rootdelay=10 between ro and quiet, then save the file (the edited line is shown after these steps)
d) Reboot and verify PVE is booting without interaction
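For reference, the edited kernel line in grub.cfg should end up looking like this (kernel version taken from the step above):

/boot/vmlinuz-4.4.35-1-pve root=/dev/mapper/pve-root ro rootdelay=10 quiet

Keep in mind that grub.cfg is regenerated on kernel updates, so if you want the delay to survive upgrades, adding rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and running update-grub is an option.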
Network Config (pve1)
1) login to console as root
2) cp /etc/network/interfaces /etc/network/interfaces.orig
3) ifdown -a
4) Update /etc/network/interfaces with
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto eth3
iface eth3 inet manual

auto eth0.5
iface eth0.5 inet static
    address 192.168.5.21
    netmask 255.255.255.0
    gateway 192.168.5.1

auto eth2.10
iface eth2.10 inet static
    address 192.168.10.21
    netmask 255.255.255.0
    gateway 192.168.10.1

auto eth3.11
iface eth3.11 inet static
    address 192.168.11.21
    netmask 255.255.255.0
    gateway 192.168.11.1
5) reboot
6) Add a similar config to the pve2 node using unique IP addresses. For example, use 192.168.x.22 for the static IP addresses above.
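On each node, once it is back up, you can sanity-check that the VLAN sub-interfaces came up with commands along these lines (interface names from the config above):

ip -d link show eth0.5
ip addr show eth2.10
ip addr show eth3.11

The -d flag shows the VLAN ID bound to the sub-interface, and the addr output should show the static IPs assigned above.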
Create PVE cluster
1) [on pve1] pvecm create infiniteloop
2) [on pve2] pvecm add pve1IPaddress
3) accept key fingerprint
4) enter the root password for pve1
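With both nodes joined, cluster membership can be checked from either node with the standard Proxmox cluster commands:

pvecm status
pvecm nodes

pvecm status should report quorum, and pvecm nodes should list both pve1 and pve2.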
Add iSCSI Storage
1) apt-get update
2) apt-get install multipath-tools -y
3) vi /etc/iscsi/iscsid.conf
a) update node.startup = automatic
b) update node.session.timeo.replacement_timeout = 15
4) iscsiadm -m discovery -t st -p <your iscsi portal>
5) iscsiadm -m node -l -T <target name> -p 192.168.10.99
6) iscsiadm -m node -l -T <target name> -p 192.168.11.99
7) iscsiadm -m session -P 1 to view current sessions and verify the static IP NICs are connecting to the correct targets
8) lsblk should show the recently attached targets
9) /lib/udev/scsi_id -g -d /dev/sdX to get the wwid
10) create /etc/multipath.conf with the following
defaults {
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    uid_attribute ID_SERIAL
    rr_min_io 100
    failback immediate
    no_path_retry queue
    user_friendly_names yes
}
blacklist {
    wwid .*
}
blacklist_exceptions {
    wwid "idFromAbove"
}
multipaths {
    multipath {
        wwid "idFromAbove"
        alias myStorage
    }
}
11) systemctl restart multipath-tools
12) systemctl status multipath-tools
13) multipath -ll – this should display the alias and the two paths. Repeat the steps above for the pve2 node
14) Return to pve1
15) pvcreate /dev/mapper/myStorage
16) vgcreate iscsi_mp_myStorage /dev/mapper/myStorage
17) Log back into PVE web GUI
18) Datacenter | Storage | Add | LVM
a) ID: myStorage
b) Volume Group: iscsi_mp_myStorage
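If you prefer the command line, the same storage can be added with pvesm. This is a sketch under my own assumptions – in particular the --shared flag, which I include so both cluster nodes treat the iSCSI-backed volume group as shared storage:

pvesm add lvm myStorage --vgname iscsi_mp_myStorage --content images --shared 1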
Add VM Bridge
1) Datacenter | Nodes | pve1 | Network
2) Create Linux Bridge
a) Name: vmbr2
b) Autostart: enabled
c) Bridge Ports: eth1
3) Reboot the host to pick up the new network settings
4) Repeat for the pve2 node
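For reference, the GUI writes the bridge into /etc/network/interfaces. On each node the stanza ends up roughly like this (exact defaults may vary slightly):

auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0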
Create VM
1) Upload ISO to local storage
2) Create the VM using the local ISO and the new vmbr2 bridge for its network
a) Assign a VLAN tag that is allowed on the trunk port and routable within your LAN
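As a rough CLI equivalent, a VM like this could also be created with qm. The VM ID, name, sizes, VLAN tag 20, and ISO filename below are placeholders of my own, not values from this setup:

qm create 100 --name testvm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr2,tag=20 \
  --virtio0 myStorage:32 \
  --ide2 local:iso/your-installer.iso,media=cdrom

Here tag=20 is the VLAN tag applied on vmbr2, myStorage:32 allocates a 32 GB disk on the iSCSI-backed LVM storage, and the ISO comes from the local storage you uploaded to in step 1.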