Wolverine is configured almost identically to Cyclops; the only difference is that it uses bonds (groups of NICs) to communicate instead of individual NICs. This is useful for both load balancing and redundancy. It has the following specs:
Dell R510 Server 2 (Wolverine)
- Primary Purpose: Hypervisor
- CPU: Dual L5520 Xeon CPUs (quad core)
- RAM: 32GB DDR3 ECC Registered
- First RAIDset: Two 160GB SATA drives in RAID1 (Operating System)
- Ethernet: Dual NIC (bond0) for management network
- Ethernet: Dual NIC (bond1) for storage network
- Ethernet: Single NIC for Dell iDRAC/IPMI
The volumes on the server are set up in the following way:
Operating system – CentOS 6.4 x64
First drive (/dev/sda) – This is a RAID1 mirror of the two 160GB drives
Primary partition (/dev/sda1) – the /boot partition, which is just 100MB
Second partition (/dev/sda2) – the swap partition which is 8GB
Third partition (/dev/sda3) – the / partition which is where the operating system is loaded
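If you want to sanity-check that layout on the running system, a few stock commands will confirm it (nothing here is specific to my build):
fdisk -l /dev/sda # show the partition table on the RAID1 virtual disk
swapon -s # confirm the 8GB swap partition is active
df -h /boot / # confirm where /boot and / are mounted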
Configuring the ethernet ports
Networking in CentOS/Red Hat (via the command line, at least) lives in the /etc/sysconfig/network-scripts/ folder. The NICs can be labeled anything from eth0 and eth1 to em1/em2, or, if you have PCI devices (add-in NIC cards), they will show up as p1p1 and p1p2 (or whichever PCI slots they are located in). Unlike my Cyclops setup, I actually have two sets of NICs bonded into single ethernet interfaces here.
My particular bond0 setup (which is located at /etc/sysconfig/network-scripts/ifcfg-bond0) is:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.1.0
NETMASK=255.255.255.0
IPADDR=192.168.1.4
GATEWAY=192.168.1.1
BONDING_OPTS="mode=4 miimon=100"
DNS1=8.8.8.8
Similarly my bond1 setup is:
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.0.0.0
NETMASK=255.255.255.0
IPADDR=10.0.0.4
#GATEWAY=192.168.1.1
BONDING_OPTS="mode=4 miimon=100"
The fields are pretty self-explanatory. The reason that bond1 does not have a gateway is that you generally only want one gateway per machine (unless you have more complex routing rules, but that is easier done at the router level, although you can create custom routes anywhere). More on this in the Networking section. So in the above setup, I have one bond that goes to the management network (bond0) and one bond that goes to the storage network (bond1). They don’t crosstalk with each other since they are on different VLANs and in reality do not know about each other. Please note that these bonds HAVE NO MEANING until you assign actual physical NICs into the bonds (and further down, we will assign these bonds into bridges; it sounds more complicated than it is, which is why I am going in steps).
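As a quick illustration of that custom-route escape hatch (the 10.0.1.0/24 subnet and the 10.0.0.1 next hop below are placeholders, not part of my actual setup), a one-off route can be added like this, and the same line dropped into a route-bond1 file in network-scripts makes it persistent across restarts:
ip route add 10.0.1.0/24 via 10.0.0.1 dev bond1 # hypothetical example: reach another storage subnet via a router on the storage VLAN, without giving bond1 a default gateway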
The next step is to assign our NICs to each bond. If you run an ls command in /etc/sysconfig/network-scripts, this is what I have:
[root@wolverine network-scripts]# ls -ltr ifcfg*
-rw-r--r--  1 root root 254 Jan  9 06:13 ifcfg-lo
-rw-r--r--. 1 root root 175 May 17 07:20 ifcfg-em2
-rw-r--r--. 1 root root 175 May 17 07:20 ifcfg-em1
-rw-r--r--. 1 root root 176 May 17 07:21 ifcfg-p3p1
-rw-r--r--. 1 root root 176 May 17 07:21 ifcfg-p3p2
-rw-r--r--  1 root root 181 May 17 07:38 ifcfg-bond0
-rw-r--r--  1 root root 176 May 17 07:39 ifcfg-bond1
So the NICs that the operating system discovered are em1 and em2 (the onboard NICs that come with the Dell) and my dual-port Intel NIC card in PCI slot 3 (which is why you see p3p1 and p3p2, i.e. ethernet ports 1 and 2 on PCI slot 3). What we will do next is add em1 and em2 to our bond0, which is done by slaving them. We will alter ifcfg-em1 and ifcfg-em2 from their defaults to these values:
DEVICE=em1
#HWADDR=A4:BA:DB:22:0C:17
#TYPE=Ethernet
#UUID=5c6f7570-3d6c-47e5-8fc7-3ac7dcfda32f
ONBOOT=yes
#NM_CONTROLLED=yes
BOOTPROTO=none
SLAVE=yes
MASTER=bond0
USERCTL=no

DEVICE=em2
#HWADDR=A4:BA:DB:22:0C:17
#TYPE=Ethernet
#UUID=5c6f7570-3d6c-47e5-8fc7-3ac7dcfda32f
ONBOOT=yes
#NM_CONTROLLED=yes
BOOTPROTO=none
SLAVE=yes
MASTER=bond0
USERCTL=no
What we did above is comment out the HWADDR, TYPE, and UUID fields on the NICs, as the bond will take over this data. The key line above is MASTER=bond0, which links both of these NICs to the bond we created above. Similarly, we will take ifcfg-p3p1 and ifcfg-p3p2 and link them the same way to bond1 (for the storage network):
DEVICE=p3p1
#HWADDR=A4:BA:DB:22:0C:17
#TYPE=Ethernet
#UUID=5c6f7570-3d6c-47e5-8fc7-3ac7dcfda32f
ONBOOT=yes
#NM_CONTROLLED=yes
BOOTPROTO=none
SLAVE=yes
MASTER=bond1
USERCTL=no

DEVICE=p3p2
#HWADDR=A4:BA:DB:22:0C:17
#TYPE=Ethernet
#UUID=5c6f7570-3d6c-47e5-8fc7-3ac7dcfda32f
ONBOOT=yes
#NM_CONTROLLED=yes
BOOTPROTO=none
SLAVE=yes
MASTER=bond1
USERCTL=no
Now that we have our network in place, the last step is to enable LACP on the switch so that it does not get confused by pairs of ethernet ports sending out similar data (the mode=4 in the BONDING_OPTS line means that this bond will use 802.3ad link aggregation negotiation with the switch). You must have a switch that supports LACP for this mode; there are other bonding modes (relying on ARP monitoring instead of LACP) that will work with a conventional/unmanaged switch, but chances are that if you are building this kind of setup, you want managed switches, since that is what you will encounter in the field. In most network switches, this is done by creating a LAG (Link Aggregation Group) or LACP group of ports that are treated as one. On my HP ProCurve 1800, we will create two LAG groups, one assigned to VLAN 10 (storage) and one assigned to VLAN 1 (management). I will go into further detail on this in the Networking section (not just for this server but for all of the servers).
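Before moving on, it is worth confirming that the bonds actually negotiated 802.3ad with the switch; the bonding driver exposes its state under /proc (assuming the ifcfg files above have been activated with a network restart):
service network restart # re-read the ifcfg files and bring the bonds up with their slaves
cat /proc/net/bonding/bond0 # should report 802.3ad dynamic link aggregation and show both slaves up
cat /proc/net/bonding/bond1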
Installing KVM on Wolverine
Installing KVM on CentOS is pretty easy (as it is on most Linux distributions with any kind of package manager). In my case, I run the following commands:
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY* # This imports the CentOS GPG keys so that yum can verify packages from the CentOS repos without asking for confirmation each time
yum install kvm libvirt python-virtinst qemu-kvm bridge-utils # This installs the required packages. You can install the virtualization group, but that installs other packages as well, and I was shooting for the most simple setup possible (to start at least).
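A couple of quick checks I find useful here (the L5520s are Intel, so the module to look for is kvm_intel; AMD boxes would show kvm_amd instead):
lsmod | grep kvm # expect kvm and kvm_intel (they may only appear after a reboot or once libvirtd starts)
virsh --version # confirm the libvirt client tools installed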
One important note here is that you need to set up a virtual bridge before you can start building machines. Browse into /etc/sysconfig/network-scripts and create a file called ifcfg-br0 (or anything else you want to name it, really; I just keep it simple, but you could call it ifcfg-virtualbridge if you like). Modify this file so that it contains (in my case at least):
DEVICE=br0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.1.4
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
#STP=on
DELAY=0
I know what you’re thinking: this has the same IP as bond0! This will cause networking to fail if you restart! That is correct, so we will need to modify the ifcfg-bond0 file so that it becomes a member of the bridge above. I will actually have a second bridge as well, in case I want to create a virtual machine that sits on my storage network only (with no access to the internet), but more on that later. We will modify our /etc/sysconfig/network-scripts/ifcfg-bond0 file to:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br0
#NETWORK=192.168.1.0
#NETMASK=255.255.255.0
#IPADDR=192.168.1.4
#GATEWAY=192.168.1.1
BONDING_OPTS="mode=4 miimon=100"
DNS1=8.8.8.8
What I did above was comment out the IP information in bond0 and add the line BRIDGE=br0. This makes the interface a member of the bridge we just created, so that once the virtual machines are built, their IP addresses all live on the network behind that bridge. To activate all the changes, just type service network restart; this will bring all the ethernet ports and network connections back up with the new configuration. Although not strictly needed to continue, I also create a second bridge on my storage network, in case I want to add a machine that just analyzes the storage subnet or runs queries on its own island:
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br1
#NETWORK=10.0.0.0
#NETMASK=255.255.255.0
#IPADDR=10.0.0.4
#GATEWAY=192.168.1.1
BONDING_OPTS="mode=4 miimon=100"
#DNS1=8.8.8.8
DEVICE=br1
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=10.0.0.4
NETWORK=10.0.0.0
BROADCAST=10.0.0.255
NETMASK=255.255.255.0
#STP=on
DELAY=0
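After the next network restart, brctl (from the bridge-utils package we installed earlier) is a quick way to confirm that the bonds actually landed in their bridges and that the IPs moved where they should:
brctl show # each bridge (br0, br1) should list its bond as an attached interface
ip addr show br0 # the management IP should now live on the bridge rather than on bond0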
Just as we removed the IP addresses from the NICs so that the bonds would hold the IP information, we did a similar step here. Since the bridges will be the final connections to the network, we assign the networking information to the bridges and take it out of the bonds. I know this sounds like it has a lot of moving parts and a lot of things that can go wrong, but in reality it is a simple process; synopsis below:
- Browse to /etc/sysconfig/network-scripts and locate the ifcfg* files, as these are the NICs the system has available to use.
- Since we will be using ethernet bonding (otherwise known as teaming), we will create files called ifcfg-bond0 and ifcfg-bond1.
- We will link the two onboard NICs to bond0 by using the MASTER=bond0 line in each of the ifcfg-em1 and ifcfg-em2 config files.
- We will link the two PCI card NICs to bond1 by adding the MASTER=bond1 line to each of the ifcfg-p3p1 and ifcfg-p3p2 config files.
- Configure the network switch to anticipate a LAG (or LACP group) when we plug these in, and test that the bonds are online before installing the bridge-utils package.
- Once you have good networking and can ping around, create at least the ifcfg-br0 bridge (you can name it anything) so that the hypervisor now has a device it can use to create virtual NICs.
- Add the proper bond to the bridge (in our case bond0 will be added to br0). You can ignore bond1 and br1 for now.
- When the hypervisor then uses br0 to create virtual NICs, it will be using the bond0 we created, so it will have fault tolerance and load balancing between two physical NICs.
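Put another way, the end state of the plumbing looks like this:
em1 + em2   -> bond0 (LACP) -> br0 (192.168.1.4, management, VLAN 1) -> virtual NICs of the guests
p3p1 + p3p2 -> bond1 (LACP) -> br1 (10.0.0.4, storage, VLAN 10)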
Firing it all up!
The last steps are below:
chkconfig libvirtd on # Make sure that the KVM virtualization engine starts on boot.
chkconfig iptables off # This is ill-advised for production use. Since this is for my home lab, I can do this. This stops the firewall. In the networking section, we will re-enable iptables and add the proper rules so that qemu and libvirt can be accessed properly.
service libvirtd start # This starts the virtualization service.
ps -ef | grep libvirt # Make sure you can see libvirtd as a running process.
And voila! We now have a fully running hypervisor. The configuration for Cyclops (our first hypervisor) is nearly identical, just simpler, since it uses individual NICs instead of bonds negotiated with the switch via LACP.
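To illustrate how the bridge actually gets consumed, here is roughly what creating the first guest looks like with virt-install (from the python-virtinst package we installed). The name, RAM, disk size, and ISO path below are placeholders rather than my actual build commands:
virt-install \
  --name hawkeye \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/hawkeye.img,size=20 \
  --network bridge=br0 \
  --cdrom /var/lib/libvirt/images/ubuntu-12.04-server-amd64.iso \
  --graphics vnc
The --network bridge=br0 part is what ties the guest back to the bonded, fault-tolerant path we built above; a guest that should live only on the storage island would use bridge=br1 instead.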
Virtual Machines
This will be a moving target, but the servers I anticipate running on the Wolverine hypervisor are:
- Hawkeye – Ubuntu 12.04 Server – Nagios/Cacti/Racktables/ArpWatch/Rancid/Radius monitoring server
- Psylocke – Ubuntu 12.04 Server – Web server (for high availability with Warlock running on Cyclops)
- Magneto – Windows Server 2012 – Microsoft Active Directory Backup Domain Controller
- EmmaFrost – Ubuntu 12.04 Server – XenLB Load Balancer