I had previously run Arch Linux as a virtual machine host. While that meant I had the features of the latest kernel and QEMU releases, the volume of updates and the occasional stability issues a rolling release distro like Arch brings was a problem. With the more stable distros now offering the features I wanted a couple of years ago, we will use CentOS 7 (build 1511) for this setup.

To complete this example, access to the console of the server being configured is needed as the network configuration will be changed, and two spare network interfaces are needed for teaming.

Install

Complete a bare metal installation from the Minimal ISO available on the CentOS site. We want a minimum number of packages (and a specific QEMU version) installed, so the "Software Selection" section of the install can be left as the "Minimal Install" option -- then simply follow the rest of the setup procedure as required. The disk layout will depend on what storage is available, but remember that in a default configuration CentOS will expect virtual machine images to be under /var (though this is easily changed).

Now that the minimal install is complete, let's make a few simple tweaks that are easy to miss:

Get the boot details back; if a boot fails it's nice to see why.

sudo plymouth-set-default-theme details
sudo dracut -f

Enable tab completion for commands (yes, your tab key wasn't broken).

sudo yum install bash-completion

Enable the TRIM timer for SSD maintenance (systemctl start fstrim.service will trigger a manual run).

sudo systemctl enable fstrim.timer

Network

To provide redundancy and potential throughput improvements we will configure a network team. NetworkManager is the preferred way to manage network configuration starting with CentOS 7; however, I experienced connectivity issues when a team was configured with NetworkManager enabled. After startup the team interface would only pass traffic after being "touched" (a ping 127.0.0.1 was enough), so we will disable NetworkManager for this setup. In any case, the benefit of NetworkManager on a server whose interface and network configuration rarely change is small.

teamd is the preferred method for managing teaming/bonding in CentOS 7. As mentioned above, we will configure it using the sysconfig "ifcfg" files rather than NetworkManager. Details of the items that can be defined in an ifcfg file are found by running less /usr/share/doc/initscripts-*/sysconfig.txt and locating the "network-scripts" section.

So let's disable NetworkManager; masking it will prevent other services from starting it again.

sudo systemctl stop NetworkManager
sudo systemctl disable NetworkManager
sudo systemctl mask NetworkManager
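
With NetworkManager out of the picture, the legacy network service is what applies the ifcfg files at boot. On a minimal install it is usually enabled already, but it does no harm to make sure:

sudo systemctl enable network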

In this setup we will not have a separate management network; there will be a bridge (the equivalent of a vSwitch in VMware or Hyper-V) on the default VLAN that management and VMs can share. If separation were needed it would be a case of using the team0 interface for management, or making use of child interfaces.

Now create a bridge with the desired management IP address. The ZONE item is the firewalld zone to place the interface in (more on this later).

sudo vi /etc/sysconfig/network-scripts/ifcfg-brdefault
# device for management and access on the default VLAN
DEVICE=brdefault
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.1.11
PREFIX=24
GATEWAY=192.168.1.254
DNS1=192.168.1.254
DELAY=0
ZONE=internal
ONBOOT=yes

Below we configure the interface team0 as an LACP team. The TEAM_CONFIG item passes the config on to teamd and BRIDGE places the interface into the bridge brdefault, which gives the bridge access to the physical network. Here we are using the lacp runner; the loadbalance runner achieves similar functionality if LACP support is not available.

sudo vi /etc/sysconfig/network-scripts/ifcfg-team0
# team of eno1 and eno2 running with LACP
DEVICE=team0
DEVICETYPE=Team
BOOTPROTO=none
TEAM_CONFIG='{"runner":{"name": "lacp","active": true,"fast_rate": true},"link_watch":{"name": "ethtool"}}'
BRIDGE=brdefault
ONBOOT=yes
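
If LACP support is not available on the switch, the loadbalance runner mentioned above only needs a different TEAM_CONFIG line -- a sketch of what that might look like (the tx_hash fields are one common choice):

TEAM_CONFIG='{"runner":{"name": "loadbalance","tx_hash": ["eth","ipv4","ipv6"]},"link_watch":{"name": "ethtool"}}'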

eno1 and eno2 are the physical interfaces; they need to be configured as members of the team team0 (physical interfaces may have different names depending on the hardware).

sudo vi /etc/sysconfig/network-scripts/ifcfg-eno1
# member interface of team team0
NAME=eno1
DEVICE=eno1
HWADDR=0c:c4:7a:c8:18:48
DEVICETYPE=TeamPort
TEAM_MASTER=team0
TEAM_PORT_CONFIG='{}'
ONBOOT=yes
sudo vi /etc/sysconfig/network-scripts/ifcfg-eno2
# member interface of team team0
NAME=eno2
DEVICE=eno2
HWADDR=0c:c4:7a:c8:18:49
DEVICETYPE=TeamPort
TEAM_MASTER=team0
TEAM_PORT_CONFIG='{}'
ONBOOT=yes

Now reboot and check the server is accessible via the brdefault interface address. The status of the team can be checked with sudo teamdctl team0 state view; it should show the runner as active: yes if the configuration is successful.
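
For example, to confirm the team state and the management address:

sudo teamdctl team0 state view
ip addr show brdefault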

Virtualization

For a more up to date version of QEMU we will use version 2.3 from the Virtualization SIG repo as -- at the time of writing -- the default repo is still on version 1.5. A couple of useful tools to install alongside libvirt are virt-install for creating domains (virtual machines) and libguestfs-tools for managing virtual disks.

Enable the Virtualization SIG repo and install the packages.

sudo yum install centos-release-qemu-ev
yum repolist
sudo yum install qemu-kvm-ev libvirt virt-install libguestfs-tools

It is also helpful to enable nested virtualization. Run whichever of the below matches your Intel or AMD hardware; 1 or Y will be returned if it is supported by the CPU.

cat /sys/module/kvm_intel/parameters/nested
cat /sys/module/kvm_amd/parameters/nested

Enable it persistently by creating kvm-nested.conf with the line below (use kvm_amd instead on AMD hardware) and then rebooting.

sudo vi /etc/modprobe.d/kvm-nested.conf
options kvm_intel nested=1

A simple way to check the supported virtualization features is with the command virt-host-validate. If we run that in a guest machine hosted on a server with nesting enabled it will pass the "hardware virtualization" check.
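
For example, run on the host itself:

sudo virt-host-validate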

Firewall and SELinux

Before we create a libvirt domain (VM) let's make sure we will be able to connect to the console and access the machine. CentOS 7 has a firewall daemon (firewalld) running by default, which is managed on the command line with firewall-cmd. QEMU uses VNC to expose a domain's graphical console; this starts at port 5900 and increments by one for each additional domain running. To open this port range we can use the vnc-server service from the available defaults.

firewall-cmd --get-services

We will associate the vnc-server service with the zone the management interface is in (the zone was set in the ifcfg-brdefault configuration file). The first command below will apply the rule immediately, but to make it persistent the --permanent attribute is needed.

sudo firewall-cmd --zone=internal --add-service=vnc-server
sudo firewall-cmd --zone=internal --add-service=vnc-server --permanent

The vnc-server service only opens ports 5900-5904, so if more than 5 domains are running the 6th will reject VNC connections. The XML configuration files for the default firewalld services are located in /usr/lib/firewalld/services and custom services are placed in /etc/firewalld/services. We can use the current configuration as a base to create a custom service.

sudo cp /usr/lib/firewalld/services/vnc-server.xml /etc/firewalld/services/vnc-server-extra.xml
sudo vi /etc/firewalld/services/vnc-server-extra.xml

Edit the port attribute and open 20 ports for VNC use.

<port protocol="tcp" port="5900-5919"/>
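
For reference, after that edit the complete vnc-server-extra.xml would look something like this (the short and description text here are arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>VNC Server Extra</short>
  <description>Extended VNC port range for additional libvirt domains.</description>
  <port protocol="tcp" port="5900-5919"/>
</service>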

Apply the changes and add the new service to the internal zone.

sudo firewall-cmd --reload
firewall-cmd --get-services
sudo firewall-cmd --zone=internal --add-service=vnc-server-extra
sudo firewall-cmd --zone=internal --add-service=vnc-server-extra --permanent
sudo firewall-cmd --zone=internal --remove-service=vnc-server
sudo firewall-cmd --zone=internal --remove-service=vnc-server --permanent

Now a client like TightVNC will be able to connect to a domain exposed on any of those 20 ports.

SELinux

By default, virtual disks located on a mounted device will be restricted by SELinux. To enable the use of mounted locations we can set the equivalent SELinux boolean.

NFS is a common method of mounting storage from a remote server; we can allow that with the command below.

sudo setsebool -P virt_use_nfs on
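
The virtualization-related booleans and their current state can be listed to confirm the change; the grep pattern here is just illustrative:

getsebool -a | grep virt_use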

Create a Virtual Machine

To create a libvirt domain we will make use of two commands: qemu-img and virt-install. The first will create the virtual disk, and the second will generate an XML file which defines the domain.

When creating the disk image, preallocation= is the important choice: off will create a true sparse image; metadata creates a file with only the metadata allocated, and file systems that support it will report the full size; full creates an image padded with zeros.

sudo qemu-img create -f qcow2 -o preallocation=off,size=25G /var/lib/libvirt/images/windows7.qcow2
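
To see the effect of the preallocation choice, qemu-img can report the virtual size against the space actually used on disk:

sudo qemu-img info /var/lib/libvirt/images/windows7.qcow2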

Now that we have the disk image we can create the domain. If you tell virt-install which type of OS it will be installing via the --os-variant= parameter, it will choose some helpful defaults. For example, with Windows guests it will set the correct hardware clock time.

Get a list of available values with the below.

osinfo-query os
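
The list is long, so filtering the output is usually easier; for example:

osinfo-query os | grep -i win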

We can also preview the generated XML first by using --print-xml and --dry-run; this is helpful to double check there are no errors or mistakes. When those parameters are omitted virt-install will create the domain and start it automatically, booting from the ISO that was chosen.

In the below example we will install Windows 7 and also enable virtio, which provides better performance for disk and network devices -- but requires additional drivers. This means we will mount two ISO files: the --cdrom parameter defines the boot CD and can be used only once, but the second CD can be defined with the --disk parameter. Details on the other parameters can be found with man virt-install.

sudo virt-install \
    --connect qemu:///system \
    --name windows7 \
    --boot cdrom,hd,menu=on \
    --cpu host-model \
    --vcpus 2 \
    --memory 2048 \
    --cdrom /var/lib/libvirt/images/windows7.iso \
    --disk path=/var/lib/libvirt/images/virtio-win-0.1.126.iso,device=cdrom \
    --disk path=/var/lib/libvirt/images/windows7.qcow2,bus=virtio \
    --network bridge=brdefault,model=virtio \
    --graphics type=vnc,listen=0.0.0.0,keymap=en-gb \
    --video cirrus \
    --memballoon virtio \
    --noautoconsole \
    --os-variant=win7 \
    --print-xml \
    --dry-run

If virt-install is successful the domain will start and the console can be accessed via a VNC client on port 5900. Further management of the domain with libvirt can be done through the sudo virsh command. This must be run as root because we used --connect qemu:///system to create the domain, which uses the root libvirt instance.
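
A few virsh commands that tend to be useful at this point -- listing domains, finding a domain's VNC display, and stopping or starting it:

sudo virsh list --all
sudo virsh vncdisplay windows7
sudo virsh shutdown windows7
sudo virsh start windows7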

After the OS install is complete we can remove the CDROM devices from the domain using virt-xml. This utility can edit the configuration of an existing domain; the alternative is to manually edit the XML using sudo virsh edit windows7. The --print-diff parameter gives a helpful diff of the changes that will be made.

sudo virt-xml windows7 --remove-device --device cdrom --print-diff
sudo virt-xml windows7 --remove-device --device cdrom

One final useful parameter for virt-install is --import. This is used to skip the OS install process and create a domain using an existing disk image where a bootable OS is already available.

sudo virt-install \
    --connect qemu:///system \
    --name server2012 \
    --boot cdrom,hd,menu=on \
    --cpu host-model \
    --vcpus 2 \
    --memory 2048 \
    --disk path=/var/lib/libvirt/images/server2012_os.qcow2,bus=virtio \
    --disk path=/var/lib/libvirt/images/server2012_data.qcow2,bus=virtio \
    --network bridge=brdefault,model=virtio \
    --graphics type=vnc,listen=0.0.0.0,keymap=en-gb \
    --video cirrus \
    --memballoon virtio \
    --noautoconsole \
    --os-variant=win2k12r2 \
    --import

libvirt can also manage host storage and networking through the virsh pool-, iface- and net- commands. However, this is optional, and a domain can be configured with direct references to the storage or networking as we did above. There are also some scenarios where configuration directly on the host is preferable, for instance when using a bridged network (as opposed to a host-only network).
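
For example, the storage pools and networks that libvirt knows about can be listed with:

sudo virsh pool-list --all
sudo virsh net-list --all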

Additional Networks

It's likely we will want to connect the host or guest machines to more than just the default VLAN, which means configuring VLAN tagging (IEEE 802.1Q). We can do this with an ifcfg file by creating a child interface with a suffix (in this case .5) that defines the VLAN ID, here referencing the team0 interface.

We could also set a specific firewalld zone for this interface as traffic is terminating on it. Traffic being forwarded through a bridge device (or interface) is not affected by firewalld rules as the sysctl variable net.bridge.bridge-nf-call-iptables is set to 0 by default.

So, create an interface tagged for VLAN 5, with an IP address (connecting to a storage network for example) using the below.

sudo vi /etc/sysconfig/network-scripts/ifcfg-team0.5
DEVICE=team0.5
BOOTPROTO=none
IPADDR=192.168.5.11
PREFIX=24
VLAN=yes
ZONE=trusted
ONBOOT=yes

To connect virtual machines to a VLAN we need a bridge device to associate the domain with. Below we use VLAN 20.

sudo vi /etc/sysconfig/network-scripts/ifcfg-brvlan20
DEVICE=brvlan20
TYPE=Bridge
BOOTPROTO=none
DELAY=0
ONBOOT=yes

Then a child interface connected to the bridge. This tags the traffic for VLAN 20 as it passes to the physical network via the team0 interface. No IP address is needed in this instance as traffic will be terminating on the domain.

sudo vi /etc/sysconfig/network-scripts/ifcfg-team0.20
DEVICE=team0.20
BOOTPROTO=none
VLAN=yes
BRIDGE=brvlan20
ONBOOT=yes
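
After restarting the network service (or rebooting), the new devices can be checked with the iproute2 tools; for example, something like:

ip -d link show team0.20
bridge link show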

During domain creation with virt-install, or when editing with virt-xml, the bridge can be specified to place the domain's interface into the required VLAN.
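
For an existing domain, something like the below should switch the first network interface over to the new bridge -- treat it as a sketch and check the output of --print-diff before applying:

sudo virt-xml windows7 --edit --network bridge=brvlan20 --print-diff
sudo virt-xml windows7 --edit --network bridge=brvlan20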

Finished

We now have a virtual machine host that is stable (minimal package updates) and easy to maintain on the command line. The next steps could be a VM for management with tools like virt-manager, and securing console access as VNC is unencrypted.

