I got tired of the version of Xen in Debian stable being full of bugs, so decided to try KVM instead.
I know that Debian stable can preserve old bugs beyond their natural lifetime, and there are doubtless plenty of bugs in KVM, but (i) I was running into bugs that actually mattered to me and (ii) my attempts to use later kernels and versions of Xen, from backports and unstable, were a dead loss.
I decided to use virt-manager. This didn’t prove to be a bad choice, although there are some issues worth mentioning to do with creating new VMs.
Firstly, when I wanted to create a new 32-bit VM, the i686 option of the “CPU architecture” pulldown seemed like the right answer. It isn’t: selecting this on a 64-bit physical CPU condemns you to software emulation of a 32-bit CPU using QEMU, which (experimentally) is about a factor of ten slower. Notice in the screenshots below that when I selected i686, the hypervisor button quietly changed to qemu.
In retrospect, it was a silly mistake to make: it’s asking about the CPU architecture to emulate, not what your OS will require. Nevertheless, I ended up creating my first VM using QEMU rather than KVM. This wasn’t hard to fix, without having to start from scratch; this is discussed below.
Still, as a UI design suggestion, a better approach would be to put the hypervisor setting first, and have it constrain the possible values of the CPU architecture, rather than the other way round.
Secondly, when doing a network install, it wasn’t very clear what to put in the “Install Media URL” box. After some experimentation it turned out that a good answer was http://debian.virginmedia.com/dists/lenny/main/installer-i386/. (Obviously anyone reading this should replace the mirror hostname with one close to them.) As you see in the screenshots, there are some tips, but they don’t really say anything useful.
Thirdly, it only copes with 32-bit installers for Debian. If you try to do a 64-bit network install by this method, it looks for x86_64 where it should be looking for amd64, and therefore fails. In the end this was only a speedbump, not a brick wall, as discussed below.
Converting a QEMU VM to a KVM VM
I did this by editing the .xml file for it in /etc/libvirt/qemu as follows:
- Change the domain type from qemu to kvm.
- Change the architecture from i686 to x86_64.
- Change the emulator from /usr/bin/qemu to /usr/bin/kvm.
- Add <model type='virtio'/> to the end of the <interface> element.
Here’s what the changes amounted to in practice. (I bumped the amount of RAM allocated to the VM at the same time.)
--- leucomorph.xml_SAVE	2010-05-02 15:58:30.000000000 +0100
+++ leucomorph.xml	2010-05-02 16:00:58.000000000 +0100
@@ -1,11 +1,11 @@
-<domain type='qemu'>
+<domain type='kvm'>
   <name>leucomorph</name>
   <uuid>4818eeda-4919-8f90-1196-cc109d0ee487</uuid>
-  <memory>196608</memory>
-  <currentMemory>196608</currentMemory>
+  <memory>524288</memory>
+  <currentMemory>262144</currentMemory>
   <vcpu>1</vcpu>
   <os>
-    <type arch='i686' machine='pc'>hvm</type>
+    <type arch='x86_64' machine='pc'>hvm</type>
     <boot dev='hd'/>
   </os>
   <features>
@@ -18,7 +18,7 @@
   <on_reboot>restart</on_reboot>
   <on_crash>restart</on_crash>
   <devices>
-    <emulator>/usr/bin/qemu</emulator>
+    <emulator>/usr/bin/kvm</emulator>
     <disk type='file' device='disk'>
       <source file='/var/lib/libvirt/images/leucomorph.img'/>
       <target dev='hda' bus='ide'/>
@@ -26,6 +26,7 @@
     <interface type='bridge'>
       <mac address='54:52:00:6d:e5:8c'/>
       <source bridge='br0'/>
+      <model type='virtio'/>
     </interface>
     <serial type='pty'>
       <target port='0'/>
This resulted in a greater than ten-times speedup (on an admittedly rather noddy benchmark), so was well worthwhile. I had to run /etc/init.d/libvirt-bin restart to make it pick up the changes.
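An alternative that should avoid restarting the whole daemon (a sketch, not something I verified here) is to have libvirt re-read the edited file, or use virsh edit, which wraps the edit-and-reload steps:

```
# virsh define /etc/libvirt/qemu/leucomorph.xml
```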
Installing Ubuntu from a CD Image Into a VM
The workaround for not being able to use 64-bit network installs is to download a 64-bit ISO and install from that. I used the alternate installer ISO of Lucid Lynx (10.04). It was necessary to add fb=false to the kernel command line, as otherwise the installer hung trying to enable the kernel frame buffer; I don’t know if this is an Ubuntu bug or a KVM bug. It didn’t stop the eventual installed system having working X (which I disabled after install by removing the X server; I don’t plan to use the desktops of these systems, after all).
This section is short because, basically, it all just worked.
Converting a Xen VM to a KVM VM
Firstly, my Xen VMs all have one image file per partition, whereas KVM wants one image file for an emulated disk (containing multiple partitions). I did the conversion as follows.
1. Create a suitably large image file in /var/lib/libvirt/images.
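For step 1, dd’s sparse-file trick avoids allocating all the blocks up front. (A sketch: the 8G size is an illustrative choice, and I’m using /tmp here for demonstration; the real file goes in /var/lib/libvirt/images, which needs root.)

```shell
# Seek to the 8 GiB mark and write nothing: the file reports its full size
# but occupies almost no disk space until the VM writes to it.
dd if=/dev/zero of=/tmp/deodand.img bs=1 count=0 seek=8G
ls -lhs /tmp/deodand.img   # size column shows 8.0G; usage column stays near 0
```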
2. Map a loopback device onto it.
# losetup -v -f /var/lib/libvirt/images/deodand.img
Loop device is /dev/loop1
3. Create partitions. cfdisk /dev/loop1 will work, but don’t use the “Maximize” option: when you come to install Grub2, it will complain about it. I found this out the hard way, so did everything twice.
4. Create device nodes for the new partitions. They will appear as /dev/mapper/loop1p1, loop1p2, etc. (That’s the digit one, not a lower-case “L”.)
# kpartx -v -a /dev/loop1
add map loop1p1 (254:8): 0 1992060 linear /dev/loop1 31583790
add map loop1p2 (254:9): 0 31583727 linear /dev/loop1 63
- I understand the 63 to be critical: if the start point of the first partition is any smaller, Grub2 will tell you off.
- The observant will notice that I added my partitions back to front. Don’t do that; it only leads to confusion.
5. Create filesystems, swap space, etc according to taste.
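For the layout above, that might look like the following (assuming loop1p1 is to hold the root filesystem and loop1p2 the swap; ext3 is just one reasonable choice):

```
# mkfs.ext3 /dev/mapper/loop1p1
# mkswap /dev/mapper/loop1p2
```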
6. Mount the old and new filesystems and copy from one to the other.
# mount -o loop /xen/domains/deodand/disk.img /mnt/old-deodand
# mount /dev/mapper/loop1p1 /mnt/new-deodand
# really rsync -v --archive --numeric-ids /mnt/old-deodand/. /mnt/new-deodand/.
...go make a cup of tea...
# umount /mnt/old-deodand
At this point I have a new whole-disk image with the same contents as the old one. It’s a good time to chroot into the new filesystem and install any extra packages you want, edit /etc/fstab to reflect the new UUID of the root filesystem and swap space, etc. I didn’t do this and fixed it up later, but you can learn from my experience.
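blkid on the new mapper devices reports the UUIDs to use. The resulting /etc/fstab entries would look something like this (a sketch only: the UUIDs are placeholders for what blkid prints, and the filesystem type is whatever you chose in step 5):

```
UUID=<uuid-of-loop1p1>  /     ext3  defaults,errors=remount-ro  0  1
UUID=<uuid-of-loop1p2>  none  swap  sw                          0  0
```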
I created the new VM’s /etc/libvirt/qemu/deodand.xml by hand, by copying the one generated automatically for the Ubuntu install above. It may well also be convenient to do this through the GUI. Here’s what my configuration looks like:
<domain type='kvm'>
  <name>deodand</name>
  <uuid>bff8ab0a-6c05-4b8b-946b-8b452018b8e3</uuid>
  <memory>524288</memory>
  <currentMemory>262144</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/deodand.img'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/www/software/linux/ubuntu-10.04-alternate-amd64.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <interface type='bridge'>
      <mac address='54:52:00:42:60:3c'/>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-gb'/>
  </devices>
</domain>
Remember to change the MAC address to an unused one.
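One way to pick one is to randomize the low three octets under the same 54:52:00 prefix as the other VMs here (the prefix choice is mine; any address unique on your network will do — check the result against your other VMs’ XML before using it):

```shell
# Print a candidate MAC address with random low octets.
printf '54:52:00:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
```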
The CDROM deliberately points to an Ubuntu ISO image. The reason for this is that my Xen paravirtual domains didn’t contain a boot loader and although it was possible to install the Grub packages while chrooted into the new filesystem, I didn’t manage to figure out how to actually install it into the virtual hard disk’s boot sector. Instead I booted off the install CD and used it in rescue mode (remembering to set fb=false as described above) and ran grub-install and update-grub from there. It doesn’t have to be an Ubuntu image in particular, anything that will get you a root shell in the target system will do.
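Roughly, once rescue mode has given a shell in the target root, it amounts to this (the device name is an assumption matching the bus='ide' disk above; yours may differ):

```
# grub-install /dev/hda
# update-grub
```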
(This is a familiar operation with non-virtual hardware, of course, originally performed with bootable floppies.)
I went round this several times, as I discovered the various pitfalls described above, but ended up with a working VM.